Monday, September 21, 2009

Cat QoS

The 3550 supports both inbound and outbound service policies. The 3560 supports only inbound policies.

1. 3550 per-port per-vlan policy
class-map match-any dscp-class
match ip dscp af31
!
class-map match-all vlan-class
match vlan 5 10-30 40
match class-map dscp-class
!
policy-map vlan-dscp
class vlan-class
set dscp CS3
police 128000 8000 exceed-action drop
!
interface FastEthernet 1/13
service-policy input vlan-dscp


2. 3560 per-VLAN (SVI) policy using hierarchical policy-maps
!
! Any non-IP traffic
!
mac access-list extended MAC_ANY
permit any any 0x0 0xFFFF

!
! Any IP traffic
!
ip access-list extended IP_ANY
permit ip any any

!
! Class for any non-IP traffic
!
class-map MAC_ANY
match access-group name MAC_ANY

!
! Class for any IP traffic
!
class-map IP_ANY
match access-group name IP_ANY

!
! Class to match the port connected to R1
!
class-map PORT_R1
match input-interface FastEthernet 0/1

!
! Class to match the port connected to R3
!
class-map PORT_R3
match input-interface FastEthernet 0/3

!
! Interface-level policy-maps, limiting the rate per port (R1 & R3)
!
policy-map PORT_R1
class PORT_R1
police 64000 8000

!
policy-map PORT_R3
class PORT_R3
police 512000 64000

!
! VLAN policy-map; two levels
!
policy-map VLAN_POLICY
class IP_ANY
set dscp 24
service-policy PORT_R1
class MAC_ANY
set dscp ef
service-policy PORT_R3
!
! Attach a switch-wide VLAN policy
!
interface VLAN 1
service-policy input VLAN_POLICY
!
! Enable VLAN-based QoS on some ports
!
interface range FastEthernet 0/1, FastEthernet 0/3
mls qos vlan-based

Monday, September 14, 2009

Route-map for Redistribution

1. match ip next-hop prefix-list is not supported in a redistribution route-map.
So it is better to use an ACL whenever possible in a route-map used for redistribution.
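
A minimal sketch of matching the next-hop with a standard ACL in a redistribution route-map (the next-hop address, ACL number, protocols and process IDs are hypothetical):

access-list 10 permit 155.1.0.5
!
route-map RIP_TO_OSPF permit 10
 match ip next-hop 10
!
router ospf 1
 redistribute rip subnets route-map RIP_TO_OSPF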

Wednesday, September 09, 2009

Wording

1. Using MQC, rate-limit the traffic to 8kbps with the minimum possible burst.
The "burst" here does not mean Be; it actually means Bc, and the minimum Bc that can be configured is 1000 bytes.
So the command would be "police cir 8000 bc 1000"
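
A minimal sketch of the full MQC configuration, assuming the policer is applied inbound on a hypothetical interface (the policy-map name and interface are not from the original task; default conform/exceed actions are left in place):

policy-map POLICE_8K
 class class-default
  police cir 8000 bc 1000
!
interface Serial 0/0
 service-policy input POLICE_8K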

Monday, September 07, 2009

NAT virtual interface

Legacy NAT is domain-based NAT: you need to define inside and outside interfaces, and the order of routing and NAT depends on the direction.
Traffic from outside to inside is translated first, then routed.
Traffic from inside to outside is routed first, then translated.

The new NAT Virtual Interface (NVI) makes no distinction between outside and inside interfaces.
1. First, the router checks whether the packet needs to be translated.
2. If it does, the packet is routed to the virtual interface, where the translation is done.
3. After the translation, the packet is routed again.

Sample:

R3:
interface Serial 1/0.301 point-to-point
no ip nat inside
ip nat enable
!
interface Serial 1/0.302 multipoint
no ip nat outside
ip nat enable

!
! Remove old rules
!
no ip nat inside source static 155.1.13.1 155.1.23.1
no ip nat outside source static 155.1.23.2 155.1.13.2

!
! Add "domainless" rules
!
ip nat source static 155.1.13.1 155.1.23.1
ip nat source static 155.1.23.2 155.1.13.2
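
To verify, the NVI translations have their own show commands (assuming the configuration above is in place):

show ip nat nvi translations
show ip nat nvi statistics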


Wednesday, September 02, 2009

Frame relay QoS

MQC-based Frame Relay traffic shaping:

In summary:

- The legacy command frame-relay traffic-shaping is incompatible with MQC-based FRTS (you can't mix them)
- Fancy queueing cannot be used as the PVC queueing strategy: CBWFQ is the only option available
- Per-VC CBWFQ is configured via hierarchical policy-maps: the parent policy sets the shaping values, while the child policy implements CBWFQ
- You may apply the policy-map per interface (or subinterface) or per VC, using match fr-dlci under the class-map
- You can't apply FRF.12 fragmentation with MQC commands; it should be applied at the physical interface level. By doing so, FRF.12 is effectively enabled for all PVCs
- The physical interface queue can be set to any of WFQ/CQ/PQ or CBWFQ (not restricted to FIFO as with legacy FRTS), though this is rarely needed




Sample: Shape PVC DLCI 112 to 384Kbps and enable FRF.12 fragmentation for all PVCs

class-map VOICE
match ip dscp ef
!
class-map DATA
match ip dscp cs1

!
! Match the specific DLCI
!
class-map DLCI_112
match fr-dlci 112

!
! "Child" policy-map, used to implement CBWFQ
!

policy-map CBWFQ
class VOICE
priority 64
class DATA
bandwidth 128
class class-default
fair-queue

!
! "Parent" policy map, used for PVC shaping
! With multiple classes, we can match different DLCIs
! all on the same physical interface (where they belong)
!

policy-map INTERFACE_POLICY
class DLCI_112
shape average 384000
shape adaptive 192000
service-policy CBWFQ

!
! Apply the parent policy map at physical interface level
! Also, configure FRF.12 "global" settings here
!

interface Serial 0/0/0
service-policy output INTERFACE_POLICY
frame-relay fragment 640 end-to-end
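
To check the shaping, CBWFQ and fragmentation configured above, the usual show commands can be used (a sketch; the interface and DLCI follow the example):

show policy-map interface Serial 0/0/0
show frame-relay pvc 112
show frame-relay fragment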


==========================================================

Legacy Frame Relay traffic shaping:

- Enabled with the frame-relay traffic-shaping command at the physical interface level
- Incompatible with GTS or MQC commands at the subinterface or physical interface levels
- With FRTS you can enforce a bitrate per VC (VC-granular, unlike GTS) by applying a map-class to the PVC
- When no map-class is explicitly applied to a PVC, its CIR and Tc are set to 56Kbps/125ms by default
- Shaping parameters are configured under the map-class frame-relay configuration submode
- Allows configuring fancy queueing (WFQ/PQ/CQ) or simple FIFO per VC
- No option to configure fancy queueing at the interface level: the interface queue is forced to FIFO (if no FRF.12 is configured)
- Allows adaptive shaping (throttling down to minCIR) on BECN reception (just as GTS does), with an option to reflect incoming FECNs as BECNs
- Option to enable adaptive shaping that responds to interface congestion (a non-empty interface queue)
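
A minimal legacy FRTS sketch for comparison with the MQC example above (the DLCI, interface and CIR/Bc/minCIR values are hypothetical):

map-class frame-relay SHAPE_384K
 frame-relay cir 384000
 frame-relay bc 3840
 frame-relay mincir 192000
 frame-relay adaptive-shaping becn
 frame-relay fair-queue
!
interface Serial 0/0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay interface-dlci 112
  class SHAPE_384K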

Tuesday, September 01, 2009

Frame Relay Fragmentation




FRF.12 - enable the frame-relay fragment command under the map-class

FRF.11 Annex C - use "vofr" under frame relay dlci configuration mode

Cisco - use "vofr cisco" under frame relay dlci configuration mode

Notes:
1. The map-class defines the fragment size; vofr [cisco] just states that the DLCI is encapsulated using FRF.11 or Cisco framing.
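
A minimal legacy FRF.12 sketch, assuming legacy FRTS is in use (the fragment size, CIR, DLCI and interface are hypothetical):

map-class frame-relay FRF12_384K
 frame-relay cir 384000
 frame-relay fragment 640
!
interface Serial 0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay interface-dlci 201
  class FRF12_384K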

Frame Relay Compression

Stacker vs Predictor
1. Stacker is more CPU intensive.
2. Predictor is more Memory intensive.

Frame relay compression schemes:
1. Data payload compression.
1.1 Cisco proprietary packet-by-packet payload compression. It uses the Stacker algorithm.
For a multipoint interface use:
frame-relay map ip 10.1.1.1 100 payload-compress packet-by-packet
For a point-to-point interface:
frame-relay payload-compress packet-by-packet

1.2 FRF.9 also uses Stacker. It has a better compression ratio than packet-by-packet.
You should use IETF encapsulation for the PVC that uses FRF.9; in fact, when you enable the FRF9 stac keyword, IETF encapsulation is enabled automatically.
For a multipoint interface use:
frame-relay map ip 10.1.1.1 100 payload-compress FRF9 stac
For a point-to-point interface:
frame-relay payload-compress FRF9 stac
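
A sketch of FRF.9 on a point-to-point subinterface (the subinterface number, IP address and DLCI are hypothetical):

interface Serial 0/0.1 point-to-point
 ip address 10.1.1.2 255.255.255.0
 frame-relay interface-dlci 100
 frame-relay payload-compress FRF9 stac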



2. Packet header compression
2.1 TCP/IP. See RFC 1144.
It is important to note that TCP/IP header compression is a hop-by-hop compression scheme: the header must be decompressed and recompressed at each node, so it adds latency and CPU load.
TCP/IP header compression also requires Cisco proprietary encapsulation.
For the physical interface:
frame-relay ip tcp header-compression [passive]
For a DLCI:
frame-relay map ip 10.1.1.1 100 tcp header-compression [active|passive]
You can also disable it with:
frame-relay map ip 10.1.1.1 100 nocompress

2.2 RTP. See RFC 1889.
It is also hop-by-hop compression, and it only supports Cisco encapsulation.
frame-relay ip rtp header-compression [passive]
frame-relay map ip 10.1.1.1 100 rtp header-compression
frame-relay map ip 10.1.1.1 100 compress (enables both TCP and RTP header compression)
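
A sketch of RTP header compression combined with a static map on a physical (multipoint) interface (the IP addresses and DLCI are hypothetical):

interface Serial 0/0
 encapsulation frame-relay
 ip address 10.1.1.2 255.255.255.0
 frame-relay map ip 10.1.1.1 100 broadcast rtp header-compression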