Introduction to Peer‑Group Based Policy Migration
Overview of Route‑Maps and Peer‑Groups
Route‑maps are the traditional mechanism for applying per‑neighbor BGP policy (filtering, attribute manipulation, etc.) on Cisco IOS/IOS‑XE platforms. Each BGP neighbor can have its own inbound and outbound route‑map, leading to a configuration that scales linearly with the number of peers. Peer‑groups, by contrast, allow a set of neighbors to share a common set of BGP parameters (timers, update‑source, filter‑lists, route‑maps, etc.). When a route‑map is attached to a peer‑group, every member inherits the policy unless overridden explicitly.
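To make the contrast concrete, here is a minimal sketch of the same inbound policy expressed per‑neighbor and via a peer‑group (neighbor addresses, AS numbers, and route‑map names are illustrative):

```
! Per-neighbor style - policy repeated for every peer
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 route-map ROUTE-MAP-1 in
 neighbor 192.0.2.2 remote-as 65003
 neighbor 192.0.2.2 route-map ROUTE-MAP-1 in
!
! Peer-group style - policy stated once, inherited by members
router bgp 65001
 neighbor PEER-GROUP-1 peer-group
 neighbor PEER-GROUP-1 route-map ROUTE-MAP-1 in
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 peer-group PEER-GROUP-1
 neighbor 192.0.2.2 remote-as 65003
 neighbor 192.0.2.2 peer-group PEER-GROUP-1
```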
Benefits of Peer‑Group Based Policy
- Configuration reduction – a single route‑map statement replaces dozens of identical neighbor‑specific statements.
- Operational consistency – policy changes are made once and propagate to all group members, reducing drift.
- Blast‑radius control – a mis‑configured route‑map affects only the peer‑group, not the entire BGP process.
- Simplified auditing – diff checks become smaller because repetitive neighbor blocks are collapsed.
Migrating from individually attached route‑maps to peer‑group based policy must be done with explicit transaction boundaries, verification gates, and rollback steps to guarantee that advertisements remain unchanged throughout the process.
Pre‑Migration Steps
Assessing Current Route‑Map Configuration
- Export the running BGP configuration: `show running-config | section router bgp`
- Collect all neighbor‑specific route‑map statements: `show running-config | include neighbor.*route-map`
- Create an inventory table (device, neighbor IP, inbound route‑map, outbound route‑map, peer‑group membership if any).
- Validate that each route‑map is idempotent – apply it twice in a lab and confirm the resulting BGP table is unchanged after the second application.
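The inventory table above can be seeded automatically. The following sketch parses `show running-config` output into a per‑neighbor route‑map map; the sample config text and names are illustrative, and real collection would pull the output over SSH or an API:

```python
import re
from collections import defaultdict

def parse_neighbor_route_maps(running_config: str) -> dict:
    """Build {neighbor_ip: {"in": route_map, "out": route_map}} from 'show run' output."""
    pattern = re.compile(
        r"^\s*neighbor\s+(\S+)\s+route-map\s+(\S+)\s+(in|out)\s*$",
        re.MULTILINE,
    )
    inventory = defaultdict(dict)
    for neighbor, route_map, direction in pattern.findall(running_config):
        inventory[neighbor][direction] = route_map
    return dict(inventory)

# Illustrative excerpt of running configuration
config = """
router bgp 65001
 neighbor 192.0.2.1 route-map ROUTE-MAP-1 in
 neighbor 192.0.2.1 route-map ROUTE-MAP-OUT out
 neighbor 192.0.2.2 route-map ROUTE-MAP-1 in
"""

print(parse_neighbor_route_maps(config))
# {'192.0.2.1': {'in': 'ROUTE-MAP-1', 'out': 'ROUTE-MAP-OUT'}, '192.0.2.2': {'in': 'ROUTE-MAP-1'}}
```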
Identifying Peer‑Groups and Policy Requirements
- Group by identical policy – neighbors that share the exact same inbound and outbound route‑maps (and other BGP timers) are candidates for a single peer‑group.
- Document exceptions – any neighbor that requires a unique tweak (e.g., different prefix‑list) must remain outside the group or be configured with an override after group inheritance.
- Determine peer‑group names – adopt a naming convention that reflects policy function (e.g., `PG-ISP-IN`, `PG-CUST-OUT`).
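The "group by identical policy" step lends itself to a simple key‑by‑policy pass over the inventory. A sketch, with illustrative neighbor IPs and route‑map names:

```python
from collections import defaultdict

def peer_group_candidates(inventory: dict) -> dict:
    """Group neighbors that share the exact same (inbound, outbound) route-maps.

    inventory maps neighbor IP -> {"in": rm, "out": rm}; neighbors with an
    identical policy tuple are candidates for a single peer-group.
    """
    groups = defaultdict(list)
    for neighbor, policy in sorted(inventory.items()):
        key = (policy.get("in"), policy.get("out"))
        groups[key].append(neighbor)
    return dict(groups)

inventory = {
    "192.0.2.1": {"in": "RM-ISP-IN", "out": "RM-ISP-OUT"},
    "192.0.2.2": {"in": "RM-ISP-IN", "out": "RM-ISP-OUT"},
    "192.0.2.3": {"in": "RM-CUST-IN"},  # documented exception: stays outside the group
}

for policy, members in peer_group_candidates(inventory).items():
    print(policy, members)
```

Neighbors sharing a key become one peer‑group; singleton keys are the documented exceptions.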
Creating a Migration Plan
| Step | Action | Pre‑check | Change Boundary | Verification Gate | Rollback Trigger | Blast Radius | Operator Intervention |
|---|---|---|---|---|---|---|---|
| 1 | Create peer‑group objects (no neighbors yet) | `show run \| include neighbor PEER-GROUP-*` returns nothing | `neighbor <PG> peer-group` | Peer‑group exists in `show run \| include neighbor <PG>` | Remove the peer‑group line if creation fails | None (local config only) | Operator confirms before moving to step 2 |
| 2 | Assign route‑maps to peer‑group | Verify route‑map exists and compiles (`show route-map`) | `neighbor <PG> route-map <RM> in/out` | `show ip bgp peer-group <PG>` shows inherited route‑map | Any mismatch in `show ip bgp peer-group <PG>` | Limited to peers that will later join the group | Operator confirms before moving to step 3 |
| 3 | Move a canary subset of neighbors into the peer‑group | Neighbor state Established, no flaps in last 15 min (`show ip bgp neighbors <ip> \| include State/PfxRcd`) | `neighbor <ip> peer-group <PG>`; then `no neighbor <ip> route-map <RM> in/out` | `show ip bgp neighbors <ip> advertised-routes` matches pre‑move output; no route flap for 5 min | Verification failure; execute rollback script for that neighbor | Canary subset only | Operator decides to continue or halt |
| 4 | Repeat step 3 in batches (e.g., 10 % of fleet) | Same as step 3 per batch | Same as step 3 | Same as step 3 | Same as step 3 | Limited to batch size | Operator reviews batch report before next batch |
| 5 | Decommission old neighbor‑specific route‑map statements | Confirm no neighbor still has an inline route‑map (`show run \| include neighbor.*route-map`) | `no neighbor <ip> route-map <RM> in/out` (only if already inherited via PG) | Final diff shows only peer‑group lines; BGP table unchanged | Any neighbor still showing an inline RM: abort and investigate | Config cleanup only; no expected traffic impact | Operator signs off migration complete |
Each batch is a discrete transaction with its own pre‑check, change boundary, verification gate, rollback trigger, blast radius, and operator intervention point.
Configuring Peer‑Groups
Defining Peer‑Group Structure
A peer‑group is created under the BGP process. It inherits all BGP timers, update‑source, and loop‑detection settings unless overridden. The group itself does not carry any IP address; it is a logical container.
Assigning Route‑Maps to Peer‑Groups
When a route‑map is attached to a peer‑group, the configuration is stored once and applied to every member. Overrides are possible by configuring a more specific statement directly on the neighbor (the neighbor statement wins).
Example Configuration using CLI
router bgp 65001
bgp log-neighbor-changes
! Define the peer‑group
neighbor PEER-GROUP-1 peer-group
! Optional: set common timers or update‑source for the group
neighbor PEER-GROUP-1 timers 10 30
neighbor PEER-GROUP-1 update-source Loopback0
! Apply the inbound route‑map to the group
neighbor PEER-GROUP-1 route-map ROUTE-MAP-1 in
! Apply the outbound route‑map to the group (if needed)
neighbor PEER-GROUP-1 route-map ROUTE-MAP-OUT out
Note: No remote-as is set on the peer‑group itself; each member must still specify its remote‑AS (or inherit it from a peer-group that has it set). In many designs the remote-as is also placed on the peer‑group for uniformity.
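Adding members and overriding the group policy for one neighbor might look like the following sketch (addresses and names are illustrative). Note that on classic IOS peer‑groups, only inbound policy can be overridden per member; outbound policy must remain identical across the group, since members share a common update generation:

```
router bgp 65001
 ! Members join the group; remote-as stays per-neighbor here
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 peer-group PEER-GROUP-1
 neighbor 192.0.2.2 remote-as 65003
 neighbor 192.0.2.2 peer-group PEER-GROUP-1
 ! Per-neighbor override: this inbound policy wins over the
 ! route-map inherited from PEER-GROUP-1, for this neighbor only
 neighbor 192.0.2.2 route-map ROUTE-MAP-SPECIAL in
```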
Migrating to Peer‑Group Based Policy
Enabling Peer‑Group Based Policy
The peer‑group becomes active as soon as the first neighbor is assigned to it. Until then, the group exists only in the configuration.
Applying Route‑Maps to Peer‑Groups
If the route‑maps already exist and are currently applied individually, the migration step is to remove the inline neighbor statements and add the group‑level statement (as shown above). The order matters: apply the group statement before removing the inline version to avoid a window where the neighbor has no policy.
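For a single neighbor, the ordering described above might look like this sketch (names from the earlier example; adjust to your environment):

```
router bgp 65001
 ! 1. Join the peer-group first - the neighbor immediately
 !    inherits the group's route-maps, so no policy gap opens
 neighbor 192.0.2.1 peer-group PEER-GROUP-1
 ! 2. Only then remove the now-redundant inline statements
 no neighbor 192.0.2.1 route-map ROUTE-MAP-1 in
 no neighbor 192.0.2.1 route-map ROUTE-MAP-OUT out
```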
Verifying Peer‑Group Configuration
show ip bgp peer-group PEER-GROUP-1
show ip bgp neighbors <ip>
show ip bgp neighbors <ip> advertised-routes
show ip bgp neighbors <ip> received-routes
Note that advertised-routes and received-routes are per‑neighbor commands, so run them against each member IP; received-routes also requires neighbor soft-reconfiguration inbound to be configured.
The advertised-routes output must match the pre‑migration output for each neighbor that has been moved into the group. Any deviation triggers a rollback.
Troubleshooting and Validation
Common Issues During Migration
| Symptom | Likely Cause | Check |
|---|---|---|
| Neighbor flaps after moving to PG | Missing remote-as on neighbor (inherited incorrectly) | `show run \| include neighbor <ip>` |
| No routes advertised | Route‑map not applied because of typo in group name | `show ip bgp peer-group <pg>` |
| Duplicate route‑map entries | Inline RM not removed after group RM added | `show run \| include neighbor.*route-map` |
| High CPU on route‑map processing | Overly complex RM applied to many peers via PG | `show processes cpu \| include BGP`; consider splitting PG or optimizing RM |
Using Debug Commands for Troubleshooting
debug ip bgp events
debug ip bgp updates
debug ip bgp <neighbor-ip> updates
Caution: Enable debugs only on a single router or a limited peer‑group to avoid overwhelming the console. Capture output to a log file and disable after the issue is reproduced.
Validating Advertisements and Route‑Maps
- Pre‑move baseline – capture `show ip bgp neighbors <ip> advertised-routes` for each neighbor in the batch.
- Post‑move check – run the same command and compare against the saved baseline with `diff -u`.
- Attribute verification – ensure that AS‑PATH, MED, LOCAL_PREF, and community values are identical.
- Stability window – require no flaps for at least 2 × the hold‑time (e.g., 60 s for a 30 s hold‑time) before declaring the batch successful.
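The baseline/post‑move comparison can be automated once the advertised routes are parsed into a structured snapshot. A sketch, where the snapshot format (prefix mapped to its attributes) and the sample data are assumptions for illustration:

```python
def compare_advertisements(baseline: dict, current: dict) -> list:
    """Return human-readable deviations between two advertisement snapshots.

    Each snapshot maps prefix -> attribute dict (as_path, med, local_pref,
    communities, ...). An empty result means the verification gate passes.
    """
    deviations = []
    for prefix in sorted(set(baseline) | set(current)):
        if prefix not in current:
            deviations.append(f"{prefix}: missing after migration")
        elif prefix not in baseline:
            deviations.append(f"{prefix}: unexpectedly advertised after migration")
        elif baseline[prefix] != current[prefix]:
            deviations.append(
                f"{prefix}: attributes changed {baseline[prefix]} -> {current[prefix]}"
            )
    return deviations

baseline = {"10.0.0.0/24": {"as_path": "65002", "med": 0, "local_pref": 100}}
current = {"10.0.0.0/24": {"as_path": "65002", "med": 0, "local_pref": 100}}
print(compare_advertisements(baseline, current))  # [] -> gate passes
```

Any non‑empty result is a rollback trigger for that neighbor.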
Transaction Boundaries and Rollback
Defining Transaction Boundaries
- Pre‑check – verify neighbor state, existing route‑map correctness, and that the peer‑group exists with correct timers.
- Change Boundary – the set of CLI commands that move a neighbor into the peer‑group and remove the inline route‑map. This boundary is not atomic on IOS; each command takes effect immediately.
- Verification Gate – comparison of advertised/received routes and neighbor stability.
- Rollback Trigger – any mismatch in the verification gate or a flap event during the stability window.
- Blast Radius – limited to the batch of neighbors currently being migrated (typically 5‑10 % of total peers per device).
- Operator Intervention Point – after the verification gate, before the next batch is started. The operator must sign off (e.g., via a ticket or chatops approval) to proceed.
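The transaction model above can be sketched as a small batch driver in which the device‑facing actions are injected callables (the stub lambdas below stand in for real SSH/API operations and are purely illustrative):

```python
def migrate_in_batches(neighbors, batch_size, migrate, verify, rollback, approve):
    """Drive the batched migration; each batch is one discrete transaction.

    migrate/verify/rollback act on a single neighbor; approve() is the
    operator intervention point between batches. Returns the lists of
    successfully migrated and rolled-back neighbors.
    """
    migrated, rolled_back = [], []
    for start in range(0, len(neighbors), batch_size):
        batch = neighbors[start:start + batch_size]    # blast radius = this batch
        for neighbor in batch:
            migrate(neighbor)                          # change boundary
            if verify(neighbor):                       # verification gate
                migrated.append(neighbor)
            else:                                      # rollback trigger
                rollback(neighbor)
                rolled_back.append(neighbor)
        if not approve(batch, migrated, rolled_back):  # operator sign-off
            break
    return migrated, rolled_back

# Dry run with stubs: neighbor "10.0.0.3" fails its verification gate
neighbors = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
result = migrate_in_batches(
    neighbors, batch_size=2,
    migrate=lambda n: None,
    verify=lambda n: n != "10.0.0.3",
    rollback=lambda n: None,
    approve=lambda batch, ok, bad: True,
)
print(result)  # (['10.0.0.1', '10.0.0.2', '10.0.0.4'], ['10.0.0.3'])
```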
Implementing Rollback Steps
Rollback is operational, not automatic. The steps are:
- Re‑apply the inline route‑map on the affected neighbor(s).
- Remove the neighbor from the peer‑group (`no neighbor <ip> peer-group <pg>`).
- Optionally clear the BGP session softly to refresh policy in both directions (`clear ip bgp <ip> soft`).
- Re‑verify that advertisements match the baseline.
Example Rollback Script
#!/bin/bash
# rollback_peer_group.sh
# Usage: ./rollback_peer_group.sh <device> <peer-group> <neighbor-ip-list-file>
#
# This script assumes you have SSH access and that the device
# runs Cisco IOS/XE. It does NOT use any atomic replace feature;
# it performs a series of explicit CLI changes.

DEVICE=$1
PG=$2
NEIGH_FILE=$3

if [[ -z "$DEVICE" || -z "$PG" || -z "$NEIGH_FILE" ]]; then
    echo "Usage: $0 <device> <peer-group> <neighbor-ip-list-file>"
    exit 1
fi

while read -r NEIGH; do
    # Skip blank lines and comments in the neighbor list
    [[ -z "$NEIGH" || "$NEIGH" == \#* ]] && continue
    echo "Rolling back $NEIGH on $DEVICE from peer-group $PG"
    # Neighbor commands must be issued under the BGP process;
    # adjust the AS number (65001 here) to your environment.
    ssh "$DEVICE" <<EOF
configure terminal
router bgp 65001
no neighbor $NEIGH peer-group $PG
! Re-apply the original inline route-maps (example names)
neighbor $NEIGH route-map ROUTE-MAP-1 in
neighbor $NEIGH route-map ROUTE-MAP-OUT out
end
write memory
exit
EOF
done < "$NEIGH_FILE"

echo "Rollback complete. Verify advertisements with: show ip bgp neighbors <ip> advertised-routes"
The script is deliberately simple; in production you would wrap it in a configuration management tool (Ansible, Nornir, etc.) and add error handling, logging, and confirmation prompts.
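As a sketch of that configuration‑management wrapping, a minimal Ansible task using the `cisco.ios.ios_config` module might look like the following. The peer‑group, route‑map names, and AS number are the illustrative values from earlier, `neighbor_ip` is an assumed play variable, and verification and error‑handling tasks are omitted:

```yaml
- name: Roll back neighbor to inline route-maps
  cisco.ios.ios_config:
    parents: router bgp 65001
    lines:
      - no neighbor {{ neighbor_ip }} peer-group PEER-GROUP-1
      - neighbor {{ neighbor_ip }} route-map ROUTE-MAP-1 in
      - neighbor {{ neighbor_ip }} route-map ROUTE-MAP-OUT out
    save_when: changed
```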
Scaling Limitations and Considerations
Peer‑Group Scaling
Peer‑groups reduce configuration lines but do not eliminate the per‑neighbor state maintained by the BGP process. Very large peer‑groups (hundreds of members) can still impact CPU during route‑map evaluation because each member’s packets are processed against the shared route‑map. To mitigate:
- Split large groups by function or geography.
- Optimize route‑maps (use prefix‑lists, community‑lists, and avoid costly match/set statements).
- Monitor CPU before and after migration (`show processes cpu | include BGP`).
- Consider BGP peer templates (`template peer-policy` and `template peer-session` on IOS/IOS‑XE) if finer‑grained inheritance is required.
With these controls, peer‑group based policy migration can be performed safely across large fleets while preserving exact advertisement behavior.