Building a Universal AI Assistant Test Harness
Overview of the Test Harness
To build a universal AI assistant test harness for reproducing broken OSPF adjacency states, we need to do three things: identify the target adjacency states to reproduce (DOWN, LOST, and EXSTART); define the test topology, which here will be a multi-area, multi-router design; and choose the AI assistant interaction methods, which will be both CLI and API.
The following OSPF adjacency states will be used:
- DOWN: The initial OSPF neighbor state; no Hello packets have been received from the neighbor, so no relationship exists yet.
- LOST: Not a formal OSPF FSM state, but shorthand for an adjacency that was FULL and has since been torn down (for example, after a link failure lets the dead interval expire); the neighbor falls back to DOWN or is removed from the table.
- EXSTART: The state in which neighbors negotiate master/slave roles for database description (DBD) exchange; an adjacency stuck here (classically due to an interface MTU mismatch) never reaches FULL.
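These three targets and the faults used to induce them can be captured in a small table that drives the harness. A minimal Python sketch (the names and fault descriptions are our own; concrete injection commands appear later in this document):

```python
# Map each target broken state to the fault used to induce it.
# The injection descriptions are illustrative, not device commands.
TARGET_STATES = {
    "DOWN": "administratively shut down the peer-facing interface",
    "LOST": "let the dead interval expire (e.g. filter inbound hellos)",
    "EXSTART": "configure mismatched interface MTUs so DBD exchange stalls",
}

for state, fault in TARGET_STATES.items():
    print(f"{state}: {fault}")
```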
The topology consists of three routers connected in a full mesh (triangle), with each inter-router link placed in a different OSPF area (0, 1, and 2). Both ends of a link must share the same subnet and area for an adjacency to form, and each non-backbone area attaches to area 0 through an ABR.
Test Harness Architecture
The test harness architecture will consist of the following components:
- Network Topology Creation: Using Containerlab to create a network topology with multiple routers and areas.
- OSPF Configuration: Using CLI commands to configure OSPF on each router.
- AI Assistant Integration: Using an API interface to integrate the AI assistant with the test harness.
- Remediation Prompt Validation: Using CLI commands to verify the effectiveness of remediation prompts.
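The four components above can be tied together in a single driver. The following Python sketch uses stub functions in place of the real components; every name here is hypothetical and stands in for the machinery described in the list above:

```python
# Minimal orchestration sketch: each stub corresponds to one harness component.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str                 # e.g. "down-shutdown"
    target_state: str         # broken state to reproduce, e.g. "DOWN"
    inject_cmds: list = field(default_factory=list)  # CLI fault-injection commands
    remediation_prompt: str = ""                     # prompt sent to the assistant

def deploy_topology():
    print("containerlab deploy (stub)")             # 1. topology creation

def configure_ospf():
    print("push baseline OSPF config (stub)")       # 2. OSPF configuration

def inject_fault(cmds):
    print(f"inject fault: {cmds} (stub)")           # 3. broken-state reproduction

def send_prompt(prompt):
    print(f"send prompt: {prompt} (stub)")          # 4. AI assistant integration

def verify_state(router, expected):
    # Real implementation would parse `show ip ospf neighbor` on the router.
    return True                                     # 5. remediation validation (stub)

def run_case(case: TestCase) -> bool:
    deploy_topology()
    configure_ospf()
    inject_fault(case.inject_cmds)
    send_prompt(case.remediation_prompt)
    return verify_state("router2", expected="FULL")
```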
Reproducing Broken OSPF Adjacency States
Network Topology Creation
To create this topology, we will use Containerlab to stand up three routers connected in a triangle, with each link assigned to its own OSPF area.
graph LR
    A[Router 1] -->|OSPF| B[Router 2]
    B -->|OSPF| C[Router 3]
    C -->|OSPF| A
The following Containerlab configuration will be used to create the topology:
name: ospf-lab
topology:
  nodes:
    router1:
      kind: cisco_csr1000v   # kind/image names depend on your local vrnetlab build
      image: vrnetlab/cisco_csr1000v:latest
    router2:
      kind: cisco_csr1000v
      image: vrnetlab/cisco_csr1000v:latest
    router3:
      kind: cisco_csr1000v
      image: vrnetlab/cisco_csr1000v:latest
  links:
    - endpoints: ["router1:eth1", "router2:eth1"]   # 10.0.12.0/24, area 0
    - endpoints: ["router2:eth2", "router3:eth1"]   # 10.0.23.0/24, area 1
    - endpoints: ["router3:eth2", "router1:eth2"]   # 10.0.31.0/24, area 2
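The harness can invoke Containerlab programmatically. A minimal sketch, assuming the topology file is saved as ospf-lab.clab.yml (a filename we choose here) and the `containerlab` binary is on PATH:

```python
# Drive Containerlab from the harness.
import subprocess

def clab_cmd(action: str, topo: str = "ospf-lab.clab.yml") -> list:
    """Build a containerlab command line, e.g. for 'deploy' or 'destroy'."""
    return ["containerlab", action, "-t", topo]

def deploy(topo: str = "ospf-lab.clab.yml") -> int:
    # Blocks until containerlab finishes; returns its exit code.
    return subprocess.run(clab_cmd("deploy", topo)).returncode
```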
We will then use CLI commands to configure OSPF on each router, placing both ends of each link in the same subnet and area (a requirement for adjacency formation):
# Configure OSPF on router1 (ABR between area 0 and area 2)
router1# configure terminal
router1(config)# router ospf 1
router1(config-router)# network 10.0.12.0 0.0.0.255 area 0
router1(config-router)# network 10.0.31.0 0.0.0.255 area 2
# Configure OSPF on router2 (ABR between area 0 and area 1)
router2# configure terminal
router2(config)# router ospf 1
router2(config-router)# network 10.0.12.0 0.0.0.255 area 0
router2(config-router)# network 10.0.23.0 0.0.0.255 area 1
# Configure OSPF on router3 (in areas 1 and 2)
router3# configure terminal
router3(config)# router ospf 1
router3(config-router)# network 10.0.23.0 0.0.0.255 area 1
router3(config-router)# network 10.0.31.0 0.0.0.255 area 2
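In the harness, these per-router command sets are pushed over SSH (for example with a library such as Netmiko) rather than typed by hand. The sketch below only builds the command lists, using an addressing plan in which both ends of each link share a subnet and area (10.0.12.0/24 for router1–router2 in area 0, 10.0.23.0/24 for router2–router3 in area 1, 10.0.31.0/24 for router3–router1 in area 2; these subnets are our assumption):

```python
# Baseline OSPF configuration per router, as lists of CLI commands.
OSPF_CONFIG = {
    "router1": ["router ospf 1",
                "network 10.0.12.0 0.0.0.255 area 0",
                "network 10.0.31.0 0.0.0.255 area 2"],
    "router2": ["router ospf 1",
                "network 10.0.12.0 0.0.0.255 area 0",
                "network 10.0.23.0 0.0.0.255 area 1"],
    "router3": ["router ospf 1",
                "network 10.0.23.0 0.0.0.255 area 1",
                "network 10.0.31.0 0.0.0.255 area 2"],
}

for router, cmds in OSPF_CONFIG.items():
    print(f"--- {router} ---")
    print("\n".join(cmds))
```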
Broken Adjacency State Reproduction
To reproduce broken OSPF adjacency states, we will use CLI commands to introduce errors and verify the broken adjacency state. For example, to reproduce a DOWN state, we can shut down an interface on one of the routers:
# Shut down an interface on router1
router1# configure terminal
router1(config)# interface GigabitEthernet1
router1(config-if)# shutdown
We can then verify the broken adjacency state using CLI commands:
# Verify the broken adjacency state on router2
router2# show ip ospf neighbor
Immediately after the shutdown, router2 may still list the neighbor; once the dead interval expires (40 seconds by default on broadcast networks), the neighbor transitions to DOWN and the entry is typically removed from the table, so an empty neighbor listing is itself evidence of the broken state.
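The other target states can be reproduced the same way. For example, a classic cause of an adjacency stuck in EXSTART is an interface MTU mismatch between neighbors (the interface name below is an assumption about the topology):

```
# Reproduce a stuck EXSTART state by lowering the MTU on one side only
router1# configure terminal
router1(config)# interface GigabitEthernet2
router1(config-if)# mtu 1400
```

A LOST adjacency can similarly be induced by filtering inbound OSPF hellos so that the dead interval expires.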
Validating Remediation Prompts
AI Assistant Interaction
To send remediation prompts to the AI assistant, we will use an API interface. For example, we can send a remediation prompt to the AI assistant to fix the DOWN state:
import requests

# Send a remediation prompt to the AI assistant; fail fast on HTTP errors
response = requests.post('http://ai-assistant:8080/remediation',
                         json={'prompt': 'Fix DOWN state on router1'}, timeout=30)
response.raise_for_status()
We can then verify the AI assistant response and validate the effectiveness of the remediation prompt.
Remediation Prompt Validation
To verify the corrected adjacency state, we can use CLI commands to check the adjacency state on the router:
# Verify the corrected adjacency state on router2
router2# show ip ospf neighbor
This should show that the adjacency state is now FULL (displayed as FULL/DR or FULL/BDR on broadcast networks). We can then compare the observed adjacency state to the expected output:
# Compare the observed adjacency state to the expected output
expected_state = 'FULL'
# Hardcoded here for illustration; the harness parses this value from
# the `show ip ospf neighbor` output.
actual_state = 'FULL'
if actual_state == expected_state:
    print('Remediation prompt successful')
else:
    print('Remediation prompt failed')
We can output the validation results to the console or a log file:
# Output validation results to console or log file
print('Validation Results:')
print('Expected State:', expected_state)
print('Actual State:', actual_state)
print('Remediation Prompt:', 'Successful' if actual_state == expected_state else 'Failed')
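To make the comparison operate on live data rather than a hardcoded value, the harness needs to extract the state column from `show ip ospf neighbor` output. A sketch, assuming Cisco IOS-style output (the sample below is illustrative):

```python
# Parse neighbor states out of `show ip ospf neighbor` output.
import re

SAMPLE_OUTPUT = """\
Neighbor ID     Pri   State           Dead Time   Address         Interface
1.1.1.1           1   FULL/DR         00:00:35    10.0.12.1       GigabitEthernet1
"""

def neighbor_states(cli_output: str) -> dict:
    """Map neighbor router ID -> adjacency state (e.g. 'FULL')."""
    states = {}
    for line in cli_output.splitlines():
        # Match: router ID, priority, then the state up to the '/' separator.
        m = re.match(r"(\d+\.\d+\.\d+\.\d+)\s+\d+\s+(\w+)", line)
        if m:
            states[m.group(1)] = m.group(2)
    return states

print(neighbor_states(SAMPLE_OUTPUT))
```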
We can also use Mermaid.js to visualize the remediation prompt validation results:
graph LR
    A[Remediation Prompt] -->|Validation| B[Expected State]
    B -->|Comparison| C[Actual State]
    C -->|Result| D[Validation Result]
This will show a graph of the remediation prompt validation process, with the expected state, actual state, and validation result.