A comprehensive network monitoring and automation solution that provides end-to-end visibility into network performance through automated configuration, real-time telemetry collection, and intuitive dashboards.
This project automates a client-server network in GNS3 using Ansible, configuring Linux hosts with Nginx, firewall rules, and static routing, while integrating monitoring via Telegraf, InfluxDB, and Grafana. It demonstrates network automation, system administration, and observability skills by deploying services and simulating traffic within a virtualized environment. The setup leverages a tap0 interface for real-world connectivity.
- Network Setup: Two Linux hosts (Server1 and Client1) in a GNS3 topology, connected via a switch and a tap0 interface for real-world connectivity.
- Automation: Ansible playbooks to configure a web server (Nginx), SNMP, firewall rules (ufw), and monitoring (Telegraf) on Server1, and a traffic generator on Client1.
- Monitoring: Telegraf collects system metrics (CPU, memory, network) and SNMP data, sending them to InfluxDB Cloud for visualization in Grafana.
- Traffic Simulation: Client1 generates HTTP traffic to Server1 to simulate network activity.
- Static Routing: Uses a single subnet with static routing, avoiding the complexity of dynamic routing protocols.
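Under the hood, Telegraf ships metrics to InfluxDB Cloud over the v2 write API in line protocol. As a rough sketch of what one data point looks like (the endpoint host and token in the commented `curl` are placeholders, not values from this project):

```bash
# Build one data point in InfluxDB line protocol: measurement, tags, fields.
measurement="cpu_usage"
host_tag="server1"
value="42.5"
line="${measurement},host=${host_tag} value=${value}"
echo "$line"

# Telegraf (or any HTTP client) can POST such lines to the v2 write API.
# YOUR_HOST and YOUR_TOKEN are placeholders:
# curl -XPOST "https://YOUR_HOST/api/v2/write?org=Dev&bucket=visinetra" \
#      -H "Authorization: Token YOUR_TOKEN" --data-binary "$line"
```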
To run this project locally, ensure you have the following:
- Operating System: Ubuntu 24.04 LTS (or similar Linux distribution).
- GNS3: Version 2.2 or later, installed and configured.
- Docker: Installed to run Linux containers in GNS3.
- Ansible: Version 2.9 or later, installed (`sudo apt install ansible`).
- InfluxDB Cloud Account:
  - Sign up for a free InfluxDB Cloud account at InfluxData.
  - Create a bucket named `visinetra` in the `Dev` organization.
  - Create your own token and use it in `server1.yml`.
- Grafana: Set up Grafana (local or cloud) to visualize InfluxDB data.
  - Add InfluxDB as a data source in Grafana using the same credentials as in `server1.yml`, or create a new one.
- Internet Access: Ensure your host system has internet access (e.g., via Ethernet `eno1` or Wi-Fi `wlo1`), or enable IP forwarding to reach the internet through the host.
- tap0 Interface: A tap interface on the host system for GNS3 connectivity.
  - IP: `192.168.69.254/24`.
- Host System Configuration:
- Enable IP forwarding and configure NAT rules to allow GNS3 hosts to access the internet (see below).
- Enable IP Forwarding:

  Run the following commands to enable IP forwarding on your host system:

  ```bash
  sudo sysctl -w net.ipv4.ip_forward=1
  echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
  ```

  This ensures the GNS3 network can route traffic to the internet.

- Configure NAT and Firewall Rules:

  Assuming your internet interface is `eno1` (replace with `wlo1` if using Wi-Fi):

  ```bash
  sudo iptables -t nat -A POSTROUTING -s 192.168.69.0/24 -o eno1 -j MASQUERADE
  sudo iptables -A FORWARD -i tap0 -o eno1 -j ACCEPT
  sudo iptables -A FORWARD -i eno1 -o tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT
  ```

  These rules allow the GNS3 subnet (`192.168.69.0/24`) to access the internet via NAT.

- Add tap0 Interface:

  Ensure the tap0 interface exists and is configured:

  ```bash
  # Create the tap interface first if it does not exist yet
  sudo ip tuntap add dev tap0 mode tap
  sudo ip addr add 192.168.69.254/24 dev tap0
  sudo ip link set tap0 up
  ```

  Verify: `ip addr show tap0`.
- Add Docker Hosts:
  - In GNS3, go to `Edit` → `Preferences` → `Docker` → `Docker Containers` → `New`.
  - Select an Ubuntu-based Docker image (e.g., `ubuntu:20.04`).
  - Name the containers:
    - First container: `Server1`.
    - Second container: `Client1`.
  - Set memory to 512 MB and use 1 vCPU for each.

- Configure the Topology:
  - Drag `Server1` and `Client1` into the GNS3 workspace.
  - Add an Ethernet switch (`Switch1`).
  - Add a Cloud node (`Cloud1`):
    - Configure Cloud1 to use the `tap0` interface.
  - Connect:
    - Server1 eth0 to Switch1 Port 1.
    - Client1 eth0 to Switch1 Port 2.
    - Cloud1 (tap0) to Switch1 Port 3.
  - Start all devices.

  Topology Diagram:

  The diagram shows:
  - Server1 (`192.168.69.10`) and Client1 (`192.168.69.11`) connected to Switch1.
  - Cloud1 (tap0: `192.168.69.254`) connected to Switch1 for external access.
- Configure Host IPs:

  Access Server1's CLI (via GNS3 console):

  ```bash
  ip addr add 192.168.69.10/24 dev eth0
  ip link set eth0 up
  ip route add default via 192.168.69.254
  ```

  Access Client1's CLI:

  ```bash
  ip addr add 192.168.69.11/24 dev eth0
  ip link set eth0 up
  ip route add default via 192.168.69.254
  ```

  Install `iproute2` if the `ip` command is missing: `apt update && apt install iproute2`.

- Enable SSH Access:

  On both Server1 and Client1:

  ```bash
  apt update
  apt install openssh-server
  systemctl start ssh
  systemctl enable ssh
  ```

  Set a root password for Ansible access: `passwd root`.
  - Use password: `password` (or your choice; update `inventory.yml` accordingly).

- Verify Connectivity:
  - From Server1: `ping 192.168.69.11` and `ping 192.168.69.254`.
  - From Client1: `ping 192.168.69.10` and `ping 192.168.69.254`.
  - From host system: `ssh [email protected]` and `ssh [email protected]`.
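The steps above use password authentication, which is fine for an isolated lab. As an optional alternative (not part of the provided playbooks), key-based SSH avoids storing `ansible_password` in `inventory.yml`; a minimal sketch, assuming `ssh-copy-id` is available:

```bash
# Start clean, then generate a dedicated key pair for the lab
# (no passphrase, so Ansible can use it non-interactively).
rm -f /tmp/visinetra_key /tmp/visinetra_key.pub
ssh-keygen -t ed25519 -f /tmp/visinetra_key -N "" -q

# Copy the public key to each host, then point inventory.yml at
# ansible_ssh_private_key_file instead of ansible_password:
# ssh-copy-id -i /tmp/visinetra_key.pub [email protected]
# ssh-copy-id -i /tmp/visinetra_key.pub [email protected]
ls /tmp/visinetra_key.pub
```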
- Create Project Directory:

  On your host system:

  ```bash
  mkdir -p ~/code/visinetra
  cd ~/code/visinetra
  ```

- Create `inventory.yml`:

  File: `inventory.yml`

  ```yaml
  ---
  all:
    hosts:
      server1:
        ansible_host: 192.168.69.10
        ansible_user: root
        ansible_password: password
        ansible_connection: ssh
      client1:
        ansible_host: 192.168.69.11
        ansible_user: root
        ansible_password: password
        ansible_connection: ssh
  ```

- Copy Playbooks:
  - Save the provided `server1.yml` and `client1.yml` into the project directory.
  - These playbooks:
    - `server1.yml`: Configures DNS, Nginx, SNMP, firewall rules, and a Telegraf simulator (sends metrics to InfluxDB Cloud).
    - `client1.yml`: Tests Nginx connectivity and sets up a traffic generator to simulate HTTP requests to Server1.
- Install Required Packages:

  On Server1 and Client1, ensure basic packages are installed. Access each host via SSH and run:

  ```bash
  apt update
  apt install nginx snmpd ufw curl
  ```

  This step can be skipped since the playbooks handle most installations, but it's a good precaution.

- Run Playbooks:

  From the project directory:

  ```bash
  ansible-playbook -i inventory.yml server1.yml
  ansible-playbook -i inventory.yml client1.yml
  ```

  Expected output for `server1.yml`:
  - Nginx starts, SNMP is configured, firewall rules allow SSH/HTTP, and the Telegraf simulator sends metrics to InfluxDB Cloud.

  Expected output for `client1.yml`:
  - `curl` tests connectivity to Server1, and a traffic generator service is set up (but not started).
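If the playbooks fail before any task runs, two common culprits on a fresh control machine (assumptions about a typical setup, not necessarily yours) are a missing `sshpass` and interactive host-key prompts:

```bash
# Password-based SSH in Ansible requires sshpass on the control machine:
# sudo apt install sshpass

# Skip interactive host-key verification for the lab hosts
# (lab use only; not recommended outside an isolated environment):
export ANSIBLE_HOST_KEY_CHECKING=False
echo "$ANSIBLE_HOST_KEY_CHECKING"
```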
- Start Traffic Generation (Optional):

  On Client1, manually start the traffic generator to simulate HTTP requests to Server1:

  ```bash
  sshpass -p password ssh -o StrictHostKeyChecking=no [email protected] "systemctl start traffic-generator"
  ```

  Check status:

  ```bash
  sshpass -p password ssh -o StrictHostKeyChecking=no [email protected] "systemctl status traffic-generator"
  ```

  Stop when needed:

  ```bash
  sshpass -p password ssh -o StrictHostKeyChecking=no [email protected] "systemctl stop traffic-generator"
  ```
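The `traffic-generator` unit itself is defined by `client1.yml`; its exact contents aren't shown here, but conceptually it is a request loop like the following hypothetical sketch (target URL, request count, and interval are assumptions; the `echo` stands in for the real `curl` so the sketch runs anywhere):

```bash
TARGET="http://192.168.69.10"

# Fire a few requests against the server; the real unit would loop
# indefinitely and sleep between requests.
generate_traffic() {
  for i in 1 2 3; do
    # curl -s -o /dev/null -w "%{http_code}\n" "$TARGET"   # real request
    echo "request $i -> $TARGET"                           # stand-in for the sketch
    # sleep 5
  done
}

generate_traffic
```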
- InfluxDB Cloud:
  - Metrics from Server1 are sent to InfluxDB Cloud (`visinetra` bucket, `Dev` org).
  - Log in to InfluxDB Cloud to verify data points (e.g., CPU, memory, network, SNMP uptime).

- Grafana:
  - Add InfluxDB as a data source in Grafana:
    - URL: `your-influxdb-server-url`.
    - Token: Same as in `server1.yml`.
    - Organization: `Dev`.
    - Bucket: `visinetra`.
  - Create dashboards to visualize:
    - CPU usage (`cpu_usage`).
    - Memory usage (`memory_usage`).
    - Network traffic (`network_rx`, `network_tx`).
    - SNMP uptime (`snmp_uptime`).
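Grafana panels query InfluxDB 2.x with Flux. A rough sketch of what a CPU panel's query might look like (the measurement name is an assumption matching the dashboard list above); it is stored in a shell variable here purely for illustration:

```bash
# A candidate Flux query for the CPU dashboard panel:
FLUX=$(cat <<'EOF'
from(bucket: "visinetra")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu_usage")
  |> aggregateWindow(every: 1m, fn: mean)
EOF
)
echo "$FLUX"
```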
- Verify Metrics Locally:

  On Server1, check the Telegraf simulator logs:

  ```bash
  sshpass -p password ssh -o StrictHostKeyChecking=no [email protected] 'cat /var/log/telegraf-simulator/metrics.log'
  ```

  Look for HTTP response codes (204 indicates successful data sends to InfluxDB).
- Web Server Access:
  - From Client1: `curl http://192.168.69.10`.
    - Expected: Returns the default Nginx page.
  - From your host system: Open a browser and navigate to `http://192.168.69.10`.

- Firewall Rules:
  - On Server1: `ufw status`.
    - Expected: Ports 22 (SSH) and 80 (HTTP) allowed.

- SNMP:
  - On Server1: `snmpwalk -v2c -c public localhost 1.3.6.1.2.1.1.3.0`.
    - Expected: Returns system uptime.

- Traffic Generation:
  - Start the traffic generator on Client1 (see Start Traffic Generation above).
  - Monitor Grafana for increased network traffic metrics.
- No Internet Access:
  - Verify IP forwarding and NAT rules on the host system.
  - Check tap0: `ip addr show tap0`.
- Ansible Fails:
  - Ensure SSH access: `ssh [email protected]`.
  - Run with verbose output: `ansible-playbook -i inventory.yml server1.yml -v`.
- Telegraf Fails to Send Data:
  - Check DNS: `cat /etc/resolv.conf` on Server1 (should show `8.8.8.8` and `1.1.1.1`).
  - Verify InfluxDB token and bucket settings in `server1.yml`.
- Traffic Generator Not Working:
  - Check service status: `ssh [email protected] 'systemctl status traffic-generator'`.
  - Ensure Client1 can reach Server1: `ping 192.168.69.10`.
```text
visinetra/
├── influxdb/
│   ├── grafana-token.token   # Grafana token file
│   └── telegraf-token.token  # Telegraf token file
├── .gitignore                # Git ignore rules
├── Dockerfile-Client1        # Dockerfile for Client1
├── Dockerfile-Server1        # Dockerfile for Server1
├── README.markdown           # Project README
├── client1.yml               # Ansible playbook for Client1
├── inventory.yml             # Inventory file
├── server1.yml               # Ansible playbook for Server1
└── topology.png              # GNS3 topology diagram
```
- Add VLANs to segment the network (e.g., VLAN 10 for the subnet).
- Configure static routes for a multi-subnet setup (e.g., add a second subnet `192.168.70.0/24`).
- Integrate more advanced monitoring (e.g., HTTP response times, latency).
This project is licensed under the MIT License.