A comprehensive simulation framework for evaluating different SDN flow table caching mechanisms, including traditional strategies (LRU, LFU), DQN-enhanced approaches, and optimal baselines.
- Source: https://tinyurl.com/flow-caching-datasets
- Focus: First three traces (pcap_1.csv, pcap_2.csv, pcap_3.csv)
- Format: CSV files with Time, Source, Destination columns
- Usage: Each trace represents different network traffic patterns for comprehensive evaluation
# Core scientific computing
pip install numpy pandas scipy
# Deep learning framework (for DQN)
pip install torch torchvision
# Data structures and utilities
pip install bitarray
# Networking utilities
pip install ipaddress
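After installing, a quick import check can confirm the environment is ready before running any simulations. This is a minimal sketch; check_env.py is a hypothetical helper name, not a file shipped with the simulator.

# check_env.py -- hypothetical helper: verify that the dependencies above import cleanly
import numpy, pandas, scipy, bitarray, ipaddress
import torch

print("numpy", numpy.__version__, "| pandas", pandas.__version__, "| scipy", scipy.__version__)
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())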
For DQN-enhanced simulations, ensure these files are in your directory:
- bloomfilter.py: Bloom filter implementation
- PerturbationDQN.py: DQN agent implementation
Download the datasets:
# Download PCAP datasets from: https://tinyurl.com/flow-caching-datasets
Place the first three PCAP CSV files in the ./Pcap/ directory:
./Pcap/
├── pcap_1.csv
├── pcap_2.csv
└── pcap_3.csv
CSV Format:
Time,Source,Destination
0.000123,192.168.1.1,10.0.0.1
0.000856,10.0.0.1,192.168.1.1
Note: This simulator focuses on the first three traces (pcap_1.csv, pcap_2.csv, pcap_3.csv) from the dataset collection available at https://tinyurl.com/flow-caching-datasets.
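Before running a simulation, a short pandas check can confirm that a trace matches this layout. This is a sketch based only on the format shown above; treating a flow as a (Source, Destination) pair is an assumption made for the summary line.

import pandas as pd

trace = pd.read_csv("./Pcap/pcap_1.csv")
assert list(trace.columns) == ["Time", "Source", "Destination"], trace.columns
flows = trace[["Source", "Destination"]].drop_duplicates()   # assumption: a flow = (Source, Destination) pair
print(len(trace), "packets,", len(flows), "unique source/destination pairs")
print("time span:", trace["Time"].max() - trace["Time"].min(), "seconds")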
# Basic LRU
python main.py --tablesize 32 --eviction_strategy LRU --disable_dqn_eviction --trace 1
# LRU with different table sizes
python main.py --tablesize 16 --eviction_strategy LRU --disable_dqn_eviction --trace 1
python main.py --tablesize 64 --eviction_strategy LRU --disable_dqn_eviction --trace 2
python main.py --tablesize 128 --eviction_strategy LRU --disable_dqn_eviction --trace 3
# LRU with custom timeouts
python main.py --tablesize 32 --eviction_strategy LRU --disable_dqn_eviction --idle_timeout 60.0 --trace 1
# Basic LFU
python main.py --tablesize 32 --eviction_strategy LFU --disable_dqn_eviction --trace 1
# LFU with different parameters
python main.py --tablesize 64 --eviction_strategy LFU --disable_dqn_eviction --RTI 0.02 --trace 2
# LFU with hybrid strategy
python main.py --tablesize 32 --eviction_strategy LFU_LRU --disable_dqn_eviction --trace 3
# Basic DQN+LRU
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 1
# DQN+LRU with different ETI intervals
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.05 --trace 2
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.2 --trace 3
# DQN+LRU with custom learning parameters
python main.py \
--tablesize 64 \
--eviction_strategy LRU \
--ETI 0.1 \
--dqn_lr 0.001 \
--dqn_epsilon 0.9 \
--dqn_batch_size 64 \
--trace 2
# Basic DQN+LFU
python main.py --tablesize 32 --eviction_strategy LFU --ETI 0.1 --trace 1
# DQN+LFU with hybrid strategy
python main.py --tablesize 32 --eviction_strategy LFU_LRU --ETI 0.1 --trace 2
# DQN+LFU with GPU acceleration
python main.py \
--tablesize 64 \
--eviction_strategy LFU \
--ETI 0.05 \
--dqn_device cuda \
--dqn_memory_size 20000 \
--trace 3
# Basic optimal
python optimal.py --tablesize 32 --trace 1
# Optimal with different table sizes
python optimal.py --tablesize 16 --trace 1
python optimal.py --tablesize 64 --trace 2
# Optimal with ETI tracking
python optimal.py --tablesize 32 --trace 3 --eti 0.1
# Optimal on data subsets (for comparison)
python optimal.py --tablesize 32 --trace 1 --start_percent 0 --end_percent 80 # Training set
python optimal.py --tablesize 32 --trace 1 --start_percent 80 --end_percent 100 # Test set
# Compare all strategies with same parameters
python main.py --tablesize 32 --eviction_strategy LRU --disable_dqn_eviction --trace 1 --processId "LRU_baseline"
python main.py --tablesize 32 --eviction_strategy LFU --disable_dqn_eviction --trace 1 --processId "LFU_baseline"
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 1 --processId "DQN_LRU"
python main.py --tablesize 32 --eviction_strategy LFU --ETI 0.1 --trace 1 --processId "DQN_LFU"
python optimal.py --tablesize 32 --trace 1 --processId "optimal"
# Test different table sizes with DQN+LRU
for size in 8 16 32 64 128; do
python main.py --tablesize $size --eviction_strategy LRU --ETI 0.1 --trace 1 --processId "size_$size"
done
# Test across the three main network traces
for trace in 1 2 3; do
python main.py --tablesize 32 --eviction_strategy LFU --ETI 0.1 --trace $trace --processId "trace_$trace"
done
| Parameter | Options | Description | Example |
|---|---|---|---|
| --tablesize | 8, 16, 32, 64, 128, 256 | Flow table capacity | --tablesize 64 |
| --trace | 1, 2, 3 | PCAP dataset selection | --trace 2 |
| --RTI | 0.001-0.1 | Controller response time | --RTI 0.02 |
| --idle_timeout | 10.0-300.0 | Flow idle timeout (seconds) | --idle_timeout 60.0 |
| Parameter | Options | Description | Example |
|---|---|---|---|
| --eviction_strategy | LRU, LFU, LFU_LRU | Base eviction policy | --eviction_strategy LFU |
| --disable_dqn_eviction | Flag | Disable DQN enhancement | --disable_dqn_eviction |
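For intuition, here is a generic sketch of how the three base policies pick an eviction victim. It is an illustration only, not the simulator's code, and the LFU_LRU tie-breaking rule shown (least frequently used first, least recently used on ties) is an assumption about the hybrid strategy.

def pick_victim(entries, strategy):
    # entries: {flow_id: {"last_used": timestamp, "hits": count}}
    if strategy == "LRU":       # evict the entry touched longest ago
        return min(entries, key=lambda f: entries[f]["last_used"])
    if strategy == "LFU":       # evict the entry hit least often
        return min(entries, key=lambda f: entries[f]["hits"])
    if strategy == "LFU_LRU":   # assumed hybrid: least frequent first, least recent breaks ties
        return min(entries, key=lambda f: (entries[f]["hits"], entries[f]["last_used"]))
    raise ValueError(strategy)

table = {"10.0.0.1->192.168.1.1": {"last_used": 3.2, "hits": 5},
         "192.168.1.1->10.0.0.1": {"last_used": 4.1, "hits": 5}}
print(pick_victim(table, "LFU_LRU"))   # -> "10.0.0.1->192.168.1.1"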
| Parameter | Range | Description | Example |
|---|---|---|---|
| --ETI | 0.01-1.0 | DQN intervention interval | --ETI 0.05 |
| --dqn_lr | 0.0001-0.01 | Learning rate | --dqn_lr 0.001 |
| --dqn_epsilon | 0.1-1.0 | Exploration rate | --dqn_epsilon 0.9 |
| --dqn_batch_size | 16-128 | Training batch size | --dqn_batch_size 64 |
| --dqn_memory_size | 1000-100000 | Replay buffer size | --dqn_memory_size 20000 |
| --dqn_device | auto, cpu, cuda | Computation device | --dqn_device cuda |
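For intuition on how --dqn_epsilon and the --dqn_epsilon_decay flag used in the examples below interact, here is a generic epsilon-greedy sketch with multiplicative decay. It is an illustration only, not the logic inside PerturbationDQN.py.

import random

def choose_action(q_values, epsilon):
    # explore with probability epsilon, otherwise take the greedy (highest-Q) action
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

epsilon, decay = 0.9, 0.999        # e.g. --dqn_epsilon 0.9 --dqn_epsilon_decay 0.999
for step in range(5):
    action = choose_action([0.2, 0.7, 0.1], epsilon)
    print(f"step {step}: action {action}, epsilon {epsilon:.4f}")
    epsilon *= decay               # exploration probability shrinks as training progresses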
# High-performance GPU training
python main.py \
--tablesize 128 \
--eviction_strategy LFU_LRU \
--ETI 0.02 \
--dqn_lr 0.0005 \
--dqn_gamma 0.95 \
--dqn_epsilon 0.8 \
--dqn_epsilon_decay 0.999 \
--dqn_memory_size 50000 \
--dqn_batch_size 128 \
--dqn_device cuda \
--dqn_hidden_layers "64_64_64" \
--trace 3
# Conservative CPU training
python main.py \
--tablesize 32 \
--eviction_strategy LRU \
--ETI 0.2 \
--dqn_lr 0.001 \
--dqn_batch_size 16 \
--dqn_memory_size 5000 \
--dqn_device cpu \
--trace 1
| Parameter | Range | Description | Example |
|---|---|---|---|
| --eti | 0.01-1.0 | ETI tracking interval | --eti 0.1 |
| --start_percent | 0-100 | Data start percentage | --start_percent 0 |
| --end_percent | 0-100 | Data end percentage | --end_percent 80 |
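The percentage flags are assumed to select a contiguous slice of the trace by packet count, as in this sketch. This is an assumption about how optimal.py interprets them; it mirrors the --start_percent 0 --end_percent 80 training split shown earlier.

import pandas as pd

def slice_by_percent(df, start_percent, end_percent):
    # map percentages of the packet count onto row indices, e.g. 0-80 -> first 80% of packets
    n = len(df)
    return df.iloc[int(n * start_percent / 100): int(n * end_percent / 100)]

trace = pd.read_csv("./Pcap/pcap_1.csv")
train = slice_by_percent(trace, 0, 80)      # corresponds to --start_percent 0 --end_percent 80
test = slice_by_percent(trace, 80, 100)     # corresponds to --start_percent 80 --end_percent 100
print(len(train), "training packets |", len(test), "test packets")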
data/raw/{expId}/{processId}/
├── info.log # Simulation logs
├── stat.json # Performance statistics
├── time_series.csv # Temporal data
├── eti_stats.csv # ETI interval data (optimal only)
└── model.pth # Trained DQN model (DQN only)
{
"hit_rate_percent": 87.34,
"total_flows": 50000,
"evictions": 1250,
"dqn_evictions": 234,
"lru_evictions": 1016,
"eviction_strategy": "LRU",
"table_size": 32
}
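To compare runs, the per-process stat.json files can be collected into a single table. This sketch assumes the data/raw/{expId}/{processId}/ layout and the field names shown above; runs without DQN or runs from optimal.py may carry slightly different fields.

import glob
import json
import pandas as pd

rows = []
for path in glob.glob("data/raw/*/*/stat.json"):      # data/raw/{expId}/{processId}/stat.json
    with open(path) as f:
        stats = json.load(f)
    stats["processId"] = path.replace("\\", "/").split("/")[-2]
    rows.append(stats)

summary = pd.DataFrame(rows).sort_values("hit_rate_percent", ascending=False)
print(summary[["processId", "hit_rate_percent", "evictions", "table_size"]])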
# Disable verbose logging for faster execution
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 1 --disable_frequent_logs
# Enable detailed logging for debugging
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 1 --enable_frequent_logs
# Create organized experiment directory structure
mkdir -p experiments/{baseline,dqn_enhanced,optimal}
# Baseline experiments across three traces
python main.py --tablesize 32 --eviction_strategy LRU --disable_dqn_eviction --trace 1 --expId "baseline" --processId "lru_trace1"
python main.py --tablesize 32 --eviction_strategy LFU --disable_dqn_eviction --trace 2 --expId "baseline" --processId "lfu_trace2"
# DQN experiments across three traces
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 1 --expId "dqn_enhanced" --processId "dqn_lru_trace1"
python main.py --tablesize 32 --eviction_strategy LFU --ETI 0.1 --trace 3 --expId "dqn_enhanced" --processId "dqn_lfu_trace3"
# Optimal baseline across three traces
python optimal.py --tablesize 32 --trace 1 --expId "optimal" --processId "optimal_trace1"
python optimal.py --tablesize 32 --trace 2 --expId "optimal" --processId "optimal_trace2"
python optimal.py --tablesize 32 --trace 3 --expId "optimal" --processId "optimal_trace3"
#!/bin/bash
# Comprehensive comparison script for all three traces
TABLE_SIZE=32
for TRACE in 1 2 3; do
echo "Running experiments for trace $TRACE"
# Traditional strategies
python main.py --tablesize $TABLE_SIZE --eviction_strategy LRU --disable_dqn_eviction --trace $TRACE --processId "traditional_lru_trace${TRACE}" &
python main.py --tablesize $TABLE_SIZE --eviction_strategy LFU --disable_dqn_eviction --trace $TRACE --processId "traditional_lfu_trace${TRACE}" &
# DQN-enhanced strategies
python main.py --tablesize $TABLE_SIZE --eviction_strategy LRU --ETI 0.1 --trace $TRACE --processId "dqn_lru_trace${TRACE}" &
python main.py --tablesize $TABLE_SIZE --eviction_strategy LFU --ETI 0.1 --trace $TRACE --processId "dqn_lfu_trace${TRACE}" &
# Optimal baseline
python optimal.py --tablesize $TABLE_SIZE --trace $TRACE --processId "optimal_trace${TRACE}" &
wait # Wait for current trace experiments to complete
done
# Verify dataset files exist (first three traces)
ls -la ./Pcap/pcap_1.csv ./Pcap/pcap_2.csv ./Pcap/pcap_3.csv
# Check dataset format
head -n 5 ./Pcap/pcap_1.csv
# Download missing datasets
echo "Download from: https://tinyurl.com/flow-caching-datasets"
# Quick 3-strategy comparison on trace 1 (1-2 minutes each)
python main.py --tablesize 16 --eviction_strategy LRU --disable_dqn_eviction --trace 1 --disable_frequent_logs
python main.py --tablesize 16 --eviction_strategy LRU --ETI 0.2 --trace 1 --disable_frequent_logs
python optimal.py --tablesize 16 --trace 1
# Comprehensive research comparison across three traces (30-90 minutes total)
for trace in 1 2 3; do
python main.py --tablesize 64 --eviction_strategy LFU --disable_dqn_eviction --trace $trace --processId "baseline_lfu_trace${trace}"
python main.py --tablesize 64 --eviction_strategy LFU --ETI 0.05 --trace $trace --processId "dqn_lfu_trace${trace}"
python optimal.py --tablesize 64 --trace $trace --eti 0.05 --processId "optimal_trace${trace}"
done
# Compare same strategy across different network traces
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 1 --processId "lru_trace1" --expId "cross_trace"
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 2 --processId "lru_trace2" --expId "cross_trace"
python main.py --tablesize 32 --eviction_strategy LRU --ETI 0.1 --trace 3 --processId "lru_trace3" --expId "cross_trace"
Results will be saved in ~/data/raw/ with detailed performance metrics for analysis and comparison across the three main network traces.