PerfTrack is a command-line tool that helps detect performance regressions in open-source projects and CI pipelines. It requires no external services and tracks performance locally.
It is designed for Python projects (Django, Pandas, FastAPI, ML libraries), internal tooling, and personal benchmarking workflows.
Performance regressions often go unnoticed because:
- CI timing is noisy and unreliable for benchmarking
- Tools like ASV or pytest-benchmark are powerful but demand dedicated setup and ongoing maintenance
- There is no simple utility to store a baseline and fail CI when performance drops
PerfTrack stores local performance snapshots and compares them against future runs, both locally and in CI. Each snapshot records:
- Wall-clock time
- CPU time
- Peak RSS memory usage
Baseline cached locally → regression check in CI.
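For intuition, here is a minimal sketch of how those three metrics can be captured for a child process using only the Python standard library. This is an illustration, not PerfTrack's actual implementation; the command and output format are placeholders.

```python
# Illustration only: measuring wall-clock time, CPU time, and peak RSS
# for a child process with the standard library (POSIX systems).
import resource
import subprocess
import time

cmd = ["python", "script.py"]  # placeholder command under test

before = resource.getrusage(resource.RUSAGE_CHILDREN)
start = time.perf_counter()
subprocess.run(cmd, check=True)
wall = time.perf_counter() - start
after = resource.getrusage(resource.RUSAGE_CHILDREN)

# CPU time = user + system time consumed by terminated child processes
cpu = (after.ru_utime + after.ru_stime) - (before.ru_utime + before.ru_stime)
# ru_maxrss is reported in kilobytes on Linux, bytes on macOS
peak_rss = after.ru_maxrss

print(f"wall={wall:.3f}s cpu={cpu:.3f}s peak_rss={peak_rss}")
```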
```bash
pip install perftrack
```

```bash
# run a command and record performance
perftrack run "python script.py"

# set the latest run as the baseline
perftrack baseline set-latest

# later, compare new results against the baseline
perftrack compare --fail-on-regression

# generate a simple HTML report
perftrack report
```
Replace "python script.py" with whatever you want to benchmark (scripts, test suites, build steps, ML training, etc.)
PerfTrack stores results under:

```
.perftrack/
  baseline.json
  latest.json
```
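The exact snapshot schema is an internal detail, but conceptually the compare step loads both files and fails when a metric exceeds the baseline by some tolerance. Here is a rough sketch, where the JSON field names and the 10% threshold are assumptions rather than PerfTrack's real format:

```python
# Conceptual sketch of the compare step; the field names and the
# 10% regression threshold below are assumptions for illustration.
import json
import sys
from pathlib import Path

THRESHOLD = 1.10  # fail if a metric regresses by more than 10%

baseline = json.loads(Path(".perftrack/baseline.json").read_text())
latest = json.loads(Path(".perftrack/latest.json").read_text())

failed = False
for metric in ("wall_time", "cpu_time", "peak_rss"):
    if latest[metric] > baseline[metric] * THRESHOLD:
        print(f"REGRESSION: {metric} {baseline[metric]} -> {latest[metric]}")
        failed = True

sys.exit(1 if failed else 0)  # a non-zero exit fails the CI job
```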
PerfTrack is designed to catch performance regressions before they merge. Here is a pull request where we intentionally slowed down code to demonstrate PerfTrack failing a CI check.