
PRE-TASK: MOT17 Multi-Edge Inference Benchmark (LFX TERM 3 2025) #231

@NishantSinghhhhh

Description


Proposal for LFX Term-3 at KubeEdge

Comprehensive Example Restoration for KubeEdge Ianvs

By Nishant Singh

Mentors: Zimu Zheng, Shujing Hu

**Parent Issue:** #230


Background

The MOT17 example is a complete benchmarking framework for pedestrian tracking built on a multi-edge inference architecture. It illustrates Ianvs's cloud-edge collaborative AI inference capability through a scenario that benchmarks person re-identification and tracking algorithms on the MOT17 dataset using the ByteTrack and M3L approaches.


Critical Execution Issues Identified

  1. Configuration Path Mismatch (Critical):

    • Problem: The tracking_job.yaml and reid_job.yaml files contain incorrect relative paths for test environment configurations
    • Current Path:
      ./examples/pedestrian_tracking/multiedge_inference_bench/testenv/tracking/testenv.yaml
      
    • Actual Path:
      ./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/testenv.yaml
      
    • Impact: This prevents the benchmarking job from even starting, as Ianvs cannot locate the required test environment configuration files
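A minimal sketch of the path fix. The two paths come from this issue; the stand-in config file below is only for demonstration, and in practice the same `sed` rewrite would be applied to the real `tracking_job.yaml` and `reid_job.yaml`:

```shell
# Create a stand-in config containing the stale path from the issue.
mkdir -p demo
cat > demo/tracking_job.yaml <<'EOF'
benchmarkingjob:
  testenv: "./examples/pedestrian_tracking/multiedge_inference_bench/testenv/tracking/testenv.yaml"
EOF

# Rewrite the stale prefix to the actual on-disk location.
sed -i 's|examples/pedestrian_tracking/multiedge_inference_bench|examples/MOT17/multiedge_inference_bench/pedestrian_tracking|' demo/tracking_job.yaml
```

After the rewrite, the config points at `./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/testenv.yaml`, which matches the actual path above.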

  2. Wrong Path in README.md for MOT17:

    • The README's folder structure says we need train and test directories, but the path that coco.py actually uses is different.

  3. convert_mot17_to_coco.py File Issue:

    • convert_mot17_to_coco.py hard-codes a folder structure in its code that differs from the documented one.
    • Either the script or the documented folder structure needs to change so the two match.
    • The folder structure convert_mot17_to_coco.py expects is shown in the attached screenshot.

  4. ReID File Path Expectation:
    • reid_job.yaml, like tracking_job.yaml, expects a specific folder structure.
    • The Benchmarking of Pedestrian Tracking docs should provide a single command that creates the expected folders.
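A one-command sketch of the kind of setup the docs could provide. The directory names here are assumptions for illustration; the real names must be taken from what reid_job.yaml and tracking_job.yaml actually expect:

```shell
# Hypothetical layout; adjust the directory names to what the job configs expect.
mkdir -p MOT17/reid/{train,query,gallery} MOT17/tracking/{train,test}
```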

  5. Improved User Experience for Dataset Conversion:

    • In README.md, provide commands for installing and running convert_mot17_to_coco.py and mot2reid.py from inside the MOT17 folder (not from ianvs/).
    • This keeps newly generated files under MOT17/ instead of polluting the repository root.
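A sketch of the working-directory discipline described above. The script names come from this issue, but their flags are not shown here, so the invocations are left as commented placeholders; the `touch` line only demonstrates that files written by a process rooted at MOT17/ stay under MOT17/:

```shell
mkdir -p MOT17
# Run the converters in a subshell rooted at MOT17/ so output stays there.
# Placeholder invocations; check each script's actual arguments:
#   python convert_mot17_to_coco.py
#   python mot2reid.py
( cd MOT17 && touch converted_marker.json )   # stands in for the converters' output
```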

  6. Missing Dependencies (High):
    1. Missing cv2 (OpenCV):

      • mot17_to_coco.py requires cv2, but it is not listed as a dependency, which causes runtime errors.
    2. Missing mmcv (ReID job):

      • mot2reid.py requires mmcv, but it is not present in requirements.txt.
    3. Impact: Algorithm execution fails with import errors.
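A sketch of the dependency fix. The PyPI package names are assumptions: the `cv2` module is commonly distributed as `opencv-python`, and mmcv is published as `mmcv`; the pinned versions the example needs should be confirmed before merging:

```shell
# Append the assumed package names for the missing imports to requirements.txt.
touch requirements.txt
printf 'opencv-python\nmmcv\n' >> requirements.txt
```

Users would then run `pip install -r requirements.txt` inside the example's environment.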


  7. Workspace Path Issue:
    • Both tracking_job.yaml and reid_job.yaml expect a workspace/ folder inside MOT17/.
    • Setup commands should be added so users can create the workspace directories easily.
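A minimal sketch of such a setup command. The per-job subdirectory names are assumptions; the real names should be read off the workspace keys in tracking_job.yaml and reid_job.yaml:

```shell
# Create the workspace folder both job configs expect (subfolder names assumed).
mkdir -p MOT17/workspace/tracking_job MOT17/workspace/reid_job
```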

  8. Report Generation Works:
    • The generateReport.py file works correctly provided the input data is structured properly.
    • It produces the required results successfully.
    • Demonstration: PR Report Generation Video

Goals

These objectives aim to elevate the MOT17 example from a non-working demo to a production-ready benchmarking example, highlighting Ianvs’s functionality.

  1. Crucial Bug Fixes

    • Fix configuration path mismatches in tracking_job.yaml and reid_job.yaml.
    • Resolve missing dependencies and correct folder structures.
  2. Improved Example Documentation & Setup

    • Complete setup documentation covering dataset prep, model download, dependencies.
    • Scripts to automate ByteTrack repo install, YOLOX dependencies, MOT17 dataset conversion.
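A sketch of what such an automation script could look like. The ByteTrack repository URL, the step order, and the converter invocations are all assumptions to be checked against the example's README; the snippet only writes the script to disk rather than executing it:

```shell
# Write a hypothetical one-shot setup script (not executed here).
cat > setup_mot17.sh <<'EOF'
#!/usr/bin/env sh
set -e
git clone https://github.com/ifzhang/ByteTrack.git   # assumed upstream ByteTrack repo
pip install -r ByteTrack/requirements.txt            # ByteTrack/YOLOX dependencies
python convert_mot17_to_coco.py                      # MOT17 dataset conversion
python mot2reid.py
EOF
chmod +x setup_mot17.sh
```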

Scope

The MOT17/multiedge_inference_bench example is a key real-time multi-edge inference benchmark, reflecting distributed KubeEdge use cases. Its current broken state leaves a major gap.

  • The example emphasizes inference rather than learning.
  • Focus: benchmarking pre-trained models across multiple edge nodes.
  • Aligns closely with real-world apps: surveillance, multi-camera pedestrian tracking, smart city AI.
  • Restoring the example reinstates Ianvs’s relevance to industry and CNCF ecosystem.

📊 Issue Evaluation Comparison

| Evaluation Criterion | Issue #231 (Strategic) | Issue #247 (Tactical) | Issue #248 (Maintenance) |
| --- | --- | --- | --- |
| Hypothesized Proposal | Declarative, extensible paradigm template engine | Addition of a new, specific metric (e.g. energy consumption) | Code refactor for technical debt (e.g. dataset loader) |
| Scope & Uniqueness | Architectural and transformational: changes Ianvs from a static tool into a dynamic community platform by letting users define their own extensions. | Feature-level and incremental: a useful new data point, but normal feature evolution rather than a unique competency. | Code-level and internal: improves code health and reduces future bugs, but is necessary rather than innovative engineering. |
| Expected User & Value | High value for all users: empowers AI/ML developers to test new algorithms and systems engineers to create realistic stress tests for future-proofing. | Low-to-medium value: gives both user personas another piece of information but does not enable fundamentally new workflows. | Very low direct user value: primarily benefits the internal development team by improving code quality and minimizing future bugs. |
| Alignment with Core Mission | High: allows benchmarking of essentially limitless "distributed synergy AI solutions" and creates the ecosystem the project's primary objective requires. | Medium: aids in uncovering "best practices" through additional data but does not materially extend the project's benchmark capabilities. | Low: affects the project's long-term viability but has no immediate, direct impact on benchmarking or standards work. |
| Innovation & Future Roadmap | High and foundational: opens a roadmap toward a "Paradigm Marketplace", deeper AutoML integration, and a potential market standard for defining the distribution of AI workloads. | Low: a single, self-contained feature; an endpoint, not a platform for future innovation. | None: preserves what exists and does not promote innovation. |

Industry Impact & Relevance

  • Addresses smart cities ($2.5T market), vehicles, security.
  • Benchmarks latency, accuracy, efficiency — key metrics for enterprise adoption.
  • Demonstrates ROI of edge-compute deployments.

System Testing Strength

  • MOT17 tests the entire edge compute stack (communication, sync, distributed decision making).
  • Stresses robustness under network variance + computational load.
  • Commercially demonstrates bandwidth, latency, and privacy benefits vs cloud-only inference.

Detailed Design

I have been working on the MOT17 example for some time and have made changes to its files. After correcting the paths, the code runs and generates valid reports.

In the dataset, you can see both the ReID job dataset and the tracking job dataset.


Files & Documentation To Update

  1. Update Existing README:
    File: ianvs/examples/MOT17/multiedge_inference_bench/pedestrian_tracking/README.md
    • Fix paths for testenv.
    • Add workspace creation commands.
    • Add conversion script usage examples.
    • Add missing dependencies to requirements.txt.
    • Clarify benchmark run commands.

  2. Add New Documentation:
    Location: ianvs/docs/proposals/scenarios/MOT17.md

    This doc will cover:

    • Why MOT17 Edge Benchmarks Matter
    • Dataset Structure for Benchmarking
    • Related Works: Multi-Object Tracking in Edge Computing
    • Spotlight: MOT17 – Multi-Detector Pedestrian Tracking Benchmark
    • Conclusion & Usage

Labels

kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
kind/documentation: Categorizes issue or PR as related to documentation.