Proposal for LFX Term-3 at KubeEdge
Comprehensive Example Restoration for KubeEdge Ianvs
By Nishant Singh
Mentors: Zimu Zheng, Shujing Hu
**Parent Issue:** #230
Background
The MOT17 example is a complete benchmarking framework for pedestrian tracking built on a multi-edge inference architecture with device-side learning. It demonstrates Ianvs's cloud-edge collaborative AI inference capability by benchmarking person re-identification and tracking algorithms on the MOT17 dataset using the ByteTrack and M3L approaches.
Critical Execution Issues Identified
- **Configuration Path Mismatch (Critical):**
  - Problem: `tracking_job.yaml` and `reid_job.yaml` contain incorrect relative paths to the test environment configurations.
  - Current path: `./examples/pedestrian_tracking/multiedge_inference_bench/testenv/tracking/testenv.yaml`
  - Actual path: `./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/testenv.yaml`
  - Impact: this prevents the benchmarking job from even starting, as Ianvs cannot locate the required test environment configuration files (a one-line fix is sketched below).
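A minimal fix sketch for the path mismatch. Assumptions: the command runs from the ianvs repo root and uses GNU sed; verify the stale path actually appears verbatim in the file before running.

```bash
# Minimal fix sketch (assumptions: run from the ianvs repo root, GNU sed).
# Rewrites the stale testenv path in tracking_job.yaml; repeat the same
# substitution for reid_job.yaml.
sed -i 's|\./examples/pedestrian_tracking/multiedge_inference_bench/testenv/tracking/testenv.yaml|./examples/MOT17/multiedge_inference_bench/pedestrian_tracking/testenv/tracking/testenv.yaml|' \
  examples/MOT17/multiedge_inference_bench/pedestrian_tracking/tracking_job.yaml
```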
- **Wrong Path in README.md for MOT17:**
  - The documented folder structure says we need train and test model folders, but `coco.py` uses a different path.
- **`convert_mot17_to_coco.py` File Issue:**
  - The folder structure hard-coded inside `convert_mot17_to_coco.py` differs from the documented one, so the folder structure needs to be corrected.
  - The folder structure `convert_mot17_to_coco.py` accepts is sketched below.
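A hypothetical input layout, inferred from the standard MOT17 release convention rather than read out of the script; verify it against the paths hard-coded in `convert_mot17_to_coco.py` before relying on it. Expressed as commands so the folders can be created directly:

```bash
# Hypothetical layout the converter is assumed to accept (based on the
# standard MOT17 convention: train sequences carry img1/gt/det, test
# sequences carry img1/det). Sequence names are illustrative only.
mkdir -p MOT17/train/MOT17-02-FRCNN/{img1,gt,det}
mkdir -p MOT17/test/MOT17-01-FRCNN/{img1,det}
```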
- **ReID File Path Expectation:**
  - `reid_job.yaml` also expects a required folder structure, just like `tracking_job.yaml`.
  - We should provide commands in the Benchmarking of Pedestrian Tracking docs that allow users to create the expected folders with one command (sketched below).
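A one-command sketch for creating those folders. The concrete subfolder names here are assumptions; take the real names from the dataset and workspace fields in `reid_job.yaml` before running.

```bash
# One-command folder setup (subfolder names are assumptions; read the
# actual paths off the fields in reid_job.yaml):
mkdir -p examples/MOT17/multiedge_inference_bench/pedestrian_tracking/dataset/{tracking,reid}
```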
- **Improved User Experience for Dataset Conversion:**
  - In `README.md`, provide commands for installing and running `convert_mot17_to_coco.py` and `mot2reid.py` inside the MOT17 folder (not in `ianvs/`), as shown below.
  - This ensures newly generated files are stored under `MOT17/` instead of polluting the repo root.
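A sketch of the commands the README could document. The script locations are assumptions, and any CLI flags the scripts take are omitted here; check each script before running.

```bash
# Run the converters from inside the MOT17 example folder so generated
# files land under MOT17/ rather than the repo root (paths are assumptions):
cd examples/MOT17/multiedge_inference_bench/pedestrian_tracking
python convert_mot17_to_coco.py
python mot2reid.py
```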
- **Missing Dependencies (High):**
  - Packages the example relies on (e.g., for ByteTrack and YOLOX) are not captured in `requirements.txt`.
- **Workspace Path Issue:**
  - Both `tracking_job.yaml` and `reid_job.yaml` expect a `workspace/` folder inside `MOT17/`.
  - We need to add setup commands to easily create the workspace directories (see the sketch below).
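A setup sketch for the workspace folder. Whether the jobs need further subdirectories inside `workspace/` is an assumption to verify against the workspace fields in both YAML files.

```bash
# Create the workspace/ folder both job configs expect (any required
# subdirectories should be matched to the workspace fields in
# tracking_job.yaml and reid_job.yaml):
mkdir -p examples/MOT17/multiedge_inference_bench/pedestrian_tracking/workspace
```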
- **Report Generation Works:**
  - `generateReport.py` works correctly, provided the input data is structured properly, and produces the required results successfully.
  - Demonstration: PR Report Generation Video
Goals
These objectives aim to elevate the MOT17 example from a non-working demo to a production-ready benchmarking example, highlighting Ianvs’s functionality.
- **Crucial Bug Fixes**
  - Fix configuration path mismatches in `tracking_job.yaml` and `reid_job.yaml`.
  - Resolve missing dependencies and correct folder structures.
- **Improved Example Documentation & Setup**
  - Complete setup documentation covering dataset preparation, model download, and dependencies.
  - Scripts to automate ByteTrack repo installation, YOLOX dependencies, and MOT17 dataset conversion (a hedged setup sketch follows).
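A hedged sketch of what the automation script could do. The repo URL is upstream ByteTrack; the exact package set is an assumption and should be pinned properly in the real script.

```bash
# Setup sketch: install ByteTrack and its YOLOX dependencies
# (package list is an assumption; pin versions in the real script).
git clone https://github.com/ifzhang/ByteTrack.git
pip install -r ByteTrack/requirements.txt
pip install cython_bbox   # commonly-missed ByteTrack dependency
```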
Scope
The MOT17/multiedge_inference_bench example is a key real-time multi-edge inference benchmark, reflecting distributed KubeEdge use cases. Its current broken state leaves a major gap.
- The example emphasizes inference rather than learning.
- Focus: benchmarking pre-trained models across multiple edge nodes.
- Aligns closely with real-world apps: surveillance, multi-camera pedestrian tracking, smart city AI.
- Restoring the example reinstates Ianvs’s relevance to industry and CNCF ecosystem.
📊 Issue Evaluation Comparison
| Evaluation Criterion | Issue #231 (Strategic) | Issue #247 (Tactical) | Issue #248 (Maintenance) |
|---|---|---|---|
| Hypothesized Proposal | Declarative, extensible paradigm template engine | Addition of a new, specific metric (e.g., energy consumption) | Code refactor to pay down technical debt (e.g., the dataset loader) |
| Scope & Uniqueness | Architectural and transformational. Turns Ianvs from a static tool into a dynamic community platform; user-defined extensibility is a uniquely differentiating capability. | Feature-level and incremental. Adds a new data point; normal, expected feature evolution rather than a unique competency. | Code-level and internal. Improves code health and reduces future bugs, but is necessary rather than innovative engineering. |
| Expected User Value | High value for all users. Empowers AI/ML developers to test new algorithms and systems engineers to create realistic stress tests for future-proofing. | Low-to-medium value. Gives both user personas one more piece of information, but does not enable fundamentally new workflows. | Very low direct user value. Benefits primarily the internal development team by improving code quality and minimizing future bugs. |
| Alignment with Core Mission | High. Enables benchmarking of essentially limitless "distributed synergy AI solutions" and creates an ecosystem that serves the project's primary objective. | Medium. Helps uncover "best practices" through additional data, but does not materially extend the project's benchmark capabilities. | Low. Supports the project's long-term viability, but has no immediate, direct impact on furthering the mission of benchmarking or developing standards. |
| Innovation & Future Roadmap | High and foundational. Opens a roadmap toward a "Paradigm Marketplace", deeper AutoML integration, and a potential market standard for defining the distribution of AI workloads. | Low. A single, self-contained feature; an endpoint, not a platform for future innovation. | None. Preserves what exists; does not promote innovation. |
Industry Impact & Relevance
- Addresses smart cities (a $2.5T market), vehicles, and security.
- Benchmarks latency, accuracy, efficiency — key metrics for enterprise adoption.
- Demonstrates ROI of edge-compute deployments.
System Testing Strength
- MOT17 tests the entire edge compute stack (communication, sync, distributed decision making).
- Stresses robustness under network variance + computational load.
- Commercially demonstrates the bandwidth, latency, and privacy benefits versus cloud-only inference.
Detailed Design
I have been working on the MOT17 example for a long time and have made changes to its files. After making the path corrections, the code works and generates valid reports.
- Dataset: Kaggle Dataset and Reports (access via the KubeEdge-Ianvs account).
  The dataset contains both the ReID-job and tracking-job datasets.
Files & Documentation To Update
- **Update Existing README:**
  File: `ianvs/examples/MOT17/multiedge_inference_bench/pedestrian_tracking/README.md`
  - Fix paths for the testenv configuration.
  - Add workspace creation commands.
  - Add conversion script usage examples.
  - Add missing dependencies to `requirements.txt`.
  - Clarify benchmark run commands (sketched below).
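A sketch of the clarified run commands. The `ianvs -f` invocation mirrors the CLI used by other Ianvs examples; treat the exact flag as an assumption and verify with `ianvs --help`.

```bash
# Run the two benchmarking jobs (invocation mirrors other Ianvs examples):
ianvs -f examples/MOT17/multiedge_inference_bench/pedestrian_tracking/tracking_job.yaml
ianvs -f examples/MOT17/multiedge_inference_bench/pedestrian_tracking/reid_job.yaml
```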
- **Add New Documentation:**
  Location: `ianvs/docs/proposals/scenarios/MOT17.md`
  This doc will cover:
- Why MOT17 Edge Benchmarks Matter
- Dataset Structure for Benchmarking
- Related Works: Multi-Object Tracking in Edge Computing
- Spotlight: MOT17 – Multi-Detector Pedestrian Tracking Benchmark
- Conclusion & Usage

