Update benchmark datasets for classification and segmentation #4349
Conversation
Copilot reviewed 4 out of 4 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (4)
tests/perf_v2/tasks/semantic_segmentation.py:32
- [nitpick] Verify that the dataset name 'tiny_human_railway_animal' aligns with the project's naming conventions for segmentation benchmarks and is descriptive enough for its content.
name="tiny_human_railway_animal",
tests/perf_v2/tasks/classification.py:55
- [nitpick] Ensure that the dataset name 'multiclass_tiny_pneumonia' is consistent with naming patterns used across classification test cases in the project.
name="multiclass_tiny_pneumonia",
tests/perf/test_semantic_segmentation.py:31
- [nitpick] Confirm that the updated dataset name 'tiny_human_railway_animal' is in line with the overall naming scheme for segmentation benchmarks in tests.
name="tiny_human_railway_animal",
tests/perf/test_classification.py:44
- [nitpick] Check that the dataset name 'multiclass_tiny_pneumonia' is descriptive and consistent with similar dataset naming patterns in the classification performance tests.
name="multiclass_tiny_pneumonia",
Can we update this only for perf_v2? The first version will be deleted, I hope.
Good idea. Done.
Pull Request Overview
This PR updates the benchmark datasets used for classification and semantic segmentation tasks by replacing the previously loop-generated dataset configurations with explicit definitions. Key changes include updated dataset names, new dataset paths, and clearer grouping of datasets by size.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/perf_v2/tasks/semantic_segmentation.py | Replaces loop-generated dataset definitions with explicit entries for tiny, small, medium, and large segmentation datasets. |
| tests/perf_v2/tasks/classification.py | Updates and expands dataset definitions for multi-class, multi-label, and h-label classification tasks. |
Comments suppressed due to low confidence (2)
tests/perf_v2/tasks/semantic_segmentation.py:32
- [nitpick] The dataset name 'tiny_human_railway_animal' may be too generic if multiple tiny datasets are expected. Consider updating the name to better reflect its content or purpose, if applicable.
name="tiny_human_railway_animal",
tests/perf_v2/tasks/classification.py:55
- [nitpick] The dataset name 'multiclass_tiny_pneumonia' should be reviewed for consistency with other naming patterns. Ensure that the name clearly indicates its content and is aligned with the overall dataset naming conventions.
name="multiclass_tiny_pneumonia",
Merged cb616ec into open-edge-platform:develop