## Automated command to run the benchmark via MLCFlow
Please see the [new docs site](https://docs.mlcommons.org/inference/benchmarks/language/deepseek-r1/) for an automated way to run this benchmark across different available implementations and do an end-to-end submission with or without docker.
You can also do `pip install mlc-scripts` and then use `mlcr` commands for downloading the model and dataset; an illustrative invocation follows the notes below.
- The DeepSeek-R1 model is automatically downloaded as part of setup.
- Checkpoint conversion is done transparently when needed.
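As a rough sketch only: the `mlcr` tags and flags below are assumptions, not the verified command for this benchmark, so treat the docs site above as authoritative.

```bash
# Illustrative mlcr invocation; tag names and flag values are assumptions.
pip install mlc-scripts
mlcr run-mlperf,inference,_r5.1-dev \
    --model=deepseek-r1 \
    --implementation=reference \
    --scenario=Offline \
    --execution_mode=test \
    --quiet
```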
**Using the MLC R2 Downloader**
Download the model using the MLCommons R2 Downloader:
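This follows the usual R2 Downloader pattern; the exact metadata URI below is an assumption and should be checked against the repository.

```bash
# The metadata URI is an assumption; substitute the published
# DeepSeek-R1 URI if it differs.
bash <(curl -s https://raw.githubusercontent.com/mlcommons/r2-downloader/refs/heads/main/mlc-r2-downloader.sh) \
    https://inference.mlcommons-storage.org/metadata/deepseek-r1.uri
```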
The dataset is an ensemble of the following datasets: AIME, MATH500, GPQA, MMLU-Pro, and LiveCodeBench (code_generation_lite), each covered by its own license.
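The dataset is fetched with the same downloader; again, the metadata URI here is an assumption rather than a verified path.

```bash
# Assumed dataset metadata URI; verify against the repository docs.
bash <(curl -s https://raw.githubusercontent.com/mlcommons/r2-downloader/refs/heads/main/mlc-r2-downloader.sh) \
    https://inference.mlcommons-storage.org/metadata/deepseek-r1-dataset.uri
```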
This will download the full preprocessed dataset file (`mlperf_deepseek_r1_dataset_4388_fp8_eval.pkl`) and the calibration dataset file (`mlperf_deepseek_r1_calibration_dataset_500_fp8_eval.pkl`).
To specify a custom download directory, use the `-d` flag:
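For example (same assumed URI as above, with a placeholder directory):

```bash
# -d sets the download directory; the path here is a placeholder.
bash <(curl -s https://raw.githubusercontent.com/mlcommons/r2-downloader/refs/heads/main/mlc-r2-downloader.sh) \
    -d ./downloads \
    https://inference.mlcommons-storage.org/metadata/deepseek-r1-dataset.uri
```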