
Commit 3e71f04

Authored by BowShotDS and Your Name

update DFQ/EQ/Evaluate int8 perchannel quant tool (#1112)

* update DFQ/EQ/Evaluate int8 perchannel quant tool
* apply code-format changes

Co-authored-by: Your Name <[email protected]>
Co-authored-by: BowShotDS <[email protected]>
1 parent f676175 commit 3e71f04

File tree

9 files changed, 2263 insertions(+), 21 deletions(-)


tools/quantize/CMakeLists.txt

Lines changed: 2 additions & 0 deletions
```diff
@@ -30,6 +30,8 @@ IF (${TENGINE_TARGET_PROCESSOR} MATCHES "X86")
     ADD_EXECUTABLE(
         ${name}
         ./quant_save_graph.cpp
+        ./algorithm/quant_dfq.cpp
+        ./algorithm/quant_eq.cpp
         ./quant_utils.cpp
         ../save_graph/save_graph.cpp
         ../save_graph/tm2_op_save.cpp
```
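The two new sources above are wired into the x86 quant-tool target. For context, the int8, per-channel, symmetric scheme this tool produces can be sketched as follows; this is an illustrative sketch (the function name, struct, and channel-major layout are assumptions), not code from this commit:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-channel int8 quantization: one scale per output channel,
// scale = max(|w|) / 127, q = round(w / scale), clamped to [-127, 127].
struct ChannelQuant {
    std::vector<int8_t> data;   // quantized weights
    std::vector<float> scales;  // one scale per channel
};

ChannelQuant quantize_per_channel(const std::vector<float>& w,
                                  int channels, int per_channel) {
    ChannelQuant out;
    out.data.resize(w.size());
    out.scales.resize(channels);
    for (int c = 0; c < channels; ++c) {
        // find the absolute maximum of this channel's weights
        float absmax = 0.f;
        for (int i = 0; i < per_channel; ++i)
            absmax = std::max(absmax, std::fabs(w[c * per_channel + i]));
        const float scale = absmax > 0.f ? absmax / 127.f : 1.f;
        out.scales[c] = scale;
        // quantize and clamp to the symmetric int8 range
        for (int i = 0; i < per_channel; ++i) {
            int q = static_cast<int>(std::round(w[c * per_channel + i] / scale));
            q = std::min(127, std::max(-127, q));
            out.data[c * per_channel + i] = static_cast<int8_t>(q);
        }
    }
    return out;
}
```

Per-channel scales (as opposed to one scale per tensor) keep narrow-range channels from being crushed by a single wide-range outlier channel.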

tools/quantize/README.md

Lines changed: 33 additions & 1 deletion
````diff
@@ -76,7 +76,7 @@ Status : int8, per-channel, symmetric
 Before use the quant tool, **you need Float32 tmfile and Calibration Dataset**, the image num of calibration dataset we suggest to use 500-1000.
 
 ```
-$ .quant_tool_int8 -m ./mobilenet_fp32.tmfile -i ./dataset -o ./mobilenet_int8.tmfile -g 3,224,224 -w 104.007,116.669,122.679 -s 0.017,0.017,0.017
+$ .quant_tool_int8 -m ./mobilenet_fp32.tmfile -i ./dataset -o ./mobilenet_int8.tmfile -g 3,224,224 -w 104.007,116.669,122.679 -s 0.017,0.017,0.017 -z 1
 
 ---- Tengine Post Training Quantization Tool ----
 
@@ -111,6 +111,38 @@ Thread num : 1
 [Quant Tools Info]: Step 4, quantize activation tensor done.
 [Quant Tools Info]: Step 5, quantize weight tensor done.
 [Quant Tools Info]: Step 6, save Int8 tmfile done, ./mobilenet_int8.tmfile
+[Quant Tools Info]: Step Evaluate, evaluate quantitative losses
+cosin  0    32 avg 0.995317 ### 0.000000 0.953895 0.998249 0.969256 ...
+cosin  1    32 avg 0.982403 ### 0.000000 0.902383 0.964436 0.873998 ...
+cosin  2    64 avg 0.976753 ### 0.952854 0.932301 0.982766 0.958503 ...
+cosin  3    64 avg 0.981889 ### 0.976637 0.981754 0.987276 0.970671 ...
+cosin  4   128 avg 0.979728 ### 0.993999 0.991858 0.990438 0.992766 ...
+cosin  5   128 avg 0.970351 ### 0.772556 0.989541 0.986996 0.989563 ...
+cosin  6   128 avg 0.954545 ### 0.950125 0.922964 0.946804 0.972852 ...
+cosin  7   128 avg 0.977192 ### 0.994728 0.972071 0.995353 0.992700 ...
+cosin  8   256 avg 0.977426 ### 0.968429 0.991248 0.991274 0.994450 ...
+cosin  9   256 avg 0.962224 ### 0.985255 0.969171 0.958762 0.967461 ...
+cosin 10   256 avg 0.954253 ### 0.984353 0.935643 0.656188 0.929778 ...
+cosin 11   256 avg 0.971987 ### 0.997596 0.967681 0.476525 0.999115 ...
+cosin 12   512 avg 0.972861 ### 0.968920 0.905907 0.993918 0.622953 ...
+cosin 13   512 avg 0.959161 ### 0.935686 0.000000 0.642560 0.994388 ...
+cosin 14   512 avg 0.963903 ### 0.979613 0.957169 0.976440 0.902512 ...
+cosin 15   512 avg 0.963226 ### 0.977065 0.965819 0.998149 0.905297 ...
+cosin 16   512 avg 0.960935 ### 0.861674 0.972926 0.950579 0.987609 ...
+cosin 17   512 avg 0.961057 ### 0.738472 0.987884 0.999124 0.995397 ...
+cosin 18   512 avg 0.960127 ### 0.935455 0.968909 0.970831 0.981240 ...
+cosin 19   512 avg 0.963755 ### 0.972628 0.992305 0.999518 0.799737 ...
+cosin 20   512 avg 0.949364 ### 0.922776 0.896038 0.945079 0.971338 ...
+cosin 21   512 avg 0.961256 ### 0.902256 0.896438 0.923361 0.973974 ...
+cosin 22   512 avg 0.946552 ### 0.963806 0.982075 0.878965 0.929992 ...
+cosin 23   512 avg 0.953677 ### 0.953880 0.996364 0.936540 0.930796 ...
+cosin 24  1024 avg 0.941197 ### 0.000000 0.992507 1.000000 0.994460 ...
+cosin 25  1024 avg 0.973546 ### 1.000000 0.889181 0.000000 0.998084 ...
+cosin 26  1024 avg 0.869351 ### 0.522966 0.000000 0.987009 0.000000 ...
+cosin 27     1 avg 0.974982 ### 0.974982
+cosin 28     1 avg 0.974982 ### 0.974982
+cosin 29     1 avg 0.974982 ### 0.974982
+cosin 30     1 avg 0.978486 ### 0.978486
 
 
 ---- Tengine Int8 tmfile create success, best wish for your INT8 inference has a low accuracy loss...\(^0^)/ ----
 ```
````
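Each `cosin` line added above reports, per layer, a layer index, a channel count, the average cosine similarity between the fp32 and int8 activations, and the leading per-channel values (1.0 means no quantization loss for that channel). A minimal sketch of that metric, not the tool's actual implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Cosine similarity between an fp32 reference activation and its
// dequantized int8 counterpart: dot(a, b) / (|a| * |b|).
float cosine_similarity(const std::vector<float>& a,
                        const std::vector<float>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += static_cast<double>(a[i]) * b[i];
        na += static_cast<double>(a[i]) * a[i];
        nb += static_cast<double>(b[i]) * b[i];
    }
    // An all-zero channel has no direction; report 0, which would match
    // the 0.000000 entries in the log if the tool handles it the same way.
    if (na == 0.0 || nb == 0.0)
        return 0.0f;
    return static_cast<float>(dot / (std::sqrt(na) * std::sqrt(nb)));
}
```

Low per-channel values (e.g. the 0.476525 in layer 11) flag individual channels where quantization error is concentrated, even when the layer average looks healthy.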

tools/quantize/algorithm/quant_dfq.cpp

Lines changed: 572 additions & 0 deletions
Large diffs are not rendered by default.
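The file itself is not shown here. Assuming DFQ stands for data-free quantization, its usual core is cross-layer weight-range equalization (Nagel et al., 2019); a hedged sketch of that idea, not the contents of `quant_dfq.cpp`:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Cross-layer range equalization: for channel i connecting layer 1 to
// layer 2, divide layer 1's output channel by s_i and multiply layer 2's
// input channel by s_i. Choosing s_i = sqrt(r1_i / r2_i) makes both
// per-channel weight ranges equal, which helps per-tensor/per-channel
// quantization afterwards. r1, r2 are per-channel absolute ranges.
std::vector<float> equalization_scales(const std::vector<float>& r1,
                                       const std::vector<float>& r2) {
    std::vector<float> s(r1.size());
    for (size_t i = 0; i < r1.size(); ++i)
        s[i] = (r1[i] > 0.f && r2[i] > 0.f)
                   ? std::sqrt(r1[i] / r2[i])
                   : 1.f; // dead channel: leave untouched
    return s;
}
```

After rescaling, channel i's ranges become r1_i / s_i and r2_i * s_i, which are equal by construction, so no single channel dominates the quantization grid.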

0 commit comments
