> Evaluate accuracy on the ILSVRC 2012 (ImageNet Large Scale Visual Recognition Challenge) image classification task
[TOC]
# Compile
Set the option `MNN_EVALUATION` in the top-level [CMakeLists](../../CMakeLists.txt) to `ON` like this:
```bash
cmake -DMNN_EVALUATION=ON ..
```
# Download dataset
Download the ImageNet validation dataset (50,000 images) from [here](http://image-net.org/request).
# Convert Labels to Class IDs
Use the [script](./turnLabelToClassID.py) to generate class IDs for the validation dataset (the script writes them to a file named `class_id.txt`). It requires two inputs:
1. [Synset Words](../../demo/model/MobileNet/synset_words.txt) (if you use a TensorFlow model that outputs 1001 categories, add a `background` line before `tench, Tinca tinca`)
2. Validation labels (download `ILSVRC2012_devkit_t12.tar.gz`, then use [this script](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/accuracy/ilsvrc/generate_validation_labels.py) to generate the validation labels)
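The conversion these two inputs feed can be sketched in plain Python. This is a hypothetical illustration of the mapping, not the bundled `turnLabelToClassID.py`: each label is looked up by its line index in the synset words file, since that index is the class ID the model predicts.

```python
def build_class_ids(synset_lines, label_lines):
    """Map each validation label (a synset word entry) to its class ID,
    i.e. its line index in the synset words file."""
    # class name -> index, in the order the model outputs categories
    word_to_id = {line.strip(): i for i, line in enumerate(synset_lines)}
    return [word_to_id[label.strip()] for label in label_lines]

# Tiny worked example with three made-up synset entries
synsets = ["tench, Tinca tinca", "goldfish, Carassius auratus", "great white shark"]
labels = ["goldfish, Carassius auratus", "tench, Tinca tinca"]
print(build_class_ids(synsets, labels))  # [1, 0]
```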
# Run Evaluation
## Configure the evaluation
```json
{
    "format": "RGB",
    "mean": [127.5, 127.5, 127.5],
    "normal": [0.00784314, 0.00784314, 0.00784314],
    "width": 224,
    "height": 224,
    "imagePath": "path/to/Val_2012_Images/",
    "groundTruthId": "path/to/ILSVRC2012_devkit_t12/class_id.txt"
}
```
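For context on the `mean`/`normal` values: the pair is applied per channel as `(pixel - mean) * normal` during image preprocessing, so 127.5 together with 0.00784314 (≈ 2/255) maps the 0–255 pixel range to roughly [-1, 1]. A quick sanity check, assuming that formula:

```python
mean, normal = 127.5, 0.00784314

def preprocess(pixel):
    # per-channel normalization as configured above
    return (pixel - mean) * normal

print(preprocess(0))    # close to -1.0
print(preprocess(255))  # close to  1.0
```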
## Run like this
```bash
./classficationTopkEval.out quantized_model.mnn config.json
```
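The tool reports top-k classification accuracy against `class_id.txt`. Conceptually the metric looks like this (a minimal Python sketch, not the actual C++ implementation; the example scores and labels are made up):

```python
def topk_accuracy(scores_per_image, ground_truth_ids, k=5):
    """Fraction of images whose true class is among the k highest-scored classes."""
    hits = 0
    for scores, truth in zip(scores_per_image, ground_truth_ids):
        # indices of the k largest scores
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += truth in topk
    return hits / len(ground_truth_ids)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
truth = [1, 2]
print(topk_accuracy(scores, truth, k=1))  # 0.5 (only the first image's top-1 is correct)
print(topk_accuracy(scores, truth, k=3))  # 1.0 (both true classes fall in the top 3)
```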