1. New Tasks: OCR, Text Completion and Parallel Model Execution.
2. NNAPI 1.2-compatible Neural Networks added.
3. The total number of tests increased to 46.
4. Native hardware acceleration on Snapdragon and Exynos SoCs using the Hexagon NN / Eden delegates.
5. Extended accuracy measurements.
6. DSP / NPU throttling tests (PRO mode).
7. Running custom TFLite models (PRO mode).
8. Optimizations for low-RAM devices.
9. GPU-based AI acceleration is available on devices with OpenGL ES 3.0+ support.
Face Recognition, Image Classification, Text Completion...
Is your smartphone capable of running the latest Deep Neural Networks to perform these AI-based tasks? Does it have a dedicated AI Chip? Is it fast enough? Run AI Benchmark to professionally evaluate its AI Performance!
Current phone ranking: http://ai-benchmark.com/ranking.html
AI Benchmark measures the speed, accuracy and memory requirements of several key AI and Computer Vision algorithms. Among the tested solutions are Image Classification and Face Recognition methods, Neural Networks used for Image Super-Resolution and Photo Enhancement, AI models predicting text and performing Bokeh Effect Rendering, as well as algorithms used in autonomous driving systems. The visualization of the algorithms' outputs lets you assess their results graphically and get to know the current state of the art in various AI fields.
In total, AI Benchmark consists of 46 tests grouped into the 14 sections listed below:
Section 1. Classification, MobileNet-V2
Section 2. Classification, Inception-V3
Section 3. Face Recognition, MobileNet-V3
Section 4. Parallel Model Execution, 8 x MobileNet-V2
Section 5. Optical Character Recognition, CRNN
Section 6. Photo Deblurring, PyNET
Section 7. Image Super-Resolution, VGG19
Section 8. Image Super-Resolution, SRGAN
Section 9. Bokeh Effect Rendering, U-Net
Section 10. Semantic Segmentation, DeepLabV3+
Section 11. Parallel Segmentation, 2 x DeepLabV3+
Section 12. Image Enhancement, DPED ResNet
Section 13. Text Completion, LSTM
Section 14. Memory Limits, SRCNN
Besides that, you can also load and test your own TensorFlow Lite deep learning models in PRO Mode.
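Custom models are loaded in the standard .tflite format. Purely as an illustration (this is not part of the app itself), the sketch below shows one common way to produce such a file with TensorFlow's own converter; the Keras model and the output file name are placeholder assumptions:

import tensorflow as tf

# Build or load any Keras model; MobileNetV2 is used here purely as a placeholder.
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

# Convert the model with the standard TensorFlow Lite converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training optimization
tflite_model = converter.convert()

# Save the resulting .tflite file, which can then be loaded in PRO Mode.
with open("custom_model.tflite", "wb") as f:
    f.write(tflite_model)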
A detailed description of the tests can be found here: http://ai-benchmark.com/tests.html
Note: Hardware acceleration is supported on all mobile SoCs with dedicated NPUs and AI accelerators, including Qualcomm Snapdragon, HiSilicon Kirin, Samsung Exynos and MediaTek Helio / Dimensity chipsets. Starting with AI Benchmark v4, GPU-based AI acceleration can also be enabled on older devices in the settings ("Accelerate" -> "Enable GPU Acceleration"; OpenGL ES 3.0+ is required).
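For context, TensorFlow Lite engages such accelerators through delegates (NNAPI, GPU, Hexagon and similar). The sketch below is only an illustrative assumption of how a delegate can be attached to a TFLite interpreter in Python; the delegate library name and model file are placeholders, and on Android devices the NNAPI or GPU delegate from the Java/Kotlin API would typically be used instead:

import tensorflow as tf

# The delegate library name is platform-specific and assumed here for illustration only.
try:
    gpu_delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
    interpreter = tf.lite.Interpreter(
        model_path="custom_model.tflite",          # placeholder model file
        experimental_delegates=[gpu_delegate],     # run supported ops on the accelerator
    )
except (ValueError, RuntimeError):
    # Fall back to plain CPU execution if the delegate library cannot be loaded.
    interpreter = tf.lite.Interpreter(model_path="custom_model.tflite")

interpreter.allocate_tensors()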