Face Recognition, Image Classification, Question Answering...
Is your smartphone capable of running the latest Deep Neural Networks to perform these and many other AI-based tasks? Does it have a dedicated AI Chip? Is it fast enough? Run AI Benchmark to professionally evaluate its AI Performance!
Current phone ranking: ai-benchmark.com/ranking
AI Benchmark measures the speed, accuracy, power consumption and memory requirements of several key AI and Computer Vision algorithms. The tested solutions include Image Classification and Face Recognition methods, Neural Networks used for Image / Video Super-Resolution and Photo Enhancement, AI models predicting text and performing question answering, as well as AI solutions used in autonomous driving systems and smartphones for real-time Depth Estimation and Semantic Image Segmentation. The visualization of the algorithms' outputs lets you assess their results graphically and get to know the current state-of-the-art in various AI fields.
In total, AI Benchmark consists of 78 tests grouped into the 26 sections listed below:
Section 1. Classification, MobileNet-V2
Section 2. Classification, Inception-V3
Section 3. Face Recognition, MobileNet-V3
Section 4. Classification, EfficientNet-B4
Sections 5/6. Parallel Model Execution, 8 x Inception-V3
Section 7. Object Tracking, YOLO-V4
Section 8. Optical Character Recognition, CRNN
Section 9. Semantic Segmentation, DeepLabV3+
Section 10. Parallel Segmentation, 2 x DeepLabV3+
Section 11. Photo Deblurring, IMDN
Section 12. Image Super-Resolution, ESRGAN
Section 13. Image Super-Resolution, SRGAN
Section 14. Image Denoising, U-Net
Section 15. Depth Estimation, MV3-Depth
Section 16. Image Enhancement, DPED ResNet
Section 17. Image Enhancement, DPED Instance
Section 18. Bokeh Effect Rendering, PyNET+
Section 19. Learned Camera ISP, PUNET
Section 20. FullHD Video Super-Resolution, XLSR
Sections 21/22. 4K Video Super-Resolution, VideoSR
Section 23. Text Completion, LSTM
Section 24. Question Answering, MobileBERT
Section 25. Text Completion, ALBERT
Section 26. Memory Limits, ResNet
Besides that, you can load and test your own TensorFlow Lite deep learning models in the PRO Mode.
A detailed description of the tests can be found here: ai-benchmark.com/tests.html
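As a rough illustration of the PRO Mode workflow, here is a hedged sketch of how one might convert a custom Keras model into the TensorFlow Lite format the app loads, and sanity-check it locally before copying it to the phone. The tiny placeholder network below is an assumption for demonstration only, not one of the benchmark's models:

```python
# Sketch: preparing a custom .tflite model for loading in AI Benchmark's PRO Mode.
# The minimal Keras network here is a hypothetical stand-in, not a benchmark model.
import numpy as np
import tensorflow as tf

# Build a small placeholder image model (assumption: any frozen Keras model works).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to the TensorFlow Lite flatbuffer format.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Sanity-check the converted model on the desktop before transferring it.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"],
                       np.random.rand(1, 224, 224, 3).astype(np.float32))
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
print(probs.shape)  # (1, 10)
```

After writing `tflite_bytes` to a file, the resulting model can be selected for benchmarking from the app's PRO Mode.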
Note: Hardware acceleration is supported on all mobile SoCs with dedicated NPUs and AI accelerators, including Qualcomm Snapdragon, HiSilicon Kirin, Samsung Exynos, MediaTek Helio / Dimensity and UNISOC Tiger chipsets. Starting from AI Benchmark v4, one can also enable GPU-based AI acceleration on older devices in the settings ("Accelerate" -> "Enable GPU Acceleration"; OpenGL ES 3.0+ is required).
What's new in this version:
1. New models and tasks: 4K Video Super-Resolution, Image Denoising, Question Answering, Object Tracking, Depth Estimation, etc.
2. Updated TFLite NNAPI, GPU, Hexagon NN and MediaTek Neuron delegates.
3. Added Qualcomm QNN delegate for direct inference on Snapdragon DSPs, HTPs and GPUs.
4. Added new inference options (Fast Single Answer, Sustained, Low Power).
5. Extended accuracy measurements.
6. Power consumption tests (PRO mode).
7. The total number of tests increased to 78.
Uploaded: May 18, 2022 at 7:46 AM UTC
File size: 4.94 MB