This document describes how to deploy PaddlePaddle to mobile and embedded devices, along with library and model optimization methods and some benchmarks.
- Build PaddlePaddle for Android
- Build PaddlePaddle for iOS
- Build PaddlePaddle for Raspberry Pi 3
- Build PaddlePaddle for NVIDIA Driver PX2
Optimization for the library:
Optimization for models:
- Merge batch normalization layers
- Compress the model based on rounding
- Merge model's config and parameters
- How to deploy an int8 model for mobile inference with PaddlePaddle
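As an illustration of the batch-normalization merge listed above: at inference time, a batch-norm layer following a convolution or fully-connected layer can be folded into that layer's weights and bias, removing the BN computation entirely. A minimal NumPy sketch of the idea (the function name and array layout here are assumptions for illustration, not PaddlePaddle's API):

```python
import numpy as np

def merge_batch_norm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel BN parameters into the preceding layer.

    W: weights with output channels on axis 0, e.g. (out_channels, in_channels)
       for a fully-connected layer. b: bias of shape (out_channels,).
    gamma, beta, mean, var: per-channel BN scale, shift, running mean/variance.
    Returns (W', b') such that W' @ x + b' == BN(W @ x + b).
    """
    scale = gamma / np.sqrt(var + eps)                      # per-channel scale
    W_merged = W * scale.reshape(-1, *([1] * (W.ndim - 1)))  # scale each output channel
    b_merged = (b - mean) * scale + beta                     # fold mean/shift into bias
    return W_merged, b_merged
```

The same fold applies to convolution weights of shape `(out_channels, in_channels, kh, kw)`, since BN acts per output channel there as well.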
- Benchmark of MobileNet
- Benchmark of ENet
- Benchmark of DepthwiseConvolution
This tutorial is contributed by PaddlePaddle and licensed under the Apache-2.0 license.