TF-TRT (TensorRT integration in TensorFlow) supports three inference precision modes: FP32, FP16, and INT8.
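A minimal stdlib-only sketch of what these precision modes trade away: round-tripping a value through IEEE 754 half precision (FP16), and symmetric INT8 quantization with a hypothetical scale factor. This illustrates the numeric effect only; it is not the TF-TRT API itself.

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through IEEE 754 half precision ('e' format).
    return struct.unpack('<e', struct.pack('<e', x))[0]

def quantize_int8(x: float, scale: float) -> int:
    # Symmetric INT8 quantization: round(x / scale), clamped to [-127, 127].
    # 'scale' would normally come from calibration; here it is an assumed value.
    return max(-127, min(127, round(x / scale)))

def dequantize_int8(q: int, scale: float) -> float:
    # Recover an approximation of the original value.
    return q * scale

value = 3.14159265
print(to_fp16(value))                      # fp16 representation loses low bits
q = quantize_int8(value, 0.05)
print(q, dequantize_int8(q, 0.05))         # int8 round-trip is coarser still
```

In TF-TRT the mode is chosen once at conversion time; INT8 additionally requires a calibration step to pick per-tensor scales like the one assumed above.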

https://developer.nvidia.com/blog/tensorrt-integration-speeds-tensorflow-inference/

Original article: https://www.cnblogs.com/morganh/p/13218155.html