Deploying Transformers at Scale


Key takeaways:

  • ONNX Runtime is the best inference package for Transformer networks;
  • NVIDIA Triton, together with ONNX Runtime, is the best solution for GPU inference;
  • Optimization matters. It’s quite easy to unlock a >10X performance gain in 2022.

About this session 

Transformer networks have taken the NLP world by storm, powering everything from sentiment analysis to chatbots. However, the sheer size of these networks presents new deployment challenges, such as achieving acceptable latency and unit economics.

Private AI’s de-identification services rely heavily on Transformer networks and involve processing large amounts of data. In this talk, I will go over the challenges we faced and how we improved the latency and throughput of our Transformer networks, allowing our system to process terabytes of data easily and cost-effectively.

Watch the full session:

This talk was originally delivered at the 2021 Toronto Machine Learning Summit.

Speaker bio: Pieter Luitjens is the Co-founder & CTO of Private AI. He worked on software for Mercedes-Benz, developing the first deep learning algorithms for traffic sign recognition deployed in cars made by one of the world’s most prestigious car manufacturers. He has over 10 years of engineering experience, with code deployed in multi-billion-dollar industrial projects. Pieter specializes in ML edge deployment and model optimization for resource-constrained environments.

Contact us to request Pieter as a guest speaker.
