https://doi.org/10.1051/epjconf/202024501023
Highly Performant, Deep Neural Networks with sub-microsecond latency on FPGAs for Trigger Applications
1 Institut für Physik, Johannes Gutenberg-Universität Mainz, Mainz, Germany
2 Cluster of Excellence PRISMA+, Johannes Gutenberg-Universität Mainz, Mainz, Germany
* e-mail: schmittc@uni-mainz.de
Published online: 16 November 2020
Artificial neural networks are becoming a standard tool for data analysis, but their potential has yet to be widely exploited in hardware-level trigger applications. High-end FPGAs, which are commonly used in low-level hardware triggers, nowadays offer in principle enough performance to accommodate networks of considerable size. This makes it very promising and rewarding to optimize a neural network implementation for FPGAs in the trigger context.
Here, an optimized neural network implementation framework is presented. It typically reaches 90 to 100% computational efficiency, requires few additional FPGA resources for data flow and control, and achieves latencies on the order of tens to a few hundred nanoseconds for entire (deep) networks.
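To illustrate what "computational efficiency" means in this context, the following minimal sketch shows a fixed-point dense layer (matrix-vector product plus ReLU) written the way it would be unrolled across DSP slices on an FPGA, where every multiplier performs useful work each clock cycle. The layer sizes, bit widths, and Q4.12 scaling are assumptions chosen for illustration and are not taken from the paper; this is not the authors' framework, only a plain C++ model of the underlying arithmetic.

```cpp
// Illustrative sketch only: a fixed-point dense layer as it would be
// unrolled across DSP slices on an FPGA.  Sizes and bit widths are assumed.
#include <array>
#include <cstdint>
#include <cstdio>

constexpr int N_IN  = 16;   // assumed number of layer inputs
constexpr int N_OUT = 8;    // assumed number of neurons
constexpr int FRAC  = 12;   // assumed fractional bits (Q4.12 fixed point)

using fixed_t = int16_t;    // one multiplicand per DSP slice
using acc_t   = int32_t;    // wide accumulator, as provided by DSP slices

// One neuron = one multiply-accumulate chain.  In hardware, each iteration
// of the inner loop maps to its own DSP slice, so all N_IN * N_OUT
// multipliers are busy every clock cycle -- this is the sense in which an
// implementation reaches close to 100% computational efficiency.
std::array<fixed_t, N_OUT> dense_relu(
    const std::array<fixed_t, N_IN>& x,
    const std::array<std::array<fixed_t, N_IN>, N_OUT>& w,
    const std::array<acc_t, N_OUT>& bias)
{
    std::array<fixed_t, N_OUT> y{};
    for (int o = 0; o < N_OUT; ++o) {          // fully parallel in hardware
        acc_t acc = bias[o];
        for (int i = 0; i < N_IN; ++i) {       // fully unrolled MAC chain
            acc += static_cast<acc_t>(w[o][i]) * static_cast<acc_t>(x[i]);
        }
        acc >>= FRAC;                          // rescale back to Q4.12
        if (acc < 0) acc = 0;                  // ReLU activation
        if (acc > INT16_MAX) acc = INT16_MAX;  // saturate on overflow
        y[o] = static_cast<fixed_t>(acc);
    }
    return y;
}

int main() {
    std::array<fixed_t, N_IN> x{};
    std::array<std::array<fixed_t, N_IN>, N_OUT> w{};
    std::array<acc_t, N_OUT> b{};
    x.fill(1 << FRAC);                              // all inputs  = 1.0
    for (auto& row : w) row.fill(1 << (FRAC - 4));  // all weights = 1/16
    auto y = dense_relu(x, w, b);
    std::printf("neuron 0 output: %f\n", y[0] / double(1 << FRAC));
    return 0;
}
```

On an FPGA, such a layer is pipelined so that a new event can enter every clock cycle, and the latency of the whole network is set by the depth of the pipeline, which is how latencies of tens to a few hundred nanoseconds become attainable.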