https://doi.org/10.1051/epjconf/202532801001
Android-based Action Recognition with 3D CNN and UCF 101 dataset
1,2,3 Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, India
4,5 Assistant Professor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, India
* Corresponding author: abhijithdasan05@gmail.com
Published online: 18 June 2025
This paper presents an Android-based system that brings real-time video classification to mobile devices using a 3D CNN architecture trained on the UCF-101 dataset. The approach integrates MoViNet models with TensorFlow Lite to run video analysis directly on the device, giving users immediate feedback while keeping personal data on the handset. The system has broad practical value: it can assist paralysis patients, flag unsafe training technique, and detect anomalous movements in surveillance footage. The implemented model reaches 77.2% accuracy on UCF-101 at 45 ms latency and 6.0 GFLOPs, surpassing X3D-XL. Its lightweight design and efficient resource management allow it to run on mid-range mobile devices, advancing video analysis for edge computing. Future work will focus on improving power efficiency through hardware-aware techniques and on extending support for additional on-device processing technologies.
© The Authors, published by EDP Sciences, 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
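As context for the on-device inference pipeline summarized in the abstract, the Kotlin sketch below shows one way a TensorFlow Lite video-classification model could be invoked on Android. It is a minimal illustration under stated assumptions, not the authors' implementation: the asset name movinet_ucf101.tflite, the ClipClassifier class, the clip shape, and the single-input/single-output signature (i.e. a non-streaming MoViNet variant) are all hypothetical.

// Minimal sketch: clip-level inference with a TensorFlow Lite model on Android.
// Assumptions: model exported as "movinet_ucf101.tflite" in the app assets,
// input shape [1, T, H, W, 3] with RGB values scaled to [0, 1], output of 101 scores.
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil

class ClipClassifier(context: Context) {
    // Load the TFLite model from the app's assets as a memory-mapped buffer.
    private val interpreter =
        Interpreter(FileUtil.loadMappedFile(context, "movinet_ucf101.tflite"))

    // Classify one clip given as [frames][height][width][RGB channels].
    fun classify(clip: Array<Array<Array<FloatArray>>>): FloatArray {
        val input = arrayOf(clip)                          // add batch dimension -> [1, T, H, W, 3]
        val output = Array(1) { FloatArray(NUM_CLASSES) }  // one score per UCF-101 class
        interpreter.run(input, output)
        return output[0]
    }

    companion object {
        const val NUM_CLASSES = 101  // UCF-101 defines 101 action classes
    }
}

In a real application, inference would typically run on a background thread and the output buffer would be reused across calls; a streaming MoViNet variant would instead carry internal state tensors between frames.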