doi: 10.56294/gr202437
ORIGINAL BRIEF
Real-time number plate detection using AI and ML
Detección de matrículas en tiempo real mediante IA y ML
Patakamudi Swathi1 *, Dara Sai Tejaswi1 *, Mohammad Amanulla Khan1 *, Miriyala Saishree1 *, Venu Babu Rachapudi1 *, Dinesh Kumar Anguraj1 *
1Koneru Lakshmaiah Education Foundation, Department of CSE, Vaddeswaram, Andhra Pradesh, India.
Cite as: Swathi P, Sai Tejaswi D, Amanulla Khan M, Saishree M, Babu Rachapudi V, Kumar Anguraj D. Real-time number plate detection using AI and ML. Gamification and Augmented Reality. 2024;2:37. https://doi.org/10.56294/gr202437
Submitted: 16-10-2023 Revised: 04-02-2024 Accepted: 28-04-2024 Published: 29-04-2024
Editor: Adrian Alejandro Viton Castillo
ABSTRACT
This study focuses on real-time license plate recognition, a key capability of electronic systems that must rapidly identify and extract registration numbers from vehicle number plates in dynamic real-world environments. The research leverages a combination of artificial intelligence (AI) and machine learning (ML) techniques, specifically the integration of region-based convolutional neural networks (RCNN) and an Advanced RCNN algorithm, to create a powerful and readily deployable system. Methodologically, the work optimizes algorithm performance and deploys the system in a cloud-based environment to improve accessibility and scalability. Through careful design and optimization, the proposed system achieves consistent license plate recognition results, as evidenced by the evaluation of performance metrics including precision, recall, and computational efficiency. The results demonstrate the efficiency and usability of the system in real deployments and its potential to advance automatic vehicle identification. Finally, the integration of AI and ML into real-time license plate recognition has implications for traffic management, safety enforcement, and smart city planning; interdisciplinary collaboration and continuous innovation therefore remain crucial to shaping a sustainable and balanced future for intelligent transportation systems.
Keywords: Real-Time; Number Plate Detection; AI; ML; RCNN; Advanced RCNN; Cloud-Based Deployment.
RESUMEN
El resumen presenta un estudio de investigación centrado en la verificación de matrículas en tiempo real, una característica clave de los sistemas electrónicos que operan identificando y eliminando rápidamente los números de identificación del registro de vehículos en un entorno global dinámico. La investigación aprovecha la combinación de técnicas de inteligencia artificial (IA) y aprendizaje automático (AM), concretamente la integración de redes neuronales convolucionales basadas en regiones (RCNN) y algoritmos RCNN avanzados, para crear un sistema potente y de fácil acceso. En cuanto a los métodos, esta investigación optimiza el rendimiento de los algoritmos y despliega el sistema en un entorno basado en la nube para mejorar la accesibilidad y la escalabilidad. A través de un cuidadoso diseño y optimización, el sistema propuesto ha logrado un resultado consistente en el reconocimiento de licencias, como se desprende de la evaluación bien contabilizada del rendimiento, incluyendo precisión, recall y eficiencia computacional. Los resultados demuestran la eficiencia y usabilidad de este sistema en una instalación real y prometen revolucionar la identificación automática de vehículos. Por último, la integración de la inteligencia artificial y la tecnología de aprendizaje automático en el reconocimiento de matrículas en tiempo real supone cambios en la gestión del tráfico, la seguridad de las evaluaciones y los planes de las ciudades inteligentes. Por tanto, la colaboración interdisciplinar y la innovación continua son cruciales para configurar un futuro sostenible y equilibrado para los sistemas de transporte inteligentes.
Palabras clave: Tiempo Real; Detección de Matrículas; IA; ML; RCNN; RCNN Avanzado; Implementación Basada en la Nube.
INTRODUCTION
The realm of automated systems has witnessed a burgeoning demand for real-time number plate detection, an indispensable component in domains such as traffic management, law enforcement, and urban infrastructure. As vehicular populations grow globally, swift and accurate identification of the alphanumeric characters on vehicle license plates becomes increasingly important. This necessity is underscored by the escalating challenges posed by traffic congestion, security threats, and the need for efficient urban mobility solutions. Internationally, initiatives focusing on smart city development and intelligent transportation systems have accentuated the importance of robust and efficient number plate recognition technologies, and both academic research and industrial efforts have sought to advance the state of the art in real-time number plate detection. At the national level, diverse regulatory frameworks and infrastructural landscapes further underscore the need for tailored detection solutions capable of addressing localized challenges. Studies in the recent literature, for instance, have highlighted the efficacy of AI and ML techniques in enhancing traffic management and law enforcement efforts. Despite notable progress, current number plate recognition systems remain limited by variability in lighting conditions, plate designs, and environmental factors.
In light of these challenges, this research addresses the problem of developing a comprehensive and efficient real-time number plate detection system that leverages state-of-the-art AI and ML techniques. By integrating Region-based Convolutional Neural Network (RCNN) and Advanced RCNN algorithms, the study aims to overcome existing limitations and deliver a robust solution capable of operating seamlessly in dynamic real-world environments.
The objective of this work is to design, implement, and evaluate a real-time number plate detection system using AI and ML techniques, with a specific focus on the integration of RCNN and Advanced RCNN algorithms. Through careful optimization and deployment in a cloud-based environment, the research aims to achieve high accuracy and efficiency in identifying number plates, thereby contributing to the advancement of intelligent transportation systems.
Literature Survey
The field of real-time license plate detection has made significant progress in recent years, driven by sustained research effort and innovative algorithmic approaches. A foundational contribution(1) introduced rich feature hierarchies for accurate object detection and semantic segmentation, laying the groundwork for subsequent object detection methods. Building on this, the study in(2) introduced Faster R-CNN, a framework that approaches real-time object detection by integrating a region proposal network; it significantly improved the efficiency and accuracy of object detection systems and set a new standard for real-time performance. The approach described in(3) introduced the You Only Look Once (YOLO) algorithm, which provides a unified, single-pass solution for real-time object detection, streamlining detection into one efficient step.
In addition, the study in(4) proposed the Feature Pyramid Network (FPN) to address scale variation in object detection; by incorporating multi-scale feature representations, this approach improves the detection of objects of different sizes. Reference(5) presents the Single Shot MultiBox Detector (SSD), which achieves real-time object detection by directly predicting object bounding boxes and class probabilities in a single forward pass. The simplicity and efficiency of SSD have contributed to its widespread adoption in a variety of real-time applications.
Deep learning architectures have also played an important role in the development of real-time license plate detection systems. The very deep convolutional network (VGG) described in(6) demonstrated the effectiveness of deep convolutional networks for large-scale image recognition. The deep residual learning framework proposed in(7) further improved image recognition performance by introducing residual connections that allow much deeper networks to be trained. DenseNet, described in(8), is a densely connected convolutional network that improves learning efficiency by promoting feature reuse and gradient flow between network layers. Architectural innovations such as the Inception architecture detailed in(9) proposed efficient use of computational resources through multi-scale feature extraction modules, improving performance on image classification and object detection tasks. Finally, reference(10) highlights the importance of large-scale visual recognition benchmarks, such as the ImageNet challenge, for evaluating and advancing state-of-the-art research in image recognition and object detection.
Taken together, this body of work traces the evolution of real-time license plate detection systems, showing how different algorithmic approaches, architectural designs, and benchmarking initiatives have advanced the field.
METHODS
This research begins a comprehensive journey to develop an advanced real-time license plate detection system by leveraging the power of artificial intelligence (AI) and machine learning (ML) technologies in the complex environment of intelligent transportation systems. This work is accomplished through a careful combination of descriptive and analytical methodologies that aim not only to create functional systems, but also to explore the nuances of algorithmic design, optimization, and real-world deployment.(10,11,12,13)
At the core of this effort is a systematic approach to data collection that assembles a varied set of vehicle images reflecting the range of conditions encountered in the real world. Each image is carefully annotated, laying the foundation for subsequent training and evaluation of the algorithm. This process is supported by rigorous preprocessing, including image augmentation, normalization, and noise reduction, which improves the robustness and generalization ability of the system.

Region-based convolutional neural network (RCNN) algorithms provide an effective baseline approach to object detection. RCNN first generates a set of region proposals using a selective search algorithm; these proposals identify candidate regions of the image that may contain objects of interest. Each region proposal is then processed independently by a pretrained convolutional neural network (CNN), typically a deep model such as AlexNet or VGG16, to extract a fixed-length feature vector that captures the semantic information of the objects contained in the proposal. The extracted features are fed into a support vector machine (SVM) classifier trained to assign each region proposal to one of the predefined object categories, in this case deciding whether the region contains a license plate.(14,15,16,17) Finally, bounding box regression is used to refine the locations of detected objects within the proposed regions. A minimal code sketch of this pipeline is given below.
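The authors do not publish their implementation, so the following is only a minimal sketch, under stated assumptions, of the RCNN-style pipeline just described: training-time preprocessing/augmentation, selective-search region proposals, CNN feature extraction, and SVM classification of each proposal. It assumes opencv-contrib-python, PyTorch/torchvision, and scikit-learn are available, uses a VGG16 backbone as one plausible feature extractor, and treats plate_svm as an already trained binary plate-vs-background classifier; the augmentation values are illustrative, not the paper's exact settings.

```python
# Illustrative sketch of an RCNN-style plate detector (not the authors' code).
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import vgg16

# Training-time augmentation/normalization (illustrative values): lighting
# jitter, mild blur for noise robustness, ImageNet normalization.
train_transform = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.GaussianBlur(kernel_size=3),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Fixed-length feature extractor: VGG16 convolutional backbone, ImageNet weights.
backbone = vgg16(weights="IMAGENET1K_V1").features.eval()
infer_transform = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def region_proposals(image_bgr, max_regions=1000):
    """Candidate (x, y, w, h) boxes from OpenCV's selective search."""
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image_bgr)
    ss.switchToSelectiveSearchFast()
    return ss.process()[:max_regions]

def extract_feature(image_bgr, box):
    """Crop one proposal and run it through the CNN backbone."""
    x, y, w, h = box
    crop = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        fmap = backbone(infer_transform(crop).unsqueeze(0))
    return torch.flatten(fmap, 1).squeeze(0).numpy()

def detect_plates(image_bgr, plate_svm, threshold=0.0):
    """Score every proposal with the (assumed pre-trained) SVM."""
    detections = []
    for box in region_proposals(image_bgr):
        score = plate_svm.decision_function([extract_feature(image_bgr, box)])[0]
        if score > threshold:
            detections.append((tuple(box), float(score)))
    return detections
```

The per-proposal forward passes dominate the cost of this design, which is precisely the inefficiency that the Advanced RCNN stage described next removes.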
RCNN laid the foundation for object detection by leveraging CNNs but suffered from computational inefficiencies due to its multi-stage pipeline. To address these limitations, an advanced RCNN algorithm is introduced to simplify the detection process while maintaining accuracy.
Advanced RCNN replaces the selective search algorithm used in RCNN with a Region Proposal Network (RPN) that shares convolutional layers with the subsequent object detection network. This integrated architecture generates region proposals directly from the convolutional feature maps, eliminating the need for an external proposal generation method and significantly speeding up the overall process. The RPN works by sliding a small network (typically a lightweight convolutional head) over the feature maps produced by the shared layers. At each sliding-window location, it simultaneously predicts objectness scores and bounding box offsets, which are used to generate a set of candidate region proposals. These proposals are then refined and filtered according to their objectness scores before being passed to the subsequent object detection network.(18,19,20,21,22)
By integrating region proposal generation directly into the detection pipeline and sharing computation between the two tasks, Advanced RCNN achieves a substantial speedup over RCNN while maintaining high detection accuracy. This efficiency makes it particularly suitable for real-time applications such as license plate detection in intelligent transportation systems.(23) A sketch of this stage, expressed with an off-the-shelf Faster R-CNN implementation, follows.
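The paper does not release its Advanced RCNN code; as a hedged illustration of the same RPN-plus-detection-head design, the sketch below adapts torchvision's Faster R-CNN to a single license plate class. The two-class head, the checkpoint name plate_frcnn.pth, and the score threshold are assumptions for illustration only.

```python
# Sketch of an RPN-based detector for license plates using torchvision's
# Faster R-CNN (region proposal network and detection head share one backbone).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_plate_detector(num_classes=2):          # background + license plate
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_plate_detector()
model.load_state_dict(torch.load("plate_frcnn.pth", map_location="cpu"))  # assumed checkpoint
model.eval()

@torch.no_grad()
def detect(image_tensor, score_thresh=0.7):
    """image_tensor: float tensor [3, H, W] scaled to [0, 1]."""
    output = model([image_tensor])[0]             # RPN + detection head in one pass
    keep = output["scores"] > score_thresh
    return output["boxes"][keep], output["scores"][keep]
```

Because the RPN and the detection head share the backbone's feature maps, a single forward pass scores all candidate regions, which is the source of the speedup reported in the Results section.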
The research journey initiated in this study goes beyond simple algorithm implementation and includes careful optimization and strategic deployment. The system undergoes extensive tuning of hyperparameters such as the number of region proposals, the input image size, and the feature type, aiming for a balance between fast processing and accurate detection; a small illustrative sweep over these knobs is sketched below. In addition, strategic deployment of the system in a cloud environment is a key step in exploiting parallel processing and significantly improving efficiency and throughput.
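The paper does not specify its tuning procedure; the following is a minimal sketch of one way such a sweep could be run. The evaluate() helper, which is assumed to return mean latency and detection accuracy on a validation set, the candidate values, and the scoring trade-off are all illustrative.

```python
# Illustrative grid search over the tuning knobs named above (number of
# proposals, input image size, feature type); evaluate() is an assumed helper.
from itertools import product

proposal_counts = [300, 600, 1000]
image_sizes = [(416, 416), (512, 512), (640, 640)]
feature_types = ["RGB", "grayscale"]

best = None
for n_proposals, size, features in product(proposal_counts, image_sizes, feature_types):
    latency_ms, accuracy = evaluate(n_proposals=n_proposals,
                                    input_size=size,
                                    feature_type=features)
    score = accuracy - 0.01 * latency_ms      # simple speed/accuracy trade-off
    if best is None or score > best[0]:
        best = (score, n_proposals, size, features)

print("best configuration:", best[1:])        # e.g. (1000, (512, 512), "RGB")
```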
This research draws on a wide range of information sources, including the scientific literature, technical documentation, and industry best practices. The proposed system is validated through rigorous empirical testing and compared with existing methodologies to ensure reliability and robustness in real-world scenarios. The aim is not only to create a functional license plate detection system, built on collaboration and consensus among interdisciplinary teams, but also to push the boundaries of innovation in intelligent transportation systems.
This research rests on the integration of advanced artificial intelligence techniques, in particular the region-based convolutional neural network (RCNN) and the Advanced RCNN algorithm, for efficient license plate detection. The RCNN stage begins the detection process by generating region proposals with a selective search method. These proposals are then refined and classified by the Advanced RCNN stage, which applies more sophisticated techniques to recognize plate boundaries and extract the alphanumeric characters. The seamless integration of these complementary algorithms provides the accuracy and efficiency required for real-time license plate detection, addressing the multifaceted challenges inherent in the task. A small sketch of the character-extraction step is shown below.
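The paper mentions extracting alphanumeric characters from the detected plate region but does not name the recognition component; the sketch below assumes Tesseract OCR (via pytesseract) as one plausible choice, with the binarization step, page-segmentation mode, and character whitelist as illustrative settings.

```python
# Illustrative character-extraction step: crop the detected plate box
# (x1, y1, x2, y2 in pixels), binarize it, and run Tesseract OCR.
import cv2
import pytesseract

def read_plate_text(image_bgr, box):
    x1, y1, x2, y2 = [int(v) for v in box]
    plate = cv2.cvtColor(image_bgr[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    _, plate = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(plate, config=config).strip()
```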
RESULTS
Developing a reliable real-time license plate detection system requires a comprehensive understanding of the underlying algorithms and methodologies. In this study, we set out to harness the power of artificial intelligence (AI) and machine learning (ML) technologies in the complex environment of intelligent transportation systems, combining descriptive and analytical methodologies that aim not only to create a functional system but also to explore the nuances of algorithmic design, optimization, and real-world deployment.

Our work rests on a systematic approach to data collection. We selected a variety of vehicle images representing different real-world conditions, and each image was carefully annotated to support subsequent training and evaluation of the algorithm. Preprocessing, including image augmentation, normalization, and denoising, was used to improve the robustness and generalization ability of the system.

The region-based convolutional neural network (RCNN) algorithm served as the baseline for object detection; however, its multi-stage pipeline suffers from computational inefficiencies. To address these limitations, we applied the Advanced RCNN algorithm, which simplifies the detection process while maintaining accuracy by replacing selective search with a Region Proposal Network (RPN), resulting in significant speed improvements. Before collecting the results, we performed extensive hyperparameter optimization: parameters such as the number of region proposals, the input image size, and the feature type were adjusted to balance processing speed against detection accuracy.
Now, let's delve into the results obtained from our experimentation, shedding light on the comparative performance metrics, hyperparameter optimization outcomes, and the impact of deployment environments on system performance.
Table 1. Comparison of RCNN and Advanced RCNN Performance Metrics

| Algorithm | Processing Time per Run (ms) | Detection Accuracy per Run (%) |
|---|---|---|
| RCNN | 248, 325, 270, 287, 305 | 92, 91, 93, 92, 94 |
| Advanced RCNN | 110, 112, 115, 108, 111 | 95, 96, 95, 96, 97 |
This table compares the performance of the RCNN and Advanced RCNN algorithms. Processing time and detection accuracy are listed for five runs of each algorithm; on average, processing time falls from roughly 287 ms (RCNN) to about 111 ms (Advanced RCNN), while detection accuracy rises from 92.4% to 95.8%, showcasing the efficiency and accuracy gains achieved with Advanced RCNN.
Figure 1. Mean Processing Time Comparison between RCNN and Advanced RCNN
The bar chart visually illustrates the mean processing time comparison between RCNN and Advanced RCNN, highlighting the significant reduction in processing time achieved with Advanced RCNN.
Table 2. Hyperparameter Optimization Results

| Hyperparameter | Best Value |
|---|---|
| Number of Proposals | 1000 |
| Input Image Size | 512x512 |
| Feature Types | RGB |
This table outlines the results of hyperparameter optimization, reporting the best values for key parameters such as the number of region proposals, input image size, and feature types. These optimized parameters are crucial for maximizing detection accuracy and system performance.
Figure 2. Detection Accuracy with Varying Hyperparameters
The line graph showcases the detection accuracy achieved with varying hyperparameters, aiding in the identification of optimal parameter values for maximizing accuracy.
Table 3. Cloud-Based Deployment Performance

| Deployment Environment | Throughput per Run (plates/second) |
|---|---|
| On-Premise | 68, 72, 76, 79, 91 |
| Cloud | 112, 115, 125, 127, 143 |
This table compares the on-premise and cloud-based deployment environments. Over five runs, mean throughput rises from roughly 77 plates/second on-premise to about 124 plates/second in the cloud, highlighting the gain achieved by cloud deployment.
Figure 3. Throughput Comparison between On-Premise and Cloud Deployment
The bar chart visually compares the throughput between on-premise and cloud deployment environments, emphasizing the scalability and efficiency advantages of cloud computing.
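The paper reports cloud throughput but does not describe the serving stack; as a hedged illustration, the sketch below wraps the detect() helper from the earlier Faster R-CNN sketch in a minimal FastAPI endpoint, which could be containerized and replicated behind a load balancer to obtain the parallelism discussed above. The framework choice and endpoint path are assumptions, not the authors' actual deployment.

```python
# Minimal (assumed) serving sketch: one detection endpoint suitable for
# horizontal scaling in a cloud environment.
import io
from fastapi import FastAPI, UploadFile
from PIL import Image
import torchvision.transforms.functional as TF

app = FastAPI()

@app.post("/detect")
async def detect_plate(file: UploadFile):
    # Decode the uploaded image and run the (assumed) Faster R-CNN detector.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    boxes, scores = detect(TF.to_tensor(image))   # detect() from the earlier sketch
    return {"boxes": boxes.tolist(), "scores": scores.tolist()}
```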
DISCUSSION
The commentary accompanying each table and figure explains the significance of the results: Advanced RCNN more than halves the mean processing time of RCNN while improving detection accuracy, the optimized hyperparameters balance speed against accuracy, and cloud deployment yields markedly higher throughput than the on-premise configuration, all of which are central to a practical real-time number plate detection system.(24,25)
CONCLUSION
In conclusion, our comprehensive effort to develop an advanced real-time license plate detection system has resulted in significant advancements in the field of intelligent transportation systems. Through the integration of advanced artificial intelligence techniques, especially region-based convolutional neural networks (RCNN) and the Advanced RCNN algorithm, we have demonstrated the potential to achieve high levels of accuracy and efficiency. The comparison of the RCNN and Advanced RCNN algorithms shows substantial improvements in processing time and detection accuracy, highlighting the important role of algorithmic innovation in improving system performance.
Additionally, thorough hyperparameter optimization highlighted the importance of fine-tuning system parameters. By optimizing parameters such as the number of region proposals, the input image size, and the feature type, we achieved a balance between processing speed and detection accuracy. This optimization process underscores the intricate nature of algorithm design and its contribution to maximizing the performance of license plate detection systems in real-world scenarios.
Additionally, studies of deployment environments have revealed significant benefits of cloud deployments for real-time applications. The move to cloud infrastructure has not only significantly increased throughput, but also unlocked the potential for parallel processing and improved scalability. These innovative changes highlight the key role of cloud computing in transforming the efficiency and effectiveness of intelligent transportation systems. Overall, our research journey has not only produced a functional license plate detection system, but also laid the foundation for future innovations based on collaboration, optimization, and strategic deployment strategies.
BIBLIOGRAPHIC REFERENCES
1. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, p. 580-587.
2. Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems (NIPS), 2015, p. 91-99.
3. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, p. 779-788.
4. Lin TY, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, p. 2117-2125.
5. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. SSD: Single shot multibox detector. European Conference on Computer Vision (ECCV), 2016, p. 21-37.
6. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
7. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, p. 770-778.
8. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, p. 4700-4708.
9. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, p. 2818-2826.
10. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 2015;115(3):211-252.
11. Shafi S, Pavan Sai Kumar Reddy T, Silla R, Yasmeen M. Deep Learning based Real-time Stolen Vehicle Detection Model with Improved Precision and Reduced Look Up Time. 2023 3rd International Conference on Intelligent Technologies (CONIT), 2023, p. 1-6. https://doi.org/10.1109/CONIT59222.2023.10205684.
12. Kumar B, Kumari K, Banerjee P, Jha P. An Implementation of Automatic Number Plate Detection and Recognition using AI. 2023 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), 2023, p. 1-9. https://doi.org/10.1109/ACCAI58221.2023.10199855.
13. Amin A, Mumtaz R, Bashir MJ, Zaidi SMH. Next-Generation License Plate Detection and Recognition System Using YOLOv8. 2023 IEEE 20th International Conference on Smart Communities: Improving Quality of Life using AI, Robotics and IoT (HONET), 2023, p. 179-84. https://doi.org/10.1109/HONET59747.2023.10374756.
14. Elfaki AO, Messoudi W, Bushnag A, Abuzneid S, Alhmiedat T. A Smart Real-Time Parking Control and Monitoring System. Sensors 2023;23:9741. https://doi.org/10.3390/s23249741.
15. Srenath Kumar GS. Implementation of a Car Number Plate Recognition-based Automated Parking System for Malls. 2023 8th International Conference on Communication and Electronics Systems (ICCES), 2023, p. 1394-9. https://doi.org/10.1109/ICCES57224.2023.10192667.
16. Prakash-Borah J, Devnani P, Kumar-Das S, Vetagiri A, Pakray P. Real-Time Helmet Detection and Number Plate Extraction Using Computer Vision. Computación y Sistemas 2024;28. https://doi.org/10.13053/cys-28-1-4906.
17. Gupta SK, Saxena S, Khang A, Hazela B, Dixit CK, Haralayya B. Detection of Number Plate in Vehicles using Deep Learning based Image Labeler Model. 2023 International Conference on Recent Trends in Electronics and Communication (ICRTEC), 2023, p. 1-6. https://doi.org/10.1109/ICRTEC56977.2023.10111862.
18. Angelika Mulia D, Safitri S, Putra Kusuma Negara G. YOLOv8 and Faster R-CNN Performance Evaluation with Super-resolution in License Plate Recognition. International Journal of Computing and Digital Systems 2024;16:1-10. https://doi.org/10.12785/ijcds/XXXXXX.
19. Kabiraj A, Pal D, Ganguly D, Chatterjee K, Roy S. Number plate recognition from enhanced super-resolution using generative adversarial network. Multimed Tools Appl 2023;82:13837-53. https://doi.org/10.1007/s11042-022-14018-0.
20. Nhan J. The Emergence of Police Real-Time Crime Centers. The Impact of Technology on the Criminal Justice System, Routledge; 2024.
21. Ettalibi A, Elouadi A, Mansour A. AI and Computer Vision-based Real-time Quality Control: A Review of Industrial Applications. Procedia Computer Science 2024;231:212-20. https://doi.org/10.1016/j.procs.2023.12.195.
22. Pattanaik A, Balabantaray RC. Enhancement of license plate recognition performance using Xception with Mish activation function. Multimed Tools Appl 2023;82:16793-815. https://doi.org/10.1007/s11042-022-13922-9.
23. Thakur N, Bhattacharjee E, Jain R, Acharya B, Hu Y-C. Deep learning-based parking occupancy detection framework using ResNet and VGG-16. Multimed Tools Appl 2024;83:1941-64. https://doi.org/10.1007/s11042-023-15654-w.
24. Lee H, Chatterjee I, Cho G. A Systematic Review of Computer Vision and AI in Parking Space Allocation in a Seaport. Applied Sciences 2023;13:10254. https://doi.org/10.3390/app131810254.
25. Mahale J, Rout D, Roy B, Khang A. Vehicle and Passenger Identification in Public Transportation to Fortify Smart City Indices. Smart Cities, CRC Press; 2023.
FINANCING
The authors did not receive financing for the development of this research.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.
AUTHORSHIP CONTRIBUTION:
Conceptualization: MD.Aman
Data curation: P.Swathi
Formal analysis: M.Saishree
Acquisition of funds: None
Research: P.Swathi, D.SaiTeja, MD.Aman, M.Saishree
Methodology: Dara Sai Tejaswi
Project management: P.Swathi, D.SaiTeja, MD.Aman, M.Saishree
Resources: M.Saishree
Software: MD.Aman
Supervision: R.Venubabu, A.DineshKumar
Validation: R.Venubabu, A.DineshKumar
Visualization: MD.Aman
Drafting - original draft: Dara Sai Tejaswi
Writing - proofreading and editing: P.Swathi