Efficient Training Data Caching for Deep Learning in Edge Computing Networks
DOI: https://doi.org/10.32628/CSEIT20631113
Keywords
Edge Computing, Deep Learning, Training Data Caching, Distributed Systems, Latency Reduction, Data Redundancy Minimization, Resource Optimization, Adaptive Caching, Real-Time Learning, Decentralized Networks.
Abstract
Efficient training data caching is a critical aspect of enhancing deep learning performance within edge computing networks, where computational resources and data bandwidth are often constrained. This paper investigates innovative methodologies for optimizing data caching mechanisms to address challenges associated with latency, data redundancy, and resource utilization in distributed edge systems. The exponential growth in data generation, coupled with the increasing demand for real-time learning and deployment, necessitates advanced techniques to manage and cache training datasets effectively. Traditional caching methods, designed for centralized cloud environments, are inherently unsuitable for the decentralized and resource-constrained nature of edge computing. This study presents a detailed exploration of adaptive caching strategies, data prioritization techniques, and compression algorithms tailored for edge systems, emphasizing their integration with deep learning workflows to ensure minimal delay and optimal performance.

The research introduces a comprehensive framework for managing training data across distributed edge nodes, leveraging predictive caching models that incorporate reinforcement learning and statistical optimization to anticipate data needs dynamically. These models adapt to varying workload patterns, data access frequencies, and network conditions, thus enhancing cache hit rates and reducing computational overhead. Furthermore, the paper examines techniques for minimizing data redundancy, such as deduplication and data partitioning, which are crucial for optimizing storage and bandwidth in edge networks. The integration of these approaches with edge-based deep learning systems enables efficient data sharing and collaborative model training, fostering improved scalability and robustness in distributed environments.

The proposed solutions are evaluated through rigorous experimental setups, including real-world edge computing scenarios, to analyze their effectiveness in reducing latency, improving model training times, and optimizing resource utilization. The results demonstrate that adaptive caching mechanisms and data-aware scheduling significantly enhance the performance of deep learning applications in edge networks. Additionally, the study addresses the trade-offs between computational efficiency and data consistency, highlighting strategies to balance these competing objectives in edge systems. This research contributes to the growing body of knowledge on edge computing by providing actionable insights and practical guidelines for deploying efficient data caching systems tailored to deep learning tasks. The findings underscore the potential of intelligent caching to bridge the gap between the increasing computational demands of modern deep learning models and the limited resources available in edge networks. Moreover, the paper discusses the implications of these advancements for emerging applications, such as autonomous vehicles, smart cities, and industrial IoT, where real-time decision-making and low-latency processing are paramount. By presenting a unified approach to managing training data in edge computing environments, this work lays the foundation for future research into optimizing deep learning workflows in decentralized systems.
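The abstract names adaptive caching, data prioritization, and deduplication as the core ingredients of the proposed framework. The sketch below is a minimal illustration of how those ingredients might fit together on a single edge node: it uses a simple frequency-and-recency eviction score in place of the paper's reinforcement-learning predictor, and SHA-256 content hashing for deduplication. The class name `EdgeTrainingCache` and all parameters are hypothetical and are not drawn from the paper.

```python
import hashlib
import time


class EdgeTrainingCache:
    """Illustrative adaptive cache for training-data shards on one edge node.

    Eviction combines observed access frequency with recency, and a
    content-hash index avoids storing byte-identical shards twice.
    This is a sketch under assumed semantics, not the paper's framework.
    """

    def __init__(self, capacity_bytes: int):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self.entries = {}      # key -> (payload, size, hits, last_access)
        self.hash_index = {}   # sha256 digest -> key (deduplication)

    def _score(self, hits: int, last_access: float) -> float:
        # Higher score = more valuable to keep; mixes frequency and recency.
        age = time.time() - last_access
        return hits / (1.0 + age)

    def put(self, key: str, payload: bytes) -> str:
        digest = hashlib.sha256(payload).hexdigest()
        if digest in self.hash_index:
            # Duplicate content: reuse the shard already in the cache.
            return self.hash_index[digest]
        size = len(payload)
        while self.used_bytes + size > self.capacity_bytes and self.entries:
            # Evict the lowest-scoring shard first.
            victim = min(
                self.entries,
                key=lambda k: self._score(self.entries[k][2], self.entries[k][3]),
            )
            _, vsize, _, _ = self.entries.pop(victim)
            self.used_bytes -= vsize
            self.hash_index = {d: k for d, k in self.hash_index.items() if k != victim}
        self.entries[key] = (payload, size, 1, time.time())
        self.hash_index[digest] = key
        self.used_bytes += size
        return key

    def get(self, key: str):
        entry = self.entries.get(key)
        if entry is None:
            return None        # cache miss: fetch from a peer node or the cloud
        payload, size, hits, _ = entry
        self.entries[key] = (payload, size, hits + 1, time.time())
        return payload


# Example: cache two shards, the second being a byte-level duplicate.
cache = EdgeTrainingCache(capacity_bytes=1 << 20)
cache.put("shard-0", b"sensor batch A")
alias = cache.put("shard-1", b"sensor batch A")   # deduplicated to "shard-0"
assert alias == "shard-0" and cache.get("shard-0") is not None
```

A predictive variant along the lines described in the abstract would replace `_score` with a learned estimate of future access probability, conditioned on workload patterns and network conditions observed at the node.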
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.