Energy-Reduced Bio-Inspired 1D-CNN for Audio Emotion Recognition

Authors

  • Jiby Mariya Jose, Independent Researcher, India
  • Jeeva Jose, Independent Researcher, India

DOI:

https://doi.org/10.32628/CSEIT25113386

Keywords:

Computational Efficiency, Energy Reduction, Audio Emotion Detection, Lightweight Convolutional Neural Network (CNN), Artificial Intelligence on Edge, Audio Databases, Hierarchical Framework

Abstract

This paper proposes EPyNet, a deep learning architecture designed for energy-reduced audio emotion recognition. In the domain of audio-based emotion recognition, where discerning emotional cues from audio input is crucial, the integration of artificial intelligence techniques has sparked a transformative shift in accuracy and performance. Deep learning, renowned for its ability to decipher intricate patterns, spearheads this evolution. However, the energy efficiency of deep learning models, particularly in resource-constrained environments, remains a pressing concern. Convolutional operations serve as the cornerstone of deep learning systems, but their extensive computational demands lead to energy-inefficient computations, making them poorly suited for deployment in scenarios with limited resources. Addressing these challenges, researchers introduced one-dimensional convolutional neural network (1D CNN) array convolutions as an alternative to traditional two-dimensional CNNs with reduced resource requirements. Although this array-based operation lowered resource requirements, its energy-consumption impact was not studied. To bridge this gap, we introduce EPyNet, a deep learning architecture crafted for energy efficiency, with a particular emphasis on neuron reduction. Focusing on the task of audio emotion recognition, we evaluate EPyNet on five public audio corpora: RAVDESS, TESS, EMO-DB, CREMA-D, and SAVEE. We propose three versions of EPyNet, a lightweight neural network designed for efficient emotion recognition, each optimized for a different trade-off between accuracy and energy efficiency. Experimental results show that the 0.06M EPyNet reduced energy consumption by 76.5% while improving accuracy by 5% on RAVDESS, 25% on TESS, and 9.75% on SAVEE. The 0.2M and 0.9M models reduced energy consumption by 64.9% and 70.3%, respectively.
Additionally, we compared our proposed 0.06M system with the MobileNet models on the CIFAR-10 dataset and achieved significant improvements. The proposed system reduces energy by 86.2% and memory by 95.7% compared to MobileNet, with only 0.8% lower accuracy. Compared to MobileNetV2, it improves accuracy by 99.2% and reduces memory by 93.8%. Compared to MobileNetV3, it achieves a 57.2% energy reduction, an 85.1% memory reduction, and a 24.9% accuracy improvement. We further test the scalability and robustness of the proposed solution on different data dimensions and frameworks.
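To make the abstract's claim concrete, the sketch below illustrates why a 1D array convolution requires fewer resources than a comparable 2D convolution: it counts the parameters and multiply-accumulate (MAC) operations of each layer type. The layer shapes here are hypothetical illustrations, not the actual EPyNet configuration.

```python
def conv1d_cost(in_len, in_ch, out_ch, k):
    """Parameter and MAC count for a 1D conv layer ('valid' padding, stride 1)."""
    params = in_ch * out_ch * k + out_ch        # k-wide kernels plus biases
    out_len = in_len - k + 1
    macs = out_len * in_ch * out_ch * k
    return params, macs

def conv2d_cost(h, w, in_ch, out_ch, k):
    """Parameter and MAC count for a 2D conv layer ('valid' padding, stride 1)."""
    params = in_ch * out_ch * k * k + out_ch    # k x k kernels plus biases
    out_h, out_w = h - k + 1, w - k + 1
    macs = out_h * out_w * in_ch * out_ch * k * k
    return params, macs

# Hypothetical shapes: a 2376-sample 1D feature vector vs. a 64x64 spectrogram,
# both mapped to 16 output channels with kernel size 5.
p1, m1 = conv1d_cost(2376, 1, 16, 5)
p2, m2 = conv2d_cost(64, 64, 1, 16, 5)
print(f"1D conv: {p1} params, {m1} MACs")    # 96 params, 189760 MACs
print(f"2D conv: {p2} params, {m2} MACs")    # 416 params, 1440000 MACs
```

Even at these small sizes, the 2D layer needs roughly four times the parameters and over seven times the MACs, which is the resource gap the 1D array formulation exploits.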


Published

20-06-2025

Section

Research Articles