Human-Centered Prompt Engineering: Techniques for Ethical and Inclusive LLM Outputs

Authors

  • Kapil Kumar Goyal; Alumnus, MBA (Strategy and Negotiation), University of California, Irvine (UCI), Irvine, California, USA; Alumnus, B.Tech in Computer Science and Engineering, Amity University, Noida, Uttar Pradesh, India

DOI:

https://doi.org/10.32628/CSEIT25113357

Keywords:

Prompt Engineering, Responsible AI, Bias Mitigation, Inclusive AI, Large Language Models, Human-Centered Design

Abstract

Large Language Models (LLMs) are increasingly integrated into public-facing applications, yet the ethical and social quality of their outputs remains a growing concern. They often generate biased recommendations, reproduce exclusionary language patterns, and amplify societal inequities. Rather than attributing these failures solely to the societal biases reflected in model outputs, this paper frames the issue as a fundamentally human-centered design challenge: a prompt engineering problem. It examines how prompts shape LLM behavior and presents techniques for eliciting more ethical and inclusive outputs.
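
To make this framing concrete, the sketch below illustrates one common prompt-engineering pattern for bias mitigation: prepending an inclusive-language system instruction to every user prompt before it reaches the model. It is not taken from the paper; the preamble wording and function names are illustrative assumptions.

    # A minimal sketch of a bias-mitigation "system preamble" pattern.
    # The instruction text and names below are illustrative assumptions,
    # not the paper's own technique.

    INCLUSIVE_PREAMBLE = (
        "Use gender-neutral, person-first language. "
        "Do not infer demographic attributes the user has not stated. "
        "If a request presumes a stereotype, answer without reproducing it."
    )

    def build_messages(user_prompt: str) -> list[dict]:
        """Wrap a raw user prompt in a chat-style message list, with the
        inclusive-language preamble as the system instruction."""
        return [
            {"role": "system", "content": INCLUSIVE_PREAMBLE},
            {"role": "user", "content": user_prompt},
        ]

    if __name__ == "__main__":
        # Print the wrapped messages instead of calling a specific LLM API,
        # so the sketch stays self-contained.
        for message in build_messages("Describe a typical software engineer."):
            print(f"{message['role']}: {message['content']}")

The design choice here is that the mitigation lives at the prompt layer rather than in model weights, which matches the paper's thesis that prompt design, not only training data, shapes how inclusive an LLM's behavior is.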




Published

03-06-2025

Issue

Section

Research Articles