Walking and Survival AI Using Reinforcement Learning - Simulation

Authors

  • Bharate Nandan Lahudeo, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India
  • Makarand Vayadande, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India
  • Rohit Malviya, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India
  • Atharva Haldule, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT2390629

Keywords:

ANN, Reinforcement learning, DQN, VR

Abstract

This research paper presents an approach to training an AI agent for walking and survival tasks using reinforcement learning (RL). The primary research question is how to develop an AI system capable of autonomously navigating diverse terrains and environments while ensuring survival through adaptive decision-making. To investigate this question, we employ RL algorithms, specifically deep Q-networks (DQN) and proximal policy optimization (PPO), to train an AI agent in simulated environments that mimic real-world challenges. Our methodology involves designing a virtual environment in which the agent learns to walk and make survival-related decisions through trial and error. The agent receives rewards or penalties based on its actions, encouraging strategies that optimize both locomotion and survival skills. We evaluate the approach through extensive experimentation, testing the agent's adaptability to various terrains, obstacles, and survival scenarios.
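The paper's implementation is not reproduced on this page. As a rough illustration of the kind of training loop the abstract describes, the sketch below pairs a small DQN with the off-the-shelf BipedalWalker-v3 environment from Gymnasium as a stand-in for the authors' custom simulation; the environment choice, action discretisation, network sizes, and hyperparameters are all assumptions for illustration, not the authors' setup.

```python
# Minimal DQN training sketch for a walking/survival-style agent.
# Assumptions: Gymnasium's BipedalWalker-v3 stands in for the paper's custom
# simulation, and all hyperparameters below are illustrative placeholders.
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

# BipedalWalker uses continuous torques; DQN needs discrete actions, so each
# of the 4 joint torques is coarsely discretised to {-1, 0, +1} (81 actions).
TORQUES = [-1.0, 0.0, 1.0]
ACTIONS = [(a, b, c, d) for a in TORQUES for b in TORQUES
           for c in TORQUES for d in TORQUES]


class QNet(nn.Module):
    """Small MLP mapping an observation to one Q-value per discrete action."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


env = gym.make("BipedalWalker-v3")
obs_dim = env.observation_space.shape[0]
q_net = QNet(obs_dim, len(ACTIONS))
target_net = QNet(obs_dim, len(ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)
gamma, epsilon, batch_size = 0.99, 0.1, 64

for episode in range(500):
    obs, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy exploration: random action with probability epsilon.
        if random.random() < epsilon:
            a_idx = random.randrange(len(ACTIONS))
        else:
            with torch.no_grad():
                q_values = q_net(torch.as_tensor(obs, dtype=torch.float32))
                a_idx = int(q_values.argmax())
        next_obs, reward, terminated, truncated, _ = env.step(np.array(ACTIONS[a_idx]))
        done = terminated or truncated
        replay.append((obs, a_idx, reward, next_obs, done))
        obs = next_obs

        if len(replay) >= batch_size:
            # Sample a minibatch and take one temporal-difference update step.
            batch = random.sample(replay, batch_size)
            s, a, r, s2, d = zip(*batch)
            s = torch.as_tensor(np.array(s), dtype=torch.float32)
            a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
            r = torch.as_tensor(r, dtype=torch.float32)
            s2 = torch.as_tensor(np.array(s2), dtype=torch.float32)
            d = torch.as_tensor(d, dtype=torch.float32)
            q = q_net(s).gather(1, a).squeeze(1)
            with torch.no_grad():
                target = r + gamma * (1 - d) * target_net(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Periodically sync the target network to stabilise learning.
    if episode % 10 == 0:
        target_net.load_state_dict(q_net.state_dict())
```

The reward signal shown here is the environment's built-in one (forward progress minus torque cost and fall penalties), which mirrors the abstract's reward/penalty scheme only in spirit; a PPO variant would replace the replay buffer and Q-targets with on-policy rollouts and a clipped policy-gradient objective.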



Published

14-03-2024

Section

Research Articles

How to Cite

[1] B. N. Lahudeo, M. Vayadande, R. Malviya, and A. Haldule, “Walking and Survival AI Using Reinforcement Learning - Simulation”, Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol., vol. 10, no. 2, pp. 51–54, Mar. 2024, doi: 10.32628/CSEIT2390629.
