Walking and Survival AI Using Reinforcement Learning - Simulation

Authors

  • Bharate Nandan Lahudeo, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India
  • Makarand Vayadande, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India
  • Rohit Malviya, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India
  • Atharva Haldule, Artificial Intelligence and Data Science, Zeal College of Engineering and Research, Pune, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT2390629

Keywords:

AI, Reinforcement learning, DQN, VR

Abstract

This research paper presents a novel approach to training an AI agent for walking and survival tasks using reinforcement learning (RL) techniques. The primary research question addressed in this study is how to develop an AI system capable of autonomously navigating diverse terrains and environments while ensuring survival through adaptive decision-making. To investigate this question, we employ RL algorithms, specifically deep Q-networks (DQN) and proximal policy optimization (PPO), to train an AI agent in simulated environments that mimic real-world challenges. Our methodology involves designing a virtual environment where the AI agent learns to walk and make survival-related decisions through trial and error. The agent receives rewards or penalties based on its actions, encouraging the development of strategies that optimize both locomotion and survival skills. We evaluate the performance of our approach through extensive experimentation, testing the AI agent's adaptability to various terrains, obstacles, and survival scenarios.
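The trial-and-error loop the abstract describes — an agent acting in a simulated terrain and receiving rewards for surviving and progressing, penalties for failing — can be sketched in miniature. The code below uses tabular Q-learning as a simplified stand-in for the paper's DQN/PPO agents, on a toy one-dimensional "terrain" with hazard cells; the environment, reward values, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Toy sketch of reward-driven RL training (tabular Q-learning standing in
# for DQN/PPO). All environment details here are illustrative assumptions.
import random

TERRAIN_LEN = 10
HAZARDS = {3, 6}          # stepping on these cells ends the episode (penalty)
GOAL = TERRAIN_LEN - 1    # surviving the walk to the end earns a reward
ACTIONS = (1, 2)          # 1 = walk one cell, 2 = jump (can clear a hazard)

ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.1

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(TERRAIN_LEN - 1, state + action)
    if nxt in HAZARDS:
        return nxt, -10.0, True   # penalty: the agent failed to survive
    if nxt == GOAL:
        return nxt, +10.0, True   # reward: walked the full terrain
    return nxt, -0.1, False       # small per-step cost encourages progress

def train(episodes=2000, seed=0):
    """Learn a Q-table by trial and error, as in the abstract's loop."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(TERRAIN_LEN) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:                      # explore
                action = rng.choice(ACTIONS)
            else:                                           # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            target = reward + (0.0 if done else GAMMA * best_next)
            q[(state, action)] += ALPHA * (target - q[(state, action)])
            state = nxt
    return q
```

After training, the greedy policy learns to jump over the hazard cells and walk elsewhere. A DQN replaces the Q-table with a neural network over continuous observations, and PPO learns a stochastic policy directly, but the reward/penalty feedback loop is the same.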



Published

2024-03-30

Section

Research Articles

How to Cite

[1] Bharate Nandan Lahudeo, Makarand Vayadande, Rohit Malviya, and Atharva Haldule, "Walking and Survival AI Using Reinforcement Learning - Simulation," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 10, Issue 2, pp. 51-54, March-April 2024. Available at doi: https://doi.org/10.32628/CSEIT2390629