Autonomous Recognition of Human Activity to Aid in the Development of Robots

Authors

  • Arjun Singh, Hemant Nautiyal, Viveksheel Yadav

DOI:

https://doi.org/10.17762/msea.v71i4.814

Abstract

Recognizing human activities in video sequences or still images is a challenging task owing to background clutter, partial occlusion, and variations in scale, viewpoint, lighting, and appearance. A wide variety of applications, such as video surveillance systems, human-computer interfaces, and robots that characterize human behavior, each require their own activity recognition system in order to function properly. In this article, we provide a comprehensive review of recent, state-of-the-art research achievements in the classification of human activities. We first present a taxonomy of human activity research approaches and then evaluate the benefits and drawbacks associated with each approach. We divide human activity classification methods into two primary categories, unimodal and multimodal, according to whether they use data from a single modality or from several. Each category is further separated into subcategories that reflect how the methods model human activities and the type of activities they address. In addition, we provide a detailed analysis of the human activity classification datasets that are publicly available, together with an examination of the characteristics an ideal human activity recognition dataset should satisfy. Finally, we address open problems in human activity recognition and outline promising directions for future research.

Humanoid robots often use dialogue systems based on pre-programmed templates. Such a system can respond well inside a certain discourse domain, but it is unable to respond appropriately to input that falls outside that domain. Because the interactive components have no mechanism for detecting emotion, the rules of the dialogue system must be written by hand rather than generated automatically. To address this, we developed an open-domain dialogue system for a humanoid robot together with a deep neural network emotion analysis model that infers the emotional state of the interacting partner. The method combines emotional state analysis with studies of Word2vec and language coding, and the robot's emotional state is then trained using a dedicated training and emotional state analysis paradigm.
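To make the template limitation concrete, the following is a minimal sketch of a hand-written, rule-based dialogue loop of the kind contrasted here with the open-domain system; the patterns, replies, and fallback message are invented for illustration and are not the authors' actual rules.

import re

# Hand-crafted templates: the system only covers a fixed discourse domain.
# All patterns and responses below are illustrative placeholders.
TEMPLATES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\byour name\b", re.I), "I am a humanoid robot."),
    (re.compile(r"\bweather\b", re.I), "I cannot sense the weather yet."),
]

def reply(utterance: str) -> str:
    for pattern, response in TEMPLATES:
        if pattern.search(utterance):
            return response
    # Outside the discourse domain the system has no appropriate answer;
    # this is the limitation the open-domain, emotion-aware model targets.
    return "Sorry, I do not understand."

print(reply("Hi there!"))                    # in-domain: templated reply
print(reply("My grandmother passed away."))  # out-of-domain: generic fallback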
Emotional state analysis requires language processing, coding, feature analysis, and Word2vec. The results of training the emotional state analysis model on a humanoid robot are broken down and discussed in this article, along with their implications. As a result of advances in science and technology, robots have gradually permeated every facet of human life in recent years; they are used in many fields, including manufacturing, the military, home healthcare, education, and laboratories [1]. According to the three principles that underpin robotics [2, 3], the ultimate aim of robot development is to attain human-like behavior, to help people carry out their tasks more efficiently, and to realize their ambitions. For human-robot cooperation to succeed, people must be able to communicate with the robot effectively [4, 5]. In standard human-computer interaction, a person inputs data using a keyboard, mouse, or other manual devices while the computer outputs data through a display and other peripherals; this mode of interaction requires considerable resources, and some people do not have access to computers at all [6]. Natural communication between people and machines instead relies on speech, vision, touch, hearing, proximity, and other human modalities [7], a form of interaction that is both common and advantageous to both parties [8]. To facilitate more fruitful collaboration between people and robots, the humanoid robot's emotion analysis model [9] assesses and detects the emotional information of the interacting partner during the exchange: the partner's language carries a great deal of emotional information, and written text reflects the person's underlying thinking.
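As a rough illustration of the Word2vec-plus-neural-network pipeline described above, the sketch below trains word embeddings on a toy labelled corpus, averages them into fixed-length sentence vectors as a stand-in for the language-coding step, and fits a small feed-forward classifier. The corpus, emotion labels, library choices (gensim and scikit-learn), and hyperparameters are all assumptions made for the example, not the authors' configuration.

import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

# Toy labelled utterances: (tokenized sentence, emotion label).
corpus = [
    (["i", "am", "so", "happy", "to", "see", "you"], "joy"),
    (["this", "makes", "me", "very", "angry"], "anger"),
    (["i", "feel", "lonely", "and", "sad", "today"], "sadness"),
    (["what", "a", "wonderful", "surprise"], "joy"),
]

sentences = [toks for toks, _ in corpus]
labels = [lab for _, lab in corpus]

# 1. Learn word embeddings (the "language coding" step).
w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

def encode(tokens):
    # Average word vectors to get a fixed-length sentence vector.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([encode(toks) for toks in sentences])

# 2. Train a small feed-forward network mapping sentence vectors to emotions.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)

# 3. At interaction time, the robot scores the user's utterance.
print(clf.predict([encode(["i", "am", "happy", "today"])]))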

Published

2022-09-16

How to Cite

Arjun Singh, Hemant Nautiyal, Viveksheel Yadav. (2022). Autonomous Recognition of Human Activity to Aid in the Development of Robots. Mathematical Statistician and Engineering Applications, 71(4), 2543–2552. https://doi.org/10.17762/msea.v71i4.814

Issue

Vol. 71 No. 4 (2022)

Section

Articles