Los Angeles, California: Advancing the vision of robots that can perform a wide range of tasks, USC researchers have designed a system that lets robots autonomously learn complicated tasks from a very small number of demonstrations, even imperfect ones. The paper, titled "Learning from Demonstrations Using Signal Temporal Logic," was presented at the Conference on Robot Learning.
"Many machine learning and reinforcement learning systems require large amounts of data and hundreds of demonstrations--you need a human to demonstrate over and over again, which is not feasible," said lead author Aniruddh Puranic, a Ph.D. student in computer science at the USC Viterbi School of Engineering.
The new system lets robots learn from demonstrations much the way humans learn from one another, which helps in deriving control policies for the robots' movements even on complex tasks. Learning from demonstrations is becoming increasingly popular for obtaining effective robot control policies. However, demonstrations vary with the expertise of the person giving them, so the learning process depends on evaluating each demonstration, distinguishing safe from unsafe and desirable from undesirable actions, before the robot learns from it. A rough sketch of how such an evaluation might look is shown below.
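To make the idea concrete, here is a minimal, illustrative sketch of how demonstrations could be graded with a Signal Temporal Logic (STL) style robustness score. It assumes a simple hypothetical specification, "always stay above a safety threshold," and made-up trajectories; the function name, threshold, and data are not from the paper and are not the authors' implementation.

```python
# Illustrative sketch only: ranking demonstrations by a simple STL-style
# robustness score. The specification, threshold, and data are hypothetical
# and not taken from the USC paper.
import numpy as np

def robustness_always_above(trajectory, threshold):
    """Robustness of the STL formula G(x > threshold): the worst-case
    (minimum) margin by which the signal stays above the threshold.
    Positive means the demonstration satisfies the spec, negative means
    it violates it, and the magnitude says by how much."""
    return float(np.min(trajectory - threshold))

# Three hypothetical demonstrations of a 1-D signal
# (e.g., distance to an obstacle over time).
demos = {
    "expert": np.array([0.9, 0.8, 0.85, 0.9]),
    "novice": np.array([0.6, 0.4, 0.55, 0.7]),
    "unsafe": np.array([0.5, 0.1, -0.2, 0.3]),  # dips below the safety margin
}

safety_threshold = 0.2
scores = {name: robustness_always_above(traj, safety_threshold)
          for name, traj in demos.items()}

# Higher robustness suggests a safer, higher-quality demonstration; such
# scores could be used to weight or filter demonstrations before learning
# a control policy.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: robustness = {score:.2f}")
```

In this toy example, the "expert" demonstration gets the highest score and the "unsafe" one a negative score, so a learner could down-weight or discard the latter rather than imitate it blindly.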