Hardware- and software-based robotics have improved human productivity by automating routine tasks, and AI is now extending them to non-routine tasks. Because full automation cannot be achieved in every area in the near future, the challenge remains to design the interface between robots and humans so that each can take on complementary assignments and interact optimally.
IT supports automation, including the robots lined up in a factory. Twenty-four hours a day, 365 days a year, those robots repeat the movements preassigned by their programs, producing results at a consistent level of quality.
At huge ecommerce stores that sell billions of dollars in merchandise, the number of items processed exceeds 100 million. The huge warehouses that support their logistics hold more than 1,000 product shelves, each over 2 meters tall and weighing more than 100 kilograms. The workers, however, do not need to walk around these warehouses looking for the shelf that stores a particular product. Robots, whose installation began around 2010, lift entire shelves and deliver them to the workers. A worker waits at a designated location and follows the instructions on a screen to load or unload the needed product; once that is complete, the next shelf is delivered immediately. Working this way, a robot-human pair is 4 to 5 times more efficient than a human alone.
It is clear that this type of robot-driven automation would not be possible without the support of IT. The IT system controls as many as 100 robots in an integrated manner: it relocates the appropriate shelf to the right location, selects an efficient delivery route, avoids collisions between robots, and, when it detects an obstacle on the floor, designates the surrounding area as impassable. Each robot also manages its own battery charging. Automated warehouses controlled by robots are already commonplace, and multiple vendors sell warehouse robots throughout the world, including in China, home to the world’s largest ecommerce websites.
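The routing part of this control problem can be sketched in miniature. The following is an illustrative sketch only, not any vendor’s actual algorithm: the warehouse floor is modeled as a grid, a detected obstacle marks cells impassable, and a breadth-first search finds a shortest route around them.

```python
# Minimal sketch of warehouse route selection with obstacle avoidance.
# The grid layout and search method are assumptions for illustration.
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search on a grid; cells marked 1 are impassable
    (e.g. a detected obstacle and its designated surrounding area)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route: the controller would wait or reassign the task

# A dropped item blocks some cells; the robot routes around them.
grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
route = shortest_route(grid, (0, 0), (3, 3))
```

A real controller would additionally coordinate routes across all robots at once to avoid robot-to-robot collisions; this sketch shows only the single-robot case.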
IT cannot automate non-routine tasks, which cannot be preprogrammed. An example is “picking and stowing” products from and to shelves in an ecommerce warehouse. A human worker can look at a product and decide how to pick it up based on its shape, weight and hardness, predicting where to grab, how firmly to grip and how much force lifting will require. The worker can also adjust the grip immediately if, after actually grabbing the product, its weight or hardness differs from the initial assumption.
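A naive preprogrammed version of this predict-then-correct behavior can be sketched as a feedback loop, which also hints at why such rules are hard to generalize. The function names, force values and gain below are all invented for illustration.

```python
# Hypothetical sketch of the worker's predict-then-correct grip behavior.
# All values and thresholds are invented; a real system would need far
# richer sensing and many more rules than this.

def initial_grip_force(weight_estimate, hardness):
    """Predict a grip force from appearance: softer items get a gentler
    grip relative to their estimated weight."""
    return weight_estimate * (2.0 if hardness > 0.5 else 1.2)

def adjust_grip(force, slipping, crushing, gain=0.25):
    """Correct the grip once touch sensors contradict the prediction."""
    if slipping:      # heavier or smoother than predicted: squeeze harder
        force *= 1 + gain
    elif crushing:    # softer than predicted: ease off
        force *= 1 - gain
    return force

force = initial_grip_force(weight_estimate=1.0, hardness=0.8)  # firm item
force = adjust_grip(force, slipping=True, crushing=False)      # it slipped
```

Even this toy version exposes the problem the paragraph describes: every new product shape or material demands more rules, which is exactly what cannot be exhaustively preprogrammed.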
From the human perspective, these movements follow rules. To IT, however, those rules are too diverse, complex and ambiguous, and the second-by-second adjustments they require cannot be preprogrammed. A human establishes generalized rules through study, then flexibly modifies them to fit the situation before acting. As a result, non-routine tasks, which require such sophisticated human actions, have remained outside the scope of automation. Product handling, dangerous tasks at factories, serving food on plates at restaurants, operating and driving machines, and many other non-routine tasks still depend on humans. Robotic process automation (RPA) can streamline only routine, repetitive tasks; applying it to non-routine tasks has not yet succeeded and remains a longstanding challenge.
The use of AI and machine learning for automating non-routine tasks has long been studied. One of the most famous demonstrations used AI to recognize a variety of mixed, unorganized objects of different sizes and shapes in an image, determine a way to grab each one, and control a robot arm to lift it. The AI program repeated the learning process until the arm could lift the objects flawlessly.
Today, more practical trials and contests are being conducted. In one contest held for a large ecommerce company, for example, the AI looked at popular products placed randomly and mixed together in a bin (some hidden from view by others), identified what they were, and sorted them as it moved the robot arm. Two kinds of products had to be identified: those for which enough data was provided in advance, and those for which only a sample image was provided immediately before the trial. The winners of these contests, in which participants compete on work accuracy under near-real conditions, are said to be able to extract and sort every product. This is the result of combining multiple AI technologies, such as an AI that has separately learned general object images well enough to identify an object by sight.
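The two recognition paths, products with ample training data and products seen only in a single sample image, can be sketched as matching image embeddings against two candidate pools. The vectors, labels and nearest-neighbor rule below are stand-ins for real learned embeddings, not any contest entry’s actual method.

```python
# Illustrative sketch: identify a product by comparing its image embedding
# against class prototypes (learned from ample data) and against single
# sample embeddings (provided just before the trial). All vectors invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(embedding, known_prototypes, one_shot_samples):
    """Return the best-matching label across both product pools."""
    candidates = {**known_prototypes, **one_shot_samples}
    return max(candidates, key=lambda label: cosine(embedding, candidates[label]))

known_prototypes = {"tape": [0.9, 0.1, 0.0], "bottle": [0.1, 0.9, 0.2]}
one_shot_samples = {"novelty_mug": [0.0, 0.2, 0.95]}  # seen once, pre-trial

label = identify([0.05, 0.25, 0.9], known_prototypes, one_shot_samples)
```

The key point the sketch captures is that the one-shot product needs no retraining: a single reference embedding is enough to match against, which is what makes the "sample image immediately before the trial" condition feasible.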
Autonomous driving is an even more sophisticated non-routine task that requires complex learning. In the real world, road conditions change constantly, and other vehicles and pedestrians move in and out of the scene. A driver must assess situations, identify risks and be prepared to take defensive action quickly. One theory holds that driving data covering more than 10 billion kilometers of travel is required for AI technology to cover all such contingencies.
Although AI remains key to a solution, reproducing every scenario on real roads is impossible: exhaustively testing the multitude of complex environments, combining changes in weather, surrounding vehicles, pedestrians and road surfaces, is unrealistic. This calls for a computer simulator that virtually reproduces these complex conditions at random so that learning can be repeated. It is thought that by using AI, particularly a generative adversarial network (GAN), the simulator’s imagery and physics can be brought to a level nearly equal to reality. To learn to mimic actual human driving, Generative Adversarial Imitation Learning (GAIL), a GAN-based learning method, can be used to achieve sophisticated mimicry from a small data sample. Thus, leading-edge AI technologies are being introduced one after another in the continuing effort to automate sophisticated non-routine tasks.
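The randomized reproduction of conditions that the simulator relies on can be sketched as a scenario generator: each training episode samples a fresh combination of weather, traffic and road state. The condition lists, ranges and probabilities below are invented placeholders, not parameters from any real simulator.

```python
# Sketch of randomized scenario generation for simulator-based training.
# All condition sets and ranges are illustrative assumptions.
import random

def random_scenario(rng):
    return {
        "weather": rng.choice(["clear", "rain", "snow", "fog"]),
        "road_surface": rng.choice(["dry", "wet", "icy"]),
        "n_vehicles": rng.randint(0, 30),
        "n_pedestrians": rng.randint(0, 15),
        "sudden_crossing_event": rng.random() < 0.1,  # rare hazard
    }

rng = random.Random(42)  # seeded so training runs are reproducible
episodes = [random_scenario(rng) for _ in range(10_000)]

# A driving policy would then be trained against each sampled episode, e.g.:
#   for scenario in episodes:
#       policy.update(simulate(scenario))   # hypothetical training loop
```

Sampling at this volume is what lets the simulator cover rare combinations (an icy road in fog with a sudden crossing) that real-world driving data would take billions of kilometers to encounter.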
As we approach the time when robots will play an active role even in non-routine tasks, how to divide tasks between humans and robots, and how to establish a co-working relationship, are now recognized as major issues. Automation of non-routine tasks may advance rapidly, but faultless automation with no human intervention is still far from reality. Until then, the scope of coordination, that is, how much to automate and how much humans should intervene, requires continual adjustment. Over time, the tasks performed by robots will come closer to those of humans, and the physical distance between robots and humans will also disappear. Once divided by a fence, robots and humans will stand next to each other.
The difficulty of coordination becomes most apparent with self-driving, or autonomous, cars. In the latest experiments, the vehicle is simply told the destination and drives there with no human intervention. However, it can sometimes fail to respond properly to sudden events, such as a signal from a traffic-control officer, a child jumping into the road or a vehicle cutting in from an adjoining lane. Human drivers are therefore required to keep their hands on the steering wheel at all times, ready for the cases the self-driving car cannot handle. This experiment presupposes that a driver who is supposed to be relaxed can perform a highly urgent and difficult task at a moment’s notice. That is not optimal coordination between robot and human.
A new UI design is needed so that an autonomous vehicle does not drive automatically into an area where human intervention might be required. In one such design, the self-driving car retracts its steering wheel while running through an area it can navigate safely, encouraging the human to rely completely on the car. If the car determines it cannot drive on its own, it extends the steering wheel to where the human’s hands are, encouraging the driver to take control. This switch is made overtly, based on the car’s judgment of its surroundings, and with plenty of time to shift control safely; the human recognizes the change and coordinates with it. Although far from a perfect solution, this is one approach that shows promise.
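The handover behavior described above can be sketched as a small state machine. The state names, the look-ahead check and the implied wheel extend/retract actions are hypothetical, not any vendor’s actual interface.

```python
# Minimal sketch of the overt control-handover logic: the car requests a
# handover early, and control only switches once the human has visibly
# taken the wheel. All names here are illustrative assumptions.

AUTONOMOUS, HANDOVER_REQUESTED, MANUAL = "autonomous", "handover_requested", "manual"

class HandoverController:
    def __init__(self):
        self.state = AUTONOMOUS  # wheel retracted, car fully in control

    def update(self, can_self_drive_ahead, driver_hands_on_wheel):
        if self.state == AUTONOMOUS and not can_self_drive_ahead:
            # Car judges it cannot handle the road ahead: extend the wheel
            # early, leaving plenty of time for the human to take over.
            self.state = HANDOVER_REQUESTED
        elif self.state == HANDOVER_REQUESTED and driver_hands_on_wheel:
            self.state = MANUAL  # human has overtly acknowledged control
        elif self.state == MANUAL and can_self_drive_ahead:
            self.state = AUTONOMOUS  # safe area again: retract the wheel
        return self.state

ctrl = HandoverController()
ctrl.update(can_self_drive_ahead=True, driver_hands_on_wheel=False)   # stays autonomous
ctrl.update(can_self_drive_ahead=False, driver_hands_on_wheel=False)  # wheel extends
ctrl.update(can_self_drive_ahead=False, driver_hands_on_wheel=True)   # human takes over
```

The design choice the sketch encodes is that control never flips silently: there is always an intermediate, visible request state between full autonomy and manual driving.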
A deep understanding of the tasks to be automated is indispensable to designing robot-human coordination. Analyzing a task that humans have performed casually and transforming it into a more efficient, interactive process between human and machine involves more than an understanding of the task itself; it also demands a deep grasp of the behavioral attributes inherent in humans.