Advancements in perception technologies for images and voice are giving robots enhanced environmental awareness, creating opportunities to apply them in products such as self-driving cars and drones. These higher-level operational capabilities will transform the industrial structure.
It is safe to say that robots now have better eyesight than humans. One reason is that deep learning has increased the accuracy of image recognition year after year. This was demonstrated at the 2016 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which a machine identified the names of objects in images with 97.0% accuracy, compared with 94.9% for humans. In addition, SLAM<sup>*1</sup>, a technology that simultaneously estimates a robot’s own location and builds a map of its surroundings from camera and sensor data, has enabled highly accurate capture of 3D spaces. Yet another new technology captures a space using only a smartphone’s monocular camera, which will enable the effortless creation of indoor 3D maps. This technology will likely become widespread in places such as commercial facilities, warehouses and factories.
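The mapping half of SLAM can be sketched in a few lines. The toy function below is illustrative only (its names and parameters are not from this report): given a pose that is assumed to be known, it projects range-sensor readings into world coordinates and marks the corresponding cells of an occupancy grid. Real SLAM must estimate the pose and the map simultaneously.

```python
import math

def update_occupancy_grid(grid, pose, ranges, angles, max_range=10.0, cell=0.5):
    """Mark grid cells hit by range-sensor beams as occupied.

    A simplified sketch of the mapping step: `pose` is (x, y, heading),
    `ranges`/`angles` are sensor readings relative to that heading.
    Readings at or beyond max_range are treated as "no obstacle seen".
    """
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        if r >= max_range:            # beam hit nothing within sensor range
            continue
        # world coordinates of the point the beam hit
        hx = x + r * math.cos(theta + a)
        hy = y + r * math.sin(theta + a)
        grid[(round(hx / cell), round(hy / cell))] = 1  # mark occupied
    return grid

# one beam hits an obstacle 2 m straight ahead; the other sees nothing
grid = update_occupancy_grid({}, pose=(0.0, 0.0, 0.0),
                             ranges=[2.0, 10.0], angles=[0.0, math.pi / 2])
```

In a full SLAM system, the same sensor readings would also be matched against the map built so far to correct the estimated pose before the grid is updated.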
Voice recognition has also reached human-level accuracy, giving robots effective ears as well as eyes. Moreover, robots can sense things that humans cannot, such as ultrasonic waves, infrared light and magnetism; this capability is inherent to machines. Robots that combine human-level perception with these additional capabilities are expected to rapidly expand their range of application.
As spatial recognition improves, robotic contests, whose purpose is to enhance the functionality and performance of robots, are on the rise. For example, at the Amazon Picking Challenge, robots compete on their ability to place products on shelves and remove them, while at RoboCup, robots compete at playing soccer or at rescuing humans at simulated disaster sites. At the inaugural DARPA Grand Challenge robotic car race in 2004, no participant reached the finish line; in 2005, however, five cars completed the race, laying the foundation for today’s self-driving technology.
In addition to automating simple tasks that humans have performed in the past, even advanced tasks that only experts could perform are now being automated. For example, agricultural drones equipped with cameras and sensors spray pesticide only over areas infested by pests, or adjust the amount of fertilizer to the condition of crop growth in a particular area. Such drones can perform these tasks with far greater precision than humans, and the automation can yield significant savings on pesticides and fertilizers.
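The spray-only-where-needed logic can be sketched as a simple per-cell decision. This is a hypothetical example (the function, threshold and grid-cell names are mine, not the report's): each field cell gets a pesticide rate proportional to its pest-detection confidence, and low-confidence cells get none, which is where the savings come from.

```python
def spray_plan(cells, threshold=0.5, max_rate=1.0):
    """Decide a per-cell pesticide rate from pest-detection scores.

    `cells` maps a grid-cell label to a detection confidence in [0, 1].
    Cells below `threshold` receive no spray; the rest receive a rate
    proportional to confidence, capped by `max_rate`.
    """
    return {cell: (max_rate * score if score >= threshold else 0.0)
            for cell, score in cells.items()}

# A1 and B1 show likely pest activity and are sprayed; A2 is skipped
plan = spray_plan({"A1": 0.9, "A2": 0.2, "B1": 0.6})
```

The same pattern applies to variable-rate fertilizing, with crop-growth indicators in place of pest-detection scores.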
Individual customization of products has been difficult to realize because of cost. In the future, however, autonomous factories may emerge in which a robot, drawing on data from manufacturing machines and sensors as well as from sales and material procurement, autonomously determines the necessary materials, the most efficient manufacturing process and the methods for coordinating with other machines, automatically reconfiguring production lines. As a result, mass customization may become a reality.
The global robot-related market is predicted to more than double, from 91.5 billion dollars in 2016 to 188 billion dollars in 2020<sup>*2</sup>, with competition over robot functionality and pricing intensifying in the future.
In particular, the automobile industry will very likely reach a significant tipping point. During the development phase of self-driving technology, driving performance is the major focus. Once fully autonomous cars are complete, however, driving performance will be taken for granted, and the transportation experience itself will become the deciding factor. The car industry will thus likely shift from the traditional business of selling things to one of selling experiences and services, and the definition of a customer will broaden from people who wish to own cars to everyone with transportation needs.
LIDAR, a sensor that recognizes 3D spaces using light, can detect particles smaller than radio-wave radar can. Because it can also recognize the shape and moving speed of an object, it is receiving special attention as the potential “eye” of a self-driving car. Although LIDAR is costly today, manufacturers are targeting a cost of $100 or less within five years. Efforts are also underway to put a LIDAR sensor on a single microchip at a potential price of only $10; such a LIDAR could be installed in many devices, and its use would quickly spread to robots and home electronics. In parallel, efforts are underway to achieve self-driving without sensors such as LIDAR, using improved camera performance and AI to recognize objects and measure distances. Either of these evolving technologies may become the optimal choice for robots’ eyes.
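LIDAR typically measures range by timing a light pulse's round trip to a target and back; the arithmetic is simple enough to show directly (the function name below is mine, not the report's):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_seconds):
    """Range from a LIDAR time-of-flight measurement.

    The pulse travels to the target and back, so the one-way
    distance is half the round-trip time times the speed of light.
    """
    return C * round_trip_seconds / 2.0

# a pulse returning after roughly 0.67 microseconds corresponds to ~100 m
d = tof_range(667e-9)
```

The sub-microsecond timescales involved are one reason precise LIDAR has been expensive, and why integrating the timing electronics onto a single chip is attractive.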
The growing field of biomimetics models the superior functions and structures of living organisms and applies them to technological development, helping to advance robotics further. For example, robots equipped with a tactile sensor that models human pain can now feel discomfort upon impact and act to avoid it. This will likely enable the reliable use of robots in situations where they must work closely with humans.
A self-driving car may be considered a robot that operates autonomously while recognizing its surroundings, and it is the robot garnering the most attention today. IT companies have entered the self-driving car race alongside automobile manufacturers, accelerating the trend toward mergers and acquisitions. The year 2016 also saw proof-of-concept trials of self-driving buses and experimental self-driving taxi services on public streets. In addition, the world’s first self-driving delivery truck completed an autonomous trial run on a 190 km stretch of expressway. Importantly, the arrival of deliveries by self-driving trucks is expected to significantly alleviate the current truck driver shortage, which continues to worsen with the rapid expansion of e-commerce. It will take time before a completely autonomous car, one that needs no human intervention under any conditions, emerges; however, autonomous driving is already available under specific circumstances.
Drones are being used for a wide variety of business purposes, including surveying, 3D map creation, inspection, security, search and rescue, investigation and delivery, as well as for entertainment. Drones can fly over hard-to-reach locations at low cost and capture accurate spatial information about a location. As a result, they hold the potential to increase efficiency and provide new services in an unparalleled way.
Robots are also extending their working arena to commercial facilities, households and public spaces. For example, robots now use cameras and sensors to patrol product display shelves for out-of-stock products, misplaced products and messy displays, raising the potential to significantly reduce labor. To assist in everyday life, there are self-driving vacuum cleaners and communication robots, as well as robots that suggest recipes using the ingredients stored in the refrigerator, and even robots that can cook.
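The shelf-patrol check amounts to comparing the intended shelf layout with what the robot's camera actually detects. The sketch below is a hypothetical illustration (the planogram/slot names are invented for the example): empty slots are flagged as out of stock, and slots holding the wrong product are flagged as misplaced.

```python
def shelf_report(planogram, detected):
    """Compare a shelf's intended layout with what a camera detected.

    `planogram` maps each shelf slot to its expected product;
    `detected` maps slots to the product actually seen (None = empty).
    Returns the out-of-stock slots and the misplaced items.
    """
    out_of_stock = [slot for slot in planogram if detected.get(slot) is None]
    misplaced = {slot: seen for slot, seen in detected.items()
                 if seen is not None and planogram.get(slot) != seen}
    return out_of_stock, misplaced

# slot s1 is empty; slot s2 holds candy where chips belong
oos, wrong = shelf_report({"s1": "soda", "s2": "chips"},
                          {"s1": None, "s2": "candy"})
```

In practice the `detected` map would come from an image-recognition model run on the patrol robot's camera feed, with the same comparison logic on top.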
High-performance robots will also bring higher risks of injuring humans and infringing on privacy. Accordingly, the future development of robots will require resolving these problems, including through new legislation.
Looking at the mid- to long-term future, discussions are underway about imposing a robot tax on owners under the assumption that robots are electronic persons. The introduction of a basic income granted to all citizens to maintain a minimum standard of living has also been much discussed, with experiments starting in Finland and San Francisco. These discussions assume that social structures will change significantly as robots and AI replace people in jobs. However, as with computers, new professions will emerge and different skills will be required. In addition to systemic adjustments in taxation and social security, education programs that fill this skills gap will become vital in the future.
*1 The official name is Simultaneous Localization and Mapping.
*2 Worldwide Semiannual Commercial Robotics Spending Guide, IDC