The barriers to using AI are falling as the technology's evolution accelerates. Even companies without massive datasets or machine-learning experts now have opportunities to use AI. Furthermore, the development of algorithms and hardware for mobile and IoT devices will make autonomous AI pervasive in every dimension of life.
The burgeoning world of artificial intelligence (AI) is being championed not only by specialty journals but also by the general media, because AI is evolving rapidly, with new technologies and services released daily. For example, AlphaGo, the program famous for defeating a professional Go player, continues to evolve. AlphaZero, introduced in December 2017, is its latest successor, demonstrating world-class ability not only in Go but also in chess and shogi. What surprised the world is that AlphaZero taught itself this expertise after being given only the basic rules of each game. This indicates that AI can autonomously develop abilities by extrapolating from rules to other applications. Since AI learns differently from humans, it has the potential to create groundbreaking techniques and approaches from which humans can learn.
In a reading comprehension test in January 2018 using a dataset called SQuAD1, AI exceeded human scores for the first time. AI has also surpassed humans in image- and voice-recognition tasks. In the future, anyone will likely be able to comprehend complex information found in medical and court documents, for example, by using AI to translate the text into layman's terms.
1 SQuAD is an abbreviation for Stanford Question Answering Dataset.
While AI continues to evolve, how much is it being used in business today? Deep learning, which is singlehandedly driving the advancement of AI, becomes more accurate as the amount of data increases. However, processing such data requires robust computing power. As a result, a gap may open between large companies that possess huge amounts of data and rich computing resources and smaller companies that do not, in how successfully each can implement and harness AI in the development of products and services.
Recent developments in AI technology have begun to narrow this capability gap. For example, in deep reinforcement learning, AI repeats trial-and-error simulations to learn autonomously, without prepared data. In transfer learning, AI can adapt what it has learned in one area to another, enabling highly accurate results from small amounts of initial data. For instance, accessible medical-image data is limited due to privacy concerns. Nevertheless, AI can learn from huge amounts of data on common objects and transfer this knowledge to infer diagnoses with a high degree of accuracy.
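The transfer-learning idea above can be sketched in miniature: reuse a feature extractor trained elsewhere as a frozen component, and train only a small new "head" on the scarce target data. The following is a minimal NumPy sketch under stated assumptions; the random projection standing in for a pretrained network, the synthetic data, and all names are hypothetical, not any real medical system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "pretrained" feature extractor: in practice this would be
# a network trained on a large generic dataset (hypothetical here).
W_pretrained = rng.normal(size=(20, 8))

def extract_features(x):
    # Frozen layer: reused unchanged on the new task.
    return np.tanh(x @ W_pretrained)

# Small labelled dataset for the new task (e.g., scarce domain data).
X = rng.normal(size=(40, 20))
true_w = rng.normal(size=8)
y = (extract_features(X) @ true_w > 0).astype(float)

# Train only a small linear "head" on top of the frozen features
# with plain logistic-regression gradient descent.
w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-extract_features(X) @ w))   # sigmoid
    grad = extract_features(X).T @ (p - y) / len(y)
    w -= 0.5 * grad

pred = (extract_features(X) @ w > 0).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

Because the heavy feature extractor is reused rather than retrained, only eight head parameters must be learned, which is why forty examples suffice here.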
Learning algorithms themselves have also evolved. When recognizing an object with traditional deep learning, the positional relationships and orientations of the parts of an image are not considered, so training data covering many different patterns is necessary. In a breakthrough technique called a capsule network, spatial attributes such as the positional relationships and angles of each part of an object are extracted for the AI to learn. With this technique, AI requires only small amounts of data to recognize an object.
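The difference can be illustrated with a toy comparison: a position-agnostic "bag of parts" view (analogous to what pooling preserves) versus a view that also keeps each part's pose relative to the others (the kind of spatial attribute a capsule network learns). This is a conceptual sketch only, not a capsule-network implementation; the part names and coordinates are invented.

```python
# Two "images" described by detected parts and their (x, y) positions
# (illustrative stand-in; not real detector output).
face      = {"left_eye": (2, 2), "right_eye": (6, 2), "mouth": (4, 6)}
scrambled = {"left_eye": (4, 6), "right_eye": (2, 2), "mouth": (6, 2)}

def bag_of_parts(img):
    # Position-agnostic view: only asks *which* parts are present.
    return sorted(img.keys())

def relative_layout(img):
    # Pose-aware view: keeps each part's position relative to the mouth.
    mx, my = img["mouth"]
    return {part: (x - mx, y - my) for part, (x, y) in img.items()}

# A part-presence detector cannot tell the two images apart...
print(bag_of_parts(face) == bag_of_parts(scrambled))        # True
# ...but the spatial relationships between parts distinguish them.
print(relative_layout(face) == relative_layout(scrambled))  # False
```

Because the pose-aware representation generalizes across shifts and rotations of the whole object, fewer distinct training examples are needed to cover those variations.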
Recent developments encourage the use of AI at many companies. Robust computing power can now be rented on cloud platforms without an initial investment, neural-network models can be designed without programming, and parameters can be optimized automatically. In the future, setting a task and preparing data may be all that is needed to easily create AIs specialized for individual applications.
Given that any firm can now leverage AI, what factors will differentiate one company from another? One is the ability to apply AI to business in the most appropriate way: the competency to leverage AI to solve business challenges. Since the evolution of AI is accelerating, another differentiator is speed of application. Consequently, faster development cycles with quicker updates to the latest models will be important.
Another critical factor is building a system that continuously accrues data. Although AI can now be used in fields with little accumulated data, the amount of data still differentiates one company from another. For this reason, it will be important to compile feedback data from users through services and then continuously improve the quality of those services. An example of this process is a service offered by Google in which AI recognizes a picture drawn by a human. Using the one billion pieces of data accumulated through this service, Google has also introduced another service that converts a picture drawn by a person into an illustration. Building an ecosystem in which data accumulated through one service gives rise to another will likely be necessary for continuous corporate development.
Because deep learning performs calculations over intricate network structures, the derivation process is also complex, making it difficult to understand AI decisions. Discussions are underway on the ethical aspects of AI development and use, with some calling for more transparency. The Asilomar AI Principles2 and the AI Development Guideline for International Debate3 include principles on AI transparency, which stipulate that the reasoning behind an AI's judgment should be verifiable in case it causes harm or negatively impacts humans.
To meet this need, efforts are ongoing to account for the reasoning behind AI decisions. For example, in autonomous vehicles, the regions of camera images that strongly influence the AI's judgment can now be visualized, making the decision process easier to inspect. Development with transparency will probably increase the appeal of AI. However, such a requirement may also slow the pace of AI innovation. Although no legal restrictions currently exist for transparency in AI, it may become important for society to strategically invest in the development of such transparency in the future.
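One common family of visualization techniques computes a saliency map: the gradient of the model's output score with respect to each input pixel, highlighting which pixels drove the decision. The sketch below is a deliberately tiny stand-in, assuming a linear "model" whose gradient is simply its weight map; real systems apply the same idea to trained deep networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image" classifier: a single linear layer standing in for a
# trained network (hypothetical; real systems use deep CNNs).
image = rng.random((8, 8)).astype(np.float32)
weights = np.zeros((8, 8), dtype=np.float32)
weights[2:6, 2:6] = 1.0          # the model only "looks at" a central patch

# The model's output score for this image.
score = float((weights * image).sum())

# Saliency map: |d(score) / d(pixel)| for each input pixel.
# For a linear model that gradient is exactly the weight map.
saliency = np.abs(weights)

# Pixels outside the central patch contribute nothing to the decision,
# so the map shows the "reason" for the score: the central region.
print(score, saliency[0, 0], saliency[3, 3])
```

The value of the map is that it can be rendered as a heat overlay on the original image, letting a human verify after the fact which regions the system attended to.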
2 Future of Life Institute: https://futureoflife.org/ai-principles/
3 AI Network Social Promotion Conference
The common method of processing data acquired from edge devices, such as mobile terminals and IoT devices, is to transmit the data to the cloud, process it there, and communicate the results back to the device. The problems with this method are that processing is limited to locations where a network connection is available, and that data traveling back and forth may delay processing or leak information. For these reasons, processing data directly at the network edge is desirable. However, models trained via deep learning are complex and large: some highly accurate object-recognition models run from hundreds of megabytes to several gigabytes. This makes it very difficult in some situations for edge devices to provide proper services with currently available computational resources.
A technology that solves this problem has recently been attracting attention: compressing learned models to make them significantly smaller while keeping the loss of accuracy within a few percent. In some cases, the size is reduced to one five-hundredth of the original. In addition, the emergence of frameworks for generating and processing small learning models, together with AI chips that enable high-speed processing, will allow any device to process data quickly wherever it is. In the near future, advanced learning will almost certainly be available at the edge, with individuals freely building and using AIs on their own devices. The advent of a world where individual AIs permeate all aspects of life and cooperate with each other may dramatically transform our lives and businesses.
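One widely used compression technique is weight quantization: storing each 32-bit floating-point weight as an 8-bit integer code plus a shared scale factor, trading a small reconstruction error for a 4x size reduction (pruning and distillation push the ratio much further). The following is a minimal NumPy sketch of linear 8-bit quantization; the weight distribution is synthetic and the numbers are illustrative, not those of any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a layer's float32 weights (100k parameters).
weights = rng.normal(scale=0.1, size=100_000).astype(np.float32)

# Linear 8-bit quantization: map the weight range onto int8 codes,
# keeping a single float scale factor for reconstruction.
scale = np.abs(weights).max() / 127
codes = np.round(weights / scale).astype(np.int8)

# Dequantize to approximate the original weights at inference time.
restored = codes.astype(np.float32) * scale

compression = weights.nbytes / codes.nbytes      # 4 bytes -> 1 byte per weight
max_err = float(np.abs(weights - restored).max())  # bounded by scale / 2
print(compression, max_err)
```

Because the rounding error per weight is at most half the scale step, accuracy typically degrades only slightly, which is what makes such models practical on memory-constrained edge devices.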