TECHNOLOGY TREND 01

TT01 Socially Accepted AI

As AI continues to permeate society, practical issues are emerging. Technology advancements that improve both the accuracy and ease of building AI will further expand its application. Additionally, ensuring transparency will create more socially accepted AI, accelerating its integration into society.

Improving AI Development and Maintenance

As AI becomes commonplace and the myth of an omnipotent AI is dispelled, users are focusing on its practical applications. One factor that has popularized AI is technology that makes it easier to develop. Deep learning acted as a catalyst for today’s AI boom when it decisively outperformed traditional methods at an image recognition contest in 2012. This triggered a significant boost in AI-related research and in the building of development frameworks. Consequently, advanced deep-learning-based services are now considerably easier to develop. Even so, expertise in neural network structures and parameter tuning is still needed to further improve accuracy.

In 2018, Google launched a service called Cloud AutoML, which automatically generates high-quality machine learning models. With this service, the training data itself is all that is required to build an accurate AI system. As a result, AI will likely begin to permeate fields where its adoption has been slow due to a lack of expertise.
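The sketch below illustrates the automated-search idea behind such services, using scikit-learn's grid search as a stand-in; Cloud AutoML's actual interface differs, and the dataset, pipeline and search space here are illustrative assumptions rather than details from the service itself.

```python
# Illustrative sketch of automated model search: the user supplies only
# labelled data and a coarse search space, and the tooling trains and
# cross-validates every candidate, returning the best-scoring model.
# (Stand-in for an AutoML-style workflow; not Cloud AutoML's own API.)
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
search_space = {
    "clf__C": [0.1, 1, 10],          # regularisation strength
    "clf__gamma": ["scale", 0.01],   # kernel width
}

# Every combination is trained and scored automatically.
search = GridSearchCV(pipeline, search_space, cv=3)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("test accuracy  :", search.score(X_test, y_test))
```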

To sustain performance, AI requires maintenance, including adaptation to environmental changes and resolution of issues identified after deployment. As AI pervades society and business, maintenance costs and upgrades will become increasingly important. One solution to this issue is a technology called active learning, in which AI automatically identifies and learns from the cases it is least confident about, efficiently improving its accuracy. Active learning not only improves judgment accuracy but also enables a proactive division of labor between humans and AI. Moreover, these technological advancements streamline the work of AI experts, easing the global shortage of AI talent.
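A minimal sketch of the uncertainty-sampling form of active learning is shown below; the dataset, model and query size are illustrative choices rather than details from any particular deployment.

```python
# Uncertainty-sampling sketch of active learning: the model flags the
# unlabelled examples it is least confident about, only those are sent to
# a human for labelling, and the model is retrained on the enlarged set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

labeled = rng.choice(len(X), size=100, replace=False)    # small seed set
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)     # pool to query from

model = LogisticRegression(max_iter=2000)
model.fit(X[labeled], y[labeled])

# Confidence = probability of the predicted class; query the lowest ones.
proba = model.predict_proba(X[unlabeled])
confidence = proba.max(axis=1)
query = unlabeled[np.argsort(confidence)[:20]]           # 20 most uncertain

# A human would label only these queried samples (here the known labels
# stand in for the human), and the model is retrained.
labeled = np.concatenate([labeled, query])
model.fit(X[labeled], y[labeled])
print("labelled examples after one query round:", len(labeled))
```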

Efforts to Better Learn and Adapt

While AI has outperformed humans in the accuracy of image and voice recognition within controlled research environments, it is important to remember that AI and humans recognize things differently in the real world. For example, a person can easily recognize the attributes of a can of beer – the brand, the fact that it is an alcoholic beverage for consumption by adults, whether it is full or empty, and whether it is cold or warm. AI, on the other hand, lacks this cognitive capacity, and it is impractical to supply AI with real-world knowledge in its entirety.

One approach to resolving this difference is called embodied AI. Humans acquire knowledge through interactions with society and by using the five senses. On the assumption that both a physical body and intelligence are necessary to acquire certain kinds of knowledge, this approach uses an AI robot equipped with sensors to move around its environment and look at, listen to and touch objects. If this approach proves successful, AI will likely be able to make physical connections to objects and adapt to individual situations in a more flexible way.

There is also a significant gap in the amount of data and time humans and AI require to learn a task. AI must learn any new undertaking from scratch using huge amounts of data, while humans can learn incrementally with minimal input, because they have already accumulated a foundation for learning. One means for AI to develop this human capacity is called meta learning. Using meta learning, a robot can replicate a simple task, such as placing an object into one of several containers set in different positions, after viewing a video of the relevant movements. Further advancement of this technology will likely widen the range of AI applications for manufacturing and home robots.
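The sketch below illustrates the underlying idea with a first-order MAML-style loop on toy one-dimensional regression tasks; the robot imitation setting described above is far richer, and the task distribution, learning rates and linear model here are assumptions made purely for illustration.

```python
# First-order MAML-style sketch: learn an initialisation that adapts to a
# new task (y = a*x + b with task-specific a, b) from only five examples.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                       # shared initialisation [weight, bias]
alpha, beta, inner_steps = 0.3, 0.05, 3   # inner rate, outer rate, inner steps

def grad(params, x, y):
    """Gradient of mean-squared error for y_hat = params[0]*x + params[1]."""
    err = params[0] * x + params[1] - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

def adapt(params, x, y):
    """A few gradient steps on a task's small support set."""
    for _ in range(inner_steps):
        params = params - alpha * grad(params, x, y)
    return params

# Meta-training: sample tasks and update the initialisation so that a few
# inner steps already fit a new task (first-order approximation: the outer
# gradient is taken at the adapted parameters).
for _ in range(3000):
    a, b = rng.uniform(0.5, 1.5), rng.uniform(-1.0, 1.0)
    x_s, x_q = rng.uniform(-1, 1, (2, 5))            # support / query inputs
    theta_task = adapt(theta, x_s, a * x_s + b)
    theta = theta - beta * grad(theta_task, x_q, a * x_q + b)

# Adapting to an unseen task from only five labelled examples.
a, b = 1.2, 0.4
x_new = rng.uniform(-1, 1, 5)
print("meta-initialisation:", theta)
print("after adaptation   :", adapt(theta, x_new, a * x_new + b),
      "target:", (a, b))
```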

The Transparency Requirement

While AI has already been deployed in loan screening, recruitment and the prediction of repeat criminal offenses, it sometimes causes problems because its logic is not transparent. In one case, for instance, an AI used in a recruitment process was found to be applying logic that gave priority to men over women. To use AI in fields where lives are at stake, including self-driving cars and medical diagnosis, accountability is required to prevent such problems.

Against this background, the development of technologies that enable AI to explain the basis for its judgment is gaining momentum. One such effort is visual question answering, which requires AI to answer questions about a given image. For instance, to answer a question about what sport children in an image are playing, the AI must distinguish objects in the image such as the children and the type of ball. For images of this kind, AI can already visualize the basis for its judgment and verbalize an explanation.

In healthcare, researchers are experimenting with ways to present the patient data and physicians’ diagnostic notes that corroborate AI predictions of inpatient mortality risk and length of hospitalization. Because the factors involved in such decisions are complex and intertwined, it is crucial to present physicians with an objective basis for judgment. Elucidating this basis will likely empower people to make faster, more confident decisions based on AI-predicted results.
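As a rough illustration of surfacing the basis for a model's judgment, the sketch below ranks input features by permutation importance, one common model-agnostic technique; the public benchmark dataset used here is only a stand-in for the inpatient records discussed above.

```python
# Permutation importance: shuffle each feature in turn and measure how much
# the model's score drops. A large drop means the prediction leans heavily
# on that feature, giving a simple, objective basis to present to experts.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model relies on most.
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{data.feature_names[i]:<25} {result.importances_mean[i]:.3f}")
```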

Progress toward Socially Accepted AI

While the rapid advancement of AI technologies has brought convenience and efficiency, ethical issues have emerged, including privacy concerns, discrimination and apprehension over AI-based weapons. Indeed, the use of AI in a growing range of fields has increased the need to address AI-related ethics. In the future, companies will be required to assume social responsibility while creating innovations and new services using AI.

A world where humans and AI coexist will necessitate an ethical methodology for solving these types of issues. To this end, a project called the Moral Machine collects and visualizes opinions about the moral basis of AI decisions as they relate to self-driving cars. In making such judgments, different nations and cultures may have disparate ideas about the priority of decision criteria in the event of an accident, i.e., pedestrians, passengers, the number of people saved or a person’s age. Self-driving cars may even have to be equipped with decision priorities that vary by country. Establishing a social environment and principles through which governments, corporations and users can cooperate to make AI accepted will lead to a society where the advantages of AI can be experienced by all.
