Machines will expand our capabilities and help us better understand our internal state

Pattie Maes (MIT Media Lab)

Research Projects

As technologies such as IoT and wearable devices continue to evolve, they will begin to blend seamlessly into our lives. Professor Pattie Maes, who spearheads the Fluid Interfaces research group, is proposing 21st century devices that will not disrupt our lives or negatively impact our health and communication. Rather, they will fit into our lives and with our bodies and expand our creativity as well as our cognitive abilities. These devices cover a wide range of fields, from medicine to education, and with NTT DATA, the group is jointly researching digital therapeutics for people with Alzheimer’s and dementia. There is a lot of hope riding on this field of research, and we spoke with Professor Maes to learn what its future may look like.

Interfaces that enhance us and help us thrive

--Can you talk about your research concept?

The goal of the Fluid Interfaces research group is the seamless integration of person and machine, with a key focus on wearable devices. Everyone lives with devices today, whether it be a smartphone, a laptop, a tablet, or a smartwatch. And their impact on our lives is immeasurable. Some people have back and neck problems because they are using their laptops all the time, and these devices also impact sleep quality, mental health and our communication with others. Two people sitting together at a table are likely to be on their devices rather than talking to each other, and even just having a device on the table makes two people relate to each other less. So, devices today have many negative effects, but we are aspiring to do something about this by creating seamless interfaces that are designed to fit into people’s lives.

--What do such interfaces look like?

The key focus of interface design for smartphones and smart watches has been on delivering information. Much less focus has been placed on supporting other qualities that a person needs to lead a successful life, such as motivation, concentration, and creativity. Having said that, people nowadays live with their devices 24/7, so there is an opportunity for these devices to play a larger role and help us with reducing stress, changing bad behaviors, and enhancing memory, creativity and attention. We named our group “Fluid Interfaces” because we believe this is the next stage for personal devices: to seamlessly and fluidly support us in achieving our goals and becoming the individuals we want to be.

--More specifically, what kind of devices can we expect?

Our devices today require our complete attention to be used, thereby removing us from the current moment. But in the future, devices will be better integrated and require less task switching by instead augmenting our perception. Our glasses, for example, may be equipped with augmented reality and amplify our visual perception. Enhancing hearing is another interesting area. There are many noise-cancelling headphones available today, but what if headphones could reduce the background noise level and amplify your voice, helping you and others better concentrate on the conversation? So rather than simply blocking noise, they could help us pay attention to the things we want to pay attention to.

Another prototype we have been working on more recently is a system for silent speech input. What this means is that when we talk to ourselves and ask ourselves a question without speaking out loud, the system can do a Google search and whisper the answer in our ear, making access to information as natural and minimally disruptive as possible.
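
To make the idea concrete, here is a minimal sketch of what such a silent speech pipeline could look like, assuming three hypothetical components: a recognizer that decodes subvocal signals into a text query, a lookup step, and quiet audio playback of the answer. The names below are placeholders for illustration and do not refer to the actual prototype.

```python
# Hypothetical sketch of a silent-speech query pipeline (not the lab's actual code).

def recognize_silent_speech(signal: bytes) -> str:
    """Placeholder: decode subvocal (silent speech) signals into a text query."""
    return "capital of Belgium"

def look_up(query: str) -> str:
    """Placeholder: run a web search and return a short answer."""
    return "Brussels"

def whisper(answer: str) -> None:
    """Placeholder: play the answer quietly through an earpiece."""
    print(f"(whispered) {answer}")

def handle(signal: bytes) -> None:
    # Full loop: silent question in, whispered answer out.
    whisper(look_up(recognize_silent_speech(signal)))

handle(b"...")  # stand-in for a real sensor stream
```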

Having your very own personal “teacher” to help you day by day

--Every year NTT DATA publishes the NTT DATA Technology Foresight, which forecasts trends in the coming three to ten years. Within it there is a section called “Conversations with machines will become part of our daily lives” under the chapter “Natural Interaction.” It predicts that machines will understand the context of our conversations and the speaker’s emotions and will acquire the ability to converse on their own. It won’t be long before machines give us new awareness and help us think.

While a lot of the discussion around AI to date has had an adversarial slant and viewed AI as a threat to people, I believe AI has great potential to complement us. Integrated systems of people and AI that closely support and complement each other will outperform people or AI systems working alone. I anticipate that augmentation, be it physical or cognitive, will be a hot topic. It can be thought of as tools and designs that expand our physical or mental capabilities.
Technology, of course, already empowers us today, but with the development and deployment of AI systems we will be able to perform even better. Note that smart technology can also enhance us by helping people learn from other people.

For example, “Be My Eyes” is a service that lets the visually impaired ask for help from other people, thereby improving their quality of life and ability to live independently. So, let’s say a blind person is cooking and doesn’t know what temperature the oven has been set to. They can take a picture of the oven and upload it to the system’s server, and another person who has signed up to help and happens to be available at that moment inspects the picture and replies, “Your oven is set to 200℃.” Such real-time support from other people is another type of augmentation mediated by clever technology.

--Is the “Wearable Wisdom” project similar to that?

Yes. In Japanese, a teacher is “sensei,” right? But what if you had a teacher embedded in a pair of glasses, and it was naturally a part of your life? You always have a small sensei with you, giving you advice when you ask a question or have a thought. That sensei could be your grandmother passing on her wisdom in the moment, or an expert in the field offering relevant input regarding the problem you are dealing with. Technology enables access to a treasure trove of collective wisdom, but we still don’t have the right systems in place to mediate those connections and learning opportunities in real time. I have been working on this idea since before the World Wide Web even existed.

--Wouldn’t such a system be great for education as well?

I think so. It’s been a while since schools were invented, but schools aren’t the only places you learn. And besides, you often forget things you learned in school right away. But what if you discover something in real life and want to learn more about it, and you could call on a mentor to tell you more in that moment, when you are most motivated to learn? Let’s say you’re throwing a rock. If you could see the gravity, acceleration, and the projected path of the rock visualized while you throw it, you may become interested in Newtonian physics. I think what we need is a system that will give us the information we want right when we become interested in something to help further pique our interest and give us an opportunity to go deep and learn.

Professor Maes’ research lab


Digital therapeutics that help support people from the inside

There is a huge opportunity for the use of novel wearable devices in the area of healthcare as well. Today’s health wearables are primarily focused on collecting data such as our activity, heart rate and more. But the use of wearables for health interventions or therapy is underexplored. In our lab we developed a pair of glasses equipped with sensors that can measure EEG brainwave activity as well as EOG, or eye movements. These sensors can tell us whether the person is paying attention right now or if their mind may be wandering, in which case the system can gently intervene. Such intervention is effective for people with ADHD or ADD, or for anyone who has problems paying attention in today’s stimulating world.
So, for example, if someone’s mind starts to wander during a meeting because they’re bored, by providing auditory or haptic feedback, you can help remind the person to focus without anyone knowing. By receiving such regular feedback, people will also be able to better understand their own mental and emotional states so they can better control them.
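
As a rough illustration of the feedback loop described here, the sketch below assumes the glasses expose an attention estimate derived from the EEG/EOG sensors and can trigger a discreet haptic cue; the function names and threshold are hypothetical, not part of the actual prototype.

```python
import random
import time

ATTENTION_THRESHOLD = 0.4   # hypothetical cutoff below which we assume mind-wandering

def read_attention_score() -> float:
    """Placeholder for an attention estimate (0..1) computed from EEG/EOG features."""
    return random.random()

def haptic_nudge() -> None:
    """Placeholder for a discreet vibration or quiet tone that only the wearer notices."""
    print("gentle vibration: refocus")

def monitor(duration_s: float = 10.0, interval_s: float = 1.0) -> None:
    """Poll the attention estimate and nudge the wearer only when it drops too low."""
    end = time.time() + duration_s
    while time.time() < end:
        if read_attention_score() < ATTENTION_THRESHOLD:
            haptic_nudge()
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```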

Because these devices will record a person’s state continuously, they will be able to know when you are feeling anxious, for example, and will tell you to take some deep breaths to calm down, or may even influence your breathing rate in more subtle ways, for example by altering the volume of the music you listen to in a rhythmic pattern. They can always be by your side, helping you when you need it.

--The joint research project with NTT DATA, the “Digital Memory Book,” is for people diagnosed with dementia and Alzheimer’s?

Reminiscence therapy is a well-known form of therapy that treats people with dementia and Alzheimer’s by showing them pictures and videos from the past, thereby exercising their memories. We are designing a form of digital reminiscence therapy, in which memories of the past are tagged and structured using a semantic network and shown to patients so they can revisit their memories, explore links between people, places, events and more, and remain a little bit more connected.

Such digital reminiscence tools can be used both for monitoring the progress of the disease and for therapy. In addition, such digital memories may help people who feel isolated because they’re losing their memories feel more grounded. By offering memory support in the moment, for example when someone cannot remember the name of a person visiting them, they can help people regain their self-worth, while revisiting and rehearsing memories offline, for example in the form of a game, may also prevent their condition from worsening.
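
To illustrate the semantic-network idea, the sketch below links memories that share tags such as people, places, or events. The schema and example data are invented for illustration and are not the project’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    title: str
    year: int
    tags: set = field(default_factory=set)  # people, places, events, etc.

# A few illustrative memories, each tagged with the people, places, and events involved.
memories = [
    Memory("Wedding day", 1972, {"Anna", "Kyoto", "wedding"}),
    Memory("Grandson's first visit", 2005, {"Anna", "Ken", "home"}),
    Memory("Cherry blossom picnic", 1980, {"Kyoto", "spring"}),
]

def related(memory: Memory, collection: list) -> list:
    """Return other memories that share at least one tag with the given memory."""
    return [m for m in collection if m is not memory and m.tags & memory.tags]

# A patient (or caregiver) can start from one memory and explore its links.
for m in related(memories[0], memories):
    print(f"Linked to '{memories[0].title}': {m.title} ({m.year})")
```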

Technologies that “expand our capabilities” rather than simply being convenient

--These devices are becoming a part of our lives, but what do you think about the ethics of these new technologies?

What we want to create are technologies that augment our capabilities, that help integrate human and machine. But we must also really think about the social consequences and human consequences of the technologies that we develop. The data captured by wearable devices, in particular, is highly personal. It is key that we ensure that such data is kept private, and only controllable by the specific user, and that the users of these devices can understand how the system works and what its limitations are.

We always emphasize that the user should decide what kind of augmentation they make use of. It shouldn’t come from a big company or the government. It’s important that the users understand what value, what benefits these technologies have and how they work before they decide to use them.

--So, users need to have digital literacy as well.

Indeed, and in addition, it’s also very important that we do not become too dependent on these technologies. Because when we start to delegate certain tasks to machines, we will end up not being able to do them as well ourselves. If, say, a five-year-old child has a smartphone and uses the calculator to do calculations from the very beginning, they would probably never develop a good internal understanding of numbers and arithmetic. What is important is not whether their ability to calculate will decline, but the lack of understanding and the effect that may have on their ability to reason and think.

In other words, we need to carefully decide what types of tasks are okay to delegate to machines and what tasks we should still develop skills for ourselves, even though they may require a lot of time and effort to learn. What we hope is that these devices and interfaces will not just automate tasks on our behalf, but will also help us to learn something and internalize some ability or knowledge.

Professor Maes is holding a wearable device in her hands that emits different aromas depending on the user’s mood to help them regulate their emotions.

Discoveries that come from myriads of questions

--Application in the real world is key to any research. What are your thoughts about industry-academia collaboration?

The Media Lab is unique in having invented and adopted a form of industry-academia collaboration that is a win-win for both sides. We collaborate closely with companies all the time, and while they do not direct what we work on, their real-world perspective is highly valuable and ensures we focus on problems that matter. Companies want their investment in the research to ultimately lead to products and services, so research findings can be applied to the real world more quickly. Once we do that, we discover new challenges. We hope to continue this type of trial and error to address fundamental issues.

--You also have many people with different backgrounds in your research group at the Media Lab.

We believe the best innovations come from diverse teams and as such I built a team with people from a wide range of backgrounds including AI, electrical engineering, neuroscience, psychology, and design. We also believe that taking an art-and-science approach is absolutely essential. Great artists ask “Where is the world heading?” and make us realize what questions we should be asking ourselves. Art gives rise to interesting “questions,” and design transforms our research into an experience. This approach helps generate new ideas and create great work.

--You try to have diversity in terms of disciplines, but what about gender diversity?

We actually have dedicated diversity coordinators, who help attract women and support them with various issues they may encounter at work. They also help both men and women with careers after the Media Lab and mentor them on topics like how to put their resumes together. The first coordinator we hired was a woman, and thanks to her a lot of changes happened. Currently the gender ratio of students and researchers who come to the Media Lab is 50-50, and she was the one who made this possible.

--That’s excellent. Lastly, could you talk about the future vision of Fluid Interfaces?

I want everybody to be able to reach their full potential. Technology will never disappear from our personal lives, but it can become a more positive force, helping us lead healthier lives, offering us opportunities to learn and grow, and generally supporting us to make the most of our lives. We need technologies that can cater to people’s needs at different levels and abilities. I hope when that time comes, there will be less emphasis on screens that people stare at all the time and that pull them away, such as smartphones and PCs. I hope we will develop interfaces that represent a more natural and seamless augmentation of our being, and are designed to help us grow into the persons we want to be.

March 15, 2021

Cambridge, Massachusetts, United States