Emotional Machines!

Paula LaBrot

Of all people, George Orwell said, “Perhaps one did not want to be loved so much as to be understood.”

What a prophet! If he only knew how “understood” we are getting! I wonder what he would think of today’s hottest research topic in Artificial Intelligence: Affective AI.

Affective Computing came onto the scene with Dr. Rosalind W. Picard, the director of the Affective Computing Research Group at MIT. In 1997, Dr. Picard (so Star Trek) published a book on the subject, Affective Computing. She put technology together with medicine, psychology and neurology. The idea was to make human-machine interactions more natural. (That means more emotional, so just substitute emotional for affective.) To make Affective Computing more “human,” we rely on the concept of Machine Learning.

According to Wikipedia, Machine Learning is when a computer improves its performance through constant collection and analysis of data. With the vast amounts of data collected from anyone using social media or being tracked by every purchase they make or website they visit, we make Artificial Intelligence more sophisticated every day.
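To make that definition concrete, here is a toy sketch of the idea, not any real Affective AI product: a tiny “nearest-centroid” classifier that averages the labeled examples it has seen and uses those averages to label new data. The feature names and numbers are invented for illustration.

```python
# Toy machine-learning sketch (hypothetical data, not a real product):
# the program "learns" by averaging labeled examples, then classifies
# new readings by whichever average they sit closest to.

def train(examples):
    """Average the feature vectors seen for each label (the learning step)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(centroids, features):
    """Pick the label whose averaged example is closest to the new reading."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Invented "emotion" data: (heart rate, skin conductance) -> mood label.
data = [((62, 0.2), "calm"), ((60, 0.3), "calm"),
        ((95, 0.9), "stressed"), ((100, 0.8), "stressed")]
model = train(data)
print(predict(model, (58, 0.25)))   # -> calm
print(predict(model, (98, 0.85)))   # -> stressed
```

The more labeled examples you feed `train`, the better the averages, and the better the predictions: that feedback loop is what “improving performance through constant collection and analysis of data” means in practice.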


HOW DOES THAT HAPPEN?

By collecting and feeding lots and lots and lots of data about you and everyone else in the world into huge databases. With the speed and processing power of computers growing exponentially, machine learning is moving beyond human-supervised collection and analysis into the realm of unsupervised collection and analysis called Deep Learning, where the computer runs artificial neural networks that mimic the way human brains work. The machines start “thinking.” They “think” analytically and emotionally.
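The “neural network” idea can be shown with a single artificial neuron. This is a minimal illustrative sketch, nothing like a production Deep Learning system: one neuron adjusts its connection weights from examples, loosely mimicking how a brain cell strengthens connections.

```python
# One artificial neuron learning from data (illustrative sketch only).
import math

def sigmoid(x):
    """Squash any number into the range 0..1, like a neuron 'firing'."""
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, steps=5000, lr=0.5):
    """Nudge the neuron's weights toward the right answers (gradient descent)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        for inputs, target in samples:
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
            err = out - target          # how wrong the neuron was
            for i, xi in enumerate(inputs):
                w[i] -= lr * err * xi   # strengthen/weaken each connection
            b -= lr * err
    return w, b

# Teach it a simple pattern: output 1 only when both inputs are 1 (logical AND).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(samples)
for inputs, _ in samples:
    out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
    print(inputs, round(out))
```

Real deep networks stack millions of these neurons in layers, but the principle is the same: errors flow back and the weights shift until the machine’s answers match the data.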

The website emotionalAI.org suggests the kinds of data necessary for Affective AI: Facial Coding, Voice Analysis, and Galvanic Skin Response (GSR), gathered by sensors and wearable devices like your Apple Watch, which among them can measure heart rate, muscle tension, skin temperature, respiration, and brain activity.

Researchers have used Facial Analysis to spot struggling students in computer tutoring sessions. “Robotic teachers can be trained to recognize different levels of engagement, boredom and frustration, so that the system could know when the students were finding the work too easy or too difficult. This technology could be useful to modify and improve the learning experience in online platforms,” Independent.co.uk reports.

How about a robot on a jury watching someone who has sworn to tell the truth, the whole truth and nothing but the truth? No more need for polygraphs. How about a pair of glasses that warns you if a person you meet is dangerous? Or tells you a person is a great match for you?

Caution: Don’t play poker with a robot.

BeyondVerbal in Israel has developed Affective AI software based on Voice Analysis. The raw intonations of the human voice allow its app to “extract people’s moods, personalities and attitudes.” Think of what a wonderful tool this could be for an autistic person who struggles to read social signals on their own. Affective AI could act as an interpreter, cluing them into the nuances of human interaction that would otherwise be invisible.

Medically, BeyondVerbal has developed software to predict cardiovascular dysfunction before it strikes. By listening to a patient’s tone of voice, Mayo Clinic cardiologists can analyze the data for vocal biomarkers associated with atherosclerosis. This is an amazing tool for treatment, especially for people in remote areas, who need only speak into a phone to be on their way to early treatment.

Another program, Cogito Dialogue, analyzes voice signals in call centers to determine customer engagement and frustration, giving real-time feedback to employees and allowing them to adjust their approach. ZDNet thinks “this could have interesting implications in diplomacy and conflict resolution and is bound to improve our experience with AI assistants and robots.”

Regarding wearables like Fitbit and the Apple Watch, AI is, literally, a lifesaver. Information on your physiology will allow you to be warned of impending problems. Your device can and will summon help without being told to, in case you are unable to call or are unconscious. It will measure your sleep and warn you not to drive. It will recognize stress and remind you to calm down, giving you clear, logical, tailored-just-to-you advice about a situation you may find confusing. Based on its analysis of your physiologic reactions, it will pick out music, books, movies, maybe even your friends.
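A stress reminder like the one described above might boil down to a simple rule. The sketch below is purely hypothetical, not Apple’s or Fitbit’s actual algorithm, and the thresholds are invented: flag stress when the heart races while the wearer is barely moving (so it is not just exercise).

```python
# Hypothetical wearable "stress reminder" rule (invented thresholds,
# not any vendor's real algorithm).

def stress_alert(readings, hr_limit=100, motion_limit=0.2):
    """readings: list of (heart_rate_bpm, motion_level 0..1) samples.
    True if every sample shows a racing heart without physical activity."""
    return all(hr > hr_limit and motion < motion_limit
               for hr, motion in readings)

calm_walk = [(95, 0.8), (92, 0.7)]        # elevated heart rate, but moving
tense_desk = [(110, 0.05), (115, 0.1)]    # elevated heart rate while still
print(stress_alert(calm_walk))    # -> False
print(stress_alert(tense_desk))   # -> True
```

Real devices combine many more signals and smarter models, but the underlying pattern, sensor readings in, behavioral nudge out, is the same.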

In the future, Artificial Intelligence will become more and more human-like. Affective AI devices will know us better than we know ourselves. They will recognize and predict our needs and wants. They will be companionable, empathic, and able to make us feel almost anything a human could, often a lot better than a human could.

Affective AI is going to be deeply integrated into the human family.

We will all be very “understood,” Mr. Orwell.

We shall see!


Paula LaBrot is a 30-year resident of Topanga, a futurist with a special interest in the uncharted waters of cyberspace. plabrot@messengermountainnews.com
