MIT Technology Review

Recently, I took myself to one of my favorite places in New York City, the Public Library, to look at some of the hundreds of original letters, writings, and musings by Charles Darwin. The famous English scientist loved to write, and his curiosity and observational skill come alive on the pages.

In addition to proposing the theory of evolution, Darwin studied the expressions and emotions of people and animals. In his writings, he debated just how scientific, universal, and predictable emotions really are, and he sketched figures with exaggerated expressions, which the library had on display.

The topic rang a bell for me.

Lately, as everyone has been excited about ChatGPT, artificial general intelligence, and the prospect of bots taking over people’s jobs, I’ve noticed regulators ramping up warnings about AI and emotion recognition.

Emotion recognition, in this Darwinian vein, is the attempt to identify a person’s feelings or state of mind using AI analysis of video, facial images, or audio recordings.

The idea isn’t terribly complex: an AI model might see an open mouth, squinted eyes, and contracted cheeks with the head thrown back, for example, and register it as laughter, inferring that the subject is happy.
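To make that mapping concrete, here is a deliberately simplified, hypothetical sketch in Python. The cue names, thresholds, and rule are invented for illustration only; real commercial systems use trained machine-learning models rather than hand-written rules like this.

```python
# Toy illustration (not any real product): a rule-based sketch of how an
# emotion-recognition pipeline might map detected facial cues to a label.
# All cue names and thresholds below are invented assumptions.

from dataclasses import dataclass


@dataclass
class FacialCues:
    mouth_open: float      # 0.0-1.0, degree of mouth opening
    eyes_squinted: float   # 0.0-1.0, degree of eye narrowing
    cheeks_raised: float   # 0.0-1.0, cheek contraction
    head_tilt_back: float  # 0.0-1.0, how far the head is thrown back


def infer_emotion(cues: FacialCues) -> str:
    """Return a coarse emotion label from facial cues (toy heuristic)."""
    # The combination described above: open mouth, squinted eyes,
    # contracted cheeks, head thrown back -> register as laughter -> "happy".
    if (cues.mouth_open > 0.6 and cues.eyes_squinted > 0.5
            and cues.cheeks_raised > 0.5 and cues.head_tilt_back > 0.3):
        return "happy"
    # Real systems output probabilities over many labels instead of one guess.
    return "uncertain"


print(infer_emotion(FacialCues(0.8, 0.7, 0.6, 0.5)))  # -> "happy"
```

Even in this toy form, the fragility is visible: the same cues can come from a yawn, a sneeze, or a grimace, which is part of why critics question the underlying premise.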

But in practice, this is incredibly complex—and some argue that it’s a dangerous and invasive example of the kind of pseudoscience that AI often produces.

Some privacy and human rights advocates, such as European Digital Rights and Access Now, are calling for a blanket ban on emotion recognition. And while it stops short of an outright ban, the version of the European Union’s Artificial Intelligence Act approved by the European Parliament in June bans the use of emotion recognition in policing, border management, workplaces, and schools.

Meanwhile, some US lawmakers have called out this particular field, and it appears to be a likely contender in any eventual AI regulation. Senator Ron Wyden, one of the lawmakers leading the regulatory push, recently praised the European Union for tackling it and warned, “Your facial expressions, your eye movements, your tone of voice, and the way you walk are terrible ways to judge who you are or what you will do in the future. Yet millions and millions of dollars are being funneled into developing emotion-detection AI based on bunk science.”

But why is this a major concern? How valid are concerns about emotion recognition — and could strict regulation here really hurt positive innovation?

A few companies are already selling this technology for a variety of uses, although it is not yet widely deployed. Affectiva, for example, has been exploring how AI that analyzes people’s facial expressions could be used to determine whether a driver is tired and to gauge how people react to a movie trailer. Others, like HireVue, have sold emotion recognition as a way to screen promising job candidates (a practice that has drawn harsh criticism; you can listen to our investigative podcast on the company here).

“I am generally in favor of allowing the private sector to develop this technology. There are important applications, such as enabling the blind,” Daniel Castro, vice president of the Information Technology and Innovation Foundation, a DC-based think tank, told me in an email.

But other applications of the technology are more worrisome. Some companies sell software to law enforcement that tries to determine whether someone is lying, or that flags supposedly suspicious behavior.

A pilot project called iBorderCtrl, sponsored by the European Union, has offered a version of emotion recognition as part of its suite of technologies for managing border crossings. According to its website, the Automatic Deception Detection System “identifies the likelihood of deception in interviews by analyzing the subtle nonverbal gestures of interviewees” (though the site acknowledges “scientific controversy over its effectiveness”).

But the most well-known use (or misuse, in this case) of emotion recognition technology comes from China, and this one is undoubtedly on lawmakers’ radars.

The country has repeatedly used emotion AI for surveillance, particularly to monitor Uighurs in Xinjiang, according to a software engineer who claimed to have installed the systems in police stations. Emotion recognition was intended to identify a nervous or anxious “state of mind,” much like a polygraph. As one human rights advocate warned the BBC, “People who live in very coercive conditions, under enormous pressure, are understandably stressed, and that is taken as an indication of guilt.” Some schools in the country have also used the technology on students to measure comprehension and performance.

Ella Jakubowska, senior policy advisor at Brussels-based European Digital Rights, told me that she has yet to hear of “any reliable use case” for emotion recognition: “Both [facial recognition and emotion recognition] are about social control; about who watches and who is watched; about where we see a concentration of power.”

Furthermore, there is evidence that emotion recognition models simply cannot be accurate. Emotions are complicated, and even humans are often quite poor at recognizing them in others. Even as the technology has improved in recent years, thanks to the availability of more and better data as well as increased computing power, its accuracy varies widely depending on what outcomes the system aims for and how good the data going into it is.

“Technology is not perfect,” Castro told me, “though that probably has less to do with the limitations of computer vision and more to do with the fact that human emotions are complex, vary based on culture and context, and are imprecise.”

A composite of images of three crying children taken by Oskar Gustav Rejlander, the photographer who worked with Darwin to capture human expression, with rectangular boxes like those used to train AI overlaid on their faces.

Which brings me back to Darwin. The primary tension in this field is whether science can actually identify emotions. We may see progress in affective computing as the basic science of emotion continues to advance, or maybe we won’t.

It’s something of a parable for this broader moment in artificial intelligence. The technology is going through a period of intense hype, and the idea that AI can make the world more knowable and predictable is appealing. But as AI expert Meredith Broussard has asked, can everything really be distilled into a math problem?

What else I’m reading

  • Political bias seeps into AI language models, according to new research covered this week by my colleague Melissa Heikkilä. Some models lean more to the right and others more to the left, and a truly unbiased model might be out of reach, some researchers say.
  • Steven Lee Myers of the New York Times has a fascinating long read on how Sweden has fended off the Kremlin’s targeted online information operations, which aim to sow division in the Scandinavian country as it works toward joining NATO.
  • Kate Lindsay wrote a beautiful reflection in the Atlantic on the changing nature of death in the digital age. Long-lived emails, texts, and social media posts outlive our loved ones, altering grief and memory. (If you’re interested, I wrote a few months ago about how this shift relates to changes in deletion policies at Google and Twitter.)

What I learned this week

A new study from researchers in Switzerland finds that news is highly valuable to Google Search and accounts for the bulk of its revenue. The results offer some optimism about the economics of news and publishing, especially if you, like me, care deeply about the future of journalism. Courtney Radsch wrote about the study in one of my favorite publications, Tech Policy Press. (On a related note, you should also read this sharp piece on how to fix local news from Steven Waldman in the Atlantic.)
