
It’s time to regulate the AI that interprets human emotions



During the pandemic, technology companies have been pitching their emotion-recognition software for remotely monitoring workers and even children. Take, for example, a system called 4 Little Trees. Developed in Hong Kong, the program claims to assess children’s emotions during class. It maps facial features to assign each pupil’s emotional state to a category such as happiness, sadness, anger, disgust, surprise or fear. It also claims to gauge “motivation” and to predict performance. Similar tools are already on the market to provide surveillance of remote workers. By one estimate, the emotion-recognition industry will grow to US$37 billion by 2026.
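To make the categorical mapping described above concrete, here is a minimal, purely hypothetical sketch: a vector of facial measurements is mapped to one of the six labels. The feature set, training data and model are illustrative assumptions only, and do not represent 4 Little Trees or any commercial product.

```python
# Hypothetical illustration of a categorical emotion classifier.
# Features, data and model are invented; this is not any vendor's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

EKMAN_LABELS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

# Stand-in training data: each row is a vector of facial measurements
# (e.g. brow raise, lip-corner pull); each label indexes EKMAN_LABELS.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 10))                # 300 faces, 10 geometric features
y_train = rng.integers(0, len(EKMAN_LABELS), 300)   # arbitrary labels for the sketch

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def predict_emotion(features: np.ndarray) -> str:
    """Return the single categorical label the model assigns to one face."""
    return EKMAN_LABELS[int(model.predict(features.reshape(1, -1))[0])]

print(predict_emotion(rng.normal(size=10)))         # e.g. "anger"
```

The point of the sketch is the design choice it embodies: whatever the person is actually experiencing, the output is forced into one of six predefined boxes.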

There is scientific disagreement about whether AI can detect emotions. A 2019 review found no reliable evidence for it, concluding that technology companies “may well be asking a question that is fundamentally wrong” (L. F. Barrett et al. Psychol. Sci. Public Interest 20, 1–68; 2019).

Scientific concern about the use and misuse of these technologies is growing. Last year, Rosalind Picard, who co-founded the Boston-based artificial-intelligence start-up Affectiva and leads the Affective Computing Research Group at the Massachusetts Institute of Technology in Cambridge, voiced her support for regulation. Researchers have called for mandatory, rigorous auditing of all AI technologies used in hiring, along with public disclosure of the findings. In March, a citizen panel convened by the Ada Lovelace Institute in London said that an independent legal body should oversee the development and deployment of biometric technologies (see go.nature.com/3cejmtk). Such oversight is essential to defend against systems driven by what I call the physiognomic impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal.

Countries around the world have regulations that enforce scientific rigour in developing medicines to treat the body. Tools that make claims about our minds deserve at least the same protection. For years, scholars have called for federal entities to regulate robotics and facial recognition; that should extend to emotion recognition, too. Now is the time for national regulators to guard against unproven applications, especially those targeting children and other vulnerable groups.

Experience from clinical trials shows why regulation matters. Federal requirements, and subsequent advocacy, have made far more clinical-trial data publicly available and subjected it to rigorous verification. This has become the basis for better policymaking and public trust. Regulation of emotion-recognition technology would bring similar benefits and accountability. It could also help establish norms to counter over-reach by companies and governments.

The polygraph is a useful parallel. This “lie detector” test was invented in the 1920s and used by the FBI and the US military for decades, producing inconsistent results that harmed thousands of people, until federal law largely barred its use. It was not until 1998 that the US Supreme Court concluded that “there is simply no consensus that polygraph evidence is reliable”.

A formative figure behind the claim that there are universal facial expressions of emotion is the psychologist Paul Ekman. In the 1960s, he travelled the highlands of Papua New Guinea to test his controversial hypothesis that all humans exhibit a small number of innate, cross-culturally consistent “universal” emotions. Early on, the anthropologist Margaret Mead disputed this idea, arguing that it discounted context, culture and social factors.

Yet the six emotions that Ekman described fit neatly into the model of the emerging field of computer vision. As I wrote in my 2021 book Atlas of AI, his theory was adopted because it fitted what the tools could do: six consistent emotions could be standardized and automated at scale, as long as the more complex questions were ignored. After the terrorist attacks of September 11, 2001, Ekman sold his system to the US Transportation Security Administration to assess which airline passengers were showing fear or stress, and so might be terrorists. It was strongly criticized for lacking credibility and for racial bias. Yet many of today’s tools, such as 4 Little Trees, are based on Ekman’s six emotion categories. (Ekman maintains that faces do convey universal emotions, but says he has seen no evidence that automated technologies work.)

Yet companies continue to sell software that affects people’s opportunities without clearly documented, independently audited evidence that it works. Job applicants are being judged unfairly because their facial expressions or vocal tones don’t match those of employees; students are being flagged at school because their faces seem angry. Researchers have also shown that facial-recognition software interprets Black faces as having more negative emotions than white faces.

We can no longer allow emotion-recognition technologies to go unregulated. It is time for legislative protection against unproven uses of these tools in all domains: education, health care, employment and criminal justice. These safeguards will recentre rigorous science and reject the myth that internal states are just another data set that can be scraped from our faces.

