
10 Crazy AI Controversies… So Far

The last few years have seen rapid improvement in various types of artificial intelligence, many of which are now widely used by both businesses and the general public. However, as with most disruptive new technologies, the potential benefits of AI must be balanced against the risks it poses.

Some are unsure whether this can be done. After all, not many new technologies come with the risk of becoming smarter than the people who created them. Happily, that has not happened yet, but AI has already been at the center of numerous controversies, crimes, and scandals on its way to where it is today. There will likely be many more to come, but here are ten of the most interesting and surprising AI controversies so far.

Related: 10 Ways Artificial Intelligence Is Revolutionizing Healthcare

10 The Wizard of Oz Technique


In the late 2010s, several tech firms were exposed for employing humans to do tasks that their cutting-edge AI was claimed, or at least implied, to be doing. Some described this practice as “pseudo-AI,” but others named it the “Wizard of Oz technique” in reference to the moment in the classic film when the curtain is pulled back to reveal that the giant, fiery wizard is really just an old man operating a machine.

Companies caught using the technique included Facebook, whose “virtual assistant” M relied partly on human workers, the expense management app Expensify, and the scheduling services X.ai and Clara. As early as 2008, a voicemail-to-text conversion service called Spinvox was said to be hiring overseas workers to transcribe the audio instead of using its software.

This does not necessarily mean these services were scams. Using humans behind the scenes helps firms make sure there is enough demand for a service before spending money to automate it. However, the practice raises many privacy concerns: in many cases, people’s private communications were handed over to other people without their knowledge.[1]

9 AI Interrogation


With shady projects like MK-Ultra on its record, it is probably no surprise that the CIA was one of the earliest organizations to look into AI. Papers show that it was carrying out tests with basic AI as long ago as the early 1980s. In 1983, for example, it used a crude piece of software called “Analiza” to try to interrogate one of its own agents.

The basic idea was that the program would remember the agent’s answers and then select a fitting reply, threat, or question from a bank it had saved. It was crude by today’s standards but probably more sophisticated than many would think for the time. Like a real interrogator, it would look for the agent’s vulnerabilities. It also assessed things such as how much he talked and how hostile he was.
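
Descriptions of the test suggest Analiza was essentially a rule-based chatbot: track a few simple signals, then pick a canned response. Here is a minimal sketch in Python of that general approach. The reply bank, keyword list, and classification rules are all invented for illustration; Analiza’s actual code and phrasing are not public.

```python
import random

# Invented reply bank keyed by the program's read of the subject.
# Analiza's real rules are not public; this is illustrative only.
REPLY_BANK = {
    "hostile": [
        "Your attitude is noted. It will not help you.",
        "We have plenty of time. You do not.",
    ],
    "quiet": [
        "Silence is also an answer.",
        "Let's go back to where you were last Tuesday.",
    ],
    "talkative": [
        "Earlier you said something different. Which is true?",
        "Go on. Tell me more about that.",
    ],
}

HOSTILE_WORDS = {"no", "never", "refuse", "won't"}  # toy keyword list

def assess(answer: str) -> str:
    """Classify a single answer, loosely mirroring how Analiza reportedly
    tracked how much the agent talked and how hostile he was."""
    words = [w.strip(".,!?") for w in answer.lower().split()]
    if any(w in HOSTILE_WORDS for w in words):
        return "hostile"
    if len(words) < 4:
        return "quiet"
    return "talkative"

def interrogate() -> None:
    history = []  # remember every answer, as the original reportedly did
    while True:
        answer = input("> ")
        if not answer:
            break
        history.append(answer)
        print(random.choice(REPLY_BANK[assess(answer)]))

if __name__ == "__main__":
    interrogate()
```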

It is unclear whether the CIA has continued to work on AI interrogators. While they would likely be less violent than other ways of getting answers out of people, concerns have been raised about taking away the final layer of humanity from what is already a very cruel and cold process, leaving captives with nobody to even plead with.[2]

8 North Korean Job Applications


The CIA might have been one of the earliest state actors to take AI seriously, but now governments all over the world are looking into it, and not always with good intentions. One country that has been using it nefariously is North Korea, whose intelligence services are believed to have been using AI to generate thousands of applications for remote jobs in the U.S.

AI automation tools help operatives send out hundreds of job applications under different identities, and in many cases they actually land and do one or more of the jobs. The income they earn is then funneled back to the regime. One of the U.S. companies they targeted was a tech startup called Cinder, which turned out to be a pretty poor choice because it is run by former intelligence officials.

Cinder drew media attention to the problem, but other firms that are probably less on the ball are believed to have been affected. Even small mom-and-pop businesses have been targeted, and the U.S. government has said that some of these North Korean workers make as much as $300,000 a year, which translates to hundreds of millions of dollars for the regime overall.[3]

7 Deepfake Scams


One of the most concerning developments in AI is how realistic deepfakes have become. These are videos in which the face, and often the voice and body, of somebody else is digitally superimposed over the person actually being filmed. This can now be done so convincingly that it is easy for scammers to pose as people’s family members or colleagues, and some have already been doing so very successfully.

In early 2024, the story broke that a finance worker at the large engineering firm Arup had unknowingly sent $25 million to scammers using the technology. He said he first received an email claiming to be from the company’s chief financial officer, and he was initially suspicious because it asked for secret transactions to be made.

However, he stopped worrying and made the transfers after a video call with people he believed to be his colleagues and the CFO. They were actually all deepfake recreations. The large sum taken by the scammers meant the case was very widely reported, but there could be many more like it on a smaller scale.[4]

6 The Hollywood “Double Strike”


In 2023, new TV and film releases were put on hold because both writers and actors went on strike, in large part to protect their careers against the existential threat posed by AI. Writers feared a future where entire scripts would be written by large language models like ChatGPT or where executives generated source material using AI and then paid writers to adapt it. Writers typically get less money and credit for adaptations than for original material.

Eventually, the studios agreed that these things would not happen. Writers could not be forced to use AI but could choose to use it if they wanted, and they would receive no less credit or payment for doing so. As for actors, their main concerns were related to AI technologies such as deepfakes, which meant their likeness could be captured on film just once and then reproduced by AI indefinitely. The deal secured by their union says that studios have to ask for actors’ consent to do this, which presumably would not be forthcoming without a large payment.

These were good outcomes for those industries, but they are far from the only professions threatened with replacement by AI. There could be similar lengthy strikes in the future.[5]

5 Copyright Theft


“Training” is AI’s analog of human education, and it likewise requires materials in the form of words, images, or sounds. The trouble is that, once trained, AI can remember those materials far more accurately than people can. While a human can only give you a short summary of a novel they have read, an AI could potentially reproduce it word for word, or rewrite the entire book and all its details in different words.

This is not exactly fair to the original author, which is why copyright laws exist. However, some AI firms have been accused of flagrantly disregarding the copyright of the content that they use to train their models. Many argue that it is okay to use the work without the creator’s consent because training AI is “fair use.” This includes the head of Microsoft AI, Mustafa Suleyman, who stated in an interview that he believes there is a “social contract,” which means that anything published online is effectively free for anyone to copy or recreate.[6]

4 Hallucinations


Much of the material that companies use to train AI is found online. As impressive as it is that AI can quickly retrieve and reproduce parts of this material in response to prompts, one obvious obstacle is that an awful lot of what is said online is simply not true. On top of that, language models generate text by predicting plausible continuations rather than by looking facts up. The result is that some AIs spout convincing-sounding information that is entirely fictional.

One Canadian lawyer discovered this to her cost in early 2024 after using ChatGPT to help with legal research. Advocating for a father in a child custody case, she asked ChatGPT to find previous cases with similar circumstances. It suggested three, and she submitted two of them to the court. The lawyers for the mother, however, could not find any record of those cases anywhere.

It turned out that the AI had made them up in an example of a type of error called a “hallucination.” Luckily, the judge put it down to naivety and did not believe that the lawyer had intended to mislead the court. However, he did express his concern that if such things went unchecked, they could lead to a miscarriage of justice.[7]

3 Hiring and Firing

Many people are worried about their jobs being taken by AI, but so far, it is still unable to replace humans in most roles. However, although AI cannot do most jobs itself, it can take them away from one person and hand them to another. Amazon is one company that has been accused of using technology this way. It uses an automated system to track workers’ productivity and efficiency, which is said to send out warnings and even fire people if they fall below targets.
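
Reports describe the system as essentially threshold-based: measure a rate, compare it to a target, and escalate automatically. Below is a minimal sketch in Python of how such a rule might work. The metric, target, and warning counts are invented for illustration; none of this is based on Amazon’s actual system, and this sketch only flags workers for human review rather than firing anyone itself.

```python
from dataclasses import dataclass, field

# Illustrative numbers only; Amazon's real metrics and cutoffs are not public.
RATE_TARGET = 100                 # hypothetical units processed per hour
WARNINGS_BEFORE_FLAGGING = 3      # hypothetical escalation point

@dataclass
class WorkerRecord:
    name: str
    warnings: int = 0
    flagged: bool = False
    history: list = field(default_factory=list)

def log_shift(worker: WorkerRecord, units_per_hour: float) -> str:
    """Apply a simple threshold rule after each shift."""
    worker.history.append(units_per_hour)
    if units_per_hour >= RATE_TARGET:
        return "on target"
    worker.warnings += 1
    if worker.warnings >= WARNINGS_BEFORE_FLAGGING:
        # Amazon says supervisors can override; here the system only
        # flags for review and never terminates anyone on its own.
        worker.flagged = True
        return "flagged for human review"
    return f"automated warning {worker.warnings} of {WARNINGS_BEFORE_FLAGGING}"

worker = WorkerRecord("A. Worker")
for rate in [110, 92, 88, 85]:
    print(log_shift(worker, rate))  # on target, warning 1, warning 2, flagged
```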

Critics are concerned that, as with the CIA’s AI interrogator, employees are left without the last resort of a sympathetic human to plead with. Amazon has defended itself by saying that employees are first placed on a training plan, that there is an appeal process, and that supervisors can override the system. However, employees have said that the automated supervision makes them feel like robots instead of people.

The pressure is so high that many have felt forced to skip bathroom and prayer breaks. Documents show that one Amazon warehouse fired around 10% of its full-time workforce for productivity reasons between 2017 and 2018.[8]

2 Racial Disparity


One application of AI that is already here to stay in many countries is facial recognition. The technology improved rapidly in the 2010s, with research showing that between 2010 and 2018, it got 25 times better at picking the right person out of a large database. However, studies also showed that facial recognition algorithms kept running into the same problem: the error rate for black faces was much higher than for white ones. In fact, it could be as much as 10 times greater.

One study found that, when matching photos of women, the false match rate for white faces was around one in 10,000, while for black faces it was one in 1,000. The algorithm in that case came from a leading French security firm called Idemia, but systems made by Amazon, Microsoft, and IBM were reportedly also less accurate on darker skin.
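
To see why a tenfold difference in false match rates matters at scale, here is a hedged back-of-envelope calculation in Python. It assumes a hypothetical gallery of one million enrolled faces and treats each comparison as independent, which is a simplification, but the arithmetic shows how quickly false matches accumulate.

```python
# Back-of-envelope only: expected false matches per one-to-many search,
# using the rates reported in the study and a made-up gallery size.
GALLERY_SIZE = 1_000_000  # hypothetical database of enrolled faces

rates = {"white women": 1 / 10_000, "black women": 1 / 1_000}
for group, fmr in rates.items():
    expected = GALLERY_SIZE * fmr  # assumes independent comparisons
    print(f"{group}: ~{expected:,.0f} expected false matches per search")
```

In other words, a search that would wrongly flag around 100 people in one group would flag around 1,000 in the other.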

Concerns that government use of facial recognition could entrench this racial disparity prompted a lot of criticism, and some places, such as San Francisco, banned the technology from being used.[9]

1 Bad for Well-Being


Although AI founders and devotees have been very vocal about how the technology will shape the future and do all sorts of good for the world, the evidence so far is not so optimistic. One study published in February 2024 asked more than 6,000 people about the impact of different types of technology, including AI, on their lives. Those with greater exposure to AI reported worse health and well-being.

This was in line with previous studies, and although the authors did not look for specific reasons to explain the outcome, they did suggest that job insecurity and loss of autonomy might be among them. However, while the impact of AI on people’s lives is negative right now, there might still be hope for the future.

The same study showed that technologies that have been around for a while, such as laptops and instant messaging, actually had a positive effect on well-being. Environmental conditions and the way technology is designed and used also play a role. In time, AI technologies might find a better way to fit into people’s lives.[10]




fact checked by Darci Heikkinen
