Hollywood actors strike over the use of AI in movies
Artificial intelligence can now create images, novels, and source code seemingly from scratch. Except it isn't really from scratch, because training these AI models requires vast amounts of human-generated examples. This has angered artists, programmers, and writers, and led to a series of lawsuits.
Hollywood actors are the latest group of creatives to take on AI. They fear that movie studios could take control of their likenesses, making them “star” in films without ever setting foot on set, perhaps taking on roles they would rather avoid, or saying lines and performing scenes they find offensive. Worse still, they might not even be paid for it.
That's why the 160,000-member Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) is on strike until it can negotiate AI rights with the studios.
At the same time, Netflix has been criticized by actors for posting job listings for people with AI experience, offering salaries of up to $900,000.
AI trained on AI-generated images produces glitches and blur
On the subject of training data, we wrote last year that the proliferation of AI-generated images could become a problem if they end up online in large numbers, because new AI models scouring the web for training images will inevitably scoop them up. Experts warned that the end result would be a deterioration in quality: at the risk of a dated reference, AI would slowly destroy itself, like a degraded copy of a copy of a copy.
Fast forward a year and that is exactly what appears to be happening, with another group of researchers issuing the same warning. A team at Rice University in Texas found evidence that when AI-generated images make their way into training data in large numbers, they slowly distort the output. But there is hope: the researchers found that the degradation can be halted by keeping the proportion of such images below a certain level.
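To see why this feedback loop matters, here is a minimal toy sketch, purely our own illustration rather than the Rice team's method: the “model” is just a Gaussian fitted to its training data, retrained generation after generation on a mix of its own samples and real data. The function name and the numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=50_000)  # stand-in for "real" images

def run_generations(synthetic_fraction, generations=500, sample_size=100):
    """Fit a Gaussian 'model', then repeatedly retrain it on a mix of its own
    samples and real data. Returns the fitted spread after each generation;
    degradation shows up as the spread drifting away from the true value of 1.0."""
    mu, sigma = 0.0, 1.0
    history = []
    for _ in range(generations):
        n_synth = int(sample_size * synthetic_fraction)
        synthetic = rng.normal(mu, sigma, size=n_synth)  # the model's own output
        real = rng.choice(real_data, size=sample_size - n_synth, replace=False)
        training_set = np.concatenate([synthetic, real])
        mu, sigma = training_set.mean(), training_set.std()  # "retrain" the model
        history.append(sigma)
    return history

print("final spread, 100% synthetic data:", round(run_generations(1.0)[-1], 3))
print("final spread,  20% synthetic data:", round(run_generations(0.2)[-1], 3))
```

When every generation trains only on the previous generation's output, the fitted spread typically drifts far from the true value; keep most of the data real and it stays close to 1.0, which echoes the finding that the damage can be contained by limiting the share of generated images.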
Is ChatGPT having a hard time with math problems?
Corrupted training data is just one reason why an AI can start to break down. One study this month claimed that ChatGPT is getting worse at math problems. When asked to check whether 500 numbers were prime, the version of GPT-4 released in March scored 98 percent accuracy, while the version released in June scored just 2.4 percent. Curiously, GPT-3.5's accuracy appears to have moved the other way, jumping from just 7.4 percent in March to almost 87 percent in June.
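The study's exact prompts and grading script are not reproduced here, but a primality test is an appealing benchmark because the ground truth is cheap to compute. As a rough sketch of such an evaluation harness, with `ask_model` as a placeholder for whichever chat API is under test:

```python
import random
from sympy import isprime  # ground-truth primality check

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to the chatbot being evaluated and return
    its reply. Swap in a real API call here."""
    raise NotImplementedError

def primality_accuracy(n_questions: int = 500, seed: int = 42) -> float:
    """Ask the model whether each of n_questions random integers is prime and
    return the fraction of answers that match the ground truth."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_questions):
        n = rng.randrange(10_000, 100_000)
        reply = ask_model(f"Is {n} a prime number? Answer yes or no.")
        model_says_prime = reply.strip().lower().startswith("yes")
        if model_says_prime == isprime(n):
            correct += 1
    return correct / n_questions
```

Running the same harness against two snapshots of a model is what lets researchers put numbers like 98 percent and 2.4 percent side by side.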
In another study, Arvind Narayanan at Princeton University found other shifts in performance and attributed the problem to “an unintended side effect of the tweaks”. Essentially, the creators of these models fine-tune them to make the output more reliable, more accurate, or less computationally intensive in order to reduce costs, and while this may improve some things, it can hurt performance on other tasks. As a result, an AI that works well now may perform significantly worse in a future version, and it may not be clear why.
Using larger AI training datasets can produce more racist results
It's no secret that many of the recent advances in AI have come simply from scale: larger models, more training data, and more computing power. This has made AI expensive, unwieldy, and resource-hungry, but it has also made it far more capable.
There is certainly plenty of research into scaling AI models down to make them more efficient, and into better ways of advancing the field, but scale remains a big part of the game.
But there is now evidence that this can have serious downsides, including making models more racist. Researchers ran an experiment on two open source datasets, one containing 400 million samples and the other 2 billion. They found that models trained on the larger dataset were more than twice as likely to associate Black women's faces with a “criminal” category, and five times as likely to do so for Black men's faces.
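As a rough illustration of how such associations are measured, here is a sketch of zero-shot classification with a CLIP-style model; the checkpoint name and label prompts below are placeholders, not the study's actual setup.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Any CLIP-style checkpoint works as a stand-in; the study compared models
# trained on datasets of different sizes.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a doctor", "a photo of a teacher", "a photo of a criminal"]

def classify(image_path: str) -> str:
    """Return the label the model judges most similar to the image."""
    image = Image.open(image_path)
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-to-text similarity scores
    probs = logits.softmax(dim=-1).squeeze(0)
    return labels[int(probs.argmax())]
```

Running a classifier like this over sets of face photos and counting how often each demographic group lands in the “criminal” bucket yields the kind of association rates the researchers compared across dataset sizes.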
Drones equipped with an AI targeting system claimed to be “better than humans”
Earlier this year, we covered the bizarre story of an AI-powered drone that supposedly “killed” its pilot in order to reach its intended target, which turned out to be complete nonsense. The US Air Force quickly denied the story, but that did little to stop it being reported around the world.
Now there are new claims that AI models can identify targets better than humans can, although the details are said to be too secret to reveal, and so cannot be verified.
A spokesperson for the company that developed the software said: “We can see if people are wearing a certain type of uniform, if they are carrying a weapon, if they are showing signs of surrender.” Let's hope they are right, and that AI turns out to be better at waging war than it is at identifying prime numbers.
If you enjoyed this roundup of AI news, try our special series exploring the most pressing questions about artificial intelligence. You can find them all here.
How does ChatGPT work? | What generative AI really means for the economy | The real risks posed by AI | How to use AI to simplify your life | The scientific challenges AI helps solve | Can AI have consciousness?