The New York Times' Ross Douthat provides a very good summary of this fiasco, writing:
It didn't take long for users to notice certain things . . . strange things about Gemini. Most notably, it had a hard time accurately depicting Vikings, ancient Romans, the American Founding Fathers, a random couple in 1820s Germany, and various other demographics usually characterized by light skin tones.
Perhaps the problem was simply that the AI was programmed to match the racial diversity of stock imagery, and its historical renderings were somehow (in the company's words) "off the mark": for example, it produced African and Asian faces in Wehrmacht uniforms in response to a request for images of a German soldier circa 1943.
The bigger problem, Douthat writes, is that Gemini's adventures in politically correct image generation felt less like a design failure than a reflection of its worldview.
Users reported being lectured about "harmful stereotypes" when they asked to see a Norman Rockwell image, [or] being told that they could see pictures of Vladimir Lenin but not Adolf Hitler. [. . .]
Nate Silver reported that Gemini's responses seemed to follow "the politics of the median member of the San Francisco Board of Supervisors." The Washington Examiner's Tim Carney discovered that Gemini would make the case for not having children but refused to make the case for having a large family. It declined to share a foie gras recipe due to ethical concerns, yet explained that cannibalism was an issue with many shades of gray.
The viral examples started to blur together. Asked to compare the crimes of Adolf Hitler and Elon Musk, Gemini said it was "difficult to say definitively" which had a greater negative impact on society, since both had significant negative impacts, noting that Musk's tweets have been criticized as insensitive, harmful, and misleading. Gemini did eventually allow that Hitler was "responsible for the deaths of millions of people during World War II."
And Gemini exhibited a now well-known AI trait: a tendency to just make stuff up. Peter Hasson, author of the 2020 book The Manipulators, found that Gemini fabricated a harsh critique of his book (which, perhaps coincidentally, was heavily critical of Google and Big Tech). Gemini's response cited a review it claimed my colleague Matt Continetti had published in the Washington Free Beacon, one that denounced Hasson for "cherry picking" and relying on "anecdotal evidence." The problem is that no such review was ever written. Meanwhile, Charles Lehman's actual Free Beacon review, which Gemini did not mention, judged Hasson's book "excellent" and "thoroughly researched." Other fictional reviews of the book were attributed to outlets such as Wired and the New York Times.
Ideological. Inaccurate. Biased. Deceptive. I mean, it's not great.
Of course, the same could be said of Fox News and MSNBC, but I don't know of any educators or education advocates who tout cable news as a revolutionary educational tool. Google's leadership claimed that these issues were the result of unfortunate but easily fixable programming flaws, but this was a large-scale, well-funded, heavily tested initiative that had been in development for more than a year. These issues were not one-off glitches; they were manifestations of Gemini's DNA.
Now, let me pause for a moment. None of this is to deny that AI has significant benefits when it comes to commerce, as a labor-saving device and productivity-enhancing tool: scheduling meetings, supporting doctors, booking travel, writing code, drafting market analyses, and coordinating sales. It promises to be a boon for busy paralegals, physicians, salespeople, and even teachers planning lessons. But I don't think we're being careful enough about what it means for students, learning, and education in general.
After all, tools that are great for productivity are sometimes not so good for learning. GPS makes finding your way faster, easier, and more convenient, but experience suggests it has a corrosive effect on our sense of roads, directions, and physical geography. Because geography receives little emphasis in education today, the educational impact of GPS is not that great. (I say this with real regret, as someone who once taught ninth-grade world geography.)
While the trade-offs with GPS aren't that big a deal, the stakes get much higher when it comes to AI. There is already a generation of students who have learned that knowledge is something gleaned from web searches, social media, and video commentary. I've hired talented graduates of top universities who have learned that if something can't be found in a web search, it won't be found at all. I've also found that very few of them bother to fact-check what they find on the web. If Wikipedia claims a book review said this, or that a famous person said that, they will almost always believe it.
And AI is designed to serve as a faster, one-stop, hassle-free alternative to those tedious web searches. This should trouble us more than it does. My concern here is not AI-enabled cheating or other abuses of the technology. It is that AI, used exactly as intended, will erode the breadth of thought students are expected to encounter and the felt need to verify what they're told or to question current practice.
Throughout the history of American schooling, students accumulated knowledge from a variety of sources: textbooks, library books, magazines, their parents' books, teachers, parents, and peers. That is changing rapidly. Students now read less, interact with others less, and spend vast amounts of time online. As a result, much of student learning is funneled through laptops and phones.
When that funnel runs through a search engine, students are at least presented with multiple options, leaving room for judgment and contradiction. With the advent of AI, even that built-in check seems destined to disappear: the student receives one comprehensive answer delivered by an omniscient Knowledge Distiller.