At first I laughed, but then I thought, "Hmm." An editor sent me a link to an article titled "Google tests AI tools that can create news stories," and what made it interesting was that she followed it up with, "LOL." My pride as a human writer made me think a little more deeply about it: the real question is not whether generative AI can write news articles (it already can), but what that means for news readers and for news/information consumption in general.
Could it be true? Ten years from now, will I be sitting alone in a shabby city apartment, penniless, with only my VR goggles for company, reading machine-generated LLM news articles about how Bitcoin and the BSV blockchain will power the digital economy of the future? I might even be able to use those goggles to hold live, interactive discussions with artificially generated news anchors, who would deliver the completely unbiased updates and analysis I need to stay optimistic.
News writing seems like a perfect match for LLMs and machine learning. It's highly formulaic: the first paragraph contains the "hook" and the most important points; the second, the "nut graph," explains why the article exists; the rest of the piece fills in supporting details and quotes; and the final graph (which very few people ever read) wraps it all up. Even when humans write news articles, it often feels like muscle memory at work more than actual brainpower or creativity (am I exaggerating here?).
So the first thing I did was run a test, asking ChatGPT: "Please write a 600-word news article in the style of Bloomberg News about how artificial intelligence and LLMs will soon be able to write news articles."
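For the curious, the same experiment is easy to script rather than type into the chat window. Below is a minimal sketch using OpenAI's official Python client; it assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set, and the model name is illustrative rather than the exact model behind the ChatGPT session I used.

```python
# A minimal sketch of the same experiment via OpenAI's Python API.
# Assumptions: `pip install openai` has been run and OPENAI_API_KEY
# is set in the environment. The model name is illustrative; any
# chat-capable model should work.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Please write a 600-word news article in the style of "
                "Bloomberg News about how artificial intelligence and "
                "LLMs will soon be able to write news articles."
            ),
        }
    ],
)

# Print the generated article to the terminal.
print(response.choices[0].message.content)
```

Running it prints the generated article straight to the terminal, typically in well under a minute.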
I have to say the result was not bad, if a little bland. ChatGPT produced it in less than 20 seconds, the grammar was perfect, and it laid out the facts. The only thing that made me laugh was the repeated reference to "Language Modeling Models" (LLMs), which, to be honest, is the one thing I didn't expect it to get wrong. Phew, my job is safe!
Generative AI isn't that great right now
My false sense of relief was bolstered by reports that ChatGPT may actually be getting worse with age. Testers noticed that when GPT-4 was given math problems, visual reasoning tests, and exams, its accuracy had dropped significantly. One theory being debated on social networks is that the programmers at OpenAI, ChatGPT's creator, have introduced restrictions designed with "AI safety" in mind, making the model avoid answers that might offend, and in doing so have inadvertently (or intentionally) stunted its growth.
My own experience with GPT-3.5 has mostly been an exercise in eye-rolling: it produces more lines of text apologizing, or explaining up front why it can't perform a particular task, than it does useful (or desirable) material.
Of course, there's no point in gloating over the mistakes AI and LLMs make at their current stage of development. Doing so reminds me of those who said in the late 1990s that the Web would never take off as a mass medium because it was too slow to stream video. Recall also the first DARPA Grand Challenge for self-driving cars in 2004, when not a single participant covered more than 12 km of the 240 km course through the Mojave Desert. Just one year later, five vehicles completed the course, and all but one participant beat 2004's 12 km record.
Just because a technology isn't working well now doesn't mean it won't work well in the future. That seems obvious, but many people keep making the same mistake. OpenAI will resolve GPT's issues if it decides they're hindering the project, and it will probably end up offering different versions to different clients depending on the type of content they need to produce. In any case, setting aside the debate over whether machine learning and LLMs are forms of "intelligence," it is unwise to judge future performance on current examples. Let's assume that someday, probably soon, generative AI will be able to generate compelling news content. Journalists working today will have no choice but to deal with that.
One thing I noticed about the ChatGPT article is this: if I hadn't requested it myself, and therefore known in advance that it was auto-generated, I probably wouldn't have noticed (despite the odd terminology mistake). Scanning the daily news from mainstream or independent media, there would be no way to tell whether a piece was written by a human or by a machine under human editorial oversight. It gets even harder to tell once you move to articles in the "general/tech information" or "life/health advice" categories.
A machine that writes news for machines to read
With all this in mind, it's reasonable to predict that most of the content we read and watch in the future will be produced by generative AI. Perhaps much of it already is. There are plenty of examples of online content that appears to be auto-generated, or at least written by uninterested and unimaginative humans, and entire Twitter threads are full of responses that read like they came from an LLM rather than a real human being.
How will people react to this? Will they continue to consume (and trust) news the way they do now? In recent years, several media outlets have reported on opinion polls measuring public trust in the news media, and the very existence and increasing frequency of these polls suggests a kind of panic is setting in. The results show that trust in mass media news is steadily declining and that the trust gap between people of different political leanings is deepening. CNN and MSNBC, for example, earn less than 30% trust from Republicans but ratings above 50% from Democrats. Some studies put overall confidence well below 50%; the most trusted news source in the U.S. is the Weather Channel, the only network averaging above 50%.
Introducing generative AI into this mix is unlikely to move trust levels much, to the extent the public can even judge what it's seeing. Viewers/readers will assume automatically generated content carries the same biases as its human authors and treat it the same way. We've all seen collage videos of human newsreaders and commentators on different stations all apparently reading from the same script. We follow accounts with human faces and names on Twitter while suspecting, in the backs of our minds, that these aren't "real" people with real opinions and the time to write them down; at worst, they're AI bots, or perhaps fake accounts run by PR firms, churning out astroturfed content with factory-level efficiency.
One of the dirty secrets of the news media industry today (and for the past several decades) is that the content it produces is not really intended to inform or entertain people in the present. It exists to be indexed so it can be cited as a reference in the future, and the longer a piece has been around, the more authoritative it somehow becomes. Biased or unbiased, accurate or proven false, it will keep appearing in search results. The same applies to book publishing, especially academic nonfiction: the information contained in a book can be cited and referenced by others for years to come, whether or not they have actually read the book.
As generative AI comes to produce all content, and is widely assumed to do so, the real battleground for future news will shift to the backend. The content itself becomes little more than window dressing, while those seeking to shape public opinion and "manufacture consent" will try to influence the data on which the AI is trained, hoping to move the needle toward their own ends. In the past, activists sought to flood news and social networks with content supporting their causes; these days, they work just as hard to get unfavorable content removed. So machines will generate the content, and its main purpose will be to be "read" by future generative AI systems, which will generate still more content... for machine audiences further down the road.
The bottom line of all this is that, yes, news reporters will eventually be replaced by generative AI, and so will the majority of news consumers. Real people may lose interest in news media altogether, treat it as entertainment, or use it to confirm their existing prejudices. Is this already happening even without mass AI intervention? Probably. Would you trust news written by a machine any more than you trust something made by a human? And the most important question for news reporters, the one I've intentionally left for the end of the final paragraph: will news sites still employ humans to produce their content? The answer is probably yes, but more in the role of opinion editor or analytical commentator. That's why this article is an editorial, not a dry report on the latest developments.
In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: CoinGeek Roundtable with Joshua Hensley on AI, ChatGPT, and blockchain