
Long-time internet-based tech news publisher CNET got caught using AI to write more than 70 articles in their finance section. But... is that a bad thing? Yes, and no.
We've all seen reporting about the new artificial intelligence chatbot ChatGPT, and many of you have probably even played around with it. It's really impressive technology, and it's fascinating to wonder where this sort of tech could lead in the future. And for writers like me, that's both intriguing and terrifying.
On one hand, AI could mean offloading the boring and banal articles I might need to write to a machine that can pump them out in a fraction of the time it would take me to research, understand, and synthesize the information into a coherent explainer. Then all I would have to do is give it a once-over as an editor to make sure it all makes sense, and then publish. Tada, instant article! Then I can go and focus on writing articles about more interesting things that require thought and forming an opinion, which is not something AI can do (yet?). Like this article.
On the other hand, writing stuff is my job. While I don't get excited to write the boring and the banal, and I'm making a point to leave the boring and banal off of CrackBerry, it is still a job. It's a job that we could hire a junior writer to do, somebody with no real experience who needs an opportunity to get their feet wet, so to speak. If a machine takes over that job, it eliminates the entry-level positions that make up the bulk of many publishing operations' headcounts. It was a job like that, writing for the now-defunct PreCentral.net, that got me into this position today. If that first step on the ladder becomes a robot, how is anybody supposed to follow in my footsteps?
And on the third hand you'll find the matter of trust. Tools like ChatGPT are really impressive, but can we really trust them? Not in a "developing sentience and going all Skynet/Ultron/Matrix on humanity" sort of way, but can we trust these AI tools to know what they're doing? The most glaring flaw with ChatGPT is that it speaks with unquestioning confidence, even when it is wrong. There's no visibility into its inner workings to know how reliable the information it provides truly is, no hedging when it might have incomplete information, and no nuance to how it answers. There are only basic checks on what it will produce — it's easy enough to get ChatGPT to produce some startlingly racist responses, because it derives what it knows by neutrally evaluating unvetted, sometimes awful sources on the internet, not by using a human brain that can gut-check what it's about to say.
Which brings us all to CNET and their AI-written finance articles. First uncovered by marketer Gael Breton, CNET was using an AI bot of some sort to produce explainer articles on finance topics and publishing them under the author name of "CNET Money Staff". It wasn't until a reader clicked on the author name that they saw an explanation that the article had been generated by AI.
Obviously, trying to mask who or what wrote your article is bad form. Pen names are one thing, and an anonymous editorial can be okay under the right circumstances. But when it comes to something like explaining personal finance, you want to be able to trust that you've received accurate information from somebody who knows what they're talking about. Nothing seems factually wrong with any of the articles I've read, and summarizing factual information is something AI is unsurprisingly good at — as long as it has good sources to glean that information from.

CNET isn't alone, as Breton's digging also revealed that Bankrate was doing the same. CNET and Bankrate share the parent company Red Ventures, and I wouldn't be surprised to learn that the AI article generation tech is being used on or considered for other Red Ventures sites like Lonely Planet, The Points Guy, and Reviews.com.
After being so publicly called out for kinda being deceptive about the authoring of these articles, CNET revised the author name to just "CNET Money" and added a disclaimer right below it to explain that the article was "assisted by an AI engine" and then reviewed by a real human. We have a similar disclaimer on all our articles about how we may get a commission if you buy stuff through our links (please buy stuff through our links so we can get a commission), and I'm sure that many of you didn't even notice it. It's the sort of furniture on a website you filter out when looking for the content you actually want to read, which site designers always make sure to highlight with bigger and bolder text.
And that "assisted by" is something of a stretch, as once CNET admitted and explained the program that the AI is creating the article's content. It's an assist in the same way that hiring a junior staff writer is an "assist" — an editor will still have to review the article for quality in information and writing, but the writer still did the bulk of the work.
It's plain as day that CNET deliberately obscured that AI wrote these articles, and anybody who's worked in publishing for a hot minute can tell you exactly why they did it the way they did: these articles were created for search engine optimization, not for regular readers to come across while browsing the site or Twitter. It's all part of the publishing game these days: the more pages you have online, the more likely it is that one of them will strike it rich with the Google algorithm and drive a bunch of page views and ad impressions, making you more money. It's all a numbers game, and operating at that scale means employing a bunch of writers to produce those articles. What if you could just have a machine do it instead?
Automating things has been one of the biggest drivers of improvement in the human condition, but it's always come at a cost. From domesticating oxen and inventing the plow, to shifting factories from humans at an assembly line to lightning-fast and accurate robots, every step along the way has meant that fewer people had to be involved in back-breaking labor while also forcing those same humans to find, or even create, something else to do. My job was inconceivable even fifty years ago, and was only enabled by the proliferation of technology that eliminated old "entry level" jobs in favor of automation and machines. Humanity has been through this before, and we're staring down the barrel of another industrial revolution.
What CNET is doing is an evolution of what some publishers have been doing for years. As was noted by Engadget, automation has played a role in some content generation for several years already:
Using text generators isn't currently a widespread practice throughout the journalistic sphere but outlets like the Associated Press and Washington Post have used them for various low-level copywriting tasks — the latter employing them to write about high school football and the equally unimportant 2016 Rio Olympics.
The quality difference between CNET's system and the AP's is a stark one. The AP system is a glorified mail merge, shoving specific pieces of data into preformatted story blanks for daily blotter posts and other highly repetitive journalistic tasks. CNET's system, on the other hand, appears to be far more capable, able to compose feature-length explainer posts on complex financial concepts — a far cry from the journalistic Mad Libs the AP engages in.
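To make that difference concrete, the "glorified mail merge" style of story generation is simple enough to sketch in a few lines of Python. This is purely illustrative; the template and field names here are my own invention, not the AP's actual system:

# A sketch of "journalistic Mad Libs": shove specific pieces of data
# into a preformatted story blank. Template and fields are invented.
TEMPLATE = (
    "{winner} defeated {loser} {winner_score}-{loser_score} on {day} "
    "at {venue}, led by {standout}."
)

def generate_recap(game: dict) -> str:
    # str.format() fills each labeled blank with the matching data point
    return TEMPLATE.format(**game)

print(generate_recap({
    "winner": "Central High",
    "loser": "Westside",
    "winner_score": 28,
    "loser_score": 14,
    "day": "Friday",
    "venue": "Memorial Field",
    "standout": "its senior quarterback",
}))

Everything interesting lives in the data feed; the "writing" is just string substitution. A system that composes original sentences about complex financial concepts is a different beast entirely.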
We're at the start of a new revolution in content creation, and CNET and Red Ventures' deployment of AI writing tools is one of the first salvos in a new arms race. I have no doubt that we will see an exponential expansion of this sort of publishing in the years ahead. And I have little doubt that, initially, the people running these businesses will herald it as a fantastic tool to expand their coverage, fully intending to continue employing and even growing their human headcount on the back of the traffic the AI-generated articles create. At least until they run the numbers and realize that paying several AI developers to build tools that churn out articles en masse costs less than a newsroom full of human authors. In the end, the almighty dollar always wins.
The big question is whether or not Google will care. After all, these articles are being created with the express intent of capturing prime spots in Google Search. Google has hemmed and hawed around this issue, in part because they've been developing their own AI tools (and sounded the alarm internally about ChatGPT and its primary backer — Microsoft) but also because the users of Google Search are humans and the humans that run Google aren't sure what humans want.
I'm going to put a stake in the ground right here: readers largely won't care. Especially for this kind of content, where it's a factual explainer of compound interest or whatever, I just want an answer: what is it and how does it work? Whether that answer is provided by a machine or a person doesn't really matter to me. Heck, in the course of researching this article I used Google a few times and was presented with Google's "featured snippets", which use Google's own machine learning and AI tools to dissect content from across the internet and present it to me right at the top of my search results — no further clicks required.
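For what it's worth, the whole substance of that compound interest explainer fits in a few lines. Here's a sketch in Python, with example numbers of my own choosing:

# Compound interest: A = P * (1 + r/n) ** (n * t)
# P = principal, r = annual rate, n = compoundings per year, t = years
def compound_interest(principal, rate, periods_per_year, years):
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# $1,000 at 5% APR, compounded monthly for 10 years
print(round(compound_interest(1000, 0.05, 12, 10), 2))  # 1647.01

Whether a human or a bot wraps a paragraph of context around that formula, the answer comes out the same.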
We're on the cusp of something wild. Chatbots like ChatGPT and whatever CNET used to create those finance articles are just the beginning, and there's no telling where this technology will be expanded in the years ahead. I won't promise that these sorts of tools won't end up in use on CrackBerry someday, because maybe they will make sense further down the line.
But it's not what we're doing now. The way we're running this site just isn't compatible with what CNET used AI to do — we want to focus on the articles that are worth writing and worth reading, not just chasing clicks in Google Search.