Today’s Binary Response is a reply to this article, published by Clayton Ramsey a few days ago, about his students using Artificial Intelligence (AI), such as ChatGPT, to do their assignments.
AI continues to be a hot topic of discussion, even outside of tech circles. After reading Ramsey’s article, it seemed like a good time to revisit the first article written about AI here on Binary News, which turned out to be a story I wrote about Sports Illustrated writers using ChatGPT to fully write their articles. That article was published back in January 2024 on our sports blog, Double Overtime.
Ramsey’s recent piece, which was linked in a weekly newsletter, really resonated. It echoed thoughts I’d already shared with others, including fellow writer Steve Leblang: AI occasionally lends a hand with research and with editing for typos, style, and flow.
Why use it? More often than not, it’s a matter of being in a rush and accidentally repeating the same idea two or three different ways. Whether it’s a news piece, an email, or a post like this one responding to someone else’s editorial, the goal remains the same—clean, conversational writing with no typos and no unnecessary repetition.
Now, let’s take a closer look at Ramsey’s article.
The first part that really hit home was when he wrote: “Many researchers consider reviewing ancillary to their already-burdensome jobs; some feel they cannot spare time to write a good review and so pass the work along to a language model.”
Research plays a major role before and during the writing process for nearly every article. AI tools like ChatGPT or Microsoft Copilot often serve as a helpful starting point, speeding up research and making it easier to begin drafting. However, the sources provided by these tools are frequently dead links—pages that have been moved or taken down—requiring additional effort to track down credible and accurate information independently. While AI may offer direction, the bulk of the work still relies on manual digging. Even after the language has been tightened with AI assistance, careful proofreading remains essential to catch factual mistakes or awkward phrasing that may slip through.
One particularly striking part of Ramsey’s article was the mention of peers who believe large language models produce better writing than they can on their own. There’s some truth to that. AI can enhance flow, correct errors, and elevate tone. But it’s far from perfect. It often introduces factual inaccuracies or rewrites based on faulty assumptions, which must be double-checked and corrected. That’s why independent research remains a non-negotiable part of the process—to ensure the final product is both polished and trustworthy.
Later in the article, Ramsey touches on the idea of whether something is “worth doing badly,” and includes these two passages, which I thought were powerful:
“Every single time, the model obscures the original meaning and adds layers of superfluous nonsense to even the simplest of ideas. If you’re lucky, it at least won’t be wrong, but most often the model will completely fabricate critical details of the original writing and produce something completely incomprehensible.”
That point rings true. AI-generated nonsense has shown up more than once, especially when trying to verify information only to land on dead links.
“I have a little more sympathy for programmers, but the long-term results are more insidious. You might recall Peter Naur’s Programming as Theory Building: writing a sufficiently complex program requires not only the artifact of code (that is, the program source), but a theory of the program, in which an individual must fully understand the logical structure behind the code.”
Here’s the thing, folks: For developers who regularly write custom PHP or JavaScript, AI can be useful for jogging the memory on older code. Ramsey’s sympathy toward programmers is appreciated, but it’s also true that AI isn’t a silver bullet. Developers spend plenty of time fixing broken code, on websites or in computer software, that was clearly generated by AI and isn’t functioning as intended.
With that… No one, and that really means no one, should rely on AI to get everything right. The reason AI is starting to edge out traditional search engines has less to do with accuracy and more to do with convenience. Smartphones, smart speakers, and their built-in assistants like Alexa, Gemini, and Siri have trained people to expect quick, simple answers with minimal effort.
Whether you use AI tools or voice assistants regularly or just occasionally, you should always keep one thing in mind: the output is never guaranteed to be accurate.