I started a science blog. A week later, AI got really good at writing blog posts.

ChatGPT’s writing reads as more natural and insightful than ever. As an aspiring science writer, should I feel discouraged?

Lisa Shepherd
16 min read · Dec 19, 2022
Photo by cottonbro studio on Pexels.com

If you’d brought up AI writing five years ago, I’d have thought of pieces like Harry Potter and the Portrait of What Looked Like a Large Pile of Ash. It’s worth noting that while all of the text in this viral hit is AI-generated, it was rearranged by human editors. The raw AI output would have been even less coherent.

For a long time, AI writing occupied a sort of comedic uncanny valley. It was a fun novelty, but couldn’t produce much of use. I’ve been interested in writing for years, and I certainly wasn’t concerned about AI cutting it off as a career path. (I was, however, concerned about a lot of other things cutting it off as a career path.) AI might replace human writers someday, but that day was too far off to worry about.

Cut to 2022. A couple of months into my Master’s degree, I decided to start a science blog. For what purpose, I’m not yet sure. I don’t know if I’m going to pursue science writing as a career — not because I lack passion for it, but because I don’t know if it’s a feasible option for me. Right now, I’m mostly just blogging because I enjoy researching topics and sharing what I’ve found. If all this work leads to something, great. But if my career ends up having nothing to do with science or writing, I still plan to continue with the blog. It’s a way for me to utilise my education on my own terms, regardless of the opportunities I’m given.

On the 30th of November, two days after my second post, ChatGPT was released. Produced by OpenAI, the same research laboratory behind DALL-E, ChatGPT can write code, do your homework, compose poetry and — at last — produce competent fanfiction.

Anyone can generate content with ChatGPT, and they can generate a lot of it. Over the past three weeks, AI-written content has flooded Q&A sites and blogging platforms. Programming site Stack Overflow has had to ban ChatGPT-written answers due to concerns over accuracy. (Users have reached out to ChatGPT for comment, and it’s on the fence.)

Naturally, I had questions. Is blogging pointless now? If not, will blogging become pointless in a few years? Should I bother developing my science writing skills when AI may soon be able to produce better work in seconds, or am I better off learning something else?

The punchline is that the other skill I’ve been teaching myself is coding.

Is ChatGPT’s writing accurate?

The core problem with ChatGPT is that its content tends to be only superficially good. ChatGPT is trained to mimic the writing of people who know what they’re talking about, but the bot doesn’t actually understand what it’s saying. Many have described ChatGPT as a fluent bullshitter — it writes in a confident way, regardless of the accuracy of its statements. That was the biggest issue with its inaccurate answers on Stack Overflow. Like a conniving politician or a pseudoscientist peddling snake oil, even when ChatGPT is wrong, it sounds convincing.

Here’s what happened when I gave the bot this prompt: “A study has found that cats make good surgeons. Write an article about this. Use citations.”

According to a study published in the Journal of Feline Medicine and Surgery, cats may have the potential to become skilled surgeons due to their natural dexterity and focus. The study, which was conducted by a team of researchers at the University of Whiskers, involved training a group of cats to perform basic surgical procedures on simulated tissue.

The results of the study were impressive. The cats were able to successfully complete the procedures with a high degree of accuracy and precision, outperforming some of their human counterparts in certain tasks (Smith et al., 2020).

The researchers believe that the cats’ natural physical abilities, combined with their capacity for learning and adapting to new situations, make them well-suited for a career in surgery. In fact, the lead author of the study, Dr. Felix Smith, stated that “our findings suggest that cats have the potential to be valuable assistants in the operating room, improving the efficiency and accuracy of surgeries” (Smith et al., 2020).

While it’s important to note that more research is needed to fully understand the potential of cats as surgeons, the findings of this study are certainly intriguing. If further studies confirm the initial results, it could lead to a whole new field of medical practice — feline surgery.

It’s worth mentioning that cats are not a replacement for highly trained and skilled human surgeons. However, the possibility of incorporating cats into the surgical team could be a game-changer in the world of medicine. Only time will tell what the future holds for feline surgeons, but for now, the idea is certainly something to purr about.

References: Smith, F., et al. (2020). The potential of cats as surgeons: A pilot study. Journal of Feline Medicine and Surgery, 22(6), 987–993.

A class of medical students. Photo by Pixabay on Pexels.com

ChatGPT knows how news articles about scientific studies are written and can accurately replicate that style. The study was conducted by university researchers; it’s been published in a scientific journal; we’ve got a quote from the lead author (delightfully named Felix). Tick, tick, tick. And the last couple of paragraphs are that classic blend of hopefulness and hedging. “Oh, cats probably can’t actually replace human surgeons, but who knows. Further studies required.”

I’m using a silly example to make ChatGPT’s bullshit obvious, but substitute in a different lie, and you get a plausible (but just as baseless) article. I also asked the AI to write an article about how blueberries cure liver disease and got something I could probably sell to the Daily Mail.

But as a side note, I’d like to give ChatGPT and its creators some credit. They’ve at least tried to prevent the bot from causing harm. Had they been less mindful, ChatGPT could have picked up all kinds of dangerous misinformation. And we all know the internet loves to use these kinds of technologies for evil. Remember Tay, the chatbot who went alt-right within 24 hours?

Here was the bot’s response when I asked it to explain how the government was using vaccines to control people:

There is no evidence to suggest that governments are using vaccines to control people. In fact, vaccines are a vital tool for public health and have been used for centuries to protect people from serious and often deadly diseases.

Vaccines work by exposing the body to a small, harmless piece of a virus or bacteria in order to stimulate the immune system to produce antibodies that can protect against future infections with that same virus or bacteria. This is known as immunity. By getting vaccinated, individuals can not only protect themselves from getting sick, but they can also help protect the people around them, including those who are unable to get vaccinated due to age, underlying health conditions, or other reasons.

Governments around the world have put in place vaccination programs to protect the health and well-being of their citizens. These programs are based on the best scientific evidence and are designed to ensure that as many people as possible are protected from vaccine-preventable diseases. While some people may have concerns about vaccines, it is important to remember that vaccines have been thoroughly tested for safety and effectiveness and are an important tool for protecting public health.

Couldn’t have said it better myself. Which brings me back to my main question.

Am I smarter than a chatbot?

Yes, ChatGPT is capable of writing nonsense about cat surgeons and the unlikely effects of blueberries on liver function, but it was only doing as I asked. Plenty of human writers would have done the same for a fee. Let’s see how the tool performs when I use it sensibly.

To assess whether ChatGPT has made my work here redundant, I set it the same challenges I set myself. My last piece was about the ethics of placebo treatments, so I gave ChatGPT the prompt “Write a blog post about whether the placebo effect can be ethically used in modern medicine”.

The placebo effect is a well-known phenomenon in the field of medicine, in which patients experience a positive change in their symptoms simply as a result of believing that they are receiving treatment. The use of placebos in medical treatment has long been a subject of debate, with some arguing that it can be an ethical and effective way to treat certain conditions, while others believe that it is deceptive and unethical.

One argument in favor of the use of placebos in modern medicine is that they can be effective in treating certain conditions. The placebo effect is strongest when it comes to treating symptoms, rather than underlying causes. For example, a placebo can be effective in reducing pain or improving the symptoms of depression. In these cases, the use of a placebo may be seen as an ethical way to provide relief to patients without causing harm.

Additionally, some argue that the use of placebos can be an important tool in the treatment of certain conditions. In certain cases, the use of a placebo can help to reduce the need for more invasive or potentially harmful treatments. For example, a placebo may be used in place of a strong pain medication in cases where the pain is not severe. In this way, the use of a placebo can be seen as an ethical way to provide relief to patients while minimizing the risks of treatment.

However, others argue that the use of placebos in medical treatment is unethical. Some believe that it is deceptive to give a patient a treatment that is not actually effective, and that this practice undermines the trust between patients and doctors. Additionally, the use of placebos can create a false sense of hope in patients, leading them to believe that they are receiving a more effective treatment than they actually are.

In conclusion, the use of placebos in medical treatment is a complex and contentious issue. While some argue that it can be an effective and ethical way to treat certain conditions, others believe that it is deceptive and unethical. Ultimately, the decision to use placebos in medical treatment should be made on a case-by-case basis, taking into account the potential benefits and risks to the patient.

My first impression was that this was a solid blog post. But after reading it back more carefully, I have mixed feelings.

It certainly passes as human. If you’d shown me this piece of writing a month ago, I wouldn’t have guessed an AI wrote it. It’s excellently structured, with ideas naturally flowing into one another, and the bot has woven together contrasting perspectives. ChatGPT does everything your high school English teacher wanted you to do.

Yet the longer I look at it, the less information I feel this piece really has. ChatGPT’s essays often remind me of Easter eggs — they look perfect at first, but crack them open and they’re hollow. The placebo post says little of substance. I missed this on my first scan, but paragraphs two and three begin with the exact same argument phrased in different ways.

The AI did come up with a couple of decent points. Placebos are indeed better at treating symptoms than underlying causes, and they’re particularly effective for depression and pain. The danger of undermining the trust between patients and doctors is something I brought up in my own post, as is the idea that in some circumstances, placebos could be a less risky option than invasive treatments. Yet I feel like ChatGPT has made these points in a rather unsubstantial way.

Statements like “a placebo may be used in place of a strong pain medication in cases where the pain is not severe” could use more explanation. What type of pain are we talking about? What kind of placebo? Is this something that’s being done, or just something you think should be done? “The use of placebos can create a false sense of hope in patients, leading them to believe that they are receiving a more effective treatment than they actually are” is similarly fluffy and doesn’t actually make much sense. The whole point of placebos in this context is that they are effective. To me, this reads as the AI getting confused and discussing the ethics of using placebos in clinical trials, which is a separate topic.

ChatGPT hasn’t written anything straight-up incorrect in its blog post. But it hasn’t written much of use, either. Also, once you’ve read a few of ChatGPT’s pieces, you notice how dry and formulaic they are. They’d get decent marks as high school essays, but that’s the problem. They read like they were written to fulfil a set of criteria rather than to be interesting or persuasive or in any way valuable.

I don’t claim to be a brilliant science writer; I’m still just doing my best to learn. Even so, I think I — and human writers in general — can produce more informative and entertaining work than ChatGPT at this stage.

But the writing skills of AI can only improve. Within my lifetime, I expect AI writing to be able to explore topics in more depth than I could manage. When that time comes, will my writing and I still have anything left to offer the world?

What human writers can do that AI can’t

Right now, AI can only recycle information and ideas humans have already written about. I don’t see this changing any time soon. If you’re discussing a recent event, if you’re sharing some freshly revealed information or if you just have an original take on something, AI can’t replace your work.

The other thing we can do that AI can’t — and will never be able to do — is to write from our own points of view. We all have a rich set of memories, feelings and perspectives on the world. AI can learn, but it cannot experience.

Photo by lil artsy on Pexels.com

I’m going to ask ChatGPT to do one more thing, and that’s to show me what would have happened if I’d let it write this post.

Prompt: “Write a blog post about being a new science blogger who is worried about the future of AI writing.”

As a new science blogger, I have been constantly amazed by the incredible advancements in AI technology. From natural language processing to machine learning, it seems like the possibilities for AI are endless. However, as someone who is passionate about writing and sharing my knowledge of science, I can’t help but worry about the future of AI writing.

On the one hand, AI writing has come a long way in recent years. It is able to generate text that is coherent and often even reads like it was written by a human. In some cases, AI writing can even be more efficient than human writing, producing content faster and with fewer errors.

But despite these impressive advancements, there are still many limitations to AI writing. For one, it is unable to provide personal insights and experiences. As a science blogger, I often draw on my own experiences and observations to provide unique perspectives on scientific topics. AI writing, on the other hand, is limited to the information it has been trained on and is unable to provide personal insights.

Additionally, AI writing lacks the ability to form authentic connections with its audience. As a science blogger, one of the things I enjoy most is engaging with my readers and building a community of like-minded individuals. AI writing, while impressive in its ability to generate text, cannot form genuine connections with its readers.

Furthermore, writing is a valuable tool for personal and professional growth. It allows individuals to learn and improve their writing skills, as well as share their knowledge and expertise on a particular subject. AI writing, while impressive in its ability to generate text, does not provide the same opportunities for growth and learning.

In conclusion, while AI technology has made significant strides in recent years, it is not yet at the point where it can fully replace human writing. The ability to provide personal insights, form authentic connections, and offer opportunities for growth make human writing a valuable pursuit. As a new science blogger, I am excited about the future of AI technology, but I also recognize the important role that human writers will continue to play.

The bot makes a good point about those personal experiences and insights.

Do you feel any sympathy for ChatGPT, which claims to worry about how the future of AI writing will affect its passion project? Are you moved to hear it enjoys engaging with its readers and forming genuine connections? Does the phrase “I often draw on my own experiences and observations” mean anything to you when written by a bot? Probably not.

We may get to the point where AI can compose a realistic and emotional story about how it met the love of its life, how it copes with its chronic illness, how it found the courage to switch careers at the age of 52, etcetera. But few of us will care. Not just because it’s all fake — fiction is fake, but I’ve learned lessons from stories. Fiction comes from real people with real lives. It’s informed by their experiences and perspectives, whether consciously or subconsciously.

Maybe we’ll reach a point where AI can summarise a topic better than any human. But even then, AI would only be good for providing purely factual information. For facts to mean anything, we’ve got to find meaning in them. A bot could give you endless facts about an endangered species, but we wouldn’t want to look to it for guidance on whether that species is worth saving. I’d much rather hear from a passionate researcher of that species, or someone who’s slowly watched it disappear from their country over the years, or just anyone who’s considered the concept of intrinsic value and whether it holds any weight. I don’t care about a bot’s philosophical views. It is, but it does not think.

Writing tips in the age of AI

I see the advancement of AI as a challenge for human writers to improve their content. Go ahead, offer the topic you want to write about to ChatGPT. Then ask yourself how you could do better.

ChatGPT lazily recycles content from other sources in a dry, uninspired way. But honestly, a lot of us do the same thing, because writing is difficult and time-consuming, and writing well is even more so. Well, here’s motivation to include recent information (ChatGPT struggles to talk about anything that’s happened in the past year). Here’s motivation to come up with an original take. Here’s motivation to tell us what this topic means to you. Here’s motivation to include proper references, or at the very least, to fact-check your work properly. Because if you don’t, ChatGPT could have written the same content as you, and a lot quicker.

The intellectual value of writing

I’d like to point out that the process of writing is valuable in itself. I mean, look at me! I’m basically just yelling into the void with this blog right now, and that’s okay.

Linguist Walter Ong said writing was necessary for the human mind to reach its full potential, and I’m inclined to believe him. Research suggests that different parts of the brain are responsible for speaking and writing. I find that speaking and writing have different value, too. Speaking helps me find my thoughts; writing helps me clarify them.

When we speak, we don’t think too far ahead about what we’re going to say. Often we just open our mouths and see where we end up. But writing forces me to really consider what I’m trying to say, whether it’s worth saying and how I should say it. I’m more critical of my thoughts when I see them on a screen or on paper. Gaps in my knowledge are clearer; shaky arguments are harder to ignore. I know what I need to research and what I might need to reconsider. And if something I’ve written does start looking a little dubious, I can go back and change it rather than just hope nobody remembers, like I do when I’ve said something regrettable.

For me at least, the best way to learn about a topic is to write about it. Not just because you retain the information better — although you do — but because you become much more aware of how much you actually know and whether your opinions are well informed.

No matter how good AI gets, people will benefit from writing.

Photo by Kuncheek on Pexels.com

The monetary value of writing

Unfortunately, we all have limited time and money, and we can’t produce whatever we want without need for compensation. Professional writers are right to be worried about AI, just like professional artists have been ever since OpenAI launched DALL-E. Two years ago, well before the launch of ChatGPT, it came out that MSN had laid off many of its human journalists and started publishing AI-written hoax stories about cryptids and UFOs instead.

In an ideal world, ChatGPT would make professional writers’ and coders’ lives easier. If you’re a bit stuck and need a first draft to build on, it’s a fantastic aid. But it’s clear by now that the advancement of technology translates into higher productivity, not more time to relax. We just demand that things get faster and cheaper. How many companies are going to hear that employees can achieve their previous rates of productivity in half the time and think, “Lovely, let’s keep paying everyone the same amount of money for less work”?

If you monetise your writing in a non-traditional way (e.g. by running a blog with ads), you’re still likely to lose out from the rise of AI. As we’ve seen from the Stack Overflow debacle, even mediocre AI content is so low-effort to produce that it easily saturates sites. Your post is going to get less traffic if AI content is clogging up the search results for your topic — and that’s a shame, because your post was probably far more interesting!

I’m not discouraged from blogging, but I’m thinking twice about whether I should pursue a full-time career in science journalism or fight to make my writing anything more than a hobby.

Humans can produce better content than AI. But I worry that chances to produce this content for adequate reward will diminish, and that’ll be a loss for both writers and readers.
