I’m sure you’ve been inundated by posts about ChatGPT over the past couple of weeks. If you managed to avoid it, the short version is that there is a new model from OpenAI that can write articles, create poetry, and basically answer your homework. Lots of people are testing it out on things as mundane as writing Amazon reviews or creating configurations for routers.
It’s not a universal hit, though. Stack Overflow banned ChatGPT code answers because they’re almost always wrong. My own limited tests show that it can create a lot of words from a prompt that sound correct but feel hollow. Many others have accused the algorithm of scraping content from around the Internet and sampling it into answers that sound accurate but aren’t the best response to the question.
Are we ready for AI to do our writing for us? Is the era of the novelist or technical writer finished? Should we just hang up our keyboards and call it a day?
Byte-Sized Content
When I was deciding what I wanted to do with my life after college, I took the GMAT to see if I could get into grad school for an MBA. I scored well on the exam but not quite at the magical level needed for a scholarship. However, one area where I did surprisingly well was the essay writing section. I bought a prep book that had advice for the major sections but spent a lot of time with the writing portion because it was relatively new at the time and many people were struggling with how to approach the essay. The real secret is that the essay was graded by a computer, so you just had to follow a formula to succeed:
- Write an opening paragraph covering what you’re going to say with three points of discussion.
- Write a paragraph about point 1 and provide details to support it.
- Repeat for points 2 & 3.
- Write a summary paragraph restating what you said in the opening.
That’s it. That’s the formula to win the GMAT writing portion. The computer isn’t looking for insightful poetry or groundbreaking sci-fi world building. It’s been trained to look for structure. Main idea statements, supporting evidence, and conclusions all tick boxes that provide points to pass the section.
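To see just how mechanical that kind of grading can be, here’s a minimal sketch of a structure-only scorer in Python. The rubric, keyword lists, and point values are all invented for illustration, not how the GMAT actually grades, but they show how a program can award points for shape without understanding a word of the content:

```python
# Hypothetical structure-only essay scorer. The rubric below is invented
# for illustration: it rewards the five-paragraph shape described above
# and never evaluates meaning.

def score_essay(essay: str) -> int:
    """Score an essay purely on its structure, ignoring content."""
    paragraphs = [p.strip() for p in essay.split("\n\n") if p.strip()]
    score = 0

    # Expect the classic shape: intro, three points, summary.
    if len(paragraphs) == 5:
        score += 2

    if paragraphs:
        intro, body, summary = paragraphs[0], paragraphs[1:-1], paragraphs[-1]

        # Opening paragraph should preview the three points of discussion.
        if any(w in intro.lower() for w in ("first", "second", "third", "three")):
            score += 1

        # Each body paragraph should offer some supporting evidence.
        for p in body:
            if any(w in p.lower() for w in ("because", "for example", "evidence")):
                score += 1

        # Summary paragraph should restate the opening.
        if any(w in summary.lower() for w in ("in conclusion", "in summary", "therefore")):
            score += 1

    return score  # Higher is "better" -- yet no meaning was ever checked.


if __name__ == "__main__":
    essay = "\n\n".join([
        "There are three reasons widgets matter.",
        "First, widgets save time, for example in factories.",
        "Second, widgets reduce cost because they are reusable.",
        "Third, widgets scale, for example across teams.",
        "In conclusion, widgets matter for all three reasons.",
    ])
    print(score_essay(essay))  # Ticks every box without a single insight.
```

A formulaic but empty essay maxes out a scorer like this, which is exactly the point: ticking structural boxes is something software has been able to do for decades.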
If all that sounds terribly boring and formulaic, you’re absolutely right. Passing a test of competence isn’t the same as pushing the boundaries of the craft. A poet like e e cummings would have failed because his work abandons conventional structure and capitalization by the standards of grammar. Yet no one would deny that he was a master of his craft. Likewise, always following the standards only matters when you want to create things that already exist.
Free Thinking
Tech writing is structured but often involves new ideas that aren’t commonplace. How can you train an algorithm to write about Zero Trust Network Architecture or VR surgery if no examples of those exist yet? Can you successfully tell ChatGPT to write about space exploration through augmented reality if no one has built it yet? Even if you asked, would you know whether the reply sounded correct?
Part of the issue comes from content consumption. We read things and assume they are correct. The words were written, so they must have been researched and confirmed before being committed to the screen. Therefore we tend to read content passively. We’re not reacting to what we’re seeing but instead internalizing it for future use. That’s fine if we’re reading for fun or not thinking critically about a subject. But for technical skills it’s imperative that we constantly challenge what’s written to ensure that it’s accurate and useful.
If we only consumed content passively, we’d never explore new ideas or create new ways to achieve outcomes. Likewise, if the only content we have is created by an algorithm based on existing training and thought patterns, we will never evolve past the point we are at today. We can’t hope that a machine will have the insight to look beyond the limitations imposed on it by the bounds of its program. I talked about this over six years ago when I said that machine learning would always give you great answers but true AI would be able to find them where they don’t exist.
That’s my real issue with ChatGPT. It’s great at producing content that is well within the standard deviation of what’s expected. It can find answers. It can’t create them. If you ask it how to enter lunar orbit, it can tell you. But if you ask it how to build a spacecraft to reach a moon in a different star system, it’s going to be stumped, because that hasn’t been created yet. It can only tell you what it’s seen. We won’t evolve as a species unless we remember that our machines are only as good as the programming we impart to them.
Tom’s Take
ChatGPT and programs like Stable Diffusion are fun. They show how far our technology has come. But they also illustrate how important we as creative beings still are. Programs can only create within their bounds. Real intelligence can break out of the mold and go places that machines can’t dream of. We’ve spent billions of dollars and millions of hours trying to train software to think like a human, and we’ve barely scratched the surface. What we need to realize is that while we can write software that approximates how a human thinks, we can never replace the ability to create something from nothing.