Artificial Intelligence (AI) & Content Creation with Megan Skalbeck
In this episode of the Sell Smarter. Sell Faster. podcast, we’re exploring the ever-changing role of artificial intelligence (AI) in content...
Unlike many articles on this subject, none of what follows was written by artificial intelligence. But is that a good thing?
From a productivity standpoint, this blog was created at a glacial pace compared to AI-written content from large language models (LLMs) like ChatGPT. The time used to write this manually could have been spent on other revenue-generating tasks.
However, this confuses the ends with the means.
AI-written content is the dream of many marketers and executives: a “click one button and generate the perfect article” solution. The technology has existed for years (ChatGPT was far from the first attempt at AI-written content). Before that, content creators used “content spinners” to re-phrase and re-word competitor articles.
Some modern generative AI offerings do fit the bill of “single-button blog generator.” In fact, “bulk generators” exist that will spit out ten 2,000-word articles in the course of ten minutes. All you need to enter is the title of each article. But what does this accomplish?
Yes, you’ve got lots of text on a page, which may even attract some traffic. If your entire business model revolves around impressions and ads, you may be set. Unfortunately, if you want to convert readers into leads and customers, invest in your own thought leadership, and deliver real value with the best possible content, you need to think a little differently about what AI can and cannot do.
AI-written content is great at filling up a page but not always great at converting. 100 blogs with a conversion rate of 0% are worth less than a single manually written blog that converts 0.01% of traffic. That said, keeping your writing workflow entirely manual is quickly becoming a competitive disadvantage. That’s where AI-driven writing comes into play.
AI-driven writing combines the strengths of generative artificial intelligence and humans. It is more like a feedback loop between machine and human than a one-click blog post. The primary difference between AI-written and AI-driven is who is responsible for the final draft. AI-written content has artificial intelligence as the final arbiter of quality, maybe with a final edit or proof. This is not a great idea for most businesses or content creators.
AI-driven writing is still very efficient but can also be more effective. However, you can’t use it to its fullest until you understand where its weaknesses lie. We’ll use the infamous ChatGPT as an example, but much of this applies to large language models in general.
Commentators like to summarize ChatGPT and its ancestors as “spicy auto-complete” or as “stochastic parrots.” They’re referring to the underlying mechanism that drives large language models in general.
Like autocomplete, the goal of text-based Generative Pre-Trained Transformers (GPTs) is to predict what the next most likely word in a sentence is based on what it already learned. Unlike your phone’s autocomplete, ChatGPT was trained on most of the World Wide Web and then some (hence the “large” in “large language model”).
If ChatGPT is asked to complete a sentence that starts with the word “Happy,” it will calculate what the most likely next word will be based on a model. One of the top choices will inevitably be “Birthday.” When it sees “Happy Birthday,” it will calculate the next most likely word, which almost certainly will be “to” followed by “you.”
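If you want to see that idea in miniature, here is a toy Python sketch of pure “autocomplete” behavior. The word probabilities are made up for this “Happy Birthday” example; a real model learns them from billions of sentences rather than a hand-written table.

```python
# Toy next-word prediction. The probabilities are invented for illustration;
# a real large language model learns them from enormous amounts of text.
next_word_probs = {
    "Happy": {"Birthday": 0.62, "holidays": 0.21, "hour": 0.09, "camper": 0.08},
    "Birthday": {"to": 0.88, "party": 0.07, "cake": 0.05},
    "to": {"you": 0.91, "me": 0.06, "all": 0.03},
}

def complete(prompt: str, steps: int = 3) -> str:
    """Greedily append the most likely next word, one word at a time."""
    words = prompt.split()
    for _ in range(steps):
        candidates = next_word_probs.get(words[-1])
        if not candidates:
            break
        # Pure "autocomplete": always take the single most probable continuation.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(complete("Happy"))  # -> "Happy Birthday to you"
```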
Detractors say that this is far from actual intelligence, and we shouldn’t take ChatGPT or its brethren very seriously. But if that’s true, how did ChatGPT score in the 90th percentile on the bar exam and pass a Wharton MBA exam with flying colors? It’s because calling ChatGPT “spicy autocomplete” is no different than calling a computer “a bunch of 1s and 0s.” Yes, it’s true, but it also REALLY undersells what a computer can accomplish.
I am not arguing that ChatGPT demonstrates human-level intelligence or even a modicum of human understanding. At their core, large language models are indeed spicy autocomplete. They have no conception of “truth,” “emotion,” or “morality.” They can certainly give you a dictionary definition of each, but that’s just more spicy autocomplete at play.
Because of this, large language models, including ChatGPT, tend to - and I’m putting this lightly - lie like a dog.
ChatGPT faces a similar situation as Hunter S. Thompson: too many hallucinations. And trust me, you don’t want either to be in the driver’s seat.
In large language models, a hallucination is an “incorrect, nonsensical, or irrelevant answer.” Hallucinations can stem from many different sources, such as overfitting, decoding errors, gaps in training data, ambiguous context, and biases.
But more importantly, hallucinations are an inherent part of large language models like ChatGPT. Imagine if ChatGPT were truly “spicy autocomplete.” It would always pick the most likely word: no matter how many times it saw the word “Happy,” it would always say the next term is “Birthday.” But that’s not how it behaves.
The power of ChatGPT comes from the fact that it will sometimes choose a less likely option. That’s why it can give output that appears “creative” or “original.” Playground versions of ChatGPT include a setting known as “temperature” that you can increase or decrease. The higher the temperature, the less likely the model is to pick “Birthday” as the word after “Happy.” Temperature is sometimes referred to as a gauge of how “creative” ChatGPT will be.
However, the more “creative” ChatGPT is, the more likely it is to generate something incoherent or wrong.
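For the curious, here is a rough Python sketch of how a temperature setting reshapes the model’s choices. The scores are invented for the “Happy” example, but the mechanics — a temperature-scaled softmax followed by sampling — are the standard approach.

```python
import math
import random

# Invented next-word scores after the word "Happy" -- purely illustrative.
logits = {"Birthday": 4.0, "holidays": 2.5, "hour": 1.5, "camper": 1.0}

def sample_next_word(scores: dict, temperature: float) -> str:
    """Apply a temperature-scaled softmax, then sample one word."""
    # Higher temperature flattens the distribution, so "Birthday" dominates less often.
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    words = list(weights)
    probs = [weights[w] / total for w in words]
    return random.choices(words, weights=probs, k=1)[0]

random.seed(0)
for t in (0.2, 1.0, 2.0):
    draws = [sample_next_word(logits, t) for _ in range(1000)]
    print(f"temperature {t}: 'Birthday' chosen {draws.count('Birthday') / 10:.0f}% of the time")
```

At a low temperature the sketch behaves like plain autocomplete; crank it up and the less likely words start winning, which is where both the creativity and the nonsense come from.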
Instead of cursing hallucinations, recognize that they may well be at the core of what makes ChatGPT interesting. For instance, instead of asking ChatGPT to generate a compelling Tweet (or X post, or whatever Musk calls it these days), ask it to generate 30 compelling posts.
Then, it’s time for you to curate the results and pick the best one. The longer the copy, the more human intervention is required. A Facebook post may only require a quick read-through, while a blog post requires massive rewriting to eliminate any “bad” hallucinations.
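If you would rather script that generate-then-curate loop than paste prompts into a chat window, here is a minimal sketch using the OpenAI Python client. The model name and prompts are placeholders for whatever you actually use, and the curation step stays firmly human.

```python
# Minimal generate-then-curate sketch using the official OpenAI Python client.
# The model name and prompts below are placeholders; adjust them for your own topic.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    temperature=1.0,      # leave some "creativity" in, since a human curates afterwards
    messages=[
        {"role": "system", "content": "You write short, punchy social media posts."},
        {"role": "user", "content": "Write 30 different posts announcing our new AI-driven writing guide."},
    ],
)

drafts = response.choices[0].message.content
print(drafts)  # the human step: read all 30 and keep the one or two that actually convert
```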
ChatGPT shines at AI-driven content. This includes using ChatGPT for ideation, outlines, analysis, first drafts, and editing. It can produce far more ideas than you can, in far less time. Your job is curating the best of ChatGPT’s output while leaving out the rest.
Now that you understand where ChatGPT (and, more generally, large language models) go wrong, you can harness their strengths. It requires going beyond telling ChatGPT to be engaging and copy-pasting the result into a Word document.
True AI-driven content combines the evaluative abilities that humans excel at with the ability of large language models to generate many ideas in a short period of time. One day, the dream of AI-written content that is both hallucination-free and creative may be fulfilled.
Unfortunately, today’s large language models just aren’t up to the task.