OK, full confession. The headline is either a joke or clickbait. No AI tool wrote this post. But artificial intelligence had some ideas (more on that in a bit).
Had I not been chasing clickbait (and trying to make a point), I would have called this post “Frenemies: AI and PR.”
Let me explain.
I started my career in public relations in 1984 with a highly visible Fortune 500 company — Coors. I learned the fun side of PR (like publicizing a Sugar Ray Leonard bout) and the not-so-fun side (like answering union allegations of worker mistreatment) firsthand. In the (many) years that followed, I’ve worked with multibillion-dollar technology companies, startups and everything in between. I’ve also been fortunate enough to start and manage my own agency, working with an incredible array of interesting people and organizations.
Nevertheless, I have read no fewer than a dozen articles in the last month predicting the demise of my career, my company and maybe even my industry.
To that, I say: Not so fast, my friends. Skynet has not become self-aware. Not yet, anyway. At the same time, I truly believe AI in its many forms will disrupt many businesses more than the internet itself did, and it will do so in less time.
Here’s my current take.
PR and AI: May 2023
Full confession: I’m a geek. My Mac is running beta software. Apple’s WWDC is right up there with the Super Bowl for me, and I upgrade my iPhone because I can, not because I need to. When ChatGPT and other AI tools started to hit, I couldn’t wait to experiment and learn.
I use ChatGPT and a handful of other AI tools, most often as brainstorming partners. I might ask an AI tool to suggest headline ideas, or to offer words or phrases more concise than my own, especially if I’ve been working on a big project for a couple of hours and need a boost. And our company has a Grammarly account. Grammarly watches us write in real time, catches typos and punctuation errors and offers myriad writing suggestions.
Even with these tools, I don’t think my job is in any danger yet. Even a simple and straightforward tool like Grammarly doesn’t always get it right, and my writing style leads me to reject more of its suggestions than I accept.
Another obvious reason for confidence: Much of what we do concerns introducing the world to something brand new, and generative AI tools need data to work. ChatGPT, for example, can’t scrape the internet for information about a company, product or service that doesn’t exist yet. Not today, anyway.
I tried two experiments to test my current take on AI.
- First, I went online and pulled a spec sheet for a consumer technology product. I asked ChatGPT to write a news release based on the specifications I put in (important note: I used publicly available data posted online, not anything confidential).
Honestly, the output wasn’t bad. If the release had been a real assignment, was it client-ready? No, but it wasn’t the worst first draft I’ve seen. The weakest part was the quote the AI made up for a company spokesperson. I believe quotes play an important role in news releases: they move the story forward by offering substantive opinions that wouldn’t be appropriate in news writing unless attributed to a person. A CEO can say she believes a certain product is the best on the market, for example, even though such a statement would make a poor choice for a headline. ChatGPT, after probably looking at too many weak quotes online, spit out: “[Company] is committed to developing products that meet the needs of today’s consumers.” Nope.
But, in the end, if I were under a deadline and needed to organize some facts quickly into a rough draft, this might have been a good head start.
- Then, I asked ChatGPT to write my bio without providing any specific input. My information is readily available online, but what I got back was, well, hot garbage. The generated bio was not even close to accurate (my degree is not in journalism, and it is not from the University of Colorado) and full of fluff (“Doyle Albee, a prominent public relations expert, has made a name for himself in the world of media and communications.” Blech.). In this case, the ChatGPT-fueled bio wasn’t even a good start.
In the end, these are great tools that can save time and spark ideas for PR professionals. The danger comes in, as with most professions at this stage, when someone asks an AI tool to do something the human lacks the expertise to evaluate.
For example, if I ask ChatGPT to write a contract for me, I’ll likely get something that is full of legal terms and looks pretty good to me. But I’m not an attorney, so the contract may be missing many elements needed to protect me in the transaction. Much like my news release example, it could be a great first draft for a qualified attorney to work from, but I probably shouldn’t rely on it for an actual transaction.
I’ll make the same case for the news release draft ChatGPT provided. It wasn’t bad, and with my years of experience, I could save some time by using that draft for the initial organization. However, if someone with no experience simply started sending that release to reporters, that person would likely be disappointed with the results (and I could write an entire post about how even a great news release is only one small part of the process required to achieve coverage).
And you certainly won’t be seeing my AI-generated “bio” anywhere.
Just like buying a word processor doesn’t give anyone all the tools needed to write a New York Times bestselling book, AI isn’t a complete replacement for human experience and expertise.
At least, that’s my take in May 2023. Stay tuned.