Max Thake is the cofounder of peaq, the Web3 network powering the Economy of Things.
It’s a scary time to be a writer. Just a few years ago, we could all laugh at AI masterpieces such as “Harry Potter And The Portrait Of What Looked Like A Large Pile of Ash,” but now more and more companies are tapping AI for content creation.
That said, it’s also a scary time to be hiring a writer. Should you run every incoming test assignment through an AI checker? How much can you trust that tool? And how should you approach generative AI in general?
A neo-Luddite approach tolerating no AI use may seem noble, but it goes against common sense and business logic. AI is a tool that enables you to spend less and do more. If you, as a business, ignore it while others don’t, you end up biting the dust. So maybe you need a prompt engineer, not a writer?
I thought about this a lot as we set out to hire a new writer, and we decided that AI is a tool, and a tool needs a wielder who uses it with insight, knowledge and vision. Thus, the journey began.
The First Touchpoint
The hiring process begins with the job description, and there, we explicitly stated that the candidate “should be open to leveraging ChatGPT.” In the age of AI, don’t ignore the tool that has already made such a bang.
We hit the first conundrum when coming up with the tasks for the test assignment. Fundamentally, the one skill we were looking for was writing, the actual writing, not prompt engineering. Still, as using generative AI is a useful skill that could save some time, we also wanted to see how good the applicant was with it, leaving the scope of its use at their discretion.
The challenges that we came up with included writing tweets, converting a press release into a blog post, coming up with a few blog post ideas (just a few sentences), writing an outline for one of them and using AI to write the intro and outro for this outline. It’s a hefty amount, but it covered the bases for what the person would be expected to do. We explicitly allowed the use of AI for all of these, and that came back to bite us.
Game On
Shortly after we posted the job, the responses began to flood in, and we’re beyond grateful to have caught the eye of so many wordsmiths. The test assignment only went to the applicants who successfully passed two interviews, so it took some time before those began to roll in, and once they did, we quickly realized we had a problem. Whether by their own choice or thanks to the references to ChatGPT in the job description and during the interviews, the candidates leaned heavily into it. Or did they?
We simply couldn’t tell, as the content checkers would give us a high probability of AI-generated text, but given the permission to use AI, this was not a disqualifying offense. The problem was that it denied us a chance to evaluate the candidate’s own writing.
With that came our first lesson: When hiring in the era of generative AI, you must clearly and explicitly communicate the intended scope of its use to the candidate—and make sure to make this scope observable in their submissions.
The Pattern In The Patterns
After the initial setback, we slapped a disclaimer on the test assignment, clarifying that, if using AI, the applicant must make sure to add a way to distinguish between AI-generated content and their own content, such as using bold and italics. We also asked them to share the prompts they were using to let us better understand their approach and thinking process.
From there, we had a bit more insight into what was happening under the bonnet, which helped us get a better sense of the next candidates. However, we still saw a tendency to over-rely on generative AI across multiple submissions. Our test assignment included coming up with three ideas for blog posts, and we noted that in a few submissions, they were very similar—which was not a positive.
Here’s another example: We open our blogs with a short intro that gives the reader a TLDR of the piece, structured as three short questions and answers. This section comes before the body of the blog and works as a nod to TikTok-era attention spans.
As we discovered, though, if you feed peaq’s blogs to an AI model to get a sense of their style and voice, it will turn this three-question formula into the structure for the entire blog. The TLDR would be gone, and the body itself would be sub-sectioned with these questions, which does match a pattern found in the texts but kills the entire idea.
This is where human comprehension and ingenuity would have saved the day. In a few cases, though, this didn’t happen. Was it too much trust in the AI’s ability to dissect the patterns in the data and replicate them? Was it us putting too much emphasis on AI? Was it simple negligence? We’ll never know, but there was another lesson: While you may accept that the use of AI is fair game, make it clear to the candidate that it’s always their own contribution and effort that are in the spotlight.
Epilogue
So, where does this bring us?
On the one hand, AI as it is today lacks many of the features that make or break a writer. It’s a tool that can be handy, but it's not a replacement. On the other hand, though, Bard, Claude or ChatGPT may very well knock on the door and beg to differ. Humans are really, really bad at anticipating and planning for exponential change. And AI’s trajectory? It’s pretty exponential.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.