Do our language choices make AI sound more human and intelligent than it currently is?

Catherine Wells, Associate Creative Director

A colleague recently shared a learning resource for decoding AI technical jargon. As a copywriter whose job is to make the complex sound simple, I found myself questioning why the language around AI requires a glossary in the first place.

With AI platforms such as ChatGPT offering a visually pared-back interface and intuitive functionality purposefully designed to be accessible, surely the language around AI is an intentional choice as well? If so, why hasn’t it been made simpler?

It sent me down a rabbit hole of AI words, but ultimately what interested me most – and what I’ll cover here – wasn’t the impenetrable tech-speak, but the weirdly humanising language used to describe something that is, by its very definition, artificial.

To hallucinate is to be human

As a desk-based creative, I’m probably one of the few people who could hallucinate at work and see an improvement in performance. For teachers and truck drivers, surgeons and social workers, hallucinating on the job would have serious consequences – yet for large language models (LLMs), it’s almost worn as a quirky badge of honour.

IBM describes AI hallucinations as when an LLM “perceives patterns or objects that are non-existent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate”.

So, why not rebrand ‘hallucinations’ as ‘errors’ instead? While equally negative in its connotations, ‘error’ is a fitting, familiar word in this context (think: system errors, user errors) and its banality would draw less attention to AI’s limitations than the hyperbolic ‘hallucination’.

The cynical explanation is that, for those pushing adoption of the tech, it’s more beneficial for AI to be perceived as intelligent than as accurate – and that the association with human traits (such as hallucinating) outweighs the negative impact of discussing the downsides.

As Usama Fayyad, Executive Director of the Institute for Experiential Artificial Intelligence at Northeastern University, explains regarding chatbots:

“When you say hallucinations, you’re attributing too much to the model … You’re attributing intent; you’re attributing consciousness; you’re attributing a default mode of operating rationally; and you’re attributing some form of understanding on the part of the machine.”

Or, giving the benefit of the doubt, ‘hallucination’ could be considered the right descriptor, as IBM go on to explain:

“…from a metaphorical standpoint, hallucination accurately describes these outputs, especially in the case of image and pattern recognition (where outputs can be truly surreal in appearance). AI hallucinations are similar to how humans sometimes see figures in the clouds or faces on the moon.”

Whichever side you lean towards, the irony remains that while hallucinations are often seized upon by those fearful of AI as a reason not to trust it, on a broader, subconscious level, every mention of them only reinforces the collective perception of AI as having human capabilities.

When is a prompt not just a prompt?

‘Prompt’ is another word we’re suddenly hearing everywhere. Having never prompted anything before in my life, I’m now regularly prompting prompts to create better prompts.

It makes sense that new technology would usher in new terminology, but as with hallucinations, the vocab already exists. We’ve been programming, coding, feeding, seeding, scripting, instructing, requesting, formatting and commanding for decades… so why are we now prompting?

When I asked AI itself where the phrase originates, I was given this summary (via Google’s AI Overview):

“In AI, the term ‘prompt’ is used because it signifies providing a hint, cue or suggestion to guide the AI model towards a desired output. Think of it as giving the AI a starting point or a direction, similar to how a prompt in a creative writing exercise encourages a writer to begin.”

Again, it feels like an attempt to humanise the tech. Like an absent-minded professor, AI only needs a gentle nudge in the right direction to unleash its genius. The human input is minimised to a mere ‘cue’, while AI is deemed sentient enough to compare its process to creative writing.

It’s predicted that AI will reach that level of ability in the near future, but in its current form, reducing the human input to a ‘prompt’ feels like an overclaim – because what use would models like ChatGPT be if we weren’t here to prompt them, thoroughly, across several iterations?

While in some industries (such as automotive) AI can be genuinely autonomous, my experience as a copywriter so far is that the thinking that goes into prompting, refining and fact-checking the output makes AI a very useful tool – but one that, overall, doesn’t speed up the creative process or reduce the human effort (read: cost) required to reach the same result.

Learning and understanding

I’m lumping these words together because they share a common thread: they’re familiar, human concepts used to explain the complex process of tokenisation (breaking text into tokens, the building blocks LLMs use to function) in a way that’s instantly gettable.
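
To make tokenisation a little more concrete, here’s a minimal sketch in Python – assuming OpenAI’s tiktoken library, which exposes the same token encodings used by models like ChatGPT:

    import tiktoken

    # Load a token encoding used by recent OpenAI models
    enc = tiktoken.get_encoding("cl100k_base")

    # Text in, numbers out: the model never 'sees' words, only token IDs
    tokens = enc.encode("To hallucinate is to be human")
    print(tokens)       # a list of integers
    print(len(tokens))  # how many tokens the sentence costs

    # Decoding maps the numbers straight back to text
    print(enc.decode(tokens))

Everything an LLM ‘learns’ is statistical patterns across sequences of numbers like these – worth holding in mind whenever the word ‘understanding’ comes up.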

But the truth is that, despite being loosely modelled on the human brain, AI doesn’t learn or understand in a way comparable to humans. So why do we use these comparisons? Is it just human nature to use the easiest available metaphor to explain the intangible? Or are humanisms baked into AI’s creation? Ian Mitchell, Director of CTO Advisory, thinks so:

“AI appearing so human is no coincidence. This is because AI has been programmed by humans. It’s human nature to make things in our own image, it makes us feel more comfortable and secure. This is what makes tools like ChatGPT sound alive.”

Philosopher Damian K. F. Pang even goes so far as to claim AI is human, inasmuch as we’re the source of its content:

“For chatbots, the sample data comprises a large number of snippets from conversations that were fed into the system. All these snippets have been created by real people. In that sense, the content is indirectly generated by humans – and merely reassembled by AI – which is one of the main reasons it feels so real.”

Reframing the use of human language to describe AI from an encroachment on our territory to a homage certainly feels more palatable, even if the real-world implications remain the same.

One last word: words matter

Whether it’s a deliberate move by tech companies to overstate AI’s abilities or people understandably grasping at familiar language to explain unfamiliar concepts, it’s fair to say that the language of AI mimics the human condition – and, intentionally or not, that shapes the way we think about and relate to the technology.

To help retain my own humanness – and professional usefulness – in the era of AI, here are the two things I’ll be taking forward when it comes to the language surrounding it. Will you be doing the same?

1. Mind your language

While it’s important to use the correct AI terminology to clearly communicate and appear credible in your understanding, could you use phrases that dial down the humanness and more accurately reflect AI’s current capabilities without losing the meaning? For example, did an LLM’s conscious mind really hallucinate, or did it simply take two tokens and two tokens and make six fingers?

And vice versa, I’ll be checking my own speech and writing for signs of AI-isms creeping in and making a more concerted effort to ensure the information I’m putting into ChatGPT, and in turn training it with, is as unbiased as it can be.

 

2. Don’t downplay your expertise

Yes, AI is impressive. The ta-da moment when you’re returned a great result is a high, and it can be tempting to try to recreate that with colleagues and clients when explaining how you created a piece of work. But if you fine-tuned your prompt and pored over the output for hours, you too are very impressive – and not acknowledging that expertise (even when it doesn’t feel like expertise!) has consequences.

In the short term, clients may believe the tech is more capable than it really is, leading to an overreliance on AI in the creative process and disappointing results that require rework. In the long term, it only feeds the narrative that we humans are replaceable.
