Written by: William Robertson – The Pro Forum Community of Practice

Technology continues to march forward, bringing all manner of advancements and new ways of performing everyday tasks. The last couple of years have seen the rise of AI chatbots, and more recently a rapid advance in what these tools can do. Many businesses have rushed to implement this technology, some, it must be said, too eagerly, while others have expressed caution about its increasing use.

However, AI chatbots remain considerably flawed and lack the human judgement needed to create quality content. Even so, many businesses and individuals are wholly outsourcing their writing to chatbots and publishing the generated content as is, which can have major repercussions.

While they may seem obvious, two things about AI chatbots are worth remembering: they are not human, and writing is more than stringing words together, especially in a formal or business context. How chatbots work demonstrates this best. Chatbots like ChatGPT attempt to interpret a given prompt and then produce the string of words that, statistically, best answers it. The response is based on the data the chatbot has been trained on, which includes, among other sources, material scraped from the internet. Because AI generation rests on word prediction, it lacks human understanding, comprehension, and context, all vital characteristics in writing, which is why AI-generated content often falls short in terms of syntax, tone, and information accuracy.
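To make the idea of word prediction concrete, here is a minimal, purely illustrative sketch. Real chatbots use large neural networks trained on billions of examples, not simple word counts, but the core principle is the same: the system picks a statistically likely next word based on patterns in its training text, with no understanding of what the words mean. The training sentences below are invented for the example.

```python
from collections import Counter, defaultdict

# Invented training text for illustration only.
training_text = (
    "the court ruled in favour of the plaintiff "
    "the court ruled against the defendant "
    "the court adjourned for the day"
)

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("court"))  # prints "ruled": the most common continuation
```

The predictor happily continues any prompt it has seen before, and says nothing at all about whether the continuation is true. Scaled up enormously, that is the same gap that lets a chatbot produce a fluent but entirely fictitious legal citation.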

That last point is perhaps the most notable pitfall of using AI-generated content, and also the one that can have the most significant consequences. In June, two US lawyers were fined $5,000 and had their case thrown out after their legal filing, prepared using AI, cited non-existent cases as precedent. The lawyer who conducted the research, having never used ChatGPT before, relied on it exclusively, falsely assumed it would not fabricate information, and did not bother to verify what it generated. Spreading fake news or misinformation, knowingly or not, will at the very least damage the reputation of your business, but it can also harm those outside your business and have a corrosive effect on the fabric of society. The example above demonstrates this further: the AI-generated material attached real judges to the fake legal opinions cited in the case.

While the consequences are not as dire, poor syntax and tone in your business's content look unprofessional and can also cause reputational damage. Chatbots tend to write in a stop-start, robotic way, giving content an unnatural tone that can be off-putting for your customers. AI content can also be very generic, meaning your business can lose its unique voice and get lost in the crowd. On top of this, Google applies SEO penalties to auto-generated content.

AI-generated content can still play a role and be of great assistance in the workplace, from correcting grammar to helping draft content. But it should not be relied upon, and important content should still be created by humans or, at the very least, reviewed by them.


NOTE: The content of this article is intended to provide a general guide to the subject matter, and specialist advice should be sought about your specific circumstances. The content must not be relied upon as legal, technical, financial or other professional advice.