Generative AI - the 3 elephants in the room

In the past two months we have met with over 50 CEOs and senior leaders across every imaginable industry. When it comes to generative AI, what do the executive teams of super funds, retailers, financial planners, manufacturers, transportation operators and lawyers have in common? The same three questions, and they are… the elephants in the room.

[AI-generated image: three elephants]

Elephant #1 - How can we use ChatGPT if we can’t trust the answers?

This elephant refers to the well-publicised instances where ChatGPT will “hallucinate” and invent answers to a question or prompt. Just this week the media has been enjoying the story of two New York lawyers who used ChatGPT to write a court brief, which cited six nonexistent cases.

This is a big elephant, and yes it’s real. The elephant itself is not a hallucination! 

Large Language Models such as ChatGPT do not work the way much of the internet does. A search engine retrieves information from a database and displays it back to us; a language model instead generates text by predicting, word by word, what is most likely to come next. This means an answer can sound fluent and confident while being entirely made up.

As such, an important way to avoid hallucinations is educating your team, and applying common sense. I hope that lawyers don’t cut-and-paste case law from Wikipedia, Reddit or Facebook. Likewise, don’t trust everything that ChatGPT provides you.

A lesser-known but very effective method of reducing or removing hallucinations is to give ChatGPT your own knowledge base from which to retrieve information. Apart from giving ChatGPT ‘focus’, this brings other benefits: you can include up-to-date content (ChatGPT stopped ‘learning’ in September 2021), and you can include your own proprietary information if you wish. In effect, you can have your own internal private ChatGPT.
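
To make this concrete, here is a minimal sketch of that pattern (often called retrieval-augmented generation), using the 2023-era OpenAI Python library. The document snippets, model names and prompt wording are our own illustrative assumptions, not a production recipe:

```python
# Minimal sketch of "ChatGPT with your own knowledge base"
# (retrieval-augmented generation). Assumes the 2023-era openai
# Python library (pip install "openai<1" numpy) and an API key in
# the OPENAI_API_KEY environment variable. Documents, model names
# and prompt wording are illustrative only.
import numpy as np
import openai

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm AEST, Monday to Friday.",
]

def embed(text):
    # Convert text to a vector so we can compare meaning numerically.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

doc_vectors = [embed(d) for d in documents]

def answer(question):
    q = embed(question)
    # Cosine similarity: find the document closest in meaning to the question.
    scores = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q)) for v in doc_vectors]
    context = documents[int(np.argmax(scores))]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the context below. If the answer "
                        "is not in the context, say you don't know.\n"
                        "Context: " + context},
            {"role": "user", "content": question},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(answer("How long do customers have to return a product?"))
```

The idea is simply that the model answers from the context you supply rather than from its own (sometimes faulty) memory; real systems would split documents into many chunks and use a vector database rather than an in-memory list.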

There are also advanced prompting techniques such as step-by-step reasoning, Chain of Thought, Tree of Thought and “SmartGPT” (watch the excellent video).
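
As a simple illustration of the step-by-step technique, here is a hedged sketch using the 2023-era OpenAI Python library; the question and model choice are placeholders:

```python
# Illustrative only: appending the "step by step" instruction to a prompt,
# using the 2023-era openai Python library (pip install "openai<1").
import openai

question = "A jacket costs $120 after a 25% discount. What was the original price?"
prompt = (question
          + "\n\nLet's work this out in a step by step way"
          + " to be sure we have the right answer.")

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp["choices"][0]["message"]["content"])
```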

Furthermore, you could try Bing, ChatGPT in Browsing mode or ChatGPT plug-ins – all of which have access to the Internet and so are much less likely to invent answers to your questions.

What you should do today: educate your company on the limitations of Large Language Models such as ChatGPT, and put guidelines or a policy in place for their use. Experiment with advanced prompts such as “Let's work this out in a step by step way to be sure we have the right answer”. Watch the SmartGPT video.

What you might do tomorrow: consider what internal company information could be made more accessible and usable if you had your own private “ChatGPT for your documents”.


Elephant #2 - Will OpenAI steal my private company data?

There has been a lot of misinformation on this topic. The simple answer is no, not if you don’t want it to. Be aware, though, that by default any information you type or paste into ChatGPT can be used as training data, to improve the system for other users.

But if your employees are considering using ChatGPT for anything other than public-domain content, you should instruct them to switch off training in ChatGPT settings (under Data Controls); conversations created while training is disabled will not be used to train ChatGPT or other OpenAI models. Alternatively, you can submit OpenAI’s opt-out form, after which new conversations will not be used for training.

If you are using the OpenAI API to incorporate GPT into your own applications, your data is not used to train the AI model by default. Your data is kept private.

OpenAI have also just released their Security Portal, which provides a lot of information about their data handling and security processes.

There are also enterprise-ready alternatives to OpenAI, such as the newly announced Amazon Bedrock service, which provides a choice of foundation models (LLMs) via an API, along with AWS tools for security and scalability. Other alternatives are Microsoft’s Azure OpenAI Service and Google’s Vertex AI.
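
For a sense of what this looks like in practice, here is a hedged sketch of the same kind of chat call routed through Microsoft’s Azure OpenAI Service, again using the 2023-era OpenAI Python library; the resource name, deployment name, key and API version shown are placeholders you would replace with your own:

```python
# Illustrative sketch: a chat call via Azure OpenAI Service, using the
# 2023-era openai Python library (pip install "openai<1"). The resource
# name, deployment name, key and API version below are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AZURE-OPENAI-KEY"

resp = openai.ChatCompletion.create(
    engine="YOUR-GPT35-DEPLOYMENT",  # the model deployment you created in Azure
    messages=[{"role": "user",
               "content": "Draft a short note reminding staff of our data handling policy."}],
)
print(resp["choices"][0]["message"]["content"])
```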

What you should do today: for company use, staff should switch off training in ChatGPT. Your IT team should review https://trust.openai.com

What you might do tomorrow: if you want to incorporate ChatGPT-style functionality into your business workflows, look into data privacy and security via APIs from OpenAI, Google, Microsoft or Amazon (AWS).


Elephant #3 - Who owns the copyright of text and images I generate?

This is an elephant with a curly tail, as there are many unanswered questions. We discuss this with clients in two ways:


1. Copyright of inputs into the AI model

Large Language Models such as ChatGPT (GPT-4) have been trained on vast swathes of content from the Internet. Image generation apps such as Stable Diffusion and Midjourney have been trained on billions of images from the Internet and image databases. The AI model has ‘learned’ from this training data, and can then be used to create novel outputs.

Whether the companies building these models have breached copyright is hotly debated, and for the most part depends on the definition of “Fair Use”. There are court cases being heard now, and more will follow.

Adobe are making a big deal of the fact that their new image generation app, Adobe Firefly, is “commercially safe”. The model for Adobe Firefly was trained on Adobe Stock images, openly licensed content and public-domain content (where copyright has expired), and it is designed to generate images safe for commercial use. Currently, the image quality of Adobe Firefly lags behind that of Midjourney and Stable Diffusion, but we expect it to improve quickly.

2. Copyright ownership of outputs from the AI model

A question we are asked often is: if you generate an image using Stable Diffusion, can you own the copyright of that image? The extent to which copyright can safeguard creations made with AI remains uncertain. In Australia, “human authorship” is typically a prerequisite for copyright protection. Consequently, the degree of human involvement in each instance will likely determine whether copyright applies.

There are yet to be any cases that put this to the test. Is writing a text prompt, and selecting the best image from a selection, enough human involvement? What if the image is then manipulated by a (human!) designer? The lines of human authorship are yet to be drawn.

Your company might decide that the images you generate for company use do not necessarily require copyright protection. You should consider the risks for different use cases.

Also, it’s important to respect potential copyright in the images you generate. For example, you can create a realistic image using the prompt “bottle of Coke”, but naturally you should not use it in your next social post. Just because you created it does not mean you now own it.

Please note - we are not lawyers, and this is not legal advice.


What you should do today: if you are nervous about commercial use but want to get started with AI image generation, try Adobe Firefly.

What you might do tomorrow: keep an eye on progress in this space; we expect some legal cases will set a precedent before too long and give brands some comfort about using AI images commercially.

Now that we have addressed the elephants in the room, hopefully you can be more comfortable educating others in your business about the safe, sensible and ethical use of AI. If you have any questions, or would like to suggest a name for one of our pet elephants, please get in touch.




This article is for informational purposes only and should not be taken as legal advice. If you have specific legal questions or concerns, please consult with a legal professional.

