5 actions for the responsible use of generative AI


Last week, I had the pleasure of presenting at HumAIn, which had an awesome lineup of speakers discussing the impact of generative AI on the advertising and marketing world. Below is a quick summary of what I presented: essentially, what we’ve learned at Time Under Tension through our work, R&D, and general immersion in the generative AI space about the responsible use of generative AI.

If you’re not acquainted with the topic and the role we all have to play, I’d definitely recommend watching The AI Dilemma (from the people behind the Netflix hit The Social Dilemma).

The AI Dilemma, from the Center for Humane Technology

1. Know your tools

With the explosion of generative AI tools, it’s imperative to know how these tools work (e.g., what data they were trained on) before using them commercially, so you understand their limitations and how to work with them.

In our Connect4GPT experiment, we learned that ChatGPT can sound very good at something while being very bad at it at the same time!

Action: Safe to say, the biggest issue isn’t the tools themselves but how well we understand them, so taking the time to understand them and educating your stakeholders (from staff to customers) is vital. We’ve written more about handling the elephants in the room here.

2. Behave ethically

The gold rush to launch generative AI-enabled apps is resulting in significantly damaging shortcuts, like the problematic Snapchat My AI.

In our I Spy With My AI prototype, we quickly saw how easy it is to take ChatGPT off its guardrails.

“Just because you can doesn’t mean you should” is a good mantra to follow here. It won’t be long before legislation dictates how we roll out generative AI.

Action: Establish an ethics framework for the adoption of generative AI in your organisation.
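To make this concrete, here is a minimal sketch (not from the talk) of one defensive pattern an ethics framework might mandate: screening user input with OpenAI’s moderation endpoint before it reaches the chat model, rather than relying on the model’s built-in guardrails alone. The function name, model choice, and refusal message are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: add your own guardrail layer instead of trusting the
# model's built-in one. Uses the OpenAI Python SDK (v1+); the app
# structure around it is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def respond_safely(user_message: str) -> str:
    """Refuse to process input that the moderation model flags."""
    moderation = client.moderations.create(input=user_message)
    if moderation.results[0].flagged:
        # Refuse rather than letting a jailbreak attempt through.
        return "Sorry, I can't help with that request."

    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=[
            {"role": "system", "content": "You are a helpful, safe assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content
```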

3. Be bias aware

While we’ve made (arguably nowhere near enough!) progress on Diversity and Inclusion over the past decade, these AI models have been trained on data as old as the internet, or even older, meaning biases from past societies are real and intrinsic in these models.

Action: When implementing generative AI solutions, we need to be more bias-aware than ever before to ensure we don’t regress (e.g., try asking Midjourney for a “realistic image of a group of nurses”).

4. Practice transparency

The importance of demonstrating genuine transparency about how generative AI is being used is something Levi’s learned the hard way when trialling the replacement of human models with AI-generated ones.

In building MenuGPT (a prototype that scans a menu and asks ChatGPT to come back with the healthiest options), we learned the importance of clearly stating and labelling generative AI outputs.

Action: Establish rules for the labelling and identification of generative AI content presented to users.
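As a minimal sketch of what such labelling rules might look like in code (the wrapper class, label text, and model name below are hypothetical, not a standard), every piece of generated text could carry an explicit AI-content label before it reaches the user:

```python
# Minimal sketch: wrap model output so it is always rendered with a
# clear AI-content label. The format here is illustrative only.
from dataclasses import dataclass

AI_LABEL = "Generated by AI. May contain errors; please verify."


@dataclass
class LabelledOutput:
    text: str    # the model's raw output
    source: str  # which model produced it, e.g. "gpt-4o-mini"
    label: str = AI_LABEL

    def render(self) -> str:
        """Return the output with its AI-content label attached."""
        return f"{self.text}\n\n[{self.label} Source: {self.source}]"


# Hypothetical MenuGPT-style usage:
suggestion = LabelledOutput(
    text="The grilled salmon salad looks like the healthiest choice.",
    source="gpt-4o-mini",
)
print(suggestion.render())
```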

5. Work as one

The final action we highlight is the need for organisations to act as one in driving the generative AI agenda forward. We all have a responsibility to demand that our organisations guide customers and staff alike on how generative AI is being governed.

Action: Use one of the growing number of frameworks for the responsible and ethical organisational rollout of generative AI as a springboard (like ITEC or the Australian government’s AI guidelines).

In summary

Feedback from the presentation gave me confidence that there are as many people looking into the ethical and responsible side of using generative AI as there are people looking for those golden use cases.

If your organisation is looking for help driving forward with generative AI, Time Under Tension is here to help 😊
