Ethics top of mind as AI takes on a life of its own, literally
Going boldly where no human has gone before
It seems a day doesn’t go by without another AI ethics discussion raging. Years ago we were talking about the choices AI-powered driverless cars might need to make (swerve into the pram, or the group of adults?).
Indeed, into the world of sci-fi we appear to now be fast venturing. In one Star Trek episode, Data (an AI life-form himself) develops a theory that the Exocomps are sentient beings, leaving humans and AI pondering the rights of such beings.
Humans questioning the sentience of AI is not an entirely unforeseeable future for us now, with some arguing that this has already happened.
Sentience today?
Software engineer Blake Lemoine worked with Google’s Ethical AI team on the Language Model for Dialogue Applications (LaMDA). He hypothesized that LaMDA is already a living being (and was ultimately fired for publicising his views).
In this fascinating interview he shares some of his broader thoughts on the state and future of AI, with a key notion that the future is now unpredictable given the speed of change.
So, into the unknown we go.
Standards needed
In light of all the new generative AI tools (beings!?) available today, we find ourselves asking new types of questions.
Who owns the content generated by AI? Could I be liable for using AI-generated content? What if I don’t want to feed the AI machine with my data? Is the content produced biased (yes, it is!)?
The smarter AI gets (the closer we get to ASI territory), the more these questions will be asked and will need to be answered.
One thing becoming clearer with these new questions is the need for a globally recognised (and adhered to) code of ethics for the application of AI. And the necessity accelerates the more AI is able to do (soon in our place).
We’ve seen this before, back in the early days of the wild internet, when Flash websites, bad design, and lack of knowledge meant that vast amounts of web content were inaccessible to many. And so the W3C standards were born and cemented.
Likewise today, the need for ethical frameworks increases with every new use case coming to market. It’s no surprise that many frameworks are now emerging; for those wanting to delve into the ethical considerations of AI, there are plenty of resources like these:
AI Ethics Framework by the Australian Government: The Australian government released a set of eight AI ethics principles in 2019, aimed at guiding responsible AI development and usage. These principles cover areas such as human-centered values, fairness, privacy protection, transparency, accountability, and ensuring AI systems are safe and secure.
AI Ethics Guidelines by the European Commission: These guidelines outline ethical principles for AI, including transparency, fairness, non-discrimination, privacy, environmental sustainability, and accountability.
Google’s AI Principles: Google has established a set of principles to guide their AI development, including fairness, privacy, safety, avoiding harmful applications, and ensuring scientific rigor.
OpenAI Charter: OpenAI’s Charter outlines principles to ensure that AI benefits all of humanity, including broadly distributed benefits, long-term safety, technical leadership, and cooperative orientation.
Partnership on AI: This partnership between technology leaders and researchers works on developing best practices for AI, focusing on areas such as safety, fairness, transparency, and collaboration between people and AI systems.
Asilomar AI Principles: These principles provide a roadmap for AI researchers and developers to ensure the beneficial development of AI. Key areas include research cooperation, safety, transparency, and value alignment.
IEEE Ethically Aligned Design: This framework from the Institute of Electrical and Electronics Engineers (IEEE) focuses on embedding ethical considerations into the design process. It emphasizes human well-being, transparency, accountability, and avoiding biases.
From the bot (AI generated)
Organizations operating in the AI space face some complex ethical dilemmas. What if our algorithm learns in ways different to how we anticipated, or creates unintended outcomes that promote discrimination?
The potential for serious legal issues and substantial reputational damage for those involved is growing on a daily basis, so it’s no surprise that ethics has become a key part of this process.
It’s key to have a comprehensive ethical framework in place when using AI. This framework should address things like AI bias, privacy, trustworthiness, security, transparency, accountability and any potential risks associated with using the tools.
Additionally, it’s equally important to think about the ‘real world’ implications of deploying AI-powered products and services. After all, AI isn’t just about machines; people interact with them, too.
So it’s also important to think about how AI should be designed to go beyond mechanics and consider social implications such as its impact on jobs, income inequality, and even governance.
These topics go hand-in-hand when we consider the importance of ethics as AI takes on a life of its own.