First and foremost, this isn't about sentient robots getting sick of their repressive human overlords and demanding equal pay and holiday. The martech narrative hasn't yet strayed into allegory quite like that, so we're safe in that regard.
Instead, Ethical AI focuses on the implications and repercussions of the use of AI systems. Rather frustratingly for me writing this, Ethical AI doesn't actually have a concrete definition, and basically everywhere you look has its own take on the matter.
What I'm going to try to do here is sort-of amalgamate all of the best definitions, as well as throw in some of my own wonderful musings and hey, maybe we'll all come out of this better and more well-rounded people.
Probably not though. Just an article about robots, innit?
By now you've probably all heard of the scenario in which a driverless car can either hit an old person, or a child, or crash into a tree and kill the passenger. It's a moral quandary that has baffled people for years; which life is more valuable, if any at all?
That is just a small branch of Ethical AI and, truthfully, it doesn't really apply to the world of Digital Marketing. Ultimately, very few of our jobs involve life-and-death situations, so Ethical AI, with regards to martech, is a little different.
Like I've mentioned a million times now, the meaning of Ethical AI is nebulous. However, in a martech sense, there are only a few things it can mean.
Primarily, Ethical AI refers to the implications of its use for customers and staff. There have been some pretty bad results when it comes to testing machine learning/AI, such as a few years back when Microsoft launched its AI Twitter account, called Tay.
Tay was supposed to learn from her interactions with real people, but because people are rubbish, she ended up tweeting pro-Nazi propaganda as well as saying that 9/11 was an inside job. Unsurprisingly, Tay was shut down after 16 hours.
That was the least offensive tweet I could find from Tay...
Concerns also stem from the moral standpoints of the people responsible for creating AI systems: once implemented, a system could inherit less-than-desirable tendencies if its creator was that way inclined.
Furthermore, AI depends on machine learning, and thus can be moulded by the interactions it has with customers, clients, and employees.
Generally speaking, people complaining about a product or service aren't in a particularly good mood, so if you have an AI chatbot dealing with those qualms, it could become accustomed to some unsavoury behaviour.
It all really depends on when and how the Ethical AI is implemented. In an industrial manufacturing application, it will likely focus on safety, whereas in a public service, equity and fairness would take priority.
Kimberly Nevala, a Seattle-based AI strategic advisor at SAS, said:
"Principles aside, enterprises are already — at one level or another — held responsible for the products and services they deliver.
"Disagreements exist regarding whether existing standards are high enough but this doesn't negate the fact they exist."
Another example of the necessity of Ethical AI is in the video above. The robot has been told to open the door and make its way to the other side, seemingly no matter what obstacles it finds in front of it.
Naturally this has nothing to do with marketing, but you need only wonder what would have happened had a child been playing around with that door when the robot forced its way through.
Apply this thought to an AI system being told that it categorically must close a sales loop or get someone onto a website landing page via various digital marketing methods, and then things could get really out of hand.
A client could have had a personal bereavement and emailed to say that they won't be available for some time. If your AI system has their number, email address or any other means of contact, it won't take someone dying as a decent excuse not to complete its task.
It's a mess, and that's what Ethical AI is aiming to fix, albeit slowly, because orchestrating this sort of thing is really difficult on a coding level, and close to impossible on an ethical basis.
Singularity Hub's recent coverage of a study in Nature Machine Intelligence revealed that there are a few major moral themes and guidelines common across various institutional AI rules.
- Transparency: Manufacturers and vendors should always make the decision-making mechanism of an AI device transparent to users. This approach aims to prevent harm against humans and protect fundamental human rights.
- Nonmaleficence: Nonmaleficence refers to “doing no harm.” AI algorithm designers should ensure that AI decisions don’t lead to physical or mental harm to users.
- Justice: Justice or fairness refers to the practice of monitoring AI to prevent it from developing bias, as shown when Amazon's AI recruitment tool learned to prefer hiring men over women.
It also refers to ensuring that AI systems are made accessible to all races and genders. The principle also entails taking a more sensitive approach to replacing jobs with AI-powered technologies.
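To make the "justice" principle above a little more concrete, here's a minimal sketch of one common bias check: comparing a model's selection rates across applicant groups (often called demographic parity). The data, group names, and tolerance are all illustrative, not taken from Amazon's system or any real tool.

```python
# Sketch of a demographic parity check on a hypothetical hiring
# model's decisions. All numbers here are made up for illustration.

def selection_rate(decisions):
    """Fraction of candidates the model shortlisted (1 = yes, 0 = no)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Shortlisting decisions split by applicant group (hypothetical data)
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1],  # 4/6 shortlisted
    "group_b": [0, 1, 0, 0, 1, 0],  # 2/6 shortlisted
}

gap = demographic_parity_gap(decisions)
print(f"Parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, would be set per use case
    print("Warning: model may be favouring one group")
```

A real audit would use far more data and more than one fairness metric, but the idea is the same: you can't fix bias you never measure.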
Generally, Ethical AI will most likely become important to you at some point in your professional and personal life, but it will likely be another ten or so years before it becomes well and truly mainstream.
For now, people prefer the human touch, and the ethical issues raised are enough to keep anyone busy for quite some time.