
How to ensure ethical AI without compromising creativity in tech

Last night I was fortunate to attend the SODA Social - Ethics in AI (artificial intelligence) meetup in London. The discussion format was a panel made up of people with a range of expertise and experience in working with AI. For someone like me, who has introverted tendencies and generally hates being in large groups of strangers, the meetup concept is terrifying. But since developing inclusive technology is a topic I am particularly interested in, I decided to challenge myself and go along. I’m so glad I did.

Can we frame the discussion so that we develop AI in a way that doesn’t harm creativity in tech?

What’s that image? This post’s feature image represents drug development, a throwback to my chemistry degree. If AI is our drug, what can we learn from other industries with a high impact on humanity? Though those systems are by no means perfect, is there something we can take from this model to start creating a wireframe for the discussion?

Discussion Points: How can we frame the discussion on ethical AI?

Now I could happily discuss each of the themes I pulled out from the meetup for hours, but I’ll try to be concise.


How do you make ethical AI and ensure we are developing inclusive technology?

Algorithmic bias, where an algorithm systematically favours one group or scenario over another, producing, for example, misogynistic or racist outputs, is a reasonably well-known phenomenon among data scientists. Though it may not always present in obvious ways, such as difficulty recognising non-white faces, it’s something we need to be aware of. How can we prevent the biases we all have as humans from creeping in, and instead use AI to push forward a more inclusive agenda?
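To make the idea of bias in an algorithm’s outputs a little more concrete, here is a minimal, hypothetical sketch of one simple check data scientists use: comparing a model’s positive-outcome rates across groups (a "demographic parity gap"). The data and function names below are illustrative assumptions, not anything discussed at the meetup.

```python
# Minimal sketch: checking a model's outputs for group-level bias
# via a demographic parity gap. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(predictions_by_group):
    """Difference between the highest and lowest selection rates
    across groups; 0 would mean perfectly equal treatment."""
    rates = [selection_rate(p) for p in predictions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical predictions from a hiring model, split by group.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% selected
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 25% selected
}

gap = demographic_parity_gap(predictions)
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags possible bias
```

A check like this only surfaces a symptom, of course; deciding why the gap exists and what to do about it is exactly the human, team-level work discussed below.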

This was something that the panel discussed at length. Above all, I think we can all agree that the best way to ensure inclusive AI is to create it in an inclusive way. If we actively work to increase diversity in our teams and take account of more voices, we will naturally have a better chance of reducing the statistical significance of any individual bias. This is something we can all take accountability for. It should be part of our commitment as global citizens to developing inclusive technology.

How do you program for human values?

Do we want to have AI responsible for deciding what is and isn’t in line with human values? My view is no. We shouldn’t be asking machines to determine these things on our behalf. From this perspective it is then up to us to define where we do and don’t use AI.

One of the most interesting discussions in this space I have found is AI-powered weapons. On the pro side, there is the opportunity to increase accuracy and reduce the risk to life from sending troops into combat. On the negative side, can we rely on a machine to make the choice of who lives or dies? Will it be able to recognise the innocent civilian? AI is still a program built by a person, a person who makes human errors just like everyone else. Do we risk deploying weapons that could have bugs in the code?

What makes us human and the machine a machine?

I think the above points bring us nicely onto the next topic of ‘what makes us human’. How do we define our own humanity? I think the point made by the panel on this was a really great one. Humanity is creative, it has empathy and emotional intelligence to react differently within context. Now I know many of you will say, and rightly so, that AI systems already have shown the ability to mimic some of these traits. Does that make them human though? I think the key word here is mimic. We are a long way off having systems that can display these traits across the broad spectrum of scenarios that humans do.

Do we understand our own data footprint and how this is used?

For me this is a huge topic and one that is core to my reasoning for setting up this blog. I do not believe the majority of people have a true understanding of their data footprint. Equally, I do not believe that they understand how it is created or used. To the point made at the meetup, we do just throw our data away. It is recorded constantly in most of our daily interactions. There is a significant gap between those that know this, those that accept this and those that genuinely don’t have a clue. The latter are most vulnerable to the negative side of big data systems.

I don’t want to go deep on this now - I will cover it separately - but it is something we will need to address if we plan to develop inclusive technology. It will be necessary for defining ethics in AI.

Who is accountable for the real world impact of AI powered systems?

This was one I picked up with some incredible women I was talking to after the panel. Following that discussion, my takeaway was that companies often rely on the consumer’s own understanding of AI and machine learning when developing features or services. The example discussed was that of a chatbot. Many of us will understand that when talking to customer services online we are often in fact talking to a bot. But not everyone would.

This assumes a level of knowledge of chatbots and how they work. For those that don’t understand and then find out, it is a trust buster. With any new technology we have a responsibility to ensure the consumer has a ‘good’ understanding of it, or at the very least an awareness. This is key for developing inclusive technology.
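One practical way to close that trust gap is for the bot itself to disclose its nature before the conversation starts. As a minimal, hypothetical sketch (the wording and replies below are invented for illustration, not from any real product):

```python
# Minimal sketch: a customer-service bot that discloses it is a
# machine on first contact, so the consumer never has to guess.
# All names and canned replies here are hypothetical.

DISCLOSURE = (
    "Hi! I'm an automated assistant, not a human agent. "
    "Type 'human' at any time to be transferred to a person."
)

def respond(message, first_contact=False):
    """Return the bot's reply, always disclosing on first contact."""
    if first_contact:
        return DISCLOSURE
    if message.strip().lower() == "human":
        return "Transferring you to a human agent now."
    return "I can help with orders, returns, and account questions."

print(respond("", first_contact=True))
print(respond("human"))
```

The design choice is the important part: disclosure is unconditional and up front, rather than buried in terms and conditions, so the consumer’s trust never depends on them already knowing how chatbots work.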

Interested in learning more about Machine Learning? Try this post on why I love it!

How can we support education on AI in the mainstream?

For me this question really gets to the crux of the issue of ethics in AI, and it is the point I will close on. If we give everyone a good enough level of understanding of the topic, they can have a voice in its use. This will allow us to clearly define what is ethical in AI and take the pressure off programmers to know all the answers!

The best example I have of this is my experience of jury duty. I did not want to do jury duty. What it did show me, however, was the power of diverse groups of people working together. We came to a collective decision based on evidence. I believe the decision was the right one, and seeing the process in action gave me greater faith in humanity.

My mission with this blog is to make big data concepts accessible to all, and that is done through education. We need to get better at helping people, especially those not working in the industry day to day, understand what it means and how it impacts them.

Do you agree with my thoughts? Comment below

Want to learn more about AI? Check out my beginners guide here

