The discussion format was a panel of people with a range of expertise and experience in working with artificial intelligence. Now for me, someone with introverted tendencies who generally hates being in large groups of strangers, the whole Meetup concept is terrifying.
But since the topic, developing inclusive technology, is something I am particularly interested in, I decided to challenge myself and go along.
I’m so glad I did.
Can we frame the discussion in a way that develops AI without harming creativity in tech?
What’s that image? This post’s featured image represents drug development – a throwback to my chemistry degree. If AI is our drug, what can we learn from other industries with a high impact on humanity? Though those systems are by no means perfect, is there something we can take from that model to start creating a wireframe for the discussion?
Discussion Points: How can we frame the discussion on ethical AI?
Now I could happily discuss each of the themes I pulled out from the meetup discussion for hours, but I’ll try to be concise.
How do you make ethical AI and ensure we are developing inclusive technology?
Algorithmic bias – where an algorithm systematically favors one group or scenario, producing, for example, misogynistic or racist outputs – is a reasonably well-known phenomenon among data scientists.
Though it may not always present in obvious ways, such as difficulty recognizing non-white faces, it’s something we need to be aware of when developing the technology. How can we prevent the biases we all have as humans from creeping in, and instead use artificial intelligence to push forward a more inclusive agenda?
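To make that concrete, here is a minimal sketch with entirely made-up numbers: a “model” that simply learns historical frequencies will faithfully reproduce whatever bias is baked into its training data. The groups and hiring figures below are hypothetical, purely for illustration.

```python
# Hypothetical historical hiring records: group A was hired far more
# often than group B, for reasons that may have nothing to do with merit.
historical_hires = (
    [("group_a", "hired")] * 80 + [("group_a", "rejected")] * 20 +
    [("group_b", "hired")] * 30 + [("group_b", "rejected")] * 70
)

def hire_rate(group):
    """A naive 'model': score each group by its past hire frequency."""
    outcomes = [outcome for g, outcome in historical_hires if g == group]
    return outcomes.count("hired") / len(outcomes)

# The learned scores mirror past decisions, fair or not.
print(hire_rate("group_a"))  # 0.8
print(hire_rate("group_b"))  # 0.3
```

Nothing in the code is malicious – the bias lives entirely in the data, which is exactly why it can be so hard to spot.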
This point on inclusivity was something that the panel discussed at length.
Above all, I think we can all agree the best way to ensure inclusive artificial intelligence is to create it inclusively.
If we actively work to increase diversity in our teams and take account of more voices, we will naturally have a better chance of diluting any one individual’s bias.
Increasing diversity is something we can all take accountability for. It should be part of our commitment as global citizens, enabling us to develop inclusive technology.
How do you program for human values?
Do we want to have artificial intelligence responsible for deciding what is and isn’t in line with human values? My view is no.
We shouldn’t be asking machines to determine these things on our behalf. From this perspective, it is then up to us to define where we do and don’t use artificial intelligence.
One of the most interesting discussions in this space I have found is AI-powered weapons.
On the positive side, there is the opportunity to increase accuracy and reduce the risk to life from sending troops into combat.
On the negative, can we rely on a machine to choose who lives or dies? Will it recognize an innocent civilian? AI is still a program built by a person, and that person makes human errors just like everyone else.
Do we risk deploying weapons that could have bugs in the code?
What makes us human and the machine a machine?
I think the above points bring us nicely to the next topic of ‘what makes us human.’ How do we define our humanity?
I think the point made by the panel on this was a great one. Humans are creative; they have empathy and emotional intelligence to react differently within context.
Now I know many of you will say, and rightly so, that AI systems already have shown the ability to mimic some of these traits. Does that make them human though?
I think the key word here is mimic. We are a long way off having systems that can display these traits across the broad spectrum of scenarios that humans do.
Do we understand our data footprint and how companies use it?
For me, this is a huge topic and one that is core to my reasoning for setting up this blog. I do not believe the majority of people have a real understanding of their data footprint.
Equally, I do not believe they understand how it is created or used. To echo the point made at the Meetup, we throw our data away – data about us, recorded during our daily interactions.
There is a significant gap between those that know this, those that accept this and those that genuinely don’t have a clue. The latter are most vulnerable to the negative side of big data systems.
I don’t want to go deep on this now – I will cover it separately – but it is something we will need to address if we plan on developing inclusive technology.
It will be necessary for defining ethics in AI.
Who is accountable for the real world impact of AI-powered systems?
This was one I picked up with some incredible women I was talking to after the panel. Following the discussion, my takeaway was that companies often rely on the consumer’s understanding of AI and machine learning when developing features or services.
The example discussed was that of a chatbot. Many of us will understand that when talking to customer services online, we are often talking to a bot.
But not everyone would.
Not disclosing the technology assumes a level of knowledge about chatbots and how they work. For those who don’t understand, finding out later is a trust buster.
With any new technology, we have a responsibility to ensure the consumer has a ‘good’ understanding of it. At the very least, an awareness.
Interested in learning more about Machine Learning? Try this post on why I love it!
How can we support education on AI in the mainstream?
For me, this question gets to the crux of the issue of ethics in AI. Education is the point I will close on. If we give everyone a good enough level of understanding about the topic, they can have a voice in its use.
More voices in the discussion will allow us to clearly define what is ethical in AI and take the pressure off programmers to know all the answers!
The best example I have of this is my experience on jury duty.
I did not want to do jury duty.
What it did show me, however, was the power of diverse groups of people to work together. We came to a collective decision based on evidence. I believe the decision was the right one and seeing this process in action gave me greater faith in humanity.
My mission with this blog is to make big data concepts accessible to all. Accessibility is achieved through education. We need to get better at helping people – especially those not working in the industry day to day – understand what it means and how it impacts them.
Do you agree with my thoughts? Comment below
Want to learn more about AI? Check out my beginner’s guide here.
This post was proofread by Grammarly
Advertising Disclosure: Artificially Intelligent Claire may be compensated in exchange for featured placement of certain sponsored products and services, or your clicking on links posted on this website.