AI, Ethics, Assumptions and Privacy

My podcasts were up to date, so I started looking for something new to listen to in the car and while doing my chores, and I'm so glad I did.

I landed on Professor Genevieve Bell's Boyer Lectures, starting with number 04:

Fast, smart and connected: How to build our digital future

and I will be recommending it to my colleagues and associates and, of course, to you, dear reader, because it is so pertinent to our lives today.

She talks about privacy and how we willingly give it up in return for services we enjoy, but also how that data then gets shared or sold to third parties, and how much organisations like Netflix, Google and Facebook know about us: 'They almost know more about us than we do ourselves.'

In my framework, I relate to her thoughts that we build artificial intelligence based on assumptions, biases and historical information. That means the code in our AI algorithms is rooted more in Skinnerian psychology than in the more complex behaviours of humans. I think back to seeing video of a fighter jet attacking a base in the Middle East, sighting people wearing a burqa or niqab and determining that they must be enemies.

We aren't black-and-white thinkers, and I don't think IBM's Deep Blue beating a human at chess should pass the Turing test. I'm heartened in my work to see the word 'customer' used and studied in detail today, and am hopeful that, in designing our future cities and countries, we can recognise that people are complex and that emotions are a lot more than selections of binary on-off switches, no matter how many transistors we sequence.

She quoted Bill Gates' concerns about the threat of smart machines, and even Elon Musk, whom many might consider an evangelist of AI given his focus on driverless cars and other 'smart technologies', says AI is the biggest threat to civilisation.

I grew up on a diet of science fiction: Asimov, whose hopeful Laws of Robotics have already been consigned to history, along with Dick, Heinlein and their contemporaries, warned of potential dystopian futures that seem a lot more realistic today.

The problem with machine learning, Genevieve pointed out, is the biases that go into the programming. These become digital biases that 'thinking' machines might develop exponentially, taking electronic thinking down a totally different track from the human traits Ariely describes as predictably irrational.

I think I'd better go and do my chores; it's Sunday and dry, and I'm still at my computer. I'll leave you with a last thought, via Elon Musk's quote that AI is vastly more risky than North Korea.

Be totally honest and truthful and ask yourself: given what we do to each other and our planet, if an AI were given the ability to examine mankind, would it not have to conclude that humans are the greatest risk to the survival of Planet Earth?


About Luigi Cappel

Writer for hire and marketing consultant specialising in Location Based Services; futurist and public speaker. Auckland, New Zealand.
