AI has blasted its way into the public consciousness and our everyday lives. It is powering advances in medicine, weather prediction, factory automation, and self-driving cars. Even golf club manufacturers report that AI is now designing their clubs.
Every day, people interact with AI. Google Translate helps us understand foreign-language webpages and talk to Uber drivers in foreign countries. Vendors have built speech recognition into many apps. We use personal assistants like Siri and Alexa daily to help us complete simple tasks. Face recognition apps automatically label our photos. And AI systems are beating expert game players at complex…
In this article in The Gradient, I explain why self-driving cars might actually be less safe than human drivers and how we can test these vehicles to ensure that we don’t turn our roads into a dangerous experiment.
Enjoyed my conversation with Neil Hughes on the Tech Talks Daily Podcast: https://t.co/zvu4qzJnWX?amp=1
Interesting discussion in the NYT by Thomas Edsall on automation's contribution to inequality. But it reiterates the fear that we will have human-level AI by 2040–2050. As I explain in my book, this fear has no scientific basis: www.AIPerspectives.com/evil-robots
What strikes me about the just-proposed EU regulations on AI is that they make no mention of self-driving vehicles, which are not even included in the list of high-risk applications.
Author of “Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity” published Feb 9, 2021 by Fast Company Press.