Is anyone as concerned about artificial intelligence as I am?
The technology is growing at such a rapid rate that humans might soon be subservient to robots. Or so say a few prognosticators.
Being somewhat of a libertarian, I do not favor government interference in a lot of things, but AI is one emerging technology that we need to monitor closely. I don’t trust the Googles of the world that tell us the technology will only enhance our lifestyles.
I can’t help but visualize human-like robots, run by computers with intellects far superior to us mere mortals, enslaving and eventually killing us all. Maybe I’ve watched too many movies of this genre to think realistically, but for some reason the idea sticks in the back of my mind.
Yes, artificial intelligence can serve a wide array of useful purposes. My son talks about using ChatGPT, an AI writing assistant, to write essays for college classes. In his assessment, it produces a better-written essay on any topic than most students can. The application is the fastest-growing consumer application of all time, boasting 100 million users.
Yet the warning signs swirl in my head. Michael Osborne, considered to be an expert in the development of AI, wrote this in the Guardian newspaper:
“Currently, AI is inscrutable, untrustworthy and difficult to steer – flaws that have and will lead to harm. AI has already led to wrongful arrests (like that of Michael Williams, falsely implicated by an AI policing program, ShotSpotter), sexist hiring algorithms (as Amazon was forced to concede in 2018), and the ruining of many thousands of lives (the Dutch tax authority falsely accused thousands, often from ethnic minorities, of benefits fraud).
Perhaps most concerning, AI might threaten our survival as a species. In a 2022 survey (albeit with likely selection bias), 48% of AI researchers thought AI has a significant (greater than 10%) chance of making humans extinct. For a start, the rapidly advancing, uncertain, progress of AI might threaten the balance of global peace. For instance, AI-powered underwater drones that prove capable of locating nuclear submarines might lead to a military power thinking it could launch a successful nuclear first strike.
If you think that AI could never be smart enough to take over the world, please note that the world was just taken over by a simple coronavirus. That is, sufficiently many people had their interests aligned just enough (eg “I need to go to work with this cough or else I won’t be able to feed my family”) with those of an obviously harmful pathogen that we have let Sars-CoV-2 kill 20 million people and disable many tens of millions more. That is, viewed as an invasive species, AI might immiserate or even eliminate humanity by initially working within existing institutions.”
Of course, he is not the only expert firing warning shots. Elon Musk has called for a pause in the development of AI until further controls can be applied to keep the technology from going rogue.
I believe that if we slow down development and apply meaningful controls to the myriad of companies racing to use this emerging technology to better our lives and the world, we will be in a much better place. Without such controls, who knows what evildoers will apply AI to unscrupulous ends.