To: John Connor
Humankind was destroyed when SkyNet became self-aware. At least, that is the plot line of The Terminator franchise created by writer and director James Cameron. SkyNet sounds eerily like internet. And while Minnesota went on to elect a former professional wrestler as governor, California elected a cyborg.
Artificial intelligence (AI) is often defined as technology in which software provides solutions and performs tasks previously done by humans. It is a form of intelligence as opposed to native intelligence (because we got here before machines). The potential for confusion with human intelligence has long vexed the AI community. Alan Turing, the famous breaker of the Enigma code, proposed what later became known as the Turing test. An evaluator quizzes two screened participants, one human, one machine. If the evaluator can do no better than chance at telling which respondent is human and which is machine, the machine can be said to truly think. This sounds like a benign philosophical exercise.
The Terminator is not benign. And neither is AI, according to the late physicist Stephen Hawking, who warned it could be the worst phenomenon in the history of civilization. Elon Musk, the entrepreneur behind Tesla and SpaceX, has warned that a global arms race for artificial intelligence could cause a third world war.
Even if one does not envision an internet of things becoming self-aware and consumed with its own preservation, the threat to human jobs is real. A recent McKinsey automation jobs report projects the number of Americans who will have to find new jobs by 2030 to range from 16 million to 54 million. Those whose jobs have been terminated will not find AI to be benign.
The pace of AI development is exceedingly fast; speech recognition programs, for example, have surpassed human capabilities in just the past year. It’s a little unnerving that these AI-enhanced voice assistants default to the female voice: the wonder women Siri and Alexa (note: she is an Amazon). This may be the first example of an artificial intelligence cyber #MeToo movement. Other business applications include the following:
News. A number of news and information sites rely on AI-generated content. The Economist reports that Bloomberg News utilizes a program that scans SEC filings to suggest possible business articles. Further, there are now AI programs to weed out “fake news,” or at least to recognize left and right bias in how the headlines of certain news content are written. Professional journals and other highly specialized content can now be summarized by AI-driven programs. You may have read one today (this is not one).
Security. Facial recognition AI programs have increased their accuracy. A Chinese insurance company believes it can use AI recognition to spot dishonesty in customers who apply for loans through its app. Companies in almost every major industry now use AI to monitor cyber-security threats and other risks, including those posed by disgruntled employees. With the explosive growth of video-monitored public spaces, the application of recognition AI algorithms to spot potential criminal behavior—and to predict possible criminal behavior—will soon be upon us.
Hiring. A number of companies, some more reluctantly public than others, use AI to scan resumes and job applications. These programs utilize employee profiles, social media content and industry-derived metrics to preliminarily filter and exclude potential job applicants from human interviews. How long human interviews will be part of the hiring process is anybody’s guess.
Health. AI “triage” systems have already been developed for field use to determine the type of medical intervention required for a specific condition. Some of these systems are in ambulances today, and others are in regional hospitals. The day will soon come when the initial diagnosis of a human health condition, its possible treatment and its long-term chance of recovery will be made by artificial intelligence alone.
Legal. A great deal of the routine work lawyers do in drafting contracts, preparing compliance filings and responding to litigation will be done by AI-driven systems. Such systems are already in use in certain states and certain areas of the law, and their use will grow rapidly. They hold the promise of reducing legal costs, increasing efficiency and harmonizing our legal process.
While it’s true that the future is hard to predict, AI systems already exist that analyze large quantities of data and make predictions by projecting historical trends. Some of these have predicted World Cup soccer champions. Not all of them turn out to be right. Judgment, experience and the way human interaction iteratively shapes human actors are difficult for an algorithm or self-learning artificial system to replicate. That uncertainty has given many of us hope that artificial intelligence will always remain the servant of humans.
The more difficult question posed by artificial intelligence is whether that difference has any significance. In other words, human attributes may have meaning to humans, but no use beyond us.
Vance K. Opperman
With hand on the power switch
Vance K. Opperman (email@example.com) is owner and CEO of MSP Communications, which publishes Twin Cities Business.