Spike Narayan
Spike Narayan is a seasoned hi-tech executive managing exploratory research in science and technology at IBM.
Artificial Intelligence (AI) is an overused term today, and as with any ubiquitous technology, it is either loved or feared. Why does it have such a split following? It has a lot to do with its history, to some extent with how it is marketed today, and more importantly with how it is predicted to impact all facets of our lives tomorrow.
AI made its humble beginnings many decades ago in academic circles as an endeavor to understand and mimic how humans think and act. While the goal was to understand and copy brain function, the field of AI developed with very little active collaboration between computer scientists and neuroscientists. As a result, the AI of that era never really had human-like cognitive functions. As early as 1968, 2001: A Space Odyssey, a movie based on a science fiction novel by Arthur C. Clarke, depicted HAL, an on-board computer on Discovery One with an uncanny decision-making capability to control the mission to Jupiter. Much later, in 1977, the movie Star Wars depicted two robots, R2-D2 and C-3PO, with human-like attributes. While HAL had undeniably evil overtones, the Star Wars robots were merely amusing and harmless. I would argue that the divergence in opinions on AI started during that period. Soon after the 80s, academia seemed to lose interest in AI and the field went into a Rip Van Winkle-like 20-year slumber.
So, what changed when the field of AI reawakened in the last decade? The ideas upon rebirth were not particularly novel, but researchers now had access to three important enabling technologies: relatively cheap and abundant computing power, massive amounts of data that machines could train on (i.e., be taught with), and novel compute algorithms developed to use all this data (big data) and compute horsepower. This brought about amazingly rapid progress in the field of AI. The early manifestations of AI were at best clunky in that they exhibited a very rudimentary ability to learn and recognize simple objects. Even that basic function required extensive training with large amounts of labeled data; that is, the system had to be shown thousands of cat pictures before it could identify a cat in a picture it had never seen. However, with the rapid progress of neural networks, and especially with the advent of deep neural nets, the machines learned to draw inferences with increasing accuracy, making them suitable for real-life enterprise and consumer applications.
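To make the idea of training on labeled data concrete, here is a minimal sketch in Python using PyTorch. The tiny network, the random stand-in images, and the two-class "cat vs. not cat" setup are illustrative assumptions, not a description of any particular system mentioned above; a real system would train on thousands of labeled photographs.

```python
# Minimal sketch of supervised image classification: a small convolutional
# network learns a mapping from labeled pictures to classes.
# Random tensors stand in for real photos purely for illustration.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "labeled data": 64 images of 3x32x32 pixels with 0/1 labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to labels
    loss.backward()                        # adjust weights to reduce the error
    optimizer.step()

# After training, the model can score a picture it has never seen before.
new_image = torch.randn(1, 3, 32, 32)
prediction = model(new_image).argmax(dim=1)
```

The point of the sketch is simply that the "learning" consists of repeatedly comparing the model's guesses against human-supplied labels, which is why large amounts of labeled data were needed in the early days.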
As more AI applications begin to appear on the horizon, a new set of fears has emerged, this time having to do with at least three important issues: one, will AI replace humans? Two, can AI be trusted? And finally, can we be sure the results are not biased? These questions are being researched actively today. The issues of trust and bias will not be discussed at length here as they are worthy of more extensive discussion. The debate about machines replacing humans in the workforce is not always based on observed facts but more on apprehension, and more recently it has taken on political overtones. However, industry and academia are making great strides in understanding the basis for the fear. There has been a big shift in how intelligent machines are being touted: from machines mimicking human function to machines helping humans do more. This shift in perspective is very important because it represents how these machines will be used in real-life applications, thus helping reduce anxiety in the workforce. Many centers have emerged across the globe that focus on machines and humans working together to excel at a task. One such center is at Stanford and is called the Institute for Human-Centered AI (https://hai.stanford.edu/). Per its website, this center helps with understanding and guiding the human and societal impact of AI, augmenting human capabilities, and developing AI technologies inspired by human intelligence. These projects promise to play a significant role in defining future work in AI from academia to industry, government, healthcare, and civil society.
The issues of trust and bias will turn out to be pivotal in real-life applications as they have even more far-reaching implications for our lives. The trust vector has to do with the fact that neural networks cannot easily explain how a decision or inference was made. In applications such as health care, where AI tools can be used to help doctors make better decisions, there will be limited acceptance if the doctor cannot see the decision-making rationale. The bias topic is even scarier: the machines are trained on extensive amounts of past data, and if those data are biased, the results will be too. The more humans are involved in training the machines, the harder it is to eliminate bias.
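As a toy illustration of how bias in past data carries over into a model's decisions, consider the following Python sketch using scikit-learn. The "hiring" scenario, the skill and group features, and the made-up historical outcomes are hypothetical assumptions chosen only to show the mechanism, not a real dataset or application.

```python
# Toy illustration: a model trained on biased historical decisions
# learns and reproduces that bias on new cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: a genuinely relevant skill score.
# Feature 1: group membership (0 or 1) that *should* be irrelevant.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Hypothetical historical outcomes that favored group 0 regardless of skill,
# so the labels themselves encode the bias.
past_decision = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.75

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_decision)

# The learned weight on "group" is far from zero: the model has absorbed
# the historical bias and will apply it to new candidates.
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])
```

The model is not malicious; it simply extracts whatever patterns exist in the training data, which is exactly why biased inputs yield biased outputs.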
In summary, AI holds great promise for augmenting human capability, and with that will come whole new fields of education and job opportunities. The issues of trust and bias will be addressed over time to allay our fears.