Humans have always feared the bogeyman. History contains innumerable examples of our dogged persistence in identifying the “other”: the monsters responsible for wrecking the delicate balance of society. Those creatures taught us valuable truisms. If she’s a witch, she’ll float. Our tendency to point the finger isn’t exclusive to people, either. In 1999, corporate and government representatives warned us that our computers might not distinguish the year 2000 from 1900, since both could be reduced to the same double-zero suffix. The consequences, we were told, would essentially erase a century of industrial progress. New Year’s Eve – we waited. Would the world go dark?
Of course, it didn’t. Still, the visceral fear that we must prepare for the inevitable technological bogeyman remains as present as ever. Technological innovations are outpacing and outperforming their precursors at an unprecedented rate, inviting apt comparisons to the fifteenth century’s printing press, appropriately considered one of the most significant contributions to humanity, if not the most significant. Now, nearly 600 years later, another advent of the unimaginable is upon us. Once again, we find ourselves looking over our shoulders for the next threat.
Today’s conversation is dominated by the recent proliferation of Artificial Intelligence technology. Artificial Intelligence – commonly referred to as A.I. – was largely introduced into popular culture through creative mediums such as science fiction film and literature. Early fictionalizations were hardly flattering. Stanley Kubrick’s critically acclaimed ‘2001: A Space Odyssey’ is as recognizable for its bombastic opening sequence as it is for its conniving, insidious antagonist: HAL 9000. And much like Nietzsche’s infamous declaration that ‘God is dead’, HAL asserts its own transcendental authority over its maker when it systematically kills the crew members of Discovery One. Also sprach Zarathustra indeed.
Kubrick’s bleak and fatalistic characterization of mechanization fits into the pervasive misconception of artificial intelligence technology as society’s existential adversary. These associations cast an ominous pall over the conversation. Worse still is the sophism heralding A.I. as the bastard companion of the four horsemen, sounding the death knell for mankind as we know it. Recently, the Future of Life Institute published an open letter demanding a halt to further development of next-generation A.I., asking governments to intervene and issue a moratorium on training new systems. It warns of a “profound risk to society and humanity.” The letter accrued thousands of signatures, including that of the infamous engineer and entrepreneur Elon Musk. Yet experts cannot reach a consensus on how to approach the inevitability of A.I. Some computer scientists seek to demythologize artificial intelligence. In his New Yorker essay, Jaron Lanier went so far as to decry the term A.I. altogether.
Technological advances are like those of any evolving industry: technology develops on a spectrum. Computer algorithms and high-tech machinery do not come to fruition in isolation. Every new iteration of the computer, telephone, or other programmed device possesses a linear ancestry of similar devices predating it. And while they may not all share an identical history, the technologies that enable these devices have similar origins. It is here that A.I. distinguishes itself. Unlike previous generations of computer-programmed models, it most closely replicates human cognitive functions. This capability is credited to machine learning, which essentially enables a program to learn without direct human intervention. The architecture through which machine learning mimics the human brain is often referred to as a neural network.
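For readers who want a concrete picture of “learning without direct human intervention,” the sketch below is a deliberately tiny neural network written in Python. Everything in it (the data, the layer sizes, the learning rate) is an illustrative assumption rather than a description of any production system; the point is only that the program adjusts its own internal weights until it reproduces a pattern (here, the XOR rule) that no one spelled out for it.

```python
# A minimal sketch (not a production system): a tiny neural network
# teaches itself the XOR rule by repeatedly adjusting its own weights,
# with no human spelling out the rule in advance.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the XOR answers the network should discover.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for two layers of "neurons".
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20000):
    # Forward pass: compute the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error and nudge every weight to shrink it.
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0)

# After training, the guesses should sit close to [0, 1, 1, 0]:
# the rule was learned from examples, not hard-coded.
print(np.round(output, 2))
```

The takeaway is modest: what gets called “learning” here is an iterative adjustment of numbers toward fewer mistakes, not a spark of sentience.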
Prior to such independence, machines functioned in limited capacities. In their earliest forms, they were manufactured to perform singular tasks and were wholly servile instruments to their human users. Perhaps it’s this capability of A.I., and all the excessive prognostications of its supposed consequences, that generates such partisan reactions. While those perspectives, whether critical or supportive of A.I., are legitimate and necessary, many fail to consider the human element of the conversation. Millions of dollars are invested in the competition to create the most advanced algorithms, yet significant resources haven’t been allocated to studying the relationship between humans and A.I.
Approximately two million years ago, early humans constructed basic stone tools to assist with their quotidian work. Anthropologists generally accept this as the earliest example of tool use. As those tools advanced, so did our fundamental relationship with them. Large-scale machinery aided machinists in performing a variety of difficult tasks related to their profession. This integration changed a previously rudimentary relationship into a more symbiotic one. Whereas earlier collaborations yielded limited results, humans could now incorporate more refined machinery to assist with tedious jobs. A famous example is the Industrial Revolution’s assembly lines.
Human-machine collaboration continued to develop along these lines of mutual benefit. We reaped many physical and psychological rewards from working alongside machines. Workers were relieved of the strain of job requirements that demanded hard labor, such as lifting or positioning oversized, heavy objects. The efficiency and success of human-machine collaboration in manufacturing inspired other industries to follow suit. Healthcare adopted scientific developments such as the accidentally discovered X-ray to aid in diagnosing and treating patients. Early diagnostic tools were pale imitations of the advanced technology medical workers utilize today. Doctors and nurses collaborate with specialized machinery to diagnose and treat their patients: a simple blood sample is spun down in a centrifuge and sent for analysis, yielding key indicators of illness. Much like early 20th-century industrial workers collaborating with their levers and pulleys to assemble a physical product, healthcare workers collaborate with various machines to reach medical conclusions based on a set of values and results.
On the whole, this human-machine collaboration has been positive for us. Any potential gaps in user knowledge can be addressed through education and re-skilling. Humans have utilized these machines for decades, confident in their ability to produce legitimate, verifiable results. Machine and human user each possess a separate set of skills, which allows them to collaborate on nuanced, complex tasks. Notwithstanding the legitimate anxieties surrounding automation, there remain marked differences between the capabilities of machines and those of their human users. An automated hematology analyzer can produce a complete blood count, comparing results against reference ranges and flagging potential abnormalities in a person’s blood cells, but it cannot weigh that information against the physical symptoms observed during a patient examination, or a relevant medical history, to provide a confident diagnosis. The analyzer can never supplant the doctor; its role is defined within the parameters of their collaboration.
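To make that division of labor concrete, here is a rough sketch, in Python, of the kind of work the analyzer actually performs: checking measured values against reference ranges and raising flags. The ranges and sample values below are illustrative placeholders, not clinical guidance, and the point of the sketch is what it leaves out: interpreting the flags in light of symptoms and history remains the doctor’s job.

```python
# Illustrative sketch of analyzer-style work: compare measured values
# against reference ranges and flag anything outside them.
# The ranges below are placeholders, not clinical guidance; interpreting
# the flags alongside symptoms and history remains the clinician's job.

# Hypothetical reference ranges (low, high) for a few blood-count values.
REFERENCE_RANGES = {
    "wbc": (4.0, 11.0),          # white blood cells, 10^9 per litre
    "hemoglobin": (12.0, 17.5),  # grams per decilitre
    "platelets": (150, 450),     # 10^9 per litre
}

def flag_abnormalities(results: dict) -> dict:
    """Return 'low', 'high', or 'normal' for each measured value."""
    flags = {}
    for name, value in results.items():
        low, high = REFERENCE_RANGES[name]
        if value < low:
            flags[name] = "low"
        elif value > high:
            flags[name] = "high"
        else:
            flags[name] = "normal"
    return flags

# Example run: the machine flags, the doctor diagnoses.
sample = {"wbc": 13.2, "hemoglobin": 10.1, "platelets": 320}
print(flag_abnormalities(sample))
# {'wbc': 'high', 'hemoglobin': 'low', 'platelets': 'normal'}
```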
Yet A.I., with its predicted super-independence and machine learning capabilities, presents a new obstacle in human-machine collaboration.
How do humans respond when they are faced with the possibility that the new technology integrated into the machinery of their workplaces contains programming that outperforms them? How does a doctor react when the pseudo-sentient blood analyzer disagrees on the diagnosis?
At this stage, some of these hypotheticals may be more fearmongering than inevitability. Regardless, whether prescient or sensational, they have caught the attention of the media and the public. What of the fundamental nature of our relationships with machines? In 2018, the Harvard Business Review published an article describing this unique human-machine relationship as ‘collaborative intelligence’. The article makes several points about how best to usher in this new era of collaboration, including essential training for both staff and technology to develop the best fit for each. In this era of A.I., technology will be programmed to be more empathetic toward and understanding of the human user – something that will certainly ease the tension of the unknown.
Ultimately, the goal of human-machine collaboration is to develop a sustainable relationship between the user of technology and the technology itself. At a time when there are so many unknowns and variables within the A.I. space, determining how best to approach this unexplored territory will require further research and analysis. If history is any indicator, though, we will likely find ourselves looking back at this period of panic with a little well-earned hindsight.