- 16th April 2018
- Posted by: Manolis
Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s fear about the way the technology works. A recent MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Citing this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare and education shouldn’t use such technology.
Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We provide the data, the models and the architectures, and then computers provide us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.
There’s particular concern about this in health care, where A.I. is used to classify which skin lesions are cancerous, to identify very early-stage cancer from blood, to predict heart disease, to determine what compounds in people and animals could extend healthy life spans and more. But these fears about the implications of the black box are misplaced. A.I. is no less transparent than the way in which doctors have always worked — and in many cases it represents an improvement, augmenting what hospitals can do for patients and for the entire health care system. After all, the black box in A.I. isn’t a new problem created by new technology: Human intelligence itself is — and always has been — a black box.
Let’s take the example of a human doctor making a diagnosis. Afterward, a patient might ask that doctor how she made that diagnosis, and she would probably share some of the data she used to draw her conclusion. But could she really explain how and why she made that decision, what specific data from what studies she drew on, what observations from her training or mentors influenced her, what tacit knowledge she gleaned from her own and her colleagues’ shared experiences and how all of this combined into that precise insight? Sure, she’d probably give a few indicators about what pointed her in a certain direction — but there would also be an element of guessing, of following hunches. And even if there weren’t, we still wouldn’t know that there weren’t other factors involved of which she wasn’t even consciously aware.
If the same diagnosis had been made with A.I., we could draw on all available information about that particular patient, as well as data anonymously aggregated across time from countless other relevant patients everywhere, to make the strongest evidence-based decision possible. It would be a diagnosis with a direct connection to the data, rather than human intuition based on limited data and derivative summaries of anecdotal experiences with a relatively small number of local patients.
But we make decisions in areas that we don’t fully understand every day — often very successfully — from the predicted economic impacts of policies to weather forecasts to the ways in which we approach much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of A.I.: Human intelligence can reason and make arguments for a given conclusion, but it can’t explain the complex, underlying basis for how we arrived at that conclusion. Think of what happens when a couple gets divorced because of one stated cause — say, infidelity — when in reality there’s an entire unseen universe of intertwined causes, forces and events that contributed to that outcome. Why did they choose to split up when another couple in a similar situation didn’t? Even those in the relationship can’t fully explain it. It’s a black box.
The irony is that compared with human intelligence, A.I. is actually the more transparent of the two. Unlike the human mind, A.I. can — and should — be interrogated and interpreted. From the ability to audit and refine models, to tools that expose knowledge gaps in deep neural nets, to the debugging tools that will inevitably be built, to the potential to augment human intelligence via brain-computer interfaces, there are many technologies that could help us interpret artificial intelligence in a way we can’t interpret the human brain. In the process, we may even learn more about how human intelligence itself works.
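To make the point concrete, here is a minimal sketch (not from the article; the feature names and weights are invented for illustration) of what “interrogating” a model can look like: even a simple learned risk model can have its prediction decomposed into per-feature contributions, something no clinician can do for their own intuition.

```python
def risk_score(features, weights, bias):
    """Linear risk score: bias plus the sum of weight * value for each feature."""
    return bias + sum(weights[name] * value for name, value in features.items())

def explain(features, weights):
    """Break the score into per-feature contributions, largest magnitude first."""
    contribs = {name: weights[name] * value for name, value in features.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical weights a model might have learned, and one patient's data.
weights = {"age": 0.03, "systolic_bp": 0.02, "cholesterol": 0.01}
patient = {"age": 60, "systolic_bp": 140, "cholesterol": 190}

score = risk_score(patient, weights, bias=-5.0)
contributions = explain(patient, weights)
```

Real diagnostic models are far more complex, but the same principle scales: attribution and auditing tools can ask a model exactly which inputs drove its answer, and by how much.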
Perhaps the real source of critics’ concerns isn’t that we can’t “see” A.I.’s reasoning but that as A.I. gets more powerful, the human mind becomes the limiting factor. It’s that in the future, we’ll need A.I. to understand A.I. In health care as well as in other fields, this means we will soon see the creation of a category of human professionals who don’t have to make the moment-to-moment decisions themselves but instead manage a team of A.I. workers — just like commercial airplane pilots who engage autopilots to land in poor weather conditions. Doctors will no longer “drive” the primary diagnosis; instead, they’ll ensure that the diagnosis is relevant and meaningful for a patient and oversee when and how to offer more clarification and more narrative explanation. The doctor’s office of the future will very likely include computer assistants, on both the doctor’s side and the patient’s side, as well as data inputs that come from far beyond the office walls.
When that happens, it will become clear that the so-called black box of artificial intelligence is a feature, not a bug — because it’s more possible to capture and explain what’s going on there than it is in the human mind. None of this dismisses or ignores the need for oversight of A.I. It’s just that instead of worrying about the black box, we should focus on the opportunity, and therefore better prepare for a future where A.I. not only augments human intelligence and intuition but also perhaps even sheds light on and redefines what it means to be human in the first place.