Slow thinking and machine learning in medicine

How human cognition could intersect with, and enhance, automated cognition.

Recently, several high-profile institutions have called attention to issues of inclusion and equity when artificial intelligence (AI) algorithms are applied in medicine. Leaders from law, medicine, the social sciences, and computer science are speaking out about the challenges of using smart algorithms to solve social problems.

Photo modified from original by Dynamic Wang on Unsplash.

While the tech community might easily dismiss ethical critiques of machine learning as anti-progress, these critiques are better understood as a genuine offer to partner leading human slow thinkers with intuitive, fast-thinking machines. Such collaboration can improve the impact of AI on society.

How human cognitive algorithms can be biased

The terms “slow thinking” and “fast thinking” were popularized by psychologist Daniel Kahneman, who shared the 2002 Nobel Prize in Economics for identifying routine cognitive biases in humans, including academics trained in statistics. Kahneman’s work replaced prevailing theories of humans as fully logical, utilitarian decision-makers with a more compassionate view of choices made using flawed, altruistic, and often irrational cognitive algorithms.

The takeaway is not that humans are stupid, but that rational thought and cognitive biases coexist in all of us, despite our best intentions. We need to design independent safeguards that protect our decision-making from sloppy heuristics.

Recently, leading scientists at the Stanford Presence Center’s AI in Medicine: Inclusion and Equity (AiMIE) symposium pointed out that the same need for safeguards applies to machines as well as to humans.

Current controls on human cognition

Intuition evolved as a quick-and-dirty substitute for deliberate thought when speed was necessary for survival.

Although efficient, intuition is also subconscious, illogical, and biased by experience. Harvard’s Implicit Association Test identifies subconscious biases from millisecond delays in choice; it was made freely available on the web so that scientists, public servants, and policymakers could become more aware of, and compensate for, prejudice. A societal example of the same kind of formal safeguard is the research ethics board, whose members are recruited from diverse sectors of society to analyze the ethical implications of proposed research projects.

So how is this relevant to AI?

We might assume that machine learning solves problems better because its intuition is emotionless. However, machine learning is designed to mimic human cognitive processes, which makes it just as susceptible to fast-thinking bias as human thought.

“I think we need to move beyond assuming that technology can inherently solve social or economic problems,” says Dr. Sanjay Basu, an assistant professor of medicine at Stanford University, “and instead ask why these problems persist and whether our technology or other tools are really being designed to reinforce or challenge these problems.”

How machine learning algorithms can be biased

At the AiMIE symposium, several luminaries weighed in on potential sources of AI bias that need to be intentionally corrected:

  • Access to data. Because there is a digital divide in access to smartphones and computers, disadvantaged populations are persistently underrepresented in public datasets. Just as with human intuition, if the data is biased, so are the algorithms trained on it. As Dr. Nirav Shah, a Stanford professor and former COO of clinical operations for Kaiser Permanente in Southern California, artfully suggested, “Maybe data should be considered a determinant of health.”

  • Failure in algorithm design. One of the values of lean software development is to “release early, release often,” but deploying prototypes on social problems can be disastrous. As Dr. Mark Cullen, director of the Stanford Center for Population Health Sciences, dryly commented, “Intelligence rather than artificial intelligence might be more useful in reducing disparities in the short run.”

  • Positive-feedback loops. Flaws in AI design may compound. As Mariano-Florencio Cuéllar, a justice of the California Supreme Court, explains: “If we train the machines only on the data we have today, we will end up with machines that deepen the inequities of our system.”

  • Failure to adjust. While humans can comprehend outliers and tailor solutions, machines are universally merciless. “Personalization can very easily become persecution,” explains Virginia Eubanks, a professor of political science at the University at Albany and author of the recent book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, which illustrates several recent injustices caused by poorly designed AI algorithms used on disadvantaged populations.

So how do we prevent bias? Machine bias can be addressed with the same mechanisms we use to compensate for human bias: structured thought, written logic, and checking assumptions against diverse representation.

In conclusion, we need humans and AI

Just as in human thought, intuitive and strategic decision-making coexist, and each has benefits in different situations.

While we need the analytical tools that AI and machine learning can give us, we also need human capacities for messy debate, structured thought, and the willingness to face uncomfortable realities.

Humans trained in this type of cognition often work in public-facing fields such as the social sciences, law, and medicine. These partners are willing and able to collaborate with industry, often without compensation. These collaborations may take time and effort and require clear negotiation of competing interests up front; however, they make innovative products more robust, valuable, and available to a broader audience.

Perhaps what we need is not more human thinking or more AI, but better integration of the two heuristics so that we can achieve a cognitive stereovision.

In closing, we offer a quote from Glenn Cohen, JD, a Harvard Law School professor and another speaker at AiMIE:

“William Gibson once said something along the lines of ‘The future is already here, it is just not very evenly distributed.’ If the use of AI in health care is going to improve care for everyone, we have to make sure its starting assumptions, training sets, and cascade effects reduce, not exacerbate, existing divisions.”

Drea Burbank is a physician-entrepreneur. Ayo Roberts and Chandi Broadbent also contributed to this article.

Reprinted with permission. Originally published at https://www.kevinmd.com on November 10, 2018.
