Sunday, July 10, 2011
Challenges beyond Jeopardy
IBM has an amazing history. It built its corporate culture and initial wealth building and selling mainframe computers to large bureaucratic organizations. IBM tried to make the adjustment to networks of microcomputers in the 1980s and had to shatter and reinvent its corporate culture in the process. IBM invested in Second Life apparently thinking that the next great thing might be in virtual worlds. Now it seems to be betting the store on its ability to move from the information business to the knowledge business and in so doing transform many industries including healthcare.
Let's assume for the moment that a comprehensive knowledge base of evidence-based clinical pathways has been built, and that some very advanced computer like Watson has the software inference engine to make important decisions about patient care based upon the input it has instructed human medical providers to enter. And let's further assume that organizational changes have been implemented in the United States that have resulted in a very high degree of medical conformance in carrying out "Dr. Watson's" instructions. Would this necessarily be a good thing? What would be some of the consequences?
As a Star Trek fan, I remember the episode titled "Spock's Brain," first broadcast September 20, 1968. Dr. McCoy is tasked with the responsibility of putting Mr. Spock's stolen brain (being used to run the public works infrastructure of a city on some other planet) back in place and reconnecting it to his nervous system. But the knowledge of how to do that, once known, has been lost. McCoy puts on a device known as "the teacher" that allows him to recapture the knowledge needed to perform the work.
I cite the episode of Star Trek to suggest that if we did have a computer system like IBM's Watson loaded with evidence-based medical pathways, we would in the short term advance medical knowledge, but in the long run we would lose our grasp of medical knowledge. Watson, as amazing as it is, does not have knowledge. It only processes patterns. The GPS unit that I sometimes use while driving seems to have a capacity for thought, and I sometimes project into its voice the evidence of judgment. I sometimes imagine that its spoken word, "recalculating," is really its saying, "You dummy, I told you to turn back there!"
There is the knowledge that exists in individual human minds, and there is social knowledge that exists in social networks. While computers can facilitate human knowledge (both individual and social), they do not have knowledge and are not likely to gain that ability. My point is that as we become more dependent upon computer systems, we risk losing "the old knowledge" we will need in order to avoid treating computers as if they have knowledge.
The risk resides not so much in the potentials of technology as in the capacity of humans to anthropomorphize computers and robots. Build an attractive robot (it does not even have to have human features), put something like the inference engine of Watson behind it, and people will begin to trust this entity that in fact has neither knowledge nor emotions. "Watson," in fact, understands nothing. Human caregivers will become the interface between the technology and patients but will lack the ability to effectively judge decisions suggested (or made) by the technology. There will be no "teacher" device that one can put on to know what is represented in the computer system in forms that are less than knowledge. Computers are dumb, but they have massive memories and incredibly fast processors, and they can be networked together into massive arrays. Humans are smart but have tiny working memories and slow processing speeds, and as of yet we have not created high-bandwidth social networks. Clearly there is a need to design more effective joint cognitive systems (see the book by Hollnagel and Woods) for medical and other purposes, while facing the prospect of losing our knowledge of how computers are making decisions without knowledge.