Thoughts and observations regarding modern healthcare administration in the context of policy reform.

Showing posts with label research methods.

Friday, March 9, 2012

Digital records and costs

An article in The New York Times by Steve Lohr, published March 5, 2012, reports on a study, based on existing data, which concluded that the availability of digital records may not cut health costs. Someone expressed doubt about the findings because they were based on correlations in existing data rather than on a controlled experiment designed to test the hypothesis about electronic records and costs.
http://www.nytimes.com/2012/03/06/business/digital-records-may-not-cut-health-costs-study-cautions.html?_r=1
In my opinion, not every study requires a method like that of a clinical trial, and it is both valid and cost-effective to make responsible use of existing data. Correlation does not prove causality, of course. But I believe the discovery of interesting correlations through hypothesis-based research can be worthy of professionals' attention. I am not advocating mindlessly searching available data for any possible relationship among variables and then pretending to have conducted disciplined research.
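To make the caution concrete, here is a minimal sketch in Python, using entirely synthetic data (the variable names and effect sizes are my own illustrative assumptions, not anything from the study). A hidden confounder, say the overall severity of a patient's condition, drives both record use and cost, producing a strong correlation with no causal link between the two:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder: severity of a patient's condition.
severity = rng.normal(size=n)

# Sicker patients generate more electronic records AND higher costs,
# but record use itself has no effect on cost in this simulation.
ehr_use = 2.0 * severity + rng.normal(size=n)
cost = 3.0 * severity + rng.normal(size=n)

# The raw correlation looks impressive...
print(np.corrcoef(ehr_use, cost)[0, 1])          # roughly 0.85

# ...but once severity is regressed out, nothing remains.
resid_ehr = ehr_use - 2.0 * severity
resid_cost = cost - 3.0 * severity
print(np.corrcoef(resid_ehr, resid_cost)[0, 1])  # roughly 0.0
```

The point of the toy example is not that electronic records are a confounded variable, only that correlational findings like these always deserve this kind of scrutiny before anyone draws a causal conclusion.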
It is reasonable to anticipate that computerized medical offices have the potential both to save money (by reducing the need to repeat tests already taken) and to spend additional money, as the availability of data stored in electronic records increases the propensity of physicians and patients to want still more data. The new iPad 3, with its high-resolution screen, is likely to push the "let's get more images" propensity further. Medical providers want to do what is possible, and easily accessible, higher-quality data extends what is possible.
Wednesday, January 4, 2012
Smarter than the average pixel?
What I understand of this TEDMED presentation by Dr. Eric Schadt is that it is not adequate to try to understand complex systems with simple linear thinking (unless, of course, you are running for high political office). The part I am struggling to understand is the idea that the causation of disease cannot be determined by studying populations of people with statistical methods of analysis. Isn't epidemiology based on the notion that aggregate research designs can yield insights into the causes of diseases in populations? And if an independent variable is important in the aggregate explanation of a disease or condition, isn't it likely to be important in understanding specific cases? It would help me to hear Dr. Schadt in conversation with an epidemiologist about patterns of medical causation in individuals and populations.
It is the metaphor of the movie and the "average pixel" that I have not yet grasped. Yes, there is no perfectly "average" patient, and a specific instance of a disease or condition may be unique in its causal origin. But I want to believe that understanding the health of populations sheds light on the likely causes of individual instances of diseases and conditions. Patterns in complex systems are often fractal in nature, meaning the same patterns appear at multiple scales. Perhaps I am overthinking this or simply missing some essential insight. Readers, I would welcome any comment that could shed some light.
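One toy model that helps me hold both views at once is sketched below in Python (all the rates are invented for illustration). A risk factor can look decisive in aggregate statistics even though many individual cases arise through a causal pathway the factor never touches, which may be part of what Dr. Schadt is getting at:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

exposure = rng.random(n) < 0.5  # half the population exposed

# Two hypothetical causal pathways to the same diagnosis:
# pathway A is driven by the exposure, pathway B is not.
pathway_a = exposure & (rng.random(n) < 0.10)
pathway_b = rng.random(n) < 0.02

disease = pathway_a | pathway_b

# Aggregate view: the exposure looks like a strong risk factor.
risk_exposed = disease[exposure].mean()
risk_unexposed = disease[~exposure].mean()
print(f"relative risk: {risk_exposed / risk_unexposed:.1f}")  # about 6

# Individual view: among the sick, many cases owe nothing to the exposure.
frac_pathway_b = pathway_b[disease].mean()
print(f"cases via the exposure-independent pathway: {frac_pathway_b:.0%}")  # about 30%
```

Under these invented numbers the exposure is genuinely important to the population, yet for roughly a third of patients it explains nothing, so both the epidemiologist and the skeptic of the "average pixel" can be right at once.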
Labels:
causation,
complex systems,
complexity,
epidemiology,
Eric Schadt,
genetics,
genome,
research methods
Sunday, January 16, 2011
Reflecting on Evidence-based Management
This is an initial reflection upon an assigned reading by Kovner and Rundall in our textbook, Health Services Management: Cases, Readings and Commentary (9th ed.) by Kovner, McAlearney and Neuhauser. The essence of the reading is that healthcare managers should make decisions based on evidence, just as physicians should practice evidence-based medicine. Basically, this means there should be close ties between scholarly research and managerial practice. My approach to this is shaped by my experience as an academic who teaches public administration.
If there is a disconnect between scholarly research and managerial practice, the easy explanation is to fault practitioners for not reading academic journals. It is not that simple. Even in public administration (PA), which is an applied field of study and practice, there is a substantial divide between scholarship and practice. The best practitioners were often not outstanding students, and successful PA scholars are not necessarily able to make the transition to successful practice. Success in scholarship requires a deep, narrow focus; success as a practitioner requires a wide variety of interests and abilities. Scholarly journals are geared much more toward the needs of academics than of practitioners. Even in PA, an article seldom includes an "executive summary" to clearly identify the relevance of the findings to practice. Getting a paper published in a scholarly journal often requires advanced mathematics in the data analysis, and few practitioners have either the need or the interest to work through the mathematics.

The bottom line is that practitioners are more likely to learn through informal communities of practice than by reading articles that academics write to advance their own careers. Peer review does not usually include practitioners as reviewers, and what peer review demands before work is accepted for publication is sometimes not as rational and scientific as the public may assume. Assuming that what I have observed in PA applies to healthcare management, the status quo does not favor greater use of evidence-based management practices. As Kovner and Rundall indicate, healthcare managers claim to practice evidence-based decision making but do not cite scholarly research as the evidence they draw upon.
If a disconnect exists between research and managerial practice, I believe scholars must accept at least part of the responsibility for closing the divide. Academic cultures are probably among the most durable of all organizational cultures, and it is unlikely that tenured or tenure-track faculty members will soon be rewarded for their ability to span related areas of knowledge or to contribute to successful practice. If that is correct, it is sad. There is a degree of disdain for academic "ivory towers" among some practitioners, and there is a degree to which some scholars look down upon successful practitioners. It is cause for concern when former students who did not display advanced cognitive skills move quickly into high-paying positions with major responsibilities. While evidence-based management practices are surely important, they are probably not highly correlated with successful careers as practitioners. "Success," of course, can be defined in different ways, but that is probably more evident to scholars than to practitioners. If the major institutions of society were ever managed by the people with the most advanced cognitive abilities, it appears that those entering the system now may be less well prepared to practice evidence-based management. If there is a gap between research and practice, it is the responsibility of all concerned to try to close it.
Labels:
education,
evidence-based management,
knowledge management,
public administration,
research methods,
scholarship,
universities
Thursday, January 13, 2011
Making Sense of Hospital Charges Data
This blog post is related to a reading assignment in an online course I am taking at the Medical College of Georgia, soon to become Georgia Health Sciences University. The article, published in The New York Times, is titled "In Health Care, Cost Isn't Proof of High Quality." In it, Reed Abelson observes that there is substantial variation in the cost of various medical services among institutions and that higher costs do not necessarily correlate with better outcomes or higher quality of care. The data were derived from reports submitted by hospitals in Pennsylvania.
http://www.nytimes.com/2007/06/14/health/14insure.html
http://www.phc4.org/reports/hpr/09/
Abelson's point appears to be that payers are questioning why they sometimes pay providers substantially more for medical services that do not appear to produce better outcomes overall. The data are reported by procedure/treatment for each hospital.
The measures of quality of care include mortality rating, length of stay, and readmission ratings, both for any reason and for complication or infection. The average charge per case (for each selected procedure/treatment) is shown for each surveyed hospital. All of the variables are risk-adjusted, and the data cover fiscal year 2009.
So, what are people to make of this? In many instances the number of cases of a particular treatment at a given hospital in FFY 2009 is very small, and averages based on a few cases can be dramatically skewed by one or two exceptional cases. The data are also reported in a somewhat confusing way: an unexpectedly short average length of stay appears with the same large dark circle that otherwise represents a high mortality rate or a high readmission rating. It is hard to interpret the data just by looking at the graphic, which resembles the way Consumer Reports rates automobile models by year. It does not appear to me that high mortality rates are associated with either higher or lower charges per case; nor does average length of stay appear to correlate with average charge; nor does the number of cases treated in FFY 2009 appear to correlate with any of the other data. It would take a substantial amount of quantitative analysis to test hypotheses for each procedure and treatment, and it would be helpful if the published data were aggregated by hospital rather than only by procedure/treatment.
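For what it is worth, that kind of screening could at least be started with a few lines of code. This is only a sketch: the file name, the column names, the case-count thresholds, and the choice of a rank correlation are all my assumptions, not anything published with the PHC4 report.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical flat file of the PHC4-style data, one row per
# hospital-procedure pair (column names are assumed):
# hospital, procedure, cases, avg_charge, mortality_rating
df = pd.read_csv("phc4_2009.csv")

# Test, procedure by procedure, whether charge tracks mortality,
# skipping hospital-procedure pairs too small to trust.
for procedure, grp in df.groupby("procedure"):
    grp = grp[grp["cases"] >= 30]   # guard against tiny-sample averages
    if len(grp) < 10:               # too few hospitals to say anything
        continue
    rho, p = spearmanr(grp["avg_charge"], grp["mortality_rating"])
    print(f"{procedure}: rho={rho:.2f}, p={p:.3f}, hospitals={len(grp)}")
```

Even this crude loop would show quickly whether the "charge tracks quality" hypothesis survives contact with the data for any procedure.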
My guess is that detailed quantitative analysis would not produce any clear explanation of why some hospitals charge substantially more, on average, than others for the same procedures/treatments. I think a qualitative approach might produce more insight: identify the hospitals that tend to post higher charges across most categories of procedures/treatments, then ask insiders what other attributes those hospitals share. They might be hospitals that provide high levels of charity care and need to shift the cost burden onto insured patients or other revenue sources. They might offer upscale accommodations. They might tend to be for-profit hospitals, or hospitals deeply in debt. Given a list of the "high chargers," I bet one or more explanatory themes would quickly become apparent.
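The "high chargers" list itself would be easy to generate once the data are in one table. Again, this is a sketch under the same assumed file and column names: normalize each hospital's charge within each procedure, then rank hospitals by how consistently they sit above the median.

```python
import pandas as pd

df = pd.read_csv("phc4_2009.csv")  # same hypothetical file as above

# Express each charge relative to the median for that procedure,
# so expensive procedures don't dominate the comparison.
df["charge_ratio"] = (
    df["avg_charge"] / df.groupby("procedure")["avg_charge"].transform("median")
)

# Rank hospitals by their typical markup across the procedures they report.
high_chargers = (
    df.groupby("hospital")["charge_ratio"]
      .median()
      .sort_values(ascending=False)
      .head(10)
)
print(high_chargers)  # the candidates to ask insiders about
```

Using the median ratio rather than the mean keeps one outlier procedure from putting a hospital on the list by itself.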
Labels:
charges,
data analysis,
quality,
research methods