I have been talking to some colleagues about the issues around medical tests, in particular whether some tests provide such low-quality information as to be of negative value. The PSA test for prostate cancer is a case in point, especially for men of my age. Should men around the age of 50 get the PSA test? My understanding of this test is that it reports a number from 0 to infinity, with higher numbers, and a positive rate of change, being thought to signal the presence of prostate cancer. Critics of the test note a high rate of false positives.
There are many other situations where medical tests are possible, from full body scans to mammograms. None of these tests are perfect: they will fail to detect cancers (false negatives), and they will signal cancer when none is present (false positives).
There is definitely a community of health professionals who advise many patients not to get the tests – and not because the tests fail to provide value in excess of their cost, but because the tests are actually thought to be of negative value even before considering their direct cost.
Note that this idea conflicts sharply with an idea that many economists would hold, which is that any information is good. As one of my colleagues puts it: the test has been done, and your doctor has emailed the result to you. Would you actually pay for an email filter that would prevent you from seeing that message? If the test has negative value, you would pay for the filter. If the test is of even small value, you would open that email!
This is an important question, of both personal and social value, and it deserves adequate consideration. I am going to give some initial analysis, using a framework from Bayesian statistics and decision theory, which I think is the optimal approach.
I am going to begin with what I call a Robinson Crusoe world, where the decisionmaker acts alone and considers only his own situation. Third-party effects, such as a doctor's influence, will be ignored.
The information setup is as follows. Bear with me if you have not done Bayesian analysis for a while; it is pretty straightforward. This is all standard stuff, and if you want to read more I highly recommend an old survey by two of my UCLA professors: Hirshleifer, J. & Riley, J. G. (1979), "The Analytics of Uncertainty and Information: An Expository Survey," Journal of Economic Literature 17(4): 1375-1421.
In a Bayesian decision setup, we have three kinds of variables: states of the world, messages, and actions. Here, we will have only two states of the world: cancer or no cancer. Messages are what the test provides. Now, the PSA test reports a continuous variable, and I will return to that characteristic later. For now, think of the test as returning one of two messages, m1 or m2. Message m1 can be thought of as a low PSA, below a critical value, while message m2 can be thought of as a high PSA, above the critical value.
There are four possible (message, state) outcomes, illustrated by the two-by-two matrix at the top of this post (and rendered as a small table below). Two of these have the message consistent with the state, (m1,s1) and (m2,s2). The other two have the message in error: a false negative, (m1,s2), and a false positive, (m2,s1). Note that here I am assuming m1 is the message we will think of as the “no cancer” message, i.e., a low PSA.
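For reference, the same matrix in text form:

                      s1 (no cancer)       s2 (cancer)
    m1 (low PSA)      correct negative     false negative
    m2 (high PSA)     false positive       correct positive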
The key probabilities for decisionmaking will be the posterior probabilities, which are derived from the priors and the joint message/state probabilities. More precisely:
(1) Pr(s1|m1) = {Pr(m1|s1)Pr(s1)} / Pr(m1)
(2) Pr(s2|m1) = {Pr(m1|s2)Pr(s2)} / Pr(m1)
(3) Pr(s1|m2) = {Pr(m2|s1)Pr(s1)} / Pr(m2)
(4) Pr(s2|m2) = {Pr(m2|s2)Pr(s2)} / Pr(m2)
Note that the message likelihoods – Pr(m2|s2) for example – are a function of the test’s characteristics and quality. For better information quality, we want large differences in the probabilities of a message conditional on different states.
The last two posterior probabilities are the important ones, as they are our posteriors after getting the bad message: the probability of not having cancer conditional on getting m2, and the probability of having cancer conditional on getting m2. Note that these two posterior probabilities will differ from their respective prior probabilities, depending on how far the ratios Pr(m2|s1)/Pr(m2) and Pr(m2|s2)/Pr(m2) are from 1. If Pr(m2|s2)/Pr(m2), for example, is much greater than 1, then the posterior probability of having cancer conditional on getting the bad message will be much higher than the decisionmaker’s prior probability. This means that m2 is a highly informative message.
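To make these formulas concrete, here is a minimal numerical sketch in Python. The prevalence, sensitivity, and false-positive figures are assumptions chosen for illustration, not estimates of the actual PSA test's characteristics.

    # All numbers are illustrative assumptions, not real PSA test characteristics.
    prior_cancer = 0.03                   # Pr(s2): prior probability of cancer
    prior_no_cancer = 1 - prior_cancer    # Pr(s1)
    p_m2_given_s2 = 0.80                  # Pr(m2|s2): high reading given cancer
    p_m2_given_s1 = 0.15                  # Pr(m2|s1): high reading given no cancer

    # Unconditional probability of the bad message, Pr(m2)
    p_m2 = p_m2_given_s2 * prior_cancer + p_m2_given_s1 * prior_no_cancer

    # Posteriors after the bad message, equations (3) and (4)
    p_s1_given_m2 = p_m2_given_s1 * prior_no_cancer / p_m2   # about 0.858
    p_s2_given_m2 = p_m2_given_s2 * prior_cancer / p_m2      # about 0.142

    print(p_m2_given_s2 / p_m2)   # the ratio Pr(m2|s2)/Pr(m2), about 4.7 here

With these assumed numbers, the bad message multiplies the prior of 0.03 by roughly 4.7, to a posterior of about 0.14 – so m2 is quite informative.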
Now we can consider taking actions conditional on a message. I will presume the action to be “treatment,” with the implicit understanding that that might just mean further testing. The decisionmaker wants to take actions that increase their utility, or well-being.
Suppose we take the action of treatment if we get the bad message, m2. Then we can write our expected utility conditional on m2 to be:
(5) E(utility|action,m2) = GAIN*Pr(s2|m2) + LOSS*Pr(s1|m2) - c
where GAIN is our health improvement from treating a real cancer, and LOSS is our health decrement from taking treatment when we do not have cancer (since we got a false positive test); LOSS enters as a negative number. Note that I do include the cost of the test, c, even though I am most interested in whether the before-cost, gross value of the information can be negative.
Our expected utility conditional on message m1 is
(6) E(utility|no action, m1) = -c
since all we do is pay the cost of the test when we get message m1. I could put an additional cost in here, if there were “angst” caused by the test, but I will pass on that idea for now.
The crux of the issue is illustrated by Equation (5), the expected utility conditional on message m2. The value of the test is going to be greater, the greater is the GAIN from treating a detected cancer and the greater is Pr(s2|m2). The value of the test is going to be lower, the greater is the LOSS from undergoing treatment when we do not have cancer, and the greater is Pr(s1|m2) – the probability that the positive message is false. (Note that false negatives do not enter our analysis directly, but they do enter indirectly: the probability of a false negative, Pr(m1|s2), equals 1 - Pr(m2|s2), so the lower the probability of a false negative, the higher the probability of a correct positive.)
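Continuing the numerical sketch, equation (5) might be evaluated as follows; the GAIN, LOSS, and cost figures are placeholder utilities I have made up for illustration.

    # Posteriors carried over from the Bayes sketch above; utilities are assumed.
    p_s2_given_m2 = 0.142   # Pr(s2|m2) from the sketch above
    p_s1_given_m2 = 0.858   # Pr(s1|m2)
    GAIN = 10.0             # assumed utility gain from treating a real cancer
    LOSS = -4.0             # assumed utility loss from treating a healthy patient
    c = 0.1                 # assumed direct cost of the test

    # Equation (5): expected utility of treating after the bad message
    eu_treat_given_m2 = GAIN * p_s2_given_m2 + LOSS * p_s1_given_m2 - c
    print(eu_treat_given_m2)   # about -2.11: negative with these assumed numbers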
One might jump on the fact that the expected utility conditional on message m2 can be negative, even without considering the cost of the test. This is true if the LOSS and/or the probability Pr(s1|m2) of a false positive are large – as in the sketch just above.
However, we need to take a rational decisionmaking viewpoint. If the expected utility conditional on m2 is negative, then we should just never take the treatment! Granted, we will pay the cost of the test, but as I said at the beginning, some people seem to think that tests can be of negative value even without considering the direct cost of the test. From our point of view here, that cannot be true. Of zero value, that is possible, but not negative.
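This point can be checked mechanically. A sketch, again using the assumed numbers from above: compute expected utility without the test (taking the better unconditional action) and with the test (taking the better action after each message). The gross difference cannot come out negative.

    # Gross (before-cost) value of the test under rational action choice.
    # Same assumed numbers as in the sketches above.
    prior_cancer, GAIN, LOSS = 0.03, 10.0, -4.0
    p_m2_given_s2, p_m2_given_s1 = 0.80, 0.15

    prior_no_cancer = 1 - prior_cancer
    p_m2 = p_m2_given_s2 * prior_cancer + p_m2_given_s1 * prior_no_cancer
    p_m1 = 1 - p_m2

    def eu_treat(p_s2, p_s1):
        # Expected utility of treating, before the test cost
        return GAIN * p_s2 + LOSS * p_s1

    # Without the test: treat or not, whichever is better unconditionally.
    eu_no_test = max(eu_treat(prior_cancer, prior_no_cancer), 0.0)

    # With the test: pick the better action after each message.
    eu_m1 = max(eu_treat((1 - p_m2_given_s2) * prior_cancer / p_m1,
                         (1 - p_m2_given_s1) * prior_no_cancer / p_m1), 0.0)
    eu_m2 = max(eu_treat(p_m2_given_s2 * prior_cancer / p_m2,
                         p_m2_given_s1 * prior_no_cancer / p_m2), 0.0)
    eu_with_test = p_m1 * eu_m1 + p_m2 * eu_m2

    print(eu_with_test - eu_no_test)   # gross value of information, never negative

With these particular numbers the optimal policy is never to treat, so the gross value comes out exactly zero; a larger GAIN or a more informative test would make it strictly positive, but no combination makes it negative.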
And there is yet another level to the analysis, which will show even more strongly the likelihood of a strictly positive value for any medical test that reports a continuous variable and becomes more precise as we increase the cutoff. See this report from Johns Hopkins for some discussion, in particular the following:
"In general, a PSA value of 4 ng/mL is considered the cut-off for suspected cancer (although it may vary slightly by age), and levels above 10 ng/mL indicate very high risk. It is values between 4 and 10 ng/mL that are the most ambiguous; men in this range may benefit most from refinements in the PSA test. The risk of cancer based on PSA levels follows:If the expected utility conditional on m2 is negative, then we should increase our cutoff to reduce the probability of false positives and increase the probability of a correct diagnosis (conditional on getting m2). For instance, if a PSA of 8 was our cutoff in the above analysis, then let’s use a cutoff of PSA=50.
PSA levels under 4 ng/mL: "normal"
4 to 10 ng/mL: 20 to 30% risk
10 to 20 ng/mL: 50 to 75% risk
Above 20 ng/mL: 90%."
In equation (5), increasing the cutoff will clearly increase our expected utility conditional on m2, for Pr(s2|m2) will increase and Pr(s1|m2) will decrease.
Now it is of course true that by increasing our cutoff, we are decreasing the chance of getting a bad message, that is, of getting m2. So we will be less likely to take action, but when we do, we can be pretty sure that we are doing the right thing.
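To illustrate both effects, here is a sketch that treats log(PSA) as roughly normal under each state. The distribution parameters and the prior are assumptions chosen for illustration, not clinical estimates.

    # Sketch of a continuous test: log(PSA) modeled as normal under each state.
    # All parameters are assumptions for illustration only.
    from math import log
    from statistics import NormalDist

    healthy = NormalDist(mu=log(1.0), sigma=0.8)   # log-PSA without cancer (assumed)
    cancer = NormalDist(mu=log(8.0), sigma=0.9)    # log-PSA with cancer (assumed)
    prior_cancer = 0.03                            # assumed prior Pr(s2)

    def bad_message_stats(cutoff):
        # m2 means "PSA above cutoff"; return Pr(m2) and Pr(s2|m2).
        p_m2_s2 = 1 - cancer.cdf(log(cutoff))      # Pr(m2|s2)
        p_m2_s1 = 1 - healthy.cdf(log(cutoff))     # Pr(m2|s1): false positive rate
        p_m2 = p_m2_s2 * prior_cancer + p_m2_s1 * (1 - prior_cancer)
        return p_m2, p_m2_s2 * prior_cancer / p_m2

    for cutoff in (4, 8, 20, 50):
        p_m2, posterior = bad_message_stats(cutoff)
        print(cutoff, round(p_m2, 4), round(posterior, 3))
    # Pr(m2) falls and Pr(s2|m2) rises toward 1 as the cutoff climbs.

With these assumed distributions, raising the cutoff from 4 to 50 drives Pr(m2) from about 6% down toward a tiny fraction of a percent, while Pr(s2|m2) climbs from roughly 0.37 toward 1.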
With a low probability of m2, the overall value of the test may be negative, for we are always paying for the test and very rarely taking action. However, my point again is that the test must have value in the gross, before-cost, sense. Or to use the email analogy, if someone already emailed me the results of the test, I definitely do not want to delete that message before seeing it!
I could bring in the angst of getting a test result that is not high enough to warrant action but high enough to make one nervous, or issues of self-control – an inability to commit oneself to not taking action (or not worrying) if the test result is not extremely high. But that will be for another discussion.