Friday, May 25, 2012

Profiling debate

Of course I’m aware that a terrorist could place a bomb in an old lady’s bag—and that is why I was careful to say that everyone’s baggage should be screened. But it is very far-fetched to think that jihadist organizations will successfully recruit people of the sort pictured in my original blog post. And if we are concerned that terrorists might kidnap some old lady’s grandchildren and force her to walk through security with a bomb in her girdle—well, that’s what behavioral profiling is for. Presumably, our screeners would find themselves in the presence of one very nervous old lady.
We have to ask ourselves which is more plausible—that terrorists will find it easier to recruit or coerce the least likely suspects, or that they will benefit from our needlessly searching these suspects by the hundreds of millions, year after year? I do not doubt that a profile can be gamed—and this is worth worrying about—but I am more concerned about the risk of airport screeners obviously wasting their time. 
So I hope we can put that bit about mere “correlation” behind us. Generally speaking, we know who we are looking for—Muslim jihadists.
In your article, you declare that my profile isn’t accurate because “it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are.” Unfortunately, this gets things exactly backwards. The question is not, What is the probability that any given Muslim is a terrorist? The question is, What is the probability that the next terrorist will be a Muslim? You can bury the signal in as much noise as you want; it will not change the fact that the threat of suicidal terrorism is coming from a single group.
We face an ongoing threat of people bringing bombs onto airplanes. There will surely be a next attempt, and one after that, and one after that. Even as you and I have been conducting this debate, we have been hearing reports about new and improved “underwear bombs” and about the prospect of terrorists having IEDs surgically implanted in their bodies. How likely is it that these ghoulish attempts to murder innocent people will come from Muslims waging jihad? It isn’t 1 in 80 million, or 1 in 8 million, or even 1 in 8. You admit that the likelihood is “high.”
Your concern about the low base rate of terrorism, leading to the problem of too many false positives, seems misguided. The problem of base rate is often very important, of course, but not in the case of airport security. For readers who might be unfamiliar with Bayesian statistics, let me briefly illustrate what I think you were trying to do with your math:
Let’s say I get a blood test designed to screen for some terrible disease and it comes back positive. My doctor tells me that this test is 99% accurate and only produces false positives 1% of the time. Does this mean that I have a 99% chance of having the disease? No. We need to know how prevalent this disease is in the population of people who share my risk factors (the base rate). If the disease is rare, the chance that I have it will still be quite low. A false-positive rate of 1% will produce 100 errors per 10,000 tests. If the disease only affects 1 in 10,000 people like me, my actual chance of having the disease (given that I tested positive) will be 1/101—or slightly less than 1%.
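The arithmetic above can be sketched with Bayes’ theorem. (Note that the 1/101 figure assumes the test never misses a real case; with 99% sensitivity the posterior comes out marginally lower, closer to 1/102.)

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test), via Bayes' theorem."""
    true_pos = sensitivity * prior                      # sick and flagged
    false_pos = false_positive_rate * (1 - prior)       # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Disease affects 1 in 10,000; test flags 1% of healthy people.
# Assume the test catches every real case (sensitivity = 1):
p = posterior(prior=1 / 10_000, sensitivity=1.0, false_positive_rate=0.01)
print(round(1 / p))  # 101 -- the "1/101" chance in the text
```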
This seems to be the kind of sobering and counterintuitive demonstration of the “base rate fallacy” you were attempting in your article. The lesson that you and many others seem desperate to draw is that a little Bayesian analysis proves that profiling Muslims makes absolutely no sense. But what is interesting about false positives in my medical example is that the consequences of entertaining them (i.e., believing that one has a deadly illness) are huge, and learning the base rate completely changes one’s sense of the risk. This is not the case with the threat of Muslim terrorism.
What is a false positive in the context of airport security? It might be nothing more than asking a person a follow-up question or performing a hand inspection of his bag. We are not talking about imprisoning people who fit the profile at the airport. A concern about false positives only makes sense if paying closer attention to innocent Muslims has some truly terrible consequences. You suggest that it will have two: it will produce a backlash in the Muslim community and allow terrorists to game the system (rendering the profile inaccurate). I am skeptical about both these claims for reasons that I hope we will discuss.
Of course your base rate argument could also be used to justify taking no security precautions whatsoever—which I’m beginning to worry is what you recommend. In your essay, you assume that false positives (screening innocent Muslims) are so unpleasant as to be morally unacceptable, while false negatives (letting the occasional bomb-laden terrorist onto an airplane) aren’t so bad that we should seek to prevent every instance of them. I am open to the idea that we are irrationally afraid of airline terrorism (and airplane crashes generally), but you have not made this case. And I would point out that our horror at the prospect of planes exploding at 30,000 feet is part of the cost of terrorism that we must consider. If, as a result of some quirk in human psychology, a few downed airplanes will cripple our economy in a way that a few blown-up trains never will, then it is rational for us to have a zero-tolerance policy regarding bombs on airplanes.
BS:  It turns out designing good security systems is as complicated as I make it out to be.  Witness all the lousy systems out there designed by people who didn’t understand security.  Designing an airport security system is hard.  Designing a passenger profiling system within an airport security system is hard.  And I’m going to walk you through an analysis of your security design.
In your response above, you make a big deal about two points that are unimportant.
One, it doesn’t matter whether the correlation between Muslim and terrorist is a causal relationship. We’re talking about a detection system.  You’re proposing that we can detect attribute A (terrorist) by using attribute B (Muslim).  That’s what matters, not whether or not there’s a causal arrow or which direction it points.  In using the word “correlation” I was giving you the benefit of the doubt; it’s a lower bar.
And two, “the probability that the next terrorist will be a Muslim” doesn’t matter either.  To demonstrate that, for now I’ll just assume the probability equals one.
To analyze your system, I first need to describe it.  In security, the devil is in the details, and it’s the details that matter.  Lots of security systems look great in one sentence but terrible once they’re expanded to a few paragraphs.
You’re proposing an airport passenger screening system with two tiers of security.  Everyone gets subjected to the lower tier, but only people who meet your profile, “Muslims, or anyone who could conceivably be Muslim,” would be subjected to the higher tier.
SH:  Yes, and anyone else whose bag or behavior seems to merit follow-up (e.g., the Hindawi affair).
BS:  That’s behavioral profiling, completely different from what we’re discussing here.  I want to stick with your ethnic profiling system.
SH: Well, I disagree. And the Israelis, who are generally credited with being the masters of behavioral profiling, appear to disagree as well. A person’s behavior can only be interpreted in context. What does a man’s sweating profusely and looking agitated mean? It means one thing if he is a morbidly obese senior from Alabama traveling with his wife and their church group, who is struggling to get all the trinkets he purchased in Jerusalem into a bursting suitcase; it means another if he is a 23-year-old man traveling on a Pakistani passport who is doing his best to not make eye contact with anyone. The distinction between behavioral profiling and everything else that can be noticed about a person is a myth. However, we can table this issue for the time being.
BS: You can disagree, but I assure you that the Israelis understand the difference between ethnic profiling and behavioral profiling.  Yes, they do both together, but that doesn’t mean you can confuse them.  But let’s stick to topic: ethnic profiling.
In practice, this would mean that everyone would go through primary airport screening: x-ray machine for hand luggage, and the magnetometer or full-body scanner for their bodies. But when primary screening results in an anomaly—this is generally because the magnetometer beeps, the full-body scanner shows something, or there’s something suspicious in an x-ray image—in some cases people who don’t meet the profile would be allowed through security without that anomaly being further checked.
SH:  Yes, depending on the anomaly.
BS: TSA screeners would have to determine, based on predetermined but subjectively applied criteria, whether or not individuals meet the profile.  You are not proposing this because it will improve security.
SH: On the contrary, I believe it will improve security. Let’s say that in each moment the TSA has $100 worth of attention, and they can spend it any way they want. A dollar spent on a toddler whose family does not stand a chance of having turned him into an IED is a dollar wasted (i.e., not spent elsewhere).
BS:  That’s also a separate issue.  We’re comparing profiling with not profiling. You are essentially making an efficiency argument in support of profiling: “I am more concerned about the risk of airport screeners obviously wasting their time.”  This efficiency, you argue, could result in either cost savings as TSA staffing was reduced, or in increased security elsewhere as superfluous screeners were retasked to do other things that might improve security.  But that is independent of, and irrelevant to, the analysis of the proposed security system.  The proposed benefit of the profiling system is the same security at reduced cost, and reduced inconvenience to non-profiled people.
SH:  I agree. I would just emphasize that I think of efficiency in terms of increased security, not in terms of reducing costs. Efficiency allows for more eyes on the problem—another person watching the scanner images, another person able to study the behavior of a suspicious person. Every moment spent following up with the wrong family is not just a moment in which the line slows down—it’s also a moment in which someone or something else gets ignored.
BS:  Of course.  Again, when you have an efficiency gain you can either realize it by reducing your cost or by doing more of what you’re already doing.  But that potential additional security has nothing to do with the efficacy of profiling.  If we believe that an extra $10 of attention will make us safer, we can either add $10 to the TSA’s budget, or save $10 by increasing efficiency somewhere else.
SH: Now I see what you are getting at—and I’m prepared to agree for the sake of letting you continue with your analysis. But I want to point out that there might be more to it than the question of efficiency. I think a policy of not profiling—that is, remaining committed to the fiction that we have no idea where the threat of suicidal terrorism is coming from—might cause screeners to be much worse at their jobs than they would otherwise be. Gains in efficiency due to profiling might not just be a matter of “doing more of what you’re already doing.” It could be doing more of what the Israelis are already doing—which I don’t think entails their lying to themselves about the source of the problem.
BS: You are, however, implying a different type of profiling system: to take a security procedure now randomly applied—swabbing luggage for explosive residue, for example—and apply it according to the profile.  Leave that aside for now; I’ll come back to it later.
One piece of security philosophy to start.  Complexity is the enemy of security.  Adding complexity to a security system invariably introduces additional vulnerabilities (see my 2000 essay).  Simple systems are easier to analyze.  Simpler systems have fewer security assumptions.  Simpler systems are more robust against mistakes in analysis.  And simpler systems are more secure. 
More specifically, simplicity tends to completely remove potential avenues of attack.  An easy example might be to think of a building.  Adding a new door is an additional complexity, and requires additional security to secure that door.  This leads to an analysis of door materials, lock strength, and so on.  The same building without that door is inherently more secure, and requires no analysis or assumptions about how it will be secured.  Of course, this isn’t to say that buildings with doors are insecure, only that it takes more work to secure them.  And it takes more work to secure a building with ten doors than with one door.  I will appeal to simplicity multiple times in any analysis of your profiling system.
Let’s get started, then.  Security is always a trade-off: costs versus benefits.  We’re going to tally them up.
The primary benefit to your system is increased efficiency, but it’s not as much as you think. In Kip Hawley’s memoir of his time as head of the TSA, he talks about the shoe scanning process. After Richard Reid’s failed shoe-bombing attempt in late 2001, TSA screeners started requiring people wearing thick-heeled shoes and boots to remove them and put them through the x-ray machines.  They deliberately chose the most accurate correlation in order to minimize passenger inconvenience. But when they revised the rule to require everyone to take their shoes off, checkpoint throughput increased.  There is an inherent inefficiency to non-uniform procedures, and when passengers knew what to expect, there was less delay.
Your system is different.  The non-uniformity is in the resolving of anomalies, not in the basic security procedures that everyone has to go through. There would be an efficiency benefit resulting from your system, but it would still be diminished because passengers wouldn’t know what to expect.