NSA Sweep "Waste of Time," Analyst Says


It'd be one thing if the NSA's massive sweep of our phone records was actually helping catch terrorists. But what if it's not working at all? A leading practitioner of the kind of analysis the NSA is supposedly performing in this surveillance program says that "it's a waste of time, a waste of resources. And it lets the real terrorists run free."
Re-reading the USA Today piece, one paragraph jumped out:

This kind of data collection from phone companies is not uncommon; it's been done before, though never on this large a scale, the official said. The data are used for 'social network analysis,' the official said, meaning to study how terrorist networks contact each other and how they are tied together.

So I called Valdis Krebs, who's considered by many to be the leading authority on social network analysis -- the art and science of finding the important connections in a seemingly impenetrable mass of data. His analysis of the social network surrounding the 9/11 hijackers is a classic in the field.
Here's what Krebs had to say about the newly-revealed NSA program that aims to track "every call ever made": "If you're looking for a needle, making the haystack bigger is counterintuitive. It just doesn't make sense."
"Certain people are more suspicious than others," he adds. They make frequent trips back-and-forth to Afghanistan, for instance. "So you start with them. And you work two steps out. If none of those people are connected, you don't have a cell. Because if one was there, you'd find some clustering. You don't have to collect all the data in the world to do that."
The right thing to do is to look for the best haystack, not the biggest haystack. We knew exactly which haystack to look at in the year 2000 [before the 9/11 attacks]. We just didn't do it...
The worst part -- the thing that's most disappointing to me -- is that this is not the right way to do this. It's a waste of time, a waste of resources. And it lets the real terrorists run free.
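
For the programming-inclined, Krebs' two-steps-out procedure maps onto a few lines of graph code. Here's a minimal sketch, assuming the call records live in a Python networkx graph; the seed list, radius, and clustering threshold are my illustrative stand-ins, not anything Krebs specified.

```python
import networkx as nx

def check_for_cell(call_graph, seeds, radius=2, threshold=0.3):
    """Krebs-style check: expand `radius` hops out from the seed
    suspects, then look for clustering among the contacts found.
    A sparse, unclustered neighborhood suggests there's no cell."""
    # Collect everyone within `radius` calls of any seed.
    neighborhood = set()
    for seed in seeds:
        if seed in call_graph:
            reached = nx.single_source_shortest_path_length(
                call_graph, seed, cutoff=radius)
            neighborhood.update(reached)

    sub = call_graph.subgraph(neighborhood)
    # Average clustering coefficient: how often do a contact's
    # contacts also call each other?
    clustering = nx.average_clustering(sub) if len(sub) > 2 else 0.0
    return clustering >= threshold, clustering

# Hypothetical usage: edges are (caller, callee) pairs.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
likely_cell, score = check_for_cell(G, seeds=["A"])
print(likely_cell, round(score, 2))
```

The point of the sketch is what it doesn't need: the rest of the haystack. Everything outside two hops of the seeds never gets touched.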

UPDATE 2:30 PM: Shane Harris broke this story, in broad strokes, back in March, Patrick reminds us. Harris also offers a possible explanation for some of the NSA program's massive size:
To find meaningful patterns in transactional data, analysts need a lot of it. They must set baselines about what constitutes "normal" behavior versus "suspicious" activity. Administration officials have said that the NSA doesn't intercept the contents of a communication unless officials have a "reasonable" basis to conclude that at least one party is linked to a terrorist organization.
To make any reasonable determination like that, the agency needs hundreds of thousands, or even millions, of call records, preferably as soon as they are created, said a senior person in the defense industry who is familiar with the NSA program and is an expert in the analytical tools used to find patterns and connections. Asked if this means that the NSA program is much broader and less targeted than administration officials have described, the expert replied, "I think that's correct."
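
To make the baseline idea concrete: in its simplest form, you estimate "normal" behavior from the population and flag records that deviate sharply. A bare-bones sketch, with made-up call counts and a made-up threshold -- nothing here comes from the actual NSA system:

```python
import statistics

def build_baseline(call_counts):
    """Estimate 'normal' daily call volume from the population.
    The more records you have, the steadier this estimate gets --
    which is one reason such programs want so much data."""
    return statistics.mean(call_counts), statistics.stdev(call_counts)

def anomaly_score(count, mean, stdev):
    """Z-score: how many standard deviations from typical behavior."""
    return (count - mean) / stdev if stdev else 0.0

# Hypothetical daily call counts across a set of subscribers.
population = [3, 5, 4, 6, 2, 5, 4, 3, 5, 40]
mean, stdev = build_baseline(population)
for count in population:
    if anomaly_score(count, mean, stdev) > 2.5:
        print(f"{count} calls/day flagged as anomalous")
```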

Harris also fingers a likely research program meant to help the NSA better comb through all this data: "Novel Intelligence from Massive Data," or NIMD. Its goal is to develop "techniques and tools that assist analysts not only in dealing with massive data, but also in interactively making explicit - and modifying and updating - their current analytic (cognitive) state, which includes not only their hypotheses, but also their knowledge, interests, and biases."
You'll be shocked to hear that NIMD's website has been taken offline. But you can find Google caches about the program here, here, here, and here.
UPDATE 5:19 PM: "To me, it's pretty clear that the people working on this program aren't as smart as they think they are," says former Air Force counter-terrorist specialist John Robb. "Some top level thinking indicates that this will quickly become a rat hole for federal funds (due to wasted effort) and a major source of infringement of personal freedom." John gives a bunch of reasons why. Here's just one:
It will generate oodles of false positives. Al Qaeda is now in a phase where most domestic attacks will be generated by people not currently connected to the movement (like we saw in the London bombings). This means that in many respects they will look like you and me until they act. The large volume of false positives generated will not only be hugely inefficient, it will be a major infringement on US liberties. For example, a false positive will likely get you automatically added to a no-fly list, your boss may be visited (which will cause you to lose your job), etc.
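
Robb's false-positive worry is the classic base-rate problem. Run even a very accurate detector over a couple hundred million mostly innocent people, and the real hits drown in noise. The numbers below are mine, purely for illustration:

```python
# Hypothetical numbers to illustrate the base-rate problem.
population     = 200_000_000  # subscribers screened
actual_bad     = 1_000        # true targets in that population
sensitivity    = 0.99         # detector catches 99% of real targets
false_pos_rate = 0.001        # flags 0.1% of innocent people

hits_real  = actual_bad * sensitivity                     # ~990
hits_false = (population - actual_bad) * false_pos_rate   # ~200,000
precision  = hits_real / (hits_real + hits_false)

print(f"{hits_false:,.0f} innocent people flagged")
print(f"Chance a flagged person is a real target: {precision:.3%}")
```

With those assumptions, fewer than one in two hundred flagged people is a genuine target -- and each of the other 199 may end up on a no-fly list.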

UPDATE 6:23 PM: And now, the rebuttal. I just got off the phone with a source who has extensive experience in these matters. And he disagrees, strongly, with Krebs and Robb.
Really, the source said, there are two approaches to whittling down massive amounts of information: limiting what you search from the beginning -- or taking absolutely everything in, and sifting through it afterwards. In his experience, the source said, the approach of using "brute force... not optimally, not smartly" on the front end, and "cleaning [the data] up later" worked the best. Oftentimes, other people don't know what you're searching for (or they don't have the same super-slick data-mining algorithms you've got). Better just to get it all.
In everything from speech analysis to sensor fusion, he argued, when you've got a weak signal masked by a lot of noise, "more data seems to be the answer... More data is what's going to allow you to get to ground truth."
Of course, there's a price to pay with this approach: a ton of false alarms. Several stages of filtering should fix that, he argued. Besides, "it's not like you call the FBI every time you get a hit."
Think of it as the Google approach. Wouldn't you rather have everything available on the search engine, and then do queries yourself?
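
In other words, a staged pipeline: ingest everything, then run successively stricter filters so only a sliver of the data ever reaches an analyst. A rough sketch of that architecture -- the stage names and rules here are invented, not drawn from any real program:

```python
# Sketch of a staged "collect everything, filter later" pipeline.

WATCHED_NUMBERS = {"555-0100"}  # hypothetical watch list

def stage_cheap(record):
    """Pass 1: near-free screen applied to every record."""
    return record["callee"] in WATCHED_NUMBERS

def stage_pattern(record):
    """Pass 2: costlier check applied only to pass-1 survivors."""
    return record["duration"] < 60  # e.g., many short calls

def stage_analyst(record):
    """Pass 3: placeholder for whatever finally merits human review."""
    return True

records = [
    {"caller": "555-0199", "callee": "555-0100", "duration": 45},
    {"caller": "555-0142", "callee": "555-0177", "duration": 300},
]

survivors = records
for stage in (stage_cheap, stage_pattern, stage_analyst):
    survivors = [r for r in survivors if stage(r)]

print(f"{len(survivors)} of {len(records)} records reach review")
```

Note that the filtering only works as well as the filters -- which is exactly where Krebs and Robb say the trouble starts.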
UPDATE 05/12/06 8:52 AM: The rebuttal gets rebutted.
"I find it almost impossible to believe that the NSA has a system good enough to beat human int[elligence], selective tapping, and the kind of progressive extension that Krebs cites," an MIT professor says, who also passes along this handy graphic.
[Graphic: kevinBacon.jpg]
You need to have a good understanding of the "classifiers" and functions appropriate for your data set -- developing the knowledge and techniques around finding those classifiers has taken [computer] vision [research] 30 years to get where it is (able to drive a car through a pre-set path in a desert, recognize one face out of a thousand with good rejection but many, many false positives)... Meaning fine, but not great... We have almost no idea how complex this issue is, but it's probably similar.
One thing about your "extensive experience" source is that he doesn't really specify what kind of search he was doing. People doing data mining may be looking in many different ways. For instance, if you have six million examples of successful stock price changes and six million examples of unsuccessful ones, you might look for other variables (past performance, location, etc.) that signal a difference -- any difference. Large data sets are definitely helpful for this.

Getting machine learning to discover a specific thing -- like a familial bond based on telephone calls -- may or may not work at all. If all you have is frequency, there may be a half dozen other types of relationships that lead to numerous calls. There may never be a way of discerning a relationship based on a single modality of communication. That's why most of the people I know are using millions of other sensors, like GPS, accelerometers, recording the voice, reading heart rate, etc. Then they may be able to say with moderate certainty that they can tell something from phone calls. The NSA can't do that with what USA Today says they're collecting.
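
The professor's single-modality point is easy to demonstrate: if two relationship types generate similar call frequencies, no amount of clever math on frequency alone will separate them, while a second sensor might. A toy example, with entirely fabricated numbers:

```python
# Toy illustration: call frequency alone can't separate two
# relationship types, but adding a second modality can.

# (calls_per_week, shared_locations_per_week) for two groups.
family    = [(12, 5), (10, 6), (14, 4)]
coworkers = [(11, 0), (13, 1), (12, 0)]

def separable(group_a, group_b, dim):
    """Can a single threshold on feature `dim` split the groups?"""
    a = [x[dim] for x in group_a]
    b = [x[dim] for x in group_b]
    return max(a) < min(b) or max(b) < min(a)

print("Separable on call frequency alone:", separable(family, coworkers, 0))
print("Separable with location data:    ", separable(family, coworkers, 1))
```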

UPDATE 05/12/06 11:48 AM: Click here to see if you can spot the difference between an Al-Qaeda cluster and one from a Fortune 500 firm.