
Something doesn’t add up in the latest Washington Post PRISM story

The Washington Post has released additional slides from the PRISM deck, annotated in a way that resurrects the “equipment installed at company premises” claim. Some – notably Glenn Greenwald – have claimed this verifies the “direct access to company databases” claim from the original story, despite the vociferous denials of all the companies involved.

But does it? Dig a little deeper, and I think it becomes clear that the WaPo hasn’t got the story it thinks it has.

First, nothing in the released slides themselves directly corroborates the “installed at company premises” claim; it exists only in the annotations that the reporter, Barton Gellman, has added to the slides. Here’s how Gellman describes the process:

The search request, known as a “tasking,” can be sent to multiple sources — for example, to a private company and to an NSA access point that taps into the Internet’s main gateway switches. A tasking for Google, Yahoo, Microsoft, Apple and other providers is routed to equipment installed at each company. This equipment, maintained by the FBI, passes the NSA request to a private company’s system.

The slides themselves, though, mention little of this. In particular, there’s no reference to company premises anywhere on the slides.

Given that the slides don’t say that equipment is installed at the company, where has this point come from? I think there are three options:

  1. It’s featured in other, as-yet unreleased slides.
  2. It comes from verbal or written testimony from Edward Snowden or another intelligence source.
  3. It’s an interpretation of something in the released slides.

The first option is possible, but I think we can rule it out. If there were a clear, unambiguous statement on another slide that the FBI had equipment installed on company premises, I can’t see why the WaPo wouldn’t publish that slide, even if it had to do so in heavily redacted form. So that leaves us with the other options.

Is the WaPo relying on unknown third-party sources? If it were, I can’t see why it wouldn’t add an “intelligence sources confirmed…” line to the story. It would be a stronger story for it, so why not say so? If, on the other hand, it’s Snowden, I can understand why it might avoid naming him as the source. Snowden’s direct testimony has proved occasionally exaggerated and sometimes even unreliable – but the WaPo could attribute the claim to “a source familiar with the whole presentation” instead of naming him, which would again strengthen the story.

At this point, I think the onus is on the WaPo to be a little more transparent and clear this up. If there’s additional evidence, show it – or at least note that you’re relying on it.

Which leaves us with the third option: interpretation. And I think this is where WaPo has, at the very least, produced something that’s an epic muddle. The muddle occurs around the box labelled “FBI Data Intercept Technology Unit (DITU)”.

A DITU sounds like a piece of technology. It sounds like the kind of thing that you would install somewhere to do intercepts, and, given the way the diagram is structured, you might well surmise that it was installed on company premises.


But it’s not. The Data Intercept Technology Unit isn’t a piece of technology that would sit at a company’s premises: it’s a department of the FBI, formed several years ago and tasked with data interception of the “packet sniffing” variety (it even has its own Challenge Coin). It’s known to use a suite of packet inspection tools which let it recreate emails, IM, images, web pages and more from raw TCP/IP data. In other words, it specialises in snooping tools that reveal what someone is doing online without any access to the original servers – tapping data at the ISP level rather than the server level.
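
To make the distinction concrete, here’s a rough sketch of what ISP-level interception involves: rebuilding TCP streams from passively captured packets, with no access to the servers at either end. This is a generic illustration using the third-party dpkt library and a hypothetical capture.pcap file, not anything drawn from the slides or from knowledge of the DITU’s actual tools.

```python
# Generic sketch of ISP-level flow reassembly: rebuild TCP streams from a
# raw packet capture, without touching the servers at either end.
# Assumes the third-party dpkt library; "capture.pcap" is hypothetical.
import socket
from collections import defaultdict

import dpkt

# (src ip, src port, dst ip, dst port) -> {sequence number: payload}
flows = defaultdict(dict)

with open("capture.pcap", "rb") as f:
    for ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        ip = eth.data
        if not isinstance(ip, dpkt.ip.IP) or not isinstance(ip.data, dpkt.tcp.TCP):
            continue
        tcp = ip.data
        if tcp.data:  # skip bare ACKs and other empty segments
            key = (socket.inet_ntoa(ip.src), tcp.sport,
                   socket.inet_ntoa(ip.dst), tcp.dport)
            flows[key][tcp.seq] = tcp.data

# Stitch each flow back together in sequence order. A real tool would also
# handle retransmissions, overlapping segments and sequence-number wraparound,
# then parse the byte streams back into emails, IM, images and web pages.
for key, segments in flows.items():
    stream = b"".join(segments[seq] for seq in sorted(segments))
    print(key, len(stream), "bytes reassembled")
```

The point is that everything above works on a passive copy of traffic in transit; at no stage does it need credentials for, or cooperation from, the servers that originated the data.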

The annotation on the second new slide is where the waters get really muddy. In a note attached to the box for the DITU, Gellman adds:

From the FBI’s interception unit on the premises of private companies… [my emphasis]

Does Gellman think that DITU is the “interception unit”? I emailed him to ask, and initially he confirmed that the “interception unit” referred to in the annotation was the DITU – which would be a fairly major error. However, when I pointed out that this made no sense, he clarified, claiming that by “interception unit” he was referring to the organisation within the FBI, not the equipment. All clear on that?

[Image: WaPo DITU]

This, though, makes the annotations even more puzzling. Why would you use the phrase “interception unit on the premises” to refer to the organisation within the FBI? Clearly, the organisation isn’t on the premises – the equipment (supposedly) is.

The other option is that Gellman is using “interception unit” to mean both the DITU and the equipment, which would be – at the very least – pretty poor writing. So what exactly does Gellman mean? Perhaps understandably, he declined to answer further questions.

None of this means that the WaPo doesn’t have a story. We now know that the FBI’s DITU can be tasked by the NSA to conduct live surveillance on the data of identified (and 51%-certain-foreign) targets. The NSA can also request data from previous FBI DITU surveillance. These specifics weren’t known before, so Gellman and the WaPo should get credit for a scoop.

But it isn’t the scoop they think it is, because the slides don’t confirm either the direct server access that Greenwald is crowing about or the presence of on-premises equipment at Google, Apple, and the rest. There’s simply nothing in the slides which states that equipment is on-site, and no alternative source for the claim has been offered. There’s no way I can see to interpret anything on the slides as putting that “interception unit” inside the premises, accessing data on demand without any company oversight.

A more likely scenario, particularly given the DITU’s heritage as data tappers, is that the equipment taps into Internet backbones – something that’s supported by one of the original slides, which referred to how much of the world’s comms data flowed through the US. Why bother with a slide like that if you’re tapping directly into Google’s servers?

The WaPo story isn’t proof of mass warrantless surveillance of US citizens, or (as it stands) of in-house equipment at Google, Apple, Microsoft and the rest. Unless the paper has unpublished evidence that explicitly shows this, the story adds little new controversial information to what we already know about the NSA and its activities.

Comments on this entry are closed.

  • windship

    Few people have your temerity to poke sticks into a rattlesnake’s cave.

  • Jeff R. Allen

    I agree 100% with your analysis. Mark Klein told us long ago that there are special rooms in backbone sites. Academic researchers use fiber splitters to harvest 10% of the signal in a fiber (http://learn.caida.org/cds/traffic0202/CoralReef/splitters.html) and then take high-bandwidth packet captures for research. So it’s likely those are in use by DITU to grab a copy of the backbone traffic.

    Having a giant firehose of packets is not useful. You need some software to reassemble it into flows, and then you need to separate flows that are completely useless for intel purposes (CSS, JS, images for UI) from flows with user-specific content. A logical way to structure that software would be a foundation layer that does the flow reassembly, and a plugin layer that would use specific code per provider to reduce the flows to useful content, and reformat it from the format used internally to the app to a format expected by the downstream system.
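
    Here’s a minimal Python sketch of the layered structure I mean – a foundation layer handing reassembled flows to per-provider plugins, with anything unrecognised parked for later. Every name and recogniser in it is a hypothetical illustration, not something taken from the slides:

    ```python
    # Hypothetical sketch: a foundation layer feeds reassembled flows to
    # provider-specific plugins; unmatched traffic is parked for later.
    from typing import Callable, Iterable

    # Registry of (recogniser, extractor) pairs, one per provider protocol.
    PLUGINS: list[tuple[Callable[[bytes], bool], Callable[[bytes], dict]]] = []

    def plugin(matches: Callable[[bytes], bool]):
        """Register an extractor to run on flows that `matches` recognises."""
        def register(extract: Callable[[bytes], dict]):
            PLUGINS.append((matches, extract))
            return extract
        return register

    @plugin(lambda stream: stream.startswith(b"GET /mail"))
    def extract_webmail(stream: bytes) -> dict:
        # Real provider-specific parsing would go here; this just tags the flow.
        return {"type": "webmail", "size": len(stream)}

    def triage(flows: Iterable[bytes]):
        """Reduce reassembled flows to useful content for downstream systems."""
        for stream in flows:
            for matches, extract in PLUGINS:
                if matches(stream):
                    yield extract(stream)
                    break
            else:
                # Unrecognised traffic goes in a holding pen until someone
                # writes a plugin that understands its protocol.
                yield {"type": "unrecognised", "size": len(stream)}
    ```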

    I suspect the dates on the original slide mentioning the 9 cloud providers are the release dates of the provider-specific plugins. For example, the annotation “Dropbox coming soon” probably means, “the plugin that can understand the Dropbox protocol and pull content out of it for entry into the downstream systems is coming soon”.

    -jeff

  • http://www.technovia.co.uk Ian Betteridge

    The DITU used to use a tool called Packeteer to reassemble packets into emails, web pages, etc – something that was acknowledged in a conference agenda which ended up on Cryptome. That was in 2010, and the document also noted that Packeteer was due for “significant upgrade” in the next couple of years.

    From what I can gather, Packeteer worked pretty much as you describe, with “filters” rather than plugins devoted to specific services. Anything not recognised was dumped into a “bucket”, which could be analysed later once new filters were developed.