California Secretary of State Debra Bowen's decision on the fate of her state's voting technology was announced just before midnight last Friday. The certifications of all three reviewed systems (Diebold, Hart, and Sequoia) were revoked and then re-issued subject to a range of conditions intended to make it harder to exploit some of the problems we found in the security review (see previous entry in this blog). The certification of a fourth system, ES&S, was revoked completely because the vendor failed to submit source code in time to be reviewed.
Whether the new conditions are a sufficient security stopgap and whether the problems with these systems can be properly fixed in the long term will be debated in the technical and elections communities in the weeks and months to come. How to build secure systems out of insecure components is a tough problem in general, but of huge practical importance here, since we can't exactly stop holding elections until the technology is ready.
But that's not what this post is about.
The traditional role of the vendors in cases like this, where critical products are found to be embarrassingly or fatally insecure, is to shoot the messengers. The reaction is familiar to most anyone who has ever found a security flaw and tried to do the right thing by reporting it rather than exploiting it: denials, excuses, and threats.
Occasionally, though, a company will try to look "responsible" by employing a different strategy, acknowledging -- and perhaps even actually correcting -- the underlying problems. This should be understood as nothing more than a transparent attempt to pander to customers by wastefully improving the security of otherwise perfectly good products. These naive organizations -- a tipoff is that they're often run by engineers rather than experienced business people -- do enormous damage by shirking their public relations duty to the community as a whole. Fortunately, this kind of unsophistication is rare enough not to have been much of an issue in the past, although in some circles, it is becoming worrisomely commonplace.
To help vendors focus on their obligations here, Jutta Degener and I present Security Problem Excuse Bingo. Usual bingo rules apply, with vendor press releases, news interviews, and legal notices used as source material. Cards can be generated and downloaded from www.crypto.com/bingo/pr
Because we follow all industry standard practices, you can rest assured that there are no bugs in this software. We take security very seriously.
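Generating a card along these lines is easy to sketch. The phrases below are stand-ins in the spirit of the game (the real phrase list lives on the bingo page), and make_card is a hypothetical helper for illustration, not the actual generator behind the site:

```python
import random

# Illustrative stand-in phrases; the real list is on the bingo page.
EXCUSES = [
    "We take security very seriously",
    "No customer data was affected",
    "The attack requires physical access",
    "This is only a theoretical vulnerability",
    "We follow all industry standard practices",
    "No system is 100% secure",
    "The researchers violated our license",
    "A patch is already in development",
    "Our customers' trust is our top priority",
]

def make_card(phrases, size=3, rng=None):
    """Build a size x size card of distinct excuse phrases,
    with the customary free space in the center square."""
    rng = rng or random.Random()
    picks = rng.sample(phrases, size * size)  # distinct phrases, no repeats
    card = [picks[i * size:(i + 1) * size] for i in range(size)]
    card[size // 2][size // 2] = "FREE"
    return card
```

A real card would of course be the traditional 5x5, which just means supplying at least 25 phrases and calling make_card(phrases, size=5).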
Readers of this blog may recall that for the last two months I've been part of a security review of the electronic voting systems used in California. Researchers from around the country (42 of us in all) worked in teams that examined source code and documents and performed "red team" penetration tests of election systems made by Diebold Election Systems, Hart InterCivic and Sequoia Voting Systems.
The red team reports were released by the California Secretary of State last week, and have been the subject of much attention in the nationwide press (and much criticism from the voting machine vendors in whose systems vulnerabilities were found). But there was more to the study than the red team exercises.
Today the three reports from the source code analysis teams were released. Because I was participating in that part of the study, I'd been unable to comment on the review before today. (Actually, there's still more to come. The documentation reviews haven't been released yet, for some reason.) Our reports can now be downloaded from http://www.sos.ca.gov/elections/elections_vsr.htm .
I led the group that reviewed the Sequoia system's code (that report is here [pdf link]).
The California study was, as far as I know, the most comprehensive independent security evaluation of electronic voting technologies ever conducted, covering products from three major vendors and investigating not only the voting machines themselves, but also the back-end systems that create ballots and tally votes. I believe our reports now constitute the most detailed published information available about how these systems work and the specific risks entailed by their use in elections.
My hat's off to principal investigators Matt Bishop (of UC Davis) and David Wagner (of UC Berkeley) for their skill and tireless effort in putting together and managing this complex, difficult -- and, I think, terribly important -- project.
By law, California Secretary of State Debra Bowen must decide by tomorrow (August 3rd, 2007) whether the reviewed systems will continue to be certified for use throughout the state in next year's elections, and, if so, whether to require special security procedures where they are deployed.
We found significant, deeply-rooted security weaknesses in all three vendors' software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being "unrealistic". It should now be clear that the red teams were successful not because they somehow "cheated," but rather because the built-in security mechanisms they were up against simply don't work properly. Reliably protecting these systems under operational conditions will likely be very hard.
The problems we found in the code were far more pervasive, and much more easily exploitable, than I had ever imagined they would be.
Eric Cronin found this cute little junior phone bugging kit on sale at Toys 'R Us. Recommended for ages 10-14 (presumably because children any older than that are more likely to be prosecuted under 18 USC 2511 and 18 USC 2512), the kit is basically a tunable low-power FM radio transmitter designed to connect to an analog telephone line. I especially like the way the instruction sheet [pdf] prominently warns of the dangers of eating solder, but only casually mentions the illegality of listening to other people's phone calls once you've got the thing built. (A non-trivial concern, especially considering the trouble that Ramsey Electronics got into with the U.S. Customs Service a few years back for selling similar kits.)
As strongly as I feel about the evils of illegal wiretapping, I must admit to having decidedly mixed feelings here. No, kids, don't tap your neighbor's phone. But unraveling the once-forbidden mysteries of telephone electronics has a way of pulling a young geek into a lifetime of technological exploration. It certainly did for me.
I was at a conference recently where everyone was asked to recall their first moment of thinking "I rule!" over some technology. It's a surprisingly revealing question; experience the exhilaration of hacker empowerment at a sufficiently impressionable age and you're hooked forever. A disproportionately large fraction of the answers seemed to involve telephony. (Mine was when I discovered you could dial a phone by flashing the hookswitch. I think I was too young to have anyone to call, though.)
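The hookswitch trick works because rotary dialing is nothing more than timed interruptions of the loop current: a digit n is sent as n quick breaks (ten for "0"), which is exactly what flashing the switch by hand produces. A minimal sketch, assuming roughly nominal North American pulse-dialing timings (about 10 pulses per second with a 60/40 break/make ratio, and a pause between digits); the function names are mine, purely illustrative:

```python
BREAK_MS, MAKE_MS, INTERDIGIT_MS = 60, 40, 700  # approximate nominal timings

def pulse_counts(number):
    """Map each digit to its pulse count: '1'->1 ... '9'->9, '0'->10.
    Non-digit characters (dashes, spaces) are ignored."""
    return [10 if ch == '0' else int(ch) for ch in number if ch.isdigit()]

def dial_schedule(number):
    """Return a list of (state, duration_ms) events describing how to
    'dial' by flashing the hookswitch: each digit is a burst of timed
    breaks, with a longer hold between digits."""
    events = []
    for i, n in enumerate(pulse_counts(number)):
        if i:
            events.append(("hold", INTERDIGIT_MS))  # inter-digit pause
        for _ in range(n):
            events.append(("break", BREAK_MS))  # loop open (on-hook)
            events.append(("make", MAKE_MS))    # loop closed (off-hook)
    return events
```

Flash faster or slower than the exchange's tolerance window and the digit is lost, which is why dialing this way by hand takes a bit of practice.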
So I suppose if the nerdy kid next door figures out how to hook one of these kits up to my phone, I won't be too upset. Just make sure not to eat the solder.
Vassilis Prevelakis and Diomidis Spinellis just published (in the July '07 IEEE Spectrum) a terrific technical analysis [link] of the recent Greek cellular eavesdropping scandal. In 2005, it was discovered that over a hundred Athens cellphones, mostly belonging to politicians (ranging from the mayor to the prime minister), were being illegally wiretapped. The culprit hasn't been found, but there's plenty of fodder for speculation, including mysteriously missing records, a suspicious suicide, and, as Prevelakis and Spinellis point out, an intriguing technological mystery.
This would all be interesting enough for its stranger-than-spy-fiction elements alone, but what makes the story essential reading here is how definitively it illustrates something that many of us in the security and privacy community have been warning about for years: so-called "lawful interception" interfaces built into network infrastructure become inviting targets for abuse. (See, for example, this point made in 1998 [pdf] and in 2006 [pdf]). And, as this case shows, those targets can be rich indeed.
For some reason, wiretapping interfaces don't seem to get much technical scrutiny, and we're starting to see how easy it can be to exploit them to nefarious ends. Vulnerabilities here can cut both ways, too, sometimes making it easier for real criminals to evade legal surveillance. A couple of years ago, Micah Sherr, Eric Cronin, Sandy Clark and I discovered basic weaknesses in the interception technologies used for decades to tap wireline telephones. Many of the vulnerabilities have found their way, in the name of "backward compatibility", into the latest eavesdropping standards, now implemented just about everywhere. Maybe even in Greek cellular networks.
Several people asked me for a list of references from my talk on "Safecracking, Secrecy and Science" Sunday morning in Sebastopol, and I promised a blog entry with pointers. (If you were there, thanks for coming; it was fun. For everyone else, I gave a talk on the relationship between progress and secrecy in security, as illustrated by the evolution of locks and safes over the last 200 years.)
Unfortunately, few of the historical references I cited are on the web (or even in print), but a bit of library work is repaid with striking parallels between the security arms races of the physical and virtual worlds.
The California Secretary of State recently announced plans for a "top-to-bottom" review of the various electronic voting systems certified for use in that state. David Wagner of U.C. Berkeley and Matt Bishop of U.C. Davis will be organizing source code and "red team" analysis efforts for the project, and they've recruited a large group of researchers to work with them, including me. This has the potential to be one of the most comprehensive independent evaluations of election technologies ever performed, and is especially significant given California's large size and the variety of systems used there. Trustworthy voting is perhaps the most elemental of democratic functions, but, as security specialists know all too well, complex systems on the scale required to conduct a modern election are virtually impossible to secure reliably without broad and deep scrutiny. California's review is a welcome and vitally important, if small, step forward.
I'll be leading one of the source code review teams, and we'll be getting to work by the time you read this. We have a lot to do in a very short time, with the final report due to be published by late summer or early fall. Until then, I won't be able to discuss the project or the details of how we're progressing, so please don't take it personally if I don't.
For some more details, the project FAQ is available here (PDF format).
UPDATE Aug 2, 2007: Our code review reports are now available. See this blog entry for details.
As interested as I am in the human-scale side of security, I suppose I should have strong opinions about last week's unscheduled evacuation drill in Boston. There's plenty to react to, after all: misguided marketing, hair-trigger over-reaction, shameless media pandering, oddball artists, and of course, disingenuous self-justification from all concerned. Yet for all the negligence and ineptitude on display, there doesn't seem to be very much to learn from these mistakes that we didn't already know. More troubling to me is the manipulative con game that triggered the whole spectacle in the first place. And, for a change, this has nothing to do with homeland security or fear mongering. But it strikes at the heart of commerce, culture and trust.
We often say that researchers break poor security systems and that feats of cryptanalysis involve cracking codes. As natural and dramatic as this shorthand may be, it propagates a subtle and insidious fallacy that confuses discovery with causation. Unsound security systems are "broken" from the start, whether we happen to know about it yet or not. But we talk (and write) as if the people who investigate and warn us of flaws are responsible for having put them there in the first place.
Words matter, and I think this sloppy language has had a small, but very real, corrosive effect on progress in the field. It implicitly taints even the most mainstream security research with a vaguely disreputable, suspect tinge. How to best disclose newly found vulnerabilities raises enough difficult questions by itself; let's try to avoid phrasing that inadvertently blames the messenger before we even learn the message.
I've long been an admirer of the James Randi Educational Foundation (JREF), tireless advocates for critical thinking, skepticism, and the scientific method. They offer a one million dollar prize to the first person who can provide convincing, testable proof of supernatural powers. The foundation recently set up a "remote viewing" challenge in which the purported psychic is asked to describe the contents of a special sealed box held at the JREF office in Fort Lauderdale, Florida.
Those who know me may be surprised to read this, but I'm pleased to announce that Jutta Degener and I have successfully visualized the contents of Randi's challenge box. We accomplished this from over a thousand miles away and entirely through mental concentration and the application of our unique talents (or, I should say, gifts), and without any physical access or inside information. We can now reveal to the world the item in the box: a small mirrored flat circular wheel or disk, such as a DVD or CD. Randi, if you're reading this, a money order or certified check will be fine.