Friday, November 03, 2006

Bruce Schneier

Bruce Schneier
From Wikipedia, the free encyclopedia

Bruce Schneier (born January 15, 1963) is an American cryptographer, computer security specialist, and writer. He is the author of several books on computer security and cryptography, and is the founder and chief technology officer of Counterpane Internet Security[1].
Contents

* 1 Education
* 2 Writing on cryptography
* 3 Miscellaneous
* 4 Publications
* 5 See also
* 6 External links
* 7 RSS Feed

Education

Originally from New York, Schneier currently lives in Minneapolis, Minnesota. Schneier has a Master's degree in computer science from American University and a Bachelor of Science degree in physics from the University of Rochester. Before Counterpane, he worked at the United States Department of Defense and then Bell Labs.

Writing on cryptography

Schneier's Applied Cryptography is a popular and widely cited reference work on cryptography. Schneier has designed or co-designed several cryptographic algorithms, including the Blowfish, Twofish and MacGuffin block ciphers, and the Yarrow and Fortuna cryptographically secure pseudo-random number generators. Solitaire, a cryptographic algorithm Schneier developed for people without access to a computer, appears as Pontifex in Neal Stephenson's novel Cryptonomicon.
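To make the primitives above a little more concrete, here is a minimal sketch of encrypting and decrypting a short message with Blowfish using the third-party PyCryptodome library; the key, message, padding scheme, and CBC mode are illustrative choices for this example, not anything prescribed by Schneier's books.

# Minimal sketch: Blowfish encryption via the third-party PyCryptodome
# library (pip install pycryptodome). Key, message, and mode are
# illustrative choices only.
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)            # Blowfish accepts keys from 4 to 56 bytes
cipher = Blowfish.new(key, Blowfish.MODE_CBC)

plaintext = b"Attack at dawn"
# Blowfish is a 64-bit block cipher, so pad the message to an 8-byte boundary.
pad_len = 8 - len(plaintext) % 8
padded = plaintext + bytes([pad_len]) * pad_len

ciphertext = cipher.iv + cipher.encrypt(padded)

# Decryption reverses the steps, using the IV stored at the front.
decipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=ciphertext[:8])
recovered = decipher.decrypt(ciphertext[8:])
recovered = recovered[:-recovered[-1]]  # strip the padding
assert recovered == plaintext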

However, Schneier now describes his early work as reflecting a naive, mathematical, ivory-tower view of what is inherently a people problem. In Applied Cryptography, he implies that correctly implemented algorithms and technology promise safety and secrecy, and that following security protocols ensures security regardless of the behavior of others. Schneier now argues that those incontrovertible mathematical guarantees miss the point. As he describes in Secrets and Lies, a business that uses RSA encryption to protect its data without considering how the cryptographic keys are handled by employees on "complex, unstable, buggy" computers has failed to properly protect the information. A real security solution that includes technology must also take into account the vagaries of hardware, software, networks, people, economics, and business. Schneier now points readers who want to build genuinely secure systems to his more recent book with Niels Ferguson, Practical Cryptography.

Schneier writes a freely available monthly Internet newsletter on computer and other security issues, Crypto-Gram, as well as a security blog.[2] He is frequently quoted in the press on computer and other security issues, pointing out flaws in security and cryptographic implementations ranging from biometrics to the post-September 11 airline security measures.

Miscellaneous

Bruce Schneier is name-dropped in the blockbuster book The Da Vinci Code. Page 199 of the American hardcover edition states:

"Da Vinci had been a cryptography pioneer, Sophie knew, although he was seldom given credit. Sophie's university instructors, while presenting computer encryption methods for securing data, praised modern cryptologists like Zimmermann and Schneier but failed to mention that it was Leonardo who had invented one of the first rudimentary forms of public key encryption centuries ago."

The website geekz.co.uk features Bruce Schneier in a parody of Chuck Norris Facts called Bruce Schneier Facts, with lines such as "Most people use passwords. Some people use passphrases. Bruce Schneier uses an epic passpoem, detailing the life and works of seven mythical Norse heroes."

Publications

* Schneier, Bruce. Applied Cryptography, John Wiley & Sons, 1994. ISBN 0-471-59756-2
* Schneier, Bruce. Protect Your Macintosh, Peachpit Press, 1994. ISBN 1-56609-101-2
* Schneier, Bruce. E-Mail Security, John Wiley & Sons, 1995. ISBN 0-471-05318-X
* Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996. ISBN 0-471-11709-9
* Schneier, Bruce; Kelsey, John; Whiting, Doug; Wagner, David; Hall, Chris; Ferguson, Niels. The Twofish Encryption Algorithm, John Wiley & Sons, 1999. ISBN 0-471-35381-7
* Schneier, Bruce; Banisar, David. The Electronic Privacy Papers, John Wiley & Sons, 1997. ISBN 0-471-12297-1
* Schneier, Bruce. Secrets and Lies, John Wiley & Sons, 2000. ISBN 0-471-25311-1
* Schneier, Bruce. Beyond Fear: Thinking Sensibly about Security in an Uncertain World, Copernicus Books, 2003. ISBN 0-387-02620-7
* Ferguson, Niels; Schneier, Bruce. Practical Cryptography, John Wiley & Sons, 2003. ISBN 0-471-22357-3

See also

* Attack tree

External links
Wikiquote has a collection of quotations related to Bruce Schneier.

* Personal website
* Schneier biography on Counterpane.com
* Schneier's ProCon.org Bio
* Counterpane.com
* Crypto-Gram newsletter
* Essays
* Schneier 'Facts'
* Encryption Expert Teaches Security — profile of Schneier

RSS Feed

* Schneier on Security




Essays and Op Eds
Minneapolis Star Tribune Op Eds
Focus on Terrorists, Not Tactics (Aug 13 2006)
We're Giving Up Privacy and Getting Little in Return (May 31 2006)
Your Vanishing Privacy (Mar 5 2006)
Unchecked presidential power (Dec 20 2005)
The Erosion of Freedom (Nov 21 2005)
Toward a Truly Safer Nation (Sep 11 2005)
How Long Can the Country Stay Scared? (Aug 27 2004)
Unchecked Police And Military Power Is A Security Threat (Jun 24 2004)
A National ID Card Wouldn't Make Us Safer (Apr 1 2004)
Better Get Used to Routine Loss of Personal Privacy (Dec 21 2003)
Wired News Columns
Lessons From the Facebook Riots (Sep 21 2006)
Quickest Patch Ever (Sep 7 2006)
Refuse to be Terrorized (Aug 24 2006)
Drugs: Sports' Prisoner's Dilemma (Aug 10 2006)
How Bot Those Nets? (Jul 27 2006)
Google's Click-Fraud Crackdown (Jul 13 2006)
It's the Economy, Stupid (Jun 29 2006)
The Scariest Terror Threat of All (Jun 15 2006)
Make Vendors Liable for Bugs (Jun 1 2006)
The Eternal Value of Privacy (May 18 2006)
Everyone Wants to 'Own' Your PC (May 4 2006)
The Anti-ID-Theft Bill That Isn't (Apr 20 2006)
Why VOIP Needs Crypto (Apr 6 2006)
Let Computers Screen Air Baggage (Mar 23 2006)
Why Data Mining Won't Stop Terror (Mar 9 2006)
U.S. Ports Raise Proxy Problem (Feb 23 2006)
Fighting Fat-Wallet Syndrome (Feb 9 2006)
Big Risks Come in Small Packages (Jan 26 2006)
Anonymity Won't Kill the Internet (Jan 12 2006)
Hold the Photons! (Dec 15 2005)
Airline Security a Waste of Cash (Dec 1 2005)
Real Story of the Rogue Rootkit (Nov 17 2005)
Fatal Flaw Weakens RFID Passports (Nov 3 2005)
Sue Companies, Not Coders (Oct 20 2005)
A Real Remedy for Phishers (Oct 6 2005)
A Sci-Fi Future Awaits the Court (Sep 22 2005)
Terrorists Don't Do Movie Plots (Sep 8 2005)
America's Flimsy Fortress (Wired Magazine, Mar 2004)
Walls Don't Work in Cyberspace (Wired Magazine, Jun 2003)
Other Essays and Op Eds
2006-09-16 The ID Chip You Don't Want in Your Passport Washington Post
2005-12-20 Uncle Sam is Listening Salon
2005-06-23 Make Businesses Pay in Credit Card Scam New York Daily News
2005-05 Risks of Third-Party Data Communications of the ACM
2004-12-09 Who says safe computing must remain a pipe dream? CNet News.com
2004-11-24 Profile: "hinky" Boston Globe
2004-11-24 Why is it so hard to run an honest election? OpenDemocracy
2004-10-31 Getting Out the Vote San Francisco Chronicle
2004-10-26 The Security of Checks and Balances Sydney Morning Herald
2004-10-22 Outside View: Security at the World Series UPI
2004-10-04 Does Big Brother want to watch? International Herald Tribune
2004-10-04 Bigger Brother The Baltimore Sun
2004-10 Do Terror Alerts Work? The Rake
2004-10 The Non-Security of Secrecy Communications of the ACM
2004-09-20 Academics locked out by tight visa controls San Jose Mercury News
2004-09-19 City Cops' Plate Scanner is a License to Snoop New Haven Register
2004-08-26 Olympic Security Sydney Morning Herald
2004-08-25 U.S. 'No-Fly' List Curtails Liberties Newsday
2004-08-24 An Easy Path for Terrorists Boston Globe
2004-08-02 BOB on Board Sydney Morning Herald
2004-07-30 Security, Houston-Style Sydney Morning Herald
2004-06-16 CLEARly Muddying the Fight Against Terror News.com
2004-05-10 Curb electronic surveillance abuses Newsday
2004-05-04 We Are All Security Customers CNET News.com
2004-04-27 Terrorist Threats and Political Gains Counterpunch
2004-04 Hacking the Business Climate for Network Security IEEE Computer
2004-02-03 IDs and the illusion of security San Francisco Chronicle
2004-01-30 Slouching Towards Big Brother CNET News.com
2004-01-14 Fingerprinting Visitors Won't Offer Security Newsday
2004-01-09 Homeland Insecurity Salon.com
2003-12-19 Are You Sophisticated Enough to Recognize an Internet Scam? The Mercury News
2003-12-16 Blaster and the Great Blackout Salon.com
2003-11-11 Festung Amerika Financial Times Deutschland
2003-11 Liability Changes Everything Heise Security
2003-10-21 Terror Profiles by Computers Are Ineffective Newsday
2003-10-14 Fixing intelligence UPI
2003-08 Voting and Technology: Who Gets to Count Your Vote? Communications of the ACM
2003-03-07 American Cyberspace: Can We Fend Off Attackers? San Jose Mercury News
2003-03-02 Secrecy and Security SF Chronicle
Other Writings
Computer Security Articles
Academic Papers

Schneier.com is a personal website.


Internet Shield: Secrecy and security

Bruce Schneier
SF Chronicle, March 2, 2003

THERE'S considerable confusion between the concepts of secrecy and security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

Last month, the SQL Slammer worm ravaged the Internet, infecting in some 15 minutes about 13 root servers that direct information traffic, and thus disrupting services as diverse as the 911 network in Seattle and many of Bank of America's 13,000 ATMs. The worm took advantage of a software vulnerability in a Microsoft database management program, one that allowed a malicious piece of software to take control of the computer.

This vulnerability had been made public six months previously, when a respected British computer researcher published the code on the Web.

During the same month, an AT&T researcher published a paper that revealed a vulnerability in master-key systems for door locks, the kind that allow you to have a key to your office and the janitor to have a single key that opens every office. The gap in security is this: the system allows someone with only one office key, and access to the lock, to create a master key for himself. This vulnerability had been known in the locksmithing community for more than a century, but was never revealed to the general public.
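The mechanics of that attack are simple enough to sketch. The toy simulation below (pin counts, depths, and keys are invented; it illustrates the idea described in the paragraph, not the researcher's actual procedure or a locksmithing guide) shows how trying alternative cuts one pin position at a time, using only an ordinary change key and access to the lock, recovers the master key's cuts:

# Toy simulation of the master-key "rights amplification" attack described
# above. All parameters are invented for illustration; real locks and the
# published attack differ in detail.
import random

POSITIONS = 5          # pin stacks in the lock
DEPTHS = range(10)     # possible cut depths per position

master_key = [random.choice(DEPTHS) for _ in range(POSITIONS)]
change_key = [random.choice(DEPTHS) for _ in range(POSITIONS)]  # the attacker's own office key

def lock_opens(key):
    # Each pin stack accepts either the change-key depth or the master depth.
    return all(k in (c, m) for k, c, m in zip(key, change_key, master_key))

def recover_master(change_key):
    recovered = []
    for pos in range(POSITIONS):
        for depth in DEPTHS:
            trial = list(change_key)
            trial[pos] = depth
            # A depth that opens the lock but isn't our own cut must be the master cut.
            if depth != change_key[pos] and lock_opens(trial):
                recovered.append(depth)
                break
        else:
            # No alternative depth worked: master and change key agree at this position.
            recovered.append(change_key[pos])
    return recovered

assert lock_opens(recover_master(change_key))
print("recovered master cuts:", recover_master(change_key))

The point of the sketch is the small amount of work involved: at most a handful of trial keys per pin position, far less than trying every possible master key.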

Many argue that secrecy is good for security; that both computer and lock vulnerabilities are better kept secret. Making public the weak point only helps the bad guys, the argument goes. Now that more burglars know about the lock vulnerability, maybe we're more at risk. If the hacker who wrote the Sapphire worm (a.k.a. SQL Slammer) program didn't have access to the public information about the software's vulnerability, maybe he wouldn't have written the worm. The problem is, according to this position, with the information about the weak spot, not the weak spot itself.

This position in the debate ignores the fact that public scrutiny is the only reliable way to improve security -- be it of the nation's roads, bridges and ports or of our critically important computer networks. Several master-key designs are secure against this kind of key-copying attack, but they're not widely used because customers don't understand the risks they are taking by installing the old system, and because locksmiths continue to knowingly sell a flawed security system. It is no different in the computer world.

At the same time the SQL software vulnerability was publicized, Microsoft made a software patch available to close the hole. But before software bugs were routinely published, software companies simply denied their existence and wouldn't bother fixing them, believing in the security of secrecy. And because customers didn't know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping these software bugs secret, we'll have vulnerabilities known to a few in the security community and to much of the hacker underground.

That's the other fallacy with the locksmiths' argument. Techniques such as this are passed down as folklore in the criminal community as well as the locksmithing community. In 1994, a thief made his own master key to a series of hotel safe-deposit boxes and stole $1.5 million in jewels. The same thing happens in the computer world. By the time most computer vulnerabilities are announced in the press, they're folklore in the hacker underground. Attackers don't abide by secrecy agreements.
HOMELAND SECURITY AT ISSUE

This clash of the secrecy versus openness camps is happening in many areas of security. U.S. Attorney General John Ashcroft is trying to keep details of many anti-terrorism countermeasures secret. Secret arrests are now permitted, and the criteria for those secret arrests are themselves secret. The standards for the Department of Homeland Security's color-coded terrorism threat levels are secret. Profiling information used to flag certain airline passengers is secret. Information about the infrastructure of plants and government buildings is secret. This keeps terrorists in the dark, but at the same time, the citizenry -- to whom the government is ultimately accountable -- is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can't improve because there's no public debate or public education. The nature of the attacks people learn to mount, and the defenses to counter them, will become folklore, never spoken about in the open but whispered from security engineer to security engineer and from terrorist to terrorist. And maybe in 100 years someone will publish the details of a method that some security engineers knew about, that terrorists and criminals had been exploiting for much of that time, but that the general public was blissfully unaware of.

Secrecy prevents people from assessing their own risk. In the master-key case, even if there weren't more secure designs available, many customers might have decided not to use master keying if they knew how easy it was for an intruder to make his own master key. Ignorance is bliss, but bliss is not the same as security. It's better to have as much information as possible to make informed security decisions.

I'd rather have the information I need to exert market pressure on vendors to improve security. I don't want to live in a world where locksmiths can sell me a master-key system that they know doesn't work or where the government can implement vulnerable security measures without accountability.


AMERICAN CYBERSPACE: CAN WE FEND OFF ATTACKERS?
FORGET IT: BLAND PR DOCUMENT HAS ONLY RECOMMENDATIONS

Bruce Schneier
San Jose Mercury News, March 7, 2003

AT 60 pages, the White House's National Strategy to Secure Cyberspace is an interesting read, but it won't help to secure cyberspace. It's a product of consensus, so it doesn't make any of the hard choices necessary to radically increase cyberspace security. Consensus doesn't work in security design, and invariably results in bad decisions. It's the compromises that are harmful, because the more parties you have in the discussion, the more interests there are that conflict with security. Consensus doesn't work because the one crucial party in these negotiations -- the attackers -- aren't sitting around the negotiating table with everyone else. They don't negotiate, and they won't abide by any security agreements.

Drafts of the plan included strong words about wireless vulnerability, which were removed because the wireless industry didn't want to look bad. Drafts included a suggestion that Internet Service Providers supply all their users with personal firewalls; that was taken out because ISPs didn't want to look bad for not already doing something like that. There's nothing in the document about liability regulation, because the software industry doesn't want any of that.

And so on. This is what you get with a PR document. You get lots of comments and input from all sorts of special interests. You get nebulous ideas that sound good but don't offend anyone. And you end up with a bland document that does little because it demands little.

Much of the document is filled with recommendations and suggestions. For some reason, the Bush administration continues to believe that it can increase cybersecurity simply by asking nicely. This government has tried this sort of thing again and again, and it never works. Businesses respond to business pressures: liabilities, market forces, regulations. They don't respond to cajoling.

Security is a commons. Like air and water and the radio spectrum, any individual's use of it affects us all. The way to prevent people from abusing a commons is to regulate it. Companies didn't stop dumping toxic wastes into rivers because the government asked them nicely. Companies stopped because the government made it illegal to do so.

If the U.S. government wants to improve cyberspace security, it must take action. I like the parts of the document that talk about the government's own network security, and ways to improve that. I like the parts that talk about awareness and training. I hope there's actual funding behind those recommendations, and they're not just idle talk.

But we need more. The government needs to use its considerable purchasing power to fund secure products. And the government needs to pass a law making companies liable for insecurities. If you align market forces with increased security, you'll be surprised how quickly things get more secure. Leave the feel-good PR activities to the various industry trade organizations; that's what they're supposed to do.

This national strategy document isn't law, and it doesn't contain any mandates to government agencies. If the government wants a more secure cyberspace, it's going to have to forget about consensus. It's going to have to offend people. It's going to have to lead.


Voting and Technology: Who Gets to Count Your Vote?
Paperless voting machines threaten the integrity of the democratic process by what they don't do.

David L. Dill, Bruce Schneier, and Barbara Simons
Communications of the ACM, Vol. 46, No. 8
August 2003

Voting problems associated with the 2000 U.S. Presidential election have spurred calls for more accurate voting systems. Unfortunately, many of the new computerized voting systems purchased today have major security and reliability problems.

The ideal voting technology would have five attributes: anonymity, scalability, speed, audit, and accuracy (direct mapping from intent to counted vote). In the rush to improve the first four, accuracy is being sacrificed. Accuracy is not how well the ballots are counted; it's how well the process maps voter intent into counted votes and the final tally. People misread ballots, punch cards don't tabulate properly, machines break down, ballots get lost. Mistakes, even fraud, happen.

When the election is close, we demand a recount. It involves going back to the original votes and counting them a second time. Presumably more care is taken, and the recount is more accurate.

But recounts will become history if paperless Direct Recording Electronic (DRE) voting machines -- typically touch-screen machines -- become prevalent. Approximately one in five Americans votes on such machines, as do citizens in several countries.[1] In the U.S. the "Help America Vote Act" will subsidize more DREs.

DREs have some attractive features. The human interface can be greatly improved. People with disabilities can vote unassisted. Ballots can be changed at the last minute and quickly personalized for local elections.

However, all of the internal mechanics of voting are hidden from the voter. A computer can easily display one set of votes on the screen for confirmation by the voter while recording entirely different votes in electronic memory, either because of a programming error or a malicious design. Almost all the DREs currently certified by state and local agencies have an "audit gap" between the voter's finger and the electronic or magnetic medium on which the votes are recorded. Because the ballot must remain secret, there's no way to check whether the votes were accurately recorded once the voter leaves the booth; neither the recorded vote nor the process of recording it can be directly observed. Consequently, the integrity of elections rests on blind faith in the vendors, their employees, inspection laboratories, and people who may have access -- legitimate or illegitimate -- to the machine software.

With traditional voting machines, election officers are present to ensure integrity. But with DREs, election officers are powerless to prevent accidental or deliberate errors in the recording of votes. If there is tampering, it is likely present in the DRE's code, to which election officers have no access. In fact, DRE code is usually protected by code secrecy agreements, so that no one but the manufacturer has access to it. Even when DRE-based elections have been contested in court, the complainants have not been allowed to review the code.

Anyone who doubts the result of an election is now obliged to prove those results are inaccurate. But paper ballots -- the main evidence providing that proof -- are being eliminated. Vendors and election officials are free to claim that elections have gone "smoothly," when there is, in fact, no evidence the votes counted had anything to do with the intent of the voters.

This is an unacceptable way to run a democracy. The voters and candidates are entitled to strong, affirmative proof that elections are accurate and honest. Paper-based elections with good election administration practices show the losers in an election that they lost fair and square. DREs do not.

Many voters and election officials are under the impression that computerized voting machines are infallible. DRE manufacturers insist that care goes into the design and programming of the machines. They and some election officials reassure us the machines meet rigorous standards set by the Federal Election Commission; that the designs are reviewed and the machines thoroughly tested by independent testing labs; and that further review and testing occurs at the state and local levels.

The problem with these arguments is that it's impossible without some very special hardware (and maybe even with it) to make computers sufficiently reliable and secure for paperless electronic voting. The manufacturers attempt to hide this fact by keeping the designs of their machines a closely held secret, and then challenging critics to find flaws in those designs. Ironically, reverse engineering the code used for voting machines to check for bugs or voting fraud is likely to be a violation of the Digital Millennium Copyright Act.[2]

Even if adequate reliability and security were achievable, current practices are grossly inadequate. There is no indication that the major vendors or testing laboratories have computer security professionals to design and evaluate voting equipment. Manufacturers make basic computer security errors, such as failing to use cryptography appropriately, or designing their own home-brew cryptographic algorithms. Moreover, regulations and tests of greater rigor than those used for DREs routinely miss accidental flaws in software for other applications, and have virtually no chance of discovering tampering with software.

Problems are routine.[3] For example, a March 2002 runoff election in Wellington, FL, was decided by five votes, but 78 ballots had no recorded vote. Elections Supervisor Theresa LePore claimed those 78 people chose not to vote for the only office on the ballot! In 2000, a Sequoia DRE machine was taken out of service in an election in Middlesex County, NJ, after 65 votes had been cast. When the results were checked after the election, it was discovered that none of the 65 votes were recorded for the Democratic and Republican candidates for one office, even though 27 votes each were recorded for their running mates. A representative of Sequoia insisted that no votes were lost, and that voters had simply failed to cast votes for the two top candidates. Since there was no paper trail, it was impossible to resolve the question either way.

While accidental design flaws are likely to cause election disasters in the immediate future, deliberate tampering is an even more serious concern. In older voting systems, election fraud typically is a labor-intensive process of altering or forging individual ballots. With large numbers of DREs in use, a small group or even a single individual at a voting machine manufacturer could alter software later installed on tens or hundreds of thousands of machines. If modified software switched a small percentage of votes between political parties, the tamperer could change the outcome of close races around the country.

There is nothing fundamental to DRE machines that requires an audit gap. The DRE machine simply needs to record the vote on paper when the voter has finished voting.[4] The voter reviews the paper ballot to verify it is marked in accordance with his or her intentions, after which the paper ballot is deposited into a ballot box. Discrepancies can be brought to the attention of an election official. The official vote count would be based on the DRE-produced paper ballots, with the DRE machine providing a preliminary total to be checked against the paper ballots in a recount. There is one such machine that is already certified in many states, and several of the major DRE vendors have agreed to provide voter-verifiable printers in contracts already in place.
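A rough sketch of that voter-verifiable process, with invented class and function names, might look like the following; it simply pairs the electronic tally with a paper record the voter confirms, and makes the paper records the basis of the official count and any recount.

# Toy sketch of a voter-verified paper audit trail. Names and structure are
# invented for illustration; this is not any vendor's actual design.
from collections import Counter

class PaperTrailDRE:
    def __init__(self):
        self.electronic_tally = Counter()
        self.paper_ballots = []          # stands in for the physical ballot box

    def cast_vote(self, choice_shown_to_voter, voter_confirms):
        # Record the vote electronically and print a paper ballot. Only if the
        # voter confirms the paper matches their intent does it enter the box.
        self.electronic_tally[choice_shown_to_voter] += 1
        if voter_confirms(choice_shown_to_voter):
            self.paper_ballots.append(choice_shown_to_voter)
        else:
            raise RuntimeError("Discrepancy: alert an election official")

def audit(machine):
    # The official result comes from the paper; the electronic tally is only
    # a preliminary total to be checked against it.
    paper_tally = Counter(machine.paper_ballots)
    return paper_tally, paper_tally == machine.electronic_tally

dre = PaperTrailDRE()
for choice in ["Alice", "Bob", "Alice"]:
    dre.cast_vote(choice, voter_confirms=lambda shown: True)

paper_tally, tallies_match = audit(dre)
print(paper_tally, "matches electronic tally:", tallies_match)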

Amazingly, the elimination of paper ballots is considered a major advantage by some, since the lack of paper simplifies the election process. The accompanying security risks are ignored, or even denied, by people who don't understand the underlying technology or simply want to believe the reassurances they receive from the vendors.

Maybe we will be extremely lucky, and every vote cast on DRE machines in the future will be accurately recorded. But there will always be surprising election results, and people who question the results. Even if voting machines are accurate, it's important that voters trust the machines and know they are accurate. Democracy should not depend on blind faith.

The anonymity requirement of elections makes voting machines difficult to design and implement. You can't rely on a conventional audit, as we do with large-value financial computer systems.[5] Election machines must be treated like safety- and mission-critical systems: fault tolerant, redundant, carefully analyzed code. And they need to close the audit gap with paper ballots.

Over 900 computing professionals, including many of the top experts in computer security and electronic voting, have endorsed the "Resolution on Electronic Voting" petition,[6] urging that all DRE voting machines include a voter-verifiable audit trail.

Fortunately, some policymakers understand the security issues relating to voting. Rep. Rush Holt recently introduced the "Voter Confidence and Increased Accessibility Act of 2003" (H.R. 2239),[7] which calls for voter-verification and audit capacity in e-voting machines.

In 1871 William Marcy ("Boss") Tweed said: "As long as I get to count the votes, what are you going to do about it?" Paperless DRE machines ensure that only the company that built them gets to count the votes, and that no one else can ever recount them.

David L. Dill (dill@cs.stanford.edu) is a professor of computer science and, by courtesy, electrical engineering at Stanford University, Stanford, CA.
Bruce Schneier (schneier@counterpane.com) is CTO of Counterpane Internet Security, Cupertino, CA.
Barbara Simons (simons@acm.org) is a former ACM president and current co-chair of ACM's U.S. Public Policy Committee.

1. For example, the U.K. recently conducted several local elections on the Internet. Internet voting raises additional security issues that space limitations preclude discussing in greater detail in this column.

2. See www.acm.org/usacm/Issues/DMCA.htm for information about ACM and USACM activities and statements relating to the DMCA.

3. See the Q/A Web page at verify.stanford.edu/evote.html and the wealth of information at www.notablesoftware.com/evote.html.

4. www.counterpane.com/crypto-gram-0012.html#1 is an early essay with this idea.

5. See www.counterpane.com/crypto-gram-0102.html#10 for more information.

6. See verify.stanford.edu/EVOTE/statement.html to read and endorse the petition.

7. See www.acm.org/usacm/PDF/HR2239_Holt_Bill.pdf


Outside View: Fixing intelligence

Bruce Schneier
UPI, October 14, 2003

A joint congressional intelligence inquiry has concluded that 9/11 could have been prevented if our nation's intelligence agencies shared information better and coordinated more effectively. This is both a trite platitude and a profound prescription.

Intelligence is easy to understand after the fact. With the benefit of hindsight, it's easy to draw lines from people in flight school here, to secret meetings in foreign countries there, over to interesting tips from informants, and maybe to INS records. Connecting the dots is child's play.

Doing it before the fact is another matter entirely and, before 9/11, it wasn't so easy. There's a world of difference between intelligence data and intelligence information. Some data did, before the fact, point to 9/11, but it was buried in an enormous amount of irrelevant data leading to blind alleys, false conclusions, and innocent people.

Most of the time intelligence gets lucky and connects the dots correctly. Sometimes it doesn't. To carefully select bits of intelligence after the fact and ask why they weren't understood before the fact misses the point.

The 9/11 report was absolutely correct in asserting that better coordination could have prevented the terrorist attack.

Security decisions need to be made as close to the problem as possible. This has many implications: protecting potential terrorist targets should be done by people who understand the targets; bombing decisions should be made by the generals on the ground in the war zone, not by Washington; and investigations should be approved by the FBI office closest to the investigation.

This mode of operation has more opportunities for abuse, so competent oversight is vital. It is also more robust, and the best way to make security work.

Security analysis also needs to happen as far away from the sources as possible.

Intelligence involves finding relevant information amongst enormous reams of irrelevant data, and then organizing all those disparate pieces of information into coherent predictions about what will happen next.

It requires smart people who can see connections, and who have access to information from many disparate government agencies. It can't be the sole purview of anyone, not the FBI, CIA, NSA, or the new Department of Homeland Security. The whole picture is larger than any single agency, and each only has access to a small slice of it.

The implication of these two truisms is that security will work better if it is centrally coordinated but implemented in a distributed manner. We're more secure if every government agency implements its own security, within the context of its department, with different strengths and weaknesses. Our security is stronger if multiple departments overlap each other.

It is therefore a good thing that the institutions best funded and equipped to defend our nation against terrorism aren't part of this new department: the FBI, the CIA, and the military's intelligence organizations.

All these organizations have to communicate with each other. One organization needs to be a single point for coordination and analysis of terrorist threats and responses. One organization needs to see the big picture, and make decisions and set policies based on it.

The administration has countered the report in part by saying that the Department of Homeland Security has the job of centralizing counter-terrorism. But because the DHS centralizes rather than coordinates, the security benefits will be minimal. Centralizing security responsibilities has the downside of making our security more brittle, by instituting a commonality of approach and a uniformity of thinking.

The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails.

It might seem redundant and inefficient, but it's more robust, reliable, and secure.

The biological metaphor translates well to the terrorism discussion. Terrorism is hard to defend against because it subverts our institutions and turns our own freedoms and capabilities against us. It invades our society, festers and grows, and then attacks.

It's hard to fight, in the same way that cancer is hard to fight. If we are to best defend ourselves against terrorism, security needs to be pervasive. It can't be in just one department; it has to be everywhere. Every federal department needs to do its part to secure our nation. Fighting terrorism requires defense in depth. This means overlapping responsibilities to reduce single points of failures, both for the actual defensive measures and for the intelligence functions.

Our nation may actually be less secure if the Department of Homeland Security eventually takes over the responsibilities of existing agencies. The last thing we want is for the Department of Energy, the Department of Commerce, and the Department of State to say: "Security; that's the responsibility of the DHS."

Security is the responsibility of everyone in government. We won't defeat terrorism by finding a single thing that works all the time. We'll defeat terrorism when every little thing works in its own way, and together provides an immune system for our society. Unless the DHS distributes security responsibility even as it centralizes coordination, it won't improve our nation's security.


Terror Profiles by Computers Are Ineffective

Bruce Schneier
Newsday, October 21, 2003

In September 2002, JetBlue Airways secretly turned over data about 1.5 million of its passengers to a company called Torch Concepts, under contract with the Department of Defense.

Torch Concepts merged this data with Social Security numbers, home addresses, income levels and automobile records that it purchased from another company, Acxiom Corp. All of this was done to test an automated profiling system designed to give each passenger a terrorist threat ranking.

Many JetBlue customers feel angry and betrayed that their data was shared without their consent. JetBlue's privacy policy clearly states that "the financial and personal information collected on this site is not shared with any third parties." Several lawsuits against JetBlue are pending. CAPPS II is the new system designed to profile air passengers -- a system that would eventually single out certain passengers for extra screening and identify others who would not be permitted to fly at all. After this incident, Congress delayed the entire CAPPS II air passenger profiling system pending further review.

There's a common belief -- generally mistaken -- that if we only had enough data we could pick terrorists out of crowds, and CAPPS II is just one example. In the months after 9/11, the FBI tried to collect information on people who took scuba-diving lessons. The Patriot Act gives the FBI the ability to collect information on what books people borrow from libraries.

The Total Information Awareness program was intended to be the mother of all "data-mining" programs. Renamed "Terrorism Information Awareness" after the American public learned that their personal data would be sucked into a giant computer system and searched for "patterns of terrorism," this program's funding was killed by Congress last month.

Security is always a trade-off: How much security am I getting, and what am I giving up to get it? These "data-mining" programs are not very effective. Identifiable future terrorists are rare, and innocents are common. No matter what patterns you're looking for, far more innocents will match the patterns than terrorists because innocents vastly outnumber terrorists. So many that you might as well not bother. And that assumes that you even can predict terrorist patterns. Sure, it's easy to create a pattern after the fact; if something identical to the 9/11 plot ever happens again, you can be sure we're ready. But tomorrow's attacks? That's much harder.
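The base-rate arithmetic behind that claim is easy to work through. In the sketch below, every number (population, number of terrorists, accuracy rates) is invented purely for illustration, yet even with generously optimistic accuracy the overwhelming majority of alarms point at innocent people:

# Back-of-the-envelope base-rate calculation. All numbers are invented for
# illustration; they are not claims about any real screening system.
population = 300_000_000       # people screened
terrorists = 1_000             # actual terrorists among them
true_positive_rate = 0.99      # chance the system flags a real terrorist
false_positive_rate = 0.001    # chance it flags an innocent person (0.1%)

flagged_terrorists = terrorists * true_positive_rate
flagged_innocents = (population - terrorists) * false_positive_rate

precision = flagged_terrorists / (flagged_terrorists + flagged_innocents)
print(f"people flagged: {flagged_terrorists + flagged_innocents:,.0f}")
print(f"fraction of flagged people who are terrorists: {precision:.2%}")
# Even with these generous accuracy figures, roughly 300,000 innocents are
# flagged and only about 0.3 percent of alarms point at actual terrorists.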

Even those who say that terrorists are likely to be Arab males have it wrong. Richard Reid, the shoe bomber, was British. Jose Padilla, arrested in Chicago in 2002 as a "dirty bomb" suspect, was a Hispanic-American. The Unabomber had once taught mathematics at Berkeley. Terrorists can be male or female, European, Asian, African or Middle Eastern. Even grandmothers can be tricked into carrying bombs on board. Terrorists are a surprisingly diverse group of people.

There's also the other side of the trade-off: These kinds of "data mining" and profiling systems are expensive. They are expensive financially, and they're expensive in terms of privacy and liberty. The United States is a great country because people have the freedom to live their lives free from the gaze of government. We as a people believe profiling is discriminatory and wrong.

I have an idea. Timothy McVeigh and John Allen Muhammad -- one of the accused D.C. snipers -- both served in the military. I think we need to put all U.S. ex-servicemen on a special watch list, because they obviously could be terrorists. I think we should flag them for "special screening" when they fly and think twice before allowing them to take scuba-diving lessons.

What do you think of my idea? I hope you're appalled, incensed and angry that I question the honesty and integrity of our military personnel based on the actions of just two people. That's exactly the right reaction. It's no different whether I suspect people based on military service, race, ethnicity, reading choices, scuba-diving ability or whether they're flying one way or round trip. It's profiling. It doesn't catch the few bad guys, and it causes undue hardship on the many good guys who are erroneously and repeatedly singled out. Security is always a trade-off, and in this case of "data mining" the trade-off is a lousy one.

Liability changes everything

Bruce Schneier
Heise Security, November 2003

Computer security is not a problem that technology can solve. Security solutions have a technological component, but security is fundamentally a people problem. Businesses approach security as they do any other business uncertainty: in terms of risk management. Organizations optimize their activities to minimize their cost-risk product, and understanding those motivations is key to understanding computer security today.

It makes no sense to spend more on security than the original cost of the problem, just as it makes no sense to pay liability compensation for damage done when spending money on security is cheaper. Businesses look for financial sweet spots -- adequate security for a reasonable cost, for example -- and if a security solution doesn't make business sense, a company won't do it.
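One way to make that sweet-spot reasoning concrete is the standard expected-loss comparison. The figures in the sketch below are invented for illustration; the point is the shape of the calculation, not the numbers:

# Toy risk-management calculation of the kind the paragraph describes.
# All figures are invented for illustration.
def expected_annual_loss(incident_probability, loss_per_incident):
    return incident_probability * loss_per_incident

baseline = expected_annual_loss(incident_probability=0.20, loss_per_incident=500_000)
with_control = expected_annual_loss(incident_probability=0.05, loss_per_incident=500_000)
control_cost = 60_000   # annual cost of the security measure

savings = baseline - with_control
print(f"expected loss without control: ${baseline:,.0f}")
print(f"expected loss with control:    ${with_control:,.0f}")
print(f"risk reduction ${savings:,.0f} vs. control cost ${control_cost:,.0f}")
# The control makes business sense only if the reduction in expected loss
# exceeds what the control costs: here $75,000 > $60,000, so it does.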

This way of thinking about security explains some otherwise puzzling security realities. For example, historically most organizations haven't spent a lot of money on network security. Why? Because the costs have been significant: time, expense, reduced functionality, frustrated end-users. (Increasing security regularly frustrates end-users.) On the other hand, the costs of ignoring security and getting hacked have been, in the scheme of things, relatively small.

We in the computer security field like to think they're enormous, but they haven't really affected a company's bottom line. From the CEO's perspective, the risks include the possibility of bad press and angry customers and network downtime -- none of which is permanent. The result: a smart organization does what everyone else does, and no more. Things are changing; slowly, but they're changing. The risks are increasing, and as a result spending is increasing.

This same kind of economic reasoning explains why software vendors spend so little effort securing their own products. We in computer security think the vendors are all a bunch of idiots, but they're behaving completely rationally from their own point of view. The costs of adding good security to software products are essentially the same ones incurred in increasing network security -- large expenses, reduced functionality, delayed product releases, annoyed users -- while the costs of ignoring security are minor: occasional bad press, and maybe some users switching to competitors' products. Any smart software vendor will talk big about security, but do as little as possible, because that's what makes the most economic sense.

As scientists, we are awash in security technologies. We know how to build much more secure operating systems. We know how to build much more secure access control systems. We know how to build much more secure networks. To be sure, there are still technological problems, and research continues. But in the real world, network security is a business problem. The only way to fix it is to concentrate on the business motivations. We need to change the economic costs and benefits of security. We need to make the organizations in the best position to fix the problem want to fix the problem.

Liability enforcement is essential. Remember that I said the costs of bad security are not borne by the software vendors that produce the bad security. In economics this is known as an externality: a cost of a decision that is borne by people other than those making the decision.

Today there are no real consequences for having bad security, or having low-quality software of any kind. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles, and invest in secure software development processes, they must be liable for security vulnerabilities in their products. If we expect CEOs to spend significant resources on their own network security -- especially the security of their customers -- they must be liable for mishandling their customers' data. Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem. And putting pressure on his balance sheet is the best way to do that.

This could happen in several different ways. Legislatures could impose liability on the computer industry by forcing software manufacturers to live with the same product liability laws that affect other industries. If software manufacturers produced a defective product, they would be liable for damages. Even without this, courts could start imposing liability-like penalties on software manufacturers and users.

This is starting to happen. A U.S. judge forced the Department of Interior to take its network offline, because it couldn't guarantee the safety of American Indian data it was entrusted with. Several cases have resulted in penalties against companies that used customer data in violation of their privacy promises, or collected that data using misrepresentation or fraud. And judges have issued restraining orders against companies with insecure networks that are used as conduits for attacks against others. Alternatively, the industry could get together and define its own liability standards.

Clearly this isn't all or nothing. There are many parties involved in a typical software attack. There's the company who sold the software with the vulnerability in the first place. There's the person who wrote the attack tool. There's the attacker himself, who used the tool to break into a network. There's the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn't fall on the shoulders of the software vendor, just as one hundred percent shouldn't fall on the attacker or the network owner. But today one hundred percent of the cost falls on the network owner, and that just has to stop.

However it happens, liability changes everything. Currently, there is no reason for a software company not to offer more features, more complexity, more versions. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they're entrusted with.
Festung Amerika

Bruce Schneier
Financial Times Deutschland, November 11, 2003

In 2004 the United States will spend many billions of dollars on security. Unfortunately, most of that money is wasted; this buildup does not buy real protection.
By Bruce Schneier

September 11, 2001, left a trauma behind. Since the terrorist attacks, Americans have needed to feel more secure. National Guard soldiers were stationed at airports, stricter ID checks were introduced at many public and commercial buildings, and police watch over important bridges and tunnels.

The Justice Department and the FBI have been given sweeping new counterterrorism powers, the Department of Homeland Security was created, and massive government surveillance systems such as CAPPS II and TIA were funded. Yet despite the good intentions, despite the money, and despite the inconvenience to citizens, these new security measures are largely ineffective.

Terrorist attacks are very rare. They are so rare that the odds of falling victim to one in an industrialized country are close to zero. Most attacks harm only a few people; September 11 was an anomaly. 3,029 people were killed by the terrorists. In the same year, 156,005 people in the United States died of lung cancer, 41,967 in traffic accidents, and 3,433 of malnutrition.

One problem with securing America against terrorism is the scope of the threat. Terrorists can attack airplanes, buildings, sports stadiums, reservoirs, power plants, and chemical storage facilities, and that is what makes defense so difficult. Whoever sets out to defend these targets has to defend everything. We want to prevent terrorist attacks everywhere, so countermeasures that simply shift the threat somewhere else are of limited value. If we spend a lot of money defending our airplanes, and bombs then go off in packed sports stadiums, have we really gained anything?

Even defending against a single, specific threat is very difficult. Security, like a chain, is only as strong as its weakest link, and the passenger screening system is only as secure as the least secure airport in the country. Once you have cleared screening at one airport, you pass through your connecting flights unchecked.

Most of the security measures we deal with every day try to identify the bad guys by treating everyone as a suspect. Technology is supposed to help: data mining to filter out terrorists, face recognition to identify them at airports, or artificial intelligence to detect a terrorist plot in time. The problem is the same one faced by airport screeners who waste their time searching innocent people: false alarms. Terrorists and terrorist plots are so rare that almost every alarm is a false alarm. Billions of dollars are wasted chasing false leads, with no assurance whatsoever that a real plot will be uncovered. When an airport screener confiscates a knife from an innocent traveler, that is a failure of the security system.

The only way to fight terrorism effectively is old-fashioned police and intelligence work: uncovering terrorist plots before they are carried out, and then going after the terrorists themselves. Every arrest of an al-Qaida leader weakens the organization. Every country that refuses to harbor its members makes it harder for them to operate. Of course we need some protective perimeter around airports and public buildings. But cutting off funding and communications channels and arresting the leaders has hurt al-Qaida far more than all the guards, barriers, and ID checks put together.

Security is always a trade-off. A society can have as much security as it wants, as long as it is willing to give up money, time, convenience, and freedoms. Unfortunately, most of the security measures imposed on us are bad trade-offs: they demand enormous sacrifices and provide little security in return. In 2004 we will see even more of this "security theater," which at best gives people one thing: the feeling of security.

None of this means that airport security is completely useless, or that it could not be done better. But America has to make smarter security trade-offs. The money would be better spent on investigative work to track down terrorists around the world than on intrusive measures at home. Officials at airports and borders need more discretion, so they can follow their instincts instead of blindly deferring to technology. Building a Fortress America will not make any individual safer.




Blaster and the Great Blackout

Bruce Schneier
Salon.com, December 16, 2003

Ten years ago our critical infrastructure was run by a series of specialized systems, both computerized and manual, on dedicated networks. Today, many of these computers have been replaced with standard mass-market computers connected via the Internet. This shift brings with it all sorts of cost savings, but it also brings additional risks. The same worms and viruses, the same vulnerabilities, the same Trojans and hacking tools that have so successfully ravaged the Internet can now affect our critical infrastructure.

For example, in late January 2003, the Slammer worm knocked out 911 emergency telephone service in Bellevue, Wash. The 911 data-entry terminals weren't directly connected to the Internet, but they used the same servers that the rest of the city used, and when the servers started to fail (because the connected parts were hit by Slammer), the failure affected the 911 terminals.

What's interesting about this story is that it was unpredicted. The Slammer attacked systems basically at random, and happened to knock over 911 service. This isn't an attack that could have been planned in advance. It was an accidental failure, and one that happened to cascade into a major failure for the citizens of Bellevue.

I have read article after article about the risks of cyberterrorism. They're all hype; there's no real risk of cyberterrorism. Worms and viruses have caused all sorts of network disruptions, but it's all been by accident. In January 2003, the SQL Slammer worm disrupted 13,000 ATMs on Bank of America's network. But before it happened, you couldn't have found a security expert who knew that those systems were exposed to that vulnerability. We simply don't understand the interactions well enough to predict which kinds of attacks could cause catastrophic results.

More recently, in August 2003, the Nachi worm disabled Diebold ATMs at two financial institutions (Diebold declined to name which ones). These machines were running the Windows operating system, and were connected to the Internet. ATM machines that weren't running Windows were unaffected.

As mass-market computers and networks permeate more and more of our critical infrastructure, that infrastructure becomes vulnerable not only to attacks but also to sloppy software and sloppy operations. And these vulnerabilities are not necessarily the obvious ones. The computers that directly control the power grid (for example) are well protected. It's the peripheral systems that are less protected and more likely to be vulnerable. And a direct attack is unlikely to cause our infrastructure to fail, because the connections are too complex and too obscure. It's only by accident -- a worm affecting systems at just the wrong time, allowing a minor failure to become a major one -- that these massive failures occur.

Might this be what happened during the great blackout of this past summer?

The "Interim Report: Causes of the August 14th Blackout in the United States and Canada," published in November and based on detailed research by a panel of government and industry officials, blames the blackout on an unlucky series of failures that allowed a small problem to cascade into an enormous failure.

The Blaster worm affected more than a million computers running Windows during the days after Aug. 11. The computers controlling power generation and delivery were insulated from the Internet, and they were unaffected by Blaster. But critical to the blackout were a series of alarm failures at FirstEnergy, a power company in Ohio. The report explains that the computer hosting the control room's "alarm and logging software" failed, along with the backup computer and several remote-control consoles. Because of these failures, FirstEnergy operators did not realize what was happening and were unable to contain the problem in time.

Simultaneously, another status computer, this one at the Midwest Independent Transmission System Operator, a regional agency that oversees power distribution, failed. According to the report, a technician tried to repair it and forgot to turn it back on when he went to lunch.

To be fair, the report does not blame Blaster for the blackout. I'm less convinced. The failure of computer after computer within the FirstEnergy network certainly could be a coincidence, but it looks to me like a malicious worm.

No matter what caused the computer failures, the story is illustrative of what is to come. The computer systems we use on our desktops are not reliable enough for critical applications. Neither is the Internet. The more we rely on them in our critical infrastructure, the more vulnerable we become. The more our systems become interconnected, the more vulnerable we become.

It's not the power generation computers, it's the alarm computers. It's not the police and medical systems, it's the 911 computers that dispatch them. It's the computer you never thought about, that -- surprise -- is critical and critically vulnerable.




Are you sophisticated enough to recognize an Internet scam?

Bruce Schneier
The Mercury News, December 19, 2003

Recently I have been receiving e-mails from PayPal. At least, they look like they're from PayPal. They send me to a Web site that looks like it's from PayPal. And it asks for my password, just like PayPal. The problem is that it's not from PayPal, and if I do what the Web site says, some criminal is going to siphon money out of my bank account.
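One simple, partial defense against exactly this trick is to check whether the domain a link displays matches the domain it actually points to. The sketch below is a toy heuristic with invented function names and an invented example URL, not a real anti-phishing product:

# Toy check for the mismatch described above: the visible text of a link
# names one domain while the href points somewhere else. Real phishing
# detection is much harder than this.
from urllib.parse import urlparse

def link_looks_suspicious(display_text, href):
    # Flag links whose visible text names a domain the href doesn't match.
    shown = display_text.lower().strip().rstrip("/")
    actual_host = urlparse(href).hostname or ""
    # Only apply the check when the display text itself looks like a domain.
    if "." not in shown or " " in shown:
        return False
    shown_host = urlparse("http://" + shown).hostname or shown
    return not (actual_host == shown_host or actual_host.endswith("." + shown_host))

# The e-mail claims to link to PayPal but actually points elsewhere.
print(link_looks_suspicious("www.paypal.com", "http://paypal.example-billing.net/login"))  # True
print(link_looks_suspicious("www.paypal.com", "https://www.paypal.com/signin"))            # False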

Welcome to the third wave of network attacks, what I have named "semantic attacks." They are much more serious and harder to defend against because they attack the user and not the computers. And they're the future of fraud on the Internet.

The first wave of attacks against the Internet was physical: against the computers, wires and electronics. The Internet defended itself through distributed protocols, which reduced the dependency on any one computer, and through redundancy. These are largely problems with a known solution.

The second wave is syntactic: attacks against the operating logic of computers and networks. Modern worms propagate and can infect millions of computers worldwide within hours. Traditional computer security has focused on this second wave, which aims to exploit programming errors in software products. It would be a lie to say that security experts know how to protect computers absolutely against these kinds of attacks, but we're getting better. Better software quality, more pro-active patching capabilities and better network monitoring will give us some measure of security in the coming years.

But this new wave of semantic attacks targets the way people assign meaning to content.

Many worms arrive as e-mail attachments. A user receives an e-mail message from someone he knows. It has an enticing subject line and a plausible message body. Of course the recipient is going to click on the attachment. And that's exactly what causes the infection.

People tend to believe what they read. How often have you needed the answer to a question and searched for it on the Web? How often have you taken the time to corroborate the accuracy of that information, by examining the credentials of the site, finding alternate opinions or other means?

People have long been taking advantage of others' naivete. Many old scams have been adapted to e-mail and the Web. Unscrupulous stockbrokers use the Internet to fuel their "pump and dump" strategies. In 2000, a fake press release that circulated on the Web caused the stock of the Emulex Corp. to drop 61 percent temporarily. More recently, we've seen newspaper archives on the Web changed and fake Web sites purporting to be something they're not.

Against computers, semantic attacks become even more serious, simply because the computer cannot demand all the corroborating data that people instinctively rely on. Despite what you see in movies, real-world software is incredibly primitive when it comes to what is known as simple common sense. Ever increasing numbers of sensors and data collection devices are on the Internet. What happens when hackers realize that these devices can be fed bad data?

People have long been the victims of bad statistics, urban legends and hoaxes. Any communications medium can be used to exploit credulity and stupidity, and people have been doing that for eons. The difference is the scale. A single forged e-mail, a single fake press release, can affect millions.

Current computer security technologies are largely irrelevant against semantic attacks. These attacks aim directly at the human-computer interface, the most insecure portion on the Internet. Defending against them will take more than technology -- it will take education, experience and skepticism. Too many Internet users don't have enough of those three qualities.




Homeland Insecurity
The fact that U.S. intelligence agencies can't tell terrorists from children on passenger jets does little to inspire confidence.

Bruce Schneier
Salon.com, January 9, 2004

Security can fail in two different ways. It can fail to work in the presence of an attack: a burglar alarm that a burglar successfully defeats. But security can also fail to work correctly when there's no attack: a burglar alarm that goes off even if no one is there.

Citing "very credible" intelligence regarding terrorism threats, U.S. intelligence canceled 15 international flights in the last couple of weeks, diverted at least one more flight to Canada, and had F-16s shadow others as they approached their final destinations.

These seem to have been a bunch of false alarms. Sometimes it was a case of mistaken identity. For example, one of the "terrorists" on an Air France flight was a child whose name matched that of a terrorist leader; another was a Welsh insurance agent. Sometimes it was a case of assuming too much; British Airways Flight 223 was detained once and canceled twice, on three consecutive days, presumably because that flight number turned up on some communications intercept somewhere. In response to the public embarrassment from these false alarms, the government is slowly leaking information about a particular person who didn't show up for his flight, and two non-Arab-looking men who may or may not have had bombs. But these seem more like efforts to save face than the very credible evidence that the government promised.

Security involves a tradeoff: a balance of the costs and benefits. It's clear that canceling all flights, now and forever, would eliminate the threat from air travel. But no one would ever suggest that, because the tradeoff is just too onerous. Canceling a few flights here and there seems like a good tradeoff because the results of missing a real threat are so severe. But repeatedly sounding false alarms entails security problems, too. False alarms are expensive -- in money, time, and the privacy of the passengers affected -- and they demonstrate that the "credible threats" aren't credible at all. Like the boy who cried wolf, everyone from airport security officials to foreign governments will stop taking these warnings seriously. We're relying on our allies to secure international flights; demonstrating that we can't tell terrorists from children isn't the way to inspire confidence.

Intelligence is a difficult problem. You start with a mass of raw data: people in flight schools, secret meetings in foreign countries, tips from foreign governments, immigration records, apartment rental agreements, phone logs and credit card statements. Understanding these data, drawing the right conclusions -- that's intelligence. It's easy in hindsight but very difficult before the fact, since most data is irrelevant and most leads are false. The crucial bits of data are just random clues among thousands of other random clues, almost all of which turn out to be false or misleading or irrelevant.

In the months and years after 9/11, the U.S. government has tried to address the problem by demanding (and largely receiving) more data. Over the New Year's weekend, for example, federal agents collected the names of 260,000 people staying in Las Vegas hotels. This broad vacuuming of data is expensive, and completely misses the point. The problem isn't obtaining data, it's deciding which data is worth analyzing and then interpreting it. So much data is collected that intelligence organizations can't possibly analyze it all. Deciding what to look at can be an impossible task, so substantial amounts of good intelligence go unread and unanalyzed. Data collection is easy; analysis is difficult.

Many think the analysis problem can be solved by throwing more computers at it, but that's not the case. Computers are dumb. They can find obvious patterns, but they won't be able to find the next terrorist attack. Al-Qaida is smart, and excels in doing the unexpected. Osama bin Laden and his troops are going to make mistakes, but to a computer, their "suspicious" behavior isn't going to be any different than the suspicious behavior of millions of honest people. Finding the real plot among all the false leads requires human intelligence.

More raw data can even be counterproductive. With more data, you have the same number of "needles" and a much larger "haystack" to find them in. In the 1980s and before, East German police collected an enormous amount of data on 4 million East Germans, roughly a quarter of their population. Yet even they did not foresee the peaceful overthrow of the Communist government; they invested too heavily in data collection while neglecting data interpretation.

In early December, the European Union agreed to turn over detailed passenger data to the U.S. In the few weeks that the U.S. has had this data, we've seen 15 flight cancellations. We've seen investigative resources chasing false alarms generated by computer, instead of looking for real connections that may uncover the next terrorist plot. We may have more data, but we arguably have a worse security system.

This isn't to say that intelligence is useless. It's probably the best weapon we have in our attempts to thwart global terrorism, but it's a weapon we need to learn to wield properly. The 9/11 terrorists left a huge trail of clues as they planned their attack, and so, presumably, are the terrorist plotters of today. Our failure to prevent 9/11 was a failure of analysis, a human failure. And if we fail to prevent the next terrorist attack, it will also be a human failure.

Relying on computers to sift through enormous amounts of data, and investigators to act on every alarm the computers sound, is a bad security tradeoff. It's going to cause an endless stream of false alarms, cost millions of dollars, unduly scare people, trample on individual rights and inure people to the real threats. Good intelligence involves finding meaning among enormous reams of irrelevant data, then organizing all those disparate pieces of information into coherent predictions about what will happen next. It requires smart people who can see connections, and access to information from many different branches of government. It can't be seen by the various individual pieces of bureaucracy; the whole picture is larger than any of them.

These airline disruptions highlight a serious problem with U.S. intelligence. There's too much bureaucracy and not enough coordination. There's too much reliance on computers and automation. There's plenty of raw material, but not enough thoughtfulness. These problems are not new; they're historically what's been wrong with U.S. intelligence. These airline disruptions make us look like a bunch of incompetents who cry wolf at the slightest provocation.






Fingerprinting Visitors Won't Offer Security

Bruce Schneier
Newsday, January 14, 2004

Imagine that you're going on vacation to some exotic country.

You get your visa, plan your trip and take a long flight. How would you feel if, at the border, you were photographed and fingerprinted? How would you feel if your biometrics stayed in that country's computers for years? If your fingerprints could be sent back to your home country? Would you feel welcomed by that country, or would you feel like a criminal?

This month the U.S. government began giving such treatment to an expected 23 million visitors to the United States. The US-VISIT program is designed to capture biometric information at our borders. Only citizens of 27 countries who don't need a visa to enter the United States, mostly Europeans, are exempt. Currently all 115 international airports and 14 seaports are covered, and over the next three years this program will be expanded to cover at least 50 land crossings and also to screen foreigners exiting the country.

But the program figures to be ineffective, overly costly and to make the United States look bad on the world stage.

The program cost $380 million in 2003 and will cost at least the same in 2004. But that's just the start; the Department of Homeland Security is requesting bid proposals for a project that could eventually cost up to $10 billion.

According to the Bush administration, the measures are designed to combat terrorism. As a security expert, I find it hard to see how. The 9/11 terrorists would not have been deterred by this system; many of them entered the country legally on valid passports and visas. We have a 5,500-mile-long border with Canada and another 2,000-mile-long border with Mexico currently uncovered by the program. An estimated 200,000 to 300,000 people enter the country illegally each year from Mexico. Two million to 3 million people enter the country legally each year and overstay their visas. Capturing the biometric information of everyone entering the country doesn't make us safer.

And even if we could completely seal our borders, fingerprinting everyone still wouldn't keep terrorists out. It's not like we can identify terrorists in advance. The border guards can't say "this fingerprint is safe; it's not in our database" because there is no fingerprint database for suspected terrorists.

Even more dangerous is the precedent this program sets. Today the program affects only foreign visitors with visas. The next logical step is to fingerprint all visitors to the United States and then everybody, including U.S. citizens.

Retaliation is another worry. Brazil is now fingerprinting Americans who visit that country, and other countries are expected to follow suit. All over the world, totalitarian governments will use our fingerprinting regime to justify fingerprinting Americans who enter their countries. This means that your prints are going to end up on file with every tin-pot dictator from Sierra Leone to Uzbekistan. And Secretary of Homeland Security Tom Ridge has already pledged to share security information with other countries.

Security is a trade-off. When deciding whether to implement a security measure, we must balance the costs against the benefits. Large-scale fingerprinting is something that doesn't add much to our security against terrorism and costs an enormous amount of money that could be better spent elsewhere. Spending the money on compiling, sharing and enforcing the terrorist watch list would be a far better security investment. As a security consumer, I'm getting swindled.

America's security comes from our freedoms. For more than two centuries, we have maintained a delicate balance between freedom and the opportunity for crime. We deliberately put laws in place that hamper police investigations because we know we are more secure because of them. We know that laws regulating wiretapping, search and seizure, and interrogation make us all safer, even if they make it harder to convict criminals.

The U.S. system of government has a basic unwritten rule: The government should be granted only limited power, and for limited purposes, because of the certainty that government power will be abused. We've already seen Patriot Act powers, granted to the government to combat terrorism, directed against common crimes. Allowing the government to create the infrastructure to collect biometric information on everyone it can is not a power we should grant the government lightly. It's something we would have expected in former East Germany, Iraq or the Soviet Union. In all of these countries, greater government control meant less security for citizens, and the results in the United States will be no different. It's bad civic hygiene to build an infrastructure that can be used to facilitate a police state.



Slouching Towards Big Brother

Bruce Schneier
CNET News.com, January 30, 2004

Last week the Supreme Court let stand the Justice Department's right to secretly arrest noncitizen residents.

Combined with the government's power to designate foreign prisoners of war as "enemy combatants" in order to ignore international treaties regulating their incarceration, and their power to indefinitely detain U.S. citizens without charge or access to an attorney, the United States is looking more and more like a police state.

Since the Sept. 11 attacks, the Justice Department has asked for, and largely received, additional powers that allow it to perform an unprecedented amount of surveillance of American citizens and visitors. The USA Patriot Act, passed in haste after Sept. 11, started the ball rolling.

In December, a provision slipped into an appropriations bill allowed the FBI to obtain personal financial information from banks, insurance companies, travel agencies, real estate agents, stockbrokers, the U.S. Postal Service, jewelry stores, casinos and car dealerships without a warrant--because they're all construed as financial institutions. Starting this year, the U.S. government is photographing and fingerprinting foreign visitors coming into this country from all but 27 other countries.

The litany continues. CAPPS-II, the government's vast computerized system for probing the backgrounds of all passengers boarding flights, will be fielded this year. Total Information Awareness, a program that would link diverse databases and allow the FBI to collate information on all Americans, was halted at the federal level after a huge public outcry, but is continuing at a state level with federal funding. Over New Year's, the FBI collected the names of 260,000 people staying at Las Vegas hotels. More and more, at every level of society, the "Big Brother is watching you" style of total surveillance is slowly becoming a reality.

Security is a trade-off. It makes no sense to ask whether a particular security system is effective or not--otherwise you'd all be wearing bulletproof vests and staying immured in your home. The proper question to ask is whether the trade-off is worth it. Is the level of security gained worth the costs, whether in money, in liberties, in privacy or in convenience?

This can be a personal decision, and one greatly influenced by the situation. For most of us, bulletproof vests are not worth the cost and inconvenience. For some of us, home burglar alarm systems are. And most of us lock our doors at night.

Terrorism is no different. We need to weigh each security countermeasure. Is the additional security against the risks worth the costs? Are there smarter things we can be spending our money on? How does the risk of terrorism compare with the risks in other aspects of our lives: automobile accidents, domestic violence, industrial pollution, and so on? Are there costs that are just too expensive for us to bear?

Unfortunately, it's rare to hear this level of informed debate. Few people remind us how minor the terrorist threat really is. Rarely do we discuss how little identification has to do with security, and how broad surveillance of everyone doesn't really prevent terrorism. And where's the debate about what's more important: the freedoms and liberties that have made America great or some temporary security?

Instead, the Department of Justice, fueled by a strong police mentality inside the administration, is directing our nation's political changes in response to Sept. 11. And it's making trade-offs from its own subjective perspective--trade-offs that benefit it even if they are to the detriment of others.

From the point of view of the Justice Department, judicial oversight is unnecessary and unwarranted; doing away with it is a better trade-off. They think collecting information on everyone is a good idea because they are less concerned with the loss of privacy and liberty. Expensive surveillance and data-mining systems are a good trade-off for them because more budget means even more power. And from their perspective, secrecy is better than openness; if the police are absolutely trustworthy, then there's nothing to be gained from a public process.

When you put the police in charge of security, the trade-offs they make result in measures that resemble a police state.

This is wrong. The trade-offs are larger than the FBI or the Justice Department. Just as a company would never put a single department in charge of its own budget, someone above the narrow perspective of the Justice Department needs to be balancing the country's needs and making decisions about these security trade-offs.

The laws limiting police power were put in place to protect us from police abuse. Privacy protects us from threats by government, corporations and individuals. And the greatest strength of our nation comes from our freedoms, our openness, our liberties and our system of justice. Ben Franklin once said: "Those who would give up essential liberty for temporary safety deserve neither liberty nor safety." Since the events of Sept. 11 Americans have squandered an enormous amount of liberty, and we didn't even get any temporary safety in return.






IDs and the illusion of security

Bruce Schneier
San Francisco Chronicle, February 3, 2004

In recent years there has been an increased use of identification checks as a security measure. Airlines always demand photo IDs, and hotels increasingly do so. They're often required for admittance into government buildings, and sometimes even hospitals. Everywhere, it seems, someone is checking IDs. The ostensible reason is that ID checks make us all safer, but that's just not so. In most cases, identification has very little to do with security.

Let's debunk the myths:

First, verifying that someone has a photo ID is a completely useless security measure. All the Sept. 11 terrorists had photo IDs. Some of the IDs were real. Some were fake. Some were real IDs in fake names, bought from a crooked DMV employee in Virginia for $1,000 each. Fake driver's licenses for all 50 states, good enough to fool anyone who isn't paying close attention, are available on the Internet. Or if you don't want to buy IDs online, just ask any teenager where to get a fake ID.

Harder-to-forge IDs only help marginally, because the problem is not making sure the ID is valid. This is the second myth of ID checks: that identification combined with profiling can be an indicator of intention.

Our goal is to somehow identify the few bad guys scattered in the sea of good guys. In an ideal world, what we would want is some kind of ID that denotes intention. We'd want all terrorists to carry a card that says "evildoer" and everyone else to carry a card that says "honest person who won't try to hijack or blow up anything." Then, security would be easy. We would just look at people's IDs and, if they were evildoers, we wouldn't let them on the airplane or into the building.

This is, of course, ridiculous, so we rely on identity as a substitute. In theory, if we know who you are, and if we have enough information about you, we can somehow predict whether you're likely to be an evildoer. This is the basis behind CAPPS-2, the government's new airline passenger profiling system. People are divided into two categories based on various criteria: the traveler's address, credit history and police and tax records; flight origin and destination; whether the ticket was purchased by cash, check or credit card; whether the ticket is one way or round trip; whether the traveler is alone or with a larger party; how frequently the traveler flies; and how long before departure the ticket was purchased.

Profiling has two very dangerous failure modes. The first one is obvious. Profiling's intent is to divide people into two categories: people who may be evildoers and need to be screened more carefully, and people who are less likely to be evildoers and can be screened less carefully.

But any such system will create a third, and very dangerous, category: evildoers who don't fit the profile. Oklahoma City bomber Timothy McVeigh, Washington-area sniper John Allen Muhammad and many of the Sept. 11 terrorists had no previous links to terrorism. The Unabomber taught mathematics at UC Berkeley. The Palestinians have demonstrated that they can recruit suicide bombers with no previous record of anti-Israeli activities. Even the Sept. 11 hijackers went out of their way to establish a normal-looking profile: frequent-flier numbers, a history of first-class travel and so on. Evildoers can also engage in identity theft, and steal the identity -- and profile -- of an honest person. Profiling can result in less security by giving certain people an easy way to skirt security.

There's another, even more dangerous, failure mode for these systems: honest people who fit the evildoer profile. Because evildoers are so rare, almost everyone who fits the profile will turn out to be a false alarm. This not only wastes investigative resources that might be better spent elsewhere, but it causes grave harm to those innocents who fit the profile. Whether it's something as simple as "driving while black" or "flying while Arab," or something more complicated such as taking scuba lessons or protesting the Bush administration, profiling harms society because it causes us all to live in fear...not from the evildoers, but from the police.
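
To see why almost every hit is a false alarm, run the arithmetic with some invented numbers. Nothing below comes from any real screening system; the figures are hypothetical, chosen only to illustrate the base-rate effect, and sketched here in a few lines of Python:

    # Hypothetical figures: how rare attackers swamp even an accurate profile.
    travelers = 700_000_000        # assumed passenger screenings per year
    evildoers = 10                 # assumed number of actual attackers in that pool
    hit_rate = 0.99                # assumed chance the profile flags a real attacker
    false_positive_rate = 0.001    # assumed chance it wrongly flags an honest traveler

    true_alarms = evildoers * hit_rate
    false_alarms = (travelers - evildoers) * false_positive_rate
    fraction_real = true_alarms / (true_alarms + false_alarms)

    print(f"true alarms per year:  {true_alarms:,.0f}")
    print(f"false alarms per year: {false_alarms:,.0f}")
    print(f"chance any given alarm is real: {fraction_real:.5%}")

With these made-up figures, roughly 700,000 innocent people are flagged for every ten real attackers, and the chance that any particular alarm points at an actual evildoer is about one in seventy thousand.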

Security is a trade-off; we have to weigh the security we get against the price we pay for it. Better trade-offs are to spend money on intelligence and analysis, investigation and making ourselves less of a pariah on the world stage. And to spend money on the other, nonterrorist security issues that affect far more Americans every year.

Identification and profiling don't provide very good security, and they do so at an enormous cost. Dropping ID checks completely, and engaging in random screening where appropriate, is a far better security trade-off. People who know they're being watched, and that their innocent actions can result in police scrutiny, are people who become scared to step out of line. They know that they can be put on a "bad list" at any time. People living in this kind of society are not free, despite any illusionary security they receive. It's contrary to all the ideals that went into founding the United States.




Hacking the Business Climate for Network Security

Bruce Schneier
IEEE Computer
April 2004

Computer security is at a crossroads. It's failing, regularly, and with increasingly serious results. CEOs are starting to notice. When they finally get fed up, they'll demand improvements. (Either that or they'll abandon the Internet, but I don't believe that is a likely possibility.) And they'll get the improvements they demand; corporate America can be an enormously powerful motivator once it gets going.

For this reason, I believe computer security will improve eventually. I don't think the improvements will come in the short term, and I think that they will be met with considerable resistance. This is because the engine of improvement will be fueled by corporate boardrooms and not computer-science laboratories, and as such won't have anything to do with technology. Real security improvement will only come through liability: holding software manufacturers accountable for the security and, more generally, the quality of their products. This is an enormous change, and one the computer industry is not going to accept without a fight.

But I'm getting ahead of myself here. Let me explain why I think the concept of liability can solve the problem.

It's clear to me that computer security is not a problem that technology can solve. Security solutions have a technological component, but security is fundamentally a people problem. Businesses approach security as they do any other business uncertainty: in terms of risk management. Organizations optimize their activities to minimize their cost-risk product, and understanding those motivations is key to understanding computer security today. It makes no sense to spend more on security than the original cost of the problem, just as it makes no sense to pay liability compensation for damage done when spending money on security is cheaper. Businesses look for financial sweet spots--adequate security for a reasonable cost, for example--and if a security solution doesn't make business sense, a company won't do it.
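
To make that concrete, here is a back-of-the-envelope version of the calculation a business implicitly runs, sketched in Python. Every figure is hypothetical; the point is the comparison, not the numbers:

    # Hypothetical risk-management arithmetic for a single security measure.
    annual_breach_probability = 0.05   # assumed chance of a damaging incident per year
    loss_per_breach = 200_000          # assumed cost of one incident, in dollars
    countermeasure_cost = 15_000       # assumed annual cost of the proposed measure
    risk_reduction = 0.60              # assumed fraction of expected loss it eliminates

    expected_annual_loss = annual_breach_probability * loss_per_breach
    loss_avoided = expected_annual_loss * risk_reduction

    print(f"expected annual loss without the measure: ${expected_annual_loss:,.0f}")
    print(f"expected annual loss avoided:             ${loss_avoided:,.0f}")
    print("worth buying?", loss_avoided > countermeasure_cost)

In this made-up example the measure removes $6,000 of expected loss but costs $15,000 a year, so a rational business skips it, however elegant the technology.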

This way of thinking about security explains some otherwise puzzling security realities. For example, historically most organizations haven't spent a lot of money on network security. Why? Because the costs have been significant: time, expense, reduced functionality, frustrated end-users. (Increasing security regularly frustrates end-users.) On the other hand, the costs of ignoring security and getting hacked have been, in the scheme of things, relatively small. We in the computer security field like to think they're enormous, but they haven't really affected a company's bottom line. From the CEO's perspective, the risks include the possibility of bad press and angry customers and network downtime--none of which is permanent. And there's some regulatory pressure, from audits or lawsuits, which adds additional costs. The result: a smart organization does what everyone else does, and no more. Things are changing; slowly, but they're changing. The risks are increasing, and as a result spending is increasing.

This same kind of economic reasoning explains why software vendors spend so little effort securing their own products. We in computer security think the vendors are all a bunch of idiots, but they're behaving completely rationally from their own point of view. The costs of adding good security to software products are essentially the same ones incurred in increasing network security--large expenses, reduced functionality, delayed product releases, annoyed users--while the costs of ignoring security are minor: occasional bad press, and maybe some users switching to competitors' products. The financial losses to industry worldwide due to vulnerabilities in the Microsoft Windows operating system are not borne by Microsoft, so Microsoft doesn't have the financial incentive to fix them. If the CEO of a major software company told his board of directors that he would be cutting the company's earnings per share by a third because he was going to really--no more pretending--take security seriously, the board would fire him. If I were on the board, I would fire him. Any smart software vendor will talk big about security, but do as little as possible, because that's what makes the most economic sense.

Think about why firewalls succeeded in the marketplace. It's not because they're effective; most firewalls are configured so poorly that they're barely effective, and there are many more effective security products that have never seen widespread deployment (such as e-mail encryption). Firewalls are ubiquitous because corporate auditors started demanding them. This changed the cost equation for businesses. The cost of adding a firewall was expense and user annoyance, but the cost of not having a firewall was failing an audit. And even worse, a company without a firewall could be accused of not following industry best practices in a lawsuit. The result: everyone has firewalls all over their network, whether they do any actual good or not.

As scientists, we are awash in security technologies. We know how to build much more secure operating systems. We know how to build much more secure access control systems. We know how to build much more secure networks. To be sure, there are still technological problems, and research continues. But in the real world, network security is a business problem. The only way to fix it is to concentrate on the business motivations. We need to change the economic costs and benefits of security. We need to make the organizations in the best position to fix the problem want to fix the problem.

To do that, I have a three-step program. None of the steps has anything to do with technology; they all have to do with businesses, economics, and people.

Step one: Enforce liabilities. This is essential. Remember that I said the costs of bad security are not borne by the software vendors that produce the bad security. In economics this is known as an externality: a cost of a decision that is borne by people other than those making the decision. Today there are no real consequences for having bad security, or having low-quality software of any kind. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. If we expect software vendors to reduce features, lengthen development cycles, and invest in secure software development processes, they must be liable for security vulnerabilities in their products. If we expect CEOs to spend significant resources on their own network security--especially the security of their customers--they must be liable for mishandling their customers' data. Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.

This could happen in several different ways. Legislatures could impose liability on the computer industry by forcing software manufacturers to live with the same product liability laws that affect other industries. If software manufacturers produced a defective product, they would be liable for damages. Even without this, courts could start imposing liability-like penalties on software manufacturers and users. This is starting to happen. A U.S. judge forced the Department of Interior to take its network offline, because it couldn't guarantee the safety of American Indian data it was entrusted with. Several cases have resulted in penalties against companies that used customer data in violation of their privacy promises, or collected that data using misrepresentation or fraud. And judges have issued restraining orders against companies with insecure networks that are used as conduits for attacks against others. Alternatively, the industry could get together and define its own liability standards.

Clearly this isn't all or nothing. There are many parties involved in a typical software attack. There's the company that sold the software with the vulnerability in the first place. There's the person who wrote the attack tool. There's the attacker himself, who used the tool to break into a network. There's the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn't fall on the shoulders of the software vendor, just as one hundred percent shouldn't fall on the attacker or the network owner. But today one hundred percent of the cost falls on the network owner, and that just has to stop.

However it happens, liability changes everything. Currently, there is no reason for a software company not to offer more features, more complexity, more versions. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they're entrusted with.

Step two: Allow parties to transfer liabilities. This will happen automatically, because CEOs turn to insurance companies to help them manage risk, and liability transfer is what insurance companies do. From the CEO's perspective, insurance turns variable-cost risks into fixed-cost expenses, and CEOs like fixed-cost expenses because they can be budgeted. Once CEOs start caring about security--and it will take liability enforcement to make them really care--they're going to look to the insurance industry to help them out. Insurance companies are not stupid; they're going to move into cyber-insurance in a big way. And when they do, they're going to drive the computer security industry...just as they drive the security industry in the brick-and-mortar world.

A CEO doesn't buy security for his company's warehouse--strong locks, window bars, or an alarm system--because it makes him feel safe. He buys that security because the insurance rates go down. The same thing will hold true for computer security. Once enough policies are being written, insurance companies will start charging different premiums for different levels of security. Even without legislated liability, the CEO will start noticing how his insurance rates change. And once the CEO starts buying security products based on his insurance premiums, the insurance industry will wield enormous power in the marketplace. They will determine which security products are ubiquitous, and which are ignored. And since the insurance companies pay for the actual losses, they have a great incentive to be rational about risk analysis and the effectiveness of security products. This is different from a bunch of auditors deciding that firewalls are important; these are companies with a financial incentive to get it right. They're not going to be swayed by press releases and PR campaigns; they're going to demand real results.

And software companies will take notice, and will strive to increase the security in the products they sell, in order to make them competitive in this new "cost plus insurance cost" world.

Step three: Provide mechanisms to reduce risk. This will also happen automatically. Once insurance companies start demanding real security in products, it will result in a sea change in the computer industry. Insurance companies will reward companies that provide real security, and punish companies that don't--and this will be entirely market driven. Security will improve because the insurance industry will push for improvements, just as they have in fire safety, electrical safety, automobile safety, bank security, and other industries.

Moreover, insurance companies will want it done in standard models that they can build policies around. A network that changes every month or a product that is updated every few months will be much harder to insure than a product that never changes. But the computer field naturally changes quickly, and this makes it different, to some extent, from other insurance-driven industries. Insurance companies will look to security processes that they can rely on.

Actually, this isn't a three-step program. It's a one-step program with two inevitable consequences. Enforce liability, and everything else will flow from it. It has to. There's no other alternative.

Much of Internet security is a commons: an area used by a community as a whole. Like all commons, keeping it working benefits everyone, but any individual can benefit from exploiting it. (Think of the criminal justice system in the real world.) In our society we protect our commons--environment, working conditions, food and drug practices, streets, accounting practices--by legislating those areas and by making companies liable for taking undue advantage of those commons. This kind of thinking is what gives us bridges that don't collapse, clean air and water, and sanitary restaurants. We don't live in a "buyer beware" society; we hold companies liable when they take advantage of buyers.

There's no reason to treat software any differently from other products. Today Firestone can produce a tire with a single systemic flaw and they're liable, but Microsoft can produce an operating system with multiple systemic flaws discovered per week and not be liable. Today if a home builder sells you a house with hidden flaws that make it easier for burglars to break in, you can sue the home builder; if a software company sells you a software system with the same problem, you're stuck with the damages. This makes no sense, and it's the primary reason security is so bad today. I have a lot of faith in the marketplace and in the ingenuity of people. Give the companies in the best position to fix the problem a financial incentive to fix the problem, and fix it they will.



Terrorist Threats and Political Gains

Bruce Schneier
Counterpunch, April 27, 2004

Posturing, pontifications, and partisan politics aside, the one clear generalization that emerges from the 9/11 hearings is that information--timely, accurate, and free-flowing--is critical in our nation's fight against terrorism. Our intelligence and law-enforcement agencies need this information to better defend our nation, and our citizens need this information to better debate massive financial expenditures for anti-terrorist measures, changes in law that aid law enforcement and diminish civil liberties, and the upcoming Presidential election.

The problem is that the current administration has consistently used terrorism information for political gain. Again and again, the Bush administration has exaggerated the terrorist threat for political purposes. They've embarked on a re-election strategy that involves a scared electorate voting for the party that is perceived to be better able to protect them. And they're not above manipulating the national mood for political gain.

Back in January, the Bush administration released information designed to convince people that the Christmastime Code Orange alert, with its associated airplane flight cancellations, increased police presences, and broad privacy invasions, was motivated by credible information about a real terrorist threat. There was a new intelligence source, we were told.

The trouble is, the intelligence this source produced turned out to be nothing at all. And all the potential terrorists aboard those cancelled international flights turned out to be false alarms. One "terrorist" was a Welsh insurance agent, another an elderly Chinese woman who once ran a Paris restaurant. Yet another was a child. And the man who failed to show up for his Paris-Los Angeles flight, the man whose name matched that of a senior Al-Qaeda operative, turned out to be an Indian businessman with no links to terrorism at all.

On 10 June 2002, days before Minnesota FBI agent Coleen Rowley blew the whistle on a badly botched pre-9/11 investigation into some of the terrorists, Attorney General John Ashcroft announced the arrest of a terrorist planning on detonating a "dirty" nuclear bomb in the U.S. Jose Padilla was "disappeared": he was denied any access to an attorney, or any right to have the evidence against him put before a judge. The evidence against him was so flimsy that, even today, it has never been presented in public. (Currently the U.S. Supreme Court is hearing arguments on Padilla's behalf, as well as two other cases challenging the government's claim that it can detain anyone indefinitely, without allowing them the ability to defend themselves.)

Fourteen months later, the government announced another "victory," the arrest of an arms smuggler who arranged to sell a surface-to-air missile and planned to smuggle 50 more--missiles that could be used to shoot down commercial airplanes. Never mind that he seemed more like an innocent dupe entrapped by the intelligence services of Russia, Britain, and the U.S. The case against him has never been brought to trial, so we'll never know.

Even during World War II, German spies captured in the U.S. were given attorneys and tried in public court.

Another well-touted victory was the arrest of six men in Lackawanna, New York, on 13 September 2002. The evidence against them was never presented in court, because their guilty pleas were induced by threats of removing them from the criminal justice system and designating them "enemy combatants"--who could be held indefinitely without access to an attorney.

What does it say about the fairness of our justice system when prosecutors can use the threat of denying an accused access to that system?

Finally, in February, a federal prosecutor in Detroit actually sued Attorney General John Ashcroft, alleging the Justice Department interfered with the case, compromised a confidential informant and exaggerated results in the war on terrorism. Again, making political hay trumped national security concerns.

Security is always a trade-off, and making smart security trade-offs requires us to be able to realistically evaluate the risks. By continually skewing the available information, the Bush administration is ensuring that Americans don't have a clear picture of the terrorism risk. Through stern warnings of imminent danger, the administration is keeping Americans in fear. Fearful Americans are more likely to give away their freedoms and civil rights. Fearful Americans are more likely to sit docilely as the administration guts environmental laws, shields businesses from liability, rewrites foreign policy, and revamps the military--all in the name of counter-terrorism.

There are two basic ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands. The second is to keep people living in fear through constant threat warnings, security checks, rhetoric, and stories of terrorist plots foiled by the diligent work of the increasingly intrusive Department of Homeland Security.

The Republicans have spent decades running for office on "the Democrats are soft on Communism." Since 9/11, they've discovered "the Democrats are soft on terrorism." The effectiveness of this strategy depends on convincing Americans that there is a major terrorism threat, and that we need to give the government free rein to do whatever it sees fit.

Security is complicated, and countermeasures we put in place to defend against one threat may leave us more vulnerable to another. The truth is that the risk of terrorism in this country is as small as it has been since before 9/11. The risk of governance by a corrupt government is much greater. And it's becoming greater still with every policy decision made in the name of "the war on terrorism" that gives more power to the government and less to the people.



We Are All Security Customers

Bruce Schneier
CNET News.com, May 4, 2004

National security is a hot political topic right now, as both presidential candidates are asking us to decide which one of them is better fit to secure the country.

Many large and expensive government programs--the CAPPS-II airline profiling system, the US-VISIT program that fingerprints foreigners entering our country, and the various data-mining programs in research and development--take as a given the need for more security.

At the end of 2005, when many provisions of the controversial Patriot Act expire, we will again be asked to sacrifice certain liberties for security, as many legislators seek to make those provisions permanent.

As a security professional, I see a vital component missing from the debate. It's important to discuss different security measures, and determine which ones will be most effective. But that's only half of the equation; it's just as important to discuss the costs. Security is always a trade-off, and herein lies the real question: "Is this security countermeasure worth it?"

As Americans, and as citizens of the world, we need to think of ourselves as security consumers. Just as a smart consumer looks for the best value for his dollar, we need to do the same. Many of the countermeasures being proposed and implemented cost billions. Others cost in other ways: convenience, privacy, civil liberties, fundamental freedoms, greater danger of other threats. As consumers, we need to get the most security we can for what we spend.

The invasion of Iraq, for example, is presented as an important move for national security. It may be true, but it's only half of the argument. Invading Iraq has cost the United States enormously. The monetary bill is more than $100 billion, and the cost is still rising. The cost in American lives is more than 600, and the number is still rising. The cost in world opinion is considerable. There's a question that needs to be addressed: "Was this the best way to spend all of that? As security consumers, did we get the most security we could have for that $100 billion, those lives, and those other things?"

If it was, then we did the right thing. But if it wasn't, then we made a mistake. Even though a free Iraq is a good thing in the abstract, we would have been smarter spending our money, lives and good will elsewhere in the world.

That's the proper analysis, and it's the way everyone thinks when making personal security choices. Even people who say that we must do everything possible to prevent another Sept. 11 don't advocate permanently grounding every aircraft in this country. Even though that would be an effective countermeasure, it's ridiculous. It's not worth it. Giving up commercial aviation is far too large a price to pay for the increase in security that it would buy. Only a foolish security consumer would do something like that.

We need to bring the same analysis to bear when thinking about other security countermeasures. Is the added security from the CAPPS-II airline profiling system worth what it will cost, both in billions of dollars and in the systematic stigmatization of certain classes of Americans? Would we be smarter to spend our money on hiring Arabic translators within the FBI and the CIA, or on emergency response capabilities in our cities and towns?

As security consumers, we get to make this choice. America doesn't have infinite money or freedoms. If we're going to spend them to get security, we should act like smart consumers and get the most security we can.

The efficacy of a security countermeasure is important, but it's never the only consideration. Almost none of the people reading this essay wear bulletproof vests. It's not because they don't work--in fact they do--but because most people don't believe that wearing the vest is worth the cost. It's not worth the money, or the inconvenience, or the lack of style. The risk of being shot is low. As security consumers, we don't believe that a bulletproof vest is a good security trade-off.

Similarly, much of what is being proposed as national security is a bad security trade-off. It's not worth it, and as consumers we're getting ripped off.

Being a smart security consumer is hard, just as being a good citizen is hard. Why? Because both require thoughtful consideration of trade-offs and alternatives. But in this election year, it is vitally important. We need to learn about the issues. We need to turn to experts who are nonpartisan--who are not trying to get elected or stay elected. We need to become informed. Otherwise it's no different than walking into a car dealership without knowing anything about the different models and prices--we're going to get ripped off.


Curb electronic surveillance abuses
As technological monitoring grows more prevalent, court supervision is crucial

Bruce Schneier
Newsday, May 10, 2004

Years ago, surveillance meant trench-coated detectives following people down streets.

Today's detectives are more likely to be sitting in front of a computer, and the surveillance is electronic. It's cheaper, easier and safer. But it's also much more prone to abuse. In the world of cheap and easy surveillance, a warrant provides citizens with vital security against a more powerful police.

Warrants are guaranteed by the Fourth Amendment and are required before the police can search your home or eavesdrop on your telephone calls. But what other forms of search and surveillance are covered by warrants is still unclear.

An unusual and significant case recently heard in Nassau County's courts dealt with one piece of the question: Is a warrant required before the police can attach an electronic tracking device to someone's car?

It has always been possible for the police to tail a suspect, and wireless tracking is decades old. The only difference is that it's now much easier and cheaper to use the technology.

Surveillance will continue to become cheaper and easier -- and less intrusive. In the Nassau case, the police hid a tracking device on a car used by a burglary suspect, Richard D. Lacey. After Lacey's arrest, his lawyer sought to suppress evidence gathered by the tracking device on the grounds that the police did not obtain a warrant authorizing use of the device and that Lacey's privacy was violated.

It was believed to be the first such challenge in New York State and one of only a handful in the nation. A judge ruled Thursday that the police should have obtained a warrant. But he declined to suppress the evidence - saying the car belonged to Lacey's wife, not to him, and Lacey therefore had no expectation of privacy.

More and more, we are living in a society where we are all tracked automatically all of the time.

If the car used by Lacey had been outfitted with the OnStar system, he could have been tracked through that. We can all be tracked by our cell phones. E-ZPass tracks cars at tunnels and bridges. Security cameras record us. Our purchases are tracked by banks and credit card companies, our telephone calls by phone companies, our Internet surfing habits by Web site operators.

The Department of Justice claims that it needs these, and other, search powers to combat terrorism. A provision slipped into an appropriations bill allows the FBI to obtain personal financial information from banks, insurance companies, travel agencies, real estate agents, stockbrokers, the U.S. Postal Service, jewelry stores, casinos and car dealerships without a warrant.

Starting this year, the U.S. government is photographing and fingerprinting foreign visitors coming into this country from all but 27 other countries. CAPPS II (Computer Assisted Passenger Prescreening System) will probe the backgrounds of all passengers boarding flights. Over New Year's, the FBI collected the names of 260,000 people staying at Las Vegas hotels. More and more, the "Big Brother is watching you" style of surveillance is becoming a reality.

Unfortunately, the debate often gets mischaracterized as a question about how much privacy we need to give up in order to be secure. People ask: "Should we use this new surveillance technology to catch terrorists and criminals, or should we favor privacy and ban its use?"

This is the wrong question. We know that new technology gives law enforcement new search techniques, and makes existing techniques cheaper and easier. We know that we are all safer when the police can use them. And the Fourth Amendment already allows even the most intrusive searches: The police can search your home and person.

What we need are corresponding mechanisms to prevent abuse. This is the proper question: "Should we allow law enforcement to use new technology without any judicial oversight, or should we demand that they be overseen and accountable?" And the Fourth Amendment already provides for this in its requirement of a warrant.

The search warrant - a technologically neutral legal requirement - basically says that before the police open the mail, listen in on the phone call or search the bit stream for key words, a "neutral and detached magistrate" reviews the basis for the search and takes responsibility for the outcome. The key is independent judicial oversight; the warrant process is itself a security measure protecting us from abuse and making us more secure.

Much of the rhetoric on the "security" side of the debate cloaks one of its real aims: increasing law enforcement powers by decreasing its oversight and accountability. It's a very dangerous road to take, and one that will make us all less secure. The more surveillance technologies that require a warrant before use, the safer we all are.



CLEARly Muddying the Fight Against Terror

Bruce Schneier
News.com, June 16, 2004

Danny Sigui lived in Rhode Island. After witnessing a murder, he called 911 and became a key witness in the trial. In the process, he unwittingly alerted officials to his immigration status. He was arrested, jailed and eventually deported.

In a misguided effort to combat terrorism, some members of Congress want to use the National Crime Information Center (NCIC) database to enforce federal civil immigration laws. The idea is that state and local police officers who check the NCIC database in routine situations will be able to assist the federal government in enforcing our nation's immigration laws.

There are a limited number of immigration agents at the Department of Homeland Security, so asking the 650,000 state, local and tribal police officers to help would be a significant "force multiplier."

The problem is that the Clear Law Enforcement for Criminal Alien Removal (CLEAR) Act and the Homeland Security Enhancement Act (HSEA) aren't going to help fight terrorism. Even worse, this will put an unfunded financial burden on local police forces and is likely to make us all less safe in the long run.

Security is a trade-off. It's not enough to ask: "Will increased verification of immigration status make it less likely that terrorists remain in our country?" We have to ask: "Given the police resources we have, is this the smartest way to deploy them?"

The CLEAR Act and HSEA will certainly result in more people being arrested for immigration violations but will probably have zero effect on terrorism. Some of the Sept. 11, 2001, terrorists were in the country legally. Others were easily able to keep their heads down. It's not as if terrorists are waiting to be arrested, if only the police have sufficient information about their immigration status. It's a nice theory, but it's just not true.

And none of this comes cheaply.

The cost of adding this information to criminal databases easily runs into the tens of millions of dollars. The cost to local police of enforcing these immigration laws is likely to be at least 10 times that. And this cost will have to be borne by the community, either through extra taxes or by siphoning police from other duties.

I can't think of a single community where the local police are sitting around idly, looking for something else to do. Forcing them to become immigration officers means less manpower to investigate other crimes. And this makes us all less safe.

Terrorists represent only a very small minority of any culture. One of the most important things that a good police force does is maintain good ties with the local community. If you knew that every time you contacted the police, your records would be checked for unpaid parking tickets, overdue library fines and other noncriminal violations, how would you feel about the police? It's far more important that people feel confident and safe when calling the police.

When a Muslim immigrant notices something fishy going on next door, we want him to call the police. We don't want him to fear that the police might deport him or his family. We don't want him hiding if the police come to ask questions. We want him and the community on our side.

By turning police officers into immigration agents, the CLEAR Act and HSEA will discourage the next Danny Sigui from coming forward to report crimes or suspicious activities. This will harm national security far more than any security benefits received from catching noncriminal immigration violations. Add to that the costs of having police chasing immigration violators rather than responding to real crimes, and you've got a really bad security trade-off.



Crypto-Gram Newsletter
December 15, 2000

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com


A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at . To subscribe or unsubscribe, see below.

Copyright (c) 2000 by Counterpane Internet Security, Inc.

In this issue:

* Voting and Technology
* Crypto-Gram Reprints
* News
* Counterpane Internet Security News
* Crypto-Gram News
* IBM's New Crypto Mode of Operation
* The Doghouse: Blitzkrieg
* Solution in Search of a Problem: Digital Safe-Deposit Boxes
* New Bank Privacy Regulations
* Comments from Readers


Voting and Technology

In the wake of last November's election, pundits have called for more accurate voting and vote counting. To most people, this obviously means more technology. But before jumping to conclusions, let's look at the security and reliability issues surrounding voting technology.

The goal of any voting system is to establish the intent of the voter, and transfer that intent to the vote counter. Amongst a circle of friends, a show of hands can easily decide which movie to attend. The vote is open and everyone can monitor it. But what if Alice wants _Charlie's Angels_ and Bob wants _102 Dalmatians_? Will Alice vote in front of her friends? Will Bob? What if the circle of friends is two hundred; how long will it take to count the votes? Will the theater still be showing the movie? Because the scale changes, our voting methods have to change.

Anonymity requires a secret ballot. Scaling and speed requirements lead to mechanical and computerized voting systems. The ideal voting technology would have these five attributes: anonymity, scalability, speed, audit, and accuracy -- direct mapping from intent to counted vote.

Through the centuries, different technologies have done their best. Stones and pot shards dropped in Greek vases led to paper ballots dropped in sealed boxes. Mechanical voting booths and punch cards replaced paper ballots for faster counting. New computerized voting machines promise even more efficiency, and Internet voting even more convenience.

But in the rush to improve the first four attributes, accuracy has been sacrificed. The way I see it, all of these technologies involve translating the voter's intent in some way; some of them involve multiple translations. And at each translation step, errors accumulate.

This is an important concept, and one worth restating. Accuracy is not how well the ballots are counted by, for example, the optical scanner; it's how well the process translates voter intent into properly counted votes.

Most of Florida's voting irregularities are a direct result of these translation errors. The Palm Beach system had several translation steps: voter to ballot to punch card to card reader to vote tabulator to centralized total. Some voters were confused by the layout of the ballot, and mistakenly voted for someone else. Others didn't punch their ballots cleanly enough for the tabulating machines to read them. Ballots were lost and not counted. Machines broke down, and they counted ballots improperly. Subtotals were lost and not counted in the final total.
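To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The step names follow the Palm Beach chain above; the per-step accuracy figures are invented and the steps are assumed independent, so treat it as an illustration of the arithmetic, not a measurement.

  steps = {
      "voter -> ballot (understands the layout)": 0.99,
      "ballot -> punch card (clean punch)":       0.985,
      "punch card -> card reader":                0.995,
      "card reader -> vote tabulator":            0.999,
      "tabulator -> centralized total":           0.9995,
  }

  end_to_end = 1.0
  for step, accuracy in steps.items():
      end_to_end *= accuracy                       # errors accumulate multiplicatively
      print(f"{step:45s} cumulative accuracy = {end_to_end:.4f}")

  print(f"\nroughly {(1 - end_to_end) * 100:.1f}% of voter intent is lost end to end")

With these made-up numbers, five individually "pretty good" steps lose about three percent of voter intent between the booth and the final total.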

Certainly Florida's antiquated voting technology is partially to blame, but newer technology wouldn't magically make the problems go away. It could even make things worse, by adding more translation layers between the voters and the vote counters and preventing recounts.

That's my primary concern about computer voting: There is no paper ballot to fall back on. Computerized voting machines, whether they have a keyboard and screen or a touch-screen ATM-like interface, could easily make things worse. You have to trust the computer to record the votes properly, tabulate the votes properly, and keep accurate records. You can't go back to the paper ballots and try to figure out what the voter wanted to do. And computers are fallible; some of the computer voting machines in this election failed mysteriously and irrecoverably.

Online voting schemes have even more potential for failure and abuse. We know we can't protect Internet computers from viruses and worms, and that all the operating systems are vulnerable to attack. What recourse is there if the voting system is hacked, or simply gets overloaded and fails? There would be no means of recovery, no way to do a recount. Imagine if someone hacked the vote in Florida; redoing the election would be the only possible solution. A secure Internet voting system is theoretically possible, but it would be the first secure networked application ever created in the history of computers.

There are other, less serious, problems with online voting. First, the privacy of the voting booth cannot be imitated online. Second, in any system where the voter is not present, the ballot must be delivered tagged in some unique way so that people know it comes from a registered voter who has not voted before. Remote authentication is something we've not gotten right yet. (And no, biometrics don't solve this problem.) These problems also exist in absentee ballots and mail-in elections, and many states have decided that the increased voter participation is more than worth the risks. But because online systems have a central point to attack, the risks are greater.

The ideal voting system would minimize the number of translation steps, and make those remaining as simple as possible. My suggestion is an ATM-style computer voting machine, but one that also prints out a paper ballot. The voter checks the paper ballot for accuracy, and then drops it into a sealed ballot box. The paper ballots are the "official" votes and can be used for recounts, and the computer provides a quick initial tally.

Even this system is not as easy to design and implement as it sounds. The computer would need to be treated like other safety- and mission-critical systems: fault tolerant, redundant, and built from carefully analyzed code. Adding the printer adds problems; it's yet another part to fail. And these machines will only be used once a year, making it even harder to get right.

But in theory, this could work. It would rely on computer software, with all those associated risks, but the paper ballots would provide the ability to recount by hand if necessary.
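A toy sketch of that division of labor -- the machine tally as a quick estimate, the printed ballots as the official record -- with invented candidate names and counts:

  # Quick electronic tally reported by the machine (illustrative numbers only).
  machine_tally = {"Candidate A": 1042, "Candidate B": 998}

  # Hand count of the voter-verified paper ballots from the same ballot box.
  paper_ballots = ["Candidate A"] * 1040 + ["Candidate B"] * 1000

  hand_count = {}
  for ballot in paper_ballots:
      hand_count[ballot] = hand_count.get(ballot, 0) + 1

  if hand_count == machine_tally:
      print("machine tally matches the paper record")
  else:
      print("discrepancy found; the paper ballots are authoritative -- recount by hand")
      print("machine:", machine_tally)
      print("paper:  ", hand_count)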

Even with a system like this, we need to realize that the risk of errors and fraud cannot be brought down to zero. Cambridge Professor Roger Needham once described automation as replacing what works with something that almost works, but is faster and cheaper. We need to decide what's more important, and what tradeoffs we're willing to make.

This is *the* Web site on electronic voting. Rebecca Mercuri wrote her PhD thesis on the topic, and it is well worth reading.


Good balanced essays:










Pro-computer and Internet voting essays:




Problems with New Mexico computerized vote-counting software:


Crypto-Gram Reprints

The Fallacy of Cracking Contests:


How to Recognize Plaintext:


"Security is a process, not a product."


Echelon Technology:


European Digital Cellular Algorithms:


News

One of the problems facing a network security administrator is that there are simply too many alerts to deal with:


Secret (and unauthorized) CIA chat room:





The world's first cybercrime treaty is being hastily redrafted after Internet lobby groups assailed it as a threat to human rights that could have "a chilling effect on the free flow of information and ideas."



A Field Guide for Investigating Computer Crime. Five parts, very interesting:


Interview with Vincent Rijmen (one of the authors of Rijndael) about AES:


Microsoft's Ten Immutable Laws of Security. A good list, actually.


A new report claims that losses due to shoddy security cost $15B a year. Investments in network security are less than half that. Sounds like lots of people aren't doing the math.



NESSIE is a European program for cryptographic algorithm standards (kind of like a European AES, only more general). Here's a list of all the algorithms submitted to the competitions, with links to descriptive documents. Great source for budding cryptanalysts.



More Carnivore information becomes public. Among the information included in the documents was a sentence stating that the PC that is used to sift through e-mail "could reliably capture and archive all unfiltered traffic to the internal hard drive." Since this directly contradicts the FBI's earlier public assertions, why should anyone trust them to speak truthfully about Carnivore in the future?


Independent Carnivore review less than stellar:

Carnivore: How it Works


Interesting biometrics reference site:


The People for Internet Responsibility has a position paper on digital signatures. Worth reading.


The Global Internet Project has released a research paper entitled, "Security, Privacy and Reliability of the Next Generation Internet":


More on the stolen Enigma: When it was returned, some rotors were still missing. And there's been an arrest in the case.


The pros and cons of making attacks public:


And the question of retaliation: should you strike back against hackers if the police can't do anything?


Commentary on Microsoft's public response to their network being hacked.


A review of cybercrime laws:


During WWII, MI5 tested Winston Churchill's wine for poison by injecting the stuff into rats. This is a photo of a couple of very short typewritten pages detailing the report.


Internet users have filed a lawsuit against online advertiser MatchLogic Inc., alleging that their privacy was violated by the company's use of devices that track their Web browsing habits.


A Swiss bank, UBS AG, has just issued a warning bulletin to Outlook and Outlook Express users of its Internet banking service. There is a virus out there that, when a customer attempts an Internet banking transaction, will present legitimate-looking HTML menus, prompt the user for his Internet banking passwords and security codes, and send the information to its own server.


Security and usability:


Top 50 Security Tools. A good list, I think.


Social engineering at its finest: The Nov. 27 issue of _The New Yorker_ has a story written by someone who quit his job to write, but discovered he never got anything done at home. So he strolled into the offices of an Internet startup and pretended to work there for 17 days. He chose a desk, got on the phone list, drank free soda and got free massages. He made fake business phone calls and brought his friends in for fake meetings. After 6 PM you're supposed to swipe a badge to get in, but luckily a security guard held the door for him. He only left when they downsized almost everyone else on his floor -- and not because they caught on; he went around saying goodbye to everyone in the office and everyone wished him well. No Web link, unfortunately.

150-year-old Edgar Allan Poe ciphers decrypted:


Very interesting talks on hacking by Richard Thieme (audio versions):


Picture recognition technology that could replace passwords:


Good article on malware:


Not nearly enough is being done to train information security experts, and U.S. companies face a staffing shortfall that will likely grow ever larger.


Luciano Pavarotti could not check in at his Italian hotel because he lacked proper identification. When you can't even authenticate in the real world, how are you ever going to authenticate in cyberspace?


After receiving a $10M anonymous grant, Johns Hopkins University is opening an information security institute:


Most countries have weak computer crime laws:


Plans for an open source operating system designed to defeat U.K.'s anti-privacy laws:


Microsoft held an invitational security conference: SafeNet 2000. Near as I can tell (I wasn't there; schedule conflict), there was a lot of posturing but no real meat. Gates made a big deal of new cookie privacy features on Internet Explorer 6.0, but all it means is that Microsoft is finally implementing the P3P protocol...which isn't all that great anyway. Microsoft made a great show of things, but talk is a lot cheaper than action.





Speaking of action, Microsoft now demands that security mailing lists not republish details of Microsoft security vulnerabilities, citing copyright laws.


Counterpane Internet Security News

Counterpane receives $24M in third-round funding:



Counterpane success stories:




More reviews of Secrets and Lies:


All reviews:


Crypto-Gram News

Crypto-Gram has been nominated for an "Information Security Excellence Award" by Information Security Magazine, in the "On-Line Security Resource" category. If you are a subscriber to the magazine--it's a free subscription--you can vote. You will need a copy of your magazine's mailing label. Voting is open until 17 January.



Thank you for your support.

IBM's New Crypto Mode of Operation

In November, IBM announced a new block-cipher mode of operation that "simultaneously encrypts and authenticates," using "about half the time," and is more suited for parallelization. IBM's press release made bold predictions of the algorithm's wide use and fast acceptance. I'd like to offer some cautionary notes.

Basically, the research paper proposes two block cipher modes that provide both encryption and authentication. Its author is Charanjit S. Jutla of the T.J. Watson Research Center. This is really cool research. It's new work, and it proves (and shows how) that integrity can be achieved for free on top of symmetric-key encryption.

This has some use, but I don't see an enormous market demand for this. A factor of two speed improvement is largely irrelevant. Moore's Law dictates that you double your speed every eighteen months, just by waiting for processors to improve. AES is about three times the speed of DES and eight times the speed of triple-DES. Things are getting faster all the time. Much more interesting is the parallelization; it could be a real boon for hardware crypto accelerators for things like IPsec.

Even so, cryptographic implementations are not generally hampered by the inefficiency of algorithms. Rarely is the cryptography the bottleneck in any communications. Certainly using the same cryptographic primitive for both encryption and authentication is a nice idea, but there are many ways to do that.

Combining encryption with authentication is not new. The literature has had algorithms that do both for years. This research has a lot in common with Phillip Rogaway's OCB mode. On the public-key side of things, Y. Zheng has been working on "signcryption" since 1998.

Most security protocols prefer separating encryption and authentication. The original implementation of PGP, for example, used the same keys for encryption and authentication. They were separated in later versions of the protocol. This was done for security reasons; encryption and authentication are different. The key management is different, the security requirements are different, and the implementation requirements are different. Combining the two makes engineering harder, not easier. (Think of a car pedal that both accelerates and brakes; I think we can agree that this is not an improvement.)
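To make that separation concrete, here is a minimal encrypt-then-MAC sketch in Python with independent keys for encryption and authentication. It assumes the third-party "cryptography" package is installed, and it illustrates the general construction described above -- it is not IBM's mode.

  import os
  import hmac
  import hashlib
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  def seal(enc_key, mac_key, plaintext):
      # Encrypt with one key...
      nonce = os.urandom(16)
      encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
      ciphertext = encryptor.update(plaintext) + encryptor.finalize()
      # ...then authenticate nonce + ciphertext with a *different* key.
      tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
      return nonce + ciphertext + tag

  def open_sealed(enc_key, mac_key, blob):
      nonce, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
      expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
      if not hmac.compare_digest(tag, expected):
          raise ValueError("authentication failed")   # check the MAC before decrypting
      decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
      return decryptor.update(ciphertext) + decryptor.finalize()

  enc_key, mac_key = os.urandom(32), os.urandom(32)    # independent keys
  blob = seal(enc_key, mac_key, b"attack at dawn")
  assert open_sealed(enc_key, mac_key, blob) == b"attack at dawn"

A combined mode like the one IBM proposes does both jobs in a single pass over the data; the point of the construction above is simply that the two jobs, and the two keys, stay separate.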

Unfortunately, IBM is patenting these modes of operation. This makes it even less likely that anyone will implement them, and very unlikely that NIST will make them a standard. We've lived under the RSA patents for long enough; no one will willingly submit themselves to another patent regime unless there is a clear and compelling advantage. It's just not worth it.

IBM has a tendency to turn good cryptographic research into ridiculous press releases. Two years ago (August 1998) IBM announced that the Cramer-Shoup algorithm was going to revolutionize cryptography. It, too, had provable security. A year before that, IBM announced to the press that the Ajtai-Dwork algorithm was going to change the world. Today I can think of zero implementations of either algorithm, even pilot implementations. This is all good cryptography, but IBM's PR department overreaches and tries to turn them into things they are not.

IBM's announcement:


Press coverage:



The research paper:
[link dead; see http://csrc.nist.gov/encryption/modes/]

Rogaway's OCB Mode:


My write-up of Cramer-Shoup:


The Doghouse: Blitzkrieg

This is just too bizarre for words. If the Doghouse had a hall of fame, this would be in it.



Solution in Search of a Problem:
Digital Safe-Deposit Boxes

Digital safe-deposit boxes seem to be popping up like mushrooms, and I can't figure out why. Something in the water? Group disillusionment? Whatever is happening, it doesn't make sense to me.

Look at the bank FleetBoston. In October, they announced something called fileTRUST, a digital safe-deposit box. For $11 a month, FleetBoston will store up to 40MB of stuff in their virtual safe deposit box. Their press release reads: "Document storage enables a business owner to expand memory capacity without having to upgrade hardware and guarantees that files will be protected from deadly viruses..." Okay, $11 for 40MB is $0.28 per MB per month. You can go down to any computer superstore and buy a 20 Gig drive for $120; if we assume the drive will last four years, that's about $0.0001 per MB per month. Is it that difficult to add a new hard drive to a computer? And the "deadly viruses" claim: storing backups offline is just as effective against viruses, and fileTRUST's feature that allows you to map your data as a network drive makes it just as vulnerable to viruses as any other drive on your computer. Or if you don't map the fileTRUST archive, isn't the decryption key vulnerable to viruses?
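The arithmetic, spelled out as a quick Python sketch (using the prices quoted above; the four-year drive lifetime is the same assumption as in the text):

  filetrust_per_mb_month = 11 / 40                        # $11/month for 40 MB
  drive_cost, drive_mb, lifetime_months = 120, 20_000, 4 * 12
  own_drive_per_mb_month = drive_cost / drive_mb / lifetime_months

  print(f"fileTRUST: ${filetrust_per_mb_month:.2f} per MB per month")    # ~$0.28
  print(f"own drive: ${own_drive_per_mb_month:.6f} per MB per month")    # ~$0.0001
  print(f"fileTRUST costs roughly {filetrust_per_mb_month / own_drive_per_mb_month:,.0f} times as much")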

I dismissed this as a bank having no clue how computers work, but then I started seeing the same idea elsewhere. At least three other companies -- DigiVault, Cyber-Ark, and Zephra -- are doing essentially the same thing, but touting it as kind of a poor man's VPN. You can use this virtual safe-deposit box as kind of a secure shared hard drive. Presumably you can give different people access to different parts of this shared space, take access away quickly, reconfigure, etc.

The DigiVault site is the most entertaining of the bunch. There are a lot of words on their pages, but no real information about what the system actually *does* or how it actually *works*. Even the "Technical Specifications" don't actually specify anything, and instead parrot some security buzzwords.

First off, the safe-deposit box metaphor (Cyber-Ark calls it a "Network Vault" (tm)) makes no sense. The primary value of a safe-deposit box is reliability. You put something in, and it will remain there until you show up at the bank with your key. That's why it's called a "safe-deposit box" and not a "private deposit box," although privacy is a side benefit. The "digital safe-deposit box" provides privacy (insofar as the system is actually secure), but is just as vulnerable to denial-of-service attacks as any other server on the Internet. And the box is only secure against actual destruction of the data insofar as they back up the data to some kind of media and store it somewhere. These companies presumably make backups, but how often? Where and how are the backups stored? The Web sites don't bother to say.

The problem with this metaphor becomes apparent when you read the "No Time for a VPN?" article (second DigiVault URL). The author says it's like a safe deposit box because you need two keys to open it: the bank uses your public key and you use your private key. But the point of having two keys to a real safe deposit box is that the bank only provides its key after you prove your identity; that way, someone stealing your key can't automatically get into your box. It works because the bank's key is *not* public. With DigiVault, the bank uses the same public key that you give to others so they can send stuff to your box. In that case, what's the point of the bank's key?

Second, I don't understand the business model. Yes, VPNs are hard to configure, but they're now embedded in firewalls and operating systems, and are getting easier to use. Yes, it's nice to have a shared piece of digital storage, but 1) I generally use my webserver, and 2) there are companies like X-Drive giving this stuff away. Once you combine encrypted mail with free Web storage space, you have the same functionality that a virtual safe-deposit box offers, for free. Now you're competing solely on user interface.

A digital safe-deposit box (or whatever it should be called) might be the ideal product for someone. But I just don't see enough of those someones to make a viable market.

fileTRUST:





Others:





New Bank Privacy Regulations

There are some new (proposed) interagency guidelines for protecting customer information. Near as I can tell, "interagency" includes the Office of the Comptroller of the Currency (Treasury), Board of Governors of the Federal Reserve System, and Office of Thrift Supervision (also Treasury). If you're a bank, this is a big deal. Ensuring the privacy of your customers will now be required.

Here are some highlights of the proposals:

The Board of Directors is responsible for protection of customer information and data.

The Board of Directors must receive reports on the overall status of the information security program, including materials related to attempted or actual security breaches or violations and responsive actions taken by management.

Monitoring systems must be in place to detect actual and attempted attacks on or intrusions into customer information systems.

Management must develop response programs that specify actions to be taken when unauthorized access to customer information systems is suspected or detected.

Staff must be trained to recognize, respond to, and where appropriate, report to regulatory and law enforcement agencies, any unauthorized or fraudulent attempts to obtain customer information.

Management must monitor, evaluate, and adjust, as appropriate, the information security program in light of any relevant changes in technology, the sensitivity of its customer information, and internal or external threats to information security.

These rules are an addition to something called Regulation H. Regulation H is an existing section of legal code that covers a variety of stuff, including the infamous "Know Your Customer" program.

Proposed rules:


Comments on the proposed rules:


Some other privacy regulations that went into effect on 13 November, with optional compliance until 1 July 2001:


Comments from Readers

From: Anonymous
Subject: Microsoft

You didn't hear this from me, but:

- The attackers didn't get in using QAZ. As of last week, Microsoft still didn't know how they entered the network. The media invented the QAZ story, and Microsoft decided not to correct them.
- The damage is much worse than anyone has speculated.

From: Anonymous
Subject: Microsoft

I was involved with Microsoft's interaction with the press over "the event." What actually got told to the press was a completely *separate* incident than the one that really caused the problems. The reason that none of the stories agreed was that they were all fiction.

From: Julian Cogdell
Subject: Microsoft "set hacker trap" theory

Not quite the "penetration test by a Microsoft tiger team" you predicted in the latest Crypto-Gram, but it's almost there....



From: "Ogle Ron (Rennes)"
Subject: Implications of the Microsoft Hack

I agree with you about this being an unprofessional job, but I wonder what will happen when this becomes a professional job with long-term objectives. I keep thinking that the computerized world is going to have its Black Plague.

If someone wanted to devastate the computerized world, one way would be to plant code into a future release of an operating system that would be widely disseminated and remotely triggerable. If an attacker were to have a long-term objective, she could steal the code, create 30 or 40 vulnerabilities in several different parts of the software, and return the code. Then, say in three years, the attacker could determine which vulnerabilities remained in the "released" software.

She would then devise ways to find the quickest and deadliest attacks while waiting for an additional two years for the software to become entrenched in the world. At this time, she would deploy one vulnerability to show the world what power she could wield. Because the vulnerabilities would be in several different parts of the operating system, it would be very difficult (i.e., near impossible) to remove these other surviving vulnerabilities or even defend against them. The one exception for a defense would be to unplug yourself from the Internet. I'm thinking in five years, to unplug yourself from the Internet for any prolonged period of time would be tantamount to going out of business.

The hacker would wait for two days and put out a demand for whatever she wanted and of course immunity from prosecution, or she would unleash the other vulnerabilities to the other computers that still remained on the Internet. Either way, corporations are losing billions a day, and these corporations would put such pressure on their governments to do whatever was required. Remember, if you're off the Internet to protect yourself, then you can't support commerce. The nice part for the operating system company is that they are covered because all of these corporations are using "AS IS" software with no guarantees or warranties.

How is this possible? Technically, all of the pieces are there to accomplish such an attack. I believe that motive is still missing with the people who are technically capable. I believe this is a reasonable possibility because of the following:

1. With a little more knack, Microsoft could have been hacked without being detected and the attacker could have downloaded the software for a future release. The attacker would also steal a few passwords to be able to get back in as an authorized user for the future.

2. We have seen that people can write some very dangerous code, usually through viruses. Given the source code, a person could devise very very dangerous code and could disguise it. Remember that Microsoft programmers often embed "Easter eggs" and self-promoting code that makes it through their quality assurance checks. Now to make sure that enough vulnerabilities survive (5 to 10) into the released version, the attacker would need to create 30 to 40 such vulnerabilities.

3. Based upon the openness of the software sharing, the attacker could come back in with one or more of the authorized user accounts. The attacker then uploads the "new" software into the code base. Some of this code will be lost through normal evolution of the code base, but enough of the exploits should survive.

4. We know that security is not really looked at from a quality assurance or testing perspective because of the sheer number of vulnerabilities that are uncovered that should never have been there in the first place. Programmers/testers are basically not very knowledgeable in good software engineering practices, so "bad" code doesn't affect them much. Therefore, if the code works, they are likely to say it's good enough.

5. Most companies support a computer infrastructure made up of mostly a MS Windows environment. Because companies have this homogeneous solution, all of their systems would be very vulnerable to this or other types of attacks which would devastate their business. This of course was seen during the Love Bug virus when companies with mostly MS Windows systems were brought to their knees. Just as in nature, diversity is rewarded, but the computer world is reversed (that is, until the Black Plague!).

I think that the above scenario is definitely possible. With the dependence upon MS Windows and the growing dependence upon the Internet to conduct business, the above attack would cause huge devastation. The piece lacking to make this scenario real is a professional group or person with a motive that is willing to invest the time and effort for a long term pay-off. Terrorist groups could have a field day with this. The nice part about this is that if in the next five years the attacker decides not to go through with the attack, then they can just leave the vulnerabilities intact and nobody will be the wiser.

From: "Louis A. Mamakos"
Subject: Digital Signatures

I found your essay on "Why Digital Signatures Are Not Signatures" very interesting. There's an analogue in the Real World which might help explain the situation.

It's the check signing machine. It contains the "signature" of a Real Person, and is used to save the Real Person the drudgery of actually signing 5000 paychecks every couple of weeks. Did the Real Person ever actually see each of these documents? Nope, but there's an expectation that the check signing machine is used only for authorized purposes by authorized personnel. Much the same as software which computes the RSA algorithm to "sign" documents.

It's interesting that the use of the check signing machine probably wouldn't be allowed for, e.g., signing contracts. I suppose it's all about expectations.

From: Douglas Davidson
Subject: Digital Signatures

We can perhaps gain some historical perspective on this issue by considering a predecessor of signatures, namely seals. Seals are an ancient human invention, probably antedating writing; they have been used by cultures around the world for purposes similar to those that might be served by digital signatures: providing evidence of the origin and authenticity of a document, indicating agreement, and so forth. Seals also have a similar drawback: they do not really provide evidence of the intent of a particular person, only of the presence of a certain object, which could equally well have been used by anyone who came into possession of it. A common theme in the literature of cultures that use seals is their misuse by unauthorized persons, often close associates or family members of the rightful owner. Even if you have a trusted signing computer, for which you can maintain complete hardware and software security, can you be certain that your children can't get to it?

From: Ben Wright
Subject: Digital Signatures

You are correct about the problems with digital signatures; they do not prove intent. They do not perform the legal wonders claimed by their most zealous proponents.

But you are wrong about the new E-Sign law. First, the law does not say that digital (or electronic) signatures are equivalent to handwritten signatures. Laymen summarize the law that way, but strictly speaking, that is not what the law says.

Second, the E-Sign law does not say that digital signatures prove intent or anything else. The new E-Sign law is very different from the (misguided) "digital signature" laws in states like Utah.

The E-Sign law is good. It simply says that an electronic signature (whether based on PKI, biometrics, passwords or whatever) shall not be denied legal effect solely because it is electronic. That is all it says. It does not address proof of intent, proof of action or proof of anything else. It does not specify technology. It does not even mention digital signatures, asymmetric algorithms, public/private key systems or PKI.

See: http://ourworld.compuserve.com/homepages/Ben_Wright/...

From: "Herbert Neugebauer"
Subject: Digital Signatures

Your article shoots in the wrong direction. You discredit the principle of digital signatures. Your explanation, however, does not convince me. The examples of why digital signatures will never be 100% safe are correct, but the same thing applies to real ink-on-paper signatures. You even partly acknowledge in your article that there are cases in court where these real signatures are denied. And these cases are sometimes won, sometimes lost.

Is the risk of digital signatures higher than the risk of ink-on-paper signatures? We don't know. There are hundreds of ways to fake "ink-on-paper" signatures. There are similar "ways of attack," like technical attacks (fake signatures) or social attacks that lead people to sign a paper that contains something different from what they believe they are signing.

Some are good, others are weak, and people can easily prove that they didn't sign, or didn't want to sign, or thought they were actually signing something else. Where's the difference from digital signatures?

I personally think the future will have to show how strong or weak the digital signatures actually are compared to "real" signatures. In the meantime I think your article is counter-productive. It generates distrust. I think you intended to warn people that blind trust in technology is wrong and that just by implementing PKI and using digital signatures things are not automatically completely secure. That's correct. That's good. That's important.

However the statement "These laws are a mistake. Digital signatures are not signatures, and they can't fulfill their promise," is in my view plain wrong. We can only judge this 10 years down the road once we really used the technology and can really compare how it works in comparison with "real" signatures. Today digital signatures are virtually non-existent -- not used at all.

We should start adopting it. We have to constantly review, check, test, warn, revise, and reinvent both technology and laws. We should be careful, not blind, but we should not dig a big hole and hide in it for fear of the "end of the world."

From: Peter Marks
Subject: Trusting Computers

In the latest Crypto-Gram you wrote in one context:

> Because the computer is not trusted, I cannot rely on
> it to show me what it is doing or do what I tell it to.

And in another:

> "... the computer refused to believe that the power had
> gone off in the first place."

There's an ironic symmetry here. Perhaps computers feel hampered by a lack of trusted humans. :-)

From: jfunchion answerthink.com (Jack Funchion)
Subject: Semantic Attacks

I have been following the discussion on semantic attacks in the Crypto-Gram the last two months, particularly the idea of changing old news stories in archives and the like. In a previous job I worked for a company that among other things provided a technical analysis system for evaluating stocks. It was based on a database of pricing history, and I can remember dreaming up an idea of how to make a killing in the stock market. The idea is simply to go back and change the stock pricing data in small increments in the databases so that the various technical analysis equations used by quantitative traders will be wrong, and predictably so. You then take positions opposite those predicted by the now known (by you) to be incorrect analyses. I even came up with a name for this kind of attack -- the Saramago Subterfuge. The name comes from Jose Saramago, Portuguese novelist and Nobel Literature prize winner. He wrote a book a few years back called _The History of the Siege of Lisbon_ which revolves around a proofreader who changes a single word in a historical archive and thus changes the history of his country. I recommend it for your readers.
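As a purely hypothetical illustration of the subterfuge (invented prices, and a deliberately simple 3-day versus 5-day moving-average crossover standing in for the "technical analysis equations"), a small nudge to a single archived data point is enough to flip the model's signal:

  def moving_average(prices, window):
      return sum(prices[-window:]) / window

  def signal(prices):
      # toy quantitative rule: short average above long average means "buy"
      return "buy" if moving_average(prices, 3) > moving_average(prices, 5) else "sell"

  history = [100.0, 100.5, 101.0, 100.8, 100.6, 100.4, 100.2]
  print("signal on the real archive:    ", signal(history))    # sell

  # The attacker quietly lowers one *old* point by about 1%. It sits inside
  # the 5-day window but outside the 3-day window, so only the long average moves.
  tampered = history[:]
  tampered[2] -= 1.2
  print("signal on the tampered archive:", signal(tampered))   # buy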

From: Xcott Craver
Subject: Watermarking

I'm one of the Princeton researchers who participated in the successful hack of SDMI's technologies. I read your column about SDMI with interest, but have a few small comments and possible objections:

> Near as I can tell, the SDMI break does not conform to
> the challenge rules, and the music industry might claim
> that the SDMI schemes survived the technology.

Indeed, we have many reasons to believe the contest rules were overstrict. Watermark detectors were not directly available to us, but kept by SDMI, who would test our music _for us_ with an overhead of at least a few hours, sometimes half a day. Not only did this prevent oracle attacks (which real attackers will almost surely perform), but the oracle response did not tell us whether failure was due to the watermark surviving, or due to a decision that the music was too distorted.

Also, as you suspect, our submissions were not considered valid in the second round because we did not provide information about how the attacks worked by their deadline.

We also had reason to believe that at least one of the oracles did not behave as documented. That's perhaps the least extreme way to say it. The two "authentication" technologies (the other four were watermarking technologies) were inherently untestable; when SDMI claims that three technologies survived, chances are they are counting those two.

> Even if the contest was meaningful and the technology
> survived it, watermarking does not work. It is
> impossible to design a music watermarking technology
> that cannot be removed.

Ahem. Watermarking works just fine in other application domains, just not this one. By changing the application, one can move the goal posts so that attacks are no longer worth anything.

Consider as an example the (digital) watermarking of currency, so that scanners and photocopy machines will recognize a bill and refuse to scan it. This can be attacked in the usual way, but if the watermark was made visible rather than invisible, the standard attack of removing the mark becomes worthless; for without the mark, the bill appears clearly counterfeit to a human observer.

> Here's a brute-force attack: play the music and re-
> record it. Do it multiple times and use DSP technology
> to combine the recordings and eliminate noise.

It is not clear that this will always work for all watermarking techniques. On the other hand, if you have the capability of playing and re-recording music, you have already foiled the watermark.
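To put a number on the combine-the-recordings idea from the quote, here is a hedged sketch (Python with numpy, assumed available; a sine wave stands in for the music). Averaging N independent noisy captures shrinks the random noise by roughly a factor of sqrt(N); whether that actually defeats any particular watermark is, as noted, a separate question.

  import numpy as np

  rng = np.random.default_rng(0)
  music = np.sin(np.linspace(0, 20 * np.pi, 10_000))    # stand-in for the original signal

  def average_of_recordings(n_copies, noise_std=0.5):
      # each "re-recording" is the signal plus independent analog noise
      captures = music + rng.normal(0.0, noise_std, size=(n_copies, music.size))
      return captures.mean(axis=0)

  for n in (1, 4, 16, 64):
      residual = average_of_recordings(n) - music
      print(f"{n:3d} recordings: residual noise std = {residual.std():.3f}")
  # the residual drops roughly as 0.5 / sqrt(N)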

My colleague Min Wu developed a similar technique for video, which involves simulating transmission error by leaving out MPEG blocks, then correcting for those missing blocks using DSP techniques. After enough "playing and re-recording" a good deal of the original data is long gone.

> Even if watermarking works, it does not solve the
> content-protection problem. If a media player only
> plays watermarked files, then copies of a file will
> play. If a media player refuses to play watermarked
> files, then analog-to-digital copies will still work.

Watermarking schemes are designed to survive digital-analog-digital conversion. Very robust image watermarking schemes exist which appear to survive printing, xeroxing, then rescanning a watermarked image.

> Digital files intrinsically undermine the scarcity
> model of business: replicate many copies and sell each
> one. Companies that find alternate ways to do
> business, whether they be advertising funded, or
> patronage funded, or membership funded, or whatever,
> are likely to survive the digital economy. The media
> companies figured this out quickly when radio was
> invented -- and then television -- so why are they so
> slow to realize it this time around?

It is indeed surprising, given media companies' previous history. Until the Internet (or maybe until Digital Audio Tape), the recording industry seemed to view new technology as a new business opportunity. They went digital over a decade ago. Now they seem to want to sue the landscape itself into not changing anymore.

It is difficult to suppress the image of the crazy old miser, driven paranoid by fabulous wealth. Perhaps the flimsy compact "disc," enclosed in a flimsy jewel box yet wrapped in so much anti-theft plastic that a local TV news show aired a segment on how annoying they were, reinforces this view.

Interestingly, a great deal of the rise of MP3s is due to the recording industry's shoddy technology and unsuccessful distribution of music. People want music that won't skip, while we still use a 15-year-old medium that requires moving parts to read. People want to find specific albums that just can't find shelf space in a physical record store. The recording industry is not merely a victim of a shifting landscape, but a major cause of it, through their own failure to act.

From: Andrew Odlyzko
Subject: Watermarking

I agree with all your points about the SDMI hacking challenge, and would like to add another, which, surprisingly, I don't hear people mention. (I just came back from a conference in Germany on Digital Rights Management, and although many speakers dealt with watermarking, not one mentioned this problem.) What exactly is the threat model that watermarking is supposed to address? Even if you do have an iron-tight technical solution, all that will allow the content producer to determine is who bought the goods from a legitimate merchant. If I am an honest citizen who abides by the rules, and my laptop loaded with honestly purchased movies is stolen, Hollywood might be able to tell that the pirated copies came from my hard drive, but are they going to hold me responsible for their losses?

> The media companies figured this out quickly when
> radio was invented -- and then television -- so why
> are they so slow to realize it this time around?

I agree with you completely about the need for new business models. (My talk in Germany was on "Stronger copyright protection for cyberspace: Desirable, inevitable, and irrelevant," and I discussed how the industry really needs to think more creatively about their business instead of thrashing around hoping for secure protection schemes.) However, the claim that "[t]he media companies figured this out quickly when radio was invented" is definitely not correct. It took about a decade for this process. You can read about it in Susan Smulyan's book, "Selling Radio: The Commercialization of American Broadcasting, 1920-1934," Smithsonian Institution Press, 1994.

From: "Marcus J. Ranum"
Subject: Window of Exposure

I finally got a chance to re-re-read your article on reducing the window of exposure for a vulnerability, and I'd like to make a few comments. First off, I think that you've hit on a few very important ideas. I don't know of a way to tie your "exposure window" charts to a real, measurable, metric, but if we could, that would provide invaluable information to help people decide on their course of action in dealing with a vulnerability. There's a subtle point, which you note, that the important goal is to minimize the space under the curve: the number of users that are vulnerable at any given time.

So, you've given us a model whereby we can point and say "you are here" during the course of any given security flaw/response cycle. If you look at Figure 2 (limit knowledge of vulnerability) the area below the curve is dramatically less than the area below the curve in Figure 1 (announce the vulnerability). That's very significant.
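A toy rendering of that metric in Python (both curves are invented daily counts of vulnerable users, sketched only to show the area comparison between the two figures):

  def exposure_area(curve, dt=1.0):
      # trapezoidal integration: vulnerable users x days
      return sum((a + b) / 2 * dt for a, b in zip(curve, curve[1:]))

  # daily samples from discovery (day 0) until everyone has patched
  announced = [0, 80_000, 100_000, 90_000, 60_000, 30_000, 10_000, 2_000, 0]
  limited   = [0,  5_000,  20_000, 25_000, 20_000, 10_000,  4_000, 1_000, 0]

  print("full announcement:", exposure_area(announced), "user-days of exposure")
  print("limited knowledge:", exposure_area(limited),   "user-days of exposure")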

Your model of how the threat of the vulnerability "decays" is also thought-provoking. For example, in many of my talks I refer to the "rlogin -froot" bug, which was a vulnerability in AIX 3 (if I recall correctly) in the early '90s. Just about a month ago, I had a system administrator in my class ask me for details on how to fix that particular problem; he's still running AIX without patches. So, there is, indeed, a "tail off" factor to the curve, like you predicted. I've seen it.

You've also missed a very important special case scenario: the one in which a vulnerability is found, quietly diagnosed, quietly fixed, and never brought to the attention of the hacking community-at-large. In that case, the area under the curve is _zero_. Nobody is threatened or hurt at all. This points to a couple of things:

1) Vendors need to take security-critical quality assurance to a much higher level than they do. Finding and fixing your own bugs quickly and quietly is the only 100% victory solution.

2) Vendors need to be able to ensure that users actually install their patches.

The latter point is critical. I believe that within the next five years software will become self-updating for many applications. Antivirus software and streaming media/browsers do this today. The former does it to update its rules, the latter to install new bugs on your system faster and more easily. But security critical products need to do the same thing. Imagine installing a piece of security critical software and having it, at install time, ask you:

"This software has the ability to self-update in the event of critical functionality or security patches. In such an event, should I:
A) Cease functioning and notify an administrator to manually install an upgrade before resuming processing
B) Continue functioning in a reduced capacity
C) Automatically install the update and continue to function."

Providing a good automatic update service has some daunting technical requirements (signed code, secure distribution servers, etc.), but those problems are not significantly worse than the problems we face today in getting our users to update all their software manually. Perhaps savvy vendors will realize that such service provides an opportunity to "touch" their customers in a positive way (good marketing) on a regular basis, as well as to justify software maintenance fees. Ironically, Microsoft, who many hold as the great Satan of computer security, is leading the way here: recently the Microsoft IIS team fielded a program called HFCheck that automatically checks for IIS server security updates and alerts the user. The first vendor that can make a believable claim to have licked this problem will reap potentially huge rewards.
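On the "signed code" piece of that list: the essential client-side behavior is just to refuse any update whose signature does not verify against the vendor's pinned public key. A minimal sketch in Python using the third-party "cryptography" package (Ed25519 chosen purely for illustration; key distribution, revocation and rollback protection are all elided):

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Vendor side, done offline: sign the update package.
  vendor_key = Ed25519PrivateKey.generate()
  package = b"...bytes of the security patch..."
  signature = vendor_key.sign(package)

  # Client side: the public key ships pinned inside the product.
  pinned_public_key = vendor_key.public_key()

  def apply_update(pkg, sig):
      try:
          pinned_public_key.verify(sig, pkg)
      except InvalidSignature:
          print("rejecting update: signature does not verify")
          return
      print("signature verified; installing per the user's chosen policy (A, B or C)")

  apply_update(package, signature)                 # accepted
  apply_update(package + b" tampered", signature)  # rejected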

In such an environment, a vendor could easily base their judgements on progress along your exposure charts. As soon as there is a certain number of users at risk, it's time to push out an upgrade. Indeed, I predict that in such an environment, it'll become an interesting race between the hacker and the vendor to see if the hacker can issue an alert before the vendor can draw the hacker's fangs and make the alert look redundant by already having released a patch. I look forward to seeing this happen, since it's a necessary step in triggering a change in the current economy of vulnerability disclosure. Under the current economy, the hackers reap real benefits (ego and marketing) in spite of the users they are placing at risk. If they no longer reap those benefits, a significant component of their motivation will be gone.

Then we'll be left to deal with the individuals whose motives are purely malicious.

From: Anonymous
Subject: Anecdote about "open" WaveLAN networks.

I found my first "open" WaveLAN (IEEE 802.11) network by accident. I had a WaveLAN card in my laptop when I visited the California office of the company I work for. My first reaction to getting a working DHCP lease was "Great, I won't have to fiddle with cables. But I think I need to ask the local sysadmin whether he has thought about security." My happiness quickly changed into annoyance when I felt how slow the network was, and the annoyance changed into surprise when, while debugging the network, I realized that XXX.com wasn't the domain name of the company I work for (as a side note: XXX sells crypto hardware). I reported the incident to the local sysadmin and forgot about it.

When I got back to Sweden, I told a few friends at a restaurant in downtown Stockholm about the stupidity of XXX. Some time before the food arrived we started to discuss WaveLAN and somehow a laptop showed up on the table and voila! We were inside YYYinternal.com. We knew a guy working at YYY, told him about this, he told his sysadmin, and the sysadmin responded "I'll have to talk to the firewall guy." (I didn't know that firewalls had TEMPEST protection in their default configuration.) AFAIK the network has been shut off.

Another month or two passed. I was riding the bus around downtown Stockholm to get home after a pretty late evening and I was too tired to read. I fired up my laptop and started to detect networks. I found six or seven (one could have been a duplicate) during 30 minutes.

A week later a friend from Canada visited us. He stayed at a hotel in central Stockholm. He had a working network in some spots in his room. Apparently it belonged to a law firm. On the square outside the hotel the networks didn't work, simply because there were three of them fighting with each other. When we walked around 10 blocks in central Stockholm we found 5 to 15 networks.

And so on...

Many of the networks we found gave us DHCP leases and good routing out to the internet. Most of them were behind a firewall, but the firewall was "aimed" in the wrong direction; the WaveLAN was a part of the internal network. We were inside private networks of telcos, law firms, investment companies, consulting companies, you name it.

From: "David Gamey/Markham/IBM"
Subject: SSL Caching device?

I recently came across a device that appears to cache SSL! It appears that it can cache pages containing personalized data. I haven't got the full story, but I suspect that the HTTP request didn't contain distinguishing data other than an authentication cookie.

The press release:


An explanation:


It appears that the device works with a layer 3/4 switch and can transparently grab SSL connections (by port or packet content?). The marketing piece tries to position it as (or like) an SSL accelerator. It talks about graphics in SSL, being deployed on the network boundary, and being transparent to the end-user. It's setting itself up as a man-in-the-middle.

Depending on its caching rules, implementation bugs, etc., how many applications will this thing screw up? What happens if a "hacker" gets control of one of these things? The idea of something getting between me and my bank isn't comforting. I already go to my bank, check out the site/cert, then turn on JavaScript and reload. What next?
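The manual certificate check described above can also be scripted. A small sketch using only the Python standard library (the hostname is a placeholder, not a real endpoint) that pulls the server certificate and prints who issued it and to whom, so an unexpected middlebox certificate stands out:

  import socket
  import ssl

  host = "online.example-bank.com"     # placeholder hostname
  context = ssl.create_default_context()

  with socket.create_connection((host, 443), timeout=10) as sock:
      with context.wrap_socket(sock, server_hostname=host) as tls:
          cert = tls.getpeercert()     # validated against the system trust store

  print("subject:", dict(item[0] for item in cert["subject"]))
  print("issuer: ", dict(item[0] for item in cert["issuer"]))
  print("expires:", cert["notAfter"])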

From: Greg Guerin
Subject: DMCA Anti-Circumvention

The Digital Millennium Copyright Act (DMCA) prohibits certain acts of circumvention, among other things. In particular, section 1201(a)(1) begins: "No person shall circumvent a technological measure that effectively controls access to a work protected under this title."

Look at the word "effectively." Does it mean that the technological measure must be effective in order to qualify under section 1201(a)(1)? That is, if the measure is shown to be ineffective in controlling access, a mere tissue-paper lock, does that measure then cease to be a protected technological measure? But that means that any defeatable measure will lose its legal protection just by being defeated. And if that's not an incentive to circumvent, I don't know what is.

Or perhaps "effectively" means "is intended to", and only the INTENT of protecting the work matters, not the demonstrated strength or quality of the measure itself. In short, well-intentioned incompetence is a sufficient defense. But then, arguably, Java byte-codes in original unobfuscated form might qualify as an access control measure, since they are not easily readable by humans and require an "anti-circumvention technology" known as a disassembler or decompiler in order to be perceived by humans.

So what does "effectively" really mean under section 1201(a)(1)? Upon such fine points do great lawsuits hinge.

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

To subscribe, visit or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit . Back issues are available on .

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is a venture-funded company bringing innovative managed security solutions to the enterprise.



