Secret snooping keeps us vulnerable

This is an obvious point.

Part of NSA’s mission, a very noble part, has always been to play digital defense. They call this “information assurance”, and describe it as “the formidable challenge of preventing foreign adversaries from gaining access to sensitive or classified national security information.” In practice, their role is much broader than that. I run NSA software — on purpose! Thank you, National Security Agency, for SELinux. I’m not worried about foreign adversaries, in particular. I just don’t want my server hacked. NSA helps evaluate and debug encryption standards that find their way into civilian use. With all the talk about 21st century cyberwarfare, about dams being made to malfunction or cars hacked to spin out of control, you’d think the best way to keep the homeland safe from terrorists and foreign adversaries would be an exceptionally secure domestic infrastructure.

However, NSA faces a conflict of mission. The organization’s more famous, swashbuckling “signals intelligence” is about maintaining a digital offense. It relies on adversaries using vulnerable systems. NSA discovers (or purchases) uncorrected “exploits” in order to break into the systems on which it hopes to spy. Normally, a good-guy “white hat” hacker who discovers a vulnerability would quietly inform the provider of the exposed system so that the weakness can be eliminated as quickly and safely as possible. Eventually, if the issue is not resolved, she might inform the broad public, so people know they are at risk. Vulnerabilities that are discovered but not widely disclosed are the most dangerous, and the most valuable, to NSA for intelligence gathering purposes, but also to cyberterrorists and foreign adversaries. There are tradeoffs between the strategic advantage that comes from offensive capability and the weakness maintaining that capability necessarily introduces into domestic infrastructure. If the mission is really about protecting America from foreign threats (rather than enjoying the power of domestic surveillance), it is not at all obvious that we wouldn’t be better off nearly always hardening systems rather than holding exploits in reserve. Other countries undoubtedly tap the same backbones we do (albeit at different geographical locations and with the help of different suborned firms). Undoubtedly, passwords that nuclear-power-plant employees sloppily reuse occasionally slip unencrypted through those pipes.

Of course there is a trade-off. If security agencies did work aggressively to harden civilian infrastructure as soon as they discover vulnerabilities, the spooks would not have been able, for example, to stall Iran’s nuclear program with Stuxnet. But the same flaws that we exploited might also have been known to terrorists or foreign adversaries, who could have caused catastrophic industrial accidents in the US or elsewhere while that window was left open. Rather than applauding our clever cyberwarriors, perhaps we ought to be appalled at them for having left us dangerously exposed so that the Iranians would be too. When a cyberattack does come, via some vulnerability NSA might have patched, will we know enough to blame our cyberwarriors, or will we just shovel more money in their direction?

Before we let spy agencies make these tradeoffs for us (tradeoffs between security and security, for those who prioritize security über alles) we might want to think about institutional bias. Would it be rude to point out, given recent events, that NSA’s PowerPoint-blared enthusiasm for awesome, eyes-of-the-President offensive capabilities may have eclipsed the unglamorous but critical work of running a good defense? And no, going all North-Korea with personnel is not a solution. I’m very grateful that what’s leaked has leaked, but if reports about what Snowden got are accurate, the absence of ordinary precaution is shocking. There is no irreducible danger from sysadmins that would excuse such a failure. Root access to some machines does not imply pwning the organization. I am speculating, but both Snowden’s claims of expansive access and Keith Alexander’s assessment of “irreversible damage” suggest NSA prioritized analyst convenience over data compartmentalization and surveillance of use. That’s great for helping analysts get stuff into the President’s daily briefing while avoiding blowback for, uhm, questionable trawling. It should be incredibly embarrassing to an organization whose mission is securing data.

Perhaps my speculations are misguided. The point remains. At an organizational level and at a national level, there are tradeoffs between offensive capacity (surreptitious surveillance, sabotage) and defensive security. Maintaining a killer offense requires tolerating serious weaknesses in our defense. The burgeoning, sprawling surveillance state has its own incentives that render it ill-suited to make judgments about how much vulnerability is acceptable in pursuit of an impressive offense. That shouldn’t be their call.

Sometimes the best defense is a great defense. Even if it is a lot less awesome.

Note: The trade-offs described here apply especially to covert, surreptitious means of accessing computer systems. If “we” (however constituted) decide that we want systems that are both secure and susceptible to government surveillance, we can make use of “key escrow” or similar schemes. There would be significant technical challenges to getting these right, but at least the systems could be openly designed and vetted, and could include software-enforced auditing to document use and deter abuse. Systems designed to allow third-party access will always be weaker than well-designed systems without that “feature”, but they can be made a lot more secure than systems whose flaws are intentionally uncorrected in order to enable access. It would be important to avoid implementation monocultures and centralized, single-point-of-failure key repositories. A public review process could see to that. It would not be necessary to ban alternative systems, if we wish to maintain status quo capabilities.
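One way to avoid a centralized, single-point-of-failure key repository is to split any escrowed key among independent trustees with a threshold scheme, so that no single custodian (and no single breach) suffices to recover it. Here is a minimal sketch using Shamir secret sharing; the field size, threshold, and function names are purely illustrative, and a real escrow system would rely on vetted cryptographic libraries:

```python
# Toy Shamir secret sharing: an escrowed key is split among n trustees,
# any k of whom can reconstruct it; fewer than k learn nothing about it.
# Illustrative only -- real escrow needs vetted cryptographic code.
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over the field GF(P)

def split(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 0xC0FFEE
shares = split(key, k=3, n=5)          # five trustees, threshold three
assert reconstruct(shares[:3]) == key  # any three shares suffice
assert reconstruct(shares[2:]) == key
```

With a 3-of-5 split, compromising any two trustees reveals nothing about the key, while any three cooperating trustees (say, under judicial order) can reconstruct it.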

I’m not arguing any of this would be a good idea. But if we decide that we want data-mining or widespread surveillance, we can implement them in ways that are overt and publicly auditable rather than clandestine, insecure, and unaccountable. The status quo, a peculiar combination of lying a lot and demanding the public’s trust, is simply unsupportable.


19 Responses to “Secret snooping keeps us vulnerable”

  1. Harald Korneliussen writes:

    Your IT background is showing again. That’s a good thing, of course.

    Yes, one of the big points of SELinux is to avoid giving any individual such powers as Snowden claims to have had, and likely had, judging from those NSA statements. I saw someone quip (don’t remember where, sorry) that they felt cheated – here they are struggling with SELinux, then it turns out the NSA don’t even use it themselves?

    But as anyone who has tried “getting secure” knows, there’s another tradeoff here: security vs. convenience. It’s very, very hard to get stuff done if you do everything in the safest possible way. It seems NSA, or Booz Allen, at some point decided that having a sysadmin able to get things working was more important than compartmentalization. I can empathize with that.

    There’s another dysfunction here, also one that should be familiar to IT people: the use of contractors to work around your own crippling organizational culture, and dodge blame if something goes wrong. Sysadmins are an independent-minded bunch, you’re not picking from a large pool if you want them to have security clearance and pass all the silly tests the NSA demands. It must be tempting to call in a contractor, to be slightly less picky in looking for candidates, in return for taking the blame when something goes wrong.

    I’m thinking the new North Korea rules you mention will just make this dysfunction worse. It’s totally classic bureaucracy, with an impulsive decision from the top, so that they can be seen as “doing something”, just aggravating the everyday problems they face on the floor.

  2. Highgamma writes:

    Brilliant point. Perverse incentives at work. Needs to be discussed more.
    Please write an Op-ed.

  3. en_anon writes:

    Another great post on the NSA leaks! Thank you – you really are on a roll. I wanted to add two things quickly:

    1. There’s another way in which the offensive mission of the NSA (and other similar operations) undermines defense: in producing an offensive virus (or whatever) like Stuxnet, the NSA and everyone else involved may have slowed Iranian nuclear capabilities, but haven’t they also inadvertently provided Iran with the code of the virus, which Iran can – once it cracks the virus code – exploit for its own purposes? They can sell the code, or try to use a variation of it themselves to attack the US…

    2. We don’t often think about bounded rationality of organizations (in the original Herbert Simon sense) when thinking about government agencies. They seem, at least in popular accounts, either all-powerful or completely incompetent. It seems like there’s a lot more to say about bounded rationality in organizations (e.g. the lack of internal security you discuss) and its implications for national security, the decency of our society, etc. I’d love to hear more of your thoughts on the issue…

  4. […] See full story on […]

  5. Morgan Warstler writes:

    I hate this type of game theory…

    We assume we are spied on by others, China / UK / Russia etc. It doesn’t matter in as much as we know we are doing it to them, and they know it, and we know they know it, and they know…. and on and on.

    What we WANT is to be able to do bad things under the cover of night and not have the govt. sitting on a trove of past collected information that they can use to catch us. If a bad thing happens, and we are suspected, that’s when the game begins – the govt. must try to prove it.

    That is our actual demand.

    It is our demand because IF the govt. has a total past history, then the govt. can choose which of us to prosecute, because everyone is guilty of some things.


    The correct way out of this, is for the govt. to CHOOSE between the NSA and the FBI and let them fight it out.

    IF the NSA wants to keep a history then FINE, as long as any evidence ever derived from it is fruit of the poisoned tree.

    Sure look in the NSA trove of data, cause then I’m immune!

    So there is an easy way to ensure this stuff doesn’t bite us very hard, and lets the NSA go apeshit with the handicam… we just have to insist on it.

  6. interfluidity » Secret snooping keeps us vulnerable: I found it very interesting; I would have liked it to be longer, but as you know, if something good is brief, it is twice as good. Congratulations on your site. Kisses.

  7. […] & Noble: The Final Chapter? Welcome to the (Don’t Be) Evil Empire: Google Eats the World Secret snooping keeps us vulnerable Finally You’ll Get To See The Secret Consumer Dossier They Have On You The Universal String […]

  8. RSJ writes:

    I think this is a false trade-off.

    The thing about software is that the number of bugs is effectively infinite. There was a study of the Solaris 2 kernel — an old, well-designed piece of production software. After it was released, auditors found a lot of bugs per day spent looking. But the interesting thing is that thereafter the rate of bug discovery stabilized. It did not approach zero. You can still find bugs in the Solaris 2 kernel. You can still find vulns in that kernel.

    What that means is that if you want to find one bug (say), then spend a day looking. If you want to find two, spend two days looking. There are effectively an infinite number of bugs.

    An attacker just needs some time with the source, and they will find as many vulnerabilities as they want to find.

    Therefore it makes no sense to criticize the NSA for discovering vulns and not reporting the results to the vendor. Anyone else can discover their own vulns and not report them. Their vulns will be different from those discovered by the NSA. There are enough vulns to go around — there is no scarcity.

  9. rsj — i think that’s a lot too glib. modern software is complicated, and that’s a challenge, but there is no law of nature or constancy that characterizes it. an anecdote about Solaris 2 (a study, sure, but one that investigated what amounts to a single anecdote: one piece of software at one (gentler) moment in time and technological circumstance) needn’t be representative.

    more fundamentally, though bugs may be numerous, they vary in terms of severity and ease of discovery. even in the solaris anecdote, i doubt very strongly you’d find that the marginal cost of finding severe vulnerabilities was anything close to constant over time. not-widely-known, reliable-to-use “zero day exploits” are bought and sold, and command high prices. sure, there remains a near infinity of bugs. if all you do is count, keep counting. but the frequency of finding really “good” ones gets lower over time (holding constant software configuration).

    an alternative NSA that devoted its full current resource base to attacking systems in order to discover, disclose, and patch — an NSA that played the role of national immune system rather than national eavesdropper — would not eliminate all serious, exploitable flaws. but it would drive up the cost, in time and certainty and money, of finding and exploiting the next vulnerability in the systems it deemed most crucial to protect. i think it unlikely that severity-adjusted exploits have a flat marginal cost curve. as long as there is increasing marginal cost, the tradeoff is real.
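    a toy simulation of that increasing-marginal-cost story (all parameters invented): even if minor bugs are inexhaustible, a finite pool of severe vulnerabilities depletes, so the average number of search-days per severe find rises over time.

```python
# Toy model: a finite pool of severe vulnerabilities, each with a small
# independent chance of being found on any given day. As the pool depletes,
# the wait for the next severe find lengthens. Parameters are invented.
import random

random.seed(0)
severe_pool = 40      # severe vulnerabilities initially present
p_daily = 0.02        # per-day discovery chance for each remaining vuln

days_per_find = []
remaining = severe_pool
days = 0
while remaining > 0:
    days += 1
    found = sum(1 for _ in range(remaining) if random.random() < p_daily)
    if found:
        days_per_find.extend([days] * found)  # search-days spent on this batch
        remaining -= found
        days = 0

early = sum(days_per_find[:10]) / 10   # avg cost of the first ten finds
late = sum(days_per_find[-10:]) / 10   # avg cost of the last ten
print(f"first ten: {early:.1f} days each; last ten: {late:.1f} days each")
```

    the raw count of bugs never runs out in a world like rsj describes; it is the severity-weighted cost curve that slopes upward.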

  10. RSJ writes:


    I don’t think even the NSA has anywhere near enough resources to discover all “potent” zero days in the major software packages (whose number is growing). Our nation’s GDP is nowhere near sufficient.

    The development methodologies and business pressures simply do not allow for the delivery of secure software to the masses. More importantly, they do not allow for the delivery of software that can be verified to the masses.

    The market goes to whoever delivers the software first. That puts enormous pressure on getting the product out the door. Customers care more about features than security. This is an increasing returns industry so small deviations from what customers want on a price per feature basis leads to ruin.

    NASA is famous for delivering quality software — they do all the UML modeling, with red teams trying to break software at the same time that it is delivered. Their developers don’t pull all-nighters. But the cost per line of code is well above what the market will bear. For-profit vendors are not going to write software that is easy to audit or verify. They are not even going to write security policy models. Because of this, commercial software is extremely hard to verify.

    Imagine a (typical) Microsoft vulnerability: The NTLMv2 protocol used in webdav is flawed and is thus vulnerable to replay. For interoperability you can’t get rid of this or require Kerberos. A company exposes an exchange server on the internet to support webmail. At the same time, some unrelated functionality — calendar invites — was also flawed, allowing the “from” field to be spoofed in the invitation. This also cannot be changed because it is a widely adopted protocol. Add to that default Outlook behavior that will automatically perform the handshake based on the unauthenticated .ics fields and you have a zero day against Outlook. MS still has not fixed this.

    That’s just one example of the interaction of three pieces. Now imagine that there are thousands of pieces, each being pushed out to market every year. The idea that the NSA can somehow “harden” this to prevent zero days is absurd.

    Hardening a system for which detailed security policy documents are not even written is impossible. But no one uses these methodologies (outside the defense/NSA space).

    I’ve managed two common criteria EAL 4+ certifications. It took about 20 man-years of effort. The result of this type of verification was software that still had security holes, and we are not talking about anything near as complex as an operating system; our software was designed with security in mind. It is literally impossible to ensure that there are no remote exploits in any piece of software, no matter how complex.

    Nor is there some natural ‘ordering’ of vulnerabilities to ensure that as long as the NSA devotes more resources to the effort than others, vulns found by third parties will be the same as those found by the NSA.

    Here is some data:

    From the paper:

    “Figure 8 shows the vulnerability discovery rate for each program as a function of age. For the moment, focus on the left panels, which show the number of vulnerabilities found in any given period of a program’s life (grouped by quarter). Visually, there is no apparent downward trend in finding rate for Windows NT 4.0 and Solaris 2.5.1, and only a very weak one (if any) for FreeBSD. This visual impression is produced primarily by the peak in quarter 4. RedHat 7.0 has no visually apparent downward trend […]

    Moving beyond visual analysis, we can apply a number of statistical tests to look for trends. The simplest procedure is to attempt a linear fit to the data. Alternately, we can assume that the data fits a Goel-Okumoto model and fit an exponential using non-linear least-squares. Neither fit reveals any significant trend. In fact, the data for Windows NT 4.0 is so irregular that the non-linear least-squares fit for the exponential failed entirely with a singular gradient. The results of these regressions are shown in Figure 9. Note the extremely large standard errors and p values, indicating the lack of any clear trend.”
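    For concreteness, the Goel-Okumoto model mentioned in the quote posits that cumulative vulnerabilities found by time t follow m(t) = a*(1 - exp(-b*t)); a flat finding rate shows up as a fit with b near zero (or, as in the paper, no significant trend at all). A sketch of such a fit on synthetic data (the numbers below are invented, not from the study):

```python
# Crude Goel-Okumoto fit: generate synthetic cumulative find counts from
# m(t) = a*(1 - exp(-b*t)) with a=100, b=0.15, then recover the parameters
# by brute-force least squares. A real analysis would use a proper
# non-linear least-squares routine, as in the quoted study.
import math

quarters = list(range(1, 21))
found = [round(100 * (1 - math.exp(-0.15 * t))) for t in quarters]

def sse(a, b):
    # sum of squared residuals between data and the model curve
    return sum((y - a * (1 - math.exp(-b * t))) ** 2
               for t, y in zip(quarters, found))

best = min(((a, b) for a in range(50, 151, 5)
            for b in [i / 100 for i in range(1, 51)]),
           key=lambda ab: sse(*ab))
print("fitted (a, b):", best)  # close to the generating values (100, 0.15)
```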

  11. […] Secret snooping keeps us vulnerable Interfluidity […]

  12. rsj — nobody has sufficient resources to “harden” everything, or to bring software in general up to some specification or standard. but we are interested not in a binary choice, but in matters of degree. the question is whether a budget of, say, $40B per year could make a significant improvement in the defensive security of our IT infrastructure. i say “yes, most definitely”, not only by finding and helping fix vulnerabilities (as we are squabbling over here), but also by developing more secure systems de novo that the private and public sectors might come to rely on, without some of the feature-creep insecurity that, as you eloquently describe, often comes to infect commercial software. there’s obvious precedent for this: DES, Skipjack, AES, etc.

    let’s put the de novo role aside for now, and continue with the question we began to debate: is it worth it to pay “white hats” to find vulnerabilities? i really enjoyed the rescorla paper, thank you for that. just skimming the literature that cited rescorla’s work, though, makes it clear that the position you take (and that he suggests a bit less definitively) is far from widely agreed. regardless, i think the way he modeled costs is great. in particular, the case for seeking vulnerabilities rises with 1) the probability that a vulnerability first found will be rediscovered (i.e. the correlation of discovery); 2) the degree to which the space of vulnerabilities is limited; and 3) the costs that private “black-hat” discovery imposes prior to and in addition to the costs that “white hat” discovery provokes in order to implement a fix.

    you argue basically that (1) is very small and (2) is very large, so that stamping out vulnerabilities would do little good or be actively counterproductive unless (3) is very large. my intuition, and much of the literature i skimmed seems to agree, is that (1) is not so small, people’s approaches to finding vulnerabilities are correlated and some are easier to find than others, and (2) is less unbounded than you suggest, vulnerability discovery is best modeled as cumulating as an S-shaped curve rather than something increasing at a near constant pace. i’ve never looked at this stuff before — you certainly know this literature better than i do. but i think it’s fair to say that the strong position that you or a less-hedged rescorla might take remains controversial. all of this leaves out factor (3), which motivates much of the current-event interest in cyberwarfare, the claim that cyberattacks are likely to become costly in ways heretofore unknown, leading to very direct loss of life and physical property rather than lost time, data, and money (which are certainly bad enough). the greater factor (3), the greater the case for proactive vulnerability-finding, holding (1) and (2) constant. [however, a big factor (3) may also increase the offensive benefit of failing to disclose, if one views the notion of such offensive benefit more favorably than i do.]
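    a back-of-envelope rendering of those three factors, with invented numbers (this is a sketch of the decision, not rescorla’s model):

```python
# Stockpiling a vulnerability gambles that nobody else rediscovers it;
# disclosing costs a patch plus the forgone offensive value. All numbers
# here are invented to illustrate how factor (1) drives the decision.
def expected_cost(p_rediscovery, blackhat_damage, patch_cost, offense_value,
                  disclose):
    if disclose:
        return patch_cost + offense_value   # fix it, give up the weapon
    return p_rediscovery * blackhat_damage  # risk someone else finds it too

# if rediscovery is likely (factor 1 large) and black-hat damage severe
# (factor 3 large), disclosure is cheaper:
assert expected_cost(0.5, 1000, 10, 50, disclose=True) < \
       expected_cost(0.5, 1000, 10, 50, disclose=False)

# if rediscovery is very unlikely, stockpiling looks cheaper -- which is
# why the correlation-of-discovery parameter is so contested:
assert expected_cost(0.01, 1000, 10, 50, disclose=True) > \
       expected_cost(0.01, 1000, 10, 50, disclose=False)
```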

    i think we’re going to end up disagreeing reasonably, based on different parameter estimates in a broadly shared model. you know more about the domain than i do, but among those who play here, there is not a consensus that your view is correct. (i think it’s fair to say that my less provocative and less interesting view remains the conventional wisdom, with some not-incontrovertible empirical support.) i do want to emphasize again that i am not positing any perfect ranking of vulnerabilities in terms of order of discovery, nor am i positing that it would be possible to bring nearly all software up to some standard that would be considered secure. i am simply arguing that there would be reductions in expected cost if an aggressive, well-funded “white hat” approach were taken, that vulnerabilities would be exploited less frequently and at greater cost to bad guys, and that this reduction in expected cost might more than overwhelm whatever benefits are purported to come from offensive capacity. my case is enhanced by the capacity of a national security white-hat to allocate resources based on national vulnerabilities. perhaps we should be more concerned about securing industrial controllers and automobile computers than Windows 12, and in economic terms there may be an externality here to correct: the costs of serious failures will never be fully borne by IT providers.

    while skimming the literature you turned me on to, i found this nice game theory paper that addresses quite directly the tradeoff described in the post. interestingly, as they’ve modeled the game, the equilibrium i pretty plainly hope for, where all players prefer maximizing defense rather than “stockpiling” a cyberoffense, is (almost) never stable. however, they model conflicts between nations, in which everybody-defending is not stable because if everybody else defends, a player can clearly gain an advantage by being the sole entity capable of attacking. (in a “cyberhawk” variation of the game, the likelihood that everybody stockpiles in an offensive arms race increases as the likelihood of vulnerability rediscovery decreases. as always, that is a crucial question.)

    i wonder, however, how their game might change if we introduced a “terrorist” or “mafia” player that had no interest in defense. such a player would always attack rather than disclose, because she doesn’t capture any substantial fraction of the benefits of defense. it’s no longer obvious, then, that there would not be an equilibrium in which all nation-state players would rationally opt for defense. the nash-equilibrium-breaking temptation of everyone else’s beneficence would never apply to any nation state in this world. so given the omnipresence of nonstate attackers, there may be equilibria in which all nation-states should defend.
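    to make that concrete, a toy payoff table (numbers invented, and much cruder than the paper’s model) showing how a nonstate attacker can stabilize mutual defense:

```python
# A toy version of the two-nation cyber game, with an optional nonstate
# attacker who always attacks. Payoff numbers are invented; the point is
# only that such an attacker can make mutual defense a stable equilibrium.
ADVANTAGE = 5        # gain from being the sole stockpiler
EXPOSED = 3          # loss from being attacked by a stockpiling rival
TERROR_DAMAGE = 8    # loss a stockpiler (with weak defenses) takes from
                     # nonstate attacks

def payoff(me, other, terrorist):
    p = 0
    if me == "stockpile" and other == "defend":
        p += ADVANTAGE
    if me == "defend" and other == "stockpile":
        p -= EXPOSED
    if me == "stockpile" and terrorist:
        p -= TERROR_DAMAGE   # stockpiling means tolerating weak defenses
    return p

def defend_defend_is_nash(terrorist):
    # (defend, defend) is a Nash equilibrium iff neither nation gains
    # by unilaterally switching to stockpile
    return (payoff("stockpile", "defend", terrorist)
            <= payoff("defend", "defend", terrorist))

assert not defend_defend_is_nash(terrorist=False)  # the paper's instability
assert defend_defend_is_nash(terrorist=True)       # mutual defense can hold
```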

  13. RSJ writes:


    Let me summarize the arguments:

    Claim 1: The NSA is endangering U.S. security by sitting on vulns it discovers rather than disclosing them
    Claim 2: Third party white hat testing is “a good thing” for security
    Claim 3: $40 B of free security QA provided by the NSA would improve the security of U.S. software at no cost to functionality or national output
    Claim 4: $40B of free security QA by the NSA is a good national policy, even though it is a trade off.

    My counterargument:

    The security of software is a function of a series of trade-offs at the design, implementation, and testing stages. These tradeoffs are the result of responding to market demand. Pushing a firm off this tangency is not welfare enhancing.

    Random whitehat testing is not going to increase the assurance level of the software.

    The engineers are already busy fixing bugs and adding features. There is a stack of thousands of vulns already known to the company that has been determined to be less important to fix. Now some random person comes along and threatens to disclose a zero day unless the firm stops what they’re doing and fixes this particular (somewhat random) bug. Is this welfare enhancing? Engineers are pushed off of what they are currently doing (perhaps fixing errors in existing functionality, or adding new features) and must fix this bug. It screws with schedules, and it doesn’t accomplish a whole lot.

    For this reason, many firms are antagonistic to whitehat testing. However, whitehat testing *is* a good thing because it helps pen-testers hone their skills and conduct research on vulnerability classes. Without whitehat testing, security researchers would be confined to pen-testing products that they own or third party simulations. But this does not mean that whitehat testing by the NSA is a good thing.

    Some firms — those with better security development processes — welcome whitehat testing because of the signaling. When a firm announces a bug bounty and offers to pay $1200 per vuln, it is telling the market that, assuming an hourly rate of $60, it takes a pen-tester on average more than 20 hours to find a vuln. That is a claim that the company can indirectly make without putting anything in writing (the EULA will continue to disclaim everything). When a firm commits to fixing vulns within 90 days, it is making a similar claim.
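    The arithmetic behind that signal, spelled out with the same numbers:

```python
# The signaling arithmetic from the bounty example: if hunting a vuln took
# fewer hours than bounty / rate, pen-testers could profitably farm the
# bounty, so a sustainable bounty implies the average hunt takes longer.
bounty = 1200          # dollars paid per vulnerability
hourly_rate = 60       # a pen-tester's opportunity cost, dollars per hour
breakeven_hours = bounty / hourly_rate
print(breakeven_hours)  # 20.0
```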

    Therefore I reject 1). I agree with 2), but because whitehat testing is a good educational and signaling tool, not because we are exhausting the pool of vulns. For 3) and 4) I don’t see how this is welfare enhancing. If the market is not willing to pay for software developed more securely, this will be welfare reducing.

    Builders choose to build houses that are easily broken into, because households are not willing to pay for houses that are harder to break into. This is a market trade-off. Why mandate that everyone lives in a windowless steel box? Particularly when a thief can knock on the door and pull a gun on you anyway. In an environment where phishing is pretty easy, and many employees gladly give away their AD passwords for a candy bar, software vulns are generally not the binding constraint. Many end users are rationally not willing to pay for software more secure than what they have now.

    Those with higher security needs are free to purchase more securely built structures, just as with software. Let the market pay for more security if they want it.

    To the degree that there are externalities — e.g. an owner of an infected machine may be damaging someone else’s system — then provide reforms so that vendors and operators cannot disclaim all liability, or put systems in place at the IP layer to detect infected machines and take them offline with a notice to the owner that they have been infected. That would be an appropriate research project for the NSA.

    As with any kind of crime, the bulk of the heavy lifting will be done with after-the-fact law enforcement, not by increasing the difficulty of committing the crime.

    Here, software has unique challenges due to its automated nature and international scope but also unique opportunities, e.g. for mass revocable deposits, for real-time auditing and monitoring large amounts of financial flows, etc.

  14. RSJ writes:

    Also, we can reconcile the S-shaped curve versus the linear curve by taking into account that new code is being added to each product.

    Here is another paper, one that looks at OpenBSD, which has a well-known slow pace of development and feature growth when compared to Linux or even FreeBSD:


    “Over a period of 7.5 years and fifteen releases, 62% of the 140 vulnerabilities reported in OpenBSD were foundational: present in the code at the beginning of the study. It took more than two and a half years for the first half of these foundational vulnerabilities to be reported.

    We found that 61% of the source code in the final version studied is foundational: it remains unaltered from the initial version released 7.5 years earlier. The rate of reporting of foundational vulnerabilities in OpenBSD is thus likely to continue to greatly influence the overall rate of vulnerability reporting.

    We also found statistically significant evidence that the rate of foundational vulnerability reports decreased during the study period. We utilized a reliability growth model to estimate that 67.6% of the vulnerabilities in the foundation version had been found. The model’s estimate of the expected number of foundational vulnerabilities reported per day decreased from 0.051 at the start of the study to 0.024.”

  15. rsj — tolerance of the annoyance and (the fairly negligible chance of) ex post law enforcement may be OK for Windows on your desktop. but if the cyberwarriors aren’t lying to us about the vulnerability of dams, banks, factories, etc., the potential “externality” is humongous, and relying on liability and ex post enforcement is absurd. if you want to argue that all of that talk is exaggerated, that in fact the worst we can fear is a steady drip of DoS botnets, remediable identity theft, and other annoyances of modern life, OK. but i don’t think you are suggesting that. in fact, you are claiming that status quo commercial software, including that within industrial controllers that manage US power plants as well as Iranian centrifuges, is radically insecure. if that’s right, it’s completely intolerable, and silly to say we should rely on law enforcement to chasten (perhaps foreign or state) attackers after dams have unleashed reservoirs, automobiles have spontaneously accelerated, the cooling flow to nukes has failed, etc.

    using examples like Solaris or Windows or BSD is convenient because there is a literature, but it calls to mind use-cases that trivialize the issue. apparently, really really bad things can be done by breaking into military, industrial, banking etc. information systems. apparently, the US has employed these tools offensively to powerful (and arguably virtuous) effect in Iran, and there have been successful counterattacks like industrial sabotage in Saudi Arabia, the cost of which was not catastrophic but could have been. as far as i know, very few people dismiss this talk of potential catastrophe as silly alarmism. (do you? should i?) it is now public knowledge that the US games out offensive cyberscenarios in the same way it traditionally games out attack and invasion scenarios. surely our military is not alone, inside and outside of government.

    the trade-off we are discussing is not whether it’d be good to spend $40B hiring “white-hats” to exploit and disclose random exploits of commercial software. our government is already in the business of prioritizing exploits based on destructive potential. the question is, when we find something really “good”, when there are Siemens controllers in wide use worldwide and we know an effective exploit, is it better to stockpile or disclose? there’s a continuum between random MS Outlook bugs and stuff that could make a new Fukushima, because as we saw with Stuxnet, the same crap that people use to steal credit card numbers is often prerequisite to getting into the industrial controllers. but obviously a “white hat” NSA wouldn’t be random. it would be actively gaming out how really bad things could be done to and with status quo infrastructure and aggressively working with industry to patch those flaws. in doing so, it would be giving up its own best weapons. that is a real tradeoff, one i suggest be resolved in favor of defense.

    liability and law enforcement simply aren’t an answer to the problem of cyberwarfare. a claim that under current economic arrangements, it is simply impossible to defend critical, life-or-death systems is basically a way of saying current economic arrangements are intolerable, and we should have NASA design our industrial controllers. asserting that this kind of software empirically just sucks and there’s no way to fix it — but hey, we can develop our own attacks! — doesn’t take us very far.

  16. Warren Grimsley writes:

    “When a cyberattack does come … we just shovel more money in their direction.” (my edit) This is the end of the trade-off between feeling afraid and feeling secure. Send money and feel secure. This is the logic of a protection racket.

    I am not a cyber expert, but inductive logic would tell me that absence of evidence is not proof of the success of a program. There is a well-oiled propaganda machine which enjoys motivating the public with fear. I can imagine that the messengers actually feel the fear they report, obviating the requirement to reveal any objective data that leads them to their conclusion. When the authority says “run,” I should damn well run because…there may have been a tiger in those bushes and not just the rustling of the leaves caused by bureaucratic hot air.

  17. RSJ writes:


    I was talking about mass commercial software because it is the area that I am familiar with and because this area has the most coverage. For custom/embedded software, there is a bit of a gray area, and I believe the industry would be better served with more regulation.

    The point being, we are working on it, and to go further requires the right approach. Think, for example, of safes. Safes are ranked according to the time required to break into them. It is not a matter of breakable versus unbreakable. An attacker with a sufficient amount of time will be able to break into any safe. A foreign government always has sufficient time.

    It is the same with software. Think in terms of levels of assurance. A level of 1 means minimal assurance, such that a casual attacker can compromise the software. A level of 7 means a mathematical proof that the software will behave as expected. Obviously the cost for delivering software of a given assurance level increases exponentially with the complexity of the software and the desired assurance level. There is level 7 software out there (e.g. Schlumberger sells javacards at level 7). For minimal-functionality embedded software, it is possible to reach high levels of assurance. For general purpose operating systems it is impossible.
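    As a toy illustration of the cost claim (the functional form and the base constant here are assumptions for illustration, not certification data), the exponential growth might be sketched as:

    ```python
    # toy sketch only: assumes cost grows exponentially in (complexity x target
    # assurance level); the base constant is made up, not from any real data

    def assurance_cost(complexity: float, level: int, base: float = 2.0) -> float:
        """Illustrative cost of delivering software at a given assurance level."""
        return base ** (complexity * level)

    # for fixed complexity, each added level multiplies the cost by the base;
    # doubling complexity at a fixed level squares it
    assert assurance_cost(1.0, 4) == 2 * assurance_cost(1.0, 3)
    assert assurance_cost(2.0, 3) == assurance_cost(1.0, 3) ** 2
    ```

    Under these assumptions, pushing a complex general-purpose system toward level 7 quickly becomes unaffordable, while a small embedded component can reach a high level at tolerable cost.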

    Then let’s assume that externalities are such that utilities and infrastructure companies will tend to cheat. It makes sense to regulate what level of assurance of software they should run. Be aware that for all but the least complex software components, foreign governments will be able to launch a successful attack.

    So now we have target levels of assurance for different infrastructure software. There is already a validation and certification process in which certified test labs validate the software as belonging to an assurance level. The key thing to keep in mind is that an assurance level is a positive statement not made by whitehat testing alone, but by a review of the entire software development lifecycle. It is a process question as much as a “let’s let third parties test it” question. Also be aware that the NSA, in partnership with NIST, already conducts whitehat testing as part of this validation effort.

    The results of this whitehat testing are shared with the vendor, and the vendor is required to address (not necessarily fix) every issue found.

    So we already have in place more thorough validation efforts and have accumulated a decent body of knowledge here. The main problem is that under current regulation, these efforts are required only for government purchases of security software (e.g. firewalls and the like). I would support expanding these regulations to cover utilities and the like. I would not favor any form of subsidies to the vendor — the utility companies should be forced to pay for the added security, just as they pay for physical security.

    Now let’s assume we are in this ideal state already. That is, every piece of critical infrastructure software is validated by NIAP to EAL 6 or higher. Nevertheless it will still be the case that the NSA (as well as other governments) will have stockpiled vulnerabilities against them. It will not make sense to disclose these to the vendor because doing so will not elevate the software to a higher assurance level. This is because other vulnerabilities can also be found with effort similar to that expended to find the original set. The effort required is what the assurance level of the software measures, and this assurance level is much more dependent on the development methodology of the software than on fixing individual bugs.

  18. […] See full story on […]

  19. rsj — and so we are back to our original disagreement.

    thanks for the information regarding certifications. that’s a part of the software universe i’ve never been anywhere near. certifications are always hard, in any nontrivial domain: one wants to keep the correlation between certification and intended meaning while minimizing requirements that don’t effectively enforce that correlation (“bureaucracy”, “busy work”). i suspect you’ve had your share of experience with both sides of that. one wants some kind of assurance with respect to the cloud-connected device operating one’s “self-driving car”. i wonder how we’ll solve that problem. even software that is “provably correct” in theory is unlikely to be so in practice, cf Ken Thompson’s famous compiler hack. and large software systems will never be provably correct.

    regardless, it is plainly true that not all software is alike from a complexity/security perspective. it matters very much how software is architected and designed; poorly architected software will be irredeemably impossible to secure. the approach to programming made famous by Microsoft — ship “features” fast, worry about problems later — tends to yield software which cannot be secured without breaking intolerably many dependencies. attention matters, throughout the development cycle.

    the question we continue to disagree over is whether, especially for software designed with the unusual care that ought to be applied to systems whose failure would be catastrophic, plugging the vulnerabilities that we find is useful.

    i think you are simply mistaken. you think i am. the literature, such as it is, contains both views in continuing controversy. reasonable people disagree.

    but we should be able to find some common ground. the vulnerability-modeling literature generally finds an S-shaped curve in quantities of cumulative vulnerabilities reported. that is attempting to hold the initial codebase constant. as you point out, both “fixes” and upgrades (new versions) introduce vulnerabilities. if we lived in a world that held the codebase constant except for always-perfect fixes, then our dispute would be over the shape of the right-upper tail of the S-shaped curve and the degree to which points tend to be ordered on the curve. my position makes sense within the right-upper tail if 1) the rate of increase is ever decreasing OR 2) there is, even in the right tail, an ordering with respect to ease or probability of discovery, so that early discovery and repair of a vulnerability reduces the finite population of flaws “as or less easy” than the vulnerability discovered. i think both of those conditions would be true, if we could observe such an unrealistically lifeless piece of software. your provocative position is that the right tail of a cumulative severe-vulnerability curve 1) has a constant slope AND 2) that the probability of finding a vulnerability is independent of vulnerabilities already squashed. in its perfect form, the truth of those conditions strikes me as vanishingly unlikely, but you might reasonably (though i’d insist mistakenly) argue that as a cognitive model for guiding action, it’s the right model, because there will always be such a large space of not-too-difficult vulnerabilities that the elimination of any reasonable number just doesn’t matter.
you might also argue, as i don’t think you have but rescorla does, that even if finding vulnerabilities does significantly reduce the probability or increase the time of finding new vulnerabilities, it may yet be counterproductive because of costs imposed by causing people to repair the fraction of vulnerabilities that would never have been (re)discovered if not found by white-hats and repaired.
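    a minimal sketch of the S-shaped curve i’m describing, assuming a logistic functional form with entirely made-up parameters (the literature fits various sigmoids; nothing here is fitted to data):

    ```python
    import math

    def cumulative_vulns(t, total=100.0, rate=0.5, midpoint=10.0):
        """Logistic model: cumulative vulnerabilities reported by time t."""
        return total / (1.0 + math.exp(-rate * (t - midpoint)))

    def rate_at(t, dt=0.01):
        """Numerical slope: new vulnerabilities reported per unit time at t."""
        return (cumulative_vulns(t + dt) - cumulative_vulns(t)) / dt

    # condition (1) above: in the right-upper tail (past the midpoint) the
    # rate of new discoveries keeps falling, so each unit of attacker effort
    # yields ever fewer finds
    assert rate_at(20) > rate_at(25) > rate_at(30)
    ```

    the dispute, in these terms, is over whether the tail of the real curve flattens like this or keeps a roughly constant slope.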

    but when we leave the idealized world of a fixed codebase, some of this controversy becomes academic. the empirical evidence is clear that early on, upon a new release, there is a period of rapid vulnerability finding and squashing, the fast upward-sloping part of the S-shaped curve. unless you attribute all of that activity to attention (i.e., perhaps all vulns are equally hard to find right from the start, and the curve’s shape is solely a function of developers’ enthusiasm for attacking new code), it seems pretty clear that new releases and modifications contain “low-hanging fruit”, vulnerabilities that are easier to spot than those that will remain available once we reach the flatter, right-upper tail of the curve. if that is true, then in general increased attention will “speed up time”, increase the slope of the strongly upward-slanting region of the S-curve and the likelihood that we are in the flattish upper-right tail. we can argue over the shape of that tail all day long, but it seems pretty indisputable that software is safer there than before we reach there, and attention that reduces the time during which very easy exploits remain available increases security. (a counterargument might be that repairs introduce easy vulnerabilities at such a rate that they don’t net help, but if you think that, you must think that the only explanation for the S-shaped curve is attentional.)
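    the “low-hanging fruit” point can be illustrated with a toy simulation (every probability below is made up, and discoveries are treated as independent for simplicity): if a release ships flaws of heterogeneous difficulty, squashing the easy ones lowers the chance that a fixed unit of attention finds another.

    ```python
    def find_probability(pool):
        """Chance that one unit of attention finds at least one vulnerability,
        treating each flaw's discovery as an independent event."""
        miss = 1.0
        for p in pool:
            miss *= (1.0 - p)
        return 1.0 - miss

    # a fresh release: a few easy-to-find flaws plus a long hard tail
    fresh = [0.5, 0.4, 0.3] + [0.01] * 50

    # after early attention finds and fixes the easy flaws, only the hard
    # tail remains
    hardened = [0.01] * 50

    # squashing low-hanging fruit materially reduces the payoff to attack effort
    assert find_probability(fresh) > find_probability(hardened)
    ```

    nothing in this toy depends on the exact numbers; it only requires that early finds skew toward the easier flaws.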

    i think, especially for high quality software (which most commercial software is not, but hopefully very critical software is), that both in the upper-right tail and in the fast-increasing part of the curve, squashing vulnerabilities net-increases security and diminishes the likelihood of finding a new vulnerability for a constant expenditure of effort. we can quarrel over whether that’s right in the upper-right tail, where well-vetted stable software lives, but in domains with frequent upgrades, i think it’s hard to deny.

    all of this is not an argument that fixing random private firms’ buggy commercial software should become NSA’s job. you seem concerned, and very rightly so, that arguments like mine could become an excuse for commercial software providers to shift security costs to the state. nevertheless, there is a public good aspect to the security of critical systems. how costs are distributed in purchasing this public good is an important question. but the good ought to be provided. putting aside very challenging, right-upper-tail vulns we might argue over, if NSA (or the contractors who sell to it) hold onto “low-hanging-fruit” vulnerabilities for offensive use, they are very clearly trading safety for strike-power (or, in their vendors’ case, for just plain money).