Digital Immunity

The future of digital security: Q&A with security expert Dan Geer

By Dan Geer

1) What are the limitations of traditional endpoint security solutions?

During the time I’ve worked in this field, the execution environment on the endpoint has grown so much larger and so much more complex that characterizing it in full detail is now impossible. I knew a professor of ecology who would assign his new students the job of cataloging all the life in a cubic meter of the forest floor. The point was that it could not be done, thus the student would gain both awe and humility from the exercise. I’m told that Donald Knuth challenged his students to take the computer they were themselves using and catalog what the operating system did in a ten-second interval. The point is the same — fully characterizing what is going on is not doable and, one would hope, the student of computer science and the student of ecology both learn some humility and some awe.

What that has to do with end point security solutions seems clear: All we know how to do is to look for things we know to look for. Such and such a behavior or such and such a set of manipulations is a sign that something is good or something is bad (it works either way). The endpoint is now, courtesy of three decades of doubling its power every eighteen months, far beyond our ability to fully characterize what it should be doing. So our sense of security tends to require that we enumerate some set of things we don’t want and to be satisfied when we don’t find them. That list can only grow, yet it can never be complete. There is always another way, a black swan at the micro level one might say.
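To make that concrete, here is a deliberately minimal sketch, in Python and with a made-up signature list, of the default-permit pattern described above: anything whose fingerprint is not already on the list is waved through, which is exactly why the list can only grow and can never be complete.

```python
# Toy sketch of "enumerate the things we don't want"; the signature list is
# hypothetical and stands in for any blacklist-style endpoint check.
import hashlib

KNOWN_BAD = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder signature, not a real threat
}

def looks_malicious(path: str) -> bool:
    """Flag a file only if its hash is already on the list; everything else passes."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest in KNOWN_BAD
```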

 

2) In what ways do security failures cause harm to business?

The most succinct version is that the failure diminishes the set of possible futures the business might otherwise have had. Put differently, the greatest costs are the (lost) opportunities that the security failure removed from what might have been. The thing about security failures is that they by-and-large require reversing history in some way, which is never cheap or easy. As my father would ask, “If you don’t have time to do it right, when will you have time to do it over?”

 

3) In your experience, how much effort/resource does the average security/IT team spend securing endpoints?

This depends on the computing model of the firm. In a classic desktop (only) environment, the effort almost surely maxes out when the number of endpoint agents grows beyond some point. In an agent-centric environment, each agent is trying to detect and prevent some specific class of security failures. As one might guess, there will be interactions between the agents and there will be a noticeable compute load as the number of agents increases. The process of installing agents, keeping them updated, avoiding destructive interactions between agents, and smoothing over any version transitions that require new management routines (especially if the version transition is prolonged in time) all add labor.

However, in a BYOD (bring your own device) environment, the spirit of how endpoint threats can be countered is quite different. To begin with, version control is an entirely different matter. Where data is actually located is likely to be diffuse and involve clouds that the firm may not itself own or control. Sources of software being run on the endpoint will not include just what is sanctioned by the firm. In short, the labor will tend more toward some kind of configuration control that is very much aimed at creating a securable enclave inside an unsecurable device.

It is hard to answer the question of how much time is spent, or should be spent. There are indeed surveys and the like that recommend some fixed percentage of all IT labor be spent on security, but I’ve never felt convinced that those numbers were normative or sustainable in the absence of events that reinforce them — when a CEO can ask how many security failures occurred in the last year and get “none” for an answer, it is only natural for the CEO to suspect that the firm is spending too much money on security. Perhaps the better question would be “Are we well enough instrumented that any security failure will be immediately noticed and, when it is, will we have a workable mitigation?”

 

4) Why is the Digital Immunity approach better?

The DI approach does not try to enumerate all the things that you don’t want to happen but, rather, something more modest: “Here is the code we want to run. See to it that it does, as is, and nothing more.” That does not answer whether the code is itself problematic, and it should not try. In the face of the complexity of the execution environment — in particular, the interactions between components — the long argument between whitelisting (default deny) and blacklisting (default permit) has pretty much settled on whitelisting as the preferred alternative for settings that require safety. However, the implementation of whitelisting in all known embodiments other than DI is to do a go/no-go test at launch time and then to trust (without evidence) that the execution environment will faithfully execute the code that has now been launched. DI’s better alternative is to not trust the execution environment.
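As a rough, hypothetical illustration of that launch-time go/no-go pattern (the path and hash below are invented, and this is not a description of DI’s mechanism): a conventional whitelist verifies the binary once, before launch, and then simply trusts the execution environment with whatever happens afterward.

```python
# Hypothetical launch-time whitelist (default deny); values are illustrative only.
import hashlib
import subprocess
import sys

APPROVED = {
    # "Here is the code we want to run."
    "/usr/local/bin/payroll":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def launch(path: str) -> None:
    if APPROVED.get(path) != sha256_of(path):
        sys.exit(f"refusing to launch {path}: not on the whitelist")
    # From here on, a conventional whitelist trusts (without evidence) that the
    # environment faithfully executes the bytes it just verified.
    subprocess.run([path], check=False)
```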

This does not mean that DI will prevent some other process, employing some other attack method, from harming the integrity of the code that is eventually launched; it means only that if some other attack method does harm the integrity of the code post-launch, then that harm will be detected. This is consistent with what I view as the, repeat the, pinnacle goal of security engineering: No Silent Failure. Note that this is *not* No Failure; it is No Silent Failure, and this is what DI does — it guarantees that any failure in the execution environment is just as likely as it ever was, but if there is a successful attack on the code DI is protecting, then that attack will be noticed. What you do with that is up to you, be it halt, relaunch, start full recording, go into disinformation mode, or whatever you choose next to do.
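What that detection might look like, in the crudest possible terms: the sketch below watches the on-disk image of a launched program and calls a response hook when its fingerprint changes. It is an assumption-laden toy (DI’s guarantee concerns the running execution environment, which a file hash does not reach), but it shows the shape of detect-rather-than-prevent.

```python
# Toy post-launch integrity monitor; illustrative only, not DI's implementation.
import hashlib
import time

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def monitor(path: str, on_tamper, interval_seconds: float = 5.0) -> None:
    """Detect, rather than prevent, modification of the code at `path` after launch."""
    baseline = sha256_of(path)
    while True:
        time.sleep(interval_seconds)
        if sha256_of(path) != baseline:
            # The failure happened anyway, but it is not silent.
            on_tamper(path)  # halt, relaunch, start recording, disinform: your choice
            return

def halt(path: str) -> None:
    """One possible response hook: refuse to continue, loudly."""
    raise SystemExit(f"integrity violation detected in {path}")
```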

 

5) What can we expect to see in the future? What does the future hold for cybersecurity?

The future of cybersecurity is that that which can be protected will continue to shrink. Once upon a time, the Internet was inhabited by a principled elite. When everyone was let in, firewalls protected corporate interiors from the general exterior. Then the protection perimeter shrank to the desktop, and further to the individual datum. This is an ongoing battle, and it will play itself out in all devices with a network address, noting that the Internet of Things has a 35% compound annual growth rate, so the number of perimeters to protect is increasing at an accelerating rate all the while that which is to be protected grows ever more miniaturized. The interaction between all those components dwarfs our ability to prevent failure; hence our only objective hope will come from preventing silent failure.
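For a rough sense of scale, a 35% compound annual growth rate means the number of such devices doubles about every two and a third years:

$$ t_{\text{double}} = \frac{\ln 2}{\ln 1.35} \approx \frac{0.693}{0.300} \approx 2.3 \ \text{years} $$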

I must add that so-called big data is not the answer to protection, not because it cannot be made to work (it can) but rather because when it does work, there is no way to examine precisely why it does work, only whether it does. As soon as we rely on automated protections that we cannot examine, we belong to them — they do not belong to us. The term of art is “interrogatability,” as in the ability to interrogate an algorithm to find out why it made such and such a decision. Interrogatability costs efficiency and is not naturally an outcome of deep learning. If we (humans) want to remain in charge, we cannot turn over our protections to algorithms whose job is to protect us from other algorithms. Modest goals, my friend, modest goals, of which, I daresay, no silent failure is the prime example and precisely what DI does.