
Processing Power to the People

Hey everyone, h1kari here. Katie invited me to do a guest post on the BlueHat blog, so I thought I’d rant a little bit about some ideas I’ve had on how crypto best practices relate to other areas of security that may hit closer to home for you guys. My current interest is in finding areas of computing that would be a lot more useful if they could only be run faster, so I’d like to hear from you about your experiences and what takes up all the idle time on your processors. Please send me feedback and let me know whether these ideas are useful or just a bunch of BS.

COMPUTATIONAL FEASIBILITY

Many people who are familiar with cryptography know that ciphers are rated by their bit-strength. This is an easy metric for deciding which algorithms are appropriate for the data you’re trying to protect. Many other areas of threat modeling are approached similarly: estimate the worth of what you’re protecting, estimate the cost to the attacker, and strike a balance somewhere between security and functionality.

When it comes to cryptography, there are only two ways to break a crypto system:

* Find a shortcut

* Try every possible key

Good crypto systems force an attacker to use only the latter method, and they make sure there are enough possible keys that it’s improbable an attacker can try all of them before the data stops mattering. That feasibility, however, is measured against how long the data needs to stay secret and how valuable the attacker perceives it to be.
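
To put rough numbers on that, here’s a quick back-of-the-envelope sketch. The guessing rate and the 18-month doubling period are my own assumptions for illustration, not figures from any particular attacker:

```python
import math

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits, keys_per_second=1e12):
    """Worst-case years to try every key at a fixed guessing rate
    (1e12 keys/sec is an assumed attacker budget, nothing more)."""
    return 2 ** key_bits / keys_per_second / SECONDS_PER_YEAR

def years_with_moores_law(key_bits, keys_per_second=1e12, doubling_years=1.5):
    """Years until the attacker's *cumulative* guesses cover the keyspace,
    assuming their guessing rate doubles every 18 months."""
    k = math.log(2) / doubling_years                      # growth constant per year
    rate_per_year = keys_per_second * SECONDS_PER_YEAR
    # cumulative(t) = rate/k * (e^(k*t) - 1); solve cumulative(t) = 2**key_bits
    return math.log(2 ** key_bits * k / rate_per_year + 1) / k

for bits in (40, 56, 64, 80, 128):
    print(f"{bits:3d}-bit key: ~{years_to_exhaust(bits):.3g} years at a flat rate, "
          f"~{years_with_moores_law(bits):.3g} years if the rate keeps doubling")
```

The exact numbers don’t matter; the point is that a few dozen extra bits turn “a long weekend” into “longer than the data matters.”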

With crypto, computational power doubles roughly every 18 months according to Moore’s Law, while the value of the data usually diminishes as time goes on. This is a standard security model that many people rely on without really realizing it. Some of the problems with systems that rely on this model are that:

* Their designers don’t even realize that they’re using this type of model

* They don’t employ a high enough bit-strength to thwart attackers

Here are some examples:

FUZZING

Many software applications are fuzzed every day and many stand up to the fuzzing just because the attackers only run their fuzzers for a few days and then give up. Occasionally, attackers will randomly stumble upon a bug, but how many are missed in the whole process? Nowadays, with more intelligent fuzzers that use constraint solving and tracing to get good code coverage, you oftentimes need to apply many rules to limit the scope of the states in the program that you’re testing. With many applications it’s nearly impossible to get 100% coverage in a reasonable amount of time.
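
For contrast with those smarter constraint-solving fuzzers, here’s roughly what the dumb end of the spectrum looks like: a minimal random-mutation fuzzer hammering a hypothetical parse() routine. Both the target and the 100,000-case budget are stand-ins I made up:

```python
import random

def parse(data: bytes) -> None:
    """Hypothetical code under test; mishandles one specific header combination."""
    if len(data) > 3 and data[0] == 0xFF and data[2] == 0x00:
        raise ValueError("unhandled header combination")

def mutate(seed: bytes, flips: int = 4) -> bytes:
    """Flip a few random bytes of a known-good input."""
    buf = bytearray(seed)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = bytes(range(16))
crashes = 0
for _ in range(100_000):        # the typical attacker stops around here
    try:
        parse(mutate(seed))
    except Exception:
        crashes += 1

# Most runs find nothing -- the bug needs two specific bytes at once,
# and nothing tracks how much program state was never exercised.
print(f"{crashes} crashing inputs out of 100,000 random mutations")
```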

If you were able to get full coverage of all possible program states, would more bugs be found? If an attacker were able to do this, would this suddenly be a threat? Also, if you ended up in a state that you weren’t sure was exploitable, what if you could solve whether it was or not? If we had nearly unlimited computational power, would we be able to mathematically prove the security or insecurity of an application?

IDS/VIRUS DETECTION

Most of the IDS market consists of software that requires an updated list of signatures to see if a payload looks like a known exploit. Some companies have experimented with solutions that simulate the payload running against the target and use intelligent heuristics to determine whether the payload behaves maliciously. Most of the effective solutions use some amount of processing between these two extremes to try to detect known static, dynamic or polymorphic, and/or 0-day payloads. Usually, on large enterprise-class networks, the best you can do is run a signature-based IDS just to keep up with the speed of the network.
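
To see why the signature end of that spectrum is the only one that keeps up at line rate, here’s a toy version of signature matching. The two patterns are invented for illustration and stand in for a real ruleset with thousands of entries:

```python
# Toy signature matcher -- the cheap end of the IDS spectrum.
SIGNATURES = {
    "example-nop-sled": b"\x90" * 16,
    "example-shell-string": b"/bin/sh",
}

def match_payload(payload: bytes) -> list[str]:
    """Return the names of any known-bad byte patterns found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet = b"GET / HTTP/1.0\r\n" + b"\x90" * 32 + b"...rest of payload..."
print(match_payload(packet))    # -> ['example-nop-sled']
```

Actually simulating or heuristically analyzing every payload costs orders of magnitude more per packet, which is exactly the computational-feasibility trade at issue here.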

If we were able to thoroughly analyze all payloads that came into the network, would attackers still be able to exploit our systems? Could we 100% prove on the fly whether a payload was malicious or not if we had enough computational power? If we solved the first fuzzing problem and applied it to all of our network-exposed software, would IDSs even exist anymore?

REVERSE ENGINEERING

Oftentimes reverse engineering requires much more computation than engineering. Many applications that use obfuscation require a lot of analysis to determine what actually goes on internally. The same applies to hardware, where chips are assumed to be secure because they would be difficult to reverse engineer. However, many companies now use image-processing techniques to reverse-engineer the wafers and circuits of even the most advanced chips.

Most anti-reverse-engineering measures rely heavily on security through obscurity, which has long been shown to be a poor countermeasure: it depends on an assumed lack of determination on the attacker’s part rather than on an actual computational metric. Many products’ anti-reverse-engineering measures fail because a determined attacker needs only a matter of hours to figure out the obfuscation technique and break the entire system.
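
As a toy example of how little computation obscurity usually buys, take a string “protected” by a single-byte XOR (an assumed scheme standing in for whatever the real obfuscation is). Recovering it is a 256-iteration loop, not a cryptographic work factor:

```python
def xor_obfuscate(data: bytes, key: int) -> bytes:
    """Toy 'protection': XOR every byte with one secret byte."""
    return bytes(b ^ key for b in data)

secret = xor_obfuscate(b"license_check=disabled", key=0x5A)

# The attacker's entire work factor: 256 candidate keys and a plausibility check.
for key in range(256):
    candidate = xor_obfuscate(secret, key)
    if b"license" in candidate:
        print(f"key=0x{key:02X} -> {candidate!r}")
        break
```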

How long do you want your IP to stay secure? How much would it cost your company if your IP were to be reverse-engineered? How long would it take a single determined attacker to reverse-engineer it? What sort of budget would the attacker require?

CONCLUSIONS

With all of the security models in use, there is an inherent trade-off between what you think the attacker will go through to break your security and what they can actually go through. When you abstract many of today’s security scenarios down to their computational feasibility, where would they fall on the traditional bit-strength scale? And how does that compare to the crypto algorithms in use today? How will your mission-critical software that’s supposed to stay up for five, ten, or even twenty years fare when common attackers have that much computational power on their side? If your adversary dedicated more computational resources to defeating your system than you did to testing it, would they be able to find problems that you didn’t?

Also, with cryptography you can easily increase the security of a good algorithm simply by increasing the key length, which causes an exponential increase in the time required to attack the key. This makes it trivial to defeat Moore’s Law by just choosing a long enough key. With these other systems, is it that easy to exponentially increase the attacker’s computational requirements to break them?
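
A quick bit of arithmetic (again with assumed numbers) shows how lopsided that trade is: each extra key bit doubles the attacker’s work, while Moore’s Law only hands them one doubling every 18 months.

```python
# Each added key bit doubles the brute-force work; Moore's Law gives the
# attacker roughly one doubling every 18 months. So n extra bits cost the
# defender almost nothing but demand ~1.5 * n years of hardware growth.
for extra_bits in (8, 16, 32, 64):
    print(f"+{extra_bits:2d} bits -> work x 2**{extra_bits} "
          f"(~{1.5 * extra_bits:.0f} years of doublings to catch up)")
```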

Just as with cryptography, many of these security systems should be analyzed for shortcuts, but at the end of the day, if the cipher is weak, it doesn’t matter whether there are shortcuts when you have enough processing power.

-h1kari

-----

Editor’s Note: h1kari is one of the founders of the elite security conferences ToorCon and ToorCon Seattle. His contributions to the security community have spanned roughly a decade.

