Tuesday, September 1, 2015

When it comes to application risk management, you can't do it alone.

I’m often asked to estimate how many developers are required to obfuscate and harden an application against reverse engineering and tampering – and by “required,” people usually mean the bare minimum number of developers who need to be licensed to use our software.

Of course, it's important to get the number of licensed users just right: if the count is too high, you're wasting money; if it's too low, you're either not going to be efficient or effective – or, worse still, you're forced to violate the license agreement to do your job.

Yet, as important as this question seems, it's not the first question that needs answering.

Staffing to effectively manage application risk is not the same as counting the number of concurrent users required to run our (or any) software at a given point in time.

Consider this:

How many people are required to run our application hardening products on a given build of an application? Actually, none at all: both Dotfuscator for .NET and DashO for Java can be fully integrated into your automated build and (continuous) deployment processes.
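
As an illustration of what that integration can look like, here is a minimal sketch of a Gradle (Kotlin DSL) build step that runs a hardening tool automatically on every build. The task names, the "dasho" command, and the "harden.dox" project file are assumptions made for this sketch – not the documented DashO integration:

```kotlin
// build.gradle.kts – hypothetical sketch, not vendor documentation.
// Runs a hardening tool's command line against the freshly built jar;
// the "dasho" command and "harden.dox" project file are illustrative.
tasks.register<Exec>("hardenJar") {
    dependsOn("jar")
    // Obfuscate and inject tamper defenses as an ordinary build step;
    // a non-zero exit code fails the build immediately.
    commandLine("dasho", "harden.dox")
}

// Wire hardening into the default build so no one has to remember to run it.
tasks.named("assemble") {
    dependsOn("hardenJar")
}
```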

However, how many people does it take to effectively protect your application assets against reverse engineering and tampering? The answer is no fewer than two. Here’s why…

  • Application risk management is made up of one (or more) controls (processes, not programs). These controls must first be defined, then implemented, then applied consistently, and, lastly, monitored to ensure effective use.
  • Application hardening (obfuscation and tamper defense injection) is just such a control – a control that is embedded into a larger DevOps framework – and a control that is often the final step in a deployment process (followed only by digital signing); the build itself can enforce that ordering, as sketched just after this list.
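
To make that last point concrete: the build can guarantee that hardening precedes signing. Here's a hedged sketch in the same hypothetical Gradle (Kotlin DSL) terms – the task names "signRelease" and "hardenJar" are assumptions, not any vendor's documented names:

```kotlin
// Hypothetical wiring: the signature must always cover the hardened binary,
// so signing depends on (and is ordered after) the hardening step.
tasks.named("signRelease") {
    dependsOn("hardenJar")
    mustRunAfter("hardenJar")
}
```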


Now, in order to be truly effective, application hardening cannot create more risk than it avoids – the cure cannot be worse than the disease.

What risks can come from a poorly managed application hardening control (process)?

If an application hardening task fails and goes undetected (a case the automated check sketched after this list is designed to catch),

  • the application may be distributed unprotected into production and the risks of reverse engineering and tampering go entirely unmanaged, or
  • the application may be shipped in a damaged state causing runtime failures in production.
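
A lightweight automated gate can make the "undetected" case far less likely. Below is a minimal sketch in Kotlin of a release check that fails the pipeline unless the hardening step left evidence behind; the file paths and the renaming-map convention are assumptions for this sketch, not any specific product's output format:

```kotlin
import java.io.File
import kotlin.system.exitProcess

// Gate the release on evidence that hardening actually ran.
fun main() {
    val hardenedJar = File("build/hardened/app.jar")   // assumed output path
    val renameMap = File("build/hardened/renames.map") // assumed map file

    // No output artifact at all: the hardening step failed outright.
    if (!hardenedJar.exists()) {
        System.err.println("FAIL: hardened artifact missing - blocking release")
        exitProcess(1)
    }
    // An empty or missing renaming map suggests the step "succeeded"
    // without actually protecting anything.
    if (!renameMap.exists() || renameMap.length() == 0L) {
        System.err.println("FAIL: no renaming map - artifact may be unprotected")
        exitProcess(1)
    }
    println("OK: hardened artifact verified - safe to sign and ship")
}
```

Run as the last pipeline step before signing, a check like this turns a silent failure into a hard stop.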


If an application hardening task failure is detected, but the root cause cannot be quickly fixed, then the application can't be shipped; deadlines are missed and the software can't be used.

So, what’s the minimum number of people required to protect an application against reverse engineering and tampering?

You’ll need (at least) one person to define and implement the application hardening control.

…and you’ll need one person to manage the hardening control (monitor each time the application is hardened, detect any build issues, and resolve any issues should they arise in a timely fashion).

Could one individual design, implement, and manage an application hardening control? Yes – one person can certainly do all three.

However, if the software being protected is released with any frequency or urgency, one individual cannot guarantee that they will be available to manage that control on any given day at any given time – they simply must have a backup – a "co-pilot."

No organization should implement an application hardening control that’s dependent on one individual – there must be at least two people trained (and authorized) to run, administer, and configure your application hardening software and processes. The penalty for unexpected shipping delays, shipping damaged code, or releasing an unprotected application asset into “the wild” is typically so severe that, even though the likelihood of such an event on any given day may seem remote, the risk cannot be rationalized away.

This is nothing new in risk management – every commercial plane flies with a co-pilot for this very reason – and aircraft manufacturers do not build commercial planes without a co-pilot’s seat. It would be cheaper to build and fly planes that accommodate only one pilot – and it wouldn’t be an issue for most flights – but to ignore the risk that a single pilot brings would be more than irresponsible – it would be unethical.

Are there other reasons to include additional people and processes? Of course – but those are tied to the development organization’s methodologies, architecture, testing, and audit requirements; they are not universal practices.

If reverse engineering and/or application tampering pose intellectual property, privacy, compliance, piracy, or other material risks, they need to be managed accordingly – with a resilient and well-defined process. Put simply, when it comes to application risk management, you can't do it alone.