Information security has entered everyday life. Thanks to the devices and social media platforms they use, many people are now familiar with techniques such as two-factor authentication, encryption, and software updates. When it comes to the inner workings, that is, secure design, many questions remain open. What happens inside a modern processor is hard to express in a few words: the Meltdown and Spectre bugs made headlines, but understanding what exactly goes wrong requires a lot of prior knowledge. On top of that, devices such as graphics cards or network cards run their own code in firmware, effectively below the operating system. Anyone developing an application therefore faces the question: how do you keep track of this large number of components and still program securely? Secure coding, meaning secure programming techniques, only helps with one's own code. The answer lies in the architecture of the whole.
The structure is decisive in secure design
An application never starts with code. It always starts with a design and a description of the tasks the application is supposed to perform. This phase is the most critical part of software development, because decisions made here usually determine the future of the project. In particular, the foundation for the later information security of the code is laid at this point: the selection of components, the protocols used, prerequisites (such as cryptographic keys, network and storage requirements), and communication with third-party vendors are all established in the design phase. So what does secure design mean in this context?
Secure design consists of a set of principles that are observed at every point in the software project: minimize the attack surface of the code, use secure defaults everywhere, grant accounts and components as few privileges as possible, build security checks into every layer, fail securely when errors occur, do not trust third-party services, and separate duties in the code. The application must be able to work reliably in any environment without exposing vulnerabilities. Processed data must be checked continuously for integrity, and this also applies to internal operations such as communication with a database or other components. Neither developers nor code may at any point rely on assumptions that an attack could invalidate. This includes software libraries and the capabilities of hardware and operating systems, among them the processor characteristics that gave rise to the Meltdown and Spectre class of bugs.
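Two of these principles, secure defaults and least privilege, can be illustrated with a small sketch. The roles, actions, and the `is_allowed` helper below are hypothetical examples, not part of any particular framework: access is granted only through an explicit allow-list, so anything unknown is denied by default.

```python
# Explicit allow-list of (role, action) pairs.
# Everything not listed here is denied: a secure default.
PERMISSIONS = {
    ("reader", "read"),
    ("editor", "read"),
    ("editor", "write"),
    ("admin", "read"),
    ("admin", "write"),
    ("admin", "delete"),
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions never gain access."""
    return (role, action) in PERMISSIONS

print(is_allowed("reader", "read"))    # True
print(is_allowed("reader", "delete"))  # False
print(is_allowed("intruder", "read"))  # False
```

The design choice matters more than the code: a deny-list ("block what is known to be bad") fails open when something new appears, while an allow-list fails closed.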
Of course, one should not succumb to paranoia and abandon all the features of the platform. After all, code is meant to be reused, not to reinvent the boat every time (the boat is older than the wheel). It is always about the assumptions you make while programming. The XML input may not be XML. The cloud service response may not come from the assumed source, or may simply be wrong. The data you previously wrote to the database may be wrong by now (because there was an attack). Modern code is full of assumptions. It gets exciting when they turn out not to hold.
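The "XML input may not be XML" assumption can be made explicit in code. The following sketch, with a hypothetical `parse_order` function and `<order>` document format chosen for illustration, treats the payload as untrusted at every level: it may not parse, it may parse into something unexpected, or it may carry invalid values.

```python
import xml.etree.ElementTree as ET

def parse_order(payload: bytes):
    """Treat the payload as untrusted: it may not even be XML."""
    try:
        root = ET.fromstring(payload)
    except ET.ParseError:
        return None  # reject malformed input, don't guess
    if root.tag != "order":
        return None  # well-formed XML, but not what we expect
    qty = root.findtext("quantity")
    if qty is None or not qty.isdigit():
        return None  # validate values, not just structure
    return int(qty)

print(parse_order(b"<order><quantity>3</quantity></order>"))  # 3
print(parse_order(b"not xml at all"))                         # None
print(parse_order(b"<other/>"))                               # None
```

Rejecting with a neutral failure value instead of letting an exception propagate is one way of handling errors securely; in a real application one would also log the rejection and, for XML specifically, consider a hardened parser that limits entity expansion.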
Mathematical proofs for secure coding
One way to avoid ambiguity in specifications is to use mathematics. Since code is a product of computer science, and computer science is a branch of mathematics, processes can be represented mathematically. If the tasks from the design documents are translated into a mathematical language, the processes in the code can be modeled and, with the help of tools, formally proven. Where this succeeds, one is literally on the safe side. It also eliminates ambiguous wording, which in turn helps the developers.
Formalisms are also a part of Secure Design.
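To make this concrete, one classic formalism is the Hoare triple: a precondition, a piece of code, and a postcondition. The withdrawal operation below is a toy example chosen for illustration. It states that if the precondition holds before execution, the postcondition is guaranteed afterwards, and tools such as theorem provers or model checkers can then verify that an implementation actually satisfies such a contract.

```latex
\[
\{\, \mathit{balance} \ge \mathit{amount} \wedge \mathit{amount} > 0 \,\}
\;\; \mathtt{withdraw(amount)} \;\;
\{\, \mathit{balance}' = \mathit{balance} - \mathit{amount}
     \wedge \mathit{balance}' \ge 0 \,\}
\]
```

An unambiguous statement like this leaves no room for interpretation about what "withdraw must not overdraw the account" means.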
Data-based development ≠ Database development
So how do you implement secure design? Since you are always dealing with data, that is a good place to start. After all, the application should handle errors securely, meaning that nothing critical may happen when nonsensical or manipulated data is entered. One tactic is therefore to feed the software with randomly generated or manipulated data and observe what happens. Ultimately, this resembles the regression-test strategy often used in development to prevent already-fixed bugs from creeping back into the code. The technique of tripping up systems with random data is called fuzzing. There are many tools for it, some of which integrate directly into development environments. A combination of random and real data works best, because it lets the input get past the first validation checks; after all, you want to exercise all of the code.
Operating system vendors or teams developing critical software already use this approach.
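The core idea of mutation fuzzing fits in a few lines. The sketch below uses a hypothetical `parse_header` function as the target; real projects would instead reach for mature fuzzers such as AFL or libFuzzer, which also track code coverage. Starting from a known-good seed input (the "real data" mentioned above) and flipping a few bytes at random produces inputs that pass superficial checks yet probe the error handling.

```python
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Flip a few random bytes in a known-good input (mutation fuzzing)."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def parse_header(data: bytes) -> int:
    """Hypothetical fuzzing target: expects b'HDR' followed by a length byte."""
    if len(data) < 4 or data[:3] != b"HDR":
        raise ValueError("bad magic")
    return data[3]

seed = b"HDR\x10payload"  # known-good input, passes the first check
random.seed(0)            # reproducible runs help when triaging findings
for _ in range(1000):
    try:
        parse_header(mutate(seed))
    except ValueError:
        pass  # a clean rejection is the desired behavior
    # any other exception or a crash would be a finding worth investigating
```

The point is not the random data itself but the observation: a robust parser rejects garbage with a controlled error, while anything else (crashes, hangs, silent acceptance) indicates an assumption that an attacker could exploit.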
Change the habits!
Information security rises and falls with everyday habits. Anything that needs to be actively considered will go wrong sooner or later. Of course, the same is true for software development. For this reason, secure design and secure coding must be incorporated into one’s own processes in appropriate steps. Unfortunately, this cannot be done by reading documentation or instructions alone. It is recommended to implement it in partial steps, accompanied by a workshop for all developers. There is plenty of material to illustrate this, because studying known errors creates practical relevance and helps to question one’s own code.
About René Pfeiffer
René Pfeiffer has been working as a senior security consultant for SEC4YOU since 2009. In addition to his self-employed work, he is the managing director of DeepSec GmbH and has been organizing DEEPSEC for over 10 years. Through the use of recognized methods and his affinity for IT security and Linux, he has had the privilege of advising countless customers in the sectors of industry, aviation, telecommunications, utilities, pharmaceuticals, healthcare, advertising, law firms, NGOs, media, logistics and software development on security issues.
Please ask René Pfeiffer about Secure Design — Secure Coding using our contact form.