Cybercrime is one of the greatest menaces facing every organisation, and its impact on society is reflected in the numbers. The global cyber security market was valued at about €84 billion in 2017. According to Statista, it is forecast to grow to €220 billion by 2023. Spending on this scale signals a major shift in where economic value is created and protected. The EU General Data Protection Regulation (GDPR) makes Security and Privacy by Design a legal requirement. It is not, however, a new concept for security professionals.
Do you want to remain compliant and eliminate the vulnerabilities in your digital products? Then you must become familiar with the principles of Security by Design (SbD) and Privacy by Design (PbD). Keep reading for an overview of both and how they’re put into practice.
A breakdown of security and privacy by design
The threat of cybercrime looms, and security professionals are increasingly required to develop new ways to protect their organisations. Simply defending the perimeter of a resource is no longer enough. Instead, IT security must treat security as the default mode of operation, through an enterprise architecture approach.
As SbD and PbD are now legal requirements for many organisations, they are no longer merely best practice, but mandatory. And the GDPR requires security and privacy not only by design, but also by default: these principles must be baked into digital projects from the beginning, and they must be operationally verifiable.
Security by design principles
Most cyberattacks are performed by cybercriminals adept at identifying software vulnerabilities. These are usually programming mistakes or oversights that leave digital products exposed. For this reason, software programmers must create applications with a high standard of security, which helps prevent attacks and data breaches. We’ve recently published a whitepaper on how data breaches can be prevented. Feel free to download the material here.
As the name says, Security by Design refers to designing a software system or application so that it’s secure from the start. Properly securing a website, network, or other resource isn’t an easy task. Thankfully, it’s made easier by the Open Web Application Security Project’s (OWASP) list of SbD principles that all programmers should adhere to.
By following this set of security design principles, you dramatically reduce the risk of falling victim to a cyber attack. Below, we briefly describe nine OWASP secure design principles.
1. Least privilege
The Least Privilege Design Principle calls for a minimalistic approach to user access rights. It also requires time-based access rights to limit resource access time to only the time necessary to complete the necessary tasks.
If access is granted beyond this scope, users can reach resources they have no need for, which increases the chance of data being compromised or updated outside its approved context. Limiting access rights reduces the damage users can cause, whether intentional or not.
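The idea can be illustrated with a minimal sketch, assuming a hypothetical role-to-permission mapping and an optional grant expiry (all names here are illustrative, not a real API):

```python
# Minimal least-privilege sketch: each role gets only the permissions it
# needs, and a grant can carry an expiry so access is also time-limited.
import time

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_allowed(role, action, grant_expires_at=None, now=None):
    """Allow an action only if the role includes it and the grant is unexpired."""
    now = time.time() if now is None else now
    if grant_expires_at is not None and now > grant_expires_at:
        return False  # time-based access: the grant has lapsed
    return action in ROLE_PERMISSIONS.get(role, set())
```

A viewer can read but not write, and even an editor loses access once the grant window closes.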
2. Fail-safe defaults
The Fail-Safe Defaults Design Principle is a methodology for only allowing resources to be accessed if explicit access is given to a user. This is the opposite of access exclusion—where everyone has access, and certain people are then excluded. By default, no user has access until it is explicitly granted, which prevents unauthorised users from gaining access until it is given to them.
3. Economy of mechanism
This principle requires that systems are designed as small and straightforward as possible. With fewer resources to access, there is a higher chance of identifying unauthorised access. Put simply: less is more and less can go wrong. An error or breach is more easily remedied when there’s less to monitor.
4. Complete mediation
The Complete Mediation Design Principle calls for rigorous monitoring of access granting and management. Access must be checked and authorised every single time someone tries to access resources—no exception.
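One way to sketch complete mediation is a decorator that re-checks authorisation on every call, instead of trusting a decision cached at login time (the grant store and function names below are hypothetical):

```python
# Complete mediation sketch: every call re-checks the live grant store,
# so revoking a grant takes effect immediately.
import functools

CURRENT_GRANTS = {("alice", "read_salary")}  # stands in for a live policy store

def check_permission(user, action):
    return (user, action) in CURRENT_GRANTS

def mediated(action):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if not check_permission(user, action):  # checked on EVERY call
                raise PermissionError(f"{user} may not {action}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@mediated("read_salary")
def read_salary(user):
    return 50000
```

Because the check runs on each invocation, revoking the grant blocks the very next call.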
5. Open design
A mechanism’s security and its algorithms must never depend on the design or implementation being kept secret. The Open Design Principle doesn’t pertain to passwords or cryptographic keys, as these are data and not algorithms.
6. Separation of privilege
The Separation of Privilege Design Principle eliminates single points of failure. The principle states that a system should never grant permissions based on a single condition. Before access to privileges can be granted, multiple conditions must be met. An example of this is two-factor authentication.
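The two-factor case can be sketched as requiring two independent conditions, neither of which is sufficient on its own (the stored hash and the one-time code below are made-up illustrations):

```python
# Separation of privilege sketch: access requires BOTH a correct password
# and a correct one-time code; a single compromised factor is not enough.
import hashlib
import hmac

STORED_HASH = hashlib.sha256(b"correct horse").hexdigest()

def grant_access(password, otp, expected_otp):
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), STORED_HASH)
    otp_ok = hmac.compare_digest(otp, expected_otp)
    return password_ok and otp_ok  # neither factor alone is sufficient
```

Using `hmac.compare_digest` for the comparisons also avoids leaking information through timing differences.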
7. Least common mechanism
The Least Common Mechanism Design Principle suggests that organisations should lessen or eliminate shared attack surfaces. A mechanism shared between users or components means that compromising it compromises everything that depends on it.
8. Psychological acceptability
This design principle empathises with the user. If security becomes too intrusive for the user to perform their role, the user will bypass the security controls. The goal is to balance security and convenience, because security is undermined after a certain degree of inconvenience for the user.
9. Defence in depth
The Defence in Depth Design Principle is the idea that layering multiple, independent access checks decreases the chance of a successful cyber attack. This principle asks programmers to begin by identifying the most valuable assets. Then, they should build layers of protection radiating outwards from there.
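A minimal sketch of this layering, with made-up checks standing in for a perimeter filter, authentication, and a record-level rule:

```python
# Defence in depth sketch: every independent layer must pass before the
# asset is reached, so a breach of one layer alone is not enough.
def network_layer_ok(request):
    return request.get("source_ip", "").startswith("10.")      # perimeter filter

def auth_layer_ok(request):
    return request.get("token") == "valid-session"             # authentication

def data_layer_ok(request):
    return request.get("record_owner") == request.get("user")  # record-level check

LAYERS = [network_layer_ok, auth_layer_ok, data_layer_ok]

def handle(request):
    """Grant access only if all layers agree."""
    if all(layer(request) for layer in LAYERS):
        return "granted"
    return "denied"
```

An attacker who slips past the perimeter still faces the authentication and data-level layers.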
Privacy by design principles
Privacy requires that personal information be protected from those with unauthorised access. For this reason, strong security measures are essential. However, privacy involves more than guaranteeing secure access to private data. Privacy is about enabling individuals to maintain personal control over their data: its collection, processing, and disclosure.
The National Institute of Standards and Technology (NIST) has developed a framework that provides a comprehensive approach to data privacy management and supports privacy by design concepts. If you’d like to learn more, please refer to our blog post named NIST cybersecurity framework components.
A Privacy by Design approach considers privacy and data protection from the start. This should occur when building new digital products, sharing data with third parties, or utilising data for new purposes. Organisations that approach data in this way will see greater brand trust and commercial benefits.
Here, we’ll briefly describe each of the PbD principles:
1. Proactive not reactive; preventative not remedial
The PbD approach is characterised by proactive rather than reactive measures. The goal is to prevent privacy breaches before they happen. The key to this principle is that designers should think about privacy at the beginning of the planning process, not after an invasive event. This principle implies:
- A commitment to set and enforce high standards of privacy at the highest level, often higher than those required by global laws and regulations;
- A commitment to privacy that is demonstrable and shared by users in a culture of continuous improvement;
- Established processes to recognise faulty privacy designs, anticipate poor practices and outcomes, and correct any vulnerabilities before they occur in systematic, proactive, and innovative ways.
In a way, this principle sets the tone for the others – always think of privacy first.
2. Privacy as the default setting
The Privacy as the Default Setting principle is a difficult one for companies in the high-tech world to accept. PbD seeks to give consumers maximum privacy protection by default: if an individual does nothing, their privacy remains intact. Privacy by Default also lowers the data security risk profile, because there is less data available, and less data means a less damaging breach if one occurs.
This principle is informed by the following Fair Information Practices (FIPs):
Purpose specification
An individual must be told why their data is collected. They should also understand how it is used and disclosed. This communication must occur at or before the time the information is collected. Specified purposes must be clear, limited, and relevant to the events.
Collection limitation
The collection of any personally identifiable information must be fair and lawful. It must also be limited to what is necessary for the specified purposes. Organisations must prove why they need to collect that specific information.
Data minimisation
Organisations must keep the collection of personally identifiable information to a strict minimum. The default of digital products should be non-identifiable interactions and transactions from the start. Identifiability, observability, and linkability of personal data should be minimised wherever possible.
Use, retention, and disclosure limitation
The collection of data must be limited to the purposes explicitly identified to the individual and to which they have consented. The only exemption is when the data is required by law. Data should only be held as long as necessary to fulfil its stated purposes; after that, it must be securely destroyed.
Finally, where the need or use of personal data is not explicit, there must be a presumption of privacy. The precautionary principle shall apply, which means the default settings must be the most protective.
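The minimisation and retention practices above can be sketched in code. The field names and the 30-day window below are assumptions chosen purely for illustration:

```python
# Data minimisation and retention sketch: keep only the fields needed for
# the stated purpose, and purge records once their retention window passes.
import time

NEEDED_FIELDS = {"email", "consent_given_at"}   # minimum for the stated purpose
RETENTION_SECONDS = 30 * 24 * 3600              # e.g. a 30-day retention policy

def minimise(record):
    """Drop any personal data not required for the specified purpose."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

def purge_expired(records, now=None):
    """Discard records held longer than the retention period."""
    now = time.time() if now is None else now
    return [r for r in records if now - r["consent_given_at"] < RETENTION_SECONDS]
```

A scheduled job running `purge_expired` would enforce the retention limit automatically, rather than relying on manual clean-up.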
3. Privacy embedded into design
This principle states that privacy must be embedded in the design of IT systems and other business practices. This should be done in a creative, holistic, and integrative way. Broader contexts must always be considered, and all stakeholders and interests should be consulted.
The Privacy Embedded into Design principle is another one that is difficult for tech startups. Most software programmers are primarily concerned with the core functionality of their products. However, they can’t neglect to test for the vulnerabilities that are most common in software.
Programmers should adopt an approach that relies upon outlined, accepted standards and frameworks. They should be subject to reviews and audits. And when possible, detailed privacy impact and risk assessments should be performed and published. The resulting report should clearly document the privacy impacts of technology and its uses.
4. Full functionality – positive-sum, not zero-sum
The idea behind this principle is that PbD will not compromise business goals. All organisations should be able to achieve privacy, revenue, and growth—you won’t have to sacrifice one for the other. PbD should simply become a part of company culture.
When embedding privacy into a digital product, process, or system, you can do it in a way that doesn’t compromise functionality. PbD rejects the approach that privacy must compete with other design objectives, technical capabilities, and interests. It embraces non-privacy goals and seeks to accommodate them innovatively.
5. End-to-end security – full lifecycle protection
The PbD principles follow data wherever it goes. They apply when the data is first created, shared, and archived. The data must be protected until the very end when it is securely destroyed. There should never be gaps in data protection or accountability—and without security, there can be no privacy.
Organisations must assume responsibility for personal data throughout its lifecycle. This includes applying security standards that ensure confidentiality, integrity, and availability.
6. Visibility and transparency – keep it open
The Visibility and Transparency principle helps build trust with consumers. It refers to presenting information about privacy practices out in the open. Your consumers should have access to a clear redress mechanism, with responsibility for privacy practices clearly assigned.
The following Fair Information Practices influence this principle:
Any organisation that collects personal information has a duty of care for its protection. Privacy-related policies and who is responsible for them must be documented and communicated. If data is transferred to a third party, equivalent privacy protection must be secured through contractual or other means.
Openness and transparency are critical in regards to accountability. Individuals should have access to the policies and processes related to their personal data.
Information about any complaint and redress mechanisms should be communicated to individuals. The communication should include how to obtain the next level of appeal.
7. Respect for user privacy – keep it user-centric
Finally, consumers own their data. The data held by organisations must be accurate, and the user must have the power to make corrections if needed. The user is the only one who can grant and revoke consent on the use of their data.
You must empower consumers through an active role in managing their data. This is a useful check against any abuse and misuse of privacy and personal information.
Bake in security and privacy by design
Security and Privacy by Design ensure that programmers and organisations understand and implement the appropriate privacy and data protection controls at the beginning of a project, not when an attack occurs. Implementing privacy in this way allows security teams to help provide guidance, advice, and audit at every step of the process.
The Swiss Cyber Forum seeks to build the world’s leading cybersecurity and data privacy ecosystem. Becoming a member of the Forum gives you access to workshops, educational training, industry events, and more. Make sure to check our Cyber Security Specialist training with Swiss Federal Diploma.
Join us today to access our global community of experts, startups, companies and corporations, public bodies, academics, innovation hubs, and more!