Mad@Work is a study project that focuses on mental wellbeing management and productivity-boosting in the workplace. Nixu is one of the companies participating in the project and focuses on privacy-by-design software development and personal data control with MyData principles.
As part of the Mad@Work project, Nixu is developing a privacy threat modeling method for use in highly complex human-centric systems. But what exactly is privacy threat modeling?
Threat modeling aims to answer two questions: "What can go wrong?" and "What can we do about it?" Let's apply this thinking to the kind of system that Mad@Work is researching and developing, shown in the picture below. This system for mental wellbeing management and productivity-boosting in the workplace is designed to support advanced sensor technology, such as environmental and movement sensors. Solutions for analyzing facial expressions, combined with data from occupational health systems and organizational barometers, are also planned. Even a glance at this system immediately raises various questions regarding privacy. Because of the system's complexity, a robust method is needed to threat model it.
Some areas from which threats can arise are:
- Laws. Pieces of legislation may conflict with each other – the EU GDPR and national legislation such as the Data Protection Act, the Occupational Safety and Health Act, the Act on the Protection of Privacy in Working Life, the Act on Co-operation within Undertakings, and the Occupational Health Care Act.
- People. Various interests may not align – employees, team leaders, human resources, company management, occupational health personnel, system administrators, system architects, and others are all looking to achieve different things.
- Technical complexity. New sensor technology, artificial intelligence, various integrations – can we recognize all the problems?
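To make the two questions concrete, threats from these areas can be captured in a simple structured record. The sketch below is purely illustrative – the class name, the source categories, and the example threat are assumptions for this post, not part of any Mad@Work or Nixu tooling.

```python
from dataclasses import dataclass, field

# Illustrative threat-source categories, taken from the list above.
THREAT_SOURCES = {"laws", "people", "technical"}

@dataclass
class PrivacyThreat:
    description: str                                     # "What can go wrong?"
    source: str                                          # one of THREAT_SOURCES
    mitigations: list = field(default_factory=list)      # "What can we do about it?"

    def __post_init__(self):
        # Force each threat to be traced back to a known source area.
        if self.source not in THREAT_SOURCES:
            raise ValueError(f"unknown threat source: {self.source}")

# Hypothetical example threat for the kind of system described above.
threat = PrivacyThreat(
    description="Movement-sensor data reveals an employee's health condition",
    source="technical",
)
threat.mitigations.append("Aggregate sensor data before analysis")
```

Keeping every threat paired with its mitigations gives the model a direct answer to both questions at once.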
Various definitions of privacy
The concept of privacy also needs to be defined. Privacy is often seen in a narrow sense: the first thought might be that some private information is revealed to others – a loss of confidentiality. The GDPR view is much broader. It does not specify what is private and what is not, which makes sense because this varies from one context to another. Instead, it defines personal data as any data that can be linked to a person, and that data must be treated properly, without causing harm to people.
"Private" can mean, for example:
- Only for a particular person or persons
- Relating to a single person
- Confidentiality requirements vary
- "Can we speak in private?"
Harm to people, impact on companies
When things go wrong with privacy, it ultimately means that people are harmed. People are the primary asset to be protected.
For the system, the principles for personal data processing from the GDPR should act as the design guidelines. The organization should have a management system to govern the personal data processing. Failings in these can create threats, and uncovering threats helps to identify the relevant organizational and technical controls to patch up the management system and to apply the principles better.
Companies are, of course, concerned about sanctions and other harmful effects on them. Documented proof that the company has duly considered the harm to people and acted on it is the best 'insurance' for companies. This approach is similar to that of health and safety, where the company also aims to protect people. Using a robust threat modeling process that feeds into a documented risk assessment shows that the company has taken privacy into account in its activities.
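One way a threat model can feed into a documented risk assessment is the common likelihood × impact scoring model. The sketch below assumes 1–5 scales and illustrative thresholds; neither the GDPR nor the Mad@Work project prescribes these exact numbers.

```python
def risk_level(likelihood: int, impact_on_people: int) -> str:
    """Map 1-5 likelihood and impact scores to a coarse risk level.

    The thresholds (15 and 8) are illustrative assumptions, not a
    standardized scale.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact_on_people <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = likelihood * impact_on_people
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A retained record like this is the documented 'insurance' the text
# describes: evidence that harm to people was considered and acted on.
assessment = {
    "threat": "Facial-expression analysis exposes mental health state",  # hypothetical
    "likelihood": 3,
    "impact_on_people": 5,
}
assessment["risk"] = risk_level(
    assessment["likelihood"], assessment["impact_on_people"]
)
```

Because the impact axis scores harm to people rather than cost to the company, the assessment stays aligned with the view that people are the primary asset to be protected.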
There's more to privacy threat modeling than meets the eye. On top of the different definitions of privacy, it's relevant to understand who the people affected by the system are and whether the system can be used in ways other than intended. In the next blog post, we will examine attackers and victims.
Want to keep track of what's happening in cybersecurity? Sign up for Nixu Newsletter.