Ten data processing principles for safety systems

Jan 24, 2018 | Data, Products, Safety

In this post we look at a set of ten data processing principles that we use when designing and building our safety systems. We think that, when applied, they can help avoid some very common pitfalls that arise when working on technology for workplace safety. If you're interested in reading a bit about the motivation for these concepts, read on; otherwise, skip ahead to the ten principles.

Many of the ten principles below might seem like statements of the obvious, or simply good systems engineering practice. But we've found that, taken together, they help on a practical level to constrain how we solve the real safety problems people face, and the kinds of safeguards that are needed.

We’ve also found that by and large, these principles can be applied in systems where the default relationship between people using it is one of trust. They would need to be extended if they were to accommodate systems that need to take into account or detect bad actors, or involve danger scenarios of the most extreme duress.

So, we’ve included a primary objective which pretty much sums up an important requirement on everything we do, from the smallest design tweak, to the largest projects we’ve delivered.

Now onto our ten data processing principles for safety systems:

Primary objective

Our products and services should always act in a testable way to increase and sustain the safety of people. This should be the case not just on average across all people in the system and over time, but for every person individually, all the time.

Principle 1: Chronology

The chronology, or time ordering, of events happening within our system should be preserved. Without reliable chronology, cause and effect, or even the mere sequence of events coming from different sources, cannot be determined. Events should receive additional timestamps as they go through various stages or transitions, like being sent and delivered.
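For illustration, here is a minimal sketch in Python of an event record that accumulates a timestamp at each transition. The field and stage names ('created', 'sent', 'delivered') are our own illustrative choices, not a fixed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event record: field and stage names are illustrative only.
@dataclass
class SafetyEvent:
    payload: str
    timestamps: dict[str, datetime] = field(default_factory=dict)

    def mark(self, stage: str) -> None:
        """Record the UTC time at which this event reached a stage."""
        self.timestamps[stage] = datetime.now(timezone.utc)

event = SafetyEvent(payload="check-in at site A")
event.mark("created")
event.mark("sent")
event.mark("delivered")

# The per-stage timestamps let us reconstruct the sequence of events
# from different sources, even if messages arrive out of order.
for stage, ts in sorted(event.timestamps.items(), key=lambda kv: kv[1]):
    print(f"{stage}: {ts.isoformat()}")
```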

Principle 2: Testability

We only make available products and services that can be regularly tested and updated. Retesting timescales should be determined by the criticality of the scenario or product feature. If we cannot test a product feature and how people will use it, or if we do not understand the scenario for which it is intended, it should not be made available.

Tests are explicit assertions about how our system is expected to work under certain conditions, leading to a pass or fail outcome, which makes them a blueprint for documentation. Test suites and documentation should be consistent with each other.
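As a sketch of what we mean, here is a pair of tests written as explicit pass/fail assertions about a stated scenario. The notify_supervisor() function and the 15-minute threshold are hypothetical examples, not features of our products:

```python
# A test is an explicit, pass/fail assertion about behaviour under a
# stated scenario; the hypothetical notify_supervisor() is illustrative.
def notify_supervisor(minutes_overdue: int) -> bool:
    """Escalate to a supervisor once a check-in is 15+ minutes overdue."""
    return minutes_overdue >= 15

def test_no_escalation_when_on_time():
    # Scenario: traveller checks in on time; no alert should be raised.
    assert notify_supervisor(minutes_overdue=0) is False

def test_escalation_when_overdue():
    # Scenario: check-in 20 minutes overdue; supervisor must be alerted.
    assert notify_supervisor(minutes_overdue=20) is True
```

Written this way, each test names the scenario it covers, so the test suite reads as documentation of what the feature is expected to do.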

We should explicitly describe the scenarios under which our products, or features of our products, are intended or expected to work, thereby separating the safety problem from the potential solutions.

Finally, there are no scenarios without people. So we should explicitly research and specify the personas, that is, the needs and goals of the people using our products. We should consider how changes to our products might impact these personas in different ways.

Principle 3: Security and integrity

Our products need to adhere to information security principles and need to be tested for security. They should be secure by design or else implement best practices from the security community. They should not rely upon ‘security by obscurity’.

Our systems should be ‘lossless’, meaning that we should not lose or corrupt information that might impact or improve general or detailed understanding of what is going on in the system. Subsystems (for instance web and mobile apps) need to be consistent with each other and not lead to erroneous or conflicting conclusions.
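One common technique for detecting silent loss or corruption in transit is to attach a content digest to each message, sketched below with Python's standard library. The message structure here is our own example:

```python
import hashlib
import json

def with_checksum(message: dict) -> dict:
    """Attach a SHA-256 digest so receivers can detect corruption."""
    body = json.dumps(message, sort_keys=True).encode("utf-8")
    return {"body": message, "sha256": hashlib.sha256(body).hexdigest()}

def verify(envelope: dict) -> bool:
    """Recompute the digest on receipt; a mismatch means loss or corruption."""
    body = json.dumps(envelope["body"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(body).hexdigest() == envelope["sha256"]

envelope = with_checksum({"event": "location-update", "lat": 51.5, "lon": -0.1})
assert verify(envelope)  # holds end-to-end only if nothing was altered
```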

Where accuracy trade-offs are necessary, for instance over a low bandwidth communications channel, these should be made apparent to people.

Principle 4: Privacy and trust

Recognising the sensitivity of data that pertains to safety, our products should respect the privacy of those who use them, both meeting and exceeding data protection obligations. We should design privacy controls that are understandable to those affected by them, and that are not a burden to maintain. Data protection is not the protection of data (that is information security); rather, it is protecting people, and preventing harm to them through inappropriate or unlawful use of their personal data. Our privacy policy contains a further six User Privacy Principles, which are specific techniques we use for limiting or mitigating privacy impacts and building trust.

Principle 5: Accountability

In situations where there may be an actual or perceived conflict between privacy and any other of these principles, we use accountability ("who, when, why, what") within the system to mitigate the effects. Example techniques we use are auditing systems, access logs, response times, or even just an avatar (visual identifier) for users in the system.
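As an illustration of the "who, when, why, what" framing, here is a sketch of an append-only audit log entry. The field names and example values are illustrative only:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative audit record; fields follow the "who, when, why, what"
# framing above and are not a fixed schema.
@dataclass(frozen=True)
class AuditEntry:
    who: str        # identity of the actor (e.g. a user id)
    when: datetime  # UTC time of the access or change
    why: str        # stated reason for the access
    what: str       # the data or action involved

log: list[AuditEntry] = []

def record_access(who: str, why: str, what: str) -> None:
    """Append an immutable entry; the log itself should be append-only."""
    log.append(AuditEntry(who, datetime.now(timezone.utc), why, what))

record_access("supervisor-42", "overdue check-in follow-up",
              "viewed traveller location")
```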

Principle 6: Actionable feedback

People within our system should understand when 'state changes' occur in the system. They should receive actionable, considerate, understandable feedback about these state changes (such as notifications, sounds, signs, delivery receipts, coaching reports, and recommended actions) that is proportionate to the importance of the state change, and only about things that are under their control to change or act on. Feedback should not be distracting.
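A small sketch of what 'proportionate' feedback could look like in code. The importance levels and channels below are assumed examples, not our actual notification design:

```python
from enum import Enum

class Importance(Enum):
    LOW = 1     # e.g. a delivery receipt
    MEDIUM = 2  # e.g. a coaching report becoming available
    HIGH = 3    # e.g. an overdue check-in

# Hypothetical mapping: channels and thresholds are illustrative only.
def feedback_channel(importance: Importance) -> str:
    """Pick a feedback mechanism proportionate to the state change."""
    return {
        Importance.LOW: "silent in-app badge",
        Importance.MEDIUM: "push notification",
        Importance.HIGH: "push notification with sound",
    }[importance]

print(feedback_channel(Importance.HIGH))  # -> "push notification with sound"
```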

We should be mindful of the need for extreme clarity, brevity and simplicity of communications, since our products may be used transiently and under duress where there is little time for complicated decision making.

Principle 7: Explicit policies

Safety policies, where they are needed, should be built into our products and should be explicit and obvious to the people that need to follow them. An example of a safety policy which helps a traveller and supervisor collaborate is “Please share your location with me when you set off for, and arrive at your destination”. Policies should automatically adapt to people’s environment and circumstances.
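As a sketch, a policy like this could be represented explicitly in code rather than left implicit. The trigger names and rule below are illustrative, not a real policy engine:

```python
from dataclasses import dataclass

# Sketch of a safety policy made explicit and obvious in the product;
# the trigger names are our own illustrative choices.
@dataclass
class SharingPolicy:
    description: str
    triggers: tuple[str, ...]  # state changes that activate the policy

    def applies_to(self, event: str) -> bool:
        return event in self.triggers

policy = SharingPolicy(
    description="Share your location when you set off for, "
                "and arrive at, your destination",
    triggers=("departed", "arrived"),
)

for event in ("departed", "paused", "arrived"):
    if policy.applies_to(event):
        print(f"{event}: share location with supervisor")
```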

Principle 8: No fine tuning

We should design and build systems that do not require a great deal of manual configuration or fine tuning of system parameters, such as thresholds, tolerances, scales and options. We should use 'intelligent defaults' and should not hide important functionality deep in the settings.
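A minimal sketch of 'intelligent defaults': every parameter ships with a sensible default, so the system works without any tuning at all. The settings and figures below are hypothetical:

```python
from dataclasses import dataclass

# Illustrative settings object: every parameter has a sensible default,
# so nothing needs manual tuning before the system is usable.
@dataclass
class CheckInSettings:
    interval_minutes: int = 60      # how often a check-in is expected
    grace_period_minutes: int = 15  # tolerance before escalation
    escalation_enabled: bool = True # important behaviour, not hidden away

# Works out of the box with no configuration at all...
settings = CheckInSettings()

# ...but can still be adjusted explicitly where a scenario demands it.
night_shift = CheckInSettings(interval_minutes=30)
```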

Principle 9: Predictable costs

The costs of running our systems, including using them, testing them, supporting them, learning how to use them, power consumption, latency, and paying for them, should be predictable, leading to no unplanned or unexpected costs.

Our pricing should correspond as closely as possible to the actual value that we are creating, and should be seen to deliver value for money when any necessary communication overages or bursts occur.

The latency (delay in time) of the transfer of information around the system should be understood by people. The transfer of information needs to be fast enough for the types of decision making that are needed.
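For illustration, checking latency against a decision-making budget could be as simple as the sketch below. The two-second figure is an assumed example, not a specification:

```python
import time

# Illustrative latency budget: the 2-second figure is an assumed
# stand-in for "fast enough for the decision making that is needed".
LATENCY_BUDGET_SECONDS = 2.0

def within_budget(sent_at: float, received_at: float) -> bool:
    """Flag transfers too slow for timely decision making."""
    return (received_at - sent_at) <= LATENCY_BUDGET_SECONDS

sent = time.time()
received = sent + 0.4  # simulated transfer delay
print(within_budget(sent, received))  # True: fast enough to act on
```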

Principle 10: Resilience

We should design and build systems that are resilient to change and subsystem failure, and that are 'self-healing'. Their availability should be predictable and measurable. We use redundancy to overcome single points of failure.
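As a sketch of redundancy overcoming a single point of failure, delivery can fail over across replicated endpoints. The endpoint names and simulated failures below are illustrative only:

```python
import random

# Sketch of failover across redundant endpoints; the hostnames and
# random failure simulation are illustrative only.
ENDPOINTS = ["primary.example.com",
             "replica-1.example.com",
             "replica-2.example.com"]

def send(endpoint: str, message: str) -> bool:
    """Simulated delivery; a real system would make a network call here."""
    return random.random() > 0.3  # each endpoint may independently fail

def deliver(message: str) -> str:
    """Try each redundant endpoint in turn until one succeeds."""
    for endpoint in ENDPOINTS:
        if send(endpoint, message):
            return endpoint
    raise RuntimeError("all endpoints failed; message queued for retry")

print(deliver("overdue check-in alert"))
```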

Do you find these ideas interesting and think they can be improved? We’re hiring!
