Automated detection and response – a pattern of low-concern and high-impact
It is the hottest July in Israel in 17 years and the day after Tisha B’Av – a day of mourning and fasting on which Jews remember the destruction of the Temple and attempt to be nice to each other for a few days.
A time for reflection.
Today I want to talk about an “anti-design-pattern”.
In architecture and software engineering, a design pattern is a generalised and repeatable solution to a problem.
You can think of a design pattern as a template for how to solve a problem, build a house, construct a piece of furniture or write software code.
In the world of architecture, design patterns are thousands of years old. The correct way to build a house or a chair was worked out maybe 2000 years ago, in a process of trial and error, inspiration, creativity and innovation. The result became accepted because it was good – it made people feel good living in the house or sitting in the chair.
Design patterns usually have (or should have) names that pretty much describe what the pattern does. A pattern called “Office chair” is a template for building a chair for an office. The software pattern called “Proxy” describes a function that hides the operation of another function, forwarding messages to and receiving messages from the “real” function. As in real life, the principal delegates things to the proxy – “take this ballot and vote for me in the shareholder meeting” is an example of a people proxy.
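The Proxy pattern is simple enough to sketch in a few lines of code. Here is a minimal illustration in Python using the shareholder-ballot analogy; the class and method names (`Shareholder`, `VotingProxy`) are invented for this example:

```python
class Shareholder:
    """The 'real' party on whose behalf the vote is cast."""
    def vote(self, motion: str) -> str:
        return f"aye on '{motion}'"

class VotingProxy:
    """Holds a ballot for a shareholder and forwards the vote."""
    def __init__(self, principal: Shareholder):
        self._principal = principal

    def vote(self, motion: str) -> str:
        # Forward the request to the real object and relay its answer back.
        return self._principal.vote(motion)

# The caller talks to the proxy, but the result comes from the principal.
result = VotingProxy(Shareholder()).vote("approve annual report")
```

The caller never needs to know whether it is talking to the proxy or to the real object – which is exactly the point of the pattern.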
What happens when people make or do things that are the opposite of a best-practice design pattern?
We call this an “anti-design-pattern”. The rationale for formalising anti-design-patterns is to learn from the mistakes of others and minimise the impact of those mistakes.
Low concern, High impact: An anti-design pattern for clinical trial monitoring
Low-concern, High-impact essentially says that low concern about a potential issue quickly leads to zero awareness of the issue. Zero awareness of the issue results in zero testing of the related data inputs. Zero testing of the related data inputs means that when something changes, the impact of that change will be high. Attempts to recover from the high-impact situation may trigger a series of additional issues – a “cascade failure”. Cascade failures happen in strongly interconnected systems such as power grids, complex computer software and the human body.
Let’s illustrate the Low-concern, high-impact anti-design pattern with an example from the world of clinical trials.
Consider a scenario where the study monitors use a data extract to produce a report of patients who were not eligible (i.e. did not pass the inclusion/exclusion criteria) but are participating in the trial. Quoting from the European Medicines Agency page on GCP:
Adherence to the protocol is a fundamental part of the conduct of a clinical study. Sponsors and investigators should not use systems of prospectively approving protocol deviations, in order to effectively widen the scope of a protocol. Protocol design should be appropriate to the populations required and if the protocol design is defective, the protocol should be amended.
GCP does permit deviations from the protocol when necessary to eliminate immediate hazards to the subjects, but this should not normally arise in the context of inclusion/exclusion criteria, since the subject is not yet fully included in the trial at that point in the process.
I think we can agree that a report that enables the study monitor and sponsor to respond quickly to IE violations is a valuable tool.
Let’s now describe an attack scenario where the players fall into the Low-concern, High-impact anti-design pattern.
The sponsor uses C# code that extracts data from the EDC database to XML, and a report writer application then formats the XML data into a report of IE violations. The EDC developer made a small change to the EDC database schema in order to enable ingesting data from mobile electronic source devices. The C# code maintainer was not aware of the EDC schema change, and the study monitor who runs the report is not aware of the vagaries of C# and Oracle schema changes – nor does she want to know or understand code.
This is the first part of the pattern – “Low-awareness“.
The study monitor runs the report as usual and everything looks OK. There are no IE violations – which is a good thing. Unbeknownst to the monitoring team, due to the schema change, records that were ingested from the electronic source tablets are not joined with the EDC subject record and as a result do not appear in the report.
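This failure mode is worth seeing concretely. Below is a hedged sketch – in Python with SQLite rather than the sponsor's actual C#/Oracle stack, with invented table and column names – of how a schema change can make an extract's join silently return nothing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE subjects (subject_id TEXT PRIMARY KEY, site TEXT);
    -- After the schema change, eSource records carry a device-prefixed
    -- identifier instead of the bare subject_id the extract joins on.
    CREATE TABLE ie_violations (device_subject_id TEXT, criterion TEXT);
    INSERT INTO subjects VALUES ('S-001', 'Site 12');
    INSERT INTO ie_violations VALUES ('TAB-7:S-001', 'exclusion #4');
""")

# The extract code still inner-joins on the old identifier, so the
# violation record is dropped without any error being raised.
rows = con.execute("""
    SELECT s.subject_id, v.criterion
    FROM subjects s
    JOIN ie_violations v ON v.device_subject_id = s.subject_id
""").fetchall()
```

The query runs, returns an empty result set, and the report shows green lights – no exception, no warning, just a violation that never appears.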
Investigators at low-performing sites notice that there is no oversight of IE deviations and bend the rules in order to enrol patients who do not pass the IE criteria. Note that there is a strong economic motivation for the PI to enrol as many patients as possible.
As a result of low-concern and low-awareness there is now zero-knowledge of the bug – since the report of IE violations is showing green lights.
Two years later (it’s a four-year multi-centre global study), the data is locked and an interim analysis is performed. The study statistician uses the same C# code to extract the data, but this time she notices missing data and pushes back to the sponsor that something does not seem kosher. The sponsor calls the CRO, who calls in a quality auditor, and three months later the auditor discovers dozens of subjects who were in violation of the inclusion/exclusion criteria. This is a major setback to the study.
This is the second part of the pattern – “High-impact“.
An investigation of the case immediately leads to the question of why the C# data extract code was not re-validated after the EDC schema change. The answer leads us back to “low-concern”. The data extract is provided by a third-party clinical tool developer that the CRO uses in thousands of studies, and it has always worked. In addition, the third-party clinical tool provider works with dozens of EDC vendors and may not have the management attention and resources to track changes in the EDC systems that their customers use. On top of all that, the engineering team at the clinical tool provider had bad vibes in the past with the CRO IT staff, and after a few annoying support calls they flipped the bozo bit on the CRO IT folks, labelling them as stupid and incompetent.
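What would re-validation have looked like? Even a single regression check that seeds a known IE violation and asserts it surfaces in the extract would have failed loudly the day the schema changed. A hedged sketch (function and table names are invented for illustration, not from any real validation suite):

```python
import sqlite3

def extract_ie_violations(db):
    """Stand-in for the real extract: join violations to enrolled subjects."""
    return db.execute(
        "SELECT s.subject_id FROM subjects s "
        "JOIN ie_violations v ON v.subject_id = s.subject_id"
    ).fetchall()

def validate_extract():
    """Regression check: seed one known violation, assert it is reported."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE subjects (subject_id TEXT PRIMARY KEY);
        CREATE TABLE ie_violations (subject_id TEXT);
        INSERT INTO subjects VALUES ('S-001');
        INSERT INTO ie_violations VALUES ('S-001');
    """)
    report = extract_ie_violations(db)
    # If a schema change breaks the join, this fails loudly instead of
    # letting the report silently show green lights.
    assert ('S-001',) in report, "extract dropped a seeded IE violation"
    return report

report = validate_extract()
```

The point is not the five lines of test code – it is that someone has to be concerned enough to write and re-run them after every upstream change.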
Don’t flip the bozo bit
Don’t flip the bozo bit is a reference to Bozo the clown. Flipping the bozo bit is a conscious decision to ignore another person’s input – because that person is considered a “Bozo”. Jim McCarthy (a former Visual C++ lead at Microsoft) coined the term in his 1995 book Dynamics of Software Development. McCarthy’s advice is that everyone has something to contribute – it’s easy and tempting, when someone ticks you off or is mistaken (or both), to simply disregard all their input in the future by setting the “bozo flag” to TRUE for that person. But by taking the lazy way out, you poison interactions with other people and can never again avail yourself of help from the “bozo”.
Don’t flip the bozo bit is related to Low-concern, High-impact and can be found in almost all cases of high-impact damage due to untested changes.
By being aware of anti-design patterns, you can improve study monitoring performance and GCP compliance.
The key to achieving that is improving your awareness of the meta-processes that you use: how you solve problems, how you react to issues and how you respond to annoying (and possibly mistaken) colleagues and customers.