Hack back the user interface for clinical trials

As part of my campaign for site-coordinator- and study-monitor-centric clinical trials, we first need to understand how to exploit a vulnerability in human psychology.

As a security analyst, this is how I look at things: exploits of vulnerabilities.

In 2007, B.J. Fogg, founder and director of the Stanford Behavior Design Lab, taught a class on “mass interpersonal persuasion.” A number of students in the class went on to apply these methods at Facebook, Uber and Instagram.

The Fogg behavior model says that three things must happen simultaneously to initiate a behavior: motivation (M), ability (A) and a trigger (T).

When we apply this model to patient-centric trials, we immediately understand why patient-centricity is so important.

Motivation – the patient wants therapy (and may also be compensated for her participation).

Ability is ease of action. Make it easy for a patient to participate, and she will not need a high energy level to perform the requisite study tasks (take a pill, operate a medical device, provide feedback on a mobile app).

Without an external trigger, the desired behavior (participating in the study in a compliant way) will not happen. Typically, text messages are used to remind the patient to do something (take a treatment or log an ePRO diary). A reminder to log a patient diary is a distraction; when motivation and ability exceed the trigger's energy cost, the patient will comply. If the trigger's energy cost is too high (for example, poor UX in the ePRO app), the patient will not comply, and levels of protocol adherence will be low.
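One simple way to sketch this is to treat motivation and ability as scores and the trigger as an energy cost that the prompted behavior must clear. This is an illustrative toy, not Fogg's formal model; all names and numbers below are made up.

```python
# Toy sketch of the Fogg behavior model: a behavior fires only when
# motivation x ability clears the trigger's energy cost at the moment
# the prompt arrives.

def behavior_occurs(motivation: float, ability: float,
                    trigger_cost: float) -> bool:
    """Return True if the prompted behavior happens.

    motivation, ability: 0.0 (none) to 1.0 (high)
    trigger_cost: friction added by the prompt itself
                  (e.g. a clunky ePRO app raises this).
    """
    return motivation * ability > trigger_cost

# A motivated patient with an easy task complies...
assert behavior_occurs(motivation=0.8, ability=0.9, trigger_cost=0.3)
# ...but poor UX (a high trigger cost) defeats the same patient.
assert not behavior_occurs(motivation=0.8, ability=0.9, trigger_cost=0.8)
```

The design lever is clear from the sketch: you cannot easily raise a patient's motivation, but you can lower the trigger cost with better UX.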

The secret is designing the study protocol and the study UX so that the reminder trigger serves the patient and not the patient serving the system.

People-centric clinical trials

Recall that any behavior (logging data, following up) requires three things: motivation, ability and a trigger.

A site coordinator can be highly motivated. She may be well trained and able to use the EDC system even if the UX is vintage 90s.

But if the system doesn't give anything back to her, reminders to close queries or to follow up are just distractions.

The secret is designing the study protocol and the study UX so that the reminder trigger serves the CRC and the CRA, not the other way around.

When we state the requirement as a trigger serving the person – we then understand that it is not about patient-centricity.

It is about people-centricity.


A better tomorrow for clinical trials

A better tomorrow – Times of crisis usher in new mindsets

By David Laxer. Spoken from the heart.

In these trying days, as we adjust to new routines and discover new things about ourselves daily, we are also reminded that the human spirit is stronger than any pandemic and we have survived worse.

And because we know we’re going to beat this thing, whether in 2 weeks or 2 months, we also know that we will eventually return to normal, or rather, a new normal.

In the meantime, the world is showing a resolve and a resilience that gives us much room to hope for a better tomorrow for developing new therapeutics.

However, these days have us wondering how things might look if clinical trials were conducted differently. It's a well-known fact that clinical trials play an integral role in the development of new, life-saving drugs, but getting a drug approved by the FDA takes an average of 7.5 years and anywhere between $150 million and $2 billion per drug.

Reasons for failure

Many clinical studies still use outdated methods for data collection and verification: they still use a fax machine, for crying out loud. They continue to manually count leftover pills in bottles, and still rely on patients' diary entries to ensure adherence.

Today, the industry faces a new challenge: recruiting enough participants as COVID-19 forces people to stay at home and away from research hospital sites.

Patient drop-outs, adverse events and delayed recording of adverse events are still issues for pharma and medical device companies conducting clinical research. The old challenge of creating interpretable data to examine the safety and efficacy of new therapeutics remains.

The Digital Revolution:

As hard as it is to believe, the clinical trial industry just might be the last major industry to undergo digital transformation.

As every other aspect of modern life has already been digitized, from banking to accounting to education, now, more than ever, is the time to accelerate the transition of this crucial process, especially as we are painfully reminded of the need to find a vaccine. Time is not a resource we can waste any longer.

Re-imagining the future

When we created FlaskData we were primarily driven by our desire to disrupt the clinical trial monitoring paradigm and bring it into the 21st century – meaning real-time data collection and automated detection and response. From the beginning we found fault in the fact that clinical trials were, and still are, overly reliant on manual processes, which cause unacceptable delays in bringing new and essential drugs and devices to market. These delays, as we are reminded these days, not only cost money and time – ultimately, they cost lives.

To fully achieve this digitization, it's important to create a secure cloud service that accelerates the entire process and provides sponsors with an immediate picture and interpretable data, without having to spend 6-12 months cleaning data. This is achieved with real-time data collection, automated detection and response, and an open API that enables any healthcare application to collect clinical-trial-grade data and assure patient adherence to the clinical protocol.
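As a sketch only – the endpoint and field names below are hypothetical, not a real FlaskData API – a client submitting one observation through such an open API might look like this:

```python
import json
from datetime import datetime, timezone

# Hypothetical shape of a clinical-trial-grade observation record:
# who (subject), what (form + fields), and when (capture timestamp
# for the audit trail). None of these names come from a real API.

def make_observation(subject_id, form, fields):
    """Build one observation record as a plain dict."""
    return {
        "subject_id": subject_id,
        "form": form,
        "fields": fields,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_observation("P-042", "vitals",
                          {"systolic": 118, "diastolic": 76})
payload = json.dumps(record)
# A real client would POST `payload` to the service, e.g.:
#   requests.post("https://api.example.com/v1/observations", data=payload)
```

The point of capturing the timestamp at source is that downstream detection logic can act on the data immediately, rather than on a transcription made days later.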

Our Promise:

It didn’t take a virus to make us want to deliver new medical breakthroughs into the hands that need them most, but it has definitely made us double down on our resolve to see it through. The patient needs to be placed at the center of the clinical research process and we are tasked to reduce the practical, geographical and financial barriers to participation. The end result is a more engaged patient, higher recruitment and retention rates, better data and reduced study timelines and costs.

The Need For Speed

As the world scrambles to find a coronavirus vaccine, we fully grasp two key things: 1) focus on patients, and 2) give clinical operations teams the ability to eliminate inefficiencies and move at lightning speed. In these difficult times there is room for optimism, as it is crystal clear just how important it is to speed up the process.


Social Distancing

In this period of social distancing, we can only wonder about the benefits of conducting clinical trials remotely. We can only imagine how many trials have been rendered useless as patients, reluctant to leave their houses, have skipped the required monitoring, forgotten to take their pills, and lost their diary entries amidst the chaos.

With a fully digitized process for electronic data collection, social distancing would have no effect on the clinical trial results.

About David Laxer

David is a strategist and story-teller. He says it best – “Ultimately, when you break it down, I am a storyteller and a problem solver. The kind that companies and organizations rely on for their brand DNA, culture and long-lasting reputation”.


Reach out to David on LinkedIn

7 tips for an agile healthtech startup

It’s a time when we are all remote workers. Startups are looking for new ways to add value to customers. Large pharmas are looking for ways to innovate without breaking the system.

To quote Bill Gates from 25 years ago: asked how Microsoft could compete in enterprise software when it only had business-unit capabilities, Gates replied that large enterprises are a collection of many business units, so he was not worried.

The same is true today – whether you are a business unit in Pfizer or a 5-person healthtech startup.

Here are 7 tips for innovation in healthcare:

1. One person on the team will be a technical guru; let's call him/her the CTO. Don't give the CTO admin access to AWS – he/she should not be fooling around with your instances. The same goes for sudo access to the Linux machines.
2. Make a "no" rule – no changes 1 hour before end of day, and no changes on Thursday/Friday.
3. Security – think about security before writing code. Develop a threat model first. I've seen too many startups get this wrong. Big HMOs get it wrong too.
4. Standards – standardize on one dev stack. Listen to the CTO, but do not chase new things. If a new requirement comes up, talk about it, be critical, sleep on it. Tip – your CTO's first inclination will be to write code; this is not always the best strategy, and the best code is often no code at all. You may be tempted to use third-party tools like Tableau – be very, very careful. The licensing or the lack of multi-tenancy may be a very bad fit for you, so always keep your eye on your budget and business model.
5. Experiment – budget for experimentation by the dev team. It is better to plan an experiment, block out time and money for it, and fail, than to get derailed in an unplanned way. This also keeps things interesting for the team and helps you know they are not doing their own midnight projects.
6. Minimize – always be removing features. Less is more.
7. CAPA (corrective and preventive action) – debrief everything, especially failures. Document in a Slack channel and create follow-up actions (easy in Slack – just star them).

Streaming clinical trials in a post-Corona future

Last week, I wrote about using automated detection and response technology to mitigate the next Corona pandemic.

Today – we’ll take a closer look at how streaming data fits into virtual clinical trials.

Streaming – not just for Netflix

Streaming real-time data and automated digital monitoring are not foreign ideas to people quarantined at home during the current COVID-19 pandemic. Streaming: we are at home watching Netflix. Automated monitoring: we are now using digital surveillance tools based on mobile phone location data to locate and track people who came into contact with COVID-19-infected people.

Slow clinical trial data management. Sponsors flying blind.

Clinical trials use batch processing of data. Clinical trials currently do not stream patient / investigator signals in order to manage risk and ensure patient safety.

The latency of batch processing in clinical trials is something like 6-12 months, if we measure from first patient in to the time a biostatistician starts working on an interim analysis.

Risk-based monitoring for clinical trials uses batch processing to produce risk profiles of sites in order to prioritize another batch process – namely site visits and SDV (source data verification).

The latency of central CRO monitoring using RBM ranges widely, from 1 to 12 weeks. This is reasonable considering that the design objective of RBM is to prioritize a batch process of site monitoring that runs every 5-12 weeks.

In the meantime, the study is accumulating adverse events and losing patients to non-compliance, and the sponsor is flying blind.

Do you think 2003-vintage data formats will work in 2020 for the coronavirus?

An interesting side-effect of batch processing for RBM is the use of SDTM for processing data and preparing reports and analytics.

SDTM provides a standard for organizing and formatting data to streamline processes in collection, management, analysis and reporting. Implementing SDTM supports data aggregation and warehousing; fosters mining and reuse; facilitates sharing; helps perform due diligence and other important data review activities; and improves the regulatory review and approval process. SDTM is also used in non-clinical data (SEND), medical devices and pharmacogenomics/genetics studies.

SDTM is one of the required standards for data submission to FDA (U.S.) and PMDA (Japan).

It was never designed nor intended to be a real-time streaming data protocol for clinical data. It was first published in June 2003. Variable names are limited to 8 characters (a SAS 5 transport file format limitation).

For more information on SDTM, see the 2011 paper by Fred Woods describing the challenges of creating SDTM datasets. One of the surprising challenges is date/time formats – which continue to stymie biostatisticians to this day. See Jenya's excellent post on the importance of collecting accurate date-time data in clinical trials. We have open, vendor-neutral standards and JavaScript libraries to manipulate dates; it is a lot easier today than it was in June 2003.

COVID-19 – we need speed

In a post-COVID-19 era, site monitoring visits are impossible and patients are at home. The demands on clinical trials are outgrowing the batch-processing paradigm. Investigators, nurses, coordinators and patients cannot wait for the data to be converted to SDTM, processed in a batch job and sent to a data manager. Life science sponsors need that data now, and front-line teams with patients need an immediate response.

Because ePRO, EDC and wearable data collection are siloed (or waiting for batch file uploads over a USB connection, as with the Philips Actiwatch or MotionWatch), batch ETL tools cannot process the data in time. To put this in context: the patient has to come into the site, find parking, and give the watch to a site coordinator, who plugs the device into a USB port, uploads the data, and imports it into the EDC – which then waits for an ETL job to convert it to SDTM and process it into an RBM system.

Streaming data for clinical research in a COVID-19 era

In order to understand the notion of streaming data for clinical research in a COVID-19 era, I drew inspiration (and shamelessly borrowed the graphics) from Bill Scott's excellent article on Apache Kafka – "Why are you still doing batch processing? ETL is dead".

Crusty Biotech

The Crusty Biotech company has developed an innovative oral treatment called Crusdesvir for the coronavirus. They contract with a site, Crusty Kitchen, to test the safety and efficacy of Crusdesvir. Crusty Kitchen has one talented PI and an efficient site team that can process 50 patients/day.

The CEO of Crusty Biotech decides to add one more site, but his clinical operations process is built for one PI at a time who can perform the treatment procedure in a controlled way and comply with the Crusdesvir protocol. It's hard to find a skilled PI and site team, but he finally finds one and signs a contract with them.

Now they need to add 2 more PIs and sites, and then 4 more. With the demand to deliver a working COVID-19 treatment, Crusty Biotech needs to recruit more sites that are qualified to run the treatment. Each site needs to recruit (and retain) more patients.

The Crusty Biotech approach is an old-world batch workflow of tasks wrapped in a rigid environment. It is easy to create and works for small batches, but it cannot grow (or shrink) on demand. Scaling requires more sites, introduces more time into the process, more moving parts, more adverse events, and less ability to monitor with site visits – and the most crucial piece of all: it lowers the reliability of the data, since each site runs its own slow-moving, manually monitored process.

Castle Biotech

Castle Biotech is a competitor to Crusty Biotech – they also have an anti-viral treatment with great potential. They decided to plan for rapid ramp-up of their studies by using a manufacturing-process approach: an automated belt delivering raw materials and work-in-process along a stream of work-stations. (This is how chips are manufactured, by the way.)

Belt 1: Ingredients – delivers individual measurements of ingredients.

Belt 1 is handled by Mixing-Baker; when the ingredients arrive, she knows how to mix them, then puts the mixture onto Belt 2.

Belt 2: Mixture – delivers the perfectly whisked mixture.

Belt 2 is handled by Pan-Pour-Baker; when the mixture arrives, she delicately measures and pours the mixture into the pan, then puts the pan onto Belt 3.

Belt 3: Pan – delivers the pan with an exact measurement of mixture.

Belt 3 is handled by Oven-Baker; when the pan arrives, she puts the pan in the oven and waits the specific amount of time until it's done. When it is done, she puts the cooked item on the next belt.

Belt 4: Cooked Item – delivers the cooked item.

Belt 4 is handled by Decorator; when the cooked item arrives, she applies the frosting in an interesting and beautiful way. She then puts it on the next belt.

Belt 5: Decorated Cupcake – delivers a completely decorated cupcake.

We see that once the infrastructure is set up, we can easily add more bakers (PIs in our clinical trial example) to handle more patients. It's easy to add new cohorts and new designs by adding different types of 'bakers' to each belt.
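The belt metaphor maps naturally onto a staged data pipeline. A minimal sketch in Python, with each belt as a generator stage (all names are illustrative):

```python
# Each "belt" is a generator stage: it reads from the belt upstream and
# writes to the belt downstream. Stages know nothing about each other,
# so capacity can be added per stage without rebuilding the line.

def mix(ingredients):            # Belt 1 -> Belt 2
    for batch in ingredients:
        yield f"mixed({batch})"

def pour(mixtures):              # Belt 2 -> Belt 3
    for m in mixtures:
        yield f"pan({m})"

def bake(pans):                  # Belt 3 -> Belt 4
    for p in pans:
        yield f"baked({p})"

def decorate(items):             # Belt 4 -> Belt 5
    for i in items:
        yield f"cupcake({i})"

line = decorate(bake(pour(mix(["flour+eggs", "flour+cocoa"]))))
print(list(line))
```

Because each stage only touches the belt it reads from and the belt it writes to, whole cohorts of "bakers" (or PIs) can be added independently.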

How does cupcake-baking relate to clinical data management?

The Crusty Biotech approach is old-world batch/ETL – a workflow of tasks set in stone. 

It’s easy to create. You can start with a paper CRF or start with a low-cost EDC. It works for small numbers of sites and patients and cohorts but it does not scale.

However, the process breaks down when you have to visit sites to monitor the data and do SDV because you have a paper CRF. Scaling the site process requires additional sites, more data managers, more study monitors/CRAs, more batch processing of data, and more round trips to the central monitoring team and data managers. More costs, more time, and a 12-18 month delay in delivering a working coronavirus treatment.

The Castle Biotech approach is like data streaming. 

Using a tool like Apache Kafka, the belts are topics – streams of similar data items – and small applications (consumers) can listen on a topic (for example, adverse events) and notify the site coordinator or study nurse in real time. As the flow of patients in a study grows, we can add more adverse-event consumers to do the automated work.
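For illustration, here is the topic/consumer idea simulated in memory – no Kafka broker required, and the topic name and notification logic are made up:

```python
from collections import defaultdict

# An in-memory stand-in for Kafka-style topics and consumers.
# publish() fans an event out to every consumer subscribed to the topic.

class MiniBus:
    def __init__(self):
        self.consumers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.consumers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.consumers[topic]:
            handler(event)

bus = MiniBus()
alerts = []

# A small "adverse event" consumer that would notify the site coordinator.
bus.subscribe("adverse-events",
              lambda e: alerts.append(f"notify coordinator: {e['patient']}"))

bus.publish("adverse-events", {"patient": "P-017", "term": "headache"})
```

Scaling out is then just subscribing more consumers to the same topic – the producers (patients, devices, sites) never change.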

Castle Biotech is approaching the process of clinical research with a patient-centric streaming and digital management model, which allows them to expand the study and respond quickly to change (the next pandemic in Winter 2020?).

The moral of the story – don't be Crusty.


So what’s wrong with 1990s EDC systems?

Make no mistake about it: the EDC systems of 2020 are using a 1990s design. (OK – granted, there are some innovators out there, like ClinPal with their patient-centric trial approach, but the vast majority of today's EDC systems, from Omnicomm to Oracle to Medidata to Medrio, use a 1990s design.) Even the West Coast startup Medable is going the route of "if you can't beat them, join them" and fielding the usual alphabet soup of buzzword-compliant modules – ePRO, eSource, eConsent, etc. Shame on you.

Instead of using in-memory databases for real-time clinical data acquisition, we’re fooling around with SDTM and targeted SDV.

When in reality, SDTM is a standard for submitting tabulated results to regulatory authorities (not a transactional database, nor an appropriate data model for time series). And even more reality – we should not be doing SDV at all, so why do targeted SDV, if not to perpetuate the CRO billing cycle?

Freedom from the past comes from ridding ourselves of the clichés of today.


Personally – I don’t get it. Maybe COVID-19 will force a change in the paper-batch-SDTM-load-up-the-customer-with-services system.

So what is wrong with 1990s EDC?

The really short answer is that computers do not have two kinds of storage any more.

It used to be that you had the primary store, which was anything from acoustic delay lines filled with mercury, via small magnetic doughnuts and transistor flip-flops, to dynamic RAM.

And then there was the secondary store: paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines, and these days so small they disappear into your pocket next to your MP3 player.

And people still program their EDC systems this way.

They have variables in paper forms that site coordinators fill in on paper and then, 3-5 days later, enter into suspiciously paperish-looking HTML forms.

For some reason – instead of making a great UI for the EDC – a whole group of vendors gave up and created a new genre called eSource, creating immense confusion as to why you need another system at all.

What the guys at Gartner euphemistically call a highly fragmented and non-integrated technology stack.
What the site coordinators who have to deal with 5 different highly fragmented and non-integrated technology stacks call a nightmare.

Awright.

Now we have some code – in Java or PHP or maybe even .NET – that reads the variables from the form and puts them into variables in memory.

Now we have variables in “memory” and move data to and from “disk” into a “database”.

I like the database thing – where clinical people ask us – “so you have a database”. This is kinda like Dilbert – oh yeah – I guess so. Mine is a paradigm-shifter also.

Anyhow, today computers really only have one kind of storage, and it is usually some sort of disk; the operating system and the virtual memory management hardware have converted the RAM into a cache for the disk storage.

The database process (say, Postgres) allocates some virtual memory and tells the operating system to back this memory with space from a disk file. When it needs to send an object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If and when the kernel decides it needs the RAM for something else, the page gets written to the backing file and the RAM page is reused elsewhere.
The next time Postgres refers to that virtual memory, the operating system finds a RAM page, possibly freeing another, and reads the contents back in from the backing file.

And that’s it.
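The same point can be seen from user space: a file mapped into virtual memory is read and written like ordinary RAM, while the kernel handles the paging. A minimal sketch using Python's standard `mmap` module:

```python
import mmap
import os
import tempfile

# "One kind of storage": map a file into virtual memory and treat it
# like RAM. The kernel pages the data to and from the backing file.

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # reserve one page on disk

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)    # map the file into memory
    mem[0:5] = b"AE001"                  # write as if it were RAM
    mem.flush()                          # kernel persists dirty pages
    mem.close()

with open(path, "rb") as f:
    assert f.read(5) == b"AE001"         # the "disk" copy is the same data
```

No explicit "save to database" step: the write to memory and the write to disk are the same write, which is the whole argument against the two-kinds-of-storage programming style.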

Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on.
And maybe, with COVID-19 and sites getting shut down, people will catch on to what we really need: a really nifty user interface for – GASP – THE SITE COORDINATORS, and even more AMAZING – a single in-memory database for ALL the data from patients, investigators and devices.

Because at the end of the day, grandma knows that there ain't no reason not to have a single data model for everything and just shove it into virtual memory for instantaneous, automated DATA QUALITY, PATIENT SAFETY AND RISK ASSESSMENT in real time.

Not 5-12 weeks later after a research site visit, not a month later after the data management trolls in the basement send back some reports with queries, and certainly not after spending 6-12 months cleaning up unreliable data due to the incredibly stupid process of paper to forms to disk to queries to site visits to data managers to data cleaning.

I love being a CRA, but the role as it exists today is obsolete.

I think that COVID-19 will be the death knell for on-site monitoring visits and SDV. My prediction for 2020 and the next generation of clinical research: mobile EDC for sites and patients, and device integration that just works.

I’m neither a clinical quality consultant nor a management consultant. I cannot tell a CRO not to bill out hours for SDV and CRA travel, inflating the study budget by 25-30% and delaying results by 12-18 months.

Nope.   I’m not gonna tell CROs what to do.    Darwin will do that for me.

I develop and support technology to help life science companies get to market faster. I want to save lives by shortening the time to complete clinical trials for COVID-19 vaccines and treatments by 3-6 months.

I want to provide open access to research results – for tomorrow’s pandemic.

I want to enable real-time data sharing.

I want to enable participants in the battle with COVID-19 to share real-world / placebo arm data, making the fight with COVID-19 more efficient and collaborative and lay the infrastructure for the next wave of pandemics.

I want to provide real-time data collection for hospitals, patients and devices, with AI-driven detection of protocol violations and automated response, enabling researchers to dramatically improve data reliability, allowing better decision-making and improving patient safety.

The FDA (a US government regulatory bureaucracy) told the clinical trial industry 10 years ago to use eSource and modern IT. If the FDA couldn't get it done, then maybe survival of the fittest and COVID-19 will do the job.

FDA’s Guidance for Industry: Electronic Source Data in Clinical Investigations, says, in part:
“Many data elements (e.g., blood pressure, weight, temperature, pill count, resolution of a symptom or sign) in a clinical investigation can be obtained at a study visit and can be entered directly into the eCRF by an authorized data originator. This direct entry of data can eliminate errors by not using a paper transcription step before entry into the eCRF. For these data elements, the eCRF is the source. If a paper transcription step is used, then the paper documentation should be retained and made available for FDA inspection.”

I loved this post by Takoda Roland on the elephant in the room.

Source data validation can easily account for more than 80% of a monitor's time. You go on site (or get a file via Dropbox). Then you page through hundreds of pages of source documents to ensure nothing is missing or incomplete. You check the bare minimum amount of data before rushing off to catch your flight, only to do it all again tomorrow in another city, and you are struck with this thought: I love being a CRA, but the role as it exists today is obsolete.

Opinion: A Futurist View on the Use of Technology in Clinical Trials


Using automated detection and response technology to mitigate the next Corona pandemic

What happens the day after?   What happens next winter?

Sure – we must find effective treatments and vaccines. Sure – we need to reduce or eliminate the need for on-site monitoring visits to hospitals in clinical trials. And sure – we need to enable patient monitoring at home.

But let’s not be distracted from 3 more significant challenges:

1 – Improve patient care

2 – Enable real-time data sharing. Enable participants in the battle with COVID-19 to share real-world / placebo arm data, making the fight with COVID-19 more efficient and collaborative.

3- Enable researchers to dramatically improve data reliability, allowing better decision making and improving patient safety.

Clinical research should ultimately improve patient care.

The digital health space is highly fragmented (I challenge you to precisely define the difference between patient engagement apps, patient adherence apps and patient management apps). There are over 300 digital therapeutic startups. We lack a common 'operating system', and there is a dearth of vendor-neutral standards that would enable interoperability between different digital health systems, mobile apps and services.

By comparison – clinical trials have a well-defined methodology, standards (GCP) and generally accepted data structures in case report forms.  So why do many clinical trials fail to translate into patient benefit?

A 2017 article by Carl Heneghan, Ben Goldacre & Kamal R. Mahtani “Why clinical trial outcomes fail to translate into benefits for patients”  (you can read the Open Access article here) states the obvious: that the objective of clinical trials is to improve patients’ health.

The article points to a number of serious issues, ranging from badly chosen outcomes, composite outcomes, subjective outcomes and lack of relevance to patients and decision makers, to issues with data collection and study monitoring.

Clinical research should ultimately improve patient care. For this to be possible, trials must evaluate outcomes that genuinely reflect real-world settings and concerns. However, many trials continue to measure and report outcomes that fall short of this clear requirement…

Trial outcomes can be developed with patients in mind, however, and can be reported completely, transparently and competently. Clinicians, patients, researchers and those who pay for health services are entitled to demand reliable evidence demonstrating whether interventions improve patient-relevant clinical outcomes.

There can be fundamental issues with study design and how outcomes are reported.

This is an area where modeling and ethical conduct intersect; both are critical.

Technology can support modeling using model verification techniques (used in software engineering, chip design, aircraft and automotive design).

However, ethical conduct is still a human attribute that can neither be automated nor replaced with an AI.

Let’s leave modeling to the AI researchers and ethics to the bioethics professionals.

For now, at least.

In this article, I will take a closer look at 3 activities that have a crucial impact on data quality and patient safety. These 3 activities are orthogonal to the study model and ethical conduct of the researchers:

1 – The time it takes to detect and log protocol deviations.

2 – Signal detection of adverse events (related to 1)

3 – Patients lost to follow-up (also related to 1)

Time to detect and log deviations

The standard for study monitors is to visit investigational sites once every 5-12 weeks. A Phase IIb study with 150 patients that lasts 12 months would typically have 6-8 site visits (which, incidentally, cost the sponsor $6-8M including the rewrites, reviews and data management loops to close queries).

Adverse events

As reported by Heneghan et al:

A further review of 11 studies comparing adverse events in published and unpublished documents reported that 43% to 100% (median 64%) of adverse events (including outcomes such as death or suicide) were missed when journal publications were solely relied on [45]. Researchers in multiple studies have found that journal publications under-report side effects and therefore exaggerate treatment benefits when compared with more complete information presented in clinical study reports [46]

Loss of statistical significance due to patients lost to follow-up

As reported by Akl et al. in "Potential impact on estimated treatment effects of information lost to follow-up in randomized controlled trials (LOST-IT): systematic review" (you can see the article here):

When we varied assumptions about loss to follow-up, results of 19% of trials were no longer significant if we assumed no participants lost to follow-up had the event of interest, 17% if we assumed that all participants lost to follow-up had the event, and 58% if we assumed a worst case scenario (all participants lost to follow-up in the treatment group and none of those in the control group had the event).
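A toy sensitivity analysis makes this concrete: impute outcomes for patients lost to follow-up under different assumptions and watch the estimated effect move. All counts below are invented for illustration; this is the spirit of the Akl et al. analysis, not their data.

```python
# Worst-case / best-case imputation for loss to follow-up.
# Illustrative counts: 100 enrolled per arm, 20 lost per arm.

def risk_diff(ev_t, n_t, ev_c, n_c):
    """Risk difference: event rate (treatment) minus event rate (control)."""
    return ev_t / n_t - ev_c / n_c

# Observed among followed-up patients: 10/80 events in treatment, 20/80 in control.
observed = risk_diff(10, 80, 20, 80)

# Best case: assume no lost patient had the event.
best_case = risk_diff(10, 100, 20, 100)

# Worst case: every lost treatment patient had the event, no lost control did.
worst_case = risk_diff(10 + 20, 100, 20, 100)

print(f"observed {observed:+.3f}, best {best_case:+.3f}, worst {worst_case:+.3f}")
```

With these made-up numbers the observed risk difference is -0.125 (treatment looks protective), but under the worst-case assumption it flips to +0.10 – exactly the fragility the LOST-IT review describes.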

Real-time data

Real-time data (not data collected from paper forms 5 days after the patient left the clinic) is key to providing an immediate picture and assuring interpretable data for decision-making.

Any combination of data sources should work – patients, sites, devices, electronic medical record systems, laboratory information systems or some of your own code. Like this:

[Figure: mobile eSource, mobile ePRO, medical device API]

Signal detection

The second missing piece is signal detection for safety, data quality and risk assessment of patient, site and study.

Signal detection should be based upon the clinical protocol and be able to classify the patient into 1 of 3 states: complies, exception (took too much, too little or too late, for example) and miss (missed treatment or missing data, for example).
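A minimal sketch of such a classifier, assuming an illustrative 4-hour dose window and a ±10% dose tolerance (neither of which comes from a real protocol):

```python
from datetime import datetime, timedelta
from typing import Optional

# Classify one dosing event into the three states named above:
# "complies", "exception", "miss". Thresholds are illustrative only.

def classify(scheduled: datetime, taken: Optional[datetime],
             dose_mg: Optional[float], expected_mg: float = 50.0,
             window: timedelta = timedelta(hours=4)) -> str:
    if taken is None or dose_mg is None:
        return "miss"                               # missed treatment or missing data
    too_late = abs(taken - scheduled) > window      # outside the dose window
    wrong_dose = abs(dose_mg - expected_mg) > 0.1 * expected_mg  # outside ±10%
    return "exception" if (too_late or wrong_dose) else "complies"

t0 = datetime(2020, 4, 1, 8, 0)
assert classify(t0, t0 + timedelta(hours=1), 50.0) == "complies"
assert classify(t0, t0 + timedelta(hours=6), 50.0) == "exception"
assert classify(t0, None, None) == "miss"
```

In a streaming design, a classifier like this runs on every incoming event, so the "exception" and "miss" states can trigger an automated response immediately instead of surfacing at the next site visit.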

You can visualize signal classification as putting the patient state into one of three boxes.

Automated response

One of the biggest challenges for sponsors running clinical trials is delayed detection and response. Protocol deviations are logged 5-12 weeks (in the best case, 2-3 days) after the fact. The response then trickles back to the site and to the sponsor – resulting in patients lost to follow-up and adverse events recorded long after the fact.

If we can automate signal detection then we can also automate response, and then begin to understand the causes of the deviations. Understanding context and cause is much easier when done in real time. A good way to illustrate this: think about what you were doing at this time two weeks ago and try to connect that with a dry cough, light fever and aching back. The symptoms may be indicative of COVID-19, but you probably don’t remember what you were doing and with whom you came into close contact. The solution to COVID-19 back-tracking is digital surveillance and automation. Similarly, the solution for responding to exceptions and misses is to digitize and automate the process.

Like this:

Causal flows of patient adherence
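The detect-and-respond loop described above can be sketched as a simple rule table. The escalation rules and the notify() stub are illustrative assumptions, not a real system:

```python
# Sketch: fire an automated response the moment a deviation is classified.
def notify(recipient, message):
    print(f"-> {recipient}: {message}")      # stand-in for SMS/email delivery

RESPONSES = {  # assumed escalation rules, per classified state
    "miss": ("patient", "Reminder: your scheduled dose was not logged."),
    "exception": ("site_coordinator", "Dose outside protocol window - please follow up."),
}

def respond(subject_id, state):
    """Route the response for a deviation as soon as it is detected."""
    if state not in RESPONSES:
        return None                          # compliant: no action needed
    recipient, message = RESPONSES[state]
    notify(recipient, f"[{subject_id}] {message}")
    return recipient

respond("S-001", "miss")       # reminder goes straight to the patient
respond("S-001", "complies")   # no action
```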

Summary

In summary we see 3 key issues with creating meaningful outcomes for patients:

1 – The time it takes to detect and log protocol deviations.

2 – Signal detection of adverse events and risk (related to 1)

3 – Patients lost to follow-up (also related to 1)

These 3 issues for creating meaningful outcomes for patients can be resolved with 3 tightly integrated technologies:

1 – Real-time data acquisition for patients, devices and sites (study nurses, site coordinators, physicians)

2 – Automated detection

3 – Automated response


10 ways to detect people who are a threat to your clinical trial

Flaskdata.io helps Life Science CxO teams outcompete using continuous data feeds from patients, devices and investigators mixed with a slice of patient compliance automation.

One of the great things about working with Israeli medical device vendors is the level of innovation, drive and abundance of smart people.

It’s why we get up in the morning.

There are hundreds of connected medical devices and digital therapeutics (last time I checked over 300 digital therapeutics alone).

When you have an innovative device with network connectivity, then security, patient privacy, availability of your product and integrity of the data you collect have got to be priorities.

Surprisingly, we get a range of responses from people when we talk about the importance of cyber security and privacy for clinical research.

Most get it but some don’t. The people who don’t get it seem to assume that security and privacy of patient data is someone else’s problem in clinical trials.

People who don’t work in security assume that the field is very technical, yet really, it’s all about people. Data security breaches happen because people are greedy or careless. 100% of software vulnerabilities are bugs, and most of those are design bugs which could have been avoided or mitigated by 2 or 3 people talking through the issues during the development process.

I’ve been talking to several of my colleagues for years about writing a book on “Security anti-design patterns” – and the time has come to start. So here we go:

Security anti-design pattern #1 – The lazy employee

Lazy employees are often misdiagnosed by security and compliance consultants as being stupid.

Before you flip the bozo bit on a site coordinator as being non-technical, consider that education and technical aptitude are not reliable indicators of dangerous employees who are a threat to the clinical trial assets.

Lazy employees may be quite smart but they’d rather rely on organizational constructs instead of actually thinking and executing and occasionally getting caught making a mistake.

I realized this while engaging with a client who has a very smart VP – he’s so smart he has succeeded in maintaining a perfect record of never actually executing anything of significant worth at his company.

As a matter of fact – the issue is not smarts but believing that organizational constructs are security countermeasures in disguise.

So – how do you detect the people (even the smart ones) who are threats to PHI, intellectual property and system availability of your EDC?

1 – Their hair is better organized than their thinking.

2 – They walk around the office with a coffee cup in their hand and when they don’t, their office door is closed.

3 – They never talk to peers who challenge their thinking.   Instead they send emails with a NATO distribution list to everyone on the clinical trial operations team.

4 – They are strong on turf ownership.  A good sign of turf ownership issues is when subordinates in the company have gotten into the habit of not challenging the coffee-cup-holding VP’s thinking.

5 – They are big thinkers.    They use a lot of buzz words.

6 – When an engineer challenges their GCP/regulatory/procedural/organizational constructs – the automatic answer is an angry retort “That’s not your problem”.

7 – They use a lot of buzz-words like “I need a generic data structure for my device log”.

8 – When you remind them that they already have a generic data structure for their device log and they have a wealth of tools for data mining their logs – amazing free tools like Elasticsearch and R….they go back and whine a bit more about generic data structures for device logs.

9 – They seriously think that ISO 13485 is a security countermeasure.

10 – They’d rather schedule a corrective action session 3 weeks after a serious security event instead of fixing the issue the next day and documenting the root causes and changes.

If this post pisses you off (or if you like it), contact me – always interested in challenging projects with challenged people who challenge my thinking.

Competitive buzzwords in EDC companies

We recently did a presentation to a person at one of the big 4 pharmas. His job title was

Senior IT Project Manager Specialized in Health IT.

I looked at the person’s LinkedIn profile before the call and noticed that the phrase is in the past tense: “Specialized in Health IT”, implying that he is now a Senior IT manager who no longer specializes in anything.

I have a friend who worked in IT at Pfizer. He was discouraged by pharma IT mediocrity, especially when he compared it to the stellar talent in the R&D departments.

So it stands to reason that the EDC vendors are just a notch up the technology ladder from the pharma IT guys. If you do not have a unique technology value proposition, you have to resort to marketing collateral gymnastics.

To test this hypothesis, I took a look at the web sites of 4 EDC vendors: Medidata, Medrio, OmniComm and Oracle Life Sciences.

Medidata

Run Your Entire Study On A Unified, Intelligent Platform Built On Life Science’s Largest Database.

At Medidata, we’re leading the digital transformation of clinical science, so you can lead therapies to market faster, and smarter. Using AI and advanced analytics, our platform brings data managers, clinical operations, investigators, and patients together to accelerate the science and business of research.

Medidata is making a disturbing suggestion in their marketing collateral: that they leverage other companies’ trial data in their Life Science Database to help you lead therapies to market faster.

Medrio

Clinical trial data collection made easy. The industry’s leading early-phase EDC and eSource platform.

The only EDC vendor that actually admitted to being an EDC vendor was Medrio. You have to give them a lot of credit for honesty.

OmniComm

eClinical Solutions for Patient-Centric Clinical Trials
Effective Clinical Tools Driving Excellence in Life Science Research

Software has the power to save lives. OmniComm Systems understands that power and delivers eClinical solutions designed to help life science companies provide crucial medical treatments and therapies to patients around the globe.

OmniComm Systems fills a role in enhancing patient lives by shortening the time-to-market of essential life-saving treatments. Our eClinical suite of products includes electronic data capture (EDC) solutions, automated coding and randomization systems, risk-based monitoring (RBM) and analytics.

This is nice positioning, but it makes you wonder when OmniComm turned into a healthcare provider of crucial medical treatments and therapies to patients around the globe.

Oracle Life Sciences

Oracle Life Sciences—Reimagining What’s Possible

Innovation in science and medicine demands new technology, and innovation in
technology makes new things possible in science and medicine. Oracle is equipping the life sciences industry today, for the clinical trials of tomorrow.

Solutions Supporting the Entire Clinical Development Lifecycle

Oracle Health Sciences helps you get therapies to market faster and detect risks earlier. Oracle offers a complete set of clinical and safety solutions that support critical processes throughout the clinical development lifecycle—from study design and startup to conduct, close-out, and post-marketing.

SOLUTIONS
Oracle Health Sciences Clinical One cloud environment changes the way clinical research is done—accelerating all stages of the drug development lifecycle by eliminating redundancies, creating process efficiencies, and allowing the sharing of information across functions.

Unlike OmniComm and Medidata, Oracle is firmly focused on the clinical development lifecycle; not pretending to be a healthcare provider or to leverage the patient data in their EDC databases.

Flaskdata.io

Helping life-science C-suite teams outperform their competitors.

Patient compliance is critical to the statistical power and patient retention of a study.

We help senior management teams complete studies and submission milestones faster and under budget. We do this by providing EDC, ePRO and integration of connected medical devices into a single data flow. We then automate detection and response of patient compliance deviations in clinical trials 100x faster than current manual monitoring practices.


Develop project management competencies to speed up your clinical trials

The biggest barrier to shortening clinical trial data cycle times is not recruitment. It is not having a fancy UI for self-service eCRF forms design. It is not software.

It is not, to paraphrase Medidata, having the ability to Run Your Entire Study On A Unified, Intelligent Platform Built On Life Science’s Largest Database.

It is incompetence in managing a construction project.

That construction project is called designing a clinical trial and the information system for collecting and monitoring data.

For a long time, I thought that this was peculiarly an Israeli problem.

However, conversations with colleagues in the US and Europe suggest that late starts, feet-dragging and time-consuming change requests may be the norm. Collecting too many variables in the data model is the norm. Complex, long forms that make life hard for the site coordinators are the norm. Surfeits of edit checks and thousands of queries are the norm.

Most companies spend little money on project management training and even less on clinical project strategy development. Most training is on process, regulatory compliance and standard operating procedures.

Rarely do we see medical device companies spend money on competencies that will help employees construct clinical trial projects more effectively.

There are verbal commitments, but they are rarely action commitments.

Yet there is a direct link between clinical operations team knowledge and corporate revenue growth, which depends on delivering innovative drugs and devices to market.

One way management teams can maximise their investments in project training and clinical project strategy development (outsourced or in-sourced) is to link clinical operations team training to study management competency models that management can qualify and measure.

But the development of a clinical team competency model has strategic and operational barriers that must be managed to make it successful.

Clinical trial project management competency model example

Clinical team Competency Setup Considerations

1. Clinical people often think that building the ‘database’ is an art, not a science, and don’t like to be measured in what they perceive is a non-core skill.

2.  Your project  competency model must include both soft and hard skills training to make it effective.

3. Clinical trial management teams must focus on the competency requirements and take a hands-on approach to make it work.

4. You must be able to quantitatively measure the competencies (time to design forms, edit check design, monitoring signals, data cycle time, time spent in meetings, change requests).

5. Clinical trial management competency training must be a continuous program of training and educational events, not a one-time event, or else the program will fail.

6. The steps of your competency program must be very specific and delineated to make sure it can be delivered and measured.

7. Your clinical operations team must agree that the competencies you are measuring truly help them deliver the study faster. (They don’t have to like doing it, just agree that these are the required action steps to reduce data cycle times.)

8. When implementing your project competency audits, certification should be both written and experientially measured to get an accurate reading of each clinical operations team member’s capabilities.

9. All project  competency certification candidates should have the ability to retest to confirm skills growth.

10. Project competency assessments should never be used solely as a management scorecard tool to make employment decisions about clinical operations team members.

To increase your company revenues and clinical project training success, build and deliver project competency models.
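As an example of point 4 above (quantitative measurement), here is a hypothetical sketch of one such metric: data cycle time, the median days from patient visit to data entry, computed per site. The records and site names are invented for illustration:

```python
# Sketch: a measurable competency metric - data cycle time per site.
from datetime import date
from statistics import median

records = [  # (site, visit_date, entry_date) - illustrative values
    ("Site A", date(2020, 3, 2), date(2020, 3, 4)),
    ("Site A", date(2020, 3, 9), date(2020, 3, 10)),
    ("Site B", date(2020, 3, 2), date(2020, 3, 14)),
]

def cycle_times(rows):
    """Median days from visit to data entry, per site."""
    per_site = {}
    for site, visit, entry in rows:
        per_site.setdefault(site, []).append((entry - visit).days)
    return {site: median(days) for site, days in per_site.items()}

print(cycle_times(records))  # {'Site A': 1.5, 'Site B': 12}
```

Tracked over time, a metric like this gives management a concrete way to see whether competency training is actually shortening data cycle times.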