A better tomorrow for clinical trials

A better tomorrow – Times of crisis usher in new mindsets

By David Laxer. Spoken from the heart.

In these trying days, as we adjust to new routines and discover new things about ourselves daily, we are also reminded that the human spirit is stronger than any pandemic and we have survived worse.

And because we know we’re going to beat this thing, whether in 2 weeks or 2 months, we also know that we will eventually return to normal, or rather, a new normal.

In the meantime, the world is showing a resolve and a resilience that gives us much room to hope for a better tomorrow for developing new therapeutics.

However, these days have got us wondering how things might have looked if clinical trials were conducted differently. It’s a well-known fact that clinical trials play an integral role in the development of new, life-saving drugs, but getting a drug approved by the FDA takes an average of 7.5 years and costs anywhere between $150m and $2bn per drug.

Reasons for failure

Many clinical studies still use outdated methods for data collection and verification: they still use a fax machine, for crying out loud. They continue to manually count leftover pills in bottles, and still rely on patients’ diary entries to ensure adherence.

Today, the industry faces new challenges to recruit enough participants as COVID-19 forces people to stay at home and out of research hospital sites. 

Patient drop-outs, adverse events and delayed recording of adverse events are still issues for pharma and medical device companies conducting clinical research. The old challenge of creating interpretable data to examine the safety and efficacy of new therapeutics remains.

The Digital Revolution:

As hard as it is to believe, the clinical trial industry just might be the last major industry to undergo digital transformation.

As every other aspect of modern life has already been digitized, from banking to accounting to education, now, more than ever, is the time to accelerate the transition of this crucial process, especially as we are painfully reminded of the need to find a vaccine. Time is not a resource we can waste any longer.

Re-imagining the future

When we created FlaskData we were primarily driven by our desire to disrupt the clinical trial monitoring paradigm and bring it into the 21st century — meaning real-time data collection and automated detection and response. From the beginning we found fault in the fact that clinical trials were, and still are, overly reliant on manual processes, and this causes unacceptable delays in bringing new and essential drugs and devices to market. These delays, as we are reminded in these days, not only cost money and time; ultimately they cost lives.

To fully achieve this digitization, it’s important to create a secure cloud service that can accelerate the entire process and provide sponsors with an immediate picture and interpretable data, without having to spend 6-12 months cleaning data. This is achieved with real-time data collection, automated detection and response, and an open API that enables any healthcare application to collect clinical-trial-grade data and assure patient adherence to the clinical protocol.
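
To make “automated detection and response” concrete, here is a minimal Python sketch of the kind of rule such a service might run against a real-time feed. The 36-hour window, the event fields and the alert hook are illustrative assumptions, not FlaskData’s actual API.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical adherence rule: flag a patient whose latest ePRO diary
# entry is older than the protocol-defined reporting window.
ADHERENCE_WINDOW = timedelta(hours=36)

def is_compliant(last_entry_utc: datetime, now: Optional[datetime] = None) -> bool:
    """True if the patient's last diary entry is within the window."""
    now = now or datetime.now(timezone.utc)
    return (now - last_entry_utc) <= ADHERENCE_WINDOW

def on_patient_event(patient_id: str, last_entry_utc: datetime) -> None:
    if not is_compliant(last_entry_utc):
        # A real deployment would notify the site coordinator
        # (dashboard, SMS, email) instead of printing.
        print(f"ALERT: patient {patient_id} missed the diary window")

on_patient_event("IL-001-017", datetime(2020, 4, 1, 8, 0, tzinfo=timezone.utc))
```

The point is latency: the rule fires the moment the feed goes quiet, not at the next site visit.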

Our Promise:

It didn’t take a virus to make us want to deliver new medical breakthroughs into the hands that need them most, but it has definitely made us double down on our resolve to see it through. The patient needs to be placed at the center of the clinical research process and we are tasked to reduce the practical, geographical and financial barriers to participation. The end result is a more engaged patient, higher recruitment and retention rates, better data and reduced study timelines and costs.

The Need For Speed

As the world scrambles to find a vaccine for the coronavirus, we fully grasp 2 key things: 1) Focus on patients and 2) Provide clinical operations teams with the ability to eliminate inefficiencies and move at lightning speed. In these difficult times, there is room for optimism, as it is crystal clear just how important it is to speed up the process.


Social Distancing

In this period of social distancing, we can only wonder about the benefits of conducting clinical trials remotely. We can only imagine how many trials have been rendered useless as patients, reluctant to leave their houses, have skipped the required monitoring, forgotten to take their pills, and lost their diary entries amidst the chaos.

With a fully digitized process for electronic data collection, social distancing would have no effect on the clinical trial results.

About David Laxer

David is a strategist and story-teller. He says it best – “Ultimately, when you break it down, I am a storyteller and a problem solver. The kind that companies and organizations rely on for their brand DNA, culture and long-lasting reputation”.


Reach out to David on LinkedIn

Streaming clinical trials in a post-Corona future

Last week, I wrote about using automated detection and response technology to mitigate the next Corona pandemic.

Today – we’ll take a closer look at how streaming data fits into virtual clinical trials.

Streaming – not just for Netflix

Streaming real-time data and automated digital monitoring are not foreign ideas to people quarantined at home during the current COVID-19 pandemic. Streaming: we are at home and watching Netflix. Automated monitoring: we are now using digital surveillance tools based on mobile phone location data to locate and track people who came in contact with COVID-19-infected people.

Slow clinical trial data management. Sponsors flying blind.

Clinical trials use batch processing of data. Clinical trials currently do not stream patient / investigator signals in order to manage risk and ensure patient safety.

The latency of batch processing in clinical trials is something like 6-12 months if we measure the time from first patient in to the time a bio-statistician starts working on an interim analysis.

Risk-based monitoring for clinical trials uses batch processing to produce risk profiles of sites in order to prioritize another batch process – namely site visits and SDV (source data verification).

The latency of central CRO monitoring using RBM ranges wildly from 1 to 12 weeks. This is reasonable considering that the design objective of RBM is to prioritize a batch process of site monitoring that runs every 5-12 weeks.

In the meantime, the study is accumulating adverse events and losing patients to non-compliance, and the sponsor is flying blind.

Do you think 2003-vintage data formats will work in 2020 for the coronavirus?

An interesting side-effect of batch processing for RBM is the use of SDTM for processing data and preparing reports and analytics.

SDTM provides a standard for organizing and formatting data to streamline processes in collection, management, analysis and reporting. Implementing SDTM supports data aggregation and warehousing; fosters mining and reuse; facilitates sharing; helps perform due diligence and other important data review activities; and improves the regulatory review and approval process. SDTM is also used in non-clinical data (SEND), medical devices and pharmacogenomics/genetics studies.

SDTM is one of the required standards for data submission to FDA (U.S.) and PMDA (Japan).

It was never designed nor intended to be a real-time streaming data protocol for clinical data. It was first published in June 2003. Variable names are limited to 8 characters (a SAS 5 transport file format limitation).

For more information on SDTM, see the 2011 paper by Fred Woods describing the challenges of creating SDTM datasets. One of the surprising challenges is date/time formats – which continue to stymie biostats people to this day. See Jenya’s excellent post on the importance of collecting accurate date-time data in clinical trials. We have open, vendor-neutral standards and JavaScript libraries to manipulate dates. It is a lot easier today than it was in June 2003.
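
As a small illustration of how far tooling has come since the 8-character SAS transport era, here is a Python sketch of a lossless, timezone-aware ISO 8601 round-trip (any modern JavaScript date library offers the same one-liners); the variable names are mine, not part of any standard.

```python
from datetime import datetime, timezone

# ISO 8601 is the open, vendor-neutral format modern tooling expects.
visit_time = datetime(2020, 4, 7, 14, 30, tzinfo=timezone.utc)
wire_format = visit_time.isoformat()          # '2020-04-07T14:30:00+00:00'
parsed = datetime.fromisoformat(wire_format)  # lossless round-trip
assert parsed == visit_time
```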

COVID-19 – we need speed

In a post-COVID-19 era, site monitoring visits are impossible and patients are at home. The demands on clinical trials are outgrowing the batch-processing paradigm. Investigators, nurses, coordinators and patients cannot wait for the data to be converted to SDTM, processed in a batch job and sent to a data manager. Life science sponsors need that data now, and front-line teams with patients need an immediate response.

Because ePRO, EDC and wearable data collection are siloed (or waiting for batch file uploads over a USB connection, like the Philips Actiwatch or MotionWatch), the batch ETL tools cannot process the data. To place this in context: the patient has to come into the site, find parking and give the watch to a site coordinator, who needs to plug the device into a USB connection, upload the data and then import the data to the EDC, which then waits for an ETL job converting to SDTM and processing to an RBM system.

Streaming data for clinical research in a COVID-19 era

In order to understand the notion of streaming data for clinical research in a COVID-19 era, I drew inspiration (and shamelessly borrowed the graphics) from Bill Scott’s excellent article on Apache Kafka – “Why are you still doing batch processing? ETL is dead”.

Crusty Biotech

The Crusty Biotech company has developed an innovative oral treatment called Crusdesvir for the coronavirus. They contract with a site, Crusty Kitchen, to test the safety and efficacy of Crusdesvir. Crusty Kitchen has one talented PI and an efficient site team that can process 50 patients/day.

The CEO of Crusty Biotech decides to add 1 more site, but his clinical operations process is built for 1 PI at a time who can perform the treatment procedure in a controlled way and comply with the Crusdesvir protocol. It’s hard to find a skilled PI and site team, but he finally finds one and signs a contract with them.

Now they need to add 2 more PIs and sites, and then 4. With the demand to deliver a working COVID-19 treatment, Crusty Biotech needs to recruit more sites that are qualified to run the treatment. Each site needs to recruit (and retain) more patients.

The Crusty Biotech approach is an old-world batch workflow of tasks wrapped in a rigid environment. It is easy to create, it works for small batches but it is impossible to grow (or shrink) on demand. Scaling requires more sites, introduces more time into the process, more moving parts, more adverse events, less ability to monitor with site visits and the most crucial piece of all – lowers the reliability of the data, since each site is running its own slow-moving, manually-monitored process.

Castle Biotech

Castle Biotech is a competitor to Crusty Biotech – they also have an anti-viral treatment with great potential. They decided to plan for rapid ramp-up of their studies by using a manufacturing-process approach, with an automated belt delivering raw materials and work-in-process along a stream of work-stations. (This is how chips are manufactured, btw.)

Belt 1: Ingredients, delivers individual measurements of ingredients.

Belt 1 is handled by Mixing-Baker; when the ingredients arrive, she knows how to mix the ingredients, then put the mixture onto Belt 2.

Belt 2: Mixture, delivers the perfectly whisked mixture.

Belt 2 is handled by Pan-Pour-Baker; when the mixture arrives, she can delicately measure and pour the mixture into the pan, then put the pan onto Belt 3.

Belt 3: Pan, delivers the pan with an exact measurement of mixture.

Belt 3 is handled by Oven-Baker; when the pan arrives, she puts the pan in the oven and waits the specific amount of time until it’s done. When it is done, she puts the cooked item on the next belt.

Belt 4: Cooked Item, delivers the cooked item.

Belt 4 is handled by Decorator; when the cooked item arrives, she applies the frosting in an interesting and beautiful way. She then puts it on the next belt.

Belt 5: Decorated Cupcake, delivers a completely decorated cupcake.

We see that once the infrastructure is set up, we can easily add more bakers (PIs in our clinical trial example) to handle more patients. It’s easy to add new cohorts and new designs by adding different types of ‘bakers’ to each belt.

How does cupcake-baking relate to clinical data management?

The Crusty Biotech approach is old-world batch/ETL – a workflow of tasks set in stone. 

It’s easy to create. You can start with a paper CRF or start with a low-cost EDC. It works for small numbers of sites and patients and cohorts but it does not scale.

However, the process breaks down when you have to visit sites to monitor the data and do SDV because you have a paper CRF. Scaling the site process requires additional sites, more data managers, more study monitors/CRAs, more batch processing of data, and more round trips to the central monitoring team and data managers. More costs, more time, and a 12-18 month delay in delivering a working coronavirus treatment.

The Castle Biotech approach is like data streaming. 

Using a tool like Apache Kafka, the belts are topics – streams of similar data items. Small applications (consumers) can listen on a topic (for example, adverse events) and notify the site coordinator or study nurse in real-time. As the flow of patients in a study grows, we can add more adverse-event consumers to do the automated work.
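
Here is a minimal sketch of such a consumer, using the kafka-python client; the topic name and message fields are hypothetical, chosen only to match the example above.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and message schema, for illustration only.
consumer = KafkaConsumer(
    "adverse-events",
    bootstrap_servers="localhost:9092",
    group_id="ae-notifier",  # start more processes in this group to scale out
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    ae = message.value
    # A real consumer would page the site coordinator or study nurse.
    print(f"AE for patient {ae['patient_id']} at site {ae['site_id']}: {ae['term']}")
```

Kafka spreads the partitions of a topic across all consumers in the same group, which is exactly what makes “just add another baker to the belt” work.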

Castle Biotech is approaching the process of clinical research with a patient-centric streaming and digital management model, which allows them to expand the study and respond quickly to change (the next pandemic in Winter 2020?).

The moral of the story – Don’t Be Krusty.


So what’s wrong with 1990s EDC systems?

Make no mistake about it: the EDC systems of 2020 are using a 1990s design. (OK – granted, there are some innovators out there, like ClinPal with their patient-centric trial approach, but the vast majority of today’s EDC systems, from Omnicomm to Oracle to Medidata to Medrio, are using a 1990s design.) Even the West Coast startup Medable is going the route of “if you can’t beat them, join them” and fielding the usual alphabet soup of buzzword-compliant modules – ePRO, eSource, eConsent etc. Shame on you.

Instead of using in-memory databases for real-time clinical data acquisition, we’re fooling around with SDTM and targeted SDV.

When in reality, SDTM is a standard for submitting tabulated results to regulatory authorities (not a transactional database, nor an appropriate data model for time series). And even more reality – we should not be doing SDV to begin with, so why do targeted SDV if not to perpetuate the CRO billing cycle?

Freedom from the past comes from ridding ourselves of the clichés of today.


Personally – I don’t get it. Maybe COVID-19 will make the change in the paper-batch-SDTM-load-up-the-customer-with-services system.

So what is wrong with 1990s EDC?

The really short answer is that computers do not have two kinds of storage any more.

It used to be that you had the primary store, and it was anything from acoustic delay-lines filled with mercury, via small magnetic doughnuts, via transistor flip-flops, to dynamic RAM.

And then there was the secondary store: paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines, and these days so small that they disappear into a pocket.

And people still program their EDC systems this way.

They have variables in paper forms that site coordinators fill in on paper and then 3-5 days later enter into suspiciously-paperish-looking HTML forms.

For some reason – instead of making a great UI for the EDC, a whole group of vendors gave up and created a new genre called eSource, creating immense confusion as to why you need another system anyhow.

What the guys at Gartner euphemistically call a highly fragmented and non-integrated technology stack.
What the site coordinators who have to deal with 5 different highly fragmented and non-integrated technology stacks call a nightmare.

Awright.

Now we have some code – in Java or PHP or maybe even .NET – THAT READS THE VARIABLES FROM THE FORM AND PUTS THEM INTO VARIABLES IN MEMORY.

Now we have variables in “memory” and move data to and from “disk” into a “database”.

I like the database thing – where clinical people ask us – “so you have a database”. This is kinda like Dilbert – oh yeah – I guess so. Mine is a paradigm-shifter also.

Anyhow, today computers really only have one kind of storage, and it is usually some sort of disk; the operating system and the virtual memory management hardware have converted the RAM into a cache for the disk storage.

The database process (say Postgres) allocates some virtual memory and tells the operating system to back this memory with space from a disk file. When it needs to send the object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If/when the kernel decides it needs to use the RAM for something else, the page will get written to the backing file and the RAM page reused elsewhere.
When Postgres next refers to that virtual memory, the operating system will find a RAM page, possibly freeing one, and read the contents in from the backing file.

And that’s it.
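
For readers who want to see the “one kind of storage” idea in miniature, here is a Python sketch of memory-mapping a file so that the kernel, not the application, decides what lives in RAM; the file name is made up for the example.

```python
import mmap

# Create a small backing file, then map it into virtual memory.
with open("records.dat", "wb") as f:
    f.write(b"\x00" * 4096)

with open("records.dat", "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)  # the kernel pages this in and out as needed
    mem[0:5] = b"hello"                # looks like memory, is backed by disk
    mem.flush()                        # ask the kernel to write dirty pages back
    mem.close()
```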

Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on.
And maybe, with COVID-19 and sites getting shut down, people will catch on to what is possible: a really nifty user interface for – GASP – THE SITE COORDINATORS and, even more AMAZING, a single database in memory for ALL the data from patients, investigators and devices.

Because at the end of the day – grandma knows that there ain’t no reason not to have a single data model for everything and just shove it into virtual memory for instantaneous, automated DATA QUALITY, PATIENT SAFETY AND RISK ASSESSMENT in real-time.

Not 5-12 weeks later at a research site visit, not a month later after the data management trolls in the basement send back some reports with queries, and certainly not after spending 6-12 months cleaning up unreliable data due to the incredibly stupid process of paper to forms to disk to queries to site visits to data managers to data cleaning.

10 ways to detect people who are a threat to your clinical trial

Flaskdata.io helps Life Science CxO teams outcompete using continuous data feeds from patients, devices and investigators mixed with a slice of patient compliance automation.

One of the great things about working with Israeli medical device vendors is the level of innovation, drive and abundance of smart people.

It’s why we get up in the morning.

There are hundreds of connected medical devices and digital therapeutics (last time I checked over 300 digital therapeutics alone).

When you have an innovative device with network connectivity, the security and privacy of your patients, the availability of your product and the integrity of the data you collect have got to be priorities.

Surprisingly, we get a range of responses from people when we talk about the importance of cyber security and privacy for clinical research.

Most get it, but some don’t. The people that don’t get it seem to assume that security and privacy of patient data is someone else’s problem in clinical trials.

The people who don’t work in security assume that the field is very technical, yet really – it’s all about people. Data security breaches happen because people are greedy or careless. 100% of all software vulnerabilities are bugs, and most of those are design bugs which could have been avoided or mitigated by 2 or 3 people talking about the issues during the development process.

I’ve been talking to several of my colleagues for years about writing a book on “Security anti-design patterns” – and the time has come to start. So here we go:

Security anti-design pattern #1 – The lazy employee

Lazy employees are often misdiagnosed by security and compliance consultants as being stupid.

Before you flip the bozo bit on a site coordinator as being non-technical, consider that education and technical aptitude are not reliable indicators of dangerous employees who are a threat to the clinical trial assets.

Lazy employees may be quite smart but they’d rather rely on organizational constructs instead of actually thinking and executing and occasionally getting caught making a mistake.

I realized this while engaging with a client who has a very smart VP – he’s so smart he has succeeded in maintaining a perfect record of never actually executing anything of significant worth at his company.

As a matter of fact – the issue is not smarts but believing that organizational constructs are security countermeasures in disguise.

So – how do you detect the people (even the smart ones) who are threats to PHI, intellectual property and system availability of your EDC?

1 – Their hair is better organized than their thinking

2 – They walk around the office with a coffee cup in their hand and when they don’t, their office door is closed.

3 – They never talk to peers who challenge their thinking. Instead they send emails with a NATO distribution list to everyone on the clinical trial operations team.

4 – They are strong on turf ownership. A good sign of turf-ownership issues is when subordinates in the company have gotten into the habit of not challenging the coffee-cup-holding VP’s thinking.

5 – They are big thinkers.    They use a lot of buzz words.

6 – When an engineer challenges their GCP/regulatory/procedural/organizational constructs – the automatic answer is an angry retort “That’s not your problem”.

7 – They use a lot of buzz-words like “I need a generic data structure for my device log”.

8 – When you remind them that they already have a generic data structure for their device log, and a wealth of tools for mining their logs – amazing free tools like Elasticsearch and R – they go back and whine a bit more about generic data structures for device logs.

9 – They seriously think that ISO 13485 is a security countermeasure.

10 – They’d rather schedule a corrective action session 3 weeks after a serious security event instead of fixing the issue the next day and documenting the root causes and changes.

If this post pisses you off (or if you like it), contact me – I am always interested in challenging projects with challenged people who challenge my thinking.

Competitive buzzwords in EDC companies

We recently did a presentation to a person at one of the big 4 pharma companies. His job title was

Senior IT Project Manager Specialized in Health IT.

I looked at the person’s LinkedIn profile before the call and noticed that the phrase is in the past tense – “Specialized in Health IT” – implying that he was now a Senior IT manager who no longer specialized in anything.

I have a friend who worked at Pfizer in IT. He was discouraged by pharma IT mediocrity, especially when he compared it to the stellar talent in the R&D departments.

So it stands to reason that the EDC vendors are just a notch up the technology ladder from the pharma IT guys. If you do not have a unique technology value proposition, you have to resort to marketing collateral gymnastics.

To test this hypothesis – I took a look at the web sites of 4 EDC vendors:  Medidata, Medrio, Omnicomm and Oracle Life Sciences.

Medidata

Run Your Entire Study On A Unified, Intelligent Platform Built On Life Science’s Largest Database.

At Medidata, we’re leading the digital transformation of clinical science, so you can lead therapies to market faster, and smarter. Using AI and advanced analytics, our platform brings data managers, clinical operations, investigators, and patients together to accelerate the science and business of research.

Medidata is making a disturbing suggestion in their marketing collateral: that they leverage other companies’ trial data in their Life Science Database to help you lead therapies to market faster.

Medrio

Clinical trial data collection made easy. The industry’s leading early-phase EDC and eSource platform.

The only EDC vendor that actually admitted to being an EDC vendor was Medrio. You have to give them a lot of credit for honesty.

OmniComm

eClinical Solutions for Patient-Centric Clinical Trials
Effective Clinical Tools Driving Excellence in Life Science Research

Software has the power to save lives. OmniComm Systems understands that power and delivers eClinical solutions designed to help life science companies provide crucial medical treatments and therapies to patients around the globe.

OmniComm Systems fills a role in enhancing patient lives by shortening the time-to-market of essential life-saving treatments. Our eClinical suite of products includes electronic data capture (EDC) solutions, automated coding and randomization systems, risk-based monitoring (RBM) and analytics.

This is nice positioning, but it makes you wonder when OmniComm turned into a healthcare provider of crucial medical treatments and therapies to patients around the globe.

Oracle Life Science

Oracle Life Sciences—Reimagining What’s Possible

Innovation in science and medicine demands new technology, and innovation in technology makes new things possible in science and medicine. Oracle is equipping the life sciences industry today, for the clinical trials of tomorrow.

Solutions Supporting the Entire Clinical Development Lifecycle

Oracle Health Sciences helps you get therapies to market faster and detect risks earlier. Oracle offers a complete set of clinical and safety solutions that support critical processes throughout the clinical development lifecycle—from study design and startup to conduct, close-out, and post-marketing.

SOLUTIONS
Oracle Health Sciences Clinical One cloud environment changes the way clinical research is done—accelerating all stages of the drug development lifecycle by eliminating redundancies, creating process efficiencies, and allowing the sharing of information across functions.

Unlike OmniComm and Medidata, Oracle is firmly focused on the clinical development lifecycle; it is not pretending to be a healthcare provider, nor to leverage the patient data in its EDC databases.

Flaskdata.io

Helping life-science C-suite teams outperform their competitors.

Patient compliance is critical to the statistical power and patient retention of a study.

We help senior management teams complete studies and submission milestones faster and under budget. We do this by providing EDC, ePRO and integration of connected medical devices into a single data flow. We then automate detection and response of patient compliance deviations in clinical trials 100x faster than current manual monitoring practices.


6 ways to make your clinical trials run real fast


This week, we had a few charming examples of risk management in clinical trials with several of our customers.   I started thinking about what we could do to get things to run real fast and avoid some of the inevitable potholes and black swans that crop up in clinical trials.

Engaged in basic science and stuck in data traffic

There is something very disturbing  about an industry that develops products using advanced basic science.

It is disturbing because the industry uses 40-year old processes and information technology.

This industry accepts a reality of delays of a year or more due to manual data processing.

This industry is called life sciences.

That’s what disturbs me on a personal and strategic level. We can and should do better. The disconnect between basic science and modern software should disturb anyone involved with clinical research, because the cost to society is enormous. We are enamoured with Instagram, Uber and WeWork, but we choose to pretend that life science research exists in a parallel, untouchable universe protected by ICH GCP, FDA, MDR and a slew of other TLAs.

Alright.  I am Israeli and trained as a physicist.   Let’s look for some practical, real-world solutions. Let’s try them out and iterate.

6 ways to make your clinical research run real fast

1. Data model

Before designing your eCRF, design your data model. If you do not know what data modelling means, then 4 weeks before the study starts is a bad time to start learning. Hire a specialist in data modelling, preferably someone who does not work in life sciences. Pay them $500/hour. It’s worth every penny. The big idea is to design an abstract data model for your study – for speed of access and usability by patients, site coordinators, study monitors and statisticians – before designing the eCRF.
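
As a toy illustration of what an abstract data model can look like, here is a Python sketch of a single observation type that every feed maps into; the field names are illustrative, not a recommendation of any particular standard.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Observation:
    """One fact about one subject at one point in time.

    Every feed (ePRO, device, coordinator entry) maps to this shape, so
    timelines and analyses never depend on which eCRF page a value was
    typed into.
    """
    subject_id: str
    variable: str          # e.g. "systolic_bp"
    value: float
    unit: str              # e.g. "mmHg"
    observed_at: datetime  # timezone-aware; see the date and time section
    source: str            # "epro" | "device" | "coordinator"

obs = Observation("IL-001-017", "systolic_bp", 128.0, "mmHg",
                  datetime.fromisoformat("2020-04-07T14:30:00+00:00"), "device")
```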

2. Discipline equals speed

Start early. Go slow and speak softly, and then run fast. There is a story about the difference between a Japanese wood sculpture artist and an Israeli artist. The Japanese artist goes into his studio and looks at a big piece of wood. He walks around the wood and observes. He goes home. The next day, and for the next month, he observes the wood in his studio without touching his tools. After a month of observation, he comes in, picks up a hammer and chisel and, chop, chop, chop, produces a memorable work of art. The Israeli goes into his studio and looks at a big piece of wood. He starts carving away, improvising all kinds of ideas from his head. He goes home. The next day, and for the next month, he chops away at the wood and replaces the raw material several times. After a year, he has a work of art.

The big idea is that discipline equals speed.  It prepares you for the unexpected. See point 6 below.

A good book that presents this approach in a very practical way is Discipline equals Freedom by Jocko Willink.

3. Date and time

Date/time issues can be visualised as a triangle.

Side 1 of the triangle is the site coordinator who collects data into the EDC.

Side 2 of the triangle is the CRA who monitors CRC work and data quality and performs SDV.

Side 3 of the triangle is the subject, who needs to come and visit the doctor on certain days that the study coordinator scheduled for her when she started the trial.

Pay attention to your date and time fields.    This is a much neglected part of data design in clinical trials.

The challenge is that you need to get your clinical data on different timelines.     Most people ignore the fact that clinical trials have several parallel timelines.

One timeline is the study schedule. Another timeline is adverse events. Another timeline is patient compliance. You get it. If you collect high-quality date-times in your data model, you can easily generate the different time-series.

One of the most popular pieces on this blog is an essay Jenya wrote on dates and times in clinical data management.  You can read it here.
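
To make the parallel-timelines point concrete, here is a small Python sketch that projects two timelines out of one stream of timezone-aware events; the event types are illustrative.

```python
from datetime import datetime, timezone

# One stream of events, each stamped with a timezone-aware datetime.
events = [
    {"type": "visit",         "at": datetime(2020, 3, 2, 9, 0, tzinfo=timezone.utc)},
    {"type": "adverse_event", "at": datetime(2020, 3, 5, 17, 45, tzinfo=timezone.utc)},
    {"type": "missed_dose",   "at": datetime(2020, 3, 6, 8, 0, tzinfo=timezone.utc)},
    {"type": "visit",         "at": datetime(2020, 3, 9, 9, 15, tzinfo=timezone.utc)},
]

def timeline(kind: str) -> list:
    """Project one timeline (schedule, safety, compliance...) out of the stream."""
    return sorted(e["at"] for e in events if e["type"] == kind)

schedule = timeline("visit")         # the study schedule timeline
safety = timeline("adverse_event")   # the safety timeline
```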

4. Do not DIY your EDC

You can DIY a chair from Ikea, but not your clinical trial. The notion of a researcher or clinical manager untrained in data modeling, data analysis and user interface design using a DIY tool to develop the most important part of your study should make you stop and think. To put this in perspective: if you are spending $5,000/month to monitor 3 sites, you should not be paying $450/month for a DIY EDC. It’s called penny-wise and pound-foolish.

5. Prioritize deviations

While it is true that protocol deviations need to be recorded, not every protocol deviation is created equal. I was stunned recently to hear from a quality manager at one of the big CROs that they do not prioritise their deviation management. Biometrics, dosing, patient compliance and clinical outcomes should be at the top of the list when they relate to the primary clinical endpoint or safety endpoint. This is related to the previous points: don’t DIY, model your data, and observe before cutting the wood.
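
A deviation queue ordered by endpoint criticality can be as simple as the following sketch; the categories and ranks are made-up examples, not a validated severity scale.

```python
# Hypothetical severity ranks: deviations touching dosing or safety
# outrank administrative ones.
SEVERITY = {"dosing": 1, "safety": 1, "patient_compliance": 2,
            "biometrics": 2, "visit_window": 3, "admin": 4}

deviations = [
    {"id": "DV-104", "category": "admin"},
    {"id": "DV-101", "category": "dosing"},
    {"id": "DV-102", "category": "visit_window"},
]

# Work the queue from the most endpoint-critical category down.
for dv in sorted(deviations, key=lambda d: SEVERITY[d["category"]]):
    print(dv["id"], dv["category"])
```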

6. Do some up-front risk assessment, but don’t kid yourself

Before you start the study, any threat analysis you do is worthless. A risk analysis without data is worthless. You may have some hypotheses based on previous work you or someone else did, but do not kid yourself. First collect data, then analyse threats. I’ve written about how to do a risk assessment in clinical trials here, here, here and here. Read my essay on invisible gorillas.

Temperature excursions and APIs to reduce study monitor work

I did a lot of local excursions over the past 3 days – Jerusalem, Tel Aviv, Herzliya and Haifa. For some reason, the conversations with 2 prospects had to do with refrigerators. I do not know if this is Freudian or not, considering the hot July weather in Israel.

The conversations about refrigerators had to do with storing drugs / investigational product at the proper temperatures.

Temperature excursion is a deviation

The great thing about not coming from the clinical trials space is that you are always learning new things.

Yesterday I learned that a temperature excursion is a deviation from given instructions. It is defined in the WHO Model Guidance as “an excursion event in which a Time Temperature Sensitive Pharmaceutical Product (TTSPP) is exposed to temperatures outside the range(s) prescribed for storage and/or transport”.

Storing drugs at the proper temperature is part of GCP. Here is an SOP for Monitoring and Recording Refrigerator & Freezer Temperatures:

1 Introduction All refrigerators and freezers used for the storage of Investigational Medicinal Products (IMPs) must be temperature controlled, and continuously monitored and maintained within the appropriate ranges as defined by the protocol. ICH GCP Principle 2.13 states “Systems with procedures that assure the quality of every aspect of the trial should be implemented.”

Moving on:

5 Procedure

- Current maximum/minimum thermometers must be monitored at least once daily on all working days, and the readings recorded legibly on the temperature monitoring log.
- The digital maximum/minimum thermometer:
  - should be read from the outside of the refrigerator without opening the door;
  - should have an accuracy of at least +/- 1 °C;
  - should be able to record temperatures to one decimal place;
  - should be supplied with a calibration certificate;
  - should have its calibration checked on an annual basis.
- Temperature logs should be kept close to the refrigerator/freezer (but not inside) to which they relate for ease of reference, and should be clearly identified as relating to that appliance.
- A separate temperature record must be kept for each fridge/freezer. (The use of whiteboards as a method of logging results is not acceptable.)
- It is good practice to record the temperature at a similar time each day, e.g. first thing in the morning before the refrigerator door is opened for the first time. This allows review of trends in the recorded results and helps highlight changes in recorded temperatures and deviations in refrigerator performance.

There is a lot of manual work involved looking at refrigerators

I believe a study monitor will spend 20 minutes a day checking logs of refrigerator temperature readings. When you add in data entry time for the site coordinators, that’s another 20 minutes a day, and then you have to multiply by the number of sites and refrigerators. This is only the reading-temperatures-and-capturing-data-to-the-EDC part of the job. Then you have to deal with queries and resolving deviations.

For something so mundane (although crucial from a medical research perspective), it’s a lot of work. The big problem with using study monitors to follow temperature excursions is that site visits happen every 1-3 months. With the spiralling costs of people, the site visits are getting less frequent.

This means that it is entirely plausible that patients are treated with improperly stored drugs and the deviation is undetected for 3 months.

Whenever I see a lot of manual work and late event detection, I see an opportunity.

It seems that there are a few vendors doing remote monitoring of refrigerators. A Polish company from Krakow called Efento has a complete solution for remote monitoring of refrigerators storing investigational product.

What is cool (to coin a pun) about Efento is that they provide a complete solution, from hardware to cloud.

The only thing missing is calling a Flask API to insert data into the eCRF for the temperature excursions.

Once we’ve got that, we have saved all of that study coordinator and study monitor time.

More importantly, we’ve automated an important piece of the compliance monitoring puzzle – ensuring that temperature excursions are detected and remediated immediately, before it’s too late.
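
Here is a hedged sketch of what that last mile could look like: a gateway script that checks a sensor reading and posts an excursion to a hypothetical eCRF endpoint. The URL, payload fields and 2-8 °C range are illustrative assumptions, not Efento’s or Flaskdata’s actual API.

```python
from datetime import datetime, timezone

import requests  # pip install requests

# Illustrative storage range for a refrigerated IMP (check your protocol).
MIN_C, MAX_C = 2.0, 8.0
ECRF_URL = "https://edc.example.com/api/v1/observations"  # hypothetical endpoint

def report_if_excursion(site_id: str, fridge_id: str, temp_c: float) -> None:
    if MIN_C <= temp_c <= MAX_C:
        return  # in range, nothing to record
    payload = {
        "site_id": site_id,
        "device_id": fridge_id,
        "variable": "temperature_excursion",
        "value": temp_c,
        "unit": "C",
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }
    requests.post(ECRF_URL, json=payload, timeout=10).raise_for_status()

report_if_excursion("IL-001", "FRIDGE-2", 10.4)  # fires an excursion record
```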

The gap between the proletariat and Medidata (or should I say Dassault)

We need a better UX before [TLA] integration

The sheer number and variety of eClinical software companies and buzzwords confuses me.
There is EDC, CTMS, IWRS, IVRS, IRT, eSource, eCOA, ePRO and a bunch more TLAs.
For the life of me, I do not understand the difference between eCOA and ePRO, and why we need 2 buzzwords for patient reporting.

Here is marketing collateral from a CRO. As you will see, they miss the boat on all the things that are important for site coordinators and study monitors.

We adapt responsively to change in your clinical trial to minimize risk and drive quality outcomes. Clinical research is complicated and it’s easy to get off track due to inexperienced project leaders, inflexible workflows, or the failure to identify risks before they become issues. We derive expert insights from evidence-based processes and strategic services to be the driving force behind quality outcomes, including optimized data, patient safety, reduced time-to-market, and operational savings.

What CRCs and CRAs have to say about the leading eClinical solutions

I recently did an informal poll on Facebook of what problems the CRA/CRC proletariat have to deal with on the job.

I want to thank Tsvetina Dencheva for helping me grok and distill people’s complaints into 3 central themes.

Theme no. 1 – enter data once

Enable administrators to enter data once and have their authorized user lists, sites and metrics update automatically, without all kinds of double and triple work and fancy import/export footwork between different systems. Failing a way of managing things in one place, at least have better integration between the EDC and the CTMS.

The IT guys euphemistically call this problem information silos. I’ve always thought that they used the word silos (which are used to store animal feed) as a way of identifying with people who farm, without actually having to get their hands dirty by shovelling silage (which is really smelly, btw).

I understand the rationale for having a CTMS and an EDC about as much as I understand the difference between eCOA and ePRO.

Here is some raw data from the informal Facebook survey

If I enter specific data, it would be great if there’s an integrated route to all fields connected to the said data. An easy example is – if I enter a visit, it transfers to my time sheet.

Same goes to contact reports. Apps! All sorts of apps, ctms, verified calculators, edc, ixrs, Electronic TMF. The list goes on and on. How could I forget electronic training logs? Electronic all sorts of log.

There are a lot of things we do day to day that are repetitive and can take away from actually moving studies forward. Thinking things like scanning reg docs, auto capturing of reg doc attributes (to a point), and integration to the TMF. Or better system integration, meaning where we enter a single data point (ie CTMS) and flowing to other systems (ie new site in CTMS, create new site in TMF. Enrolment metrics from EDC to CTMS) and so on.

If only the f**ing CTMS would work properly.

Theme number 2 – single sign-on.

The level of frustration with having to log in to different systems is very high. The ultimate solution is to use social login – just log in to the different systems with your Google Account and let Google/Firebase authenticate your identity.
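
On the server side, that can be as small as the following sketch, assuming the Firebase Admin SDK and a service-account credential in the environment; the function name is mine.

```python
import firebase_admin
from firebase_admin import auth  # pip install firebase-admin

# Reads the service-account credential from GOOGLE_APPLICATION_CREDENTIALS.
firebase_admin.initialize_app()

def user_from_token(id_token: str) -> str:
    """Verify a Google/Firebase ID token once; every eClinical app can
    then trust the same identity instead of keeping its own passwords."""
    decoded = auth.verify_id_token(id_token)  # raises if expired or forged
    return decoded["uid"]
```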

Theme number 3 – data integrity

EDC edit-check development eats up a lot of time and, when poorly designed, generates thousands of queries. Not good.

There is a vision of an EDC that understands the data semantics from the context of the study protocol.

This is a very cool and advanced notion.

One of the study monitors put it like this:

The EDC should be smart enough to identify nonsense without having to develop a bunch of edit checks each time and have to deal with queries.

The EDC should be able to calculate if a visit is in a proper time window, or if imaging is in a proper time window. Also for oncology if RECIST 1.1 is used, then the EDC should be able to calculate: Body Surface Area, correct dosing based on weight and height of a patient, RECIST 1.1 tumor response and many other things that simply can be calculated.
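
The calculations the monitor mentions really are mechanical, which is the point. Here is a sketch of two of them, using the Mosteller formula for Body Surface Area; the +/- 3 day visit window is an illustrative bound, not a protocol value.

```python
from datetime import date
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body Surface Area (m^2), Mosteller formula."""
    return sqrt(height_cm * weight_kg / 3600.0)

def visit_in_window(planned: date, actual: date, days: int = 3) -> bool:
    """Illustrative +/- 3 day window around the planned visit date."""
    return abs((actual - planned).days) <= days

print(round(bsa_mosteller(172, 70), 2))                      # ~1.83 m^2
print(visit_in_window(date(2020, 3, 9), date(2020, 3, 11)))  # True
```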

About flaskdata.io

We specialise in faster submission for connected medical devices. We can shorten your time to market by 9-12 months with automated patient compliance detection and response.

Call us and we’ll show you how. No buzzwords required.

4 strategies to get connected medical devices faster to FDA submission

Introduction

Better designs, site-less trials, all-digital data collection and PCM (patient compliance monitoring) can all save time and money in connected medical device clinical trials.  This article will help you choose which strategies will be a good fit to help you validate your connected medical device and its intended use for submission to FDA.

What is the baseline cost? (Hint: don’t look at the costs of drug studies.)

If you want to save, you need to know the price tag. Note that the cost of drug trials, including CRO and regulatory affairs, is an order of magnitude higher than for connected medical devices. A JAMA report from November 2018 looked at drug trials and concluded that a median cost of $19M was cheap compared to the total cost of drug development – $1-2BN.

Findings:  In this study of 59 new therapeutic agents approved by the FDA from 2015 to 2016, the median estimated direct cost of pivotal efficacy trials was $19 million, with half of the trial cost estimates ranging from $12 million to $33 million. At the extremes of the distribution were 100-fold cost differences, and patient enrollment varied from fewer than 15 patients to more than 8000 patients.

By comparison, the estimated cost of medical device clinical trials to support FDA approval ranges from $1 million to $10 million. A report from May 2017 surveyed the costs of medical device clinical trials and the potential of patient registries to save time and money. The report has some interesting numbers:

1. The average cost to bring a low-to-moderate-concern device from concept to 510(k) clearance is $31 million; 77% of that is spent on FDA-related/regulatory-affairs activities.

2. The average cost for a high-risk PMA device is $94 million, with $75 million spent on FDA-related/regulatory-affairs activities, and an average of 4.5 years from first contact with the FDA to device approval.

3. Clinical trials outside the US are 30% to 50% cheaper. Less than 50% of medical device trials are now conducted in the US.

I. Better study designs

Real-world data (RWD) and real-world evidence (RWE) are being used for post-market safety surveillance and for designing studies, but they are not replacements for a randomized trial with a controlled clinical protocol. The FDA recently issued guidance on the use of real-world evidence for regulatory decisions, and uses RWD and RWE to monitor post-market safety and adverse events and to make regulatory decisions.

RWD and RWE can be used in 4 ways to improve the design of medical device clinical trials when there is a predicate device that is already being used to treat patients:

1. Use RWD/RWE to improve the quality and efficiency of device evaluation at each study phase (feasibility, pivotal, and post-market), allowing for rapid iteration of devices at a lower cost.

2. Explore new indications for existing devices.

3. Compare a new device to the standard of care in a cost-efficient way.

4. Establish best practices for the use of a device in sub-populations or different sub-specialties.

You will need to factor in the cost of obtaining access to the data and cost of data science.

But real-world data may not be reliable or relevant enough to help design the study. As the FDA notes in its guidance on using real-world evidence to support regulatory decision-making:

RWD collected using a randomized exposure assignment within a registry can provide a sufficient number of patients for powered subgroup analyses, which could be used to expand the device’s indications for use. However, not all RWD are collected and maintained in a way that provides sufficient reliability. As such, the use of RWE for specific regulatory purposes will be evaluated based on criteria that assess their overall relevance and reliability. If a sponsor is considering using RWE to satisfy a particular FDA regulatory requirement, the sponsor should contact FDA through the pre-submission process.

II. Site-less trial model

Certain kinds of studies for chronic diseases with simple treatment protocols can use the site-less trial model. The term site-less is actually an oxymoron, since site-less or so-called virtual trials are conducted with a central coordinating site (or a CRO like Science37). Nurses and mobile apps are used to collect data from patients at home. You still need a PI (principal investigator).

The considerable savings accrued by eliminating site costs need to be balanced against the costs of technology, customer support, data security, and the salaries and travel expenses of nurses visiting patients at home.

III. Mostly-digital data collection

For a connected medical device, mostly-digital data collection means 3 things:

1. Collect patient-reported outcome data using a mobile app or text messaging (see the sketch after this list).

2. Collect data from the connected medical device using a REST API.

3. Enable the CRC (clinical research coordinator) to collect data from patients (the ICF, for example) using a Web or mobile interface (so-called eSource) and skip the still-traditional paper-transcription step. In drug studies, this is currently impossible because hospital source documents are paper or are locked away in an enterprise EMR system. For connected medical device studies in pain, cannabis and chronic diseases, most of the source data can be collected by the CRC in direct patient interviews. Blood tests will still need to be transcribed from paper. Mostly-digital means mostly-fast: data latency for the paper source should be 24 hours, and data latency for the digital feeds should be zero.
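
As an illustration of the first item, here is a hedged sketch that sends a daily diary prompt over SMS using the Twilio client library; the credentials, phone numbers and wording are placeholders, and an inbound webhook would capture the patient’s reply.

```python
from twilio.rest import Client  # pip install twilio

# Placeholder credentials and numbers, for illustration only.
client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")

def send_diary_prompt(patient_phone: str) -> None:
    """Nudge the patient to file today's ePRO diary entry."""
    client.messages.create(
        to=patient_phone,
        from_="+15005550006",
        body="Study reminder: please reply with today's pain score (0-10).",
    )

send_diary_prompt("+972500000000")
```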

There are a number of companies like Litmus Health moving into the space of digital data collection from mobile devices, ePRO and wearables. However, unlike validating a connected medical device for a well-defined intended use, Litmus Health is focused on clinical data science for health-related quality of life.

IV. PCM (patient compliance monitoring)

Once the data is in the system, you are almost there. Fast (low-latency) data from patients, your connected device and the CRC (who may be a nurse in a site-less trial) are 3 digital sources which can be correlated in order to create patient compliance metrics. But that is a story for another essay.

Summary

We have seen that new business models and advanced technologies can help sponsors conduct connected medical device trials cheaper and faster. They may not all be a good fit for your product. Contact us and we will help you evaluate your options.

For more information, read Gail Norman’s excellent article Drugs, Devices, and the FDA: An overview of the approval process.

Putting lipstick on the pig of electronic CRF?


Good online systems do not use paper paradigms. In this post, I will try to entertain you with a historical perspective and quantitative tools for choosing an EDC system for your medical device study.

Decades of common wisdom in clinical trials still hold to a paper-based data processing model. One of the popular EDC systems talks about the advantages of having online forms that look exactly like paper forms. True, familiarity is a good thing, but a digital UX has far more possibilities for user engagement and ease-of-use than paper. So it is, in a way, admitting failure to provide a better UX and downgrading to paper.

We recently engaged with an Israeli medical device vendor who has an innovative device for helping solve a common medical indication for men over 50.

I won’t go into details.

If you are a guy over 50, you know what I mean.

If not, it doesn’t matter.

The client CEO was interested in an eCRF (electronic case report form) system. eCRF is better than paper, but it is, at the end of the day, just a paper form in an electronic format.

I was having a lot of trouble trying to understand the CEO’s business requirements. My attempts to steer the conversation to a discussion of how to obtain fast data for his clinical trial and reduce his time to FDA submission fell on deaf ears. A follow-up conversation and demo of Flaskdata with the clinical and regulatory manager focused more on reports and how to manage queries. Queries are a vestige of the paper-CRF period, when a study monitor would visit the research site once a month, compare the paper source with the electronic data entry, and raise queries for discrepancies.

To put this process into historical context, let’s compare accounting systems from the late 70s and early 80s to an eCRF system.

Accounting versus eCRF

| Feature | Accounting circa 1970 | eCRF circa 2019 |
| --- | --- | --- |
| Input data | Paper JV (journal voucher) | Paper source |
| Data entry | Data entry to a 2-sided accounting system | Data entry to an eCRF |
| Data processing | A batch job processes punch-card data entry and produces a data entry report and a data error report | Site coordinators enter data to a Web app 1-3 days after the patient visit. Data entry errors or invalid data create data validation queries, which are ignored until the study monitor visit a month later |
| Exception reporting | Data error report, with non-numeric or invalid dates | Queries |
| Management reports | Trial balance, profit and loss, cash flow | Bean counters of CRF/items |

What is profit and loss?

What does a cash flow model have to do with clinical trials?


Cost justification and TCO for medical device EDC systems

My first recommendation would be: don’t buy an EDC system just because it’s cheap. Charging $100-300/month for a data entry application is not a reason to give someone money. As a client of ours once said – “I know I can use Google Forms for data entry and it’s free, but Google Forms does not have an audit trail, so Google Forms is not an option for clinical trials”.

As a rule of thumb, a good EDC system for medical device studies should include audit trails and a clinical cash-flow report (the flow of patients in and out, the flow of data items in and out). The EDC should also be able to produce a clinical profit-and-loss statement, showing you how well you are doing on your primary and secondary efficacy and safety endpoints. A well-designed and well-implemented EDC should include a robust data model for testing the primary endpoint and collecting safety data. At a minimum, a solid design and implementation will cost at least $10,000. Over 10 months, that’s a starting cost of $1,000/month. As Robert Heinlein said – “There is no such thing as a free lunch”.

Your decision to buy an EDC should be based on an economic breakeven point. One breakeven method is based on cost reduction in site monitoring. Assume the EDC system costs $4,000/month (weighted cost including implementation) and a site visit costs $800/day; then your EDC system must save 5 site-visit days a month while assuring protocol compliance. This places an upper bound on the price you can pay.
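
The arithmetic is simple enough to write down; the $4,000 and $800 figures are the ones from the paragraph above, not market data.

```python
def breakeven_site_visits(edc_monthly_cost: float, visit_cost_per_day: float) -> float:
    """Site-visit days per month the EDC must eliminate to pay for itself."""
    return edc_monthly_cost / visit_cost_per_day

print(breakeven_site_visits(4000, 800))  # 5.0 visit-days per month
```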

This is admittedly problematic for small, 1-site studies, which often use DIY implementations. Just remember that ignoring the implementation cost does not make the product cheaper. In other words, calculate your TCO (total cost of ownership).

Or as one wise man once said – I’m too poor to buy a cheap car.