Streaming clinical trials in a post-Corona future

Last week, I wrote about using automated detection and response technology to mitigate the next Corona pandemic.

Today – we’ll take a closer look at how streaming data fits into virtual clinical trials.

Streaming – not just for Netflix

Streaming real-time data and automated digital monitoring are not foreign ideas to people quarantined at home during the current COVID-19 pandemic. Streaming: we are at home, watching Netflix. Automated monitoring: we now use digital surveillance tools based on mobile-phone location data to locate and track people who came in contact with COVID-19-infected people.

Slow clinical trial data management. Sponsors flying blind.

Clinical trials use batch processing of data. They do not currently stream patient and investigator signals to manage risk and ensure patient safety.

The latency of batch processing in clinical trials is something like 6-12 months, if we measure from first-patient-in to the time a bio-statistician starts working on an interim analysis.

Risk-based monitoring for clinical trials uses batch processing to produce risk profiles of sites in order to prioritize another batch process – namely site visits and SDV (source data verification).

The latency of central CRO monitoring using RBM varies wildly, from 1 to 12 weeks. This is reasonable, considering that the design objective of RBM is to prioritize a batch process of site monitoring that runs every 5-12 weeks.

In the meantime, the study is accumulating adverse events and losing patients to non-compliance, and the sponsor is flying blind.

Do you think 2003-vintage data formats will work in 2020 for coronavirus?

An interesting side-effect of batch processing for RBM is the use of SDTM for processing data and preparing reports and analytics.

SDTM provides a standard for organizing and formatting data to streamline processes in collection, management, analysis and reporting. Implementing SDTM supports data aggregation and warehousing; fosters mining and reuse; facilitates sharing; helps perform due diligence and other important data review activities; and improves the regulatory review and approval process. SDTM is also used in non-clinical data (SEND), medical devices and pharmacogenomics/genetics studies.

SDTM is one of the required standards for data submission to FDA (U.S.) and PMDA (Japan).

It was never designed nor intended to be a real-time streaming data protocol for clinical data. It was first published in June 2003. Variable names are limited to 8 characters (a SAS 5 transport file format limitation).
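As a toy illustration (not a full SDTM validator), the SAS v5 transport constraint is easy to check programmatically:

```python
def valid_sdtm_name(name: str) -> bool:
    """Toy check of the SAS v5 transport constraint that SDTM inherits:
    variable names are at most 8 characters of uppercase ASCII."""
    return (len(name) <= 8 and name.isascii()
            and name == name.upper() and name.isidentifier())

print(valid_sdtm_name("AESTDTC"))              # True (AE start date/time)
print(valid_sdtm_name("ADVERSE_EVENT_START"))  # False: too long for SAS v5 transport
```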

For more information on SDTM, see the 2011 paper by Fred Woods describing the challenges of creating SDTM datasets. One of the surprising challenges is date/time formats – which continue to stymie biostatisticians to this day. See Jenya’s excellent post on the importance of collecting accurate date-time data in clinical trials. We have open, vendor-neutral standards and JavaScript libraries for manipulating dates. It is a lot easier today than it was in June 2003.
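To make the point concrete, here is a minimal sketch (using Python’s standard library for brevity; the function name is invented): an unambiguous interchange format like ISO 8601 makes date-time handling trivial compared with the locale-dependent formats of 2003.

```python
from datetime import datetime, timezone

def normalize_visit_timestamp(raw: str) -> str:
    """Parse an ISO 8601 timestamp and normalize it to UTC.

    Ambiguous formats (is 03/04/2020 March 4th or April 3rd?) were a
    constant source of date/time errors in older clinical datasets;
    an unambiguous interchange format avoids the problem entirely.
    """
    dt = datetime.fromisoformat(raw)
    if dt.tzinfo is None:
        # Assume UTC when the site device did not record an offset.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc).isoformat()

print(normalize_visit_timestamp("2020-04-03T14:30:00+02:00"))
# 2020-04-03T12:30:00+00:00
```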

COVID-19 – we need speed

In a post-COVID-19 era, site monitoring visits are impossible and patients are at home. Demands on clinical trials are outgrowing the batch-processing paradigm. Investigators, nurses, coordinators and patients cannot wait for data to be converted to SDTM, processed in a batch job and sent to a data manager. Life science sponsors need that data now, and front-line teams with patients need an immediate response.

Because ePRO, EDC and wearable data collection are siloed (or waiting for batch file uploads over a USB connection, like the Philips Actiwatch or MotionWatch), batch ETL tools cannot process the data. To place this in context: the patient has to come into the site, find parking, and give the watch to a site coordinator, who plugs the device into a USB connection, uploads the data and imports it into the EDC, which then waits for an ETL job to convert to SDTM and process into an RBM system.

Streaming data for clinical research in a COVID-19 era

In order to understand the notion of streaming data for clinical research in a COVID-19 era, I drew inspiration from (and shamelessly borrowed the graphics of) Bill Scott’s excellent article on Apache Kafka – “Why are you still doing batch processing? ETL is dead”.

Crusty Biotech

The Crusty Biotech company has developed an innovative oral treatment for coronavirus called Crusdesvir. They contract with a site, Crusty Kitchen, to test the safety and efficacy of Crusdesvir. Crusty Kitchen has one talented PI and an efficient site team that can process 50 patients/day.

The CEO of Crusty Biotech decides to add 1 more site, but his clinical operations process is built for 1 PI at a time who can perform the treatment procedure in a controlled way and comply with the Crusdesvir protocol. It’s hard to find a skilled PI and site team, but he finally finds one and signs a contract with them.

Now they need to add 2 more PIs and sites, and then 4. With the demand to deliver a working COVID-19 treatment, Crusty Biotech needs to recruit more sites that are qualified to run the treatment. Each site needs to recruit (and retain) more patients.

The Crusty Biotech approach is an old-world batch workflow of tasks wrapped in a rigid environment. It is easy to create and it works for small batches, but it is impossible to grow (or shrink) on demand. Scaling requires more sites, introduces more time into the process, more moving parts, more adverse events, less ability to monitor with site visits and – the most crucial piece of all – lower reliability of the data, since each site is running its own slow-moving, manually-monitored process.

Castle Biotech

Castle Biotech is a competitor to Crusty Biotech – they also have an anti-viral treatment with great potential. They decided to plan for rapid ramp-up of their studies by using a manufacturing-process approach, with an automated belt delivering raw materials and work-in-process along a stream of work-stations. (This is how chips are manufactured, btw.)

Belt 1: Ingredients – delivers individual measurements of ingredients.

Belt 1 is handled by the Mixing-Baker. When the ingredients arrive, she mixes them, then puts the mixture onto Belt 2.

Belt 2: Mixture – delivers the perfectly whisked mixture.

Belt 2 is handled by the Pan-Pour-Baker. When the mixture arrives, she delicately measures and pours it into the pan, then puts the pan onto Belt 3.

Belt 3: Pan – delivers the pan with an exact measure of mixture.

Belt 3 is handled by the Oven-Baker. When the pan arrives, she puts it in the oven and waits the specified amount of time until it’s done, then puts the cooked item on the next belt.

Belt 4: Cooked Item – delivers the cooked item.

Belt 4 is handled by the Decorator. When the cooked item arrives, she applies the frosting in an interesting and beautiful way, then puts it on the next belt.

Belt 5: Decorated Cupcake – delivers a completely decorated cupcake.

We see that once the infrastructure is set up, we can easily add more bakers (PIs in our clinical trial example) to handle more patients. It’s easy to add new cohorts and new designs by adding different types of ‘bakers’ to each belt.

How does cupcake-baking relate to clinical data management?

The Crusty Biotech approach is old-world batch/ETL – a workflow of tasks set in stone. 

It’s easy to create. You can start with a paper CRF or start with a low-cost EDC. It works for small numbers of sites and patients and cohorts but it does not scale.

However, the process breaks down when you have to visit sites to monitor the data and do SDV because you have a paper CRF. Scaling the site process requires additional sites, more data managers, more study monitors/CRAs, more batch processing of data, and more round trips to the central monitoring team and data managers. More cost, more time and a 12-18 month delay in delivering a working coronavirus treatment.

The Castle Biotech approach is like data streaming. 

Using a tool like Apache Kafka, the belts are topics – streams of similar data items. Small applications (consumers) can listen on a topic (for example, adverse events) and notify the site coordinator or study nurse in real time. As the flow of patients in a study grows, we can add more adverse-event consumers to do the automated work.
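Kafka itself needs a running broker, so here is a toy in-memory stand-in that sketches just the publish/subscribe idea (the topic name and record fields are invented for illustration):

```python
from collections import defaultdict

class MiniBroker:
    """Toy stand-in for a Kafka broker: topics are named channels,
    consumers are callbacks invoked as each record arrives."""
    def __init__(self):
        self.consumers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.consumers[topic].append(callback)

    def publish(self, topic, record):
        # Every consumer listening on this topic sees the record.
        for callback in self.consumers[topic]:
            callback(record)

broker = MiniBroker()
alerts = []

# A consumer listening on a hypothetical adverse-events topic:
broker.subscribe("adverse-events",
                 lambda rec: alerts.append(f"notify coordinator at site {rec['site']}"))

# A site device or eCRF publishes a record; the alert fires immediately,
# not after a 6-12 month batch cycle.
broker.publish("adverse-events", {"site": 7, "severity": "serious"})
print(alerts)  # ['notify coordinator at site 7']
```

Adding capacity is then just subscribing more consumers to the same topic, which is exactly the “add more bakers to the belt” move.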

Castle Biotech is approaching the process of clinical research with a patient-centric streaming and digital management model, which allows them to expand the study and respond quickly to change (the next pandemic in Winter 2020?).

The moral of the story – Don’t Be Krusty.

So what’s wrong with 1990s EDC systems?

Make no mistake about it: the EDC systems of 2020 are using a 1990s design. (OK – granted, there are some innovators out there, like ClinPal with their patient-centric trial approach.) But the vast majority of today’s EDC systems, from Omnicomm to Oracle to Medidata to Medrio, use a 1990s design. Even the West Coast startup Medable is going the if-you-can’t-beat-them-join-them route, fielding the usual alphabet soup of buzzword-compliant modules – ePRO, eSource, eConsent, etc. Shame on you.

Instead of using in-memory databases for real-time clinical data acquisition, we’re fooling around with SDTM and targeted SDV.

When in reality, SDTM is a standard for submitting tabulated results to regulatory authorities (not a transactional database, nor an appropriate data model for time series). And even more reality: we should not be doing SDV to begin with – so why do targeted SDV, if not to perpetuate the CRO billing cycle?

Freedom from the past comes from ridding ourselves of the clichés of today.

Personally – I don’t get it. Maybe COVID-19 will force a change in the paper-batch-SDTM-load-up-the-customer-with-services system.

So what is wrong with 1990s EDC?

The really short answer is that computers do not have two kinds of storage any more.

It used to be that you had the primary store, and it was anything from acoustic delay-lines filled with mercury, via small magnetic doughnuts, via transistor flip-flops, to dynamic RAM.

And then there was the secondary store: paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines, and these days so small that girls get disappointed if they think they got hold of something other than the MP3 player in your pocket.

And people still program their EDC systems this way.

They have variables in paper forms that site coordinators fill in on paper and then 3-5 days later enter into suspiciously-paperish-looking HTML forms.

For some reason, instead of making a great UI for the EDC, a whole group of vendors gave up and created a new genre called eSource, creating immense confusion about why you need yet another system.

What the guys at Gartner euphemistically call a highly fragmented and non-integrated technology stack.
What the site coordinators who have to deal with 5 different highly fragmented and non-integrated technology stacks call a nightmare.

Awright.

Now we have some code – in Java or PHP or maybe even Dot NET THAT READS THE VARIABLES FROM THE FORM AND PUTS THEM INTO VARIABLES IN MEMORY.

Now we have variables in “memory” and move data to and from “disk” into a “database”.

I like the database thing – where clinical people ask us – “so you have a database”. This is kinda like Dilbert – oh yeah – I guess so. Mine is a paradigm-shifter also.

Anyhow, today computers really only have one kind of storage, usually some sort of disk; the operating system and the virtual-memory management hardware have converted RAM into a cache for the disk storage.

The database process (say, Postgres) allocates some virtual memory and tells the operating system to back this memory with space from a disk file. When it needs to send an object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If/when the kernel decides it needs the RAM for something else, the page gets written to the backing file and the RAM page is reused elsewhere.
The next time Postgres refers to that virtual memory, the operating system will find a RAM page, possibly freeing another, and read the contents back in from the backing file.

And that’s it.
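The idea can be demonstrated from user space with a memory-mapped file – a sketch only, not Postgres internals:

```python
import mmap
import os
import tempfile

# Create a small backing file, then map it into virtual memory.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

with mmap.mmap(fd, 4096) as mem:
    # Writing to "memory" is writing to the file: the kernel's page
    # cache decides when the bytes actually reach the disk.
    mem[0:5] = b"hello"
    mem.flush()

# Reading the file back shows the bytes we wrote through the mapping.
with open(path, "rb") as f:
    data = f.read(5)
print(data)  # b'hello'

os.close(fd)
os.remove(path)
```

There is only one store; RAM is just the cache in front of it.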

Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on.
And maybe, with COVID-19 and sites getting shut down, people will catch on that what we need is a really nifty user interface for – GASP – THE SITE COORDINATORS and, even more AMAZING, a single database in memory for ALL the data from patients, investigators and devices.

Because at the end of the day – grandma knows that there ain’t no reason not to have a single data model for everything and just shove it into virtual memory for instantaneous, automated DATA QUALITY, PATIENT SAFETY AND RISK ASSESSMENT in real-time.

Not 5-12 weeks later after a research site visit, not a month later after the data management trolls in the basement send back some reports with queries, and certainly not after spending 6-12 months cleaning up unreliable data due to the incredibly stupid process of paper to forms to disk to queries to site visits to data managers to data cleaning.

Develop project management competencies to speed up your clinical trials

The biggest barrier to shortening clinical trial data cycle times is not recruitment. It is not having a fancy UI for self-service eCRF forms design. It is not software.

It is not, to paraphrase Medidata, having the ability to Run Your Entire Study On A Unified, Intelligent Platform Built On Life Science’s Largest Database.

It is incompetence in managing a construction project.

That construction project is called designing a clinical trial and the information system for collecting and monitoring data.

For a long time, I thought that this was peculiarly an Israeli problem.

However, conversations with colleagues in the US and Europe suggest that late starts, feet-dragging and time-consuming change requests may be the norm. Collecting too many variables in the data model is the norm. Complex, long forms that make life hard for the site coordinators are the norm. Surfeits of edit checks and thousands of queries are the norm.

Most companies spend little money on project management training and even less on clinical project strategy development. Most training is on process, regulatory compliance and standard operating procedures.

Rarely, do we see medical device companies spend money on competencies that will help employees construct clinical trial projects more effectively.

There are verbal commitments that are rarely action commitments.

Yet there is a direct linkage between clinical operations team knowledge and corporate revenue growth which is dependent upon delivering innovative drugs and devices to market.

One way management teams can maximise their investments in project training and clinical project strategy development (outsourced or in-sourced) is to link clinical operations team training to study management competency models that management can qualify and measure.

But the development of a clinical team competency model has strategic and operational barriers that must be managed to make it successful.

Clinical trial project management competency model example

Clinical team Competency Setup Considerations

1. Clinical people often think that building the ‘database’ is an art, not a science, and don’t like to be measured on what they perceive as a non-core skill.

2.  Your project  competency model must include both soft and hard skills training to make it effective.

3. Clinical trial management teams must focus on the competency requirements to make it work and it must be a hands-on approach.

4. You must be able to quantitatively measure the competencies (time to design forms, edit check design, monitoring signals, data cycle time, time spent in meetings, change requests).

5. Competency clinical trial management training programs must be continuous training and educational events, not a one-time event or else the program will fail.

6. The steps of your competency program must be very specific and delineated to make sure it can be delivered and measured.

7. Your clinical operations team must agree that the competencies you are measuring truly help them deliver the study faster. (They don’t have to like doing it, just agree that these are required action steps to reduce data cycle times.)

8. When implementing your project competencies audits, the certification should be both written and experientially measured to get an accurate reading of the clinical operations team member capabilities.

9. All project  competency certification candidates should have the ability to retest to confirm skills growth.

10. Project competency assessments should never be used solely as a management scorecard tool to make employment decisions about clinical operations team members.
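Point 4 above is the one most often skipped. As a minimal sketch of what “quantitatively measure” might look like (the record field names here are invented for illustration), data cycle time per record is just the gap between two timestamps:

```python
from datetime import datetime
from statistics import median

def data_cycle_times(records):
    """Days from patient visit to data entry, one value per record.

    `visit` and `entered` are invented field names for illustration;
    any EDC audit trail carries equivalent timestamps.
    """
    return [
        (datetime.fromisoformat(r["entered"]) - datetime.fromisoformat(r["visit"])).days
        for r in records
    ]

records = [
    {"visit": "2020-03-01", "entered": "2020-03-06"},
    {"visit": "2020-03-02", "entered": "2020-03-04"},
    {"visit": "2020-03-03", "entered": "2020-03-17"},
]
print(median(data_cycle_times(records)))  # 5
```

Tracked per site and per month, a number like this turns “deliver the study faster” from a slogan into a measurable competency.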

To increase your company revenues and clinical project training success, build and deliver project competency models.

4 strategies to get connected medical devices faster to FDA submission

Introduction

Better designs, site-less trials, all-digital data collection and PCM (patient compliance monitoring) can all save time and money in connected medical device clinical trials.  This article will help you choose which strategies will be a good fit to help you validate your connected medical device and its intended use for submission to FDA.

What is the baseline cost? (hint: don’t look at the costs of drug studies)

If you want to save, you need to know the price tag. Note that the costs of drug trials, including CRO and regulatory affairs, are an order of magnitude higher than for connected medical devices. A JAMA report from Nov 2018 looked at drug trials and concluded that a median cost of $19M was cheap compared to the total cost of drug development – $1-2BN.

Findings:  In this study of 59 new therapeutic agents approved by the FDA from 2015 to 2016, the median estimated direct cost of pivotal efficacy trials was $19 million, with half of the trial cost estimates ranging from $12 million to $33 million. At the extremes of the distribution were 100-fold cost differences, and patient enrollment varied from fewer than 15 patients to more than 8000 patients.

By comparison, the estimated cost of medical device clinical trials to support approval by the FDA, ranges from $1 million to $10 million. A report from May 2017 surveyed the costs of medical device clinical trials and the potential of patient registries to save time and money. The report has some interesting numbers:

1. The average cost to bring a low-to-moderate-concern device from concept to 510(k) clearance is $31 million; 77% of that is spent on FDA-related/regulatory-affairs activities.

2. The average cost for a high-risk PMA device is $94 million, with $75 million spent on FDA-related/regulatory-affairs activities, and an average of 4.5 years from first contact with FDA to device approval.

3. Clinical trials outside the US are 30% to 50% cheaper. Less than 50% of medical device trials are now conducted in the US.

I. Better study designs

Real-world data (RWD) and real-world evidence (RWE) are being used for post-market safety surveillance and for designing studies, but they are not replacements for conducting a randomized trial with a controlled clinical protocol.  FDA recently issued guidance for use of real-world evidence for regulatory decisions.  FDA uses RWD and RWE to monitor post-market safety and adverse events and to make regulatory decisions.

RWD and RWE can be used in 4 ways to improve the design of medical device clinical trials when there is a predicate device that is already being used for treating patients.

1. Use RWD/RWE to improve the quality and efficiency of device evaluation at study phases (feasibility, pivotal and post-market), allowing rapid iteration of devices at a lower cost.

2. Explore new indications for existing devices.

3. Compare a new device to the standard of care cost-efficiently.

4. Establish best practices for the use of a device in sub-populations or different sub-specialties.

You will need to factor in the cost of obtaining access to the data and cost of data science.

But real-world data may not be reliable or relevant to help design the study.  As FDA notes in their guidance for Using Real-world evidence to support regulatory decision making:

RWD collected using a randomized exposure assignment within a registry can provide a sufficient number of patients for powered subgroup analyses, which could be used to expand the device’s indications for use. However, not all RWD are collected and maintained in a way that provides sufficient reliability. As such, the use of RWE for specific regulatory purposes will be evaluated based on criteria that assess their overall relevance and reliability. If a sponsor is considering using RWE to satisfy a particular FDA regulatory requirement, the sponsor should contact FDA through the pre-submission process.

II. Site-less trial model

Certain kinds of studies for chronic diseases with simple treatment protocols can use the site-less trial model. The term site-less is actually an oxymoron, since site-less or so-called virtual trials are conducted with a central coordinating site (or a CRO like Science37). Nurses and mobile apps are used to collect data from patients at home. You still need a PI (principal investigator).

The considerable savings accrued by eliminating site costs need to be balanced against the costs of technology, customer support, data security, and the salaries and travel expenses of nurses visiting patients at home.

III. Mostly-digital data collection

For a connected medical device, mostly-digital data collection means 3 things:

1. Collect patient-reported outcome data using a mobile app or text messaging.

2. Collect data from the connected medical device using a REST API.

3. Enable the CRC (clinical research coordinator) to collect data from patients (I/E criteria and ICF, for example) using a web or mobile interface (so-called eSource) and skip the still-traditional paper-transcription step. In drug studies this is currently impossible, because hospital source documents are on paper or locked away in an enterprise EMR system. For connected medical device studies in pain, cannabis and chronic diseases, most of the source data can be collected by the CRC in direct patient interviews. Blood tests will still need to be transcribed from paper. Mostly-digital means mostly-fast: data latency for the paper source should be 24 hours, and data latency for the digital feeds should be zero.
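To illustrate point 2, here is a sketch of what one device reading might look like on the wire – the endpoint, field names and schema are assumptions for illustration, not a real device API:

```python
import json
from datetime import datetime, timezone

def build_device_reading(device_id: str, metric: str, value: float) -> str:
    """Serialize one device reading as the JSON body of a hypothetical
    POST /api/v1/readings request to the study's data platform."""
    payload = {
        "device_id": device_id,
        "metric": metric,
        "value": value,
        # Timestamp at the source, in UTC, so end-to-end latency is measurable.
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

body = build_device_reading("inhaler-042", "dose_delivered_mg", 2.5)
print(json.loads(body)["metric"])  # dose_delivered_mg
```

Because each reading carries its own source timestamp, zero-latency digital feeds can be verified rather than assumed.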

There are a number of companies like Litmus Health moving into the space of digital data collection from mobile devices, ePRO and wearables. However, unlike validating a connected medical device for a well-defined intended use, Litmus Health is focused on clinical data science for health-related quality of life.

IV. PCM (patient compliance monitoring)

Once the data is in the system, you are almost there. Fast (low-latency) data from patients, your connected device and the CRC (which may be nurses in a site-less trial) are 3 digital sources which can be correlated to create patient compliance metrics. But that is a story for another essay.

Summary

We have seen that new business models and advanced technologies can help sponsors conduct connected medical device trials cheaper and faster. They may not all be a good fit for your product. Contact us and we will help you evaluate your options.

For more information, read Gail Van Norman’s excellent article Drugs, Devices, and the FDA: An overview of the approval process.

Israel Biomed 2019 – the high-social, low-stress STEM conference

Impressions from Biomed 2019 in Tel Aviv

This week was the annual 3-day Biomed/MIXiii conference in Tel Aviv (I have no idea what MIXiii means, btw). The organizers also billed it as the “18th National Life Science and Technology Week” (which I also do not understand). It was a particularly difficult time for a medical device and pharma conference in Tel Aviv, since it coincided with the Eurovision 2019 activities – and the traffic was tough.

There were a huge number of lectures and participants from all over the world and I suppose from that perspective, the conference is a success and tribute to the burgeoning Israeli biomed industry.  Forbes calls Biomed “The High-Paying, Low-Stress STEM Job You Probably Haven’t Considered”.  I think that this is probably a good description for the conference – high participation but low stress.

My colleagues and I come to the conference to network, schmooze, meet customers and suppliers.  It’s a good opportunity to take a few meetings, say hi to friends and hustle for new business.  Having said that, I did meet a few really interesting companies:

RCRI – a Minneapolis, MN based medical device CRO. I met Todd Anderson and his boss Lisa Olson and pitched our approach of fast data in clinical trials to assure high levels of patient compliance to the protocol and submit faster to FDA. Todd and Lisa get it, and they were open about the CRO business model being more about people-hours than speed. They seemed genuinely interested in what we are working on, but it’s hard to tell with Americans.

Docdok Health – is a startup founded by Yves Nordman, who is a Swiss MD living in Carmiel.  It’s a doctor-patient communications platform beginning to branch out into Post-marketing studies with RWD.    We shared demos and it seems that there is synergy between our regulatory platform and their post-marketing work.

Resbiomed – I met Alex Angelov, the CEO. Alex is leading a consortium including Flaskdata, Carl Zeiss, Collplant, PreciseBio and Pluristem for a Horizon 2020 submission for an amazing project: an implant for the cornea. Dan Peres from Pluristem got us together. Cheer for us!

BSP Medical and ICB (Israel China Biotech investment) – my buddy Hadas Kligman literally took me by the hand to visit Yehuda Bruner and Andrew Zhang, and I did my 60-second elevator pitch on getting medical device companies to FDA/CFDA 6-12 months faster. We agreed to talk after the conference.

Butterfly Medical – I met Idan Geva, the CEO, last year at Biomed – we ate lunch at the same table. I pitched him but he was uninterested – they were using EDC2Go – and he didn’t want to hear other options. At the Minnesota pavilion, while I was talking to Todd Anderson from RCRI, Idan shows up, looks at me and says, “Heah – hi Danny – I left a contact-me request on your web site yesterday and no one got back to me.” I said shame on us. He says he was referred to us by someone from Florida who used to use Medidata. I asked where/who – was it Miami? He says yeah, it was Miami, and checks his phone – it’s someone from Precision Clinical Research who uses Flaskdata and recommended us. (Precision is a Miami site of one of our customers.) I asked what happened to EDC2Go – he said, well, you know, they are end-of-life (I think this means the end of low-cost EDC) and we are now entering questionnaires manually on paper and it is driving us crazy. He said, “Can you stick around and give us a demo at 15:00?” I said sure. We met at 15:00 by the bar upstairs in the David Intercontinental and I demoed the system. He said, “Show me the forms designer.” I showed him. He says, “Show me how a CRC enters data.” I showed him. He says, “Show me how to extract data.” I showed him. I think he actually did not believe how fast the extract-to-CSV process was and asked me twice if that was the data. In the end, the format of Mac Numbers was a bit strange for him. I showed him a quick presentation – and he saw that Serenno is a customer – and says, “Heah, Tomer is a neighbor of ours in the incubator in Yokneam.” He asked how much and I said $2K for a basic onboarding package and $1500/month, or $10K and we will build the CRF (their CRF is super simple, btw). He wanted a discount, being Israeli. I said, “Let’s meet with your clinical person and get her to buy in to the solution. If she buys in, you and I can talk business, but before that there is no point horse-trading.”

Count the probabilities of this happening and you will see that it is an impossible event.

Thursday I went back to give Todd a demo and to meet Dr Yael Hayun from Syqe Medical. Yael is one of the most impressive people I’ve met in a long time. She is an MD from Hadassah and one of the movers and shakers in LogicBio Therapeutics. After we chatted, I told her that Syqe is lucky to have her onboard. I did our “Today is about Speed” presentation and a short demo. She was suitably impressed, and then mentioned they had met with a Danish EDC company called Smart Trial – which, it turns out, is yet another low-cost eCRF provider. I said: look, eCRF is like 10% of the solution you need. In the case of Syqe, you have a digital inhaler, and with cannabis you are going to have a lot of concerns about patient compliance.

This is what we do – fast data collection from patients, investigators and digital inhalers and automated deviation detection and response.

On the way back – huge traffic from Eurovision. I didn’t hear a single lecture, but the meetings and people were outstanding.

The best alternative to paper in medical device clinical trials

There is an urban legend that paper is cheaper than EDC

$1000/subject for paper-based data management (the going rate in Israel) is a lucrative business for small CROs, independent data managers and biostatisticians, but $1000/subject is not the same as “total cost of ownership” (TCO).

The TCO of doing a clinical trial for an innovative medical device vendor will include the time spent by the scientific staff preparing and reducing data, additional time by the data manager to clean the data, and a large intangible cost: the delay in receiving management reports of patient compliance, typically 2-3 months if you are running a multi-site study on paper.

But beyond TCO, the most significant factor in a medical device vendor’s decision process is: how fast can you get actionable intelligence on your patients, sites and CRO?


The key is not first to eSource, the key is smart to market

This post is not for the Pfizers, Novartis, Merck and GSK giants of the life science industry.

It’s for the innovators – the smaller, creative life science companies that are challenged by the costs, the regulatory load and the complexity of executing a clinical trial.

This post is dedicated to the startup entrepreneurs of the world.

Building an EDC system for your clinical trial requires executing a plan in order to successfully recruit patients, collect high quality data, sustain patient safety and produce your statistical report in a timely fashion. You can potentially embark on an EDC journey without a plan, without a simple, well-designed protocol, and without appropriate clinical monitoring. This will guarantee you a long trek of pain, burning cash while you resolve issues and clean data.

The pivotal question for any clinical decision maker is this: do you want to start building an eCRF (electronic case report form) now and pay in pain and cash later, or plan now and own the process?

Simple concept, but important message.

It doesn’t matter if your business is a one-person startup or a “Big Five” bio-technology company. If you develop medical devices, medtech, biotech or drugs, you face a growing stable of competitors and barriers to success that can frustrate you as a business manager or a startup entrepreneur trying to make payroll.

Being an entrepreneur like you, I’ve constantly run into walls that have tried to prevent me from succeeding. In this post, you will learn how to plan and execute EDC quickly, efficiently and successfully, and break through the business, clinical and regulatory barriers that stand in your way. In a world where competition erodes market share and depresses product pricing, and where large-company branding and marketing tramples the innovative medtech startup, the key is not first to eCRF – the key is smart to market.

So – here are two factors to consider to help get you faster to the finish line.


Automated detection and response – a pattern of low-concern and high-impact


It is the hottest July in Israel in 17 years and the day after Tisha B’Av – a day of mourning and fasting where the Jews remember the destruction of the Temple and attempt to be nice to each other for a few days.

A time for reflection.

Today I want to talk about an “anti-design-pattern”.

In architecture and software engineering, a design pattern is a generalised and repeatable solution to a problem.

You can think of a design pattern as a template for how to solve a problem, build a house, construct a piece of furniture or write software code.

In the world of architecture, design patterns are thousands of years old. The correct way to build a house or a chair were worked out maybe 2000 years ago in a process of trial and error, inspiration, creativity and innovation that became accepted because it was good and made people feel good living in the house or sitting in the chair.

Design patterns usually have (or should have) names that pretty much describe what the pattern does. A pattern called “Office chair” is a template for building a chair for an office. The software pattern called “Proxy” describes an object that hides the operation of another object, forwarding messages to and receiving messages from the “real” object. As in real life, the real object delegates things to the proxy – “take this ballot and vote for me in the shareholder meeting” is an example of a human proxy.
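The shareholder-meeting example can be sketched in a few lines of code. This is a minimal illustration of the Proxy pattern only; the class and method names are invented:

```python
class Shareholder:
    """The 'real' object that does the actual work."""
    def vote(self, ballot: str) -> str:
        return f"voted: {ballot}"


class ShareholderProxy:
    """Stands in for the real object and forwards messages to it."""
    def __init__(self, principal: Shareholder):
        self._principal = principal

    def vote(self, ballot: str) -> str:
        # The proxy can log, filter or defer here, then delegate
        # the actual voting to the principal it represents.
        return self._principal.vote(ballot)


proxy = ShareholderProxy(Shareholder())
print(proxy.vote("approve merger"))  # → voted: approve merger
```

The caller talks only to the proxy and never needs to know how (or where) the principal actually casts the vote.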

What happens when people make or do things that are the opposite of a best-practice design pattern?

We call this an “anti-design-pattern”. The rationale for formalising anti-design-patterns is to learn from the mistakes of others and minimise the impact of those mistakes.

Low concern, High impact: An anti-design pattern for clinical trial monitoring

Low-concern, High-impact essentially says that low concern for a potential issue quickly leads to zero awareness of the issue. Zero awareness of the issue results in zero testing of the related data inputs. Zero testing of the related data inputs turns an otherwise routine change into a high-impact failure. Attempts to recover from the high-impact situation may trigger a series of additional issues – a “cascade failure”. Cascade failures happen in strongly interconnected systems such as power grids, complex software and the human body.

Let’s illustrate the Low-concern, high-impact anti-design pattern with an example from the world of clinical trials.

Consider a scenario where the study monitors use a data extract to produce a report of patients who were not eligible (i.e. did not pass the inclusion/exclusion criteria) but are participating in the trial. Quoting from the European Medicines Agency page on GCP:

Adherence to the protocol is a fundamental part of the conduct of a clinical study. Sponsors and investigators should not use systems of prospectively approving protocol deviations in order to effectively widen the scope of a protocol. Protocol design should be appropriate to the populations required and if the protocol design is defective, the protocol should be amended.

GCP does permit deviations from the protocol when necessary to eliminate immediate hazards to the subjects, but this should not normally arise in the context of inclusion/exclusion criteria, since the subject is not yet fully included in the trial at that point in the process.

I think we can agree that a report that enables the study monitor and sponsor to respond quickly to IE violations is a valuable tool.

Let’s now describe an attack scenario where the players fall on the Low-concern, High-impact anti-design pattern.

The sponsor uses C# code that extracts data from the EDC database to XML and then a report writer application formats the XML data into a report of IE violations. The EDC developer made a small change to the EDC database schema in order to enable ingesting data from mobile electronic source devices. The C# code maintainer was not aware of the EDC schema change and the study monitor who runs the report is not aware of the vagaries of C# and Oracle schema changes and does not want to know or understand code.

This is the first part of the pattern – “low-awareness”.

The study monitor runs the report as usual and everything looks OK. There are no IE violations – which is a good thing. Unbeknownst to the monitoring team, due to the schema change, records that were ingested from the electronic source tablets are not joined with the EDC subject record and as a result do not appear in the report.
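This failure mode is easy to reproduce. Here is a hypothetical sketch – the real system used C# and Oracle, but SQLite keeps it short, and all table and column names are invented. After the schema change, eSource subjects carry their identifier in a new column, so an extract that still joins on the old key silently drops their violations:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE subjects (legacy_id TEXT, esource_id TEXT, site TEXT)")
cur.execute("CREATE TABLE ie_violations (subject_id TEXT, criterion TEXT)")

# One paper-CRF subject (old schema) and one eSource subject (new schema,
# where the identifier moved to the esource_id column).
cur.execute("INSERT INTO subjects VALUES ('S-001', NULL, 'Site A')")
cur.execute("INSERT INTO subjects VALUES (NULL, 'S-002', 'Site B')")
cur.execute("INSERT INTO ie_violations VALUES ('S-001', 'age < 18')")
cur.execute("INSERT INTO ie_violations VALUES ('S-002', 'eGFR too low')")

# The unmodified extract still joins on the old key only:
rows = cur.execute("""
    SELECT s.site, v.criterion
    FROM ie_violations v
    JOIN subjects s ON s.legacy_id = v.subject_id
""").fetchall()
print(rows)  # → [('Site A', 'age < 18')] – the eSource violation vanishes
```

No error is raised anywhere: the inner join simply finds no match for the eSource record, so the report shows fewer violations than actually exist – green lights all round.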

Investigators at low-performing sites notice that there is no oversight on IE deviations and bend the rules in order to enrol patients that do not pass IE criteria. Note that there is a strong economic motivation for the PI to enrol as many patients as possible.

As a result of low-concern and low-awareness there is now zero-knowledge of the bug – since the report of IE violations is showing green lights.

Two years later (it's a four-year multi-center global study), the data is locked and an interim analysis is performed. The study statistician uses the same C# code to extract the data, but this time she notices missing data and pushes back to the sponsor that something does not seem kosher. The sponsor calls the CRO, who calls in a quality auditor, and three months later dozens of subjects are discovered to be in violation of the inclusion/exclusion criteria. This is a major setback for the study.

This is the second part of the pattern – “High-impact“.

An investigation of the case immediately leads to the question of why the C# data extract code was not re-validated after the EDC schema change. The answer leads us back to “low-concern”. The data extract is provided by a third-party clinical tool developer that the CRO uses in thousands of studies, and it always worked. In addition, the third-party clinical tool provider works with dozens of EDC vendors and may not have the management attention and resources to track changes in the EDC systems their customers use. On top of all that, the engineering team at the clinical tool provider had bad vibes in the past with the CRO IT staff, and after a few annoying support calls they flipped the bozo bit on the CRO IT folks, labelling them as stupid and incompetent.

Don’t flip the bozo bit

“Don’t flip the bozo bit” is a reference to Bozo the Clown. It is a conscious decision to ignore another person's input, because that person is considered a “bozo”. Jim McCarthy (former Visual C++ manager at Microsoft) coined the term in his 1995 book Dynamics of Software Development. McCarthy's advice is that everyone has something to contribute. It's easy and tempting, when someone ticks you off or is mistaken (or both), to simply disregard all their input in the future by setting the “bozo flag” to TRUE for that person. But by taking the lazy way out, you poison your interactions with other people and can never again avail yourself of help from the “bozo”.

“Don’t flip the bozo bit” is related to low-concern, high-impact, and can be found in almost all cases of high-impact damage caused by untested changes.

In summary

By being aware of anti-design patterns, you can improve study monitoring performance and GCP compliance.

The key to achieving that is improving your awareness of the meta-processes you use: how you solve problems, how you react to issues and how you respond to annoying (and possibly mistaken) colleagues and customers.

Why the CRO model for medical device clinical trials is broken and how to fix it


Who said: ‘If you are not part of the solution, you must be part of the problem’?

This appears to be a misquotation of Eldridge Cleaver, author of the 1968 book “Soul on Ice” and early leader of the Black Panthers. The correct (full) quote is: ‘There is no more neutrality in the world. You either have to be part of the solution, or you’re going to be part of the problem.’

“CROs play a crucial role in operating clinical trials and meeting milestones. However, effectively managing CROs by having continuous visibility on their work and the quality of the data they provide remains a large concern for Clinical Operations executives.”

This is a quote taken from a marketing email from Comprehend – a cloud software company that is focused on central monitoring. They’ve developed outstanding software; I’m on the mailing list and I enjoy reading the customer success stories.

Onsite monitoring performed by CROs accounts for 20-30 percent of total study cost and delivers 1-2 percent actionable items. Calling the quality of the data they provide a large concern for Clinical Operations executives is an understatement.

If data security worked like this, IT managers would be paying outsourcing companies like HP and IBM 20-30% of their total IT budget to monitor their networks, and getting only 1-2% actionable intelligence in return. I don't think so. There is not a single IT manager in the world who would accept such abysmal performance at such high prices.
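To put those numbers in perspective, here is a back-of-the-envelope calculation. Only the percentages come from the figures above; the total study budget and the number of monitored findings are assumed for illustration:

```python
# Assumed figures (hypothetical), using the midpoints of the ranges above.
study_cost = 10_000_000       # USD, assumed total study cost
monitoring_share = 0.25       # onsite monitoring: 20-30% of total, midpoint
actionable_rate = 0.015       # 1-2% of monitored items actionable, midpoint
findings_reviewed = 10_000    # assumed number of monitored items

monitoring_cost = study_cost * monitoring_share
actionable_items = findings_reviewed * actionable_rate

print(f"monitoring spend: ${monitoring_cost:,.0f}")                        # $2,500,000
print(f"actionable findings: {actionable_items:.0f}")                      # 150
print(f"cost per actionable finding: ${monitoring_cost / actionable_items:,.0f}")
```

Under these assumptions the sponsor pays on the order of $16,000-$17,000 for every actionable finding the monitoring process produces.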

Hmm. So let me get this straight.

CROs are delivering abysmally low-quality service – yet we are calling this an oversight and visibility issue.

Why? Why should we paper over a broken system with software?

The CRO business model is broken.

Where entire industries have undergone business process re-engineering over the past 30 years, clinical trials operations are stuck in the 80s with goofy, manual onsite monitoring, and sponsors are being held hostage by CROs. This is reminiscent of the IT service bureau model of the 70s and 80s, which evolved into today's software-as-a-service-for-anything model.

Several years ago, our business unit that specializes in medical device security in Israel engaged with a client running a large multi-center trial. We had a lot of interaction with the CRO, and when I asked the CRO General Manager (rather naively, I suppose) why they were not doing remote monitoring of data from their EDC system, she told me she had given a talk about RBM at a conference the previous year – but she didn't seem to have a good reason for not actually implementing RBM in this particular study, which had 30 hospitals in the trial.

I pondered the reason for that and realized that human, manual, onsite monitoring was an incredibly lucrative business for them. They had no economic incentive to use better technology. Good for the CRO. Bad for the sponsor, who pays high prices and gets low-quality results.

Oversight is not a replacement for re-engineering the system.

CRO oversight is not a replacement for re-engineering the system, which I believe will require more than oversight dashboards. As we wrote here, communication with the sites is a critical component of a study's performance and, ultimately, its success. In addition, cloud remote monitoring technologies and dynamic methods of risk assessment can help improve safety and get results faster by saving avoidable rework after data lock.

Productivity tools must result in better prices and results.

Good quality data with real-time alerts and collaboration that help teams work better together are great productivity tools that should enable the CRO to deliver much higher quality results at much lower prices.

Fifteen years ago I was a business unit manager at a large group of companies. One of the general managers pitched the group CEO an idea to create cross-functional groups, share high-quality data and improve collaboration to help teams work better together.

The CEO said: “The best synergy is when each person does their job.”

If sponsors told their CROs that the target is to reduce monitoring prices by 50% and improve the rate of acquiring actionable intelligence from the sites by 5X (5% instead of 1% should not be a challenge), the industry would start going through a phase transition.

Just the way FedEx, IBM and Amazon changed the way we do business today, it’s time that sponsors started standing up for better service and lower prices.