A better tomorrow for clinical trials

A better tomorrow – Times of crisis usher in new mindsets

By David Laxer. Spoken from the heart.

In these trying days, as we adjust to new routines and discover new things about ourselves daily, we are also reminded that the human spirit is stronger than any pandemic and we have survived worse.

And because we know we’re going to beat this thing, whether in 2 weeks or 2 months, we also know that we will eventually return to normal, or rather, a new normal.

In the meantime, the world is showing a resolve and a resilience that gives us much room to hope for a better tomorrow for developing new therapeutics.

However, these days have got us wondering how things might have looked if clinical trials were conducted differently. It’s a well-known fact that clinical trials play an integral role in the development of new, life-saving drugs, but bringing a drug through trials to FDA approval takes an average of 7.5 years and costs anywhere between $150M and $2B per drug.

Reasons for failure

Many clinical studies still use outdated methods for data collection and verification: they still use fax machines, for crying out loud. They continue to manually count leftover pills in bottles and still rely on patients’ diary entries to ensure adherence.

Today, the industry faces new challenges in recruiting enough participants, as COVID-19 forces people to stay at home and away from research hospital sites.

Patient drop-outs, adverse events and delayed recording of adverse events are still issues for pharma and medical device companies conducting clinical research. The old challenge of creating interpretable data to examine the safety and efficacy of new therapeutics remains.

The Digital Revolution:

As hard as it is to believe, the clinical trial industry just might be the last major industry to undergo digital transformation.

Every other aspect of modern life has already been digitized, from banking to accounting to education. Now, more than ever, is the time to accelerate the transition of this crucial process, especially as we are painfully reminded of the need to find a vaccine. Time is not a resource we can waste any longer.

Re-imagining the future

When we created FlaskData we were driven primarily by our desire to disrupt the clinical trial monitoring paradigm and bring it into the 21st century — meaning real-time data collection and automated detection and response. From the beginning we found fault in the fact that clinical trials were, and still are, overly reliant on manual processes, causing unacceptable delays in bringing new and essential drugs and devices to market. These delays, as we are reminded in these days, not only cost money and time; ultimately they cost lives.

To fully achieve this digitization it’s important to create a secure cloud service that can accelerate the entire process and provide sponsors with an immediate picture and interpretable data, without having to spend 6-12 months cleaning data. This is achieved with real-time data collection, automated detection and response, and an open API that enables any healthcare application to collect clinical-trial-grade data and assure patient adherence to the clinical protocol.
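As a rough illustration of what collecting data through such an open API could look like, here is a minimal sketch of pushing a single ePRO observation into the trial database the moment it is captured. The endpoint, payload fields and token are hypothetical placeholders, not FlaskData’s actual API.

```python
# Hypothetical sketch only: the endpoint, payload fields and token are
# illustrative placeholders, not FlaskData's actual API.
import requests

API_URL = "https://api.example-edc.com/v1/observations"  # placeholder endpoint
TOKEN = "YOUR_API_TOKEN"

observation = {
    "study_id": "STUDY-001",
    "subject_id": "SUBJ-0042",
    "form": "daily_diary",
    "data": {"pain_score": 3, "dose_taken": True},
    "captured_at": "2020-04-01T08:30:00Z",
}

response = requests.post(
    API_URL,
    json=observation,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()  # the record is in the trial database the moment it is captured
```

Nothing in a flow like this involves paper, transcription or a batch upload.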

Our Promise:

It didn’t take a virus to make us want to deliver new medical breakthroughs into the hands that need them most, but it has definitely made us double down on our resolve to see it through. The patient needs to be placed at the center of the clinical research process and we are tasked to reduce the practical, geographical and financial barriers to participation. The end result is a more engaged patient, higher recruitment and retention rates, better data and reduced study timelines and costs.

The Need For Speed

As the world scrambles to find a vaccine for the coronavirus, we fully grasp two key things: 1) focus on patients, and 2) give clinical operations teams the ability to eliminate inefficiencies and move at lightning speed. In these difficult times there is room for optimism, as it is crystal clear just how important it is to speed up the process.


Social Distancing

In this period of social distancing, we can only wonder about the benefits of conducting clinical trials remotely. We can only imagine how many trials have been rendered useless as patients, reluctant to leave their houses, have skipped the required monitoring visits, forgotten to take their pills, and lost their diary entries amidst the chaos.

With a fully digitized process for electronic data collection, social distancing would have no effect on the clinical trial results.

About David Laxer

David is a strategist and story-teller. He says it best – “Ultimately, when you break it down, I am a storyteller and a problem solver. The kind that companies and organizations rely on for their brand DNA, culture and long-lasting reputation”.


Reach out to David on LinkedIn

So what’s wrong with 1990s EDC systems?

Make no mistake about it: the EDC systems of 2020 are using a 1990s design. (OK – granted, there are some innovators out there like ClinPal with their patient-centric trial approach.) But the vast majority of today’s EDC systems, from OmniComm to Oracle to Medidata to Medrio, are using a 1990s design. Even the West Coast startup Medable is going the if-you-can’t-beat-them-join-them route, fielding the usual alphabet soup of buzzword-compliant modules – ePRO, eSource, eConsent etc. Shame on you.

Instead of using in-memory databases for real-time clinical data acquisition, we’re fooling around with SDTM and targeted SDV.

When in reality, SDTM is a standard for submitting tabulated results to regulatory authorities (not a transactional database, nor an appropriate data model for time series). And even more to the point – we should not be doing SDV to begin with, so why do targeted SDV if not to perpetuate the CRO billing cycle?

Freedom from the past comes from ridding ourselves of the clichés of today.


Personally, I don’t get it. Maybe COVID-19 will force a change in the paper-batch-SDTM-load-up-the-customer-with-services system.

So what is wrong with 1990s EDC?

The really short answer is that computers do not have two kinds of storage any more.

It used to be that you had the primary store, and it was anything from acoustic delay lines filled with mercury, via small magnetic doughnuts and transistor flip-flops, to dynamic RAM.

And then there was the secondary store: paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines, and these days so small that you mistake them for the MP3 player in your pocket.

And people still program their EDC systems this way.

They have variables on paper forms that site coordinators fill in and then, 3-5 days later, enter into suspiciously paperish-looking HTML forms.

For some reason, instead of making a great UI for the EDC, a whole group of vendors gave up and created a new genre called eSource, creating immense confusion as to why you need another system anyhow.

What the guys at Gartner euphemistically call a highly fragmented and non-integrated technology stack.
What the site coordinators who have to deal with 5 different highly fragmented and non-integrated technology stacks call a nightmare.

Awright.

Now we have some code – in Java or PHP or maybe even .NET – that reads the variables from the form and puts them into variables in memory.

Now we have variables in “memory” and move data to and from “disk” into a “database”.

I like the database thing – where clinical people ask us – “so you have a database”. This is kinda like Dilbert – oh yeah – I guess so. Mine is a paradigm-shifter also.

Anyhow, today computers really only have one kind of storage, and it is usually some sort of disk; the operating system and the virtual memory management hardware have turned the RAM into a cache for the disk storage.

The database process (say Postgres) allocates some virtual memory and tells the operating system to back this memory with space from a disk file. When it needs to send an object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If/when the kernel decides it needs the RAM for something else, the page gets written to the backing file and the RAM page is reused elsewhere.
The next time Postgres refers to that virtual memory, the operating system finds a RAM page, possibly freeing another, and reads the contents back in from the backing file.

And that’s it.
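For readers who want to see the one-kind-of-storage idea in code, here is a minimal sketch using Python’s standard mmap module: map a file into virtual memory, treat it like ordinary memory, and let the kernel decide which pages live in RAM and which live on disk. The file name is just an example.

```python
# Minimal sketch of the single-storage idea: map a file into virtual memory and
# let the kernel decide what stays in RAM and what gets paged out to disk.
import mmap

with open("trial_data.bin", "r+b") as f:
    buf = mmap.mmap(f.fileno(), 0)   # map the whole file, backed by the page cache
    record = buf[0:128]              # touching a page faults it into RAM if needed
    buf[0:4] = b"DONE"               # dirtied pages are written back by the kernel
    buf.flush()                      # or force the write-back explicitly
    buf.close()
```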

Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on.
And maybe, with COVID-19 and sites getting shut down, people will catch on to the value of a really nifty user interface for – GASP – THE SITE COORDINATORS and, even more AMAZING, a single in-memory database for ALL the data from patients, investigators and devices.

Because at the end of the day – grandma knows that there ain’t no reason not to have a single data model for everything and just shove it into virtual memory for instantaneous, automated DATA QUALITY, PATIENT SAFETY AND RISK ASSESSMENT in real-time.

Not 5-12 weeks later at a research site visit, or a month later after the data management trolls in the basement send back some reports with queries, and certainly not after spending 6-12 months cleaning up unreliable data due to the incredibly stupid process of paper to forms to disk to queries to site visits to data managers to data cleaning.

I love being a CRA, but the role as it exists today is obsolete.

I think that COVID-19 will be the death knell for on-site monitoring visits and SDV. My prediction for 2020 and the next generation of clinical research: mobile EDC for sites and patients, and device integration that just works.

I’m neither a clinical quality consultant nor a management consultant. I cannot tell a CRO not to bill out hours for SDV and CRA travel that inflate the study budget by 25-30% and delay results by 12-18 months.

Nope. I’m not gonna tell CROs what to do. Darwin will do that for me.

I develop and support technology to help life science companies get to market faster. I want to save lives by shortening the time to complete clinical trials for COVID-19 vaccines and treatments by 3-6 months.

I want to provide open access to research results – for tomorrow’s pandemic.

I want to enable real-time data sharing.

I want to enable participants in the battle with COVID-19 to share real-world and placebo-arm data, making the fight with COVID-19 more efficient and collaborative, and laying the infrastructure for the next wave of pandemics.

I want to provide real-time data collection for hospitals, patients and devices, and to use AI-driven detection of protocol violations with automated response so researchers can dramatically improve data reliability, allowing better decision-making and improving patient safety.
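To make "automated detection and response" concrete, here is a deliberately simplified, rule-based sketch of checking an incoming patient record against protocol limits. The field names and thresholds are illustrative assumptions, not a real protocol or FlaskData’s detection logic.

```python
# Simplified, rule-based stand-in for automated detection and response:
# flag protocol deviations as each ePRO record arrives, not weeks later.
# Field names and thresholds are illustrative, not a real protocol.
from datetime import datetime, timedelta

def detect_deviations(record, protocol, now=None):
    """Return a list of deviation descriptions for one incoming patient record."""
    now = now or datetime.utcnow()
    deviations = []
    if record["pain_score"] > protocol["max_pain_score"]:
        deviations.append("possible adverse event: pain score above threshold")
    if not record["dose_taken"]:
        deviations.append("missed dose")
    last_entry = datetime.fromisoformat(record["captured_at"])
    if now - last_entry > timedelta(hours=protocol["max_entry_gap_hours"]):
        deviations.append("diary entry overdue")
    return deviations

protocol = {"max_pain_score": 7, "max_entry_gap_hours": 36}
record = {"pain_score": 8, "dose_taken": False, "captured_at": "2020-04-01T08:30:00"}

for deviation in detect_deviations(record, protocol):
    print("ALERT:", deviation)   # in practice: notify the site coordinator or CRA
```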

The FDA (a US government regulatory bureaucracy) told the clinical trial industry 10 years ago to use eSource and modern IT. If the FDA couldn’t make it happen, then maybe survival of the fittest and COVID-19 will do the job.

FDA’s Guidance for Industry: Electronic Source Data in Clinical Investigations, says, in part:
“Many data elements (e.g., blood pressure, weight, temperature, pill count, resolution of a symptom or sign) in a clinical investigation can be obtained at a study visit and can be entered directly into the eCRF by an authorized data originator. This direct entry of data can eliminate errors by not using a paper transcription step before entry into the eCRF. For these data elements, the eCRF is the source. If a paper transcription step is used, then the paper documentation should be retained and made available for FDA inspection.”

I loved this post by Takoda Roland on the elephant in the room.

Source data validation can easily account for more than 80% of a monitor’s time. You go on site (or get a file via Dropbox). Then you page through hundreds of pages of source documents to ensure nothing is missing or incomplete. You check the bare minimum amount of data before rushing off to catch your flight, only to do it all again tomorrow in another city, and you are struck with this thought: I love being a CRA, but the role as it exists today is obsolete.

Opinion: A Futurist View on the Use of Technology in Clinical Trials


Competitive buzzwords in EDC companies

We recently did a presentation to a person at one of the big 4 pharma companies. His job title was:

Senior IT Project Manager Specialized in Health IT.

I looked at the person’s LinkedIn profile before the call and noticed that the title is in the past tense – “Specialized in Health IT” – implying that he is now a Senior IT manager who no longer specializes in anything.

I have a friend who worked at Pfizer in IT. He was discouraged by pharma IT mediocrity, especially when he compared it to the stellar talent in the R&D departments.

So it stands to reason that the EDC vendors are just a notch up the technology ladder from the pharma IT guys. If you do not have a unique technology value proposition, you have to resort to marketing collateral gymnastics.

To test this hypothesis, I took a look at the websites of 4 EDC vendors: Medidata, Medrio, OmniComm and Oracle Life Sciences.

Medidata

Run Your Entire Study On A Unified, Intelligent Platform Built On Life Science’s Largest Database.

At Medidata, we’re leading the digital transformation of clinical science, so you can lead therapies to market faster, and smarter. Using AI and advanced analytics, our platform brings data managers, clinical operations, investigators, and patients together to accelerate the science and business of research.

Medidata is making a disturbing suggestion in their marketing collateral: that they leverage other companies’ trial data in their Life Science Database to help you lead therapies to market faster.

Medrio

Clinical trial data collection made easy. The industry’s leading early-phase EDC and eSource platform.

The only EDC vendor that actually admitted to being an EDC vendor was Medrio. You have to give them a lot of credit for honesty.

OmniComm

eClinical Solutions for Patient-Centric Clinical Trials
Effective Clinical Tools Driving Excellence in Life Science Research

Software has the power to save lives. OmniComm Systems understands that power and delivers eClinical solutions designed to help life science companies provide crucial medical treatments and therapies to patients around the globe.

OmniComm Systems fills a role in enhancing patient lives by shortening the time-to-market of essential life-saving treatments. Our eClinical suite of products includes electronic data capture (EDC) solutions, automated coding and randomization systems, risk-based monitoring (RBM) and analytics.

This is nice positioning, but it makes you wonder when OmniComm turned into a healthcare provider of crucial medical treatments and therapies to patients around the globe.

Oracle Life Sciences

Oracle Life Sciences—Reimagining What’s Possible

Innovation in science and medicine demands new technology, and innovation in technology makes new things possible in science and medicine. Oracle is equipping the life sciences industry today, for the clinical trials of tomorrow.

Solutions Supporting the Entire Clinical Development Lifecycle

Oracle Health Sciences helps you get therapies to market faster and detect risks earlier. Oracle offers a complete set of clinical and safety solutions that support critical processes throughout the clinical development lifecycle—from study design and startup to conduct, close-out, and post-marketing.

SOLUTIONS
Oracle Health Sciences Clinical One cloud environment changes the way clinical research is done—accelerating all stages of the drug development lifecycle by eliminating redundancies, creating process efficiencies, and allowing the sharing of information across functions.

Unlike OmniComm and Medidata, Oracle is firmly focused on the clinical development lifecycle, not pretending to be a healthcare provider or to leverage the patient data in their EDC databases.

Flaskdata.io

Helping life-science C-suite teams outperform their competitors.

Patient compliance is critical to the statistical power and patient retention of a study.

We help senior management teams complete studies and submission milestones faster and under budget. We do this by providing EDC, ePRO and integration of connected medical devices into a single data flow. We then automate detection of and response to patient compliance deviations in clinical trials, 100x faster than current manual monitoring practices.


The gap between the proletariat and Medidata (or should I say Dassault)

We need a better UX before [TLA] integration

The sheer number and variety of eClinical software companies and buzzwords confuses me.
There are EDC, CTMS, IWRS, IVRS, IRT, eSource, eCOA, ePRO and a bunch more TLAs.
For the life of me I do not understand the difference between eCOA and ePRO, and why we need 2 buzzwords for patient reporting.

Here is marketing collateral from a CRO. As you will see, they miss the boat on all the things that are important for site coordinators and study monitors.

We adapt responsively to change in your clinical trial to minimize risk and drive quality outcomes. Clinical research is complicated and it’s easy to get off track due to inexperienced project leaders, inflexible workflows, or the failure to identify risks before they become issues. We derive expert insights from evidence-based processes and strategic services to be the driving force behind quality outcomes, including optimized data, patient safety, reduced time-to-market, and operational savings.

What CRCs and CRAs have to say about the leading eClinical solutions

I recently did an informal poll on Facebook of what problems the CRA/CRC proletariat have to deal with on the job.

I want to thank Tsvetina Dencheva for helping me grok and distill people’s complaints into 3 central themes.

Theme no. 1 – enter data once

Enable administrators to enter data once and have their authorized user lists, sites and metrics update automatically, without all kinds of double and triple work and fancy import/export footwork between different systems. Failing a single place to manage everything, at least provide better integration between the EDC and the CTMS.
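As a toy illustration of the enter-data-once idea: one new-site event fans out to every interested downstream system instead of being re-keyed into each. The system names here are placeholders.

```python
# Toy illustration of "enter data once": a new-site event fans out to every
# downstream system instead of being re-keyed into each. System names are placeholders.
subscribers = []

def on_new_site(handler):
    subscribers.append(handler)
    return handler

def create_site(site):
    for handler in subscribers:
        handler(site)            # CTMS, TMF and EDC all receive the same single entry

@on_new_site
def add_to_ctms(site):
    print("CTMS: added site", site["name"])

@on_new_site
def add_to_tmf(site):
    print("TMF: created folder for", site["name"])

@on_new_site
def add_to_edc(site):
    print("EDC: provisioned", site["name"])

create_site({"name": "Site 042", "country": "DE"})
```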

The IT guys euphemistically call this problem information silos. I’ve always thought that they used the word silos (which are used to store animal feed) as a way of identifying with people who farm, without actually having to get their hands dirty by shovelling silage (which is really smelly, by the way).

I understand the rationale for having a CTMS and an EDC about as much as I understand the difference between eCOA and ePRO.

Here is some raw data from the informal Facebook survey

If I enter specific data, it would be great if there’s an integrated route to all fields connected to the said data. An easy example is – if I enter a visit, it transfers to my time sheet.

Same goes to contact reports. Apps! All sorts of apps, ctms, verified calculators, edc, ixrs, Electronic TMF. The list goes on and on. How could I forget electronic training logs? Electronic all sorts of log.

There are a lot of things we do day to day that are repetitive and can take away from actually moving studies forward. Thinking things like scanning reg docs, auto capturing of reg doc attributes (to a point), and integration to the TMF. Or better system integration, meaning where we enter a single data point (ie CTMS) and flowing to other systems (ie new site in CTMS, create new site in TMF. Enrolment metrics from EDC to CTMS) and so on.

If only the f**ing CTMS would work properly.

Theme number 2 – single sign-on.

The level of frustration with having to log in to different systems is very high. The ultimate solution is to use social login – just log in to the different systems with your Google account and let Google/Firebase authenticate your identity.
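Here is a rough sketch of what the server side of that could look like, assuming the browser obtains a Google/Firebase ID token and the backend merely verifies it. A sketch of the pattern, not a recommendation of any particular stack.

```python
# Sketch of server-side verification for social login: the browser signs the user
# in with Google, and the backend verifies the resulting ID token via Firebase.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is configured for the Firebase project.
import firebase_admin
from firebase_admin import auth

firebase_admin.initialize_app()

def authenticate(id_token):
    """Verify a Google/Firebase ID token and return the user's UID."""
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```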

Theme number 3 – data integrity

EDC edit-check development eats up a lot of time and, when poorly designed, generates thousands of queries. Not good.

There is a vision of an EDC that understands the data semantics from the context of the study protocol.

This is a very cool and advanced notion.

One of the study monitors put it like this:

The EDC should be smart enough to identify nonsense without having to develop a bunch of edit checks each time and have to deal with queries.

The EDC should be able to calculate if a visit is in a proper time window, or if imaging is in a proper time window. Also for oncology if RECIST 1.1 is used, then the EDC should be able to calculate: Body Surface Area, correct dosing based on weight and height of a patient, RECIST 1.1 tumor response and many other things that simply can be calculated.
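Much of that is plain arithmetic the system could run at data entry. Here is a minimal sketch using the Mosteller formula for body surface area, BSA-based dosing and a simple visit-window check; the window and dosing parameters are illustrative.

```python
# Minimal sketch of "calculations the EDC could just do": Mosteller BSA,
# BSA-based dosing and a visit-window check. Parameters are illustrative.
from datetime import date, timedelta
from math import sqrt

def body_surface_area(height_cm, weight_kg):
    """Mosteller formula: BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return sqrt(height_cm * weight_kg / 3600)

def dose_mg(bsa_m2, mg_per_m2):
    """Dose for a BSA-based regimen, e.g. 75 mg/m^2."""
    return bsa_m2 * mg_per_m2

def visit_in_window(baseline, visit, target_day, window_days):
    """Is the visit within +/- window_days of the protocol-defined target day?"""
    target = baseline + timedelta(days=target_day)
    return abs((visit - target).days) <= window_days

bsa = body_surface_area(170, 70)                    # ~1.82 m^2
print(round(dose_mg(bsa, 75), 1))                   # ~136.4 mg for a 75 mg/m^2 regimen
print(visit_in_window(date(2020, 3, 1), date(2020, 3, 30), 28, 3))  # True
```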

About flaskdata.io

We specialise in faster submission for connected medical devices. We can shorten your time to market by 9-12 months with automated patient compliance detection and response.

Call us and we’ll show you how. No buzzwords required.

What real-time data and Risk-based monitoring mean for your CRO

A widely neglected factor in cost-effective risk-based clinical trial monitoring is availability and accessibility of data.

RBM methods used by a central clinical trial monitoring operation that receives stale data (any data from patients that is more than a day old is stale) are ineffective. Every day that goes by without updated data from patients, devices and investigators reduces the relevance and efficacy of remote monitoring.

Real-time data is a sine qua non for RBM.

Sponsors and Contract research organizations (CROs) should therefore approach real-time data and risk-based monitoring (RBM) as 2 closely related priorities for executing clinical trials. Use of modern data technologies for real-time data collection and remote risk-based monitoring will reduce non-value added rework, people and paper in clinical trials and help speed up time to statistical report.
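As a concrete illustration of the staleness rule above (anything more than a day old), a central monitoring dashboard could flag subjects like this. A sketch, not a product feature.

```python
# Flag stale patient data for a central RBM team: anything more than a day old,
# per the definition above. A sketch, not a product feature.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=1)

def is_stale(last_data_point, now=None):
    now = now or datetime.now(timezone.utc)
    return now - last_data_point > STALE_AFTER

last_seen = datetime(2020, 3, 30, 9, 0, tzinfo=timezone.utc)
print(is_stale(last_seen, now=datetime(2020, 4, 1, 9, 0, tzinfo=timezone.utc)))  # True
```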


The 3 tenets for designing a clinical data management system

Abstract:
This post reviews the importance of 1) proper study design, 2) good data modeling and 3) realistic estimation of project timetables. The article concludes with a discussion of eSource and attempts to dispel some of the myths, including the claim that DIY EDC study builds save time (they don’t).

Enjoy!

The trend of DIY: good for EDC vendors, less good for sponsors

The trend for small studies/IIS (investigator-initiated studies) is to use cloud EDC applications that enable end-users to build eCRFs and edit checks using a graphical user interface. This so-called DIY (do-it-yourself) approach is used by most cloud EDC vendors, such as Medrio and ClinCapture, as a way of lowering their barriers to entry to the market.

However – what is good for vendors (lowered barriers to entry) is not necessarily good for sponsors (faster time to market of their innovative drug or medical device).


Millennials are the future of clinical trial data management


Millennials, born between 1980 and 2000 and the first native generation of the digital age, are quickly becoming the mainstay of the modern workforce. Whether in the private or the public sector, Millennials will soon make up the bulk of the global workforce.

At present, Millennials represent 34% of the US workforce (up from 25% in 2015), and by 2020, 50% of workers will be of the Millennial generation. As the demographics of job seekers continue to shift, companies need to adjust their culture, facilities and technology to cater to the new generation.

In the clinical trial industry, Millennials are not only the next generation of data managers and monitors, but will soon make up the bulk of the study subjects as well.

Choosing the right tools and UX for Millennial subjects becomes acute when you consider usability factors and patient compliance issues for people under 30.


The great ripoff of SDV in medical device studies


Question

Are you still doing 100% SDV in your medical device clinical trial?

Here are some facts from medical device clinical trials:

30% of your study budget is for monitoring and 50% of monitoring is for SDV

For a $1M study – you are spending $150k for SDV.

What do you get for your $150K?

Check the EDC and see what percent of the data was updated due to SDV activity. A study we did with 20 medical device studies shows that it is less than 1/3 of 1% for connected medical devices using ePRO (the endpoint/patient compliance data is collected electronically).

Your return on investment was $525.
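Stated as explicit arithmetic (the 1/3-of-1% figure lands around $500, the same order of magnitude as the $525 quoted above):

```python
# The arithmetic behind the numbers above, stated explicitly.
study_budget = 1_000_000
monitoring = 0.30 * study_budget        # 30% of the budget  -> $300,000
sdv = 0.50 * monitoring                 # half of monitoring -> $150,000
fraction_changed = (1 / 3) / 100        # "less than 1/3 of 1%" of the data updated
value_returned = sdv * fraction_changed # roughly $500 of corrected data
print(int(sdv), round(value_returned))  # 150000 500
```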

What does Risk-Based Monitoring mean for CROs?

It basically means a golden opportunity to shake down the customer.


How to overcome 5 eSource implementation challenges

Jenya wrote a piece about the challenges of clinical trial operations change management for regulatory people who have to work with medical technology developers, and I just had to write my own intro.

Frankly, it’s easier to talk about change for other people than for yourself. A lot easier. I have written here, here and here about the gaps between the stakeholders in medical device clinical trials – security, IT, engineering, product marketing, regulatory affairs and medical device security, to name a few.

Overlook change management at your own risk

Change management is a topic usually overlooked when medtech companies implement cloud EDC, introduce medical IoT for collecting data directly from patients, and use electronic source documents for their connected device or mobile medical app clinical trial.

In this post, Jenya talks about how to manage change during the transition from traditional medical device clinical trial data management to cloud technologies, remote monitoring, medical IoT and electronic source data.

So enjoy.

Danny
