10 free ways to reduce risk in your medical device clinical trial

You cannot outsource quality in your medical device clinical trial

Collecting low-quality data means that your trial is likely to fail. You will not be able to prove or disprove the scientific hypothesis of your medical device clinical trial. You will have wasted your time.

You cannot outsource quality; you have to build it into the trial design.


The LA Freeway model of clinical monitoring

A freeway paradigm helps explain why onsite visits by study monitors don’t work, and helps us plan and implement an effective system for protocol compliance monitoring – all sites, all data, all the time – that saves time and money.

But first – let’s consider some special aspects of clinical trial data:

Clinical trial data is highly dimensional data.

Clinical trial data is not “big data” but it is highly-dimensional in terms of variables or features of a particular subject.

Highly dimensional data is often found in biology; a common example is gene sequencer output, where there are often tens of thousands of genes (features) but only tens to hundreds of samples.

In a medical device clinical trial, there may be thousands of features but only tens of subjects.

Traditional protocol compliance monitoring uses on-site visits and SDV (source document verification), which require visual processing of the information at the “scene”. Since the amount of visual information available at the scene is enormous, a person processes only a subset of the scene.

Humans focus on the interesting facets of a scene, ignoring the rest. Selective attention theory explains this.

Selective attention.

Selective attention is a cognitive process in which a person attends to one or a few sensory inputs while ignoring the other ones.

Selective attention can be likened to the manner by which a bottleneck restricts the flow rate of a fluid.

The bottleneck doesn’t allow the fluid to enter the body of the bottle all at once; rather, it lets the fluid enter in certain amounts depending on the flow rate, until all of it has entered the bottle’s body.

Selective attention is necessary for us to attend consciously to sensory stimuli without experiencing sensory overload. See the Wikipedia article on attenuation theory.


So what’s wrong with 1990s EDC systems?

Make no mistake about it: the EDC systems of 2020 use a 1990s design. (OK – granted, there are some innovators out there, like ClinPal with their patient-centric trial approach, but the vast majority of today’s EDC systems, from OmniComm to Oracle to Medidata to Medrio, use a 1990s design.) Even the West Coast startup Medable is going the if-you-can’t-beat-them-join-them route, fielding the usual alphabet soup of buzzword-compliant modules – ePRO, eSource, eConsent etc. Shame on you.

Instead of using in-memory databases for real-time clinical data acquisition, we’re fooling around with SDTM and targeted SDV.

In reality, SDTM is a standard for submitting tabulated results to regulatory authorities (not a transactional database, nor an appropriate data model for time series). And even more to the point – we should not be doing SDV at all, so why do targeted SDV, if not to perpetuate the CRO billing cycle?

Freedom from the past comes from ridding ourselves of the clichés of today.

 

Personally – I don’t get it. Maybe COVID-19 will force a change in the paper-batch-SDTM-load-up-the-customer-with-services system.

So what is wrong with 1990s EDC?

The really short answer is that computers do not have two kinds of storage any more.

It used to be that you had the primary store, and it was anything from acoustic delay lines filled with mercury, via small magnetic doughnuts and transistor flip-flops, to dynamic RAM.

And then there was the secondary store: paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines, and these days so small you hardly notice them in your pocket.

And people still program their EDC systems this way.

They have variables in paper forms that site coordinators fill in on paper and then 3-5 days later enter into suspiciously-paperish-looking HTML forms.

For some reason – instead of making a great UI for the EDC – a whole group of vendors gave up and created a new genre called eSource, creating immense confusion about why you need yet another system.

What the guys at Gartner euphemistically call a highly fragmented and non-integrated technology stack.
What the site coordinators, who have to deal with 5 different highly fragmented and non-integrated technology stacks, call a nightmare.

Awright.

Now we have some code – in Java or PHP or maybe even .NET – that reads the variables from the form and puts them into variables in memory.

Now we have variables in “memory” and move data to and from “disk” into a “database”.

I like the database thing – where clinical people ask us – “so you have a database”. This is kinda like Dilbert – oh yeah – I guess so. Mine is a paradigm-shifter also.

Anyhow, today computers really only have one kind of storage, and it is usually some sort of disk; the operating system and the virtual memory management hardware have converted the RAM into a cache for the disk storage.

The database process (say Postgres) allocates some virtual memory and tells the operating system to back this memory with space from a disk file. When it needs to send the object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If and when the kernel decides it needs the RAM for something else, the page gets written to the backing file and the RAM page is reused elsewhere.
The next time Postgres refers to that virtual memory, the operating system finds a RAM page, possibly freeing another, and reads the contents back in from the backing file.

And that’s it.
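To make the single-storage idea concrete, here is a minimal sketch using Python’s standard-library mmap module (the file name and sizes are arbitrary): a disk file is mapped into memory, written through an ordinary buffer assignment, and the kernel handles the paging.

```python
import mmap
import os
import tempfile

# Treat a disk file as ordinary memory, the way a database process
# backs its virtual memory with a file. The kernel pages bytes in and
# out transparently; the program just reads and writes the buffer.

path = os.path.join(tempfile.mkdtemp(), "backing.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # reserve one page on disk

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)  # map the file into memory
    buf[0:12] = b"hello, pages"        # a plain memory write...
    buf.flush()                        # ...persisted by the kernel

with open(path, "rb") as f:
    print(f.read(12))  # the write is visible in the file itself
```

The program never issues an explicit write call for the data; the page cache does the bookkeeping, which is exactly the trick a database like Postgres relies on.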

Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on.
Maybe, with COVID-19 and sites getting shut down, people will catch on to a really nifty user interface for – GASP – the site coordinators and, even more amazing, a single in-memory database for ALL the data from patients, investigators and devices.

Because at the end of the day – grandma knows that there ain’t no reason not to have a single data model for everything and just shove it into virtual memory for instantaneous, automated data quality, patient safety and risk assessment in real time.

Not 5-12 weeks later after a site visit, or a month later after the data management trolls in the basement send back some reports with queries, and certainly not after spending 6-12 months cleaning up unreliable data due to the incredibly stupid process of paper to forms to disk to queries to site visits to data managers to data cleaning.

I love being a CRA, but the role as it exists today is obsolete.

I think that COVID-19 will be the death knell for on-site monitoring visits and SDV. Prediction for 2020 and the next generation of clinical research: mobile EDC for sites, patients and device integration that just works.

I’m neither a clinical quality nor a management consultant. I cannot tell a CRO not to bill out hours for SDV and CRA travel that inflate the study budget by 25-30% and delay results by 12-18 months.

Nope.   I’m not gonna tell CROs what to do.    Darwin will do that for me.

I develop and support technology to help life science companies go faster to market.  I want to save lives by shortening time to complete clinical trials for COVID-19 vaccine and treatments by 3-6 months.

I want to provide open access to research results – for tomorrow’s pandemic.

I want to  enable real-time data sharing.

I want to enable participants in the battle with COVID-19 to share real-world / placebo arm data, making the fight with COVID-19 more efficient and collaborative, and laying the infrastructure for the next wave of pandemics.

I want to provide real-time data collection for hospitals, patients and devices, and use AI-driven detection of protocol violations and automated response to enable researchers to dramatically improve data reliability, allowing better decision making and improving patient safety.

The FDA (a US government regulatory bureaucracy) told the clinical trial industry 10 years ago to use eSource and modern IT. If the FDA couldn’t make it happen, maybe survival of the fittest and COVID-19 will do the job.

FDA’s Guidance for Industry: Electronic Source Data in Clinical Investigations, says, in part:
“Many data elements (e.g., blood pressure, weight, temperature, pill count, resolution of a symptom or sign) in a clinical investigation can be obtained at a study visit and can be entered directly into the eCRF by an authorized data originator. This direct entry of data can eliminate errors by not using a paper transcription step before entry into the eCRF. For these data elements, the eCRF is the source. If a paper transcription step is used, then the paper documentation should be retained and made available for FDA inspection.”

I loved this post by Takoda Roland on the elephant in the room.

Source data validation can easily account for more than 80% of a monitor’s time. You go on site (or get a file via Dropbox). Then you page through hundreds of pages of source documents to ensure nothing is missing or incomplete. You check the bare minimum amount of data before rushing off to catch your flight, only to do it all again tomorrow in another city. And you are struck by this thought: I love being a CRA, but the role as it exists today is obsolete.

Opinion: A Futurist View on the Use of Technology in Clinical Trials

 

Using automated detection and response technology to mitigate the next corona pandemic

What happens the day after?   What happens next winter?

Sure – we must find effective treatment and vaccines.  Sure – we need  to reduce or eliminate the need for on-site monitoring visits to hospitals in clinical trials.  And sure – we need to enable patient monitoring at home.

But let’s not be distracted from 3 more significant challenges:

1 – Improve patient care

2 – Enable real-time data sharing. Enable participants in the battle with COVID-19 to share real-world / placebo arm data, making the fight with COVID-19 more efficient and collaborative.

3- Enable researchers to dramatically improve data reliability, allowing better decision making and improving patient safety.

Clinical research should ultimately improve patient care.

The digital health space is highly fragmented (I challenge you to precisely define the difference between patient engagement apps, patient adherence apps and patient management apps). There are over 300 digital therapeutic startups. We lack a common ‘operating system’, and there is a dearth of vendor-neutral standards that would enable interoperability between different digital health systems, mobile apps and services.

By comparison – clinical trials have a well-defined methodology, standards (GCP) and generally accepted data structures in case report forms.  So why do many clinical trials fail to translate into patient benefit?

A 2017 article by Carl Heneghan, Ben Goldacre & Kamal R. Mahtani “Why clinical trial outcomes fail to translate into benefits for patients”  (you can read the Open Access article here) states the obvious: that the objective of clinical trials is to improve patients’ health.

The article points at  a number of serious  issues ranging from badly chosen outcomes, composite outcomes, subjective outcomes and lack of relevance to patients and decision makers to issues with data collection and study monitoring.

Clinical research should ultimately improve patient care. For this to be possible, trials must evaluate outcomes that genuinely reflect real-world settings and concerns. However, many trials continue to measure and report outcomes that fall short of this clear requirement…

Trial outcomes can be developed with patients in mind, however, and can be reported completely, transparently and competently. Clinicians, patients, researchers and those who pay for health services are entitled to demand reliable evidence demonstrating whether interventions improve patient-relevant clinical outcomes.

There can be fundamental issues with study design and how outcomes are reported.

This is an area where modeling and ethical conduct intersect; both are critical.

Technology can support modeling using model verification techniques (used in software engineering, chip design, aircraft and automotive design).

However, ethical conduct is still a human attribute that can neither be automated nor replaced with an AI.

Let’s leave modeling to the AI researchers and ethics to the bioethics professionals.

For now at least.

In this article, I will take a closer look at 3 activities that have a crucial impact on data quality and patient safety. These 3 activities are orthogonal to the study model and ethical conduct of the researchers:

1 – The time it takes to detect and log protocol deviations.

2 – Signal detection of adverse events (related to 1)

3 – Patients lost to follow-up (also related to 1)

Time to detect and log deviations

The standard for study monitors is to visit investigational sites once every 5-12 weeks. A Phase IIB study with 150 patients that lasts 12 months would typically have 6-8 site visits (which, incidentally, cost the sponsor $6-8M including the rewrites, reviews and data management loops to close queries).

Adverse events

As reported by Heneghan et al:

A further review of 11 studies comparing adverse events in published and unpublished documents reported that 43% to 100% (median 64%) of adverse events (including outcomes such as death or suicide) were missed when journal publications were solely relied on [45]. Researchers in multiple studies have found that journal publications under-report side effects and therefore exaggerate treatment benefits when compared with more complete information presented in clinical study reports [46]

Loss of statistical significance due to patients lost to follow-up

As reported by Akl et al in  “Potential impact on estimated treatment effects of information lost to follow-up in randomized controlled trials (LOST-IT): systematic review” (you can see the article here):

When we varied assumptions about loss to follow-up, results of 19% of trials were no longer significant if we assumed no participants lost to follow-up had the event of interest, 17% if we assumed that all participants lost to follow-up had the event, and 58% if we assumed a worst case scenario (all participants lost to follow-up in the treatment group and none of those in the control group had the event).
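The arithmetic behind these loss-to-follow-up assumptions is easy to sketch. The trial numbers below are invented for illustration; they are not taken from the LOST-IT review.

```python
# Recompute a trial's event rates under different assumptions about
# participants lost to follow-up (all numbers are hypothetical).

def rate(events, assumed_lost_events, followed, lost):
    """Event rate over everyone randomized, under an assumption about
    how many of the participants lost to follow-up had the event."""
    return (events + assumed_lost_events) / (followed + lost)

# Hypothetical two-arm trial:
t_followed, t_events, t_lost = 90, 18, 10    # treatment arm
c_followed, c_events, c_lost = 92, 30, 8     # control arm

# Assume none of the lost participants had the event:
best = (rate(t_events, 0, t_followed, t_lost),
        rate(c_events, 0, c_followed, c_lost))

# Worst case: all lost in the treatment arm had the event, none in control:
worst = (rate(t_events, t_lost, t_followed, t_lost),
         rate(c_events, 0, c_followed, c_lost))

print(best)   # treatment 0.18 vs control 0.30 - a clear apparent benefit
print(worst)  # treatment 0.28 vs control 0.30 - the benefit nearly vanishes
```

Ten patients lost out of a hundred are enough to swing the apparent treatment effect from convincing to negligible, which is exactly why the review’s sensitivity analysis flips significance in so many trials.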

Real-time data

Real-time data (not data collected from paper forms 5 days after the patient left the clinic) is key to providing an immediate picture and assuring interpretable data for decision-making.

Any combination of data sources should work – patients, sites, devices, electronic medical record systems, laboratory information systems or some of your own code. Like this:

(Figure: mobile eSource, mobile ePRO and medical device API)

Signal detection

The second missing piece is signal detection for safety, data quality and risk assessment of patient, site and study.

Signal detection should be based upon the clinical protocol and be able to classify the patient into 1 of 3 states: complies, exception (took too much or too little or too late for example) and miss (missed treatment or missing data for example).
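A minimal sketch of this three-state classification in Python, assuming hypothetical protocol parameters (the dose, tolerance and time window are illustrative, not from any real study):

```python
from datetime import datetime, timedelta

# Hypothetical protocol parameters:
DOSE_MG = 50          # prescribed dose
TOLERANCE_MG = 10     # acceptable deviation from the dose
WINDOW_HOURS = 2      # acceptable lateness

def classify(reported_mg, scheduled, taken):
    """Put a single dosing event into one of three boxes:
    'complies', 'exception', or 'miss'."""
    if reported_mg is None or taken is None:
        return "miss"                      # missed treatment or missing data
    late = taken - scheduled > timedelta(hours=WINDOW_HOURS)
    wrong_dose = abs(reported_mg - DOSE_MG) > TOLERANCE_MG
    if late or wrong_dose:
        return "exception"                 # too much, too little, or too late
    return "complies"

sched = datetime(2020, 4, 1, 8, 0)
print(classify(50, sched, sched + timedelta(minutes=30)))  # complies
print(classify(75, sched, sched + timedelta(minutes=30)))  # exception
print(classify(None, sched, None))                         # miss
```

Because the rules come straight from the protocol, the same classifier can run on every dosing event from every patient, in real time.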

You can visualize signal classification as putting the patient state into 1 of 3 boxes.

Automated response

One of the biggest challenges for sponsors running clinical trials is delayed detection and response. Protocol deviations are logged 5-12 weeks (and in the best case 2-3 days) after the fact. Response then trickles back to the site and to the sponsor – resulting in patients lost to follow-up and adverse events that are recorded long after the fact.

If we can automate signal detection, then we can also automate response and begin to understand the causes of the deviations. Understanding context and cause is much easier when done in real time. A good way to illustrate this is to think about what you were doing two weeks ago today and try to connect that with a dry cough, light fever and aching back. The symptoms may be indicative of COVID-19, but you probably don’t remember what you were doing and with whom you came into close contact. The solution to COVID-19 back-tracking is digital surveillance and automation. Similarly, the solution for responding to exceptions and misses is to digitize and automate the process.

Like this:

(Figure: causal flows of patient adherence)

Summary

In summary we see 3 key issues with creating meaningful outcomes for patients:

1 – The time it takes to detect and log protocol deviations.

2 – Signal detection of adverse events and risk (related to 1)

3 – Patients lost to follow-up (also related to 1)

These 3 issues for creating meaningful outcomes for patients can be resolved with 3 tightly integrated technologies:

1 – Real-time data acquisition for patients, devices and sites (study nurses, site coordinators, physicians)

2 – Automated detection

3 – Automated response


What takes precedence? GCP or hospital network security?

Patient compliance in medical device clinical trials

This is a piece I wrote a while back on my medical device security blog – Cybersecurity for medical devices.

One of the biggest challenges of using connected medical devices in clinical trials is near real-world usage of devices that are not commercially ready.

We have a couple of customers that are performing clinical trials of medical devices in the ER and ICU. The tradeoffs between cybersecurity and patient safety are not insignificant.


Data quality, protocol compliance and patient safety are the 3 main pillars of GCP.

What is more important – patient safety or the health of the enterprise hospital Windows network?

What is more important – writing secure code or installing an anti-virus?

In order to answer these questions, we performed a threat analysis on a medical device being studied in intensive care units. The threat analysis used the PTA (Practical Threat Analysis) methodology.

Risk analysis of a medical device

Our analysis considered threats to three assets: medical device availability, the hospital enterprise network and patient confidentiality/HIPAA compliance. Following the threat analysis, a prioritized plan of security countermeasures was built and implemented including the issue of propagation of viruses and malware into the hospital network (See Section III below).

Installing anti-virus software on a medical device is less effective than implementing other security countermeasures that mitigate more severe threats – ePHI leakage, software defects and USB access.

A novel benefit of our approach is derived by providing the analytical results as a standard threat model database, which can be used by medical device vendors and customers to model changes in risk profile as technology and operating environment evolve. The threat modelling software can be downloaded here.

(more…)

The gap between the proletariat and Medidata (or should I say Dassault)

We need a better UX before [TLA] integration

The sheer number and variety of eClinical software companies and buzzwords confuses me.
There is EDC, CTMS, IVRS, IWRS, IRT, eSource, eCOA, ePRO and a bunch more TLAs.
For the life of me I do not understand the difference between eCOA and ePRO and why we need 2 buzzwords for patient reporting.

Here is marketing collateral from a CRO.   As you will see – they miss the boat on all the things that are important for site coordinators and study monitors.

We adapt responsively to change in your clinical trial to minimize risk and drive quality outcomes. Clinical research is complicated and it’s easy to get off track due to inexperienced project leaders, inflexible workflows, or the failure to identify risks before they become issues. We derive expert insights from evidence-based processes and strategic services to be the driving force behind quality outcomes, including optimized data, patient safety, reduced time-to-market, and operational savings.

What CRCs and CRAs have to say about the leading eClinical solutions

I recently did an informal poll on Facebook of what problems the CRA/CRC proletariat have to deal with on the job.

I want to thank Tsvetina Dencheva for helping me grok and distill people’s complaints into 3 central themes.

Theme no. 1 – enter data once

Enable administrators to enter data once and have their authorized user lists, sites and metrics update automatically, without all kinds of double and triple work and fancy import/export footwork between different systems. Failing a way of managing everything in one place, at least have better integration between the EDC and the CTMS.

The IT guys euphemistically call this problem information silos. I’ve always thought that they used the word silos (which are used to store animal feed) as a way of identifying with people who farm, without actually having to get their hands dirty by shovelling silage (which is really smelly, btw).

I understand the rationale for having a CTMS and an EDC about as much as I understand the difference between eCOA and ePRO.

Here is some raw data from the informal Facebook survey

If I enter specific data, it would be great if there’s an integrated route to all fields connected to the said data. An easy example is – if I enter a visit, it transfers to my time sheet.

Same goes to contact reports. Apps! All sorts of apps, ctms, verified calculators, edc, ixrs, Electronic TMF. The list goes on and on. How could I forget electronic training logs? Electronic all sorts of log.

There are a lot of things we do day to day that are repetitive and can take away from actually moving studies forward. Thinking things like scanning reg docs, auto capturing of reg doc attributes (to a point), and integration to the TMF. Or better system integration, meaning where we enter a single data point (ie CTMS) and flowing to other systems (ie new site in CTMS, create new site in TMF. Enrolment metrics from EDC to CTMS) and so on.

If only the f**ing CTMS would work properly.

Theme number 2 – single sign-on.

The level of frustration with having to log in to different systems is very high. The ultimate solution is to use social login – just sign in to the different systems with your Google account and let Google/Firebase authenticate your identity.

Theme number 3 – data integrity

EDC edit check development eats up a lot of time and, when poorly designed, generates thousands of queries. Not good.

There is a vision of an EDC that understands the data semantics from context of the study protocol.

This is a very cool and advanced notion.

One of the study monitors put it like this:

The EDC should be smart enough to identify nonsense without having to develop a bunch of edit checks each time and have to deal with queries.

The EDC should be able to calculate if a visit is in a proper time window, or if imaging is in a proper time window. Also for oncology if RECIST 1.1 is used, then the EDC should be able to calculate: Body Surface Area, correct dosing based on weight and height of a patient, RECIST 1.1 tumor response and many other things that simply can be calculated.
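Some of these derived checks are plain arithmetic. Below is a hedged sketch of an automatic body surface area and dose calculation using Mosteller’s formula; the dose-per-BSA figure is a hypothetical protocol value, not a clinical recommendation.

```python
import math

# The kind of derived check a smarter EDC could run automatically,
# instead of waiting for a hand-written edit check.

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area in m^2, Mosteller formula:
    sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def expected_dose_mg(height_cm, weight_kg, mg_per_m2=75):
    """Expected dose for a hypothetical 75 mg/m^2 protocol."""
    return bsa_mosteller(height_cm, weight_kg) * mg_per_m2

bsa = bsa_mosteller(170, 70)
print(round(bsa, 2))                     # 1.82 m^2
print(round(expected_dose_mg(170, 70)))  # 136 mg expected
```

An EDC that computes the expected dose itself can flag a mismatched entry at the moment of data entry, rather than generating a query weeks later.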

About flaskdata.io

We specialise in faster submission for connected medical devices. We can shorten your time to market by 9-12 months with automated patient compliance detection and response.

Call us and we’ll show you how. No buzzwords required.

Living in an ideal world where the study nurse isn’t overwhelmed by IT

Tigran examines the idea of using EDC edit checks to assure patient compliance to the protocol.

How should I assure patient compliance to the protocol in a medical device trial?

I sometimes get asked whether automated patient compliance deviation detection and response is overkill.

After all, all EDC systems allow comparing input to preset ranges and data types (edit checks). Why not use this already-available, off-the-shelf functionality to catch non-compliance? As Phileas Fogg put it: “Learn to use what you have got, and you won’t need what you have not.”
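For reference, a conventional range edit check amounts to very little code. The sketch below assumes a hypothetical systolic blood pressure field; note how it catches a transcription error but lets a plausible-looking value from a non-compliant patient sail through.

```python
# A conventional EDC range edit check, sketched for a hypothetical
# systolic blood pressure (SBP) field with illustrative limits.

def edit_check_sbp(value_mmhg, low=70, high=200):
    """Fire a query only when the value is outside the preset range."""
    if low <= value_mmhg <= high:
        return None                       # value accepted, no query
    return "query: SBP out of range"

print(edit_check_sbp(820))  # transcription error: query fired
print(edit_check_sbp(118))  # plausible value: accepted, even if the
                            # patient skipped their dose that morning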

Why edit checks are not enough

There are 4 issues with using EDC edit checks to enforce patient compliance:

Individual variations

The original purpose of edit checks is to catch data entry mistakes. As they fire automatically, they need to be robust enough not to fire indiscriminately. The effect non-compliance has on clinical data can be far less clear-cut, especially when taking individual variation between patients into account.

Timing

Even if we were able to reliably catch non-compliance through clinical data alone, there’s the issue of timing.

Each hour of delay between the non-compliance event and a prompt to return to compliance devalues the prompt. Delays come from a) manually entering source data into the EDC, b) edit checks firing in batch mode rather than during data entry, and c) the time needed to process the edit checks. What’s the benefit of being told you were non-compliant a week ago?

Talk of closing the stable door after the horse has bolted…

By the time the nurse contacts the patient, the damage has already been done. No reinforcement is possible: a patient might (theoretically) be reminded about the need to be compliant at intervals of several weeks, in which case the reminder is a token gesture, nothing more.

The study nurse may not have spare time on her hands

Let’s assume we live in an ideal world, where the study nurse isn’t overwhelmed by thousands of edit checks firing for no reason, and where data flows into EDC with no delay.

Even if this is true, there’s still the small matter of actually reaching out to the patient. When compliance reaches 90%, that’s considered a good result; so in the best-case scenario, the nurse would need to reach out to patients in 10% of cases. Edit checks are meant to be resolved immediately. If the EDC fires edit checks during data entry, the data entry process will be paralyzed. If edit checks fire in the background, the whole data cleaning/query resolution process will stall.

Edit checks are not an operational tool

What would happen in reality, though, is that any edit checks introduced to monitor patient compliance would be overridden by site staff, together with any legitimate edit checks designed to keep the errors out, resulting in the same level of compliance and a much dirtier database. And that’s the best-case scenario; otherwise no data would be entered at all.

Tigran Arzumanov is an experienced business development/sales consultant running BD as a service, a Contract Sales Organization for Healthcare IT and Clinical development.

Patient compliance – the billion dollar question

The high failure rate of drug trials

The high failure rate of drugs in clinical trials, especially in the later stages of development, is a significant contributor to the costs and time associated with bringing new molecular entities to market. These costs, estimated to be in excess of $1.5 billion when capitalized over the ten to fifteen years required to develop a new chemical entity, are one of the principal drivers responsible for the ongoing retrenchment of the pharmaceutical industry. Therapeutic areas such as psychiatry, now deemed very high risk, have been widely downsized, if not abandoned entirely, by the pharmaceutical industry. The extent to which patient noncompliance has marred clinical research has in some cases been underestimated, and one step to improving the design of clinical trials may lie in better attempts to analyze patient compliance during drug testing and clinical development. (Phil Skolnick, Opiant Pharmaceuticals, “The secrets of a successful clinical trial: compliance, compliance, compliance”)

Compliance, compliance, compliance

Compliance is considered to be key to success of a medical treatment plan. (1, 2, 3)

It is the “billion dollar question” in the pharma and medical device industry.

In home-use medical devices in particular, and in chronic diseases in general, there is wide consensus that patient compliance is critical to the success of the clinical trial. Our experience with Israeli innovative medical device vendors is that they understand the criticality of patient compliance. They “get it”.

However, as Skolnick notes, patient compliance with the clinical protocol is often underestimated in drug trials.

There are 4 challenges for assuring patient compliance in medical device trials.

1. The first challenge is maintaining transparency. An executive at IQVIA noted (in a personal conversation with me) that IQVIA does not calculate patient compliance metrics, since they assume that patient compliance is the responsibility of the sites. The sponsor relies on the CRO, who does not collect the metrics, who relies on the sites, who do not share their data.

2. The second challenge is having common standard metrics of compliance. Site performance on patient compliance may vary but if sites do not share common metrics on their patients’ compliance, the CRO and the sponsor cannot measure the most critical success factor of the study.

3. The third challenge is timely data. In the traditional clinical trial process, low-level data queries are resolved in the EDC, but higher-level deviations often wait until study close-out. The ability of a study team to properly resolve thousands of patient compliance issues months (or even years) after the patient participated is limited, to say the least.

4. The fourth and final challenge is what happens after the clinical trial. How do we take lessons learned from a controlled clinical trial and bring them into evidence-based practice?

A general approach to measuring and sharing patient compliance metrics

A general approach to addressing these challenges should be based on standard metrics, fast data, active monitoring and reinforcement, and reuse.

1. Use standard metrics for treatment and patient reporting compliance. The metrics then become a transparent indicator of performance and a tool for improvement.

A simple metric of compliance might be a score based on patient reporting, treatment compliance and treatment violations. We may consider a threshold for each individual metric – for example a 3 strike rule like in baseball.

A more sophisticated measure of compliance might be similar to beta in capital market theory where you measure the ‘volatility’ of individual patient compliance compared to the study as a whole. (Beta is used in the capital asset pricing model, which calculates the expected return of an asset based on its beta and expected market returns or expected study returns in our case).

2. Fast data means automating digital data collection from patients, connected medical devices and sites, eliminating paper source and SDV for the core data related to treatment and safety endpoints.

3. Actively monitor and help patients sustain a desired state of compliance with the treatment protocol, both pharmacologic and non-pharmacologic. Not everything is about pill counting. This can be done with AI-based reminders using techniques like contextual bandits and decision trees.

4. Reuse clinical trial data and extract high quality training information that can be used for evidence-based practice.
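A hedged sketch of the two metrics described in point 1, with invented example data: a plain compliance ratio, and a “beta” computed on the capital-market analogy as the covariance of a patient’s weekly compliance with the study-wide average, divided by the variance of that average.

```python
from statistics import mean, pvariance

# All weekly compliance figures below are invented for illustration.

def compliance_score(doses_taken, doses_scheduled):
    """Simple per-patient compliance ratio."""
    return doses_taken / doses_scheduled

def beta(patient_weekly, study_weekly):
    """Volatility of one patient's compliance relative to the study:
    cov(patient, study) / var(study), per the capital-market analogy."""
    m_p, m_s = mean(patient_weekly), mean(study_weekly)
    cov = mean((p - m_p) * (s - m_s)
               for p, s in zip(patient_weekly, study_weekly))
    return cov / pvariance(study_weekly)

study = [0.95, 0.90, 0.85, 0.92]      # study-wide weekly compliance
steady = [0.96, 0.91, 0.86, 0.93]     # moves exactly with the study
volatile = [1.00, 0.80, 0.70, 0.90]   # amplifies every study-wide swing

print(compliance_score(27, 30))         # 0.9 of scheduled doses taken
print(round(beta(steady, study), 2))    # 1.0 - tracks the study
print(round(beta(volatile, study), 2))  # about 3 - three times as volatile
```

A beta well above 1 flags a patient whose compliance swings far harder than the study average, even if their overall compliance score looks acceptable.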

Patient compliance teardown

Measures of patient compliance can be classified into 3 broad categories:

1. Patient reporting – i.e. how well the patient reports her own outcomes

2. Treatment compliance – how well the treatment conforms to the protocol in terms of dosing quantities and times of application. Research suggests that professional patients may break the pill-counting model.

3. Patient violations – the patient does something contrary to the protocol, like taking a rescue medication before the migraine treatment

Confounding variables

Many heart failure patients are thought to be non-compliant with their treatment because of prior beliefs – believing that the study treatment will not help them. In the European COMET trial, with over 3000 patients, a lack of belief in medication at the start of the study was a strong predictor of withdrawal from the trial (64% versus 6.8%; p < 0.0001). Patients with very poor well-being and limited functional ability (NYHA class III–IV) at baseline significantly (p = 0.01) increased their belief in their regular cardiac medication, but not in their study medication (4).

But numerous additional factors also contribute to patient non-compliance in clinical trials:  lack of home support, cognitive decline, adverse events, depression, poor attention span, multiple concomitant medications, difficulty swallowing large pills, difficult-to-use UI in medical devices and digital therapeutics and inconveniences of urinary frequency with diuretics for heart failure patients (for example).

It seems that we can identify 6 main confounding variables that influence compliance:

1. Patient beliefs – medication is useless, this specific medication cannot help, or this particular chronic condition is incurable

2. Concerns about side effects – this holds for investigators as well as patients, and may account for some PI non-compliance.

3. Alert fatigue – patients can be overwhelmed by too many reminder messages

4. Forgetfulness – affects elderly patients, young patients and shift workers alike.

5. Language –  the treatment instructions are in English but the patient only speaks Arabic.

6. Home support – patient lives alone or travels frequently or does not have strong support from a partner or parent for their chronic condition.

Summary

Flaskdata.io provides a HIPAA and GDPR-compliant cloud platform that unifies EDC, ePRO, eSource and connected medical devices with automated patient compliance monitoring. The latest version of Flaskdata.io provides standard compliance metrics of patient reporting and active messaging reminders to help keep patients on track.  Your users can subscribe to real-time alerts and you can share metrics with the entire team.

Contact Batya for a free demo and consult and learn how fast data, metrics and active reinforcement can help you save time and money on your next study.

References

1. Gould E, Mitty E. Medication adherence is a partnership, medication compliance is not. Geriatr Nurs. 2010 Jul-Aug;31(4):290-8. https://www.ncbi.nlm.nih.gov/pubmed/20682408

2. DiMatteo MR et al. Depression Is a Risk Factor for Noncompliance With Medical Treatment: Meta-analysis of the Effects of Anxiety and Depression on Patient Adherence. http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/485411

3. Frishman WH. Importance of medication compliance in cardiovascular disease and the value of once-daily treatment regimens. https://www.ncbi.nlm.nih.gov/pubmed/17700384

4. Ekman I, Andersson G, et al. Adherence and perception of medication in patients with chronic heart failure during a five-year randomised trial. https://doi.org/10.1016/j.pec.2005.04.005


Invisible gorillas and detection of adverse events in medical device trials

Weekly Episode #1 - Patients and study monitors are both people.

What is easier to detect in your study – slow-moving or fast-moving deviations?

This post considers human frailty and strengths.

We recently performed a retrospective study of the efficacy of Flaskdata.io automated study monitoring in orthopedic trials. An important consideration was the ability to monitor patients who had received an implant and were on a long-term follow-up program. Conceptually, monitoring small numbers of slow-moving, high-risk events is almost impossible to do manually, since we miss a lot of what goes on around us and have no idea that we are missing so much. See the invisible gorilla experiment for an example.

One of the patients in the study, who had received a spinal implant and was on a 6-month follow-up program, dived into a pool to swim a few laps and drowned, despite being a strong swimmer. Apparently, the pain caused by movement of the implant resulted in loss of control and a severe adverse event. The patient had disregarded instructions regarding strenuous physical activity, and the results were disastrous.

It seems to me that better communications with the patients in the medical device study could have improved their level of awareness of safety and risk and perhaps avoided an unnecessary and tragic event.

Subjects and study monitors are both people.

This might be a trivial observation but I am going to say it anyhow, because there are lessons to be learned by framing patients and monitors as people instead of investigation subjects and process managers. 

People are the specialists in their personal experience; the clinical operations team are the specialists in the clinical trial protocol. Let’s not forget that subjects and study monitors are both people.

Relating to patients in a blinded study as subjects without feelings or experience is problematic. We can relate to patients in a personal way without breaking the double blinding and improve their therapeutic experience and their safety. 

We should relate to study monitors in a personal way as well, by providing them with great tools for remote monitoring and enabling them to prioritize their time on important areas such as dosing violations and sites that need more training. We can use analytics of online data from the EDC, ePRO, eSource and connected medical devices to enhance and better utilize clinical operations teams’ expertise in process and procedure.

A ‘patient-centered’ approach to medical device clinical trials

In conditions such as Parkinson’s disease, support group meetings and online sharing are used to stay on top of medication, side effects, falls and the patient’s general feeling, even though decisions on the treatment plan need to be made by an expert neurologist / principal investigator, and oversight of protocol violations and adverse events is performed by the clinical operations team. There are many medical conditions where patients can benefit by taking a more involved role in the study. One common example is carpal tunnel syndrome.

According to findings published in the August 3, 2011 issue of the Journal of Bone and Joint Surgery (JBJS), patients receiving treatment for carpal tunnel syndrome (CTS) prefer to play a more collaborative role in making decisions about their medical or surgical care.

Treatment of carpal tunnel syndrome, which is very common and also extremely dependent upon patient behavior and compliance, is a great example of the effectiveness of the shared decision-making (collaborative) model in medicine, in which the physician and patient make the decision together, exchanging medical and other information related to the patient’s health.

As the article in JBJS concludes:

“This study shows the majority of patients wanted to share decision-making with their physicians, and patients should feel comfortable asking questions and expressing their preferences regarding care. Patient-centered care emphasizes the incorporation of individual styles of decision making to provide a more patient-centered consultation,” Dr. Gong added. 

In a ‘patient-centered’ approach to medical device clinical trials, patients’ cultural traditions, personal preferences and values, family situations, social circumstances and lifestyles are considered in the decision-making process.

Automated patient compliance monitoring with tools such as Flaskdata.io is a great way to create a feedback loop of medical device clinical data collection, risk-signature improvement, detection of critical signals and communication of information to patients. Conversely, automated real-time patient compliance monitoring is a great way of enhancing clinical operations team expertise.

Patients and study monitors are both people.