Embrace Automation in Life Sciences

At the end of November, Michael Causey, writing for the Association of Clinical Research Professionals, quoted FDA Commissioner Scott Gottlieb saying “We don’t use technology well in clinical trials to collect information and to use it to do quality checks on the data that’s collected.” Wow.

The commissioner was clearly referring to the shortcomings of automation efforts in the life sciences. I believe he is correct – to a degree. The industry is using technology well, but it must implement automation and workflow technologies more quickly and on a larger scale, and put a stronger focus on automating data quality, an area that can significantly slow down or derail a trial when handled poorly.

This is not to say that the industry is completely failing at automation. On the contrary, we are applying more and more technology to solve everyday problems that arise when designing and running these complex experiments called clinical trials. The deficiency lies more specifically in the lack of focus around data quality improvements through technology, an issue we have been talking about and advocating for – and in fact automating – for several years. Data quality in clinical trials is paramount.

At AG Mednet we have always believed that quality begins at the source. In previous posts and articles, we have made the case for bringing a manufacturing industry discipline to the assembly of clinical data submissions from sites. We believe that basic case report form automation, such as edit checks, is insufficient to ensure data quality.
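To make concrete what “basic edit checks” means here, the sketch below shows the kind of automation the post argues is necessary but not sufficient: simple required-field and range validation on a case report form record. The field names and plausibility ranges are hypothetical, chosen only for illustration.

```python
# A minimal sketch of basic case report form (CRF) edit checks.
# Field names and the blood-pressure range are hypothetical examples,
# not drawn from any real trial's data specification.

def run_edit_checks(record: dict) -> list[str]:
    """Return a list of edit-check findings for one CRF record."""
    findings = []

    # Required-field check: flag anything missing or blank.
    for field in ("subject_id", "visit_date", "systolic_bp"):
        if record.get(field) in (None, ""):
            findings.append(f"missing required field: {field}")

    # Simple plausibility range check on a numeric field.
    sbp = record.get("systolic_bp")
    if isinstance(sbp, (int, float)) and not (60 <= sbp <= 250):
        findings.append(f"systolic_bp out of plausible range: {sbp}")

    return findings


# Example: a record with one blank field and one implausible value
# yields two findings.
findings = run_edit_checks(
    {"subject_id": "S-001", "visit_date": "", "systolic_bp": 300}
)
```

Checks like these catch typos and omissions on a single form, but they cannot tell you whether the underlying data was collected correctly in the first place, which is the gap the rest of this post is about.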

In imaging trials, for example, having a system that can check, prior to submission, whether scans were properly obtained by following the image acquisition charter greatly enhances quality across multiple dimensions. First, it can prevent useless data from being submitted. Second, it can prevent future data queries and their inevitable related delays. Better management of these data quality efforts can enhance the value of the whole data set when looking for therapeutic efficacy or disease progression.
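The idea of checking scans against an acquisition charter before submission can be sketched as follows. The charter parameters, metadata keys, and series names are all hypothetical; a real system would read this information from DICOM headers and a trial-specific charter, but the shape of the check is the same: compare each scan's acquisition parameters against the protocol before the site is allowed to submit.

```python
# A minimal sketch of a pre-submission check of scan metadata against
# an image acquisition charter. The charter values and metadata keys
# below are hypothetical; a real system would parse DICOM headers.

CHARTER = {
    "modality": "CT",
    "max_slice_thickness_mm": 2.5,
    "required_series": {"baseline", "follow_up"},
}

def check_submission(scans: list[dict]) -> list[str]:
    """Return charter violations found before the site submits."""
    violations = []
    series_seen = set()

    for scan in scans:
        series_seen.add(scan.get("series"))
        # Acquisition must use the modality the charter specifies.
        if scan.get("modality") != CHARTER["modality"]:
            violations.append(f"wrong modality: {scan.get('modality')}")
        # Slices thicker than the charter allows reduce measurement quality.
        if scan.get("slice_thickness_mm", 0) > CHARTER["max_slice_thickness_mm"]:
            violations.append(
                f"slice thickness {scan['slice_thickness_mm']} mm exceeds "
                f"{CHARTER['max_slice_thickness_mm']} mm"
            )

    # Every series the charter requires must be present in the package.
    for series in sorted(CHARTER["required_series"] - series_seen):
        violations.append(f"missing required series: {series}")

    return violations
```

An empty result means the submission package is charter-compliant; any violation is surfaced to the site while the subject and the scanner session are still at hand, rather than weeks later as a data query.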

When these quality checks are done after data submission, we are deluding ourselves into believing that we are unburdening sites from having to “answer to a system.” The typical excuse has always been “don’t prevent a site from sending whatever data they have, because they will get frustrated.” That illusion is quickly dispelled by the resulting data queries, which sites love just as much as sponsors and CROs do.

Simply employing automation to perform “quality checks on the data that’s collected,” as Commissioner Gottlieb put it, has a significant impact. The sooner we can catch an error, the sooner it can be corrected. In fact, in clinical trials there are certain errors that simply cannot be corrected if too much time passes between data collection and error detection; subjects may no longer be available, or the disease may have progressed such that measurements are no longer contemporaneous.

Commissioner Gottlieb is right. We should not only focus on task automation but also develop capabilities that check the data for errors. In imaging trials, we have already seen the benefits of detecting data defects while sites are assembling a submission. This reduces errors, and with fewer errors the signal-to-noise ratio of the experiment improves appreciably. With standard automation technologies, and the enhancements being made possible by emerging artificial intelligence techniques, we believe there are meaningful improvements ahead of us in all areas of data collection in clinical trials. We can remove so many of the delays and mistakes from the clinical trial process through automation. Let’s rise to Commissioner Gottlieb’s challenge and improve trials so we secure optimal trial outcomes.
