
How AI Can Enhance the Clinical Data Review and Cleaning Process

  • Admin
  • 06 Apr 21

Clinical data is a chief resource for most health and medical research. It is collected either during ongoing patient care or as part of a formal clinical trial program.

Reviewing and cleaning clinical data is a tough job. Beyond being tough, manually ensuring the safety and reliability of the data is time-consuming. Many life science companies have come up with platforms to review clinical data, but these have proven challenging to use, inflexible, and confusing for users. That is where Artificial Intelligence steps in. By Artificial Intelligence, I mean the branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. It involves machine learning (ML), deep learning (DL), and natural language processing (NLP).

To review and clean clinical data, we can best use the strengths of both humans and machines!

A machine is good at processing and analyzing large volumes of data with high speed and accuracy, while humans can make the best decisions based on that data. When it comes to reviewing clinical data, a human-computer system will perform better than either method on its own. AI initiatives that revolve around clinical trials require humans and machines to work together.

As Prabha Ranganathan, Director of Clinical Data Warehousing at Perficient, writes, analysis using ML models can provide more insight into clinical data and enable humans to determine the safety and efficacy of the trial. Her proposals follow.

Clinical Data Review Platforms (CDRP):

While ML and NLP functionality isn’t currently available in today’s CDRPs, it could provide tremendous benefits to the clinical data review and cleaning process.

When conducting clinical trials, data is collected from electronic data capture (EDC) systems and from central and local labs. This data is combined and stored in a data warehouse, then transformed into a format familiar to data managers, who review and clean it. Once cleaned, the data is transformed into common data models (e.g., CDISC SDTM) and used to generate submission documents for the FDA.

Machine learning models can be built to check for patterns in the data and, when there are irregularities or missing values, bring them to the attention of data managers for further review. Data from prior studies is available to ML algorithms and models, which can learn from it.

Every clinical trial has certain milestones to reach, and for each milestone there are criteria to be met and associated documentation to be generated. ML models can be trained to determine whether a trial is ready for a particular milestone. If it is not ready, the model can identify the bottleneck and predict, based on historical data, how long it will take to reach the milestone. It can also generate the documentation needed for a milestone and send it to humans for approval before submission to the FDA. For future clinical trials, the algorithms can answer questions like, “How long will it take me to enroll ‘N’ subjects for an oncology study?” and “How long will it take to reach a milestone based on the trial protocol document?”
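As a toy illustration of this pattern-checking idea, the sketch below flags missing entries and anomalous lab values for a data manager’s attention. The column names, values, and the choice of scikit-learn’s IsolationForest are assumptions made for the example, not a description of any particular CDRP.

```python
# A toy sketch of ML-assisted pattern checking: flag missing entries and
# anomalous lab values for human follow-up. Column names and values are
# invented for illustration; a real CDRP would train on prior studies.
import pandas as pd
from sklearn.ensemble import IsolationForest

lab = pd.DataFrame({
    "SUBJID": ["001", "002", "003", "004", "005"],
    "HGB":    [13.5, 14.1, 2.0, 13.9, None],  # 2.0 is implausible; None is missing
})

# Missing data goes straight to the data manager's review queue.
missing = lab[lab["HGB"].isna()]

# An IsolationForest scores the remaining values; -1 marks an outlier.
complete = lab.dropna(subset=["HGB"]).copy()
model = IsolationForest(contamination=0.2, random_state=0)
complete["flag"] = model.fit_predict(complete[["HGB"]])
anomalous = complete[complete["flag"] == -1]

print("Missing values:\n", missing)
print("Anomalous values:\n", anomalous[["SUBJID", "HGB"]])
```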

 

 

Natural Language Querying (NLQ):

NLP can also let users query clinical data in plain language. For example, users could type search requests such as:

• “Show me all demography data where subjects are males, but pregnancy is yes.”

• “Show me all data from adverse events and concomitant medications, highlighting concomitant medications without corresponding adverse events.”

• “Show me all the severe adverse events reported for the third visit.”

 

NLP/NLQ (natural language querying) converts these questions into search criteria, which are then converted into an SQL query. The query is executed, and the results are returned.
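As a minimal sketch of that pipeline, the snippet below maps two of the example questions to SQL with simple keyword rules. The table and column names (DM, AE, CM, SUBJID, SEX, PREGNANT) are hypothetical, loosely styled after CDISC domains; a real CDRP would use a trained NLP model rather than hand-written rules.

```python
# A minimal, rule-based sketch of NLQ-to-SQL conversion. Table and column
# names are hypothetical, loosely styled after CDISC domains; a real
# system would use a trained NLP model instead of keyword rules.

def nl_to_sql(question: str) -> str:
    q = question.lower()
    if "demography" in q and "pregnancy" in q:
        # "Show me all demography data where subjects are males, but pregnancy is yes."
        return "SELECT * FROM DM WHERE SEX = 'M' AND PREGNANT = 'Y'"
    if "concomitant" in q and "adverse" in q:
        # Concomitant medications without corresponding adverse events.
        return ("SELECT CM.* FROM CM LEFT JOIN AE ON AE.SUBJID = CM.SUBJID "
                "WHERE AE.SUBJID IS NULL")
    raise ValueError("Question not covered by this simple sketch")

print(nl_to_sql("Show me all demography data where subjects are males, but pregnancy is yes."))
```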

Once ML algorithms understand the clinical data, they can execute the search criteria and deliver the results. Information extraction with language translation (e.g., English to SQL) can be used. Extending this means including audio input from users, converting the audio to text, and then using NLQ to convert the text into the corresponding queries. For non-English-speaking users, NLP language translators can be used to achieve the same result.
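A sketch of that audio extension might look like the following, assuming the third-party SpeechRecognition package and the hypothetical nl_to_sql() helper from the previous sketch:

```python
# A sketch of the audio-input extension, assuming the third-party
# SpeechRecognition package (pip install SpeechRecognition) and the
# hypothetical nl_to_sql() helper from the previous sketch.
import speech_recognition as sr

def audio_question_to_sql(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)      # read the whole audio file
    text = recognizer.recognize_google(audio)  # speech-to-text via a web API
    return nl_to_sql(text)                     # reuse the NLQ sketch above
```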

Data Review Prioritization:

When clinical data reviewers, medical monitors, statisticians, and safety reviewers are reviewing data within the CDRP, an ML model can analyze the data and apply statistical analysis to work out the probability that the data is clean. The same model can also be used to detect anomalies within the data. Users should be presented with data ranked by the probability that it is clean, or with the data points that need their attention. Data reviewers can then prioritize their review activity based on the input from the ML models. This will be valuable for users and can significantly reduce the time taken for data review. Users can set the threshold for the data points they want to review first, based on the statistical analysis done by the ML model.
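A hedged sketch of that prioritization follows; the features, labels, and threshold are invented, and a real system would learn from the review outcomes of prior studies.

```python
# A minimal sketch of data review prioritization. Features, labels, and
# the threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [missing_fields, out_of_range_values]
X_hist = np.array([[0, 0], [0, 1], [3, 2], [1, 0], [4, 3], [0, 0]])
y_hist = np.array([1, 1, 0, 1, 0, 1])  # 1 = clean, 0 = needed queries

model = LogisticRegression().fit(X_hist, y_hist)

# Score new records by their probability of being clean.
X_new = np.array([[0, 0], [2, 1], [5, 4]])
p_clean = model.predict_proba(X_new)[:, 1]

threshold = 0.8  # user-set threshold for "needs review first"
for i in np.argsort(p_clean):  # least-likely-clean records first
    if p_clean[i] < threshold:
        print(f"Record {i}: P(clean) = {p_clean[i]:.2f} -> review first")
```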

 

Before starting the data review, each team has its own review plan: the data review team has a data review plan, the safety team has a safety review plan, the analysis team has a statistical analysis plan, and so on. These review plans are standard across studies, with few changes. An automated program can create review plans based on the metadata of a study. The review plan results in a list of tasks, which can be assigned to different user groups. Based on previously assigned tasks, the program can automatically assign and prioritize the tasks for each individual user. This, combined with the prioritized data review, will enable users to prioritize their work and complete both the data review and the tasks from the review plans.
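A minimal sketch of deriving and assigning such tasks from study metadata; the metadata fields, task templates, and reviewer names are all hypothetical:

```python
# A minimal sketch of generating and assigning review-plan tasks from study
# metadata. The metadata fields, task templates, and reviewer names are
# hypothetical; real plans come from standard templates with few changes.
from itertools import cycle

study_meta = {"domains": ["DM", "AE", "LB"], "has_safety_review": True}

# Derive a task list from the study metadata.
tasks = [f"Review {domain} domain data" for domain in study_meta["domains"]]
if study_meta["has_safety_review"]:
    tasks.append("Review serious adverse events")

# Assign tasks round-robin within a user group (a real system would also
# weigh each user's previously assigned tasks and current workload).
reviewers = cycle(["reviewer_a", "reviewer_b"])
for task in tasks:
    print(f"{next(reviewers)}: {task}")
```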

Study Data Summarization:

When data from a clinical trial is loaded into the CDRP, the users of the system want to understand the status of the study and what steps are needed to reach a milestone. For instance, a summary that NLP can help generate could be:

“All data up to visit 4 is available in the platform. It is 80% clean, and the remaining 20% is awaiting input from sites before data managers can review it. If sites answer all queries within the next two days, the Interim Report 1 milestone can be met in 10 days. Any further delays from sites will push back the milestone date.”

Here, something similar to Narrative Science’s Quill (an intelligent automation platform) can be used within a CDRP to perform text summarization based on the data and to provide recommendations on how to meet deadlines and milestones.
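Even without a platform like Quill, a simple template over study metrics can produce a comparable status summary. The metric names and values below are invented for illustration:

```python
# A template-based sketch of study status summarization. The metrics are
# invented; a real CDRP would compute them from the warehouse and could
# use an NLG platform such as Quill to produce more fluent text.
status = {
    "latest_visit": 4,
    "pct_clean": 80,
    "days_to_milestone": 10,
    "milestone": "Interim Report 1",
}

summary = (
    f"All data up to visit {status['latest_visit']} is in the platform. "
    f"It is {status['pct_clean']}% clean; the remaining "
    f"{100 - status['pct_clean']}% is awaiting input from sites. "
    f"If sites answer all queries promptly, the {status['milestone']} "
    f"milestone can be met in {status['days_to_milestone']} days."
)
print(summary)
```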

 

Success Criteria:

Setting initial expectations, and not promising a cure-all, is key to determining the success of an initiative that focuses on deploying AI to streamline the clinical data review and cleaning process.

How well the machine learning model is trained will determine how accurate the results are. After every phase, the released functionality should be evaluated, priorities should be reassigned to backlogged initiatives, and releases of the reprioritized functionality should be closely monitored. Adoption by business users will determine how successful these initiatives are. Other factors that can help in evaluating this include:

• Accuracy and speed of data review
• Effort needed by humans to reach a milestone
• Improved user experience
• Adoption of the solution by the end-user community

 

 

