- Drug giant Bristol Myers Squibb is quietly testing a handful of AI ideas to improve its research.
- In an interview with Insider, one of its executives outlined three promising AI programs.
- These programs are being used in clinical trials to make them more effective and efficient.
In recent years, there has been an explosion of AI-focused biotech startups that have raised hundreds of millions of dollars, such as Insitro, Recursion Pharmaceuticals, and Deep Genomics. Now big pharma is getting in on the action. Bristol Myers Squibb, the New Jersey-based drug giant, is quietly testing several artificial intelligence programs in the hope that they can make pharmaceutical research more effective and powerful.
Venkat Sethuraman, Bristol Myers’ senior vice president of biometrics and data sciences, told Insider about three AI and machine-learning programs being used in clinical trials at the $161 billion company. While AI and ML have become ubiquitous buzzwords in the pharmaceutical industry, these programs show how the technologies can impact drug research in the short term, said Sethuraman, a pharmaceutical-industry veteran who previously worked at GSK and Novartis.
If the ideas work, they could improve clinical trials by reducing the use of placebos, creating better ways to measure a drug’s effectiveness, and improving the experience of trial volunteers.
Reducing reliance on placebos
The long-standing gold standard for finding out if a drug works is a randomized, controlled trial. Volunteers receive either the experimental treatment, a placebo or the current standard of care.
That design creates a tension for participants: researchers need data from a control group, but patients don’t want to receive a placebo. Bristol Myers hopes to use artificial intelligence and vast amounts of historical data to reduce the use of placebos.
Over the past few years, the pharmaceutical industry has been toying with the idea of synthetic control arms, in which real-world data replaces the need for a placebo group. That historical data can come from a variety of sources, including electronic health records, insurance claims, and disease registries. If the data is of high quality, it may stand in for a placebo group. These synthetic control arms have now supported a few drug approvals, but regulators typically still want to see actual experimental data.
But real-world data is often of lower quality, Sethuraman said. It typically provides far less information than what comes from participants who are closely followed in a clinical study.
“It’s going in the right direction, but there’s still a ways to go,” Sethuraman said.
Bristol Myers is taking a middle ground in hopes of reducing, rather than replacing, placebo. The idea is to supplement a study’s control arm with historical data.
Sethuraman outlined a study of 1,000 patients as an example. Typically, 500 volunteers would receive the experimental drug, and 500 would receive placebo.
With Bristol Myers’ new approach, the same study could end up with the same amount of data while recruiting fewer volunteers to receive a placebo. The trial might aim to enroll just 250 people in the placebo arm, with the researchers hoping that a historical dataset for the disease could mimic the rest of the placebo group.
Once the study has recruited patients, the researchers would take an early look at how the real-world data compares with the placebo group. Ideally, the placebo and real-world groups would look alike on whatever the study measures. If the study is testing a cancer drug, for example, the focus would likely be on how quickly the disease worsens and the cancer grows.
If the two datasets were not similar, the researchers would scrap the historical data, fall back on the typical design, and expand the placebo group to 500 volunteers.
But if the real-world data agreed with the placebo group, the researchers could lean more heavily on the historical data, adapting the study to use 350 patients’ worth of real-world data and only 150 patients in the placebo group. Overall, such studies could run faster, recruit fewer patients, and lower the odds of any given patient receiving a placebo.
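Bristol Myers hasn’t published the statistical details behind this design. Purely as an illustration of the decision logic described above, the adaptive step might be sketched like the following, where the simulated data, the mean-based comparability check, and the tolerance are all assumptions, not the company’s actual method:

```python
import random
import statistics

random.seed(0)

def simulate_outcomes(n, mean, sd):
    """Simulate a continuous trial endpoint (e.g., months until progression)."""
    return [random.gauss(mean, sd) for _ in range(n)]

# Hypothetical interim snapshot: 250 enrolled placebo patients plus a
# real-world (historical) cohort of 350 patients with the same disease.
placebo = simulate_outcomes(250, mean=6.0, sd=2.0)
historical = simulate_outcomes(350, mean=6.1, sd=2.1)

def comparable(a, b, tolerance=0.5):
    """Crude comparability check: are the group means within `tolerance`?
    A real trial would use formal statistical borrowing methods instead."""
    return abs(statistics.mean(a) - statistics.mean(b)) < tolerance

if comparable(placebo, historical):
    # Borrow historical data: cap placebo enrollment at 150 volunteers
    # and fill out the control arm with 350 real-world patients.
    plan = {"placebo": 150, "historical": 350}
else:
    # Fall back to the conventional design: 500 placebo volunteers.
    plan = {"placebo": 500, "historical": 0}

print(plan)
```

Either branch yields 500 patients’ worth of control data; the difference is how many volunteers must actually receive a placebo.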
Sethuraman’s team has already applied this approach retrospectively, analyzing previous studies to see if it can work. He said the research was a promising proof of concept, with results to be published soon. Bristol Myers is now using the approach in a mid-stage clinical trial, with results expected next year, Sethuraman said.
Creating better ways to measure a drug’s effectiveness
Another challenge with clinical trials is deciding what to measure. Sethuraman’s vision is to use artificial intelligence to transform mountains of images, such as PET and CT scans, into valuable data. This data can then be used to create endpoints that can make clinical trials better and faster.
For example, his team presented results earlier this month at a cancer conference outlining a new measurement called a “g-score,” which uses AI to convert images of tumors into data. These data are then used to predict whether a tumor is likely to grow or shrink in a patient.
Predictive measurements like the g-score could support faster decisions about whether a drug helps patients, Sethuraman said. Cancer studies typically take years to collect survival data showing whether more volunteers died in the control arm than in the experimental arm. Trials could run faster if measurements such as the g-score were adopted.
Conversational AI enhances the patient experience
Sethuraman’s team is also running a pilot program to make participation in clinical trials a better experience.
In collaboration with a startup he declined to name, Bristol Myers rolled out a conversational AI feature in some clinical trials earlier this year. Study volunteers can ask the AI questions, sparing them the usual pain of trying to call a doctor’s office or get in touch with a researcher.
Bristol Myers hopes the AI will keep patients engaged and automatically flag chats in which a patient reports side effects.
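The article doesn’t describe how that flagging works. As a deliberately naive sketch only, a keyword-based flagger might look like the following, where the term list and the matching approach are assumptions for illustration; a production system would presumably rely on a trained language model rather than keywords:

```python
# Naive sketch: flag chat messages that may mention side effects.
# The term list below is a hypothetical example, not a clinical vocabulary.
SIDE_EFFECT_TERMS = {"nausea", "dizzy", "dizziness", "rash",
                     "headache", "fatigue", "vomiting", "swelling"}

def flags_side_effect(message: str) -> bool:
    """Return True if the message appears to mention a side effect."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    return not words.isdisjoint(SIDE_EFFECT_TERMS)

chats = [
    "When is my next study visit?",
    "I've had a headache and felt dizzy since the last dose.",
]
# Only messages mentioning a listed term are flagged for follow-up.
flagged = [m for m in chats if flags_side_effect(m)]
print(flagged)
```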
Sethuraman said he expected to have enough data next year to evaluate the program’s impact.