
Testing AI's Ability to Track Competitor Drug Pipelines

  • Jan 28
  • 1 min read

AI should excel at competitive intelligence tracking, so I decided to test how well current platforms perform on a real-world example: mapping the obesity drug development pipeline. My goal was to create a comprehensive table of all obesity drugs in Phase 2 and Phase 3 clinical trials in the US (excluding agents being developed exclusively in China or other Asian markets).


Methodology


First, I manually compiled a reference table using traditional research methods—reviewing ClinicalTrials.gov, company pipeline disclosures, and recent scientific literature and news. This served as my benchmark for accuracy.



Next, I tested the three leading AI platforms, ChatGPT, Gemini, and Claude, with an identical prompt: "Create a table of all drugs in active clinical development for obesity in the US."

I evaluated each platform's output using a simple scoring system:


  • Fully correct: +3 points

  • Partially correct: +1 point

  • Wrong or missed: −3 points
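
The rubric above can be sketched as a small scoring function (a minimal sketch; the statuses in the example are placeholders for illustration, not the actual evaluation data):

```python
# Scoring rubric from the post: +3 fully correct, +1 partially correct,
# -3 wrong or missed.
SCORES = {"full": 3, "partial": 1, "wrong_or_missed": -3}

def score_platform(results):
    """Sum rubric points over a list of per-drug statuses."""
    return sum(SCORES[status] for status in results)

# Example with hypothetical statuses for four drugs:
example = ["full", "partial", "wrong_or_missed", "full"]
print(score_platform(example))  # 3 + 1 - 3 + 3 = 4
```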


Key Findings


Claude performed best overall, though the differences between platforms were modest. All three AI tools significantly underperformed, missing the majority of drugs in the pipeline—particularly in Phase 2 development.

Beyond accuracy, Claude distinguished itself by delivering a well-formatted Excel spreadsheet with clinical efficacy data and detailed notes. ChatGPT and Gemini both provided simpler table formats.


Conclusion


While AI shows promise for competitive intelligence, current platforms still have significant limitations in comprehensiveness and accuracy for pipeline tracking. Traditional research methods remain essential, though AI tools can provide a useful starting point and better data presentation.


Copyright © 2024 Horizon BioInsights LLC. All Rights Reserved.
