How AI Will Accelerate Breakthroughs in Biotechnology with Benchling CEO Sajith Wickramasekara
The Digital Gap in the Wet Lab
Sajith Wickramasekara didn’t set out to build a software empire; he just wanted to finish his research. As a student at MIT, he found himself at the intersection of two very different worlds. In one, he was part of the burgeoning computer science revolution, where tools were sleek, collaborative, and fast. In the other—the world of molecular biology—he was stepping back in time.
"It was a jarring contrast," Sajith notes, recalling the days of physical paper notebooks and manual data entry. While his peers in software were leveraging Git and cloud computing, world-class scientists were documenting life-saving research in spiral-bound journals. This friction wasn't just a minor inconvenience; it was a systemic bottleneck. Benchling was born out of this frustration—a mission to provide the "operating system" for the biological revolution.
"We want to shorten the path from a scientist’s breakthrough idea to a life-saving product in the hands of a patient."
Sajith explains that by centralizing data, Benchling allows teams to stop fighting their tools and start fighting the diseases they are trying to cure.
The $2 Billion Gamble
To understand why Benchling is necessary, one must understand the sheer brutality of the drug development process. It is a funnel of epic proportions. It begins with thousands of candidates and ends, often a decade and several billion dollars later, with a single approved drug.
Sajith breaks down the current state of the industry as a race against "Eroom's Law"—the observation that drug discovery is becoming slower and more expensive over time, despite improvements in technology. The "state of the art" in many labs is still a fragmented mess of legacy software, leading to what he calls "data silos," where valuable insights go to die in unsearchable spreadsheets.
The R&D Efficiency Problem
The "Eroom's Law" visualization: Despite massive tech leaps, the cost to bring a drug to market continues to skyrocket.
Why Biology is Different
- Unpredictability: Software follows logic; biology follows evolution. It is messy and often non-deterministic.
- Data Volume: A single experiment can generate terabytes of sequencing data that must be contextualized.
- Collaboration: Modern drugs aren't made by one person, but by global teams across chemistry, biology, and data science.
Sajith laughs as he recalls the early meetings with venture capitalists. "People told us biology was too slow for software cycles. They didn't realize that the slower the process, the more desperate you are for efficiency." Today, as the industry shifts toward "Tech-Bio"—where biology is treated more like an engineering discipline—Benchling finds itself at the center of a fundamental shift.
But the real transformation isn't just digital—it's intelligent. As we move into the next chapter, Sajith reveals how the arrival of Large Language Models is finally cracking the code of biological complexity...
The Silicon Catalyst in the Wet Lab
Having navigated the turbulent waters of the modern biotech industry and the grueling "valley of death" in drug development, we arrive at the industry's most polarizing savior: Artificial Intelligence. It is easy to view AI as a shiny new veneer on a century-old process, but here the conversation shifts decisively. We aren't just talking about "robots doing science"; we're talking about a fundamental move from trial-and-error biology to a predictable engineering discipline.
The speaker notes that the hype cycle often obscures the actual utility. In the previous era, biotech was limited by the "human bottleneck"—the sheer number of PhDs required to manually pipette, observe, and document. AI’s role isn't necessarily to replace the PhD, but to provide them with a "GPS for biology." Instead of wandering through the vast expanse of chemical space, researchers are now using generative models to narrow down candidates before they ever touch a petri dish.
"It’s not about finding a needle in a haystack anymore; it’s about using AI to burn the hay so only the needle remains."
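The "burn the hay" idea is, in practice, in-silico triage: score a large candidate library with a predictive model and carry only a small shortlist to the bench. A minimal sketch, where the scoring function and candidate identifiers are hypothetical stand-ins for a real trained model and compound library, not anyone's actual pipeline:

```python
import heapq

def predicted_affinity(candidate: str) -> float:
    """Stand-in for a trained model's binding-affinity prediction.
    A toy deterministic hash keeps the sketch runnable."""
    return (sum(ord(c) for c in candidate) % 100) / 100.0

def triage(candidates: list[str], keep: int) -> list[str]:
    """Keep only the top-`keep` candidates by predicted score,
    so the wet lab assays a shortlist instead of the full library."""
    return heapq.nlargest(keep, candidates, key=predicted_affinity)

# Hypothetical 10,000-compound library; only ~0.2% reaches the bench.
library = [f"CMPD-{i:04d}" for i in range(10_000)]
shortlist = triage(library, keep=20)
```

The design point is the ordering of steps: the expensive physical assay sits behind a cheap computational filter, which is exactly the inversion of the traditional screen-everything approach.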
Benchling: The Operating System’s New Brain
The discussion takes a pragmatic turn when focusing on Benchling. If biotech has an "operating system," Benchling is often it. The introduction of AI into this platform isn't just about adding a chatbot; it’s about structured data. The speaker emphasizes that AI is only as good as the context it lives in. By integrating LLMs directly into the R&D workflow, Benchling allows scientists to ask complex questions of their own experimental history.
Feature: Scientific Assistance
Automating the tedious documentation that eats up 40% of a scientist's day. AI can now "read" a plate layout and suggest the next sequence design based on historical success.
Impact: Data Continuity
Bridging the gap between "wet lab" results and "dry lab" analysis. AI identifies patterns in protein folding that previously required separate specialized software suites.
Toward the "Self-Driving Lab"
Where is this heading? The speaker paints a picture of the "Self-Driving Lab." This isn't science fiction; it's the convergence of robotics, AI, and cloud-based data layers. In this future, the AI doesn't just suggest an experiment—it initiates it. It monitors the liquid handlers, analyzes the results in real-time, and iterates the hypothesis overnight.
The future isn't a scientist sitting at a bench with a pipette; it's a scientist sitting at a dashboard, directing a fleet of autonomous biological assays. We are moving from 'doing' science to 'orchestrating' science.
The real breakthrough, however, won't be the AI itself, but the interoperability. For the first time, the "Dry Lab" (computational) and "Wet Lab" (physical) are speaking the same language. This leads us directly to the next critical component of the stack: the data itself.
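The self-driving lab described above is, structurally, a closed propose-run-analyze-update loop under a fixed experiment budget. A minimal sketch in which every function is a hypothetical stand-in (the "assay" is a toy response curve, not a real instrument interface):

```python
def propose(model_state: float) -> float:
    """Model suggests the next condition to test (e.g., a temperature)."""
    return model_state + 1.0

def run_assay(condition: float) -> float:
    """Stand-in for dispatching a protocol to liquid handlers and reading
    a plate; here a toy response curve peaking at condition = 5."""
    return -(condition - 5.0) ** 2

def self_driving_loop(budget: int) -> tuple[float, float]:
    """Iterate: propose a condition, run the 'wet' step, let the 'dry'
    analysis update the working hypothesis, repeat until budget runs out."""
    model_state = 0.0
    best = (0.0, float("-inf"))
    for _ in range(budget):
        condition = propose(model_state)
        signal = run_assay(condition)      # "wet" step
        if signal > best[1]:
            best = (condition, signal)     # "dry" analysis updates belief
        model_state = condition
    return best

best_condition, best_signal = self_driving_loop(budget=8)
```

The scientist's role in this picture is setting the budget, the search space, and the objective; the loop itself runs overnight.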
Reality Check: What AI Can’t Do (Yet)
To close this segment, the speaker takes a sharp needle to the "AI Drug Discovery" bubble. There is a common misconception that AI will suddenly produce "FDA-ready" drugs at the push of a button. The reality is more sobering. While AI is incredible at *discovery* (finding the molecule), it still struggles with *development* (testing if that molecule is toxic in a complex human body).
The Myth
- ❌ AI will eliminate the need for clinical trials.
- ❌ AI can predict human toxicity with 100% accuracy from a digital model.
- ❌ Small "AI-first" biotechs will replace Big Pharma overnight.
The Reality
- ✅ AI accelerates the pre-clinical phase from years to months.
- ✅ Biology is still messy; "Wet" validation is required for every "Dry" prediction.
- ✅ Big Pharma is acquiring AI tools to fix their own R&D productivity crisis.
As we transition into the next chapter, it becomes clear that AI is an engine that requires a very specific fuel. Without high-quality, clean, and accessible data, these models are just expensive noise-makers. We turn next to the "Data Layer"—the unsung hero of the biotech revolution.
Beyond the Hype: The Unseen Plumbing of Modern Biotech
Having cleared the air on the persistent myths surrounding AI drug discovery, the conversation shifts to the actual "fuel" in the engine: data. In the biotech world, data isn't just a byproduct of experiments; it is the experiment. However, there’s a recurring frustration in the voice of our speaker—a sense that the industry has spent decades collecting data without building the proper buckets to hold it, or the pipes to move it.
"You can’t just sprinkle AI on a broken data strategy and expect a miracle," the speaker notes, with a hint of a laugh. The logic is clear: the bottleneck in pharma isn't a lack of brilliance, but a historical neglect of tooling. For years, software was treated as a secondary concern—a "nice to have" utility rather than a core scientific instrument. We are now seeing a fundamental pivot where the quality of a lab’s digital infrastructure is just as critical as the purity of its chemical reagents.
"We spent forty years refining the microscope. Now, we're realizing the most important lens we have is the one made of code."
— On the shift from hardware-centric to software-first research.
The Toolmaker's Revolution
Why does pharma struggle with tools? The speaker points to a cultural divide. Traditional software tools were built for "accounting" or "compliance," not for the messy, iterative process of scientific discovery. When AI enters the lab, it shouldn't just be "predicting" outcomes; it should be integrated into the research loop, helping scientists decide which experiment to run tomorrow based on what happened today.
This is the true impact of AI: it’s a force multiplier for research productivity. By automating the mundane data cleaning and providing real-time insights, we move from a world of "trial and error" to "design and verify." The speaker emphasizes that building a biotech company today isn't about hiring 100 chemists; it’s about building a hybrid organization where the engineers and scientists speak the same language.
The Foundation
Structured Data Lakes
Moving away from fragmented Excel sheets to unified, queryable repositories.
The Engine
Integrated Tooling
Software that lives where the science happens—from LIMS to predictive modeling.
The Output
Iterative Discovery
A feedback loop where AI models improve with every physical wet-lab experiment.
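The "Iterative Discovery" stage above, in which models improve with every physical experiment, is at its core online learning. A minimal sketch, assuming a toy one-feature linear model nudged by a stochastic-gradient step as each new assay measurement lands in the data lake (nothing here reflects any vendor's actual tooling):

```python
def sgd_update(w: float, b: float, x: float, y: float,
               lr: float = 0.01) -> tuple[float, float]:
    """One stochastic-gradient step on squared error for y ≈ w*x + b:
    each new wet-lab measurement (x, y) nudges the model immediately."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

# Hypothetical assay results roughly following y ≈ 2x; the model
# updates incrementally as measurements stream in from the lab.
w, b = 0.0, 0.0
measurements = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)] * 1000
for x, y in measurements:
    w, b = sgd_update(w, b, x, y)
# The slope settles near 2, tracking the underlying trend in the data.
```

The point of the sketch is the dataflow, not the model: because results arrive in a structured, queryable form, each one can update the model without a manual re-training campaign.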
Building the Hybrid Organization
As we look toward the practicalities of company building, the speaker highlights a hard truth: "Culture eats data for breakfast." You can have the best datasets in the world, but if your biologists don't trust your data scientists, the company will stall. This sets the stage for our next discussion on interdisciplinary collaboration—how to bridge the gap between "Move Fast and Break Things" and the rigorous, slow-paced reality of clinical safety.
Efficiency Gains: Traditional vs. AI-Augmented Research
Figure 1.0: While validation time remains constant due to regulatory needs, AI significantly compresses the upstream discovery phases.
Coming up next: How these data-driven foundations enable a new kind of interdisciplinary collaboration, and what tech can learn from the "slow and steady" world of biology.
The Great Convergence: Where Code Meets Codon
Building a biotech company, as we’ve discussed, is a feat of endurance. But once the foundation is laid, the real magic—and the real friction—happens in the hallway where the software engineers meet the molecular biologists. Transitioning from the structural challenges of a startup to the cultural challenges of interdisciplinary work is where most "tech-bio" dreams either crystallize or crumble.
The speaker notes a fascinating shift: we are moving away from the "service provider" model. In the old days, you’d have a biologist who had a problem and a "computer guy" who wrote a script to solve it. Today, the collaboration is much more symbiotic. It’s about building a common language. "You can't just throw data over the wall and expect a miracle," the speaker laughs. "You need the person writing the algorithm to understand the noise inherent in a wet-lab experiment, and you need the biologist to understand the constraints of the model."
This interdisciplinary "soup" is what allows for the rapid iteration cycles we're seeing today. When a machine learning engineer understands the physical properties of a protein, they don't just build a better model—they help design a better experiment. It turns the scientific method from a linear path into a feedback loop.
The Tech Mindset
- Iteration: Move fast, break things, patch later.
- Abstraction: Building layers to hide complexity.
- Scalability: If it works for one, it should work for a billion.
The Biotech Mindset
- Rigor: Reproducibility is the only currency.
- Complexity: Respecting the messy reality of living systems.
- Safety: High stakes; "bugs" can be life-threatening.
Learning from Each Other
The most profound realization of the segment is how much these two disparate worlds are beginning to mirror one another. Tech is learning that biology isn't just "dirty data"—it's a sophisticated system that requires a new kind of respect. Conversely, biotech is learning from tech how to treat biology as an engineering discipline rather than a purely observational science.
"The moment a biologist starts thinking like a programmer—looking for the logic gates in a cell—and a programmer starts thinking like a biologist—appreciating the beautiful chaos of evolution—that's when the breakthroughs happen."
— On the "Aha!" moment of Interdisciplinary Success
As we look toward the conclusion of this journey, it's clear that the future of medicine isn't just found in a petri dish or a server rack—it’s found in the bridge between them.
Final Thoughts: The Century of Biology
We began this exploration looking at how AI is fundamentally reshaping scientific research, and we've ended at the human element: the collaboration required to bring those discoveries to life.
The speaker’s parting note is one of cautious but profound optimism. We are entering an era where the tools of silicon and the building blocks of carbon are finally working in tandem. The bottlenecks of the past—data scarcity, trial-and-error experimentation, and siloed knowledge—are being dismantled. As the conversation closes, the message is clear: the next decade won't just be about "digital transformation," it will be about the biological revolution, powered by the very technologies we once thought were purely for the virtual world.
End of Feature