Clay Overturf has been advancing innovation as an Automation Engineer at Geno since 2021. A graduate of LeTourneau University with a B.S. in Mechanical Engineering and a minor in Mathematics, Clay combines technical expertise with a passion for sustainability.

Inside the lab, he develops custom automation solutions designed to enhance throughput, streamline workflows, and accelerate Geno’s cutting-edge technology. Clay’s commitment to efficiency and innovation plays a vital role in driving Geno’s mission to create sustainable solutions for a better future.

Transcript

All right, good afternoon, everyone. As James mentioned, my name is Clay Overturf. I’m an automation engineer at Geno, background in mechanical engineering. I came over to Geno about three and a half years ago, and I’ve been learning the science, which has been really exciting.

In this talk, I’m going to go over a few different things. I’m going to start with a high-level overview of Geno and what we do as a company. Then I’m going to talk about our R&D workflow, and how we’re using the Dynamic Devices Lynx to accelerate our strain-to-data cycle.

All right, so at Genomatica, really our goal is to address the climate crisis by developing these scalable, kind of drop-in solutions that drive the greatest reduction in carbon intensity. What does that mean? We’re basically finding ways to make existing products in cleaner ways.

We accomplish this by using renewable resources like plants and sugars in traceable and transparent biomanufacturing processes. That way we can really ensure the products we use have been made sustainably.

So, we have a few different product platforms at Geno. The core technology is really similar across these. We’re converting renewable carbon sources like plants and sugars, using engineered microorganisms to produce target molecules that are used across a range of industries: cosmetics, textiles, automotive, pretty much anything you touch.

And so, by engineering these microorganisms and scaling manufacturing processes, we can make more and more renewable products. This makes sustainability a part of everyday life, and that’s our goal.

What does this look like in action? Starting by developing these microorganisms, we have our Geno Biomanufacturing Technology. This allows us to convert these alternative and renewable feedstocks into widely used molecules, and then we drop those into the products we touch and use on a daily basis.

So that’s Geno. Going a little deeper, I want to talk about how we actually engineer scalable cell factories.

We have a very specific blueprint that we use. We call that our data cycle: design, assemble, test, and analyze. So, we start with our modeling and design. We’re using systems bioengineering to guide strain engineering and different experimental approaches that we can use. This informs our assemble phase. We’re using synthetic biology, engineering enzymes, integrating different genotypes that are going to improve the performance of our cell.

Then we get to the test phase. This is the fun part. This is my job; this is where I’m working. We use robotics and instrumentation, specifically the Lynx, to deploy these complex experiments that allow us to test the robustness and performance of our strain. This produces a bunch of data. We do this multidimensional analysis, feed that back into our modeling and design, and like any iterative design process, each step informs the next, and you continuously improve product performance.

When engineering cell factories, there are a lot of variables at play. On the cell engineering side, we have a pretty good understanding of what we’re doing and how it works. We have this model right here. Our cell is taking up these feedstocks, our renewable carbon sources.

Different precursors are at play, metabolic pathways that we can engineer. Then we start producing more of our desired product, product X, product Y. A lot of variables there, they all impact cell performance. On the other side are bioprocess conditions. These are things like oxygen levels, pH, temperature, the environment that the cell is living in. It’s critical that we can control these, because all these variables together, they impact our downstream process design, process cost, and again, sustainability, and that’s our goal.

How can we use synthetic biology and an information-rich phenotyping platform to advance cell engineering? In screening, we need to generate a lot of diversity, and there are two different approaches that we use. The first is forward engineering: we use rational design and assembly of genetic parts to maximize our desired product.

The other one is reverse engineering. This is where we generate a bunch of unknown diversity, screen it under specific conditions, select candidates, and ID the beneficial genotypes that exist in that strain. So I want to take us through an example of what that looks like. This is what we call our small-scale technology. It’s a plate-based fermentation workflow.

So in this case, we’re using this reverse engineering approach to improve substrate utilization of one of our strains. So we’ve generated a bunch of unknown diversity, we’re screening it under very specific process-relevant conditions, and at the end of this small-scale fermentation, we have a bunch of production data.

Specifically, we’re looking at qS, which is the rate our strain is utilizing substrate, and qP, which is the rate our strain is producing our target molecule. There’s a range of candidate performance here, but our particular area of interest is this top-left area: high qS, high qP. Our strain is happy in the environment it’s living in. It’s utilizing the substrate at hand, and it’s producing a lot of our target molecule. So we can select these candidates, identify any beneficial genotype that exists, and integrate that back into our parent strain.
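
Purely as an illustration, here is a minimal Python sketch of that selection step, assuming the production results have been exported to a table; the file name, column names, and quantile thresholds are hypothetical stand-ins, not Geno’s actual pipeline.

```python
# A minimal sketch of candidate selection on qS (substrate uptake rate) and
# qP (production rate), assuming hypothetical columns "candidate", "qS", "qP".
import pandas as pd

results = pd.read_csv("small_scale_fermentation_results.csv")

# Hypothetical rule: keep candidates in the high-qS, high-qP region,
# defined here as the top decile of both rates.
qs_cutoff = results["qS"].quantile(0.90)
qp_cutoff = results["qP"].quantile(0.90)

hits = results[(results["qS"] >= qs_cutoff) & (results["qP"] >= qp_cutoff)]
print(hits.sort_values(["qP", "qS"], ascending=False).head(10))
```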

So, we talked about process conditions, I want to go a little bit deeper into that. Process conditions, there’s a bunch of different factors at play, but this could be like the substrate you’re using. Maybe media components, how high or low your pH may be. It could be the agitation rate or the temperature your cell is exposed to during fermentation. All these things matter, and they impact strain production and eventually the phenotype.

So let’s look at this figure on the right. We have a few different strains here, and there’s one thing we know. Different strains and different strain genotypes, they can produce different growth phenotypes. But we also know that a single strain and a single strain genotype can produce different growth phenotypes under different process conditions.

So genotype matters, process conditions matter, how do we develop a workflow and screen for all of these different conditions?

We’ve been able to do this in low throughput, kind of on the bench, very costly, very time-intensive. But with our robotic-integrated Lynx, we can do this in high throughput. So that’s what I want to talk about, because that’s our goal.

So, what does a high-throughput phenotyping platform look like? Well, the Dynamic Devices Lynx. We have an LM900 with a 96VVP head, and that is at the core of this platform. The VVP technology allows us to control each of the 96 channels independently. That makes it 12 times faster than an 8-channel pipettor, which is the next best technology, and 96 times faster than a manual pipettor. So now we’re getting places.
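
Those multipliers follow from how many pipetting cycles it takes to address all 96 wells; a quick illustrative sketch, with an assumed, made-up cycle time:

```python
# Back-of-the-envelope sketch of the throughput claim, assuming each
# aspirate/dispense cycle takes roughly the same time regardless of head.
CYCLE_SECONDS = 5   # assumed time per pipetting cycle (illustrative only)
WELLS = 96

for channels, label in [(1, "manual single channel"),
                        (8, "8-channel pipettor"),
                        (96, "96-channel VVP head")]:
    cycles = WELLS // channels          # cycles needed to address every well
    print(f"{label}: {cycles} cycles = {cycles * CYCLE_SECONDS} s")
```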

We’ve integrated this with controlled plate storage and an incubator shaker for fermentation. We’ve got a plate reader for time-course sampling. All of this is scheduled together using Genera, which is our scheduling platform.

I want to shout out Method Manager: most of these instruments do have drivers, so you could actually control this with Method Manager.

And then we’ve got a SCARA robot arm. This allows us to move labware throughout the platform.

So, things can get really complex really quickly. We’ve got a bunch of different conditions per plate, a bunch of different plates per scientist, several scientists running on the same platform, and we’re running 24 hours a day, up to seven days a week.

So, on top of the physical infrastructure, we built some custom barcoding and data handling solutions using Python. This connects to our LIMS and allows us to push and pull data and experimental parameters to and from the platform.

This looks like sending the Lynx liquid handling volumes, updating the Lynx workspace with specific labware locations, or pointing it at the target reservoir holding our desired substrate.
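
As a rough sketch of what that kind of glue code could look like, here is a minimal Python example; the LIMS endpoint, barcode, JSON schema, and CSV hand-off format are all hypothetical placeholders, since the talk doesn’t describe the actual integration.

```python
# A minimal sketch of pulling experimental parameters from a LIMS and writing
# a generic worklist CSV for the liquid handler. Endpoint and schema are
# hypothetical; the real integration details are not shown in the talk.
import csv
import requests

LIMS_URL = "https://lims.example.com/api/experiments"   # hypothetical endpoint

def pull_experiment(barcode: str) -> dict:
    """Pull the experimental parameters registered for a plate barcode."""
    resp = requests.get(f"{LIMS_URL}/{barcode}", timeout=30)
    resp.raise_for_status()
    return resp.json()

def write_worklist(barcode: str, path: str) -> None:
    """Write per-well transfer volumes and source reservoirs to a CSV worklist."""
    params = pull_experiment(barcode)
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["well", "source_reservoir", "volume_ul"])
        for well in params["wells"]:                     # hypothetical schema
            writer.writerow([well["position"], well["reservoir"], well["volume_ul"]])

write_worklist("PLATE00123", "lynx_worklist.csv")        # hypothetical barcode
```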

Let’s look at some pictures from the lab.

So, it’s honestly a pretty simple platform. We have our Thermo Fisher Cytomat for fermentation, an Agilent Flex robot arm moving labware throughout, our BioTek plate reader for time-course OD sampling, and, of course, our Dynamic Devices Lynx.

There are a couple other peripherals on here that kind of round it out. We’ve got some Tek-Matic labware storage and a ClickBio refill station. These allow us to run unattended, and most of our runtime is overnight.

Okay. We have this platform, how can we use the Lynx to run these advanced multivariate experiments?

When screening process conditions, there are a lot of different variables and a lot of different variable values that need to be tested. So we can use the Lynx to generate a bunch of different variables and variable values, and control those values in each well of a 96-well plate. I wanted to demonstrate that in the bottom left using food dye as an example. In the first picture there, a yellow food dye, maybe that’s our substrate, and we want to test a range of substrate concentrations. Then we move to our next condition; this could be a media component, an acid, or a base, and we test a bunch of conditions there. Then we go to our third: in red, this could be some sort of selective agent, with a range of 96 conditions. So we can scale the complexity of experiment setup using our Lynx in a single liquid handling event, and all of that can be completed in less than 60 seconds. That’s huge.
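
For a sense of what a layout like that could look like in code, here is a minimal Python sketch that builds a per-well condition map with three gradients; the reagent names, volumes, and ranges are illustrative stand-ins, not the actual assay design.

```python
# A minimal sketch of a multivariate 96-well layout, in the spirit of the
# food-dye example: three hypothetical reagents, each spanning a gradient.
import numpy as np
import pandas as pd

rows, cols = list("ABCDEFGH"), range(1, 13)
wells = [f"{r}{c}" for r in rows for c in cols]              # 96 wells, A1..H12

layout = pd.DataFrame({
    "well": wells,
    # "substrate": 12-step gradient across the columns, same for every row
    "substrate_ul": np.tile(np.linspace(5, 60, 12), 8),
    # "media component / acid-base": one value per row, stepping down the plate
    "media_ul": np.repeat(np.linspace(2, 16, 8), 12),
    # "selective agent": a different value in every one of the 96 wells
    "selective_agent_ul": np.linspace(0, 19, 96),
})
layout.to_csv("multivariate_plate_layout.csv", index=False)
print(layout.head())
```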

So we take that plate, that complex experiment setup, and we pair it with our high-throughput phenotyping platform. Again, we’re using the Lynx to sample our culture at some defined time interval during fermentation, and we generate a bunch of phenotyping data. This could be biomass or cell growth; those are kind of the critical markers that tell us how happy our cell is under given process conditions.
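
As an illustrative sketch of turning those time-course readings into a phenotype metric, here is a minimal Python example; the file and column names are hypothetical, and the log-linear fit is just one common way to get an apparent growth rate, not necessarily the analysis used on this platform.

```python
# A minimal sketch: compute an apparent specific growth rate per well from
# plate-reader OD readings, assuming hypothetical columns "well", "time_h", "od600".
import numpy as np
import pandas as pd

od = pd.read_csv("timecourse_od.csv")

def growth_rate(group: pd.DataFrame) -> float:
    """Slope of ln(OD) vs time, i.e. an apparent specific growth rate (1/h)."""
    g = group[group["od600"] > 0]
    slope, _ = np.polyfit(g["time_h"], np.log(g["od600"]), 1)
    return slope

rates = od.groupby("well").apply(growth_rate)
print(rates.sort_values(ascending=False).head())
```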

We can then use that data, paired with the production results at the end of this small-scale fermentation, and begin to get a good understanding of how a strain genotype is being impacted across these process conditions, and the associated phenotypes that may exist.

So ultimately, we are using this platform. We’re mimicking batch fermentation in a 96-well plate.

I want to go through a couple examples here.

So how can we use this multivariate assay to inform scaled bioprocesses? That’s our goal. Right now, we’re working in the milliliter range, but we want to go up to, eventually, 100,000 liters at commercial scale.

So, a critical question we have when scaling up bioprocesses: do strain improvements found in screening, which is typically run at a single, kind of homogeneous condition, hold true under the gradients seen in scaled fermentation?

When growing cells in larger volumes, larger tanks, the cell is exposed to kind of heterogeneous conditions due to a range of process factors. This could be insufficient mixing in the tank, or maybe how a substrate is being fed into that tank.

What we know is these things matter; they impact cell performance.

So again, when screening at smaller scale under homogeneous conditions, it’s hard to mimic what may happen in scaled fermentation.

So we can use these multivariate assays to build a picture of strain performance across the range of conditions we expect to see at scale.

And that’s kind of what I want to show here on the right. So this is an example. We have two strains: strain one may be a parent strain, and strain two could have some engineered modification. We expect to see some sort of substrate gradient at scale. So we can design this experiment with two different substrates and a range of substrate concentrations across our 96-well plate. We can run this assay, get the results at the end of this production fermentation, and say that, for each of these conditions we expect to see, strain two is statistically better than strain one. This validates our step-change improvement prior to scale-up, and we can perform multiple rounds of engineering really quickly. And this can shorten our data cycle by several weeks. That’s huge.
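
As a rough sketch of what that per-condition comparison could look like, here is a minimal Python example using Welch’s t-test; the table layout, column names, strain labels, and significance threshold are assumptions for illustration, not the actual analysis described in the talk.

```python
# A minimal sketch of comparing two strains condition by condition, assuming
# replicate production values in hypothetical columns "condition", "strain", "titer".
import pandas as pd
from scipy import stats

df = pd.read_csv("production_results.csv")

for condition, grp in df.groupby("condition"):
    s1 = grp.loc[grp["strain"] == "strain_1", "titer"]
    s2 = grp.loc[grp["strain"] == "strain_2", "titer"]
    # Welch's t-test: is strain 2 different from strain 1 under this condition?
    t, p = stats.ttest_ind(s2, s1, equal_var=False)
    better = (s2.mean() > s1.mean()) and (p < 0.05)
    print(f"{condition}: p={p:.3g}, strain 2 better: {better}")
```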

But we always want to do more, so this is batch fermentation. What if we used our robotic-connected Lynx to expand our small screen capabilities to mimic fed-batch fermentation? Because that’s also really cool.

So, in this example, let’s start with the figure on the left. We wanted to test our strains under a range of feed rates. So, we have a 96-well plate with a range of initial substrate concentrations, and at each of those substrate concentrations, we have four different feed rates. The goal here is: at what feed rate can we match the cell’s substrate uptake? So the metric we’re using is measuring the residual substrate at the end of our fermentation and comparing that to our initial substrate concentration.

So looking here, we run this assay and we get the results. We can say, OK, feed rate 2 looks really good. Under all of the initial substrate concentrations, it kind of has a one-to-one ratio of residual to initial substrate.
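
A minimal sketch of that metric in Python, assuming the screen results live in a table with hypothetical columns for feed rate, initial concentration, and residual concentration:

```python
# A minimal sketch of the feed-rate metric: for each feed rate, how close is
# residual substrate to the initial concentration across all starting conditions?
import pandas as pd

df = pd.read_csv("fed_batch_screen.csv")   # hypothetical columns: feed_rate, initial_g_l, residual_g_l

df["ratio"] = df["residual_g_l"] / df["initial_g_l"]
# Score each feed rate by its average deviation from a 1:1 ratio
score = (df["ratio"] - 1).abs().groupby(df["feed_rate"]).mean()
print(score.sort_values())                 # the smallest score best matches uptake
```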

We take this phenotyping data and pair it with our results at the end of our small-scale fermentation. We can do this multidimensional analysis of the output, and we can see the impact on our product titer, rate, and yield. We can then take this data, feed it back into the modeling and design phase of our data cycle, as I talked about earlier, and really begin guiding the next round of strain engineering.
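
For reference, titer, rate, and yield can be derived from end-of-run measurements along these lines; this is a generic sketch following the usual bioprocess conventions, with hypothetical column names, not Geno’s actual analysis code.

```python
# A minimal sketch of computing titer, rate, and yield per well from
# hypothetical end-of-run columns: product_g_l, ferm_time_h, substrate_consumed_g_l.
import pandas as pd

df = pd.read_csv("end_of_run_results.csv")

df["titer_g_l"] = df["product_g_l"]                                  # final product concentration
df["rate_g_l_h"] = df["product_g_l"] / df["ferm_time_h"]             # volumetric productivity
df["yield_g_g"] = df["product_g_l"] / df["substrate_consumed_g_l"]   # product per substrate consumed
print(df[["well", "titer_g_l", "rate_g_l_h", "yield_g_g"]].head())
```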

Two big takeaways I want to leave you with: the first is that we can use our robotic-connected Lynx to simplify the complexity of running fed-batch fermentations.

And the second one is we can now reveal the complexities we see in scale-up in high-throughput, small-scale screening.

And none of that would be possible without the power of the Dynamic Devices Lynx, specifically the VVP technology.

As always, it takes a lot of people to make a project like this happen. I have the honor of presenting, but many different people touched and impacted this platform. We have a couple contributors in this room: we’ve got Alex and Natasha back there, as well as Jungik. So a huge thank you to the entire team. It took a lot of work and a lot of effort to make something like this successful.

And of course, the Dynamic Devices team, as well, were huge contributors. So, thank you.