As a Senior Automation/Full-stack Software Engineer at Bristol Myers Squibb (BMS) in the SD Biotherapeutics division, Benjamin Lee spearheaded the automation of protein production workflows using Lynx, improving efficiency, reducing manual work, and speeding up timelines. To manage the growing volume of data from these workflows, Benjamin Lee developed a range of custom software solutions, including a high-throughput (HTP) mass spectrometry analysis tool. This tool streamlined the mass spec analysis process, cutting analysis time by over 80%, and enhancing both the speed and accuracy of data processing. By leveraging a combination of software and machine automation, Benjamin Lee played a key role in increasing throughput and automating complex workflows, driving significant improvements in research efficiency at BMS.
Transcript
Hello everyone, I’m Ben, it’s nice to virtually meet you. Bet you’re not tired of that phrase yet. Anyways, I hope everyone’s enjoying sunny San Diego. I would first like to thank Dynamic Devices for giving me the opportunity to present to you all. And I would like to thank the audience for being with me in my virtual presentation. I know you’re reliving flashbacks that we want to forget. But anyways, I’ll be giving a presentation today on how the Lynx enables us to do scalable protein expression and high-throughput mass spec characterization.
A little about myself, my name is Ben. You already know that. I work at Bristol Myers Squibb as a senior automation slash full-stack engineer. I’ve been with BMS for around three years. I am part of the protein sciences organization in San Diego, part of the larger Discovery Biotherapeutics department. I am in charge of developing high-throughput workflows using robotic liquid handlers like the Lynx and also creating software applications to support the scientists and speed up their work. When I joined, all the processes were very manual, and over the course of my tenure here, we’ve been able to automate most of our processes. Without the Lynx, that would have been almost impossible.
Now, onto the fun stuff. So as you can tell by the border, I really like food. And that’s why I kind of got into automation in the first place. So my motto is basically, I can work smarter. I don’t really want to work harder, but if I work smarter, what I’m going to do with the extra time, right? I’m going to eat more. So that’s why the motto is work smarter, snack harder.
I’ll start off by giving an internal process overview. At BMS, we’re pretty focused on leveraging AI and ML to help us make informed decisions. We go through a cycle called inform, design, produce, test, and back to inform. We use historical data to help inform our engineers, who then design the proteins, which then go into the production cycle shown right here. This production cycle goes all the way up to characterization, to make sure we’re actually producing what we think we’re producing. Then it goes into test, which basically produces the results that are needed to train the ML models, or to inform the scientists whether they’re making the correct decisions. So as you can see, the heart of the whole process is centered around ML and the scientists. And you can also see that production is a key component in delivering the material needed to do the tests and assays. Everything that you see highlighted in purple, we’ll be going over in more depth today. And in order to feed this data-hungry beast known as ML or AI, automation is the key, because automation not only saves time, but is also less error-prone.
Traditionally, liquid handlers are very specialized. They either handle large volumes per sample, for example four mils or 25 mils, or a large number of samples, like 96 or 384. There’s not much overlap between the two, but that’s where the Lynx excels, and I’ll be going over that in the upcoming slides.
Our stable pool production process is the best example of the versatility of the Lynx. All the steps that are listed in purple, I’ll be doing a deep dive on. Generally, our production process starts off with a chemical transfection, where we spike DNA with the transfection reagent into a host line. Different from transient, we introduce a selection condition after the transfection to help select out low producers, and this requires a media exchange. Over the course of expression, depending on the cell line, we’ll add booster and feed. But after recovery, the DNA is stably integrated into the cells, so they’ll produce endless material until they’re terminated. So, in order to do rapid production, we scale these cells up into different formats. For example, 96 to 24, so volume-wise, one mil to four mils, and 24 to deep well six or C50s, which would be four mils into 25 mils or more. The cells hit critical density around three to four days after multiplying, so we do a supernatant harvest and analysis twice a week.
The harvested supernatant then goes through affinity purification, followed by whatever sample modification the requester wants. For example, after affinity purification, the buffer might not be compatible with the assay, so we have to do a buffer exchange into a buffer that is compatible. After all that is done, we have to verify that we produced what we think we’re producing, and that is the characterization step. Because these are stable pools, we can basically repeat steps six and seven on every harvest, and harvests are done twice a week. And since these are stable cell pools, they can be frozen down and resuspended any time later.
Here’s our transfection process on our Lynx LM1800, which I think is the biggest chassis that was offered at the time. We have a dual-arm setup: one standard arm, the SV, and one variable multi-volume arm, the VVP.
This method can handle many different plate formats. We start with the 96, and it goes all the way up to a deep well six or a centrifuge 50. And it can produce 16 plates in a single run.
In this video, you can see that both arms are operating at the same time. This is actually pretty important to our transfection process, since there is an incubation time that is needed. So while one arm is focused on spiking in the cells at exactly the incubation time, the other is focused on preparing the DNA plates, and the start offsets are aligned such that the arms will never collide with each other.
We mainly use this process for our 24 deep well transfections, which contain about four mils of material, and per run that’s around 384 samples. What does that mean? Let’s say we have eight hours in a day, and each run, instead of 40 minutes, let’s round up to an hour. So in one day, we can produce around 3,000 unique molecules, at 1.2 mgs for multi-specifics and three mgs for mAbs after purification. If the user requires more breadth and doesn’t need that much material, say for a screening campaign, we can swap into a deep well 96 format, and we can produce up to 12,000 unique molecules per day. And if the user wants more material and doesn’t need that much breadth, we can easily swap into a centrifuge 50 or a deep well six format, where we’re able to produce a lot more material, in this case five mgs per well for a multi-specific and 15 mgs for a mAb. This workflow is shared between both our transient and stable processes.
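As a rough illustration of that back-of-the-envelope math (not a validated throughput figure), here is a minimal sketch that just multiplies out the plate counts and run times quoted above; the format names and constants are the examples from this talk.

```python
# Rough throughput estimate for a Lynx transfection run, using the numbers
# from the talk (illustrative only, not validated production figures).

PLATES_PER_RUN = 16     # plates produced in a single Lynx run
HOURS_PER_DAY = 8       # working hours assumed in the example
RUN_TIME_HOURS = 1      # ~40 min per run, rounded up to an hour

FORMATS = {
    # plate format: wells per plate
    "24 deep well": 24,
    "96 deep well": 96,
}

for name, wells in FORMATS.items():
    samples_per_run = PLATES_PER_RUN * wells
    runs_per_day = HOURS_PER_DAY // RUN_TIME_HOURS
    samples_per_day = samples_per_run * runs_per_day
    print(f"{name}: {samples_per_run} samples/run, ~{samples_per_day} unique molecules/day")

# 24 deep well: 384 samples/run, ~3072/day  (the "~3,000" in the talk)
# 96 deep well: 1536 samples/run, ~12288/day (the "~12,000" in the talk)
```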
I think the cell passaging and scale-up methods that we use to maintain the stable cell pools during production highlight the key reasons why we chose the Lynx. Stable cells tend to double at different rates. This makes using a stamping head difficult, because it can’t individually tune each well. And in the instance of a 24 well plate where each well holds four mils, using a single pipette is time-consuming, because one pipette has to return to the same well two to three times in order to normalize that well.
The VVP head, or variable volume pipette head, offered by Dynamic Devices on the Lynx, solves this. The VVP head has 96 individually tunable channels. So let’s take this example scenario: in a 24 deep well plate, four tips can fit into one well, and we’re able to change the amount of liquid inside each channel. So in this case right here, what you see is variable volume, so the low aspirations are not an error.
This is just a visual representation of some wells needing 1.4 mils while others need 3.8.
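To make the idea of 96 individually tunable channels concrete, here is a small, hypothetical sketch of how per-well volumes could be split across the four tips that sit over each well of a 24 deep well plate. The function and values are illustrative only, not the actual Lynx method editor or API.

```python
# Hypothetical illustration: split per-well volumes across the 4 VVP channels
# that address each well of a 24 deep well plate. Not the actual Lynx API.

CHANNELS_PER_WELL = 4  # a 96-channel head over a 24-well plate => 4 tips/well

# Per-well volumes in mL, e.g. some wells need 1.4 mL, others 3.8 mL.
well_volumes_ml = {"A1": 1.4, "A2": 3.8, "A3": 2.2}  # one entry per used well

def channel_volumes(well_volumes):
    """Return a per-channel volume plan, dividing each well's volume evenly
    across the tips that sit over it. Channels over unused wells get nothing."""
    plan = {}
    for well, total in well_volumes.items():
        for tip in range(CHANNELS_PER_WELL):
            plan[(well, tip)] = total / CHANNELS_PER_WELL
    return plan

print(channel_volumes(well_volumes_ml))
# {('A1', 0): 0.35, ('A1', 1): 0.35, ..., ('A2', 0): 0.95, ...}
```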
One of the other cool features that opens up, since each channel can be individually turned on and off, is the fact that we can pool liquid in bulk into a single well. In the example shown over here, I am filling 50 mL centrifuge tubes, or C-50 tubes, with different volumes, and I’m able to fill up a rack of six 50 mL tubes with up to 20 mils of liquid in a go, or a single tube with up to 120 mils of liquid.
Now, for our scale-up slash normalization process, think of this as a normalization of the cell density. Instead of tossing the cells that need to be removed to achieve an optimal cell density, we can choose to dilute the entire pool into a larger volume, say four mils into 25 mils. We can use the VVP head to turn off the channels that are not located over a sample well, so those channels simply won’t take any sample, then aspirate the samples that are chosen and dispense them into their new plate format. This new plate has already been pre-filled with different volumes to accommodate the different cell densities of the samples.
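The dilution math behind that pre-fill step is simple C1V1 = C2V2 bookkeeping. Here is a minimal, hypothetical sketch; the target density and volumes are placeholder values, not our actual process parameters.

```python
# Hypothetical sketch of the scale-up/normalization dilution: pre-fill each
# destination well so that, after the full sample is transferred, every well
# lands at the same target cell density. Values are illustrative only.

SAMPLE_VOLUME_ML = 4.0    # full 24 deep well sample carried over
TARGET_DENSITY = 0.5e6    # placeholder target viable cells/mL after dilution

def prefill_volume_ml(measured_density, sample_ml=SAMPLE_VOLUME_ML,
                      target=TARGET_DENSITY):
    """Diluent to pre-fill the destination well: C1*V1 = C2*(V1 + Vd)."""
    total_ml = measured_density * sample_ml / target
    return max(total_ml - sample_ml, 0.0)

# e.g. a dense pool at 3.0e6 cells/mL needs ~20 mL of pre-fill to hit the
# target density in a ~25 mL format:
print(round(prefill_volume_ml(3.0e6), 1))   # -> 20.0
```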
And you can see that the throughput changes depending on whether or not the samples are going into a different plate format, because in the same well format the tips can already handle the full transfer, while in a different plate format the tips need to go back and refill the wells with bulk amounts of liquid.
So, after the samples are harvested, they go through an affinity purification process. However, sometimes we run into the issue where the elution buffer is not compatible with the assay, so each sample has to be buffer exchanged. To give a little context, we use a filter plate and a centrifuge to do the buffer exchange, where the target buffer is added to the plate after each centrifuge cycle. One cool thing about the VVP head is that it can use pressure to determine the height of the liquid in the well. This is actually pretty critical for our buffer exchange process, because all the samples have different concentrations, so they’ll concentrate at different rates. However, to keep the spin cycles consistent at the same time, we have to refill them back to the same volume. Since we know the liquid height of each well using the VVP liquid level detect, we’re able to use the height of the liquid to back-calculate the volume in each well, and then use that volume to refill the well back to the target volume.
In this picture right here, we started with wells of multiple different volumes, and after the process they all come back to a uniform volume, or, technically, go up to a uniform volume. This method takes 15 minutes and can handle up to eight plates.
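To show the general shape of that back-calculation, here is a minimal sketch that assumes a simple cylindrical well; the real filter-plate geometry and the Lynx liquid-level readout are more involved, so treat the constants as placeholders.

```python
# Hypothetical sketch of the buffer-exchange refill math: convert a liquid-
# level reading (height) into a residual volume, then compute how much target
# buffer to add to bring the well back to the spin volume.
# Assumes a simple cylindrical well; real filter-plate geometry differs.

import math

WELL_DIAMETER_MM = 8.0      # illustrative well diameter
TARGET_VOLUME_UL = 500.0    # volume every well is refilled to before a spin

def volume_from_height_ul(height_mm, diameter_mm=WELL_DIAMETER_MM):
    radius_mm = diameter_mm / 2
    return math.pi * radius_mm**2 * height_mm  # 1 mm^3 == 1 uL

def refill_volume_ul(height_mm):
    return max(TARGET_VOLUME_UL - volume_from_height_ul(height_mm), 0.0)

# A well that has concentrated down to ~4 mm of liquid gets topped back up:
print(round(refill_volume_ul(4.0)))   # ~299 uL of target buffer to add
```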
Now we get to the characterization portion. Although we use a multitude of tests to make sure that we are making what we think we’re making, we use the mass spec as our universal truth. This is just a super high-level breakdown of an intact mass analysis of a sample plate, which is used to confirm the identity of the sample in the well, as well as to classify its impurities. Using the Lynx, we’re able to prepare the plates at 12 minutes per plate. That’s already a huge time savings compared to the half hour it takes someone to do it manually. If you want any additional information, we have a paper out called “End-to-End Automated Intact Protein Mass Spectrometry for High-Throughput Screening and Characterization of Bispecific and Multispecific Antibodies.” Ben Niu is the primary author, because he was the one heading the mass spec portion.
The sample preparation process is pretty simple. It involves normalizing the sample to the target dilution of 0.1 mgs per mil, and that’s where the VVP head shines. The entire plate’s normalization is done in one motion, instead of a column at a time or maybe a couple of samples at a time. After normalization, a deglycosylation enzyme is added along with Tris-HCl to prepare the sample for injection, and this is all done on our Lynx LM1200.
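Here is a small, hypothetical sketch of the per-well math behind that normalization and reagent addition; the final volume, enzyme volume, and Tris-HCl volume are placeholders, not the actual method values.

```python
# Hypothetical sketch of per-well prep for intact mass analysis: dilute each
# sample to 0.1 mg/mL, then add the deglycosylation enzyme and Tris-HCl.
# Volumes and concentrations are illustrative placeholders.

TARGET_CONC = 0.1       # mg/mL
FINAL_VOLUME_UL = 100   # placeholder normalized volume per well
ENZYME_UL = 2           # placeholder deglycosylation enzyme volume per well
TRIS_HCL_UL = 10        # placeholder Tris-HCl volume per well

def prep_volumes(sample_conc_mg_ml):
    """Return (sample_uL, diluent_uL) to hit TARGET_CONC in FINAL_VOLUME_UL."""
    sample_ul = TARGET_CONC * FINAL_VOLUME_UL / sample_conc_mg_ml
    return round(sample_ul, 1), round(FINAL_VOLUME_UL - sample_ul, 1)

for conc in (0.8, 1.5, 3.2):                  # measured mg/mL per well
    s, d = prep_volumes(conc)
    print(f"{conc} mg/mL -> {s} uL sample + {d} uL diluent, "
          f"then +{ENZYME_UL} uL enzyme, +{TRIS_HCL_UL} uL Tris-HCl")
```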
In this example, you can see that the Lynx can be relatively gentle with the aspirations, and it can also vigorously mix in some of the later steps shown in this video over here.
This is the sample acquisition on our mass spec. I’m just going to briefly touch on this. By using a rapid acquisition method, we’re able to reduce the traditional mass spec acquisition time using liquid chromatography from 10 minutes per sample to two minutes per sample. And then by using sample stream, we’re able to reduce that acquisition time down again to one minute per sample. For scale, a normal 96-well plate using the traditional reversed-phase LC method would take around 16 hours to acquire data on all 96 samples. By using a size exclusion chromatography version, we’re able to shorten that down to just a little over three hours for those 96 samples. And then by using sample stream, we shorten it down even further to just an hour and a half.
So, anyone who uses the mass spec regularly knows that sample acquisition and sample preparation aren’t the main bottleneck; it’s the spectra analysis. Now comes the other part of my role, the full-stack software engineering portion. I created an internal application that analyzes the spectra. The process starts off with the sample file. The sample file gets deconvoluted, and in parallel, the sample information is requested from our database, where the composition and the sequences are pulled down for each composition component. The user is then prompted to either use the existing PTM library, the post-translational modification library, or to add or subtract from the current library. All these PTMs are combined with the sample information to calculate all possible combinations. In order to combat false positives, as well as to optimize processing time, I developed an auto-assembly algorithm that will only calculate the masses of biologically possible species. For example, there’s no way for three heavy chains to exist as a single species, because that would really be two heavy chains and one free-floating heavy chain.
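The internal auto-assembly algorithm itself isn’t shown in this talk, but the general idea can be sketched: enumerate only chain combinations that form biologically plausible antibody species and compute their theoretical masses, optionally shifted by PTMs. Everything below (chain names, masses, PTM shifts, the plausibility rule) is illustrative, not the production code.

```python
# Hypothetical sketch of the "only biologically possible species" filter
# behind the auto-assembly step. Illustrative only; not the internal app.

from itertools import combinations_with_replacement

# Illustrative chain masses (Da) taken from the sample's sequence info.
CHAIN_MASS = {"H1": 49_000.0, "H2": 51_000.0, "L1": 23_000.0, "L2": 24_000.0}

# Illustrative PTM mass shifts (Da), e.g. one extra glycan, one clipped lysine.
PTM_SHIFTS = {"none": 0.0, "+1 glycan": 1_445.0, "-Lys": -128.1}

def plausible(combo):
    """Keep only species an antibody can actually form: a lone chain,
    a half-mer (1H+1L), an HH pair, or a full 2H+2L assembly."""
    h = sum(c.startswith("H") for c in combo)
    l = sum(c.startswith("L") for c in combo)
    return (h, l) in {(1, 0), (0, 1), (1, 1), (2, 0), (2, 2)}

species = []
for size in (1, 2, 4):
    for combo in combinations_with_replacement(CHAIN_MASS, size):
        if not plausible(combo):
            continue  # e.g. three heavy chains never survives this filter
        base = sum(CHAIN_MASS[c] for c in combo)
        for ptm, shift in PTM_SHIFTS.items():
            species.append(("+".join(combo) + f" ({ptm})", base + shift))

for name, mass in species[:5]:
    print(f"{name}: {mass:.1f} Da")
```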
The deconvoluted spectra will then be compared with the list of possible species, where the spectra get analyzed, and my app will recommend a classification for each peak.
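A hypothetical sketch of what that peak classification step could look like: compare each deconvoluted peak mass against the candidate species list and recommend the closest match within a tolerance. The tolerance and masses are placeholders.

```python
# Hypothetical sketch of peak classification: match each deconvoluted peak
# mass to the closest candidate species within a ppm tolerance.
# Tolerance and example data are illustrative only.

TOLERANCE_PPM = 50.0

candidates = [("H1+H2+L1+L2 (none)", 147_000.0),   # intended full assembly
              ("H1+L1 (none)", 72_000.0),           # half-mer
              ("H1+H2 (none)", 100_000.0)]          # HH mis-pair

def classify(peak_mass_da):
    best_name, best_ppm = None, None
    for name, theo in candidates:
        ppm = abs(peak_mass_da - theo) / theo * 1e6
        if best_ppm is None or ppm < best_ppm:
            best_name, best_ppm = name, ppm
    if best_ppm is not None and best_ppm <= TOLERANCE_PPM:
        return best_name, round(best_ppm, 1)
    return "unassigned", round(best_ppm, 1)

for peak in (147_001.5, 72_003.0, 95_500.0):
    print(peak, "->", classify(peak))
```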
The software can tell exactly the amount of target, half-mer, mis-pairing, and other contaminants in the sample in under a minute. Usually, an analyst will go back to double-check that my software is producing the right results, and the existence of this software has reduced the sample analysis time for 96 samples from a couple of weeks to a couple of days. Internally, we did a proof of concept against eight sample plates, where 800 samples were analyzed in only 10 business days. Usually, this would take about four to six months, so that’s a massive time savings.
Now, if the samples are truly a mystery, we have a specialized option of using peptide mapping. Traditionally, preparing the samples for peptide mapping takes a huge amount of time. However, this is where the Lynx comes into play. The Lynx allows us to prepare all samples in a 96 deep well plate in just seven hours. One of the key features to point out is that during the incubation step, where the sample is on a different plate, the Lynx can actively go and use the settable offset parameters to thread each pipette into the dialysis filter, so that the dialysis plate itself gets filled with dialysate while the samples are still incubating. If you want to learn more about this method, there’s a paper listed here.
Hopefully, after this presentation, you know why the Lynx excels. Not only can it handle different plate formats, in my case, 96 up to deep well six, but I also have other methods developed for assays that go into 384 wells. It’s very integration-friendly; like you saw on the peptide mapping method, we have a thermocycler on deck. And this video over here shows that it can also be integrated with a decapper.
The methods are highly configurable, as you also saw with the peptide mapping method, where you can use the offsets that align with certain features of your plate. And in that case, we needed to aim for areas slightly outside of the cassette so that we can fill each well with dialysate. And then later on in that method, we had to aim for one of the little entries in the dialysis cassette itself.
Most of all, the support that Dynamic Devices gives is top-tier. They will reply to your emails within 48 hours. I don’t know of any company that can consistently do that.
I want to thank all my past and present coworkers.
Specifically, I really want to call out Ben Niu, who played a critical role in churning out some of these mass spec methods.