Amy Lee earned a Master of Engineering degree in Biomedical Engineering from Cornell University in 2017. Beginning her career at Merck, she dedicated five years to specializing in protein purification and biophysical characterization for structural biology. Currently, in her role at Amgen, Amy is committed to advancing biologics programs and optimizing protein engineering capabilities with an emphasis on automation. She collaborates with various teams to implement innovative solutions to manual processes through the use of liquid handlers, automated systems, and other technologies.

Transcript

Thank you, James. Hi, everyone. My name is Amy, and today I’m excited to share with you guys how we’re using the Lynx to enable high-throughput expression at Amgen.

Okay. All right. So before I go into the meat of my presentation, I wanted to give a brief overview of what we do at Amgen. I would say that the heart of what we do at Amgen lies in this large molecule discovery pipeline. You can see a couple of the molecule formats that we work with, some of them a little more traditional, like the recombinant proteins and the monoclonal antibodies. But of course, as the industry moves towards trying to drug the undruggable targets, a little more creativity is needed, and that’s where things like the bispecifics, the peptibodies, and the BiTE constructs come in. And of course, we’re moving even more towards the next generation of biotherapeutics, bringing in multispecifics, PROTACs, and other combination therapies.

Within large molecule discovery, we have a group called protein therapeutics, and we are responsible for designing, engineering, and developing these new therapeutic candidates. What typically happens is we’ll receive a few validated leads, and then we have a whole host of different capabilities like computational design, epitope mapping, protein purification, expression, and then after analytics, we get to spit out a couple of optimized candidates. To dig into that a little deeper: we receive these validated leads from upstream R&D groups, typically about a dozen or so, and then it’s our job to create many more variants of those. That could be a couple hundred to even upwards of a thousand or so molecules. With those many, many variants, we get to design the new constructs, see how they express, see how they purify, and then put them through functional testing and analytics to see whether any of them perform better than the original leads that we received. Hopefully, yes, right? And then we get to narrow those many, many variants down to just a few that we pass on to process development. So our group is pretty much the last group in R&D before these molecules get passed on to process development.

Now, in terms of the scale that we’re working with, you can see we work with a variety. We have plate-based expression, which is anything less than 8 mL of culture. The slightly larger ones get expressed in 50 mL conical tubes; that’s anywhere between 8 and 100 mL. And anything greater than 100 mL, which could be up to a couple of liters, tends to get produced in flasks.
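To make those cutoffs concrete, here is a minimal sketch of the scale rules just described; the helper function is hypothetical and only illustrates the volume thresholds from the talk, not any actual Amgen code.

```python
# Illustrative sketch: pick an expression vessel from the culture volume,
# using the cutoffs described in the talk (hypothetical helper).
def choose_vessel(volume_ml: float) -> str:
    if volume_ml < 8:
        return "24-well plate"        # plate-based expression, < 8 mL
    elif volume_ml <= 100:
        return "50 mL conical tube"   # 8-100 mL cultures
    else:
        return "shake flask"          # > 100 mL, up to a couple of liters

print(choose_vessel(5))    # 24-well plate
print(choose_vessel(50))   # 50 mL conical tube
print(choose_vessel(500))  # shake flask
```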

If you look at the distribution of this pie chart, you can see the vast majority is our plate-based expression. Just a couple of years ago, the pie chart skewed much more toward the larger production scales. But as the industry moves towards miniaturization, being able to get the same answers with less protein, we can now produce less protein in those plate-based formats and still get the same answers that we used to get with larger amounts of protein. I also want to point your attention to the number of molecules there: close to 7,500 molecules is what we produced in the past year. That number has also increased dramatically; maybe two to three years ago, it was closer to 2,000 to 3,000 molecules. So as the demand for the number of molecules increases, and the percentage of plate-based expression along with it, something needed to change in our labs. The answer was not going to be hand pipetting more, right? That’s where the Lynx came in for us as we were looking for automated solutions.

All right, so in our lab, the Lynx does support all of the production scales that we’re working with. But as you can see in this diagram, for the larger production scales, only the transfection step is done on the deck, because the transfections happen in plate-based formats and everything else is done in larger flasks. For the plate-based expressions, of course, the whole expression process can be done on the Lynx.

Now, for the purposes of my talk today, I’m just going to be focusing on the transfection protocol, since it touches all parts of our expression workflow. I also wanted to give you guys an idea of what it was like to collaborate with Dynamic Devices to get the protocol that we have today.

All right.

So this is a general overview of our transfection protocol. It has three main parts, and I wanted to highlight some of the Lynx’s unique capabilities that we were able to use to get the protocol that we have now. One of the main parts is, of course, adding media from a reservoir to our 24-well expression plates. We really liked being able to use the VVP head and its liquid level detection capability. As you start your protocol, the VVP head senses how much liquid is in your reservoir. Then, as the protocol continues and volume is taken out of that reservoir, it does the calculations to know, hey, the liquid level might be getting a little low, and the user can be alerted about that. That was really great for us, because it means nobody has to be hovering over the instrument to see if the liquid level is getting too low; we’ll just know when we have to refill those reservoirs.

A second part is adding DNA that comes to us in a 96-well format and rearraying it into the 24-well format. For applications like this, a stamping head was not going to work for us. So again, that VVP head was really great, because it very easily rearranges the contents of that 96-well DNA plate into the 24-well expression plates.

And thirdly, we have a portion where we’re adding cells to our expression plates. We don’t want the cells to just be sitting there; we want them to stay homogeneous. So it was really nice that we could integrate a BioShake onto the deck of our Lynx, so that the cells could be kept in suspension the whole time before we needed to use them.

All right. So we went to Dynamic Devices with a really clear idea of what we wanted out of this protocol, and they were able to give us this first iteration that was end-to-end and had everything we needed, which was great because it minimized user intervention far more than the hand pipetting that we used to do in the past. Another thing we really liked was the dynamic worktable. What I mean by that is, when you start the experiment, you start out by typing in the number of samples you’re working with for that run and the amount of DNA that you’re using. After clicking okay, a worktable pops up, and it gives you exactly what you need to load that deck.

But if we were to type in a different number, say decreasing the 480 in this example to something random like 56, then the worktable that comes up is very different. It only shows the number of plates that you need. This was actually very useful for newer users in our lab. We’ve worked with other instruments where the worktable just represents the maximum capacity you could use, so you needed a certain level of familiarity with the protocol to know which nests actually need to be filled if you’re not running at max capacity that day. This dynamic worktable takes the guesswork out for those newer users, and so we found it very approachable for some of the newer automation users in our lab.
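The arithmetic behind that dynamic worktable is straightforward. Here is a small sketch of it, assuming full 96-well DNA plates feed the 24-well expression plates; this is only the plate-count math, not Dynamic Devices’ software.

```python
# Sketch of the "dynamic worktable" idea: from the number of samples entered,
# work out how many plates actually need to be loaded on the deck.
import math

def worktable(num_samples: int) -> dict:
    return {
        "96-well DNA plates": math.ceil(num_samples / 96),
        "24-well expression plates": math.ceil(num_samples / 24),
    }

print(worktable(480))  # {'96-well DNA plates': 5, '24-well expression plates': 20}
print(worktable(56))   # {'96-well DNA plates': 1, '24-well expression plates': 3}
```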

Okay, so this first iteration of our transfection protocol was great, quick and easy to use, and it definitely saved time compared to hand pipetting everything. But we did see some opportunities for improvement. With that first iteration, we could only run one request, or one project, at a time, because it assumed that each 24-well plate would be filled before moving on to the next one. In our work, these requests don’t come in perfect multiples of 24. Sometimes it’s something random like 56, so there would be partial plates, and that first iteration couldn’t handle that. So we asked Dynamic Devices, hey, could we have a second version where we can run multiple requests in one run? We also wanted to use the barcode validation feature, because we knew there’s a barcode scanner on the instrument, and we wanted to scan all the barcodes on the plates to validate that the correct plates were there.

After chatting with Dynamic Devices, we were able to get a second iteration of this protocol. This enhanced protocol does allow multiple requests, it does include the barcode scanning capability, and we were able to utilize Benchling-generated worklists. Benchling is the LIMS system that we are starting to use, and when you execute an experiment in Benchling, it pops out a worklist that includes the barcodes of your source DNA plate and your destination expression plate, along with the source and destination wells. It was great that we could just use this Benchling-generated worklist, whatever CSV or Excel file we had on hand, instead of having to copy and paste into some sort of format that’s digestible by the Lynx.

All right, so with this new protocol, the user interface looks a little different in that it starts out by having us load the worklists that we need for this run. In this example here, I have three worklists, meaning three different requests or projects that I’m working with for this run, and then we can type in the volumes of everything that we need.

I want to point your attention to these numbers here. So none of these are multiples of 24, meaning all of them would need some sort of partial 24-well plate.

Now, if we assume we’re using that first iteration of the protocol, those numbers add up to 396, and with some easy math you’ll see that fits into seventeen 24-deep-well plates. But this does not work for us because, like I said earlier, we need each request to be in separate plates. We don’t want molecules from two different projects or requests to be in the same 24-well plate.

So this second version of the protocol would allocate the correct number of plates by looking at each request separately. For example, for that first request with 266 molecules, it’ll make sure that 12 plates are allocated, then four plates for the next request and three plates for the next. That total gets us to 19 plates instead of the original 17 that were calculated. I know that’s a very small detail, but it’s something we needed for our group in order to make this work, and it was just great that Dynamic Devices could accommodate that.
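The difference between the two allocation schemes is just where the ceiling division happens. Here is a small sketch of that math; the first request is 266 molecules as stated in the talk, while 74 and 56 for the other two requests are assumed values chosen only so that the three sum to 396 and need four and three plates respectively.

```python
# Sketch of pooled vs per-request plate allocation (request sizes 74 and 56 are assumed).
import math

requests = [266, 74, 56]

pooled = math.ceil(sum(requests) / 24)                    # first iteration: fill every plate
per_request = sum(math.ceil(n / 24) for n in requests)    # second iteration: plates per request

print(pooled)       # 17
print(per_request)  # 12 + 4 + 3 = 19
```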

All right. So we really love this protocol. I also want to point out a difference in the worktable. With the previous iteration, it was still important to make sure that the plates were loaded in order, but that’s no longer the case for this protocol because of the barcode scanning feature. Now you can place the plates anywhere you want on the deck; it doesn’t have to be in a specific order anymore. So if you wanted to put DNA plate number one in spot one, but then number five in the next spot and number three somewhere else, that was totally fine. Same deal with the 24-well plates: you could put them in any order that you want.

And, you know, as scientists, of course, we do like to place them in an order that makes sense. But it was just nice to have that level of flexibility. That’s because, as the protocol starts, the barcode scanner takes inventory of all the plates that are on the deck, all the DNA plates and the expression plates, and then the Lynx knows which plate is on which nest of the deck. It was also really nice that it could compare the barcodes it scanned to the worklist we loaded earlier in the protocol. It would say, hey, these are all aligned. Or, if there’s a plate on the deck that was not on the worklist, it’ll throw an error. The opposite is true as well: if we’re missing a plate on the deck that was shown in the worklist, that will also throw an error. So this really reduced a lot of the errors that could happen, in either direction. And, obviously, it was no longer possible to make an error by putting the plates in the wrong position.
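That two-way check can be pictured as a simple set comparison between what the scanner found and what the worklist expects. The sketch below is a hypothetical helper illustrating that logic, not the actual Lynx software.

```python
# Sketch of the barcode validation: flag mismatches in both directions.
def validate_barcodes(scanned: set[str], expected: set[str]) -> None:
    unexpected = scanned - expected      # plate on deck but not in the worklist
    missing = expected - scanned         # plate in the worklist but not on deck
    if unexpected:
        raise ValueError(f"Plates on deck not in worklist: {sorted(unexpected)}")
    if missing:
        raise ValueError(f"Plates in worklist missing from deck: {sorted(missing)}")

# Passes silently when deck and worklist agree (illustrative barcodes):
validate_barcodes({"DNA0001", "EXP0001", "EXP0002"},
                  {"DNA0001", "EXP0001", "EXP0002"})
```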

So at this point, I would say that we really got to a protocol that worked exactly in the way that we wanted it to work.

Now, going back to earlier in the presentation, the demand for throughput is increasing, and specifically the throughput for plate-based expression is increasing. So it was really nice that we were able to get this customized automation solution that worked well for us. With the Lynx, having that liquid level detection capability, the VVP head, and the ability to integrate outside devices onto the deck allowed us to create these custom-designed methods that work well for us. It really pushed the boundary of how many plate-based expressions we could produce. And it certainly reduced a lot of user error. As throughput increases, you can imagine that if someone were trying to hand-pipette all of this, that would open a can of worms and introduce room for error. But automation has really decreased that significantly for us. I also want to touch on the user-friendly note that I have on here. We’ve received a lot of feedback from our labmates that the Lynx has been a lot more user-friendly compared to other devices they’ve worked with in the past, with the software being easier to understand, use, and customize. That was great for us because we didn’t have to fight to get users to want to use the instrument.

And then lastly, we’re taking the first steps towards Benchling integration. Again, Benchling is the LIMS system that we’re using. But we’re looking forward to ways that we can have full Benchling integration in the future as well.

With that, I just want to acknowledge everyone that made this work possible. And thank you all for listening.