Derek Buhr is an Automation Engineer working for Bruker Spatial Biology (BSB) with 8 years of experience developing and implementing consumables automation for the CosMx and GeoMx spatial biology platforms. His work at BSB has encompassed extensive liquid-handling programming and the development of a custom LIMS for inventory and workflow tracking of more than 100,000 unique oligonucleotides. Today, Derek will be presenting how BSB has implemented the Dynamic Devices Lynx to dynamically normalize and create pools for R&D and commercial products using database-produced inputs and Lynx-based outputs.
Transcript
All right, so yes, thanks to James for that introduction. Essentially, what the previous two presenters showed was a little more advanced on the science side. What I'll be talking about is much more low level, really just pooling and normalization, but also how we've used it to expand our capabilities to much larger-scale operations, and how we use QC metrics to make sure those end up being very solid pools.
So, our agenda, or the menu as I'll call it today: I'll go really quickly over who I am in a bit. Then, what exactly needed the automation? What process workflow were we building out or trying to improve, and what did we actually need to accomplish, our requirements, if you're coming from the engineering sector? And then next, why did we choose the Lynx? What features did we identify as critical to those processes that made it make sense to pursue the Lynx versus other, potentially cheaper, systems or alternatives?
We'll then go into how we did it: what options did we change on the Lynx, and what testing did we do to really dial in how these pools get created? And then lastly, I'll go over a really quick assessment of whether it actually worked. We pulled together this method; is it going to do what we want it to?
So, first things first, I am, as they said, from Bruker Spatial Biology. If you follow the spatial biology space, it's gone through some changes in the last year or so. Bruker Spatial Biology is essentially a group that Bruker has acquired, consisting of Canopy Biosciences and their CellScape platform, plus NanoString Technologies, which is where I am originally from, with our three boxes: our original nCounter digital system and the GeoMx and CosMx platforms, which are the new, large advances into spatial biology.
So together, we're now working as Bruker Spatial Biology to tackle these challenges as one group and use our combined knowledge to really advance that area of science.
So, now we know who I am. What actually needed automation? I've tried to make this as visual as possible just to keep things clear. We'll start on the left side of the screen from your point of view: what are we actually pooling from, what are the objects we're using? In our case, it's just 384-well plates in SBS format, and really any of these wells just contains a molecule. That's the way we've conceptualized it. When we're pooling, all we're doing is combining a bunch of molecules together, and any one of those wells could be a molecule.
The thing is, when we're making these pools, they may not actually include everything on the plate. So we potentially need to drop volumes and cherry pick, but in our case we also want to use the vast majority of that plate.
But we're also pooling 40,000 potential molecules, so we're talking about 120-ish of those plates that we're working through just to make this massive pool and the other sub-pools. So what are we doing with those oligos? As I said, we're going to pool them, and we also want to normalize them at the same time. If normalization and pooling are separate programs, we're taking about twice as long; if we can do both in the same pass with a 96 head, we're going to be set. This is essentially what our worklist looks like, and we'd also like to continue to use that format. We've standardized it on other instruments, and if we can keep using it in the lab, everybody's going to be vastly happier. You can see here that we're also adjusting these volumes on the fly. Our standard is a 5-microliter transfer of a 100-micromolar oligo. We then correct from that standard: the 4.8-microliter transfer, for example, is based on a 104-micromolar concentration. Those volumes are lower because those concentrations are higher, and it flips the other way for lower concentrations. So again, now we're asking for a 96 head that can do multiple volumes at the same time.
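As a rough illustration of that on-the-fly correction, here is a minimal C# sketch of the arithmetic, assuming the fixed target of 5 microliters at 100 micromolar described above; the class and method names are placeholders, not anything from the actual worklist generator.

```csharp
// Minimal sketch of the per-well volume correction (hypothetical names).
// The amount of oligo delivered is held constant, so the transfer volume
// scales inversely with the measured concentration of each well.
using System;

class NormalizationSketch
{
    const double TargetVolumeUl = 5.0;          // standard transfer volume
    const double TargetConcentrationUm = 100.0; // standard oligo concentration

    // A more concentrated well needs proportionally less volume.
    static double CorrectedVolumeUl(double measuredConcentrationUm) =>
        TargetVolumeUl * TargetConcentrationUm / measuredConcentrationUm;

    static void Main()
    {
        Console.WriteLine(CorrectedVolumeUl(104).ToString("F2")); // ~4.81 uL, matching the 4.8 in the worklist
        Console.WriteLine(CorrectedVolumeUl(96).ToString("F2"));  // ~5.21 uL for a weaker well
    }
}
```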
We're also dispensing into multiple pools. So now we've got different destination objects that each of these plate combinations goes to, and we use all four of those in our processes.
And we've also got production metrics from the team: we want to be able to do this whole process in less than two weeks, and ideally get out 12,500 of these a day. So we're pooling, ideally, more than ten thousand oligos a day to get to a completed pool in less than two weeks.
So that's the task we were faced with. Then why choose the Lynx? As I said, the 96VVP head gets us so much value. You'd normally need single channels to do this independently; now we effectively have eight of those times twelve. That's huge for pooling and for normalization, and the power you have there to increase the speed is exactly what we needed for that side of the equation.
We also have diagnostic output. We were already doing this independently, but internal to the system we've now got this great little tracker that's telling us everything we can get. And because the scripting capabilities are embedded, we can manipulate all of the variables the system is giving us and create, essentially, a QC output just by toggling some of the general features that are there. For instance, this back here is essentially the QC sheet that our system now puts out while it's running. At the end of the run, you can go through and parse all of this in the same file and the same method and know immediately whether something's gone wrong.
Additionally, like I mentioned earlier, we would love to be able to standardize our inputs. Again, if we can take one input and just transform it within the system, that's going to be great. You can see up here the pathway we've typically used: we create a worklist, then we run that worklist, then we have to find the QC or log files, then we have to run another script to parse those, and then we want to update our LIMS. Doing some of those steps offline with a standalone script creates problems: which version of that script are you using, and if you have multiple users, does the script need to be version controlled? Now, however, I can embed it right in the method. We validate that method, and our script is validated in the same time period.
And we can also jump in and do really simple database queries and insertions. The primary way we've done that is we actually use Excel and connect to it with an OLE DB (Object Linking and Embedding, Database) connection. We're able to manipulate our input file while the run is going, to add additional QC checks to that file.
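To give a sense of what that connection can look like, here is a minimal C# sketch of updating an Excel worklist over OLE DB; the file path, sheet name, column names, and barcode are illustrative placeholders rather than the actual BSB files, and it assumes the System.Data.OleDb package and the Microsoft ACE OLE DB provider are available.

```csharp
// Sketch: mark a plate as completed in an Excel worklist via OLE DB.
using System.Data.OleDb;

class WorklistUpdateSketch
{
    static void Main()
    {
        var connectionString =
            "Provider=Microsoft.ACE.OLEDB.12.0;" +
            @"Data Source=C:\Lynx\worklists\pool_input.xlsx;" +
            "Extended Properties='Excel 12.0 Xml;HDR=YES'";

        using var connection = new OleDbConnection(connectionString);
        connection.Open();

        // OLE DB parameters are positional, so the placeholder is '?'.
        using var update = new OleDbCommand(
            "UPDATE [Worklist$] SET [Status] = 'Completed' WHERE [PlateBarcode] = ?",
            connection);
        update.Parameters.AddWithValue("?", "PLATE_0042");
        update.ExecuteNonQuery();
    }
}
```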
So really, what we've got here is all the capabilities we asked for up front, in one system. The question now is: we chose the Lynx because it had those capabilities, but how did we actually do it?
So, we'll start with the standardization of the input worklist files, because the 96VVP itself solves all of our problems when it comes to throughput; that part is fairly straightforward.
What we ended up doing was learning how to parse our input files, on this side, into the VMDI format that the Lynx uses. Despite being that one little line up there, this was probably the hardest part: going in and building the C# to actually create that. We had a lot of help from Dynamic Devices; they gave us a script that worked with one plate, and we had to do a bit of our own development work to verify, based on the QC file, that we were getting the right volumes transferred to where we needed them. After all of that work, we were able to dial in these specific transfers and make sure that each of these four was working fine.
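As a sketch of the parsing half of that work only, the C# below reads a standardized worklist and groups transfers by source plate; the CSV columns are assumed, and the actual serialization to the Lynx's VMDI format is left as a placeholder, since that format belongs to the instrument software.

```csharp
// Sketch: parse a standardized worklist and group transfers by source plate.
// Assumed header: SourcePlate,SourceWell,DestinationPool,VolumeUl
using System;
using System.IO;
using System.Linq;

class WorklistParserSketch
{
    record Transfer(string SourcePlate, string SourceWell, string DestinationPool, double VolumeUl);

    static void Main()
    {
        var transfers = File.ReadLines("pool_worklist.csv")
            .Skip(1) // skip the header row
            .Select(line => line.Split(','))
            .Select(f => new Transfer(f[0], f[1], f[2], double.Parse(f[3])))
            .ToList();

        // A Lynx run handles a few source plates at a time, so group by plate first.
        foreach (var plate in transfers.GroupBy(t => t.SourcePlate))
        {
            Console.WriteLine($"{plate.Key}: {plate.Count()} transfers, {plate.Sum(t => t.VolumeUl):F1} uL total");
            // TODO: emit these transfers in the instrument's expected input format (VMDI).
        }
    }
}
```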
Additionally, we were able to get this great little UI that pops up. What we've done is have it tack the processed volumes onto some of those original spreadsheets and QC sheets. So now, when the system looks at that initial input file, and we use the same one consistently, every time we process a plate (in this case, we can only do four plates at a time) we have a record of what we've done previously. So instead of a user needing to go ask a colleague, "where are we on this 120-plate run?", they can just go to the freezer, see exactly which plates they need to run next, pull them out, and know exactly what is left.
If they try to put in a plate that is on the completed-plates list, it's going to error out. So there's no way for them to make a mistake and re-pool a plate, which, when you're doing 120 plates, gets logistically difficult; it does happen. This is one way for us to prevent probably the biggest error that could really impact those pools, because if you pool the same sample plate twice, you now need to double up your entire pool to compensate. And if you've already done 38,000 oligos out of 40,000, that kind of hurts.
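Conceptually, that guard comes down to something like the C# sketch below; the file name, barcode, and the way the run is halted are hypothetical stand-ins for what the validated method actually does.

```csharp
// Sketch: refuse to run a plate that already appears on the completed-plates list.
using System;
using System.IO;
using System.Linq;

class DuplicatePlateGuardSketch
{
    static void Main()
    {
        var completedPlates = File.ReadLines("completed_plates.txt")
            .Select(b => b.Trim())
            .ToHashSet(StringComparer.OrdinalIgnoreCase);

        var scannedBarcode = "PLATE_0042";
        if (completedPlates.Contains(scannedBarcode))
        {
            // Stop here rather than pooling the same plate twice.
            throw new InvalidOperationException($"{scannedBarcode} has already been pooled.");
        }
    }
}
```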
So, we’ve got all this, now we’ve got this extra protection too.
Additionally, now we go back to the volume tracking for the QC update. This was probably the more scientific part of what we did. Turning it on is just a matter of toggles in the system, and they're very easy to find. You basically get a grid of four ways you can configure it: a less-than-five-microliter option versus five-microliter cycles, and a post-aspiration choice versus a real-time choice.
Going through this process, we determined that less than five microliters, real time, were the options we were going to choose. And this is the data we actually got.
To test it, here's what we did, and I apologize to anybody who is red-green colorblind; that is not something I considered when I put this slide together.
We basically filled a quadrant of a 384-well plate with volumes ranging from about 20 microliters to about 65. Our typical volume for this is about 50, so we wanted to challenge the system and say, if we have this tracking on and we have 20 microliters in the well and we ask for 50, what does the system actually tell us?
Going through it, I calculated four values for each of these locations, and then the percent difference between the calculated volume, what was supposed to be in the well, and what was actually in the well. The big takeaway is that everything up to about this small section here gives you less volume than you expected, and in all cases that tends to happen right after about 55 microliters. So we came up with a rule: if we're tracking volume, we're going to get extremely solid values from the system on how much it pipetted, as long as we have more than five microliters.
However, since we're of course looking to aspirate the full volume here, we now know we're also seeing a slight undershoot of that. We can use this data to go back in and make corrections. Say we thought this oligo had 25 microliters, but when we pulled the data, it said it only pulled 10 microliters. We can now use that downstream with the checks and scripts we've got going to say: you need to go back and add in another 25 microliters of this sample. We haven't done it quite yet, but we could also run that as a loop where the method corrects itself and goes back in; that's the next step for us. But we've now got a way to track all of these volumes accurately, we can correct errors that occurred during these poolings on the fly if we have to, and we've still got all of the original input files.
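As a simple illustration of that downstream check, the C# below compares the volume the instrument reports against what the worklist expected and flags wells that need a top-up; the field names and the one-microliter tolerance are illustrative assumptions, not the production values.

```csharp
// Sketch: flag wells where the reported aspiration fell short of the expected volume.
using System;
using System.Collections.Generic;

class VolumeCheckSketch
{
    record TrackedTransfer(string Well, double ExpectedUl, double ReportedUl);

    static void Main()
    {
        var tracked = new List<TrackedTransfer>
        {
            new TrackedTransfer("A01", 25.0, 25.1),
            new TrackedTransfer("B01", 25.0, 10.0), // short aspiration: this well needs a correction
        };

        foreach (var t in tracked)
        {
            var shortfallUl = t.ExpectedUl - t.ReportedUl;
            if (shortfallUl > 1.0) // tolerance is a placeholder
                Console.WriteLine($"{t.Well}: short by {shortfallUl:F1} uL, add material back before finalizing the pool");
        }
    }
}
```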
And then, last, we had a few little physical things we had to change. Essentially, we had to adjust heights and the mechanics of the deck, because we have very different labware heights: 50 mL tubes, and you also saw the much shorter 2 mL Sarstedt tubes. On the deck, we had to make some decisions to make sure that, depending on where the 96 head is, it isn't too close to a taller item on its right-hand side, because then you have the bridge to worry about. So we had to move things around to account for that, along with general spacing considerations, just to make sure we used our deck space well.
But after all of that, what was our impact? We'll go left to right here. We actually had a method for this that we had previously used; it just wasn't meeting the requirements.
In the previous state, we were doing second-person verification of all of those plates. The reason: there are 120 of them, and we don't want somebody to put the same plate on twice. So we had two operators in the room instead of just one. Now that check happens automatically, so we're reducing work in that sense.
We were previously pooling using single channels, so clearly, jumping to a 96 head that can do 11 times more of these is going to help us reduce turnaround time. We also had no volume tracking on the previous system. Now we're able to track everything on the system, and we can use that to adjust volumes for oligos that did not have sufficient volume, whether because we tracked them incorrectly or a vendor gave us the wrong volumes. Any number of those kinds of mistakes we can now actually see, modify, and correct on the fly.
We also got only one pool per run on the previous method, and it originally took two to three weeks to do 40,000.
We can now do multiple pools at a time, and in the current state we can do one pool of 40,000 oligos in two to three days. So we've reduced that time by about 7x; of course, you could argue it should be 11 times faster, but you've always got your little tweaks in there. Still, a seven-times improvement for us is huge. These pools went from R&D scientists asking us to make crazy pools, and that being essentially our primary work on the R&D side, to us being able to say, "now we can help you," and also take on tangential manufacturing work, since our group now covers both sides.
One thing I wanted to emphasize is that we have this brand-new method that's awesome for this purpose, but we also have the original method that's already validated. I'm the type of person who doesn't want to throw something like that away just because we have something newer and fancier. So we went ahead and repurposed that method; it's now pretty much our primary cherry-picking method. When you're doing a full 384-well plate, it makes perfect sense to use the new method, but when you're only doing four or five wells from 10 plates or so, we definitely want to cherry pick. It's a lot more efficient, and we manipulate a lot fewer of those oligos. The one thing with this system is that we are dipping our tips into the 384-well plate, so there's a little bit of contamination concern there. Fortunately, on our side, this product is fairly resistant to contamination, so that was not a risk we needed to worry about.
And then, of course, there's so much less tip waste. However, these tips are considerably cheaper than other tips around that may have other features, so it kind of balances out in some ways. In any case, we got to a state where we're now able to produce this product, which is a flagship of our current CosMx and GeoMx programs.
Yeah, that's it. I've got some acknowledgements here. The big thing is that the primary method developers were actually not myself but my team: John Hamann and Spencer Gellner did amazing work getting this up and running while I was out basically walking and hiking the PCT, so major, major kudos to them. And then lots of scripting assistance from the Sangel brothers over at Dynamic Devices; they helped us out a lot. I know we had quite a few others from the Dynamic Devices side help out as well, which is greatly appreciated. And with that, any questions? Let me know.