No matter what type of experiment you embark on, controlling your variables is vital to understanding whether your assay is behaving properly, and the output is reliable and actionable. To avoid writing a novel on this topic, I have tried to streamline my thoughts on the most important variables to get a handle on for your SPR assay. As always, I welcome your feedback and questions around any other variables and experiences you’ve had.
The importance of startups, and what does a correct startup look like?
Startups are one of the most underdiscussed topics, but also one of the factors that can have the biggest impact on the quality and precision of your data, so it’s definitely important to get this sorted out early on. Simply put, a startup is the part of the assay run where the chip surface is hydrated and you can assess response changes as it hydrates.
The way I like to think of a CMX chip is that it’s like a kelp forest: it sways back and forth nicely when it’s all underwater, but if the tide drops and some of it sits on the surface of the water, it becomes slow and rigid. So we really need to hydrate the sensor chip prior to use, and this is best achieved by passing running buffer over the surfaces to be used multiple times. This has the added benefit of equilibrating the sensor chip surface with the running buffer too.
Here you can also prepare the chip for future regen conditions by exposing the sensor chip to your regen solution. To use the kelp forest analogy again, adding harsh regeneration reagents makes the sensor chip surface flat due to the change in the chemical environment. Therefore, we want to make sure that any changes in the surface structure are resolved during our startups, not during our actual assays.
Chip conditioning

Chip conditioning is a critical step that is often overlooked in assay optimisation, and it can have a large effect on how the first couple of samples react to the chip surface, and thus on your replicate precision. The idea is to expose your chip to any extra material used in the assay. In my day-to-day work this is normally my capture antibody and analyte, where I perform a single injection at the highest concentration set out in the assay. Conditioning your chip will help bed it down and can really improve your data.
Figure: the relative capture level shows that this sensor chip surface is good to go after the five chip conditioning steps, so your confidence in the initial data sets should be high.

Figure: a sub-optimal assay setup, where although the startup has done its job on chip hydration, not enough development has been done on the chip conditioning steps, and the surface doesn’t plateau before sample analysis starts.
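As a rough way of putting numbers on "the surface plateaus", here is a minimal Python sketch that flags when successive conditioning-cycle capture levels agree within a tolerance. The function name, tolerance and RU values are illustrative assumptions, not output from any instrument software:

```python
def is_conditioned(capture_levels, tolerance_pct=2.0, window=2):
    """Return True once the last `window + 1` relative capture levels
    agree within `tolerance_pct` percent of the most recent level.

    capture_levels: capture responses (RU) from successive
    conditioning cycles, oldest first.
    """
    if len(capture_levels) < window + 1:
        return False
    recent = capture_levels[-(window + 1):]
    spread = max(recent) - min(recent)
    return (spread / recent[-1]) * 100 <= tolerance_pct

# Capture levels (RU) drifting upwards before settling -- illustrative only
cycles = [410, 432, 448, 455, 457, 458]
print(is_conditioned(cycles))  # True: last three cycles agree within 2 %
```

In practice you would pick the tolerance to match the replicate precision you need from the assay; the point is simply to have an objective plateau criterion rather than eyeballing the trace.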
Unfortunately, there is no one method that covers all startups, but feel free to get in touch with me to ask questions around your own set up.
Tricky buffers are exactly what they sound like, and if not managed correctly can really mess with your assay.
Simply put, SPR data is made up of two contributing factors:
1. The true interaction
2. The refractive properties of the buffer that the analyte is in
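Because the measured signal is the sum of those two factors, the standard way to isolate the true interaction is double referencing: subtract a reference surface to remove most of the bulk contribution, then subtract a buffer blank to remove what remains. A minimal sketch, with purely illustrative RU values:

```python
# Double referencing: (sample - reference) - (blank - blank reference).
# The reference channel sees the same bulk refractive-index jump but
# no specific binding, so the subtraction leaves the true interaction.
sample_active    = [5.0, 62.0, 118.0, 121.0]  # RU, ligand surface
sample_reference = [4.8, 41.5,  42.0,  41.8]  # RU, blank surface (bulk only)
blank_active     = [0.2,  0.9,   1.1,   1.0]  # RU, buffer over ligand surface
blank_reference  = [0.1,  0.8,   1.0,   0.9]  # RU, buffer over blank surface

corrected = [
    (sa - sr) - (ba - br)
    for sa, sr, ba, br in zip(sample_active, sample_reference,
                              blank_active, blank_reference)
]
print(corrected)  # bulk-corrected binding response per time point
```

Note how the large bulk step (~40 RU in the reference channel here) vanishes from the corrected trace, leaving only the binding signal.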
So, in order to generate good data we want to keep the bulk effects as low as possible whilst maximising the total response. The main culprits for causing bulk in SPR are what I call the cyclics of doom!
Histidine and sucrose are most commonly found in antibody formulation buffers, such as those of Etanercept and Cosentyx, with trehalose most commonly found as a cryoprotectant in lyophilised proteins. Continuing the VEGF theme, here are three VEGF165 formulation results from three different protein suppliers, each containing the dreaded trehalose:
Assay orientation isn’t just about minimising bulk effects; it becomes especially important when looking at structure–function relationships, where assay sensitivity is key (e.g. stress analysis). So, the best way to increase the sensitivity of your assay, and to combat the tricky buffer conundrum, is to set up the assay so that the antibody is captured and its target is flowed as the analyte:
This instantly solves excipient issues, as the excipients are washed away during the capture step, leaving only the antibody in running buffer. As the concentration normally used for capture is ~10 µg/mL, there is very little excipient to wash away, so this is a quick process. The main thing to focus on during the capture and analysis steps, in terms of the antibody, is the stability of the capture.
Then we run into the issue that trehalose is present in your purchased protein! This is less of a problem if you’re measuring antigen binding, but if you’re measuring a weak affinity, such as that of the Fc receptors for an IgG, it becomes more of an issue and steps need to be taken to remove the trehalose from the analyte solution (buffer exchange). Traditionally this can be very costly and off-putting for those with smaller budgets, but the market is moving and cost savings are to be found if you look hard enough.
Regeneration

By this point I hope we’ve solved almost all of the issues that ultimately affect your SPR data, but there are a couple of steps left. In order to have confidence in your results you need to generate replicate data, but how can you be sure of a clean regeneration?
The key is making chip regen as uniform as possible, and the easiest way is to regenerate the whole interaction by using a low pH glycine solution (pH 1.5 or 2.0).
The best part here is that no damage occurs to the capture molecule, and capturing fresh molecule each cycle ensures that you’re always measuring the same surface for the analyte. The only thing that requires development is the duration and flow rate of the regeneration, as it’s critical that the baseline returns as close to its original level as possible, or you may have residual sample.
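One simple way to watch for this in a run is to compare the post-regeneration baseline with the pre-capture baseline of the same cycle. A hedged Python sketch, with an illustrative drift limit and made-up RU values (tune the limit to your own chip and ligand):

```python
def regen_check(pre_capture_baseline, post_regen_baseline, drift_limit_ru=5.0):
    """Flag a regeneration cycle whose baseline does not return to
    within `drift_limit_ru` RU of the pre-capture baseline.

    Positive drift suggests residual sample left on the surface;
    negative drift can indicate loss of the capture surface.
    """
    drift = post_regen_baseline - pre_capture_baseline
    if drift > drift_limit_ru:
        return f"residual sample suspected (+{drift:.1f} RU)"
    if drift < -drift_limit_ru:
        return f"possible surface loss ({drift:.1f} RU)"
    return f"clean regeneration ({drift:+.1f} RU)"

print(regen_check(1021.0, 1023.5))  # within limits
print(regen_check(1021.0, 1048.0))  # baseline sitting high after regen
```

Logging this per cycle gives you an objective record of regeneration quality across a long run, rather than noticing drift only when the replicates stop overlaying.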
Replicates, replicates, replicates

Why are people so obsessed by them? Well, it all goes back to accuracy and precision. The easiest way I was ever taught the difference is the dartboard analogy:
Ideally we want our assay to be as accurate (measuring the true interaction) and as precise as possible, so we need the number of replicates to be above a minimum threshold to be able to derive statistical meaning. In simple terms, you don’t want to trust an assay setup until you’ve seen the sensorgrams overlay on at least triplicate data (my preference).
The ICH Q2(R1) guidelines recommend, for accuracy, that at least 3 concentrations are assessed with a minimum of 9 determinations, so depending on your concentration assessments, triplicates are the way to go.
When it comes to assay precision and replicate data it is worth noting the two basic types of replicate:
1. intra-assay, where replicates are assessed against each other within the assay runs
2. inter-assay, where replicates are assessed between assay runs
For intra-assay replicates there are two schools of thought as to whether you take the replicates from the same well or not. As the experimental variation from evaporation is very small in comparison to the variation introduced by the operator during sample preparation, I prefer the triple-dip (drawing all the replicate injections from the same well).
Inter-assay replicates allow you to determine the true experimental error, as fresh chips and reagents are used, so the results from each experiment can be compared directly. I am a big fan of this, as it means my assays could potentially get better and better.
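The two precision types above reduce to a simple calculation: the coefficient of variation within each run (intra-assay) versus across run means (inter-assay). A minimal Python sketch, with illustrative KD values rather than real assay results:

```python
import statistics

def cv_pct(values):
    """Percent coefficient of variation (sample standard deviation / mean)."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Illustrative KD values (nM): three intra-assay replicates per run,
# across three independent runs (fresh chip and reagents each time).
runs = [
    [1.02, 0.98, 1.05],
    [1.10, 1.08, 1.12],
    [0.95, 0.99, 0.97],
]

intra = [cv_pct(run) for run in runs]               # within-run precision
inter = cv_pct([statistics.mean(r) for r in runs])  # between-run precision
print([round(c, 1) for c in intra], round(inter, 1))
```

Typically the inter-assay CV comes out larger than the intra-assay CVs, as it folds in chip-to-chip and reagent-to-reagent variation; tracking both over time is what lets the assay "get better and better".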
There you have it: a few hints, tips and considerations on what I would say are the most important variables for achieving better data. Better data means you can make better decisions. The key to a beautiful assay is to treat your sensor chips nicely, and they’ll be nice back to you. As with the other sections, spend the time upfront to get your startups, conditioning, regens and replicates optimised and you’re ahead of the game.