The importance of adequate controls is something I believe should be talked about more in research. They're usually mentioned as an afterthought, but they can be a true lifesaver once you fully understand them.
Controls tell you whether your experiment is working correctly and whether you can trust your results. Broadly, there are two types: positive and negative controls. To put it simply, if your assay measures a given parameter, a positive control should affect that parameter while a negative control should not.
As an example, suppose we're using an ELISA to look for Protein X in a blood sample:
Positive Control: A blood sample that has been spiked ahead of time with Protein X. You know Protein X is present, and you expect the assay to detect it.
Negative Control: A blood sample known to be devoid of Protein X. The concentration of Protein X should therefore read as zero.
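As a minimal sketch, the pass criteria for these two controls can be written as a simple check. The detection threshold, sample readings, and function name here are illustrative assumptions, not real assay values:

```python
# Hypothetical ELISA control check. The detection threshold (in ng/mL)
# and the example concentrations below are illustrative assumptions.

DETECTION_THRESHOLD = 0.5  # ng/mL; readings below this count as "not detected"

def control_passes(kind: str, concentration: float) -> bool:
    """A positive control must read above threshold; a negative control below it."""
    if kind == "positive":
        return concentration >= DETECTION_THRESHOLD
    if kind == "negative":
        return concentration < DETECTION_THRESHOLD
    raise ValueError(f"unknown control kind: {kind}")

# Spiked blood should be detected; blank blood should read near zero.
print(control_passes("positive", 12.3))  # True
print(control_passes("negative", 0.0))   # True
```

The exact threshold would come from your assay's limit of detection; the point is simply that each control has a defined expected outcome before you run anything.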
A well-defined and regularly performed assay should generally include at least one positive and one negative control. I'll run through each of the four possible results you may see:
| Positive Control Signal | Negative Control Signal | Overall Meaning |
|---|---|---|
| + | - | This is great! The detection method appears to be working as expected. |
| - | - | The assay is not detecting your protein. This is called a false negative. |
| + | + | The assay is not specific to your protein of interest, or your blank samples were contaminated in some way. This is called a false positive. |
| - | + | This is the rarest of the four possibilities. If this is a true result, something in your positive control sample is interfering with the assay, and something else is being picked up by your detection method. Go with Occam's razor here: are you sure the samples weren't switched? Run again to be sure, then interrogate the positive and negative controls individually. |
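The four outcomes above can be sketched as a small decision function. This is a hypothetical helper for illustration; the diagnosis strings just paraphrase the table:

```python
# Hypothetical sketch mapping the four positive/negative control outcomes
# from the table above to a diagnosis. Names and wording are assumptions.

def interpret_controls(pos_signal: bool, neg_signal: bool) -> str:
    """Diagnose a run from its control readouts (True = signal detected)."""
    if pos_signal and not neg_signal:
        return "OK: detection is working as expected"
    if not pos_signal and not neg_signal:
        return "False negative: assay is not detecting the protein"
    if pos_signal and neg_signal:
        return "False positive: assay is non-specific or blanks were contaminated"
    # pos_signal is False, neg_signal is True: the rare fourth case
    return "Interference suspected: check for swapped samples and re-run"

print(interpret_controls(True, False))  # OK: detection is working as expected
```

In practice the inputs would come from comparing each control's reading against your detection threshold, but the branching logic is the same as reading the table row by row.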
In my opinion, this is the area where controls really show their utility. In assay development there's a vast array of things that could go wrong. When I develop an assay, I want to do everything I can to avoid the situation of "it didn't work and I have no idea why."
When setting up a new assay that you and your lab have not performed before, this is the order I've found works best:
Once your assay is optimized and part of a routine workflow, you won't need as many controls, but during development, the more the better. This saves you from having to re-run the experiment and lets you diagnose the issue at hand more quickly. By front-loading the work, you can avoid many setbacks later on. It's much better to run the experiment once with a full set of control samples than to run the entire thing all over again without knowing what went wrong.
I'm curious to hear from other researchers: what assay controls have you implemented that may have been outside the norm but helped a great deal in figuring out what the issue was?