Don’t leave peer review behind!
Despite most deep tech startups being born out of academia, we don’t talk much about navigating the transition from academia to startup. This transition is challenging, and it’s easy to fall into the trap of thinking we should throw all academic rigor out the window. The most costly mistake is to leave behind the core practice of peer review. Unfortunately, catchphrases like “Move fast and break things” have created the illusion that speed at all costs is the only way to drive towards startup success. While that philosophy can drive progress in software companies, it is counterproductive for deep tech.
When technical founders leave academia, they have a very steep learning curve to navigate. The focus is no longer on the technology alone, but on how that tech can become a business. First-time tech founders are suddenly expected to learn about customer discovery, market research, business plans, financial models, and the list goes on. Luckily, there are many resources that provide business-oriented support: incubators, accelerators, mentors, advisors, fellowships. But who’s helping you navigate the tech side of the business?
Counterintuitively, your strength as a technical expert can become your blind spot. I’ve heard investors and advisors tell founders numerous times, “oh, I’m not worried about the tech, you’ve got that”. Unfortunately, the well-deserved acknowledgement and celebration of a founder’s amazing technical expertise can lead to insufficient support or resources for identifying and overcoming the technical challenges of commercialization.
Leaving academia can be an isolating transition. The walls of patents, trade secrets, and confidential information now prevent open peer review. You can’t just walk down the hall to your friends in the other labs and ask for feedback on the weird experimental data you got yesterday. With limited funds, you are likely to hire people with complementary skills, so your team has non-overlapping expertise. Later in the company’s lifecycle, when you have seven other PhDs in your field on the team, this becomes much less of a problem, but you need to survive and grow the business to get to that point first.
The most important part of scientific rigor that we should take with us from academia into our startups is peer review. This tried-and-true mechanism ensures that our experimental plans are solid before we expend the resources to execute them, and that our data interpretations and conclusions are well-founded. Without these elements, you could be acquiring data quickly, but that speed leads exactly nowhere if those data are inadequate for drawing meaningful conclusions. I’ve seen scientists and engineers unintentionally overgeneralize the results of an experiment numerous times, leading to wrong assumptions about the generalizability of the technology. These mistakes can be fatal for the business if left unresolved prior to an expensive scale-up investment. But they can also be prevented if more peer review is intentionally embedded into the startup’s operations.
Of course there are many ways to incorporate peer review into your startup, and it starts with the basics, like setting aside time to peer review experimental plans and data. Beyond that, here are three specific steps anyone can use to better leverage the power of peer review.
Always run control experiments
An unfortunately common line I’ve heard from folks working under time pressure in startups is “we don’t have time to run all these controls”. The truth is the exact opposite: you don’t have time not to run control experiments. Remember: speed only matters if you actually get to where you need to go. You could save a small amount of time by eliminating those data points, but then waste the entire experiment (and possibly subsequent ones) because you arrive at the wrong conclusion.
For example, during my time at Arrakis, when I was investigating the use of my carbon-negative minerals as plastics additives, I was measuring the changes in properties that these additives would impart to different plastic resins. If I had not run the base resin as a control alongside the mixture with my additive, I would have misattributed the change in thermal properties of the mixture to the presence of my additive, when in fact the vendor had mixed up materials and shipped me polyethylene instead of polypropylene for the base resin. Seeing the unexpected change not just in the mixture but also in the control resin saved me a lot of time chasing a nonexistent property of the mineral.
Peer review super-charges your ability to perform key control experiments. Bringing in different perspectives to review your experimental plan can help identify alternative explanations, define better controls, and prevent misguided data interpretations.
Try rubberducking
In software development, rubberducking is the practice of debugging your code or solving a problem by explaining it to someone who is not an expert. While the classic example involves an actual inanimate object like a rubber duck, I’d recommend the version where you speak to a colleague on your team who’s not a subject matter expert.
Although rubberducking is not, strictly speaking, peer review, it falls into the same theme of utilizing the people around you in creative ways to solve a problem. Sometimes it might feel like “you don’t have time” to explain problems to folks who can’t directly provide solutions, but the very act of articulating the problem can better harness your own expertise. I’ve solved many chemistry problems in startups this way, and rubberducking has an educational bonus when your conversation partner is an intern or co-op.
For example, during my time working in UV-based additive manufacturing, we were developing a custom resin/solvent pairing, and I was struggling to find the best solvent formulation that would easily remove residual resin without damaging the printed part. While explaining the problem to our co-op, I was able to articulate and define the key chemical characteristics we needed in a solvent much more clearly, and I came up with a new set of candidates, one of which was a great match for the application.
Engage with multiple advisors, and do it before it’s a crisis
This advice has two layers, the first being the diversity of advisors you engage with - i.e. the people you can rely on for peer review. Academic faculty are often part of technical advisory boards, and they bring lots of domain knowledge and academic rigor to the table. To complement their skills, think about adding other folks to the mix, whether they be formal advisors, informal mentors, consultants, etc. Recruit the kinds of people who have the subject matter expertise to do a deep dive on your work. They might not be big names, but they might be the people who’ve looked at thousands of NMR spectra before and know that the peak you are interpreting as a new product is actually trace solvent from when the intern cross-contaminated the sample.
The second layer is how and when you engage your advisors. Most commonly, I’ve seen founders turn to their advisory boards when things are on fire and there’s a huge problem to be solved, and that’s perfectly normal. However, engaging your technical advisors earlier and more often in peer review can prevent many of these fires. Advisors can play a key role in reviewing preliminary data, experimental plans, and system designs before they’re implemented. This approach to reducing project risk ensures that small, easily fixable mistakes at an early stage do not grow into very expensive and time-consuming problems later on.