Learning to learn from inventions: electrophoresis, PCR, Sanger sequencing

Warning: The aim of this post is to understand the process and style of thinking that led scientists to discover electrophoresis, PCR, and Sanger sequencing. I decided not to explain these techniques, so this post might get confusing at times if you are unfamiliar with them.

What is the pattern of thinking that leads people to great discoveries? For example, were the critical tools we use today in biology all born like the apple falling on Newton's head? Could you have created sequencing or mass spectrometry? What kind of system does it take? Do discoveries come only when all the prerequisites are already there? Or do you plan for requirements and then check them off one by one? I ventured to answer these questions by looking at the history of the pillars of molecular biology - PCR, sequencing, and electrophoresis.

The most straightforward way to trace cross-generational reasoning is to create a timeline of each discovery and its necessary components. But this is not a mere history of techniques. It is a story of how small forgotten findings can sometimes trigger a whole line of research, how one peculiar bacterium can enable automation, and how inventions can arrive before the discoveries they rest on are ready.

Electrophoresis - 1931

  • 1800 - Voltaic pile is invented. For the first time, a steady electric current could be provided

  • 1801 - Gautherot first notes the phenomenon of electrophoresis (particles moved by electricity) in "Mémoire sur le galvanisme" (p. 203-210)

  • 1807 - Ferdinand Friedrich Reuss observes electroosmosis (liquid moved through a medium by electricity), publishes observations in 1809 (Notice on a new, hitherto unknown effect of galvanic electricity)

  • 1834 - Faraday presents the laws of electrolysis

  • 1860 - method of optical detection of moving boundaries (August Toepler) - widely used to study supersonic motion

  • 1930 - Arne Tiselius publishes his dissertation “The moving boundary method of studying electrophoresis of proteins.”

  • 1937 - First electrophoretic machine created by Arne Tiselius and described in "A New Apparatus for Electrophoretic Analysis of Colloidal Mixtures" (moving boundary electrophoresis, not capable of fully separating mixtures, as it was free of supporting media). Tiselius later won the Nobel Prize for the electrophoretic machine.

  • 1950 - Kunkel, after spending a year in Tiselius’ lab, started investigating a variety of electrophoretic media and discovered that washed starch grains allow for sharper separation

  • 1955 - While studying insulin, Oliver Smithies modifies the starch approach to create gel as a new medium for electrophoresis (Nobel lecture by Smithies). Although a minor insight, this was the step that enabled more reliable separation of macromolecules (now called zone electrophoresis, with “zone” indicating a clearer separation of lines on a gel)

Electrophoretic machine as designed by Tiselius

 
 

PCR - 1985

  • 1953 - Watson & Crick discover DNA structure

  • 1956 - Kornberg identified the first DNA polymerase

  • 1967 - Thomas Brock finds Thermus aquaticus (Taq), a bacterium able to survive at high temperatures

  • 1970 - Kleppe & Khorana published the first principles of PCR, proposing the process of DNA amplification through a 2-primer process (they didn't try it in practice)

  • 1976 - Alice Chien and John Trela purified Taq (thermostable) polymerase

  • 1984 - Kary Mullis rediscovers that one can synthesize a specific region of DNA by using two primers and tries it in practice.

  • 1985 - Cetus team publishes a PCR method. All steps were completed manually, and polymerase needed to be replaced at each stage.

  • 1988 - commercialization of Taq polymerase by Cetus enables automation of PCR

  • 1993 - Higuchi proposes the idea of monitoring PCR with fluorescence (Kinetic PCR Analysis: Real-time Monitoring of DNA Amplification Reactions). This enabled qPCR - real-time quantitative PCR with fluorescent binding tags.

 
The initial version of PCR managed to exist without thermostable polymerase and was implemented as a machine called "Mr. Cycle.”

src: https://americanhistory.si.edu/collections/search/object/nmah_1000862

Notebook of Mullis with his attempt to run PCR reaction

src:https://www.the-scientist.com/foundations-old/the-first-polyullmerase-chain-reaction-52020

 

Sanger Sequencing - 1977

  • 1962 - restriction and modification enzymes discovered by Werner Arber and Daisy Dussoix

  • 1965 - Robert Holley sequences the first tRNA using partial hydrolysis

  • 1965 - Sanger publishes the two-dimensional partition sequencing method

  • 1968 - Kaiser and Wu note that, when only three types of nucleotides are present, the reaction will terminate at the point where the missing type of nucleotide should be located (this served as a basis for the “minus” reaction in Sanger's plus-minus approach); Kaiser and Wu relied on some of the methods popularized by Sanger, including labeling with 32P isotopes and digesting large molecules into smaller ones

  • 1969 - Atkinson describes the rate, stoichiometry, and linkage of ddNTPs to DNA

  • 1971 - Paul Englund observes that T4 DNA polymerase degrades DNA from the 3’ end and stops only at nucleotides corresponding to the single dNTP present in the reaction mixture (this served as a basis for the “plus” reaction in Sanger's plus-minus approach)

  • 1972 - Walter Fiers sequences the first complete gene (the coat protein gene of the phage MS2)

  • 1975 - plus-minus sequencing by Sanger

  • 1977 - chain-termination method by Sanger (incorporates ddNTPs); used to sequence the first full genome, bacteriophage φX174 (the method relies on DNA polymerase)

  • 1977 - Maxam & Gilbert sequencing technique (does not rely on DNA polymerase)

  • 1987 - Hood & Hunkapiller substitute radioactive labels with fluorescent ones in chain-termination sequencing; incorporated computational analysis

Pattern

Components and foundations (physical and conceptual)

Gel electrophoresis can be used to separate and analyze particles based on their size and charge. In gel electrophoresis, one loads the mixture into a gel and passes an electric field through it. Molecules in a mixture have different weights, and, when displaced by electricity, they move at different speeds, creating bands that correspond to molecular weight. Overall, for the separation process, we need electrodes, a medium (gel), and a buffer in the electrophoretic machine. Conceptually, it's governed by the laws of electrolysis.
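The size-to-distance relationship can be sketched with a toy model. For many gels, the distance a band migrates is roughly linear in the logarithm of molecular size - a common empirical approximation; the constants below are illustrative, not measured values:

```python
import math

def migration_distance(size, a=50.0, b=12.0):
    """Toy model of band position: distance (in mm) decreases with
    log10 of molecular size. a and b are made-up gel constants."""
    return a - b * math.log10(size)

# Smaller molecules travel farther through the gel, so a mixture
# separates into bands ordered by size.
for size in [100, 500, 1000, 5000]:
    print(f"{size:>5} -> {migration_distance(size):.1f} mm")
```

The point of the sketch is only the monotonic ordering: lighter species run ahead of heavier ones, which is what turns a mixture into a readable ladder of bands.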

What was the problem the researcher was trying to solve when they invented the solution?

When Tiselius started working on the electrophoretic machine, he didn’t have any specific problem in mind (the lab he worked in focused on separation methods rather than on basic science). So his primary objective as a scientist wasn’t, say, to figure out how to separate proteins in blood serum (which was the first application he used electrophoresis for); instead, the goal was to look at all the existing separation methods and find ways to improve them - maximize the degree of separation of particles and the accuracy of detecting this degree of separation. In that way, Tiselius can be considered a classic tool-builder.

Were there alternatives at the time the method was discovered? And how were the modern-day improvements developed?

The story is always complicated if you try to assign credit. 0 to 1 in science can be both a categorical switch and an arithmetic sum of infinitely small ideas. Not surprisingly, the first work of Tiselius points to several different scientists working on electrophoresis. One of them, Botho Schwerin, already had patents on this method. Despite this, electrophoresis wasn’t yet used in labs, as existing apparatus didn’t offer any quantitative assessment, worked only with colored media (such that you could see the molecules creating streaks), and were inconvenient. The primary role of Tiselius was in connecting the dots and doing innovative engineering on the structure of existing machines to make them usable.

The ultracentrifuge was also around at that time and was used for the separation of liquids. Tiselius was actually a student of Theodor Svedberg, who received the Nobel Prize for his ultracentrifuge work. Svedberg was the one who told Tiselius to start working on electrophoretic methods as part of his Ph.D. I imagine Svedberg was also the one to share the idea of the refractive-index technique with Tiselius (Svedberg was a physicist by training and used the refractive-index technique in ultracentrifuges too). Part of the success, as always, was about being in the loop - I do not call it luck because even luck can be systematic.

Given all the information available, what were the possible questions/lines of reasoning that could lead researcher(s) from A to B? 

Throughout the publication, Tiselius pays attention to different aspects of improving electrophoretic machines. What are the techniques that can enable quantitative analysis of the resulting separation? To what extent can molecules of different masses impact each other if they move in the same direction? Does this impact affect the separation? Can we count them somehow? How do we remove these dreadful convection currents that distort the separation? How do we reduce the heat generated? Each of these is incremental by itself. However, put together, they enabled a machine that was orders of magnitude better than prior electrophoretic machines.

What made the work of Tiselius genuinely unique is the application of Schlieren photography developed by August Toepler. This technique allowed for the detection of media that couldn’t be seen by the human eye. But was it an easy connection to make, or did he have to dig through archives of patents to find inspiration? We can probably answer this by looking at how Tiselius iterates through all the possible designs for his machine (both in his dissertation and later work), trying to resolve the contradictions of the problem. If the moving boundary method only applies to colored colloids, how can we apply it to something colorless? Use the Tyndall effect (the phenomenon whereby you can see the clear path of light in fog)? That would limit us to only certain colloids… Use ultraviolet light from mercury lamps and get proteins to emit fluorescence? Fluorescence here is not highly reproducible, and, besides, we can’t use a water thermostat then (the water will become fluorescent after some time). Without a thermostat, convection currents will kill the boundaries of separation - a contradiction to what we want! Colorless media… Physicists have this refractive-index technique for mapping colorless gaseous media! Can we apply it here too?

The refractive-index method was at least 70 years old when Tiselius started working on electrophoresis but wasn’t necessarily forgotten - it was widely known and used among physicists like Ernst Abbe, Robert Wood, and Ernst Mach (G. S. Settles, Schlieren and Shadowgraph Techniques). His lab mentor, Nobel Prize winner Svedberg, also used the schlieren technique to detect separation after centrifuging a mixture. This is an excellent example of how insight can only come if you are aware of the tools that other fields offer. Luckily, there are many ways beyond a conventional degree to obtain broad awareness of the sciences: by checking the documentation of old and new inventions, by having curiosity conversations with people of different backgrounds, and by thinking about how different sciences could solve a problem from your field.

 
 

Pattern

Components and foundations (physical and conceptual)

Polymerase Chain Reaction (or PCR) allows one to create billions of copies of a specific DNA region. Its conceptual foundation is the usual process of DNA polymerization with small modifications. As in the typical polymerization reaction, it requires primers, dNTPs (nucleotide precursors), and DNA polymerase. However, in contrast to the polymerization reaction that occurs in nature, it also involves temperature cycling and a polymerase that can withstand the high temperatures of cycling (Taq polymerase).
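The "chain" in the name is the exponential arithmetic of cycling: each round of denaturation, annealing, and extension can at most double the number of copies of the target region. A back-of-the-envelope sketch (the efficiency parameter is an illustrative knob of mine; real reactions fall short of perfect doubling):

```python
def pcr_copies(initial, cycles, efficiency=1.0):
    """Copies after n cycles: N = N0 * (1 + E)^n,
    where E = 1.0 means perfect doubling every cycle."""
    return initial * (1 + efficiency) ** cycles

# A single template molecule after 30 ideal cycles:
print(pcr_copies(1, 30))  # 2^30, i.e. just over a billion copies
```

This doubling is exactly why a handful of template molecules is enough starting material - and why the method brought, as the post puts it, literally exponential improvements.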

What was the problem the researcher was trying to solve when the solution was invented?

PCR landed perfectly on the timeline of discoveries, as large-scale sequencing and the Human Genome Project would never have been possible if we didn’t have a way to make many copies of DNA. At the same time, one of Kary Mullis’s co-workers mentions how PCR was never designed to address any grand problem. Instead, “once PCR was solved, problems to which it can be applied started appearing.” It is not that scientists didn’t need more copies of a particular DNA for their experiments. It’s that everyone settled for a less convenient technique for making DNA copies - synthesis. In fact, Mullis himself wasn’t looking for ways to make more DNA. For him, PCR was “the possible outcome of a solution to a hypothetical problem that didn’t really exist.” And yet, it managed to bring exponential (literally) improvements to existing methods.

After getting a Ph.D. in chemistry, working in pediatric medicine, managing a bakery for two years, and working on rocket propulsion, Kary Mullis joined a biotech company, Cetus, where his primary role was probe (oligonucleotide) production for other scientists. As not much was known about oligonucleotide properties back then, learning involved tinkering. While his position didn’t require it, Mullis spent quite some time looking for temperatures at which probes would attach to nucleic acids and conditions under which different sequences would denature. Running the same process at different temperatures gave Kary Mullis the knowledge he later used to bring PCR to life. Playing around with the system in hand might not lead one directly to a foundational insight. However, “feeling” the properties of the system is essential for deep insights - and playing can help create this feeling. If anything, he “came away with the idea that oligonucleotides hybridize fairly rapidly.”

When Mullis invented PCR, he was looking for a solution to a basic science question - how can one identify a single base-pair mutation in beta-globin? The primary constraint was the modest quantity of available beta-globin. To detect this mutation, one would need to somehow increase the sensitivity of existing sequencing methods. Mullis was thinking about isolating the target region and then sequencing it with conventional methods. Since his specialization was the design of oligonucleotides (primers), he played with them as a primary tool for performing isolation experiments.

Was the solution unique? How did it change over time, and what were the changes optimizing for?

PCR falls under the category of inventions that came to the minds of several people. Kary Mullis has a famous story of driving in a car and having a sudden flash of enlightenment. However, about 14 years earlier, the same principles of PCR had been published by Kleppe and Khorana:

“The principles for the extensive synthesis of the duplexed tRNA genes which emerge from the present work are the following. The DNA duplex would be denatured to form single strands. This denaturation step would be carried out in the presence of a sufficiently large excess of the two appropriate primers. Upon cooling, one would hope to obtain two structures, each containing the full length of the template strand appropriately complexed with the primer. DNA polymerase will be added to complete the process of repair replication. Two molecules of the original duplex should result. The whole cycle could be repeated, there being added every time a fresh dose of the enzyme.”

from the original publication by Kleppe and Khorana

Unfortunately, they had trouble purifying enough polymerase for this idea; the design of primers was also still in its infancy. It was a critical breakthrough that wasn’t taken forward due to technical challenges (progressively solved during the next decade). What could have happened if Kleppe and Khorana didn’t treat the polymerase problem or probe design as impossible? Perhaps they could have found a Taq polymerase workaround themselves and claimed the Nobel prize for PCR almost a decade before Mullis. 

The biggest takeaway from the story of Mullis is about grit in scientific projects. Like Kleppe and Khorana, he faced challenges. From his notebook note “PCR 01” and his own stories, it is evident that in his first experiments, Mullis didn’t use cycling, skipped controls, and had only one tube to run the experiment. It took him many months of investigation to arrive at conditions that worked. Taking it to the finish line is exactly what differentiated him from Kleppe and Khorana, who never tried it in real life. Mullis’s paper was also rejected by Nature and Science, which is not entirely unheard of even for breakthroughs. Immunity to failure seems to be a vital skill in any worthy project, and this immunity was an essential component in the history of PCR.

It's been many years since PCR was invented, but the number of its alternatives is still increasing: loop-mediated isothermal amplification, nucleic acid sequence-based amplification, strand displacement amplification, multiple displacement amplification. All of these do not require thermal cycling, but they do have a slightly more complex mechanism. As a result, they are not as widely adopted as PCR (which recently proved itself again through COVID tests). Sometimes modifications are better, and sometimes they are just different.

Given all the information available, what were the possible questions/lines of reasoning that led researcher(s) from A to B? 

Thinking about isolating mutation regions, Mullis was playing around with different conditions and ways to do polymerization. Conventional polymerization requires one primer. But what if he added two? You can attach primers to the two sides of the region and isolate it that way. He still wasn’t thinking about amplifying DNA. Instead, he was focused on clearing stray fragments and building blocks from the solution, in the hope that it would increase sensitivity. Maybe employing polymerase twice would help incorporate the extraneous blocks?

“...some nagging questions still nagged at me. Would the oligonucleotides extended by the mock reaction interfere with subsequent reactions? What if they had been extended by many bases instead of just one or two? What if they had been extended enough to create a sequence that included a binding site for the other primer molecule? Surely that would cause trouble - I was suddenly jolted by a realization: the strands of DNA in the target, and the extended oligonucleotides, would have the same base sequences. In effect, the mock reaction would have doubled the number of DNA targets in the sample!”

from the interview of Kary Mullis

Invention is rarely a one-step process. The first insight about PCR was a thinking sprint, but making it usable was a marathon spanning years. The last steps in the PCR optimization process were directed towards automation - how do we avoid adding new polymerase at each step? The answer was found in nature - if we find bacteria that thrive in hot environments, we can borrow their polymerase. A polymerase that can withstand high temperatures wouldn’t be destroyed by each cycle of increased temperature in PCR. It didn’t take long before Cetus and Kary Mullis settled on Taq, a thermostable polymerase from bacteria (it had been found and isolated years before; the process would have been way slower if someone else hadn’t already done that work).

 

Pattern

Components and foundations (physical and conceptual)

The history of tool invention for sequencing is a graph with so many dependencies and co-dependencies that simplifying it down to a single “aha” moment would be hard and partially unfair. That is why here I focus only on a single approach at the dawn of sequencing, Sanger sequencing (or the chain-termination method), which dominated biology for several decades.

The main components of the chain-termination method are the DNA itself, polymerase (some sequencing techniques do not require it, such as the one by Maxam & Gilbert), and isotope-labeled ddNTPs (nucleotide precursors that terminate the polymerization reaction, also largely unique to this method). There is also a separation & detection step, which relies on chromatography or electrophoresis.

What was the problem the researcher was trying to solve when they invented the solution?

In general, all the work on sequencing, both first- and next-generation techniques, can be characterized as targeted exploration. Complex multi-step problems like DNA sequencing are rarely single flashes of insight, after all! The first method developed by Sanger for sequencing nucleic acids isn’t well-known and is called plus-minus sequencing. It had major drawbacks and was forgotten over time. The technique that we now call “Sanger sequencing” (or chain-termination) was largely created as a consequence of Sanger’s dissatisfaction with the plus-minus approach.

Past work and ideas sometimes serve as a mind trap. It seems comfortable and “reasonable” to add incremental changes to something you have been working on for a long time. This wasn’t the case for Sanger. He didn’t settle after his first Nobel Prize, which he received for determining the sequence of amino acids in insulin. He also didn’t try to incrementally apply the techniques he used for sequencing insulin to other proteins. Instead, he got himself into a field that was even more challenging and even less developed - determining the code of nucleic acids. He also didn’t declare victory after inventing the plus-minus technique and eventually went all-in on a different method - chain-termination.

Was the solution unique? How did it change over time, and what were the changes optimizing for?

There is a great abundance of methods to sequence DNA/RNA - all with their own tradeoffs. Partial hydrolysis, plus-minus, chain-termination, and the Maxam & Gilbert technique were the ones that came before next-generation sequencing. It is interesting to think about the properties of a problem for which a number of solutions are possible. It seems like an abundance of solutions is often the case for inventions with multiple steps. In the case of sequencing, there are multiple ways to break molecules into smaller pieces, multiple ways to label them, and multiple ways to get the reads. As a result, the combinatorics of solution generation is almost always overwhelming. Of course, not all of these combinations would be perfect, and some would not work at all. That’s why not everyone who tries to invent once becomes an inventor (but everyone who keeps trying will necessarily get somewhere).

The whole sequencing race is characterized mainly by optimization for speed. After all, sequencing could have been done before Sanger or Maxam & Gilbert - albeit terribly slowly. It took Robert Holley around 6 years to sequence and determine the structure of a tRNA that was only 77 nucleotides long! More on his method can be found in other articles, but it was pretty cumbersome, as you can tell.

Given all the information available, what were the possible questions/lines of reasoning that led researcher(s) from A to B? 

Right before he got interested in nucleic acids, Sanger received a Nobel Prize for determining the insulin sequence. This idea was novel - there were researchers working on determining composition, but the idea of a “sequence” was largely unheard of. This work hugely impacted how he later approached the sequencing of nucleic acids: the same radioactive labeling (to which Sanger was introduced by Chris Anfinsen in 1954), the same partitioning of bigger sequences into subsequences.

Sanger started work on sequencing nucleic acids with RNA, as did Robert Holley at around the same time, simply because RNA was smaller and thus more straightforward to break and reassemble. This work didn’t fully translate to DNA, though - when he applied the degradation procedure from RNA sequencing to DNA, it failed because of how big DNA molecules are.

One of the standard inventing techniques you probably have heard of is “do the opposite.” If you can’t sequence by degrading, then maybe you can sequence by synthesizing (or polymerizing, to be precise). This was the foundation of both of the sequencing techniques by Sanger - plus-minus and chain-termination.

The plus-minus approach drew on two independent works: the “minus” stage was invented by Wu and Kaiser, while the “plus” step was taken from the research of Paul Englund. The idea is to produce nucleotide fragments with different endings (as it was later in the chain-termination approach). The minus stage includes only three nucleotides (of the 4 that are needed) and DNA polymerase I: polymerization proceeds until a missing nucleotide is encountered. In contrast, the plus stage creates nucleotide fragments by having only one nucleotide present in the reaction, along with a T4 polymerase: T4 has 3’-5’ exonuclease (proofreading) activity and degrades the nucleic acid until it encounters the nucleotide present in the reaction. By preparing 4 test tubes each for the plus and minus reactions, we can generate a series of fragments with overlapping sequences. The labeled ends of these fragments can be read by electrophoresis or chromatography techniques.
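As a toy illustration (my own simplification, not Sanger's protocol), the logic of the minus reaction fits in a few lines: in the tube missing base b, synthesis halts just before the next occurrence of b, so collecting fragment lengths from the four tubes pins down which base sits at each position - assuming, idealistically, that a fragment is observed for every halt site:

```python
def minus_fragments(template, missing_base):
    """Fragment lengths from the 'minus' tube lacking one dNTP:
    extension halts just before each occurrence of that base."""
    return [i for i, base in enumerate(template) if base == missing_base]

def read_minus_reactions(template_length, fragments_by_base):
    """A fragment of length i in the minus-b tube means the base at
    (0-indexed) position i is b."""
    seq = ["?"] * template_length
    for base, lengths in fragments_by_base.items():
        for i in lengths:
            seq[i] = base
    return "".join(seq)

template = "GATTACA"
tubes = {b: minus_fragments(template, b) for b in "ACGT"}
print(read_minus_reactions(len(template), tubes))  # -> GATTACA
```

The real method was messier (random-length pre-extension, overlapping products), but the core inference - fragment length plus tube identity reveals a base position - is what carried over into chain-termination.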

While slow, the plus-minus technique was still a major breakthrough. Incorporating an acrylamide gel-based system instead of homochromatography enabled the readout of results with autoradiography, allowing for more precise reconstruction and saving time. Sanger wasn’t actually the one to work out the acrylamide gel-based approach - he got help from John Donelson, who went through a lot of trial and error before he got it to work (despite these contributions, John Donelson remained unknown to history). The separation lines were sometimes mixed and unclear, so Sanger started exploring other techniques. Just a few years before this, in 1969, Arthur Kornberg had conducted experiments with dideoxynucleotide triphosphates (similar to the building blocks of nucleic acids, A, C, T, G, but terminating the reaction once incorporated). After ddNTPs are incorporated at various spots, you end up with a mixture of sequences of different lengths. Sanger learned about dideoxynucleotides while reading the textbook “DNA Replication” by Arthur Kornberg (chapter “DNA Polymerase I”). He struggled to find ddNTPs for his experiment, so he was considering abandoning this idea after all. I honestly couldn’t reconstruct the scientific reasoning for why using ddNTPs would create a clearer separation of bands. Technically, the product of chain-termination reactions is the same - a lot of nucleotide fragments labeled with radioactive probes. For some reason, however, the chain-termination reaction leads to a clearer separation. It does seem that it might have been a brute-force search through a number of possible ways to generate fragments of DNA. Here is an official account from Sanger regarding this:

“For the approach to work, we needed to be able to prepare a mixture in which the various end products (all with the same residue at the 3' end) were present in about equal amounts. In the plus and minus method, this was not the case, and the sizes of the products were distributed over a rather narrow range so that only relatively short sequences could be determined from one incubation. Another way of achieving the same effect was suggested by the work of Kornberg and his colleagues with analogs of the normal DNA polymerase substrates, which act as chain-terminating inhibitors.”
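In code, the dideoxy idea differs from the minus reaction by exactly one base: a ddNTP is incorporated and then stops the chain, so a fragment's length directly names the base at its final position, and reading the gel from the shortest band to the longest spells out the sequence. A minimal sketch of this readout (again assuming every termination site yields an observed fragment):

```python
def sanger_read(template):
    """Simulate four ddNTP tubes and read the 'gel' shortest-first.
    A fragment of length L terminated by ddNTP b means position L is b."""
    ladder = {}  # fragment length -> terminating base
    for dd_base in "ACGT":
        for i, base in enumerate(template):
            if base == dd_base:
                ladder[i + 1] = dd_base  # ddNTP incorporated, chain stops
    # Reading lanes from the shortest fragment to the longest
    # reproduces the template sequence.
    return "".join(ladder[length] for length in sorted(ladder))

print(sanger_read("GATTACA"))  # -> GATTACA
```

Notice that every position of the template appears in the ladder exactly once, which is the "about equal amounts over all lengths" property Sanger describes above and which the plus-minus products lacked.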

Summarizing

When I think about scientists in the past, I always feel intimidated by just how many skills they must have had to do things. However, truth be told, you do not need to know it all. For example, when Sanger first started working on sequencing, he was intimidated by the chemistry of primer design. Given that his group was small, he had to seek help outside (and got it from Hans Kossel). Tiselius, being a chemist by training, built only mockups of the electrophoretic machines and outsourced the design to the manufacturing company F. Hellige & Co. Self-reliance is a beautiful idea, but you can save so much more time by outsourcing some of the subproblems.

The same holds for independence in ideation. Neither Sanger, Tiselius, nor Mullis existed in a vacuum. In all of these cases, you can see them cite work that is pretty similar to theirs. With a ready foundation, they were the ones to incorporate the essential pieces of the puzzle - be that a 2-primer process, redesigned nucleotides (ddNTPs), or Schlieren optical detection. People rarely go from 0 to 1 in science single-handedly - and even 1 to n takes multiple research groups. Sanger talks about it in one of his reflections:

“I have been very fortunate to work in two first-class laboratories, both fairly young and occupied by enthusiastic scientists, who were interested not only in what they were doing but also in other people's work and keen to exchange ideas. It is easy to thrive in such an atmosphere. I may have had some good ideas, but from where did I get them? Perhaps from some chance conversation that I have now completely forgotten.”

“The early days of DNA sequences” by Sanger 

Even if you are independent in your work, you still “stand on the shoulders of giants.” In the case of electrophoresis, we needed the laws of electrolysis, a source of steady electric current, and a medium that doesn’t absorb our mixture. As for PCR, in its foundational principles published by Kleppe, there are sketches of DNA structure all over the paper, possible only because the structure of DNA was already known at that point. This is not to say that all the prerequisites must be in place for the invention to happen. For example, PCR came to life without thermostable polymerase; electrophoresis separation was happening on paper for more than a decade, and Sanger sequencing for a while didn’t have any sequence-alignment software. It is, perhaps, a reminder that gaps and imperfections in a plan should not prevent one from realizing the bigger vision.

What is shared among all of these inventions is that there is always an initial and an updated version. And the updated version, as in both the PCR and electrophoresis cases, might not be proposed by the initial creator - that is, the mind that improves something can be independent of the mind that had the foundational insight. Among all the papers and dusty patents out there, there is some tool that couldn’t have been materialized or scaled 30 years ago - and you can be the person who goes on to rediscover and improve it.

Were these inventions systematic and planned, or just random? Neither, really. Electrophoresis development became more systematic once we had a concrete objective to optimize for - separation of mixtures - and seemed to be a random walk for a century before that. PCR, again, was a consequence of a solution to a problem that didn’t really exist. At the same time, sequencing was pretty much goal-oriented - we wanted to know the sequence of large and bulky DNAs, and we did not want to wait decades for it. There is no template to follow here, and it seems to depend largely on the style of the scientist. People who, like Mullis, play around with the system for the fun of it are more likely to stumble upon random solutions. The alternative, objective-oriented path makes you drill into the system in a less random way. Yet both of these require systematic question answering. Whether your style is child-like curiosity or strategic problem-solving, asking the right questions is what leads one from A to B - and if I had to simplify science down to one description, this would be it.
