Research organization patterns, research process patterns, and my preferences

When I started my PhD, I knew nothing about academia and thus spent a lot of time and effort mining its unspoken rules. I wish someone had lent me a hand rather than leaving me to wander in the darkness. This painstaking experience inspired me to help younger researchers so that they could enjoy smoother sailing in their intellectual journeys. In this post, I will try something similar but more profound.

Indeed, I will go beyond the unspoken rules and discuss the modality of research itself. In particular, I will address how research can be conducted on an organizational level and on an individual level. This is not a summary of existing practices in the world; rather, it is an exploration of all possible research modalities, currently existing or not. It is a map for newbies curious to explore the intellectual world and a reference for team leaders confused about the arrangement of their groups.

In addition, I will add my personal flavor by sharing my opinions and preferences on these issues. I do not mean to prescribe a template for how research should be conducted, but I do intend to discuss the pros and cons of the various modalities. Like anything on the Internet, what I say should be taken with a grain of salt. Feedback and personal thoughts are welcome.

In the following, I will address two questions surrounding research:

  1. Within a group, how can individuals cooperate with each other?
  2. How can a principal investigator lead his group to approach the funded research question, or how can an independent researcher approach his own research question?

Centralized vs. decentralized

Research organization can be centralized or decentralized. The centralized pattern is also referred to as the manager-workers pattern or the master-slave pattern: the principal investigator (PI) sees himself as a manager who delegates the workload to his subordinates and students. The decentralized pattern is also referred to as the democratic pattern or the federated pattern: each individual is responsible for his own research question whilst still being able to solicit expertise and aid from others.

Centralized pattern

The centralized pattern is simple in form but definitely not simple in practice.

A shitty soldier shits himself, whereas a shitty general shits the whole division. – An old Chinese saying

The efficiency of this pattern relies solely on the manager, and it is the manager who is to blame if anything goes wrong.

To manage correctly, every instruction you give should be accurate, unambiguous, and executable. You cannot just tell your workers to “go find some good stuff for me” without first specifying what “good stuff” means. If the instruction is high-level (e.g. when you are a high-level manager), you should set up evaluation criteria (known as KPIs in industry).

This is easier said than done. A manager's quality shows in his perception of an instruction's feasibility: a good manager must understand the underlying complexity of the task. If your girlfriend tells you to bring her the Moon, she is definitely not going to be a good manager.

Similarly, a high-level manager's quality shows in his evaluation criteria. The criteria should at least serve the goal; for example, IT companies do not evaluate employees by their presence in the office. A good manager will also work out criteria that are less likely to be gamed.

Last but not least, the title of good manager is not the top rung of the ladder. Suppose that under your management your team succeeds in answering the research question or delivering the product; that proves you a successful manager. Nevertheless, it does not prove you a successful leader.

Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others. – Jack Welch

  • Pro: Being a worker is simple and straightforward.
  • Pro: The ceiling of efficiency can be very high.
  • Con: It relies heavily on the ability of the manager.
  • Con: The manager is a single point of failure; even his health matters.

Decentralized pattern

In the decentralized pattern, every researcher is a peer and thus an equal. Each has his own research question, though that does not prevent him from helping others.

For instance, he can offer to proofread the others' manuscripts. As we all know, a manuscript should be proofread over and over before submission. Furthermore, some may be talented at coding, whilst others may be talented at illustration; by working together, each brings a unique asset to the group. It hence becomes a micro Silicon Valley, where economies of agglomeration arise.

Still, this is easier said than done. The pattern requires the peers to be collegial and willing to contribute to other people's projects rather than take free rides. People from developing countries, where competition is a harsh reality, are less likely to share this mindset. In contrast, countries where geeks originate are more likely to produce such a mindset, which partially explains why Silicon Valley appeared in the US.

Also, peers in this system need training. You cannot throw someone into the water and expect him to learn to swim by trial and error; even the most talented among us may feel frightened or even betrayed in such circumstances.

  • Pro: The system does not rely on a single node; any subsystem can work nearly equally well.
  • Pro: Everyone feels respected.
  • Con: Each peer needs to be responsible not only for himself but also for the other peers.
  • Con: Newbies may be frightened if thrown into this system without being well equipped in advance.

Why this distinction matters

It is important to distinguish these two patterns for two reasons. Firstly, both patterns are acceptable, but it is unacceptable not to know which pattern you are in, as each pattern distributes responsibilities and expectations differently. In the centralized pattern, the workers are expected to execute the manager's instructions accurately and are not allowed to deviate from his plan. In the decentralized pattern, each peer is expected to set up his own plan and own up to his bad choices. If you do not understand which type of system you are in, you are unlikely to live up to the expectations placed on you.

Secondly, there exist hybrid patterns, especially bad hybrids, that attempt to compromise between these two archetypes. For instance, some managers in the centralized pattern blame the workers for the failure of a project. This is unacceptable as long as the workers have done what is necessary to comply with the manager's instructions; it is the manager's fault if he fails to foresee the outcome and take measures in advance. Another example is a peer in a decentralized system who is not given the resources necessary to conduct the research. I see too often that mentors ask students to accomplish a feat with literally zero resources.

Even the most skillful cook becomes helpless when he runs out of ingredients. – An old Chinese saying

The key here is the ownership of the research project. The one who owns the project is, and should be, responsible for it. Responsibilities can be shared, but resources should always match responsibilities and thus be shared accordingly. The Chinese market economy serves as a cautionary example concerning ownership: although China has become the second largest economy in the world, the ambiguity of ownership creates a mismatch between responsibilities and resources and thus hinders China's further growth.

Theory vs. experiment

There are two types of researchers: those good at theory and those good at experiments. In the hard sciences (e.g. physics), the former formulate theories to explain phenomena, whilst the latter design and conduct experiments to validate those theories. In computer science (e.g. machine learning), the former develop theory to prove complexity or computability results, whilst the latter design algorithms to achieve the desired complexity and accuracy.

Both theory and experiment are important, and they complement each other. Although there is a division of labor between these two groups of researchers, there are always attempts to cross the line so as to make one's theory or experiment more legitimate. I am not qualified to talk about physics, but I can talk about statistics and machine learning. In the following, I will tell two historical stories. They are recovered from my (corrupted) memory, so I cannot guarantee their accuracy; readers are encouraged to verify them on their own.

The first story is about the birth of compressed sensing. Some computer scientist in the US accidentally discovered, while playing with his research tools, that the L1-norm helps a lot in image processing. He then visited Terry Tao (or Emmanuel Candès) and told him about the phenomenon. Tao and Candès investigated the problem and developed the theory now known as compressed sensing, or compressive sensing.

The second story is about the birth of the support vector machine (SVM). By the name alone you can tell that it is a monster (cf. Frankenstein) built by some crazy theorist. Indeed, this theorist is Vladimir N. Vapnik, the father of statistical learning theory. Statistical learning theory empowered him and led him to the discovery of the SVM. You can see him write proudly in his papers about how the advance of theory gave birth to more powerful algorithms.

Those above are geniuses. More often, what we encounter in reality are impostors. Today we frequently see algorithmic papers filled with decorative and intimidating math, like an ugly girl too fearful to go out without painting her face first. Theorists are no better: they do not care about the correctness of their theory; they care only about finding someone who can write code to support it. Each theorist is an emperor in his theoretical clothes. No one cares about the truth.

I was surely exaggerating above; the reality might not be so disheartening. The take-home message is that there are two archetypes of research process: one using theory as the main and experiment as the side, and one using experiment as the main and theory as the side. The side exists only to support the main; in no case is the side allowed to undermine the main.

My preference: dual wielding

I like neither of the two aforementioned patterns. Instead, I have my own style, dubbed dual wielding. Since I am confident in both theory and experiment, I equip myself with both during the research process. In contrast to the two patterns above, where the mix happens only at the end of the research process, in my practice the mix happens during the research process, again and again.

Indeed, I use each to support the other. I use the theory to guide my experiment design, and the design covers a set larger than the theory can account for, so that the experiment not only gives feedback on the theory but also conveniently explores areas beyond it. In this way, the experiment either validates the theory or contradicts it, which leads me to either expand the theory or revise it. The renewed theory then guides me to expand my experiment design, and a new loop begins.
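As a toy illustration of this loop (everything here is hypothetical and deliberately simplified, not taken from my actual research), imagine that the "theory" is a closed-form claim and each "experiment" checks it on a range wider than what was previously verified:

```python
# Toy sketch of the dual-wielding loop: a closed-form "theory" guides the
# "experiment", and each round of experiments widens the verified territory.

def triangular(n):
    """The process under study: the n-th triangular number, computed naively."""
    return sum(range(1, n + 1))

def claim(n):
    """The current theory: a closed-form formula for the same quantity."""
    return n * (n + 1) // 2

def dual_wielding(rounds=3, step=5):
    verified_up_to = 0
    for _ in range(rounds):
        # Experiment design: re-cover the verified range plus unexplored cases.
        trial = range(1, verified_up_to + step + 1)
        if all(triangular(n) == claim(n) for n in trial):
            verified_up_to = trial[-1]   # the theory's scope expands
        else:
            break                        # a contradiction would force a revision
    return verified_up_to
```

In real research, of course, "expanding the theory" means more than extending a verified range, and a contradiction triggers a revision rather than a halt; the point is only the shape of the loop.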

I would compare this research style to DNA's double-helix structure. DNA consists of two complementary strands, each of which can serve as a template to replicate the other. Similarly, in my research process, the theory and the experiment each serve as a backup for the other. This style improves reliability and mitigates hazards. My most recent research on epidemiology is an excellent demonstration of it; without this style, I might have missed my discovery.

Research engine

You may wonder whether it is too costly to conduct the experiments, in my case to write the code, again and again. I cannot speak for others, but for me the extra cost is often negligible. The secret is that I do not write the experiment; I write the engine, by which I mean modular, reusable code. That is, the experiment in each new iteration can reuse the code of the previous one.

Sure, I need to refactor now and then, and occasionally even redesign the API, which requires some extra work. I find such efforts worthwhile, though. On the one hand, by analyzing the previous code, I am more likely to find hidden glitches, which adds another level of insurance and hence improves reproducibility. On the other hand, as a side effect, my programming skill itself improves in the process.
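To make the idea concrete, here is a minimal sketch of what I mean by an engine (all component names are illustrative, not taken from my actual code): the experiment loop is written once, and each new iteration swaps in components instead of rewriting the script.

```python
# A toy "research engine": the run loop is written once; each iteration of the
# research reuses it and only swaps the component under study.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Experiment:
    name: str
    prepare: Callable[[], Sequence[float]]      # data loading / simulation
    model: Callable[[Sequence[float]], float]   # the method under study
    metric: Callable[[float], float]            # how the result is scored

    def run(self) -> float:
        data = self.prepare()
        estimate = self.model(data)
        return self.metric(estimate)

# Reusable components, shared across iterations
def toy_data() -> list[float]:
    return [1.0, 2.0, 3.0, 4.0]

def mean_model(xs: Sequence[float]) -> float:
    return sum(xs) / len(xs)

def median_model(xs: Sequence[float]) -> float:
    ys = sorted(xs)
    mid = len(ys) // 2
    return (ys[mid - 1] + ys[mid]) / 2 if len(ys) % 2 == 0 else ys[mid]

def abs_error(estimate: float, truth: float = 2.5) -> float:
    return abs(estimate - truth)

# Iteration 2 reuses everything from iteration 1 except the model.
exp1 = Experiment("mean", toy_data, mean_model, abs_error)
exp2 = Experiment("median", toy_data, median_model, abs_error)
```

The design choice is the point, not the toy content: because `prepare`, `model`, and `metric` are interchangeable parts, the next round of experiments costs one new component rather than one new script.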

Virtue is its own reward. – Donald Robertson

I am so proud of this practice that I have decided to coin a dedicated term for it, research engine, drawing inspiration from the term game engine. I am aware that I am not the only one using or developing modular, reusable tools for research, but let me coin the term anyway, since I am the one taking the initiative to popularize the idea here. In addition, I feel entitled to do so because I am both the user and the developer of my research engines, unlike in some organizations where the users and the developers are two separate groups of people.

That said, I do need to warn my readers that the term research engine has already been used in combustion-chamber development, which has nothing to do with the concept I am trying to evangelize here.

If you have confidence in my programming skills, you may want to check out my TSCV, a scikit-learn extension for time-series cross-validation, and easyMH, a lightweight package for Metropolis-Hastings sampling.

Why me?

If you appreciate my practice, you may wonder why other researchers do not do the same. The reason is simple: they are not willing to. If a researcher is excellent at theory, chances are that he dislikes dirty work such as programming, and vice versa. Moreover, if he is confident enough in his theory, he tends to design one final experiment directly rather than experiment progressively. To further reduce the cost, he may even hire someone to run the experiment for him.

In contrast to those two groups of researchers, I am curious about both disciplines and eager to explore both worlds. This is the drive that sustains my practice. The drawback is that I am merely good at both rather than excellent at both, for now at least. One person I know of who is excellent at both is Donald E. Knuth.

The dual-wielding style makes me not only more independent but also easier to communicate with. Since I share a common language with both sides, I can more easily understand others and convey my ideas to them, as long as I know my audience. This is one of the reasons why so many companies have started to recruit full-stack developers.


Summary

I discussed two research organization patterns: centralized and decentralized. The former is also known as the manager-worker pattern; the latter is a democratic one based on peers. Both patterns are acceptable, but it is unacceptable not to know which pattern you are in. It also helps to avoid bad hybrids that compromise the two archetypal patterns.

I discussed two research process patterns: one uses theory as the main and experiment as the side; the other uses experiment as the main and theory as the side. I also presented my own dual-wielding style, which differs from both. In dual wielding, the theory and the experiment support each other, pushing the research forward while making it more reproducible.

I coined the term research engine, drawing inspiration from the term game engine, to refer to modular, reusable research tools. Although I am not the only one using or developing such tools, I might be among the rare ones who are simultaneously the user and the developer of them. By giving an old thing a new name, I intend to raise public awareness of rigor in the research process, so that our research becomes more reproducible.

Written on May 7, 2020