The Matthew Effect, Mono-cultures, and the Natural Selection of Bad Science


From Climate Etc.

By John Ridgway


Any politician faced with the challenge of protecting the public from a natural threat, such as a pandemic or climate change, will be keen to stress how much they are ‘following the science’ – by which they mean they are guided by the dominant scientific narrative of the day. We would want this to be the case because we trust the scientific method as a selective process that ensures bad science cannot hope to survive for very long. This is not a reality I choose to ignore here, but it is one I would certainly wish to place in its proper context. The problem is that the scientific method is not the only selector in town, and when all the others are taken into account a much murkier picture emerges – certainly not one clear enough to place a dominant narrative upon an epistemological pedestal.

Feedback reigns supreme

Of all the selection pressures that operate within a scientific community, perhaps the most fundamental is not the peer review of academic papers but one that can be summarised as follows:

“For to everyone who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away.” (Matthew 25:29, RSV).

This is the so-called Matthew effect [1], also known as ‘cumulative advantage’. It is a positive feedback that serves to place fame and influence in the hands of a select few. This is true in life generally but also in academia specifically. For example, work that has already received a significant number of citations will tend to attract even more, if only because a currently large number of citations raises the likelihood of further reference resulting from any random selection from existing citation lists. This bibliometric phenomenon, in which success breeds success, was first studied by the physicist Derek de Solla Price, who emphasised its essentially stochastic properties:

“It is shown that such a stochastic law is governed by the Beta Function, containing only one free parameter, and this is approximated by a skew or hyperbolic distribution of the type that is widespread in bibliometrics and diverse social science phenomena.” [2]

In practice, however, the selection will be anything but random, since factors such as influence and prestige will also determine the likelihood of one’s work being cited. Either way, the higher-profile scientist will become even more successful.
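The purely statistical core of cumulative advantage is easy to demonstrate. The sketch below is a minimal, illustrative simulation of my own devising – in the spirit of de Solla Price’s model rather than a reproduction of it – in which each new paper cites one existing paper chosen with probability proportional to that paper’s current citation count plus one:

```python
import random

def simulate_cumulative_advantage(n_papers=10000, seed=42):
    """Price-style cumulative advantage: each new paper cites one
    existing paper, chosen with probability proportional to its
    current citations plus one (so uncited papers can still be picked)."""
    rng = random.Random(seed)
    citations = [0]   # citation count per paper; start with a single paper
    pool = [0]        # paper i appears (1 + citations[i]) times in this list
    for new_id in range(1, n_papers):
        cited = rng.choice(pool)   # uniform over pool = weighted over papers
        citations[cited] += 1
        pool.append(cited)         # the cited paper gains selection weight
        citations.append(0)
        pool.append(new_id)        # baseline weight for the newcomer
    return citations

cites = simulate_cumulative_advantage()
cites_sorted = sorted(cites, reverse=True)
top1pct = sum(cites_sorted[:len(cites) // 100])
print(f"Top 1% of papers hold {100 * top1pct / sum(cites):.0f}% of all citations")
```

Even with no quality differences and no bias whatsoever, the citation distribution that emerges is heavily skewed: a small minority of papers ends up holding a disproportionate share of all citations, purely because early luck compounds.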

The Matthew effect also has a bearing upon the chances of a paper being published in the first place. An editor or reviewer’s familiarity with the quality of an author’s existing canon of published work will make it easier to judge the latent merit of a submitted paper, improving that author’s chances of adding to his or her list of publications. A less well-known author will have no such advantage. This creates a positive feedback that can result in a mono-culture based upon the work of a relatively small number of dominant authors. Again, the Matthew effect may be purely statistical, requiring no particular bias or prejudice to be in evidence. As Remco Heesen and Jan-Willem Romeijn, the philosophers of science who have studied this effect, put it:

“This paper concerns biases that are rooted not in the prejudices of editors or reviewers, but rather in the statistical characteristics of editorial decision making…Hence, even if editors manage to purge their decision procedures of unconscious biases, they will be left with biases of a strictly statistical nature. The statistical biases contribute to the already existing tendency towards a mono-culture in science: a purely statistical Matthew effect.” [3]

There are in fact a number of ways in which mono-cultures may develop, each one involving the Matthew effect. For example, there is the feedback in which funding leads to success, which in turn leads to more funding. Also, a senior faculty member’s research interests will influence recruitment policy, thereby reinforcing faculty interest in those research areas [4]. Take, for example, the scientific mono-culture that quickly developed within the field of foundational physics. As physicist Lee Smolin explained back in 2006:

“The aggressive promotion of string theory has led to its becoming the primary avenue for exploring the big questions in physics. Nearly every particle physicist with a permanent position at the prestigious Institute for Advanced Study, including its director, is a string theorist; the exception is a person hired decades ago.” [5]

This dominance is not the fruit of the scientific method, since the vital element in which theories are tested experimentally is notable by its absence. This is not a case of a theory that ousted its contenders by proving more testable or by enjoying greater experimental verification. Its initial appeal stemmed from some early and quite spectacular theoretical successes, but string theory has since become mired in an arcane and thoroughly untestable set of mathematical conjectures that does not even qualify as a theory in the accepted sense. Instead, the ultimate dominance of string theory seems to be the result of positive feedbacks in which academic success became far more important than scientific achievement. In the words of Lee Smolin:

“Even as string theory struggles on the scientific side, it has triumphed within the academy.”

String theory’s rise to dominance is a classic case of what the Matthew effect can achieve when the scientific method is compromised. As such, it stands as a cautionary tale for any scientific field in which theorising and modelling ultimately outstrip the capacity for experimental confirmation.

Another of the problems with mono-cultures is that they can lead to a potentially unreliable narrative that acts as a societal beacon for normative thinking. As the narrative strengthens, and societal attitudes embed, so will the power to coerce a greater level of alignment within the scientific community. The consensus becomes a self-reinforcing social dynamic, for good or for bad. This is an example of a class of phenomena studied by organizational scientists Jörg Sydow & Georg Schreyögg:

“More often than not, organizations and also inter-organizational networks, markets or fields are characterized by dynamics that seem to run by and large beyond the control of agents…Among these mostly hidden and emergent dynamics, self-reinforcing processes seem of particular importance; they unfold their own dynamic, turning a possibly virtuous circle into a vicious one (Masuch, 1985).” [6]

Of course, no one who has been caught up in such a dynamic need assume the presence of subterfuge. That being said, the human race is no stranger to politicking and manipulation, and so bias and hoaxing remain optional extras. In particular, one has to be concerned that the growth of AI will increase the likelihood of problematic mono-cultures developing. As David Comerford, Professor of Economics and Behavioural Science, University of Stirling, points out:

“Just a few years ago it took months to produce a single paper. Now a single individual using AI can produce multiple papers that appear valid in a matter of hours.” [7]

Given that the Matthew effect is a numbers game, anything that can generate papers on an industrial scale must be of concern. And there is evidence of a trend for such papers to be written by ghost writers in the service of corporate interests – so-called ‘resmearch’. As David Comerford explains:

“While the overwhelming majority of researchers are motivated to uncover the truth and check their findings robustly, resmearch is unconcerned with truth – it seeks only to persuade.”

And that is before considering the possibility that individuals may be using AI to increase their productivity so as to game the Matthew effect in their favour. Either way, AI has reduced the costs of producing such work to a virtual zero, thereby placing greater pressure upon the scientific method to counter the emergence of potentially unreliable mono-cultures.

The natural selection of bad science

Whilst mono-cultures are to be avoided, they won’t usually be centred upon bad science. Indeed, in science there is always a guiding hand designed to prevent this happening. Work is routinely evaluated by peers for its quality and value, and such scrutiny should be to the benefit of good science. Except that the evidence suggests bad science can still thrive despite such scrutiny. There is another selection operating and, far from acting as a corrective force – filtering out poor work and removing both the purely statistical and the bias-fuelled positive feedbacks – it seems to be one that can actually promote bad science. The explanation for this troublesome effect has been given by Paul E. Smaldino and Richard McElreath. The opening statement of their paper’s abstract sets the scene:

“Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement.” [8]

The poor research design and data analysis alluded to here include the misuse of p-values and variations on the theme of data torturing, practices that have been widespread within the behavioural sciences for many years now. The problem arises because publication provides the primary method of reward, and yet publication requires positive results, which in turn encourages practices that lead to false positives. Richard Horton, editor of The Lancet, points towards the need to have the right incentives:

“Part of the problem is that no one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.” [9]

Smaldino and McElreath emphasise that no stratagem is required:

“This paper argues that some of the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures. We term this process the natural selection of bad science to indicate that it requires no conscious strategizing nor cheating on the part of researchers. Instead, it arises from the positive selection of methods and habits that lead to publication.”

They continue by pointing out the obvious fact that “Methods which are associated with greater success in academic careers will, other things being equal, tend to spread”. One would like to think that only the good practices spread, but this is clearly not the case. The practices that spread are those most strongly associated with career success, and career success unfortunately rests on criteria that correlate only partially with quality of work. In this instance, the weaker the statistical power of a study, the greater its chances of delivering a publishable positive result – and publication is what everyone seems to want.
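The arithmetic behind this incentive is worth making explicit. Using the standard positive-predictive-value calculation – with illustrative numbers of my own choosing, not figures from Smaldino and McElreath – one can compute what fraction of ‘significant’ findings are false once a significance filter decides what gets published:

```python
def published_false_positive_rate(power, prior=0.1, alpha=0.05):
    """Of the 'positive' findings that clear the significance filter,
    what fraction are false?  prior = share of tested hypotheses that
    are actually true; alpha = false-positive rate of the test."""
    true_hits = power * prior           # real effects correctly detected
    false_hits = alpha * (1 - prior)    # null effects passing by chance
    return false_hits / (true_hits + false_hits)

for power in (0.8, 0.4, 0.2):
    fdr = published_false_positive_rate(power)
    print(f"power={power:.1f}: {100 * fdr:.0f}% of published positives are false")
```

Holding the significance threshold fixed at 0.05, dropping power from 0.8 to 0.2 pushes the false share of published positives from roughly a third to roughly two thirds: cheap, underpowered studies generate publishable ‘positive’ results faster, but a literature selected from them is far less reliable.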

Fortunately, this is not a problem in which the scientific method stands idly by. Replication and reproducibility are its cornerstones and, as a consequence, the malpractice has manifested itself in the infamous ‘reproducibility crisis’ within science. Opinions differ regarding how serious the problem is; some would claim that the crisis is existential whilst others feel the problem is somewhat overstated. However, no one is claiming that the problem is easily rectifiable, which is unsurprising given that the problem has its roots in the reward structures that sustain academia [10].

So where does that leave us?

The social structures and reward mechanisms within science are such that both good and bad science can find itself the beneficiary of a natural selection, and it can be very difficult for the lay person to know which way the selection has operated when creating a dominant narrative. Knowing the strength of consensus is not nearly as important as understanding the mechanisms at play, and to assume that they are dominated by the scientific method is naïve. Add to that the statistical effects that predispose academia to the emergence of potentially damaging mono-cultures, and one has further reason to resist the temptation to automatically accept the narrative of the day.

It should be appreciated, however, that this is not an anti-science sentiment. It is precisely because social dynamics can entrench ideas irrespective of their epistemological validity that the scientific method is so important. Nevertheless, a mature appreciation of the scientific approach entails an understanding that the scientific method cannot hope to be one hundred percent effective in eradicating the vagaries and stochastics of consensus building. In particular, it cannot fully avoid statistical Matthew effects and their predisposition to create mono-cultures. That same appreciation should also include an understanding that there is really no need to invoke the idea of a scientific subterfuge. There is no conspiracy, only scientists doing their job.

Footnotes:

[1] The term was first coined in the context of the sociology of science by Robert K. Merton and Harriet Anne Zuckerman. See Merton R.K. 1968 “The Matthew effect in science”, Science, New Series, Vol 159, No. 3810, pp. 56-63. https://repo.library.stonybrook.edu/xmlui/bitstream/handle/11401/8044/mertonscience1968.pdf?sequence=1&isAllowed=y.

[2] de Solla Price, Derek J. 1976, “A general theory of bibliometric and other cumulative advantage processes”, J. Amer. Soc. Inform. Sci., 27 (5): 292–306, https://doi.org/10.1002/asi.4630270505.

[3] Heesen R., Romeijn JW. 2019 “Epistemic Diversity and Editor Decisions: A Statistical Matthew Effect”, Philosophers’ Imprint, Vol. 19, No. 39, pp. 1-20. http://hdl.handle.net/2027/spo.3521354.0019.039.

[4] In fact, respect for senior faculty members has a lot to answer for when it comes to establishing consensus. See Perret C. and Powers S. T. 2022, “An investigation of the role of leadership in consensus decision-making”, Journal of Theoretical Biology, Vol 543, 111094, https://doi.org/10.1016/j.jtbi.2022.111094.

[5] Smolin L. 2006 “The Trouble With Physics”, page xx, ISBN 978-0-141-01835-5.

[6]  Sydow, J., Schreyögg, G. 2013 “Self-Reinforcing Processes in Organizations, Networks, and Fields — An Introduction”. In: Sydow, J., Schreyögg, G. (eds) Self-Reinforcing Processes in and among Organizations. Palgrave Macmillan, London. https://doi.org/10.1057/9780230392830_1.

[7] Comerford D. 2025 “We risk a deluge of AI-written ‘science’ pushing corporate interests – here’s what to do about it”. The Conversation. https://theconversation.com/we-risk-a-deluge-of-ai-written-science-pushing-corporate-interests-heres-what-to-do-about-it-264606.

[8] Smaldino P.E., McElreath R. 2016 “The natural selection of bad science”, R. Soc. Open Sci., 3: 160384, http://doi.org/10.1098/rsos.160384.

[9] Horton R. 2015 “Offline: What is medicine’s 5 sigma?”, The Lancet, Volume 385, Issue 9976 p1380. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60696-1/fulltext.

[10] Leyser O., Kingsley D., Grange J. 2017, “Opinion: The science ‘reproducibility crisis’ – and what can be done about it”. University of Cambridge – Research News. https://www.cam.ac.uk/research/news/opinion-the-science-reproducibility-crisis-and-what-can-be-done-about-it.

