Analytics helps improve data collection, clinical trials and public policy

In a new analysis, a team of researchers unpacks a range of biases in epidemic research, from data collection to clinical trials, and offers a game-theoretic approach to addressing them. The work sheds new light on the pitfalls of developing and deploying technologies in the fight against global crises like COVID-19, with a look at future pandemic scenarios.

“Even today, the empirical methods used by epidemic researchers suffer from flaws in design and execution,” says Bud Mishra, a professor at New York University’s Courant Institute of Mathematical Sciences and lead author of the article, which appears in the journal Technology & Innovation. “In our work, we illuminate common but remarkably often overlooked pitfalls that plague research methodologies – and introduce a simulation tool that we believe can improve methodological decision-making.”

Even in an age when vaccines can be successfully developed in months, combating ailments in ways unimaginable in previous centuries, scientists can still be unwittingly hampered by flaws in their methods.

In the article, Mishra and his co-authors, Inavamsi Enaganti and Nivedita Ganesh, NYU graduate students in computer science, explore some standard paradoxes, errors, and biases in hypothesis formation and testing, and show how they are relevant to work aimed at fighting epidemics. These include Grue’s paradox, Simpson’s paradox, and confirmation bias, among others:

Grue’s paradox

The authors note that research has often been hampered by errors related to inductive reasoning, falling under what is known as Grue’s paradox. For example, if all emeralds observed during a given period are green, one might conclude that all emeralds are green. However, if we define ‘grue’ as the property of being green until a certain point in time and blue thereafter, the same observations provide inductive evidence for the conclusion that all emeralds are ‘grue’ just as strongly as for the conclusion that all emeralds are green, making it impossible to draw a definitive conclusion about the color of emeralds from the observations alone.

“When constructing and comparing hypotheses in the context of epidemics, it is essential to identify the temporal dependence of the predicate,” write the authors. These include hypotheses about a virus mutating, inducing herd immunity, or recurring waves of infection.
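The temporal dependence the authors warn about can be illustrated with a small sketch. Everything below is illustrative and not from the paper: two hypotheses that agree on every observation made before a changeover time T, yet predict different futures, so induction over past data alone cannot decide between them.

```python
# Illustrative sketch of Grue's paradox (names and numbers are hypothetical).
T = 10  # hypothetical changeover time; all observations so far precede it

def hypothesis_green(t):
    """'All emeralds are green': the predicate is constant over time."""
    return "green"

def hypothesis_grue(t):
    """'All emeralds are grue': green before time T, blue thereafter."""
    return "green" if t < T else "blue"

observations = range(T)  # everything observed so far happened before T

# Both hypotheses fit every past observation equally well...
assert all(hypothesis_green(t) == hypothesis_grue(t) for t in observations)
# ...yet they diverge on the future, so the data cannot pick a winner.
assert hypothesis_green(T) != hypothesis_grue(T)
```

An epidemic-flavored version of the same trap: “this variant is mild” and “this variant is mild until it mutates” fit identical past case data.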

Simpson’s paradox

“Simpson’s paradox is a phenomenon where patterns seen in data when stratified into different groups are reversed when combined,” the authors write. “This effect is widespread in academic literature and notoriously perverts the truth.”

For example, if in a clinical trial 100 subjects undergo Treatment 1 and 100 subjects undergo Treatment 2, with success rates of 40% and 37%, respectively, one would assume that Treatment 1 is more effective. However, if the data is stratified by genetic markers – say, genetic marker A and genetic marker B – the comparison can flip: Treatment 1 may appear superior in the aggregated population, yet Treatment 2 may outperform it within each subgroup.
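The reversal can be reproduced with concrete counts. The subgroup numbers below are hypothetical, chosen only so that the aggregate rates match the 40% and 37% from the text while Treatment 2 wins inside both marker groups:

```python
# Hypothetical trial counts illustrating Simpson's paradox.
# Format: {treatment: {marker: (successes, patients)}}
trials = {
    "Treatment 1": {"A": (35, 70), "B": (5, 30)},
    "Treatment 2": {"A": (11, 20), "B": (26, 80)},
}

def rate(successes, patients):
    return successes / patients

for treatment, groups in trials.items():
    total_s = sum(s for s, _ in groups.values())
    total_n = sum(n for _, n in groups.values())
    per_group = {g: f"{rate(s, n):.1%}" for g, (s, n) in groups.items()}
    print(f"{treatment}: overall {rate(total_s, total_n):.1%}, by marker {per_group}")

# Treatment 1 wins overall (40.0% vs 37.0%), yet Treatment 2 wins within
# BOTH subgroups (55.0% > 50.0% for marker A, 32.5% > 16.7% for marker B),
# because the treatments were given to very different mixes of patients.
```

The driver of the paradox is the unequal allocation: most Treatment 1 patients carry marker A (the easier group), while most Treatment 2 patients carry marker B.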

Confirmation bias

The widely known phenomenon of confirmation bias – the tendency to seek out and recall data more readily when it supports a researcher’s hypothesis – also plagues outbreak research, the authors note.

“This phenomenon can already be observed in the context of COVID-19 in the selective gathering of data to paint a picture that supports popular belief,” they write. “For example, evidence that countries practicing strict lockdowns and social distancing improved public health has been given more weight than evidence suggesting that countries relaxing their measures saw a similar reduction in their number of cases. Moreover, other variables that might be as influential as the lockdown, but are contextual and vary across geographies – such as population density or vaccination history – might have been overlooked.”

To address these methodological challenges, the team created an open-source epidemic simulation platform (Episimmer) that aims to provide decision support to help answer user questions regarding policies and restrictions during an epidemic.

Episimmer, which the researchers tested in several simulated public health emergencies, performs “counterfactual” analyses – measuring what would have happened to an ecosystem in the absence of interventions and policies – helping users discover and refine opportunities and optimizations in their COVID-19 strategies. (Note: The platform’s Python package is available on this page: https://pypi.org/project/episimmer/ ). These could include decisions such as “What days to be remote or in-person?” for schools and workplaces, as well as “Which vaccination routine is most effective given local interaction patterns?”
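The counterfactual idea can be sketched in a few lines. To be clear, this is a generic toy model, not the Episimmer API: a discrete-time SIR epidemic run twice, once without and once with an intervention, so the difference in outcomes estimates the intervention’s effect. All parameters (transmission rate `beta`, recovery rate `gamma`, population size) are illustrative.

```python
# Toy counterfactual analysis with a discrete-time SIR model.
# NOT the Episimmer API; a generic sketch with illustrative parameters.
def run_sir(beta, gamma=0.1, population=10_000, infected0=10, days=200):
    """Simulate an SIR epidemic; return the total ever infected."""
    s, i, r = population - infected0, float(infected0), 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
    return r  # cumulative recoveries approximate everyone ever infected

baseline = run_sir(beta=0.3)   # counterfactual: no intervention
lockdown = run_sir(beta=0.15)  # intervention halves the transmission rate

print(f"infections averted by the intervention: {baseline - lockdown:,.0f}")
```

Running the same dynamics under both scenarios and differencing the results is the essence of the counterfactual measurement the platform automates at far greater fidelity (agent-level interactions, schedules, vaccination policies).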

“Faced with a rapidly evolving virus, inventors must experiment, iterate, and deploy solutions that are both creative and effective while avoiding the pitfalls that plague clinical trials and related work,” says Enaganti.

The team conducted their research as part of a larger self-assembled multidisciplinary international research group, dubbed RxCovea, and enabled the deployment of their tools in India as part of the Campus-Rakshak program.
