MIT - Critical Data

A global consortium of computer scientists, engineers, and clinicians led by the MIT Laboratory for Computational Physiology.

Connect with MIT Critical Data on social media:

Twitter: https://twitter.com/mitcriticaldata
Instagram: https://www.instagram.com/mitcriticaldata/

Critical Data Affiliates:
- Lab for Computational Physiology: http://lcp.mit.edu/
- Sana: http://sana.mit.edu/

01/15/2026

Physical restraints are supposed to keep critically ill patients safe. In this paper, we found that restraint use has been climbing steadily, even after federal reporting requirements for restraint-related deaths were introduced in 2014, and that non-English speakers faced 21% higher odds of being restrained. Whether Black patients showed higher restraint rates depended entirely on which factors were included in the analysis. Exclude demographics or psychiatric diagnoses, and disparities appear or disappear, suggesting these factors may be part of the causal pathway rather than mere confounders.

The sensitivity of restraint patterns to model specification tells us that disparities depend fundamentally on which patient subgroups we're comparing and what we're adjusting for. Are psychiatric diagnoses driving restraint decisions, or do they reflect systematic differences in how we assess patients from different backgrounds? Every ICU should be asking: are we restraining patients based on medical necessity, or are we reproducing patterns shaped by language barriers, implicit bias, and institutional factors we haven't fully acknowledged?
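
For readers who want to see the mechanics, here is a minimal Python sketch (synthetic data and made-up variable names, not the study's cohort or model) of how an adjusted odds ratio for a subgroup indicator can shift depending on which covariates enter a logistic regression:

```python
# Minimal sketch (synthetic data, hypothetical variables) of specification
# sensitivity: the odds ratio for a subgroup indicator changes as covariates
# are added or removed from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000

group = rng.binomial(1, 0.3, n)                 # hypothetical patient subgroup
# The psychiatric label is recorded more often in that subgroup (itself possibly biased).
psych = rng.binomial(1, 0.10 + 0.15 * group)
severity = rng.normal(0, 1, n)

# In this toy world, restraint risk is driven by severity and the psychiatric
# label, not by group membership per se.
logit_p = -3.0 + 0.8 * severity + 1.5 * psych
restrained = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame(dict(group=group, psych=psych, severity=severity, restrained=restrained))

for formula in ["restrained ~ group",
                "restrained ~ group + severity",
                "restrained ~ group + severity + psych"]:
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(f"{formula:45s} OR(group) = {np.exp(fit.params['group']):.2f}")
```

Whether the fully adjusted specification is the "right" one depends on whether the psychiatric label is a genuine confounder or sits on the causal pathway, which is exactly the ambiguity described above; the sketch only shows the sensitivity, not the answer.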

01/14/2026

Imputation of missing data without understanding why it's missing is a great example of a lack of critical thinking. Missing data is not just empty cells that can be filled simply by inferring from the data that is present. When Indigenous Australians self-discharge from ICU at four times the rate of other patients, when Black patients with out-of-hospital cardiac arrest never make it into the datasets, when non-English speakers have their vital signs checked less frequently, these aren't statistical phenomena to be imputed away. What's truly missing isn't the data itself. It's the context of how it came to be absent, the provenance of who collected it (or chose not to), under what conditions, with what biases, and for whom. We've obsessed over filling empty cells while ignoring the stories those empty cells tell. The real crisis isn't the missing data itself; it's our collective failure to ask why it's missing, to understand the systemic neglect that renders certain lives less worthy of the care that generates data in the first place. No algorithm, however sophisticated, can rescue insights from datasets that fundamentally misrepresent reality.

This paper is a call for a factory reset in how we build AI models. Understanding data provenance means tracing backwards through the entire pipeline: which hospitals had the resources to store data comprehensively, which communities had fragmented care across institutions, which patients encountered language barriers that made documentation burdensome. The path forward isn't more clever imputation techniques: it's transforming the AI lifecycle to center context before computation, and to recognize that our databases don't just reflect health inequities, they reproduce them at scale. Until we reimagine AI systems as opportunities for repair rather than optimization, we'll continue building technologies that illuminate what we already know too well: that healthcare has never been equally accessible, and our algorithms are learning this lesson perfectly.
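
As a concrete illustration of the point about imputation, here is a small Python sketch (synthetic data, hypothetical column names, not from the paper) in which missingness depends on who the patient is; inspecting the missingness pattern by group reveals that structure, while naive mean imputation quietly shrinks the real between-group difference:

```python
# Illustrative sketch (synthetic data, hypothetical names): when measurements go
# missing more often for one group of patients, filling the gaps with the overall
# mean erases part of the very signal the paragraph above describes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 50_000

marginalized = rng.binomial(1, 0.2, n)
# True lactate is higher on average in the under-served group ...
lactate = rng.normal(2.0 + 0.8 * marginalized, 1.0, n)
# ... but that group is measured far less often, so its values go missing more.
observed = rng.random(n) > (0.15 + 0.45 * marginalized)

df = pd.DataFrame({"marginalized": marginalized,
                   "lactate": np.where(observed, lactate, np.nan)})

# First question: *who* is missing, not just how much is missing.
print(df.groupby("marginalized")["lactate"].apply(lambda s: s.isna().mean()))

# Naive mean imputation pulls the under-measured group toward the overall mean.
imputed = df["lactate"].fillna(df["lactate"].mean())
print("true gap:    ", lactate[marginalized == 1].mean() - lactate[marginalized == 0].mean())
print("imputed gap: ", imputed[df.marginalized == 1].mean() - imputed[df.marginalized == 0].mean())
```

The point is not that imputation is always wrong, but that any fill-in strategy applied before asking why the cells are empty encodes an assumption about the patients who were never measured.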

Author summary: Healthcare data that is missing, incomplete, or inaccurately documented is often treated as a technical problem to be solved with statistical methods. We emphasize that this perspective overlooks the real issue: the data has been stripped of its context. Missing, incomplete, or inaccu...

01/05/2026

The uncomfortable truth? Most AI development maintains the same power imbalances that plague healthcare itself: researchers and technologists hold the cards while patients, especially those most affected by health inequities, are invited to comment but not to decide. Real partnership means fair compensation, shared voting power, transparent decision logs, and crucially, giving patients veto authority over features that don't serve them.

We describe the design of patient-powered LLM-athons that bring patients, caregivers, policymakers, and engineers together as equals to design evaluation metrics and guardrails for health-related use of LLMs. Because the people using these tools aren't just "end-users"; they're the experts on what trustworthy, culturally safe healthcare actually means. When we treat lived experience as expertise rather than input, we don't just build better AI. We redistribute power in healthcare itself.

Uses of large language models (LLMs) in health chatbots are expanding into high-stakes clinical contexts, heightening the need for tools that are evidence-

01/05/2026

Before you build another sepsis prediction model, you should ask yourselves, "What would Illich say?"

When sepsis signals the end of life rather than a reversible crisis, are we using AI to extend care or prolong suffering? Drawing on philosopher Ivan Illich's critique of medicalized death, we argue for "humanized AI" that helps identify not just who needs intensive intervention, but who might benefit more from comfort-focused care and honest conversations about goals—shifting from viewing every sepsis death as treatment failure to recognizing when dignity matters more than protocols.

INTRODUCTION: Sepsis remains the leading cause of death in critical care and in hospitals. Artificial Intelligence (AI) is touted as having the potential to transform sepsis care through early recognition, personalized treatment, and enhanced decision-making. The need for these novel AI approaches is...

12/26/2025

The open health data movement has made clinical information widely accessible, but true democratization requires more than download privileges—researchers everywhere must be able to meaningfully engage with data regardless of institutional wealth or location. While Trusted Research Environments (TREs) represent a necessary evolution from "open data sharing" to "open data access" for protecting sensitive health data, they risk creating new inequities.

In this paper, we identified 37 TREs globally, overwhelmingly concentrated in high-income countries like the UK and USA, with none in Africa or Asia. Access costs ranging from £2,500 to over $20,000 annually effectively exclude researchers from low- and middle-income countries (LMICs), creating a computational barrier that undermines AI safety in critical care, where algorithms trained on narrow populations by privileged researchers will perpetuate bias in life-or-death clinical decisions.

The solution lies in "democratized TREs" featuring subsidized access, cloud provider partnerships, capacity building in LMICs, and recognition that computational equity is a prerequisite for health equity. Brazil's CIDACS demonstrates that TREs can successfully operate in middle-income settings, offering proof of concept for broader global expansion. As we design the next generation of healthcare data platforms, we face a choice: continue down a path where computational barriers recreate the inequities that open data was meant to overcome, or build infrastructure that genuinely serves the global research community, ensuring that insights needed to improve healthcare worldwide can emerge from anywhere—not just from institutions that can afford the cloud bill.

12/19/2025

AI governance frameworks must be agile and reflexive—capable of pivoting to preempt emerging trends rather than merely reacting to social media discourse. Consider this gap: most governance frameworks focus narrowly on AI for clinical decision support, yet this represents only a sliver of the AI systems affecting patient health. What about the direct use of LLMs by clinicians and patients to answer health-related questions? LLMs and retrieval-augmented generation systems currently operate in a regulatory vacuum, receiving no oversight precisely because they are not classified as health AI.

Everyone must play a role in AI oversight. This is what drives the Health AI Systems Thinking for Community (HASTC) workshops we organize globally, where participants critically examine recently published papers in health AI for their field-wide implications. More importantly, these gatherings serve as incubators for developing guardrails that extend beyond government regulation—essential safeguards for a technology that evolves at breathtaking speed while constantly shifting in form and function.

This paper describes the HASTC workshop held at the University of Toronto one year ago. Entirely student-organized and executed, it was guided by researchers at the frontlines of health AI development.

Artificial intelligence (AI) is transforming healthcare, but its rapid deployment raises concerns about equity, transparency, and accountability. Without proper oversight, these systems can reinforce biases, disproportionately affecting marginalized communities. Current regulations and policies fail...

12/13/2025

Both physicians and large language models face relentless pressure to always have an answer, rewarding confident-sounding responses over admitting uncertainty. Medical culture prizes authority over accuracy, while LLMs are trained on datasets contaminated with biased research, flawed health records, and “publish or perish” science. The result? AI learns to replicate human misinformation at scale, generating hallucinations that sound authoritative but perpetuate the same BS that trained them. Breaking this cycle demands epistemic humility in medical education and abstention mechanisms in AI—teaching both humans and machines that “I don’t know” is sometimes the most competent answer. Unfortunately, “I don’t know” doesn’t generate revenue or win over venture capitalists.

https://www.bmj.com/content/391/bmj.r2570
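
For the curious, an abstention mechanism can be as simple as a confidence-thresholded wrapper. The sketch below is illustrative Python only (the class name and threshold are made up, and it is not taken from the BMJ piece): it answers when a model's predicted probability clears a cut-off and says "I don't know" otherwise.

```python
# Toy abstention wrapper (illustrative, hypothetical names): the classifier only
# commits to an answer when its confidence clears a threshold, otherwise it
# abstains -- the behaviour the post argues both clinicians and LLMs need.
from dataclasses import dataclass
import numpy as np

@dataclass
class AbstainingClassifier:
    model: object            # any fitted model exposing predict_proba
    threshold: float = 0.85  # hypothetical confidence cut-off

    def predict(self, X):
        proba = self.model.predict_proba(X)
        confidence = proba.max(axis=1)
        labels = proba.argmax(axis=1)
        # Below the threshold, abstain instead of guessing.
        return np.where(confidence >= self.threshold,
                        labels.astype(object), "I don't know")

# Usage sketch:
#   from sklearn.linear_model import LogisticRegression
#   clf = AbstainingClassifier(LogisticRegression().fit(X_train, y_train), threshold=0.9)
#   clf.predict(X_test)   # class labels interleaved with "I don't know"
```

In selective-prediction terms, raising the threshold trades coverage for accuracy, and the abstained cases are precisely the ones to route to a human.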

12/02/2025

Standard fairness metrics often rely on flawed foundations - biased ground truth labels, imperfect predictions, and oversimplified demographic categories that mask the true complexity of health disparities. The path forward requires moving beyond technical fixes to embrace multidisciplinary collaboration, where clinicians, data scientists, ethicists, and policymakers work together to redefine fairness in ways that reflect both clinical realities and lived experiences. True equity in cancer care won’t come from perfect metrics, but from shared accountability in how we design, deploy, and govern AI systems.

https://authors.elsevier.com/c/1mCVX8Z12ybXGd
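
A toy example of the "biased ground truth" problem, in Python with synthetic data (nothing from the paper itself): an equal-opportunity-style gap computed against recorded labels can look clean even when one group's true cases are systematically under-diagnosed in the record.

```python
# Sketch (synthetic data, illustrative only): a true-positive-rate gap computed
# against biased recorded labels can hide a real disparity against the truth.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
group = rng.binomial(1, 0.5, n)
true_disease = rng.binomial(1, 0.2, n)

# The record misses 30% of true cases in group 1 (differential under-diagnosis).
recorded = np.where((group == 1) & (true_disease == 1),
                    rng.binomial(1, 0.7, n), true_disease)

# A model trained on the record tends to reproduce it; here, a noisy copy.
pred = np.where(rng.random(n) < 0.9, recorded, 1 - recorded)

def tpr(y, yhat, mask):
    sel = mask & (y == 1)
    return yhat[sel].mean()

for name, y in [("recorded labels", recorded), ("true disease", true_disease)]:
    gap = tpr(y, pred, group == 0) - tpr(y, pred, group == 1)
    print(f"TPR gap vs {name}: {gap:+.3f}")
```

Evaluated against the record, the model looks fair; evaluated against the underlying disease, it does not, which is why the metric alone can't carry the weight.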

11/25/2025

Behind this paper: a physician from Iran, a social scientist from Australia, a behavioral scientist from Norway, a history and philosophy enthusiast from the US, and a data scientist from the US. In research, we tend to emphasize what we write about, but the who and how matter just as much, if not more. This collaboration generated more constructive tension than any project I’ve been part of—continuous pushback, revision, and hard-won consensus. The result is richer for it. I hope you enjoy reading it as much as we enjoyed learning from each other.

This paper argues that cultivating epistemic humility—the practice of acknowledging uncertainty and the limitations of human cognition—is essential for revitalizing science in an era of climate change, pandemics, and AI development. While human evolution optimized our minds for rapid, survival-oriented judgments, the scientific method succeeds by deliberately engaging slower, more analytical thinking that questions assumptions and welcomes revision. We propose practical strategies including diverse teams, careful AI integration, and metacognitive training to counter our “bias blind spot” and strengthen critical thinking in future scientists and physicians. Amid a growing crisis of public trust in science fueled by misinformation and polarization, scientists who openly acknowledge uncertainty are actually perceived as more trustworthy. By fostering a culture that values doubt, embraces complexity, and remains open to revision, science can renew itself and guide society toward discovery rather than dogma.

https://www.sciencedirect.com/science/article/pii/S2667193X25003266

Address

45 Carleton Street
Cambridge, MA
02139
