MIT - Critical Data

A global consortium, led by the MIT Laboratory for Computational Physiology, of computer scientists, engineers, clinicians, and other health professionals.

Connect with MIT Critical Data on social media:

Twitter: https://twitter.com/mitcriticaldata
Instagram: https://www.instagram.com/mitcriticaldata/

Critical Data Affiliates:
- Lab for Computational Physiology: http://lcp.mit.edu/
- Sana: http://sana.mit.edu/

03/24/2026

Instead of building AI that knows everything, we should be building AI that makes us better: more humble, more curious, more creative. How can we engineer virtues directly into clinical AI systems, equipping them with self-awareness modules that detect overconfidence, flag uncertainty, and prompt clinicians to seek fresh perspectives rather than passively accept a machine’s verdict?
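
What might such a self-awareness module look like in practice? Here is a minimal sketch, not from the article: a wrapper that flags a classifier's output when its confidence is high but its held-out calibration is poor, or when predictive entropy is large. The thresholds and the `flag_prediction` helper are illustrative inventions, not a proposed standard.

```python
import numpy as np

def flag_prediction(probs, calib_error, entropy_thresh=0.25, confidence_thresh=0.95):
    """Hypothetical self-awareness check for a clinical classifier.

    probs: predicted class probabilities for one patient (must sum to 1).
    calib_error: the model's expected calibration error on held-out data.
    """
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # predictive uncertainty
    flags = []
    if probs.max() >= confidence_thresh and calib_error > 0.05:
        # High confidence from a poorly calibrated model: likely overconfidence.
        flags.append("overconfident: seek a second clinical opinion")
    if entropy >= entropy_thresh:
        flags.append("uncertain: treat this as a hypothesis, not a verdict")
    return {"prediction": int(probs.argmax()), "entropy": float(entropy), "flags": flags}

# Example: a very confident prediction from a poorly calibrated model.
print(flag_prediction([0.97, 0.03], calib_error=0.12))
```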

The implications reach far beyond medicine. If we accept that the purpose of AI is not simply to automate cognition but to catalyze our evolution as a species, then the virtues we encode into these systems matter enormously.

The consortium behind this work practices what it preaches. The initiative spans every continent except Antarctica, deliberately weaving together students, patients, data scientists, clinicians, social scientists, indigenous knowledge holders, and artists. Ultimately, the biases baked into AI are biases baked into who gets to design it. Let us stop building AI that thinks for us and acts for us, and start building AI that helps us think together, more wisely, and with the kind of courage our most complex challenges demand.

https://news.mit.edu/2026/creating-humble-ai-0324
Image: MIT News; iStock

03/24/2026

In this paper, we examine how a single vendor came to control the digital backbone of American healthcare. But let us be clear: this is not about tearing down a company. We have nothing against technology, innovation, or AI. We believe deeply in the promise of digital tools to democratize access to expertise, to extend the reach of the best clinical knowledge to communities that have never had it. What we are calling for is the building and bridging of communities so that they gain the agency to shape these technologies rather than simply be shaped by them. When one company controls how health data is captured, exchanged, and monetized, the question is not whether the technology works. It is who it works for, and who gets to decide.

As we develop AI for healthcare, we must be deliberate about the systems that generate and capture the data on which everything downstream depends. The electronic health record is not a neutral tool; it encodes assumptions about whose experiences count, whose pain gets documented, whose outcomes get measured. That is why the entire pipeline, from care delivery to data capture, from data curation to modeling, from validation to deployment and continuous monitoring, must involve a diverse set of actors. This means patients, clinicians, data scientists, ethicists, community health workers, and most importantly, those who have been historically marginalized from the design table. Health data is a public good. Its governance should reflect the communities it is meant to serve, not the commercial priorities of any single entity. The path forward is not less technology. It is more inclusive stewardship of the infrastructure on which all of us depend.

https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001143

03/08/2026

Privacy, as we once understood it, is dead. Every time we tap “Accept All Cookies” or scroll past a terms-of-service agreement to download a fitness app, we hand over intimate details about our bodies, our habits, our vulnerabilities. We present a compelling case for transparency mandates around health data transactions. The uncomfortable starting point is one the paper dances around: the traditional framing of privacy as something we can protect through consent and de-identification is largely a fiction. Our health records, wearable data, and genomic information are already circulating through a commercial ecosystem most of us never agreed to and barely understand. The real question isn’t how to lock the barn door; it’s who took the horse, where they rode it, and who got paid along the way. What we need is a disclosure framework built on that honest foundation: Who is selling our data? What are they doing with it? Who is profiting? And who is being harmed? That kind of radical transparency won’t restore privacy in any nostalgic sense, but it can restore something arguably more important: accountability. And accountability, specifically relational accountability, unlike privacy, is something we can still fight for.

https://www.sciencedirect.com/science/article/pii/S2589750025001293
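
As a thought experiment, the four disclosure questions map naturally onto a machine-readable transaction record that could feed a public registry. This is our sketch, not a schema from the paper; every field and entity name is invented.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class HealthDataTransaction:
    """Hypothetical disclosure record answering the four questions above."""
    seller: str                      # who is selling our data?
    buyer: str
    data_categories: List[str]       # what is being transferred (wearable, genomic...)
    stated_purpose: str              # what are they doing with it?
    price_usd: float                 # who is profiting, and by how much?
    populations_affected: List[str]  # who may be harmed?

tx = HealthDataTransaction(
    seller="ExampleFitnessApp", buyer="ExampleDataBroker",
    data_categories=["heart_rate", "sleep"], stated_purpose="ad targeting",
    price_usd=0.42, populations_affected=["app users"])
print(json.dumps(asdict(tx), indent=2))  # e.g., published to a public registry
```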

02/25/2026

🚑🤖 Barcelona was buzzing at the GenAI Health Hack 2026, hosted by Hospital Clínic Barcelona, where 90+ clinicians, researchers & technologists came together to rethink healthcare with generative AI.
🏆 The winning project, EdxPlain, transforms ER discharge reports into personalized, easy-to-understand guides — empowering patients to better manage their own health.
From synthetic MRI generation to AI-powered dialysis optimization, the hackathon spotlighted one thing: innovation only matters if it’s ethical, rigorous, and truly improves patient care.
The future of healthcare is collaborative, human-centered, and AI-augmented. 💡✨

02/24/2026

Hopping on the #2016 throwback trend with MIT Critical Data photos from Beijing and Mexico City! We'll take you on more journeys through time and space soon! ✈️

02/01/2026

On January 15, 2026, the American Medical Association and MIT Critical Data hosted "AI as a Catalyst" at MIT, an event that takes a transformative approach to reimagining education and healthcare by deliberately centering voices typically excluded from academic discourse: indigenous knowledge holders, musicians, artists, religious leaders, storytellers, and activists, alongside educators and clinicians. The event created an interactive space for community-centered dialogue using six conceptual tools: the mirror (reflection), the flashlight (illumination), the microscope (analysis), the paintbrush (creativity), the podium (shared storytelling), and the slingshot (dismantling of power structures). Through three workshops, exploring creative expression as pedagogical practice, indigenous and religious wisdom in medical training, and justice-centered AI development, participants charted pathways toward transforming how we prepare the next generation of healers and change agents.

The event embodies MIT Critical Data's commitment to challenging traditional academic power structures. Addressing healthcare's challenges in the AI era requires wisdom from diverse epistemologies and lived experiences, not just technical expertise; the aim is to build a community of practice committed to centering creativity, love, and justice in education and healthcare.

Watch the recording of the event here:

Building a global community committed to developing & improving health AI.

01/30/2026

After decades of conflicting evidence about how tightly to control blood sugar in critically ill patients, our analysis using causal inference methods on the MIMIC-IV database offers clarity, but clarity that must be understood within its specific context. By combining targeted maximum likelihood estimation with joint longitudinal-survival modeling on 8,002 patients from a single academic medical center in Boston, we discovered a U-shaped relationship between glucose and mortality: aiming for glucose levels between 160-190 mg/dL appeared optimal, while overly aggressive glucose lowering dramatically increased hypoglycemia risk (77% of patients at 100 mg/dL targets) without improving survival.

However, our cohort (median age 66 years, 57% with diabetes) was significantly older and more comorbid than the general global ICU population, reflecting the limitations inherent to single-center observational studies. Institutional variations in insulin protocols, glucose monitoring frequency, and clinical workflows at Beth Israel Deaconess Medical Center may not translate to other settings.

This finding validates current guidelines recommending liberal glucose ranges and exemplifies how sophisticated analytics on high-resolution health data can help us move beyond the costly cycle of contradictory randomized trials. But it also underscores a critical principle: causal inference in ICU settings should be viewed as causally suggestive rather than definitive, precisely because we can never fully verify the absence of unmeasured confounding or account for the complex, context-dependent practices that shape real-world care. The real innovation here isn’t methodological; it’s about using these frameworks to ask better questions and identify promising directions before launching expensive trials, while maintaining epistemic humility about what observational data can and cannot tell us across diverse healthcare contexts and patient populations.

https://bmjopen.bmj.com/content/16/1/e104916.full
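
For readers curious about the mechanics, below is a minimal point-treatment TMLE sketch on synthetic data. It is not the paper's code: the actual analysis pairs TMLE with joint longitudinal-survival models over repeated glucose measurements, while this toy reduces the question to a single binary exposure (say, managed within a liberal target range) and binary mortality. Variable names and effect sizes are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 8002                                    # cohort size from the paper
W = rng.normal(size=(n, 3))                 # covariates (age, SOFA, diabetes...)
A = rng.binomial(1, 1/(1 + np.exp(-W[:, 0])))               # confounded exposure
Y = rng.binomial(1, 1/(1 + np.exp(-(0.5*W[:, 1] - 0.4*A))))  # mortality

# Step 1: initial outcome model Q(A, W).
X = np.column_stack([A, W])
Q_fit = LogisticRegression().fit(X, Y)
QA = Q_fit.predict_proba(X)[:, 1]
Q1 = Q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = Q_fit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]

# Step 2: propensity model g(W) and the "clever covariate" H(A, W).
g = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]
g = np.clip(g, 0.025, 0.975)                # truncate to avoid extreme weights
H = A/g - (1 - A)/(1 - g)

# Step 3: targeting step -- fluctuate Q along H via a logistic working model.
eps = sm.GLM(Y, H[:, None], offset=np.log(QA/(1 - QA)),
             family=sm.families.Binomial()).fit().params[0]
expit = lambda x: 1/(1 + np.exp(-x))
Q1s = expit(np.log(Q1/(1 - Q1)) + eps/g)
Q0s = expit(np.log(Q0/(1 - Q0)) - eps/(1 - g))

print("TMLE ATE estimate:", (Q1s - Q0s).mean())  # true log-odds effect was -0.4
```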

01/15/2026

Physical restraints are supposed to keep critically ill patients safe. In this paper, we found that restraint use has been climbing steadily, even after federal reporting requirements for restraint-related deaths were introduced in 2014, and that non-English speakers faced 21% higher odds of being restrained. Whether Black patients showed higher restraint rates depended entirely on which factors were included in the analysis: exclude demographics or psychiatric diagnoses and disparities appear or disappear, suggesting these factors may be part of the causal pathway rather than mere confounders.

The sensitivity of restraint patterns to model specification tells us that disparities depend fundamentally on which patient subgroups we're comparing and what we're adjusting for. Are psychiatric diagnoses driving restraint decisions, or do they reflect systematic differences in how we assess patients from different backgrounds? Every ICU should be asking: are we restraining patients based on medical necessity, or are we reproducing patterns shaped by language barriers, implicit bias, and institutional factors we haven't fully acknowledged?
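
The specification sensitivity described above is easy to demonstrate. In this synthetic sketch (invented effect sizes, not the paper's data), race influences restraint only through psychiatric labeling; adjusting for the label makes the disparity "vanish" even though the pathway is real.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20000
black = rng.binomial(1, 0.2, n)
# Suppose psychiatric labels are assigned more often to Black patients
# (differential assessment), and labels then drive restraint decisions.
psych = rng.binomial(1, 1/(1 + np.exp(-(-1.5 + 0.8*black))))
restraint = rng.binomial(1, 1/(1 + np.exp(-(-2.0 + 1.2*psych))))

def odds_ratio(adjust_for_psych):
    """Odds ratio for race, with or without the mediator in the model."""
    cols = [np.ones(n), black] + ([psych] if adjust_for_psych else [])
    beta = sm.Logit(restraint, np.column_stack(cols)).fit(disp=0).params[1]
    return np.exp(beta)

print("OR (unadjusted):      %.2f" % odds_ratio(False))  # disparity visible
print("OR (adjusting psych): %.2f" % odds_ratio(True))   # disparity 'vanishes'
```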

01/14/2026

Imputation of missing data without understanding why it's missing is a great example of a lack of critical thinking. Missing data is not just empty cells that can be filled simply by inferring from the data that is present. When Indigenous Australians self-discharge from ICU at four times the rate of other patients, when Black patients with out-of-hospital cardiac arrest never make it into the datasets, when non-English speakers have their vital signs checked less frequently, these aren't statistical phenomena to be imputed away. What's truly missing isn't the data itself. It's the context of how it came to be absent, the provenance of who collected it (or chose not to), under what conditions, with what biases, and for whom. We've obsessed over filling empty cells while ignoring the stories those empty cells tell. The real crisis isn't the missing data itself; it's our collective failure to ask why it's missing, to understand the systemic neglect that renders certain lives less worthy of the care that generates data in the first place. No algorithm, however sophisticated, can rescue insights from datasets that fundamentally misrepresent reality.

This paper is a call for a factory reset in how we build AI models. Understanding data provenance means tracing backwards through the entire pipeline: which hospitals had the resources to store data comprehensively, which communities had fragmented care across institutions, which patients encountered language barriers that made documentation burdensome. The path forward isn't more clever imputation techniques: it's transforming the AI lifecycle to center context before computation, and to recognize that our databases don't just reflect health inequities, they reproduce them at scale. Until we reimagine AI systems as opportunities for repair rather than optimization, we'll continue building technologies that illuminate what we already know too well: that healthcare has never been equally accessible, and our algorithms are learning this lesson perfectly.

Author summary: Healthcare data that is missing, incomplete, or inaccurately documented is often treated as a technical problem to be solved with statistical methods. We emphasize that this perspective overlooks the real issue: the data has been stripped of its context.
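
A toy simulation makes the point concrete. Below, a severity marker is measured less often precisely in the sickest patients (a missing-not-at-random mechanism, with invented numbers); mean imputation fills the cells but freezes the bias in place.

```python
import numpy as np

rng = np.random.default_rng(2)
severity = rng.normal(0, 1, 100_000)                    # true (unobserved) acuity
lactate = 2.0 + 1.5*severity + rng.normal(0, 0.5, 100_000)

# Sicker patients are measured less often (self-discharge, fewer checks...).
measured = rng.random(100_000) < 1/(1 + np.exp(severity))

print("true mean lactate:     %.2f" % lactate.mean())
print("observed-cases mean:   %.2f" % lactate[measured].mean())
# Mean imputation fills the gaps with the observed mean, so the estimate
# stays biased low: the missingness mechanism, not the empty cells, is the problem.
imputed = np.where(measured, lactate, lactate[measured].mean())
print("after mean imputation: %.2f" % imputed.mean())
```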

01/05/2026

The uncomfortable truth? Most AI development maintains the same power imbalances that plague healthcare itself: researchers and technologists hold the cards while patients, especially those most affected by health inequities, are invited to comment but not to decide. Real partnership means fair compensation, shared voting power, transparent decision logs, and crucially, giving patients veto authority over features that don't serve them.

We describe the design of patient-powered LLM-athons that bring patients, caregivers, policymakers, and engineers together as equals to design evaluation metrics and guardrails for health-related uses of LLMs. The people using these tools aren't just "end-users"; they're the experts on what trustworthy, culturally safe healthcare actually means. When we treat lived experience as expertise rather than input, we don't just build better AI. We redistribute power in healthcare itself.

Uses of large language models (LLMs) in health chatbots are expanding into high-stakes clinical contexts, heightening the need for tools that are evidence-based…
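
As one hypothetical illustration of what patient-held veto authority could look like in an evaluation harness (criteria names, weights, and floors are invented, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float      # set by the patient panel, not by the vendor
    veto_floor: float  # a score below this blocks deployment outright

def evaluate(scores, rubric):
    """Patient-weighted score plus a hard veto on any criterion below its floor."""
    vetoes = [c.name for c in rubric if scores[c.name] < c.veto_floor]
    total = sum(c.weight * scores[c.name] for c in rubric)
    return {"weighted_score": round(total, 2), "blocked_by": vetoes}

rubric = [Criterion("cultural_safety", 0.4, 0.7),
          Criterion("actionability", 0.3, 0.5),
          Criterion("evidence_grounding", 0.3, 0.6)]
print(evaluate({"cultural_safety": 0.6, "actionability": 0.9,
                "evidence_grounding": 0.8}, rubric))
# -> blocked_by ['cultural_safety']: the feature does not ship.
```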

01/05/2026

Before you build another sepsis prediction model, you should ask yourself: "What would Illich say?"

When sepsis signals the end of life rather than a reversible crisis, are we using AI to extend care or prolong suffering? Drawing on philosopher Ivan Illich's critique of medicalized death, we argue for "humanized AI" that helps identify not just who needs intensive intervention, but who might benefit more from comfort-focused care and honest conversations about goals—shifting from viewing every sepsis death as treatment failure to recognizing when dignity matters more than protocols.

INTRODUCTION: Sepsis remains the leading cause of death in critical care and in hospitals. Artificial Intelligence (AI) is touted as having the potential to transform sepsis care through early recognition, personalized treatment, and enhanced decision-making. The need for these novel AI approaches is…

Address

45 Carleton Street
Cambridge, MA
02139
