Startup Dreamers
Innovation

Make The Doctor’s Office Fair! AI Can Help Create A More Equitable Healthcare System

By admin | July 25, 2023 | 4 Mins Read

Video: We want systems to be fair. AI may be able to help us enforce our values.

Bringing new scrutiny to AI processes in healthcare, Marzyeh Ghassemi offers solutions for the harmful bias that, as a society, we want to root out of these systems.

Demonstrating triage models, Ghassemi talked about labeling and about how to audit state-of-the-art AI/ML systems that can perform competitively with human doctors. Beginning with some of the more quotidian data collection processes, she tied those into the deeper mandate that engineering teams and innovators have to guard against potentially dangerous automated outcomes.

“We take a lot of data,” she said, warning that things like false positives can compromise the fairness of clinical procedures.

Ghassemi talked about findings on intersectionality, and how bias so often works in both human-centered and AI-centered systems.

Solving these problems, she said, will require diverse data and diverse teams.

“The question is, how does this do (for) all people?” she said, stressing that evaluating a model on just one sub-section of a population is not enough to produce real transparency about its problems and risks.
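The kind of disaggregated evaluation Ghassemi calls for can be sketched as a per-subgroup audit: instead of one aggregate score, compute accuracy and false-positive rate separately for each group. The function and the toy records below are illustrative, not from the talk.

```python
from collections import defaultdict

def audit_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels.
    Returns {group: {"accuracy": ..., "fpr": ...}} so gaps between
    subgroups are visible rather than averaged away."""
    counts = defaultdict(lambda: {"correct": 0, "total": 0, "fp": 0, "neg": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        c["total"] += 1
        c["correct"] += int(y_pred == y_true)
        if y_true == 0:  # negatives are the denominator for false positives
            c["neg"] += 1
            c["fp"] += int(y_pred == 1)
    return {
        g: {
            "accuracy": c["correct"] / c["total"],
            "fpr": c["fp"] / c["neg"] if c["neg"] else float("nan"),
        }
        for g, c in counts.items()
    }

# Illustrative records: (subgroup, true label, model prediction).
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 1), ("B", 0, 1), ("B", 1, 1),
]
report = audit_by_group(records)
```

Here group A looks flawless while group B has a 100% false-positive rate, a gap that a single overall accuracy number would hide.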

Outlining the five stages of a machine learning pipeline, Ghassemi listed problem selection, data collection, outcome definition, algorithm development, and post-deployment considerations.

Looking at this entire life cycle, she said, will help stakeholders to move forward with ethical AI in health, and deal with deeply embedded biases that can otherwise have a negative effect on the fairness that we want in healthcare systems.

In a striking example involving radiology images, Ghassemi showed how AI can still determine a person’s self-reported race where a human doctor would not be able to make that prediction.

“It’s not the obvious spurious correlations that you might imagine you could remove from medical imaging data,” she said of AI’s strategic ability to classify the images according to race. “It’s not body mass index, breast density, bone density, it’s not disease distribution. In fact, you can filter this image in a variety of ways until it doesn’t really look like a chest X-ray anymore, and machine learning models can still tell the self-reported race of a patient. This is information that’s incredibly deeply (embedded) in data, and you can’t just remove it simply.”

To illustrate the inner biases that can direct systems unfairly, Ghassemi also showed a chart note automation system that tended to send “belligerent and/or violent” white patients to hospitals, but black patients with the same note to prison.
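One way to surface this kind of disparity is a counterfactual substitution test: feed the system two notes that are identical except for a demographic term and check whether the output changes. The harness and the toy model below are hypothetical stand-ins for the system described, not its actual implementation.

```python
def counterfactual_flip_rate(model, notes, term_a, term_b):
    """Fraction of notes containing term_a whose model output changes
    when term_a is replaced by term_b, all else held equal."""
    affected = [n for n in notes if term_a in n]
    if not affected:
        return 0.0
    flips = sum(model(n) != model(n.replace(term_a, term_b)) for n in affected)
    return flips / len(affected)

# Toy stand-in model that exhibits the kind of bias described in the talk:
# it routes on the demographic term rather than the clinical content.
def toy_triage(note):
    return "prison" if "black" in note else "hospital"

notes = [
    "belligerent white patient, stable vitals",
    "belligerent white patient, agitated",
    "routine follow-up visit",
]
rate = counterfactual_flip_rate(toy_triage, notes, "white", "black")
```

A flip rate near zero is what an unbiased system should produce; the toy model above flips on every affected note, making the disparity measurable.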

However, she said, in seeking equitable and just outcomes, engineers can look at prescriptive versus descriptive methods, and work toward safe integration.

“(In) machine learning models that we’re training right now, with the outcome labeling practices, we have created much harsher judgments than if we would have collected labels from humans for the normative setting that we were applying these models to,” she said, noting that changing the labels and the method will change the level of “harshness” in the model’s findings.

Going through some other unfair outcomes, including those involving GPT systems, Ghassemi suggested that some of the problems arise when GPT “tells (humans) what to do in a biased way,” and described efforts to correct much of this at the algorithmic and methodological levels. She also showed how differences in labeling instructions lead human labelers to act in surprisingly divergent ways, a phenomenon she suggested bears much more study.

In closing, she reviewed some of the principles that can help us find our way through the challenges confronting clinicians and others who don’t want to be affected by undue bias.

“We can’t just focus on one part of the pipeline,” she said. “We need to consider sources of bias in the data that we collect, including the labels … we need to evaluate our models comprehensively as we’re developing algorithms, and we need to recognize that not all gaps can be corrected, but maybe they don’t have to, if you deploy them intelligently such that when they are wrong, they don’t disproportionately bias care stuff. And by doing this, we think that we can create actionable insights in human health.”

Marzyeh Ghassemi is an Assistant Professor at MIT, affiliated with CSAIL, IMES, and EECS. Ghassemi is an accomplished data scientist and researcher known for her groundbreaking work at the intersection of machine learning and healthcare. With a deep understanding of data-driven solutions, she has made significant contributions to improving patient outcomes and clinical decision-making.


