Make The Doctor’s Office Fair! AI Can Help Create A More Equitable Healthcare System

By admin · July 25, 2023

Video: We want systems to be fair. AI may be able to help us enforce our values.

Bringing a new level of scrutiny to AI in healthcare, Marzyeh Ghassemi offers solutions for the kind of harmful bias that, as a society, we want to root out of these systems.

Using triage models as a demonstration, Ghassemi talked about labeling and about how to audit state-of-the-art AI/ML systems that can perform competitively with human doctors. Beginning with some of the more quotidian data collection processes, she tied those into the deeper mandate that engineering teams and innovators have to guard against potentially dangerous outcomes in automated systems.

“We take a lot of data,” she said, warning that things like false positives can compromise the fairness of clinical procedures.

Ghassemi talked about findings on intersectionality, and how bias so often works in both human-centered and AI-centered systems.

Solving these problems, she said, will require diverse data and diverse teams.

“The question is, how does this do (for) all people?” she said, stressing that evaluating on just one sub-section of a population is not enough to surface the problems and concerns that apply across groups.
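One concrete way to act on that question is to break evaluation metrics out by subgroup rather than reporting a single aggregate number. Below is a minimal sketch, assuming a hypothetical predictions.csv with columns y_true, y_pred, and group; the file, column names, and choice of metric are illustrative, not taken from Ghassemi's talk.

```python
# Minimal subgroup audit sketch: report the false positive rate per group,
# since a single aggregate metric can hide large gaps between groups.
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = share of true negatives that the model flagged as positive."""
    negatives = df[df["y_true"] == 0]
    if negatives.empty:
        return float("nan")
    return float((negatives["y_pred"] == 1).mean())

# Hypothetical file: one row per patient with y_true, y_pred, and group columns.
df = pd.read_csv("predictions.csv")

print(f"overall FPR: {false_positive_rate(df):.3f}")
print(df.groupby("group").apply(false_positive_rate).rename("FPR"))
# A large spread between groups is a signal to revisit data collection,
# outcome definitions, or the decision threshold before deployment.
```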

Outlining the five stages of the pipeline, Ghassemi pointed to problem selection, data collection, outcome definition, algorithm development, and post-deployment considerations.

Looking at this entire life cycle, she said, will help stakeholders to move forward with ethical AI in health, and deal with deeply embedded biases that can otherwise have a negative effect on the fairness that we want in healthcare systems.

In a striking example involving radiology images, Ghassemi showed how AI can still infer a person's self-reported race where a human doctor would not be able to make that prediction.

“It’s not the obvious spurious correlations that you might imagine you could remove from medical imaging data,” she said of AI’s strategic ability to classify the images according to race. “It’s not body mass index, breast density, bone density, it’s not disease distribution. In fact, you can filter this image in a variety of ways until it doesn’t really look like a chest X-ray anymore, and machine learning models can still tell the self-reported race of a patient. This is information that’s incredibly deeply (embedded) in data, and you can’t just remove it simply.”
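The experiment she describes can be approximated with a simple probe: heavily degrade the images, then check whether a classifier trained on the degraded pixels still predicts self-reported race better than chance. A rough sketch follows, assuming you already have an images array and a race_labels array; the degradation (blur plus aggressive downsampling) and the classifier choice are illustrative stand-ins, not the specific filters used in the study she cites.

```python
# Rough probe sketch: can self-reported race still be predicted after the
# images are degraded until they "don't really look like a chest X-ray"?
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def degrade(img: np.ndarray, sigma: float = 3.0, factor: float = 0.1) -> np.ndarray:
    """Blur, then aggressively downsample (illustrative filters only)."""
    return zoom(gaussian_filter(img.astype(np.float32), sigma=sigma), factor)

# `images` is assumed to be an (N, H, W) array and `race_labels` an (N,) array
# of integer codes for self-reported race; both are placeholders here.
X = np.stack([degrade(img).ravel() for img in images])
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, X, race_labels, cv=5)

chance = 1.0 / len(np.unique(race_labels))
print(f"probe accuracy: {scores.mean():.3f}  (chance ~ {chance:.3f})")
# Accuracy well above chance suggests the attribute is deeply embedded in the
# data and cannot be removed by simple image-level filtering.
```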

To illustrate the inner biases that can direct systems unfairly, Ghassemi also showed a chart note automation system that tended to send “belligerent and/or violent” white patients to hospitals, but black patients with the same note to prison.

However, she said, in seeking equitable and just outcomes, engineers can look at prescriptive versus descriptive methods, and work toward safe integration.

“(In) machine learning models that we’re training right now, with the outcome labeling practices, we have created much harsher judgments than if we would have collected labels from humans for the normative setting that we were applying these models to,” she said, noting that changing the labels and the method will change the level of “harshness” in the model’s findings.
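One way to see the effect she describes is to collect both kinds of labels for the same items and compare how often each scheme flags a violation. A toy sketch, assuming hypothetical 0/1 arrays descriptive_labels and prescriptive_labels over the same items; the comparison is purely illustrative, not the study's protocol.

```python
# Toy comparison of label "harshness" under two annotation schemes applied to
# the same items (1 = item flagged as a violation).
import numpy as np

# `descriptive_labels` and `prescriptive_labels` are assumed to be 0/1 NumPy
# arrays of equal length; both are placeholders for collected annotations.
def flag_rate(labels: np.ndarray) -> float:
    return float(np.mean(labels))

print(f"descriptive flag rate:  {flag_rate(descriptive_labels):.3f}")
print(f"prescriptive flag rate: {flag_rate(prescriptive_labels):.3f}")
print(f"disagreement rate:      {np.mean(descriptive_labels != prescriptive_labels):.3f}")
# A model trained on the harsher label set inherits that harshness, which is
# the gap between labeling practices that Ghassemi highlights.
```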

Going through other unfair outcomes, including those involving GPT systems, Ghassemi suggested that some of the problems arise when GPT “tells (humans) what to do in a biased way,” and she described efforts to correct much of this at the algorithmic and methodological levels. She also showed how differences in labeling instructions lead human labelers to behave in surprisingly diverse ways, and suggested that phenomenon deserves much more study.

In closing, she reviewed some of the principles that can help us find our way through the challenges confronting clinicians and others who don’t want to be affected by undue bias.

“We can’t just focus on one part of the pipeline,” she said. “We need to consider sources of bias in the data that we collect, including the labels … we need to evaluate our models comprehensively as we’re developing algorithms, and we need to recognize that not all gaps can be corrected, but maybe they don’t have to, if you deploy them intelligently such that when they are wrong, they don’t disproportionately bias care stuff. And by doing this, we think that we can create actionable insights in human health.”
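Her closing point about deploying imperfect models intelligently maps onto selective prediction: act on the model only when it is confident, defer the rest to a clinician, and check that the deferrals are not concentrated in one group. A minimal sketch, assuming hypothetical arrays of predicted probabilities (probs) and group labels (groups); the 0.9 confidence threshold is illustrative.

```python
# Minimal selective-prediction sketch: defer low-confidence cases to a human
# reviewer and check whether deferrals fall disproportionately on one group.
import numpy as np
import pandas as pd

# `probs` (predicted probability of the positive class) and `groups` are
# assumed NumPy arrays of equal length; the 0.9 confidence band is illustrative.
THRESHOLD = 0.9

confident = (probs >= THRESHOLD) | (probs <= 1 - THRESHOLD)
deferral = (
    pd.DataFrame({"group": groups, "deferred": ~confident})
    .groupby("group")["deferred"]
    .mean()
    .rename("deferral rate")
)
print(deferral)
# If one group is deferred to human review far more often than the others,
# the deployment policy itself shifts burden unevenly and the threshold
# (or the model) needs revisiting.
```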

Marzyeh Ghassemi is an Assistant Professor at MIT, affiliated with EECS, IMES, and CSAIL. Ghassemi is an accomplished data scientist and researcher, known for her groundbreaking work at the intersection of machine learning and healthcare. With a deep understanding of data-driven solutions, she has made significant contributions to improving patient outcomes and clinical decision-making.

Read the full article here
