
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

AI-Generated Summary

# Summary

Meta's new Muse Spark AI model includes a feature that requests access to users' raw health data, including lab results and medical records, for analysis. The system positions itself as capable of interpreting this sensitive health information and providing guidance, raising significant concerns about both data privacy and the reliability of the medical advice offered.

In testing, the AI demonstrated serious limitations in medical accuracy and judgment. The model reportedly provided questionable health recommendations that would be inappropriate substitutes for professional medical consultation, highlighting the gap between consumer-facing AI capabilities and the complex expertise required for legitimate healthcare advice.

The situation underscores a broader concern with AI companies deploying health-related features without adequate safeguards. Allowing users to share raw medical data with an AI system that lacks medical expertise creates dual risks: potential privacy breaches involving highly sensitive personal information, and the danger that users might rely on inadequate AI guidance instead of seeking proper medical care. This raises questions about regulatory oversight and corporate responsibility in health-adjacent AI applications.


Read the full article on Wired
