As AI chatbots become increasingly popular, more people are turning to them with sensitive medical questions. However, security experts are warning against uploading medical images such as X-rays, MRIs, and PET scans to AI chatbots. While it might seem convenient, doing so could put your privacy at risk.
Users have recently been encouraged to upload medical images to AI platforms such as X's Grok in hopes of getting better insight into their health. Privacy and security experts, however, are urging caution.
Medical data is highly sensitive and, in the United States, protected by federal law under HIPAA. But HIPAA applies only to healthcare providers, insurers, and their business associates; most consumer apps, including AI chatbots, fall outside its scope, meaning your data may not have the protections you think it does.
When you upload data, AI models can use it to improve their accuracy, but this comes with risks. There’s no clear way to know how your data is being used, who it’s shared with, or how long it’s stored.
For instance, X’s Grok collects medical imagery to enhance its AI’s ability to interpret scans, but it’s unclear who has access to that data. Some of it may be shared with related companies.
What's more troubling is that AI models are often trained on the data they receive. This means your private medical records could end up in training datasets, where they may become accessible to third parties such as healthcare providers, employers, or even government agencies.
In a post on X, Elon Musk encouraged users to upload their medical images to Grok while acknowledging that the AI's results are still in the early stages. The lack of clear privacy guidelines, however, raises serious concerns about the safety of your sensitive information.
As a rule of thumb, always think twice before sharing sensitive data online: what goes on the internet never truly disappears.