Optum, a UnitedHealthcare subsidiary, left an internal AI chatbot exposed to the public internet. The chatbot, designed to let employees ask questions about handling claims, was publicly accessible, raising concerns about the security of sensitive data and the potential for unauthorized access. The incident highlights the risks of deploying AI tools without adequate security measures, and it came amid broader scrutiny of UnitedHealthcare’s use of AI in claims denials.
Microsoft’s new AI feature ‘Recall’ for Copilot+ PCs, which takes continuous screenshots of everything a user does, stores images of sensitive data such as credit card and Social Security numbers even when its ‘sensitive information’ filter is enabled. The screenshots are stored locally but are reportedly sent to Microsoft’s LLM for analysis. The feature has raised serious privacy and security concerns among users and prompted an inquiry by the UK Information Commissioner’s Office, highlighting the risks of AI-powered surveillance features and the importance of protecting user privacy.