Oh dear, it appears that even the wizards of AI can stumble into an unfortunate mishap. Microsoft AI researchers recently found themselves in an awkward spot when they accidentally exposed tens of terabytes of sensitive data, including private keys and passwords. How did this happen, you ask? While publishing open-source training data on GitHub, the team shared a link to an Azure storage bucket that was configured to grant far broader access than intended. But fret not, for steps are being taken to rectify the situation. Let’s dive into the details of this incident and shed some light on the actions being taken.
In the vast realm of AI research, Microsoft’s AI division encountered an unexpected hiccup. As part of its efforts to share open-source training data on GitHub, researchers mistakenly exposed a trove of sensitive internal data comprising tens of terabytes. The unintentional disclosure included private keys and passwords, secrets that should have remained carefully guarded.
Enter cloud security startup Wiz, which uncovered the mishap during its ongoing research. Its team came across a GitHub repository belonging to Microsoft’s AI research division that contained these inadvertently exposed assets. Sensing the gravity of the situation, Wiz promptly shared its discovery with the folks at TechCrunch, sparking awareness and further investigation.
But fear not, for Microsoft has not been caught napping. The company responded swiftly, investigating the exposure and securing the storage bucket, and it has assured the public that there is no evidence of any malicious access or misuse of the exposed information.
To prevent similar incidents in the future, Microsoft has also pledged to conduct a thorough review of its security practices and implement additional safeguards. The incident serves as a reminder to organizations big and small of the importance of rigorous security measures throughout their data-handling processes.
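One such safeguard is auditing the sharing links you publish before they go out the door. Azure shared access signature (SAS) URLs, for example, encode their permissions in the `sp` query parameter and their expiry in `se`. Below is a minimal, hypothetical sketch (using only the Python standard library; the URL and helper name are made up for illustration, not taken from the incident) of how a pre-publish check might flag a link that grants more than read access or stays valid for too long:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

# Permissions beyond plain read: write, delete, create, add, list.
RISKY_PERMS = set("wdcal")

def audit_sas_url(url, now=None):
    """Return human-readable warnings for an overly permissive SAS URL."""
    now = now or datetime.now(timezone.utc)
    params = parse_qs(urlparse(url).query)
    warnings = []

    # `sp` holds the granted permissions as single characters, e.g. "r" or "racwdl".
    perms = set(params.get("sp", [""])[0])
    extra = perms & RISKY_PERMS
    if extra:
        warnings.append(f"token grants more than read access: {sorted(extra)}")

    # `se` holds the expiry timestamp in ISO 8601 form.
    expiry_raw = params.get("se", [None])[0]
    if expiry_raw:
        expiry = datetime.fromisoformat(expiry_raw.replace("Z", "+00:00"))
        if (expiry - now).days > 30:
            warnings.append(f"token stays valid for {(expiry - now).days} more days")
    else:
        warnings.append("no expiry (`se`) parameter found")
    return warnings

# Example: a full-access token that expires decades in the future (made-up URL).
url = ("https://example.blob.core.windows.net/models"
       "?sp=racwdl&se=2051-10-06T00:00:00Z&sv=2021-08-06&sig=...")
for w in audit_sas_url(url):
    print("WARNING:", w)
```

Running a check like this in CI on anything destined for a public README is a cheap way to catch a link that quietly grants write access or never expires.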
At 1on1 Webs, we understand the crucial role that security plays in the realm of technology and data. Our expert team is always ready to help you keep your data secure.
Original Article: https://techcrunch.com/2023/09/18/microsoft-ai-researchers-accidentally-exposed-terabytes-of-internal-sensitive-data/