In March 2025, a cybersecurity researcher uncovered an unprotected database containing nearly 100,000 records linked to GenNomis by AI-NOMIS, a South Korean AI company specializing in face-swapping and “Nudify” adult content services. The exposed database, totaling 47.8 GB, held 93,485 images and associated JSON files; some of the images were explicit AI-generated depictions of individuals who appeared to be very young.

The JSON files within the database logged the prompts used to generate images along with links to the resulting files, providing insight into how the AI image generator was being used. Although no personally identifiable information (PII) was found, the exposure highlighted the risks of unrestricted AI-generated content and underscored the need for developers to implement robust security measures.
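To illustrate the kind of record described above, the following is a minimal Python sketch that parses a hypothetical log entry pairing a generation prompt with a link to the resulting image. The field names (`prompt`, `image_url`, `created_at`) are assumptions made purely for illustration; the actual schema of the exposed files has not been published.

```python
import json

# Hypothetical example of the kind of JSON log record the report describes:
# a generation prompt paired with a link to the generated image.
# Field names are assumed for illustration; the real schema was not disclosed.
sample_record = """
{
    "prompt": "example text prompt submitted to the image generator",
    "image_url": "https://example.com/generated/abc123.jpg",
    "created_at": "2025-03-01T12:00:00Z"
}
"""

record = json.loads(sample_record)

# A record like this ties user intent (the prompt) to generated output (the URL),
# which is why an unauthenticated database of such logs is sensitive even
# when it contains no conventional PII fields.
print(f"Prompt:    {record['prompt']}")
print(f"Image URL: {record['image_url']}")
print(f"Created:   {record['created_at']}")
```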

Upon discovering the vulnerability, the researcher promptly notified GenNomis and AI-NOMIS, and the database was subsequently secured against public access. However, the companies never acknowledged or responded to the disclosure notice, leaving it unclear how long the data was exposed and whether unauthorized parties accessed it.

This incident raises broader concerns about the ethical implications and potential abuses of AI-driven image generation technologies. It underscores the need for companies in this sector to prioritize data security and enforce ethical guidelines to prevent misuse and protect individuals from harm.