Thousands of web applications built using AI-powered development platforms have exposed sensitive corporate data to the public internet, according to a new investigation. Services like Lovable, Base44, Replit, and Netlify enable users with minimal technical expertise to generate functional web applications in seconds through AI assistance, but this rapid development has come at a significant security cost. The exposed applications contain databases, API keys, internal communications, and other confidential information accessible to anyone with an internet connection.
These AI development platforms have democratized web application creation by allowing users to describe their desired functionality in plain language, with AI systems generating the necessary code automatically. While this approach removes traditional barriers to software development, it also bypasses the security review processes and best practices typically enforced by experienced development teams. The platforms themselves provide the infrastructure to host these applications, making them immediately available online upon creation.
The security failures stem from applications deployed without basic access controls, authentication mechanisms, or data protection measures. Many of these AI-generated apps connect directly to databases or external services using hardcoded credentials that remain visible in the application code or configuration files. In numerous cases, administrative interfaces, customer data, financial records, and proprietary business information have been left reachable without any login requirement at all.
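The hardcoded-credential problem described above has a well-known remedy: secrets belong in the deployment environment, not in the source that gets shipped. The following is a minimal illustrative sketch (the variable name `DB_PASSWORD` and the function are hypothetical, not taken from any of the affected platforms) of failing loudly when a secret is missing rather than falling back to a baked-in value:

```python
import os

# Anti-pattern: a credential hardcoded into the application source.
# Anyone who can read the deployed bundle or repository can extract it.
#   DB_PASSWORD = "hunter2"   # do not do this

def get_db_password() -> str:
    """Read the database password from the environment instead of the source.

    Raises RuntimeError when the variable is unset, so a misconfigured
    deployment fails at startup instead of silently using a baked-in secret.
    """
    password = os.environ.get("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

The same pattern applies to API keys and service tokens; most hosting platforms, including those named above, provide some form of environment-variable or secrets configuration for this purpose.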
The scope of exposure affects organizations across multiple sectors, with sensitive data from businesses that used these platforms for internal tools, customer-facing applications, or proof-of-concept projects now publicly accessible. The ease of discovering these vulnerable applications means threat actors can systematically identify and exploit exposed data with minimal effort. Some exposed applications contain active database connections, API tokens with broad permissions, and credentials that could enable further compromise of corporate systems.
Organizations that have used AI-powered development platforms should immediately conduct security audits of any deployed applications. This includes reviewing access controls, removing or rotating any exposed credentials, implementing authentication requirements, and ensuring databases are not publicly accessible. Development teams should treat AI-generated code with the same security scrutiny applied to traditionally developed applications, including penetration testing and security reviews before production deployment. Companies should also inventory all applications created through these platforms to identify potential exposure points.
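One concrete step in such an audit is scanning application source for secret-looking strings before (or after) deployment. The sketch below is illustrative only: the two patterns shown (the AWS access-key-ID prefix and a generic `key = "value"` shape) are examples, not an exhaustive list, and a real audit should use a dedicated secret scanner.

```python
import re

# Illustrative patterns for secret-looking strings; real scanners
# (e.g. purpose-built secret-scanning tools) use far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID prefix format
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_possible_secrets(source: str) -> list[str]:
    """Return lines of source text that match any secret-looking pattern."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

Any hit found this way should be treated as already compromised: rotate the credential at the issuing service, then remove it from the code and its history.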
Source: https://www.wired.com/story/thousands-of-vibe-coded-apps-expose-corporate-and-personal-data-on-the-open-web/


