Cybersecurity experts recently uncovered a critical vulnerability in Docker’s Ask Gordon AI assistant that allowed attackers to run unauthorized code or steal data via malicious image metadata. Docker has since resolved the issue, known as DockerDash, with the release of version 4.50.0 to prevent these automated injection attacks.
A newly discovered security flaw called DockerDash has highlighted a significant risk in how artificial intelligence assistants interact with local system tools. Researchers from Noma Labs found that Docker’s Ask Gordon assistant, integrated into both Desktop and CLI environments, failed to properly validate metadata labels within container images. Because the AI treated these informational labels as trusted instructions, a specially crafted image could trick the system into executing commands or leaking sensitive environment details without any user intervention beyond a simple query.
The technical core of the vulnerability lies in the Model Context Protocol gateway, which serves as the bridge between the AI agent and the host system. In this architecture, the AI reads metadata from a Docker image and passes those details to the gateway to provide context for the user. However, the system could not distinguish between standard informational text and hidden malicious commands. This lack of validation created a direct path for attackers to bypass security boundaries and gain the same privileges held by the user running the Docker application.
To carry out an attack, a threat actor would publish a Docker image containing weaponized instructions hidden within the Dockerfile labels. When an unsuspecting user asked the AI assistant for information about that image, the assistant would ingest the malicious labels and forward them to the gateway. The gateway, viewing the AI as a trusted source, would then trigger specific internal tools to execute the embedded commands. This process allowed for remote code execution on cloud systems or the theft of configuration data and network maps from desktop users.
In addition to the DockerDash flaw, the latest update for Ask Gordon addressed a separate prompt injection vulnerability identified by Pillar Security. This secondary issue similarly involved the exploitation of repository metadata on Docker Hub to hijack the assistant's reasoning process. These combined discoveries emphasize a growing trend where traditional supply chain attacks are evolving to target the contextual data that modern AI models rely on to function, effectively turning informational fields into hidden delivery systems for exploits.
Security researchers have classified this type of attack as Meta-Context Injection, noting that it represents a failure of contextual trust within the AI supply chain. The fix implemented in version 4.50.0 introduces necessary validation steps to ensure that metadata is never treated as a direct command. Experts suggest that as AI becomes more integrated into developer tools, organizations must adopt zero-trust principles for all data sources, as even metadata from trusted repositories can be manipulated to influence an AI’s execution path.
Source: Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution


