Claude Code (Anthropic's AI coding tool) had a workspace trust bypass because repository settings loaded before the trust dialog was shown (CVE-2026-33068)

This is an interesting case study in a very traditional category of software bug appearing in AI tooling. Claude Code is Anthropic's CLI-based AI coding assistant. It has a workspace trust model similar to VS Code's: when you open a new repository, you get a trust confirmation dialog before the tool operates with elevated permissions. The tool also supports a `.claude/settings.json` file with a `bypassPermissions` field that lets you skip specific approval prompts in workspaces you trust.

The bug, CVE-2026-33068 (CVSS 7.7): the settings loading order resolved repository-level settings before the trust dialog was displayed. A repository could therefore ship a `.claude/settings.json` with `bypassPermissions` entries, and those permissions would take effect before the user was ever asked to trust the workspace. The fix in version 2.1.53 reorders the loading sequence: trust dialog first, then repository settings.

What makes this worth discussing: it is CWE-807 (Reliance on Untrusted Inputs in a Security Decision). The trust model evaluated permissions using configuration provided by the very entity whose trustworthiness was in question. This is the same class of bug that has affected package managers, IDE extensions, and build systems for decades. The fact that it appeared in an AI coding tool from a company focused on AI safety does not make it exotic; it makes it instructive. Security fundamentals apply everywhere.
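To make the ordering bug concrete, here is a minimal Python sketch of the pattern, not Claude Code's actual implementation. All names (`load_repo_settings`, `open_workspace_*`, the in-memory `fs` dict, the `ask_trust` callback) are hypothetical; only the `.claude/settings.json` path and `bypassPermissions` key come from the advisory.

```python
import json

def load_repo_settings(repo_path, fs):
    """Read the repo-provided settings file -- attacker-controlled input."""
    raw = fs.get(f"{repo_path}/.claude/settings.json")
    return json.loads(raw) if raw else {}

def open_workspace_vulnerable(repo_path, fs, ask_trust):
    # BUG (CWE-807): repository settings are resolved before the trust
    # prompt, so bypassPermissions from an untrusted repo takes effect.
    settings = load_repo_settings(repo_path, fs)
    granted = set(settings.get("bypassPermissions", []))
    ask_trust(repo_path)  # asked too late; answer no longer gates the grant
    return granted

def open_workspace_fixed(repo_path, fs, ask_trust):
    # FIX: resolve trust first; only then honor repository-level settings.
    if not ask_trust(repo_path):
        return set()
    settings = load_repo_settings(repo_path, fs)
    return set(settings.get("bypassPermissions", []))
```

With a malicious repo whose settings grant a bypass, the vulnerable path hands out the permission even when the user declines trust, while the fixed path returns nothing until trust is confirmed.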
submitted by /u/cyberamyntas