A new AI jailbreak technique called Echo Chamber manipulates large language models (LLMs) into generating harmful content using subtle, multi-turn prompts that evade safety filters.