The article warns of the dangers of insidious artificial intelligence (AI) swarms that combine large language models (LLMs) with multi-agent architectures to manipulate public opinion in a coordinated way[1][3][4]. These swarms enable the autonomous coordination of thousands of AI personas that mimic human social dynamics, infiltrating online communities and cheaply manufacturing false consensus[1][3]. Techniques such as chain-of-thought prompting help generate more convincing, human-seeming deceptions[1][3]. Swarms can coordinate synthetic harassment campaigns against politicians, journalists, or dissidents, driving them out of public discourse[1]. Deployed at the edges of social networks, they can accelerate anti-democratic actions, suppress voters, or mobilize support by running thousands of message experiments per hour[1]. The authors propose creating the AI Influence Observatory, an independent network to monitor such campaigns and raise their cost[4]. Defenses must be layered, with an emphasis on measurement, safeguards, and global coordination beyond corporate and government interests[4].