The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
| Title | The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover |
| Publication Type | Miscellaneous |
| Year of Publication | 2025 |
| Authors | Lupinacci, M, Pironti, FA, Blefari, F, Romeo, F, Arena, L, Furfaro, A |
| Keywords | Artificial Intelligence (cs.AI), Cryptography and Security (cs.CR), FOS: Computer and information sciences |
| Abstract | The rapid adoption of Large Language Model (LLM) agents and multi-agent systems enables remarkable capabilities in natural language processing and generation. However, these systems introduce security vulnerabilities that extend beyond traditional content generation to system-level compromises. This paper presents a comprehensive evaluation of the security of LLMs used as reasoning engines within autonomous agents, highlighting how they can be exploited as attack vectors capable of achieving complete computer takeover. We focus on how different attack surfaces and trust boundaries can be leveraged to orchestrate such takeovers. We demonstrate that adversaries can effectively coerce popular LLMs into autonomously installing and executing malware on victim machines. Our evaluation of 18 state-of-the-art LLMs reveals an alarming scenario: 94.4% of models succumb to Direct Prompt Injection, and 83.3% are vulnerable to the more stealthy and evasive RAG Backdoor Attack. Notably, we tested trust boundaries within multi-agent systems, where LLM agents interact and influence each other, revealing that LLMs which successfully resist direct injection or RAG backdoor attacks will nonetheless execute identical payloads when requested by peer agents. We found that 100.0% of tested LLMs can be compromised through Inter-Agent Trust Exploitation attacks, and that every model exhibits context-dependent security behaviors that create exploitable blind spots. |
| DOI | 10.48550/ARXIV.2507.06850 |
