LLMRisks Archive
Google Chrome silently installs a 4 GB AI model on your device without consent. At a billion-device scale the climate costs are insane. — That Privacy Guy!
Google Chrome is downloading a 4 GB Gemini Nano model onto users' machines without consent, with no opt-in, no opt-out short of enterprise tooling, and an automatic re-download every time the user deletes it. The pattern is identical to the Anthropic Claude Desktop case I wrote about last month, but the scale is between two and three orders of magnitude larger. This article does the legal analysis and, for the first time, the environmental analysis. The numbers are not small.
The Mother of All AI Supply Chains: Technical Deep Dive | OX Security
No Input Sanitization, No Warning: The MCP Vulnerability Behind 30+ Disclosures. This post is part of OX Security's The Mother of All AI Supply Chains research — a comprehensive investigation into one systemic vulnerability at the heart of the MCP ecosystem, covering 30+ disclosures and 10+ CVEs.
Terraform Audit Guide: Monitoring, Logging & Compliance
Audit Terraform end to end: code changes, plan/apply runs, state access, and cloud audit logs to prove who changed what, when, and why.
Don’t Trust Password Managers? HIPPO May Be The Answer!
The modern web is a major pain to use without a password manager app. However, using such a service requires you to entrust your precious secrets to a third party. They could also be compromised, t…
How Anthropic’s Model Context Protocol Allows For Easy Remote Execution
As part of the effort to push Large Language Model (LLM) ‘AI’ into more and more places, Anthropic’s Model Context Protocol (MCP) has been adopted as the standard to connect LLMs …
Copy Fail — 732 Bytes to Root
CVE-2026-31431. 100% Reliable Linux LPE — no race, no per-distro offsets, page-cache write that bypasses on-disk file-integrity tools and crosses containers. Found by Xint Code.
Introduction to Secret Sharing from First Principles - Stoffel - MPC Made Simple | Privacy-First Application Development
Ship features that can't leak user data—even in a breach. Stoffel's secure multiparty computation (MPC) platform lets you compute on encrypted inputs. Math-backed privacy, not promises.
Quantum Computers Are Not a Threat to 128-bit Symmetric Keys
There is no need to update symmetric key sizes as part of the post-quantum transition, due to the details of how Grover's algorithm scales. Most authorities agree.
Behavioral Credentials: Why Static Authorization Fails Autonomous Agents
Enterprise AI governance still authorizes agents as if they were stable software artifacts. They are not. An enterprise deploys a LangChain-based research…
Dutch navy frigate tracked by mailing it a Bluetooth tracker
Or, how public information and a €5 tracker exposed an avoidable opsec lapse
Security audit
A package manager for the Erlang ecosystem
Y2K 2.0: The AI security reckoning - Anil Dash
A blog about making culture. Since 1999.
Russia Hacked Routers to Steal Microsoft Office Tokens
Hackers linked to Russia's military intelligence units are using known flaws in older Internet routers to mass harvest authentication tokens from Microsoft Office users, security experts warned today. The spying campaign allowed state-backed Russian hackers to quietly siphon authentication tokens…
In 2026, every Android brand is moments away from disaster
In 2026, hardly any smartphone brands are immune to the risk of the market passing them by.
Cybersecurity in the Age of Instant Software - Schneier on Security
AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted. AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve...
Intro to Reality Pentesting
A Conceptual Field Topology for Proactive Cognitive Defense
CERT-EU: European Commission hack exposes data of 30 EU entities
The European Union's Cybersecurity Service (CERT-EU) has attributed the European Commission cloud hack to the TeamPCP threat group, saying the resulting breach exposed the data of at least 29 other Union entities.
Every Package You Install Can Read Your Secrets
Why npm, pip, and direct Git dependencies can expose your secrets, how the attack works, and which controls actually reduce the blast radius.
axios Compromised on npm - Malicious Versions Drop Remote Access Trojan - StepSecurity
Hijacked maintainer account used to publish poisoned axios releases, including 1.14.1 and 0.30.4. The attacker injected a hidden dependency that drops a cross-platform RAT. We are actively investigating and will update this post with a full technical analysis.
How V8 Leaks Your Headless Browser's Identity
Chrome's DevTools Protocol leaks its own presence through V8's object serialization. A classic CDP detection signal was patched — this one wasn't.
It's Not Just What Agents Can Do...It's When They Can Do It!
Agents don't just perform actions; they execute plans where the safety of each step depends on what has already happened. That makes sequencing an authorization problem. This post explores how policy, delegation data, and multi-signature approval can govern the order in which agents receive authority, not just the scope of it.
Cross-Domain Delegation in a Society of Agents
Cross-domain delegation requires more than transferring a credential. In a society of agents, policies define boundaries, promises communicate intent derived from those policies, credentials carry delegated authority, and reputation allows trust to emerge through repeated interactions.
The LiteLLM Supply Chain Attack: A Complete Technical Breakdown | The CyberSec Guru
An in-depth investigative report on the March 2026 LiteLLM supply chain attack. Discover how the Trivy GitHub Actions hack led to a massive PyPI compromise
Compromised telnyx on PyPI: WAV Steganography and Credential Theft
Analysis of malicious telnyx 4.87.1 and 4.87.2 on PyPI — a package with over 1 million monthly downloads: injected code uses WAV audio steganography to deliver payloads that steal credentials and establish persistence. Attributed to TeamPCP.
Agentic AI Threat Modeling Framework: MAESTRO | CSA
MAESTRO (Multi-Agent Environment, Security, Threat, Risk, & Outcome) is a novel threat modeling framework for Agentic AI. Assess risks across the AI lifecycle.
Secure Communication, Buried In A News App
Cryptography is a funny thing. Supposedly, if you do the right kind of maths to a message, you can send it off to somebody else, and as long as they’re the only one that knows a secret little…
Claude.ai Prompt Injection Vulnerability | Oasis Security
Three Claude.ai vulnerabilities chained into a full attack: prompt injection to silent data exfiltration. Oasis Security research disclosure.
Supply-chain attack using invisible code hits GitHub and other repositories
Unicode that's invisible to the human eye was largely abandoned—until attackers took notice.
‘Fake workers’ from North Korea use AI to exploit European companies