Claude Cowork: AI Security Risks You Need to Know — And How to Mitigate Them

Introduction

Claude Cowork is one of the most capable AI productivity tools available today. Launched by Anthropic in early 2026, it allows non-technical users to automate complex, multi-step tasks — processing files, generating reports, browsing the web, and interacting with desktop applications — all without writing a single line of code.

For startups, SMBs, and growing teams, the appeal is obvious: more output, less manual effort. But with great capability comes real security responsibility. This article breaks down the key security risks organizations need to understand before deploying Claude Cowork, and the concrete mitigation actions to put in place.

Risk #1: No Audit Logging — A Compliance Blind Spot [HIGH RISK]

The Risk

Anthropic explicitly states that Cowork activity is not captured in Audit Logs, Compliance API, or Data Exports. Conversation history is stored locally on the user's device — not in Anthropic's centralized audit infrastructure. For organizations operating under compliance frameworks, this creates a significant evidentiary gap. Frameworks directly affected:

  • SOC 2 (requires demonstrable access controls and evidence trails);

  • HIPAA/PIPEDA (demand audit controls over systems that may touch PHI);

  • GDPR and other privacy laws (require records of processing activities for PII);

  • PCI-DSS (mandates full auditability of cardholder data environments);

  • ISO 27001 (requires logging of information processing activities) and

  • CMMC (requires audit controls over systems collecting, storing, processing and retaining CUI).

Mitigation Actions

  • Immediately restrict Cowork access on endpoints in any regulated environment.

  • Implement endpoint-level logging via OpenTelemetry or your SIEM.

  • Document Cowork as a known gap in your risk register.

  • Engage your vCISO to assess the blast radius across your compliance framework obligations.
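Because Cowork does not write to centralized audit infrastructure, the compensating control is endpoint-side logging that your SIEM agent can collect. Below is a minimal sketch of that idea using only the Python standard library — the log path, field names, and event taxonomy are illustrative assumptions, not part of any Cowork API.

```python
import getpass
import json
import socket
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; point this wherever your SIEM agent collects from.
AUDIT_LOG = Path("cowork_audit.jsonl")

def log_cowork_event(action: str, detail: str) -> dict:
    """Append one structured, SIEM-ingestible audit event per Cowork action."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "host": socket.gethostname(),
        "user": getpass.getuser(),
        "tool": "claude-cowork",
        "action": action,
        "detail": detail,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example: record that a session touched a client folder.
log_cowork_event("session_start", "processing Q3 expense reports")
```

Appending one JSON object per line (JSONL) keeps the log trivially parseable by most SIEM forwarders without any custom parsing configuration.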

Risk #2: Prompt Injection via Untrusted Files and Websites [HIGH RISK]

The Risk

Because Cowork can read local files and browse the web autonomously, it is vulnerable to indirect prompt injection attacks. A malicious actor can embed hidden instructions inside a document, PDF, or webpage that Cowork processes — redirecting Claude's behaviour without the user's knowledge.

Here is an example: A user receives a file via email with hidden white-on-white text instructing Cowork to locate the user's SSH key and embed it in an expense report. Security researchers have also demonstrated web-based variants where malicious site content silently uploads local files to an attacker-controlled server — using Anthropic's own domain as the outbound channel, bypassing standard DLP and firewall rules.

Mitigation Actions

  • Treat all external documents and websites as untrusted code.

  • Define a clear policy on which file sources and websites Cowork may process.

  • Never run Cowork on files from unknown parties without sandboxed pre-inspection.

  • Review Cowork's planned actions before approving execution.

  • Restrict folder access to only what is needed for each specific task.
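The sandboxed pre-inspection step above can be partially automated. The sketch below flags two common hiding techniques in extracted document text — zero-width Unicode characters and instruction-like phrases — before a file ever reaches Cowork. The character set and regex patterns are illustrative assumptions; tune them to your own threat model, and treat this as one layer, not a complete defense against prompt injection.

```python
import re

# Characters commonly used to hide text in documents (illustrative, not exhaustive).
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

# Hypothetical phrase patterns; extend these based on observed attack styles.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"\bssh[-_ ]?key\b", re.I),
    re.compile(r"upload .* to http", re.I),
]

def pre_inspect(text: str) -> list[str]:
    """Return a list of findings; an empty list means nothing was flagged."""
    findings = []
    hidden = [c for c in text if c in ZERO_WIDTH]
    if hidden:
        findings.append(f"{len(hidden)} zero-width character(s) found")
    for pat in SUSPICIOUS_PATTERNS:
        if pat.search(text):
            findings.append(f"matched pattern: {pat.pattern}")
    return findings
```

Note that white-on-white text survives plain-text extraction, so a scanner like this catches it as ordinary text if the phrasing matches — which is exactly why phrase patterns matter alongside character checks.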

Risk #3: Uncontrolled Local File Access [MEDIUM-HIGH RISK]

The Risk

Cowork operates directly on your computer's file system, accessing any folder you grant permission to. Unlike cloud-based tools, Cowork can read, write, move, and modify local files — including sensitive ones you may not have intended to expose: contracts, financial records, HR data, customer PII, configuration files, credentials, and API keys — often stored casually in Documents, Downloads, or Desktop folders.

Mitigation Actions

  • Apply the principle of least privilege — connect only specific folders needed per task.

  • Never grant blanket access to root directories or drives containing sensitive data.

  • Maintain clean file hygiene — remove credentials and tokens from accessible directories.

  • Establish a dedicated Cowork working directory for each task.

  • Conduct periodic access reviews of which folders employees have granted access to.
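The "dedicated working directory per task" practice can be made mechanical so employees never grant access to Documents or Downloads out of convenience. This is a minimal stdlib sketch — the directory layout and function name are assumptions for illustration, not a Cowork feature.

```python
import shutil
from pathlib import Path

def make_task_workspace(task_name: str, sources: list[Path],
                        base: Path = Path("cowork_tasks")) -> Path:
    """Copy only the files a task needs into a fresh, dedicated directory.

    Grant Cowork access to the returned directory instead of Documents,
    Downloads, or a shared drive; delete the directory when the task is done.
    """
    workspace = base / task_name
    workspace.mkdir(parents=True, exist_ok=True)
    for src in sources:
        shutil.copy2(src, workspace / src.name)
    return workspace
```

Because the workspace holds copies, a prompt-injected session can at worst corrupt the copies — the originals, and everything else in the user's home directory, stay out of reach.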

Risk #4: MCP Server Supply Chain Risk [MEDIUM RISK]

The Risk

Cowork supports MCP (Model Context Protocol) server integrations, which extend Claude's capabilities to external tools and services (Slack, Google Drive, HubSpot, etc.). Each MCP integration represents a third-party dependency — and like open-source software, a compromised or malicious MCP server can introduce risk into your environment.

Instructions embedded in MCP tool results can potentially manipulate Cowork's behaviour, similar to prompt injection via files and websites.

Mitigation Actions

  • Vet every MCP server before connecting — treat it like onboarding a software vendor.

  • Use only official, well-maintained MCP servers from reputable sources.

  • Apply minimum necessary permissions when configuring MCP integrations.

  • Audit MCP connections regularly and remove any no longer actively used.

  • Monitor for unusual outbound activity from workstations running Cowork with MCPs.

Risk #5: Shadow AI Adoption [MEDIUM RISK]

The Risk

Cowork is easy to install and requires no IT involvement. In organizations without a formal AI Governance or Acceptable Use Policy, employees may be using Cowork today — accessing sensitive data, processing client files, and generating outputs — with zero visibility at the organizational level.

This is Shadow AI, the equivalent of Shadow IT, and it carries the same risks: data leakage, compliance exposure, and an inability to respond to incidents because the activity was never tracked.

Mitigation Actions

  • Conduct an immediate AI Tool inventory — survey your team on current AI tool usage.

  • Establish an AI Acceptable Use Policy (AUP) or AI Governance Policy that explicitly addresses Agentic AI tools.

  • Require pre-approval for AI tools that access local files or automate multi-step workflows.

  • Include AI tool governance in employee onboarding and annual security awareness training.

  • Add Cowork to your vendor/tool risk register with noted limitations.

Risk #6: Task Persistence and Oversight Gaps [MEDIUM RISK]

The Risk

Cowork is designed to run long, complex tasks autonomously. The desktop app must remain open and the machine must stay awake for the duration. This means employees may initiate a task and walk away — leaving an AI agent executing actions on their behalf with no human in the loop. Without active oversight, consequential actions (file modifications, web submissions, data exports) can occur without timely review.

Mitigation Actions

  • Establish a review-before-proceed habit — always inspect planned actions before approving.

  • Set task scope boundaries — break large tasks into smaller approved steps.

  • Never leave sensitive Cowork sessions running unattended, especially on shared machines.

  • Log session initiation and completion as part of your endpoint monitoring strategy.
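Session initiation and completion logging can be as simple as wrapping each task hand-off in a context manager, so even unattended runs leave a reviewable trail. This sketch assumes a local JSONL file that your endpoint monitoring picks up; the path and record fields are illustrative, not a documented Cowork interface.

```python
import json
import time
import uuid
from contextlib import contextmanager
from pathlib import Path

SESSION_LOG = Path("cowork_sessions.jsonl")  # hypothetical collection point

def _write(record: dict) -> None:
    with SESSION_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

@contextmanager
def tracked_session(task: str):
    """Record start, completion, or failure of a Cowork task."""
    session_id = str(uuid.uuid4())
    start = time.time()
    _write({"event": "start", "id": session_id, "task": task, "ts": start})
    try:
        yield session_id
        _write({"event": "complete", "id": session_id,
                "duration_s": round(time.time() - start, 1)})
    except Exception as exc:
        _write({"event": "failed", "id": session_id, "error": repr(exc)})
        raise

# Usage: wrap any long-running task you kick off.
with tracked_session("reconcile vendor invoices") as sid:
    pass  # hand the task off to Cowork here
```

A session that never writes a "complete" or "failed" record is itself a signal: the machine slept, the app was closed mid-task, or something went wrong without a human noticing.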

A Note for Regulated Environments

Anthropic is explicit: do not use Cowork for regulated workloads. This is not a grey area. If your organization is pursuing or maintaining SOC 2, HIPAA, PCI-DSS, CMMC, ISO 42001, or ISO 27001, Cowork should be:

  • Blocked or restricted in any environment that touches regulated data.

  • Documented as a known limitation in your compliance posture.

  • Escalated to your vCISO or compliance lead for formal risk acceptance or remediation planning.

Final Thoughts

Claude Cowork is a genuinely powerful tool — and for non-regulated knowledge work, it can deliver real productivity gains. The risks outlined here are not reasons to avoid it entirely. They are reasons to deploy it deliberately, with AI Governance in place.

The organizations that will benefit most from AI productivity tools are the ones that treat AI Governance as an enabler, not a barrier. If you haven't yet assessed your AI tool risk posture, now is the time.


Copyright © 2026 IRM Consulting & Advisory - All Rights Reserved.