Anthropic Lets Claude Code Control Your Computer — With a Leash

The new feature for Claude Code and Cowork allows the AI to execute multi-step tasks directly on your machine, a major step toward autonomous agents that Anthropic itself admits carries inherent risks.

SignalEdge·March 25, 2026·3 min read
Key Takeaways

  • Anthropic's Claude Code and Claude Cowork can now operate a user's computer to complete tasks.
  • New capabilities include opening files, using the browser, and running developer tools.
  • The feature is being released as a "research preview," and Anthropic warns its safeguards "aren't absolute," according to Ars Technica.
  • The AI will first attempt to use API connectors to services like Google Workspace before resorting to direct computer control.

Anthropic has updated its Claude Code and Claude Cowork AI tools so they can directly operate a user's computer, executing tasks like opening files, browsing the web, and running developer tools. The move places Anthropic squarely in the race to build more autonomous AI agents, but the company is hedging its bets by framing the release as a cautious experiment.

The Push for Autonomy

The update turns the Claude chatbot from a passive assistant into an active participant. According to Engadget, the AI will first attempt to complete a task through connectors to supported services such as Google Workspace or Slack. If no suitable connector is available, the model can now fall back on directly manipulating the user's machine to get the job done. TechCrunch reports this is part of a new "auto mode" designed to let the AI execute tasks with fewer manual approvals from the user.

This two-pronged approach—APIs first, direct control second—is a logical step in making AI assistants more useful. Developers have been cobbling together similar workflows for months using open-source frameworks. Anthropic is now building that capability directly into its product, aiming to streamline the automation of complex coding and administrative tasks. The pattern indicates a clear market demand for AI that does more than just talk.
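To make the pattern concrete, the fallback logic reported above can be sketched in a few lines of Python. All names here are illustrative assumptions; Anthropic has not published this interface, and the actual implementation is internal to Claude.

```python
# Minimal sketch of the "connectors first, direct control second" pattern.
# `connectors` and `computer_use` are hypothetical stand-ins, not a real API.

from typing import Callable, Optional

def run_task(
    task: str,
    connectors: dict[str, Callable[[str], Optional[str]]],
    computer_use: Callable[[str], str],
) -> str:
    """Try each registered service connector; fall back to direct control."""
    for name, connector in connectors.items():
        result = connector(task)          # None means "can't handle this task"
        if result is not None:
            return f"{name}: {result}"
    # No connector matched: operate the machine directly as a last resort.
    return f"computer: {computer_use(task)}"
```

The appeal of this design is that the constrained, auditable path (a service API) is always tried before the open-ended one (controlling the screen and keyboard), which narrows the surface area where things can go wrong.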

A Short Leash and Explicit Warnings

Despite the new capabilities, Anthropic is delivering a strong message of caution. All three reports highlight the experimental nature of the feature. Ars Technica notes that the company is urging caution, with Anthropic stating that the safeguards in this "research preview" are not absolute. This is a direct admission that things can go wrong.

The term "research preview" is doing a lot of work here. It allows Anthropic to release a powerful, competitive feature while simultaneously distancing itself from liability if the AI misbehaves. The framing, as described by TechCrunch, is that Claude Code has been given more control, but is being kept "on a leash." Together, these reports point to a central tension in the AI industry: the commercial pressure to deploy increasingly autonomous systems is running ahead of the ability to guarantee their safety. For now, the responsibility for managing that risk falls squarely on the user who enables the feature.

SignalEdge Insight

  • What this means: The industry is shifting from conversational AI to functional AI agents, but the safety frameworks are still playing catch-up.
  • Who benefits: Developers who can now automate more complex workflows, and Anthropic, which maintains feature parity with competitors building agent-like tools.
  • Who loses: Users who enable these preview features without fully understanding the explicit warnings that the safeguards are imperfect.
  • What to watch: How long this feature remains a "research preview" and whether competitors like OpenAI follow suit with similarly cautious rollouts.
