
Claude Code Channels: Anthropic Turns the AI Coding Assistant Into an Always-On Collaborator

The AI coding tools race has mostly been measured in benchmarks: larger models, faster inference, smarter autocomplete. But the shift actually reshaping developer workflows has little to do with model size.

It’s about communication. Specifically, how developers talk to AI systems — and when.

Anthropic’s new Claude Code Channels feature moves AI from a tool you open when needed to a collaborator you message from anywhere. Developers can now interact with their coding agent through Telegram or Discord, while the agent runs continuously in the background. For many teams, that small architectural shift changes everything about how software gets built. And given the disruption Claude Code already caused when Washington’s defense contractor ecosystem lost access to it overnight, the stakes around this tool’s evolution are anything but academic.

The End of the “Prompt Window” Workflow

Until now, most AI coding assistants followed the same pattern: open an interface, type a request, wait for a response. Synchronous, contained, tethered to your desk.

Claude Code Channels breaks that entirely. Developers run a persistent Claude agent that listens for instructions sent through messaging apps — no session to reopen, no IDE to return to.

The scenario VentureBeat captured from the official demo illustrates it cleanly: a developer’s CI pipeline fails while they’re away from the keyboard. The developer sends a Discord message: “Is the build green yet?” Claude replies: “Still running tests — ~2 min. I’ll ping you when it’s done.” The developer fires back: “Ship it when green.” The whole exchange feels like messaging a colleague, not prompting a tool.

That’s asynchronous development. The AI keeps working while the developer does something else entirely.

Why Developers Are Calling It an OpenClaw Killer

The concept of messaging an AI agent isn’t new. OpenClaw, launched in November 2025 by Austrian developer Peter Steinberger, built exactly this — a persistent AI worker reachable 24/7 through iMessage, Slack, Telegram, WhatsApp, and Discord, capable of writing code, managing files, and running full marketing campaigns autonomously. It proved the model worked, attracted a large following, and then ran headlong into its own problems: serious security risks, a high configuration burden, and limited accessibility for non-technical users. A wave of safer offshoots — NanoClaw among them — emerged trying to fix what OpenClaw couldn’t.

Anthropic’s approach takes the same core concept and productizes it. Simplified setup, integrated safety controls, stable model access — and the Anthropic brand’s commitment to security behind it. The community reaction on X was immediate. As one developer put it, Anthropic’s speed of shipping — texting integration, thousands of MCP skills, and autonomous bug-fixing in four weeks — was striking. Another was blunter: “Claude just killed OpenClaw with this update.”

Notably, Steinberger himself has since joined OpenAI.

The Technology Behind It: MCP as the Universal Layer

Claude Code Channels runs on Anthropic’s Model Context Protocol, an open standard Anthropic introduced in 2024 and donated to the Agentic AI Foundation under the Linux Foundation in December 2025. Think of MCP as a universal connector — a standardized interface between AI models and the external tools, platforms, and data sources they interact with.

In the Channels architecture, as the official Claude Code documentation explains, an MCP server acts as a two-way bridge. When a developer starts a Claude Code session with the --channels flag, they’re not just opening a chat — they’re spinning up a polling service. Incoming messages from Telegram or Discord are injected directly into the active Claude Code session. Claude executes, responds, and routes replies back through the same channel.
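The shape of that bridge can be sketched in a few lines. To be clear, this is an illustration of the pattern the documentation describes, not Anthropic’s implementation: all names here (PollingBridge, fetch_updates, run_in_session, send_reply) are hypothetical stand-ins.

```python
# Illustrative sketch of the Channels polling pattern. The real connector is an
# MCP server inside Claude Code; every name below is a hypothetical stand-in.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Message:
    channel: str   # e.g. "telegram" or "discord"
    chat_id: int
    text: str

class PollingBridge:
    """Two-way bridge: poll a messaging platform, inject messages into an
    agent session, and route the agent's replies back to the same channel."""

    def __init__(self,
                 fetch_updates: Callable[[], List[Message]],
                 run_in_session: Callable[[str], str],
                 send_reply: Callable[[Message, str], None]):
        self.fetch_updates = fetch_updates    # platform long-poll
        self.run_in_session = run_in_session  # hands text to the live session
        self.send_reply = send_reply          # routes the answer back out

    def poll_once(self) -> int:
        """One poll cycle: forward every pending message, return count handled."""
        updates = self.fetch_updates()
        for msg in updates:
            answer = self.run_in_session(msg.text)  # agent executes the instruction
            self.send_reply(msg, answer)            # reply lands in the same chat
        return len(updates)

# Wiring it up with fake transports to show the round trip:
inbox = [Message("telegram", 42, "Is the build green yet?")]
outbox: List[Tuple[int, str]] = []

bridge = PollingBridge(
    fetch_updates=lambda: inbox.copy(),
    run_in_session=lambda text: f"Working on it: {text}",
    send_reply=lambda m, a: outbox.append((m.chat_id, a)),
)
handled = bridge.poll_once()
print(handled, outbox[0])
```

The key property is symmetry: the same connector that injects a message also carries the reply back, which is what makes the exchange feel like a chat with a colleague rather than a one-shot prompt.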

Because Channels is built on MCP’s open standard, the community can build its own connectors for Slack, WhatsApp, or any other platform — rather than waiting for Anthropic to ship them. That’s a deliberate strategic choice: maintain the security and quality of the core model while letting open-source innovation expand the ecosystem around it.

How It Works in Practice

The official setup process is straightforward, requiring Claude Code v2.1.80 or later and the Bun runtime. For Telegram, a developer creates a bot through BotFather, generates a token, and connects it to the active Claude session. For Discord, the process runs through the Discord Developer Portal with Message Content Intent enabled.

Once configured, the workflow is simple: send a message through Telegram or Discord, the MCP connector forwards it to the Claude session, Claude interprets and executes, and results come back to the messaging channel. The agent becomes reachable from anywhere a developer can send a message.
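The return leg of that loop can be pictured with Telegram’s standard Bot API, whose sendMessage method is how any bot posts into a chat. This is a generic sketch, not the Channels connector itself; the token and chat ID are placeholders, and the request is built but deliberately not sent.

```python
# Hedged sketch: how a connector could push an agent's result back to Telegram
# via the public Bot API's sendMessage method. Token and chat_id are placeholders.
import json
import urllib.request

def build_send_request(bot_token: str, chat_id: int, text: str) -> urllib.request.Request:
    """Prepare (but do not send) a Telegram sendMessage call."""
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(url, data=payload,
                                  headers={"Content-Type": "application/json"})

req = build_send_request("123:ABC", 42, "Build is green. Tests passed.")
# urllib.request.urlopen(req)  # would actually deliver the message
print(req.full_url)
```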

Practical use cases include debugging CI failures remotely, running automated test analysis, reviewing code changes, and generating documentation — all without touching the development environment directly. Both platforms support replies, edits, and reactions. Discord additionally offers history retrieval and attachment downloads.

Channels currently require claude.ai login and are available to Pro and Max users in research preview. Team and Enterprise organizations need admin enablement through managed settings.

Security Is the Real Question

Persistent AI agents introduce risks that don’t exist with standard chat assistants. These systems can execute shell commands, modify files, access repositories, and interact with infrastructure. A misconfigured agent can cause real damage.

As the DEV Community’s setup guide documents, access control defaults to “pairing” mode — developers should switch to allowlist once configured to prevent unauthorized access. The recommended safeguards are becoming standard practice for any agent-based environment:

- Read-only repository mode: prevent accidental code modification
- Command allowlists: restrict which commands the AI can run
- Human approval for commits: ensure oversight before changes ship
- Separate staging environments: prevent production incidents
- Rate limits on commands: reduce runaway loops
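The command-allowlist control is the simplest of these to picture. A minimal sketch, assuming a hand-maintained allowlist (Claude Code’s actual mechanism is configuration-based, so this is only the idea):

```python
# Minimal command-allowlist check: anything not explicitly listed is refused.
import shlex

ALLOWED_COMMANDS = {"git status", "git diff", "pytest", "cat"}  # example policy

def is_allowed(command: str) -> bool:
    """Allow a command only if its program (plus subcommand, for git) is listed."""
    parts = shlex.split(command)
    if not parts:
        return False
    head = " ".join(parts[:2]) if parts[0] == "git" else parts[0]
    return head in ALLOWED_COMMANDS

print(is_allowed("git status"))        # True
print(is_allowed("git push --force"))  # False: "git push" is not on the list
print(is_allowed("rm -rf /"))          # False
```

Note the default: an unrecognized command fails closed, which is the posture every one of the listed safeguards shares.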

The --dangerously-skip-permissions flag exists for trusted environments but is explicitly not recommended for general use. That the official documentation names it “dangerously” is the clearest signal of how seriously Anthropic treats the attack surface these agents create.

AI as Infrastructure, Not Assistant

The most significant part of Claude Code Channels isn’t the Telegram integration. It’s what the architecture implies about where AI development tools are heading.

Coding assistants are evolving from interactive helpers into background systems — persistent processes that operate continuously rather than sessions you open and close. The same shift that took computing from desktop applications to cloud infrastructure is now happening to AI tooling. As VentureBeat’s analysis notes, this moves the entire category from a synchronous ask-and-wait model to an asynchronous, autonomous partnership.

That trajectory connects directly to the broader question of what Claude is becoming — a question that extends well beyond developer tooling. The displacement of knowledge work Claude is already driving makes the always-on agent model something more than a convenience feature. It’s an architectural bet on a future where software teams include persistent AI collaborators running alongside human engineers around the clock.

Agent Governance Is the Next Frontier

As agents gain deeper access to codebases and infrastructure, governance follows. Organizations are already drafting internal policies defining what automated systems can and cannot do independently:

AI Agent Safety Policy

Allowed:
- Run tests
- Analyze logs
- Generate code suggestions

Requires approval:
- Committing changes
- Opening pull requests

Blocked:
- Force pushing to main branch
- Deleting infrastructure configs
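A policy like the one above only matters if something enforces it. A minimal, hypothetical enforcement layer (the action names and Verdict type are invented for illustration) might look like this:

```python
# Hypothetical enforcement layer for the sample policy above.
from enum import Enum

class Verdict(Enum):
    ALLOWED = "allowed"
    NEEDS_APPROVAL = "requires approval"
    BLOCKED = "blocked"

POLICY = {
    "run_tests": Verdict.ALLOWED,
    "analyze_logs": Verdict.ALLOWED,
    "suggest_code": Verdict.ALLOWED,
    "commit": Verdict.NEEDS_APPROVAL,
    "open_pr": Verdict.NEEDS_APPROVAL,
    "force_push_main": Verdict.BLOCKED,
    "delete_infra_config": Verdict.BLOCKED,
}

def check(action: str) -> Verdict:
    # Fail closed: any action the policy doesn't name is blocked.
    return POLICY.get(action, Verdict.BLOCKED)

print(check("run_tests").value)        # allowed
print(check("commit").value)           # requires approval
print(check("rewrite_history").value)  # blocked (unlisted, so fail closed)
```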

This kind of policy layer will become as standard as access controls or deployment checklists as agent-based development matures. The teams building that governance infrastructure now will be the ones best positioned when always-on AI agents become the default rather than the experiment.

Claude Code Channels looks like a feature update. It’s actually a signal — that the next generation of software development won’t just use AI as a tool. It’ll run alongside it.

Related: After the OpenClaw Surge, Reality Sets In: Why AI Experts Aren’t Impressed
