Anthropic announces upgraded Claude 3.5 Sonnet AI

Claude Sonnet

Anthropic, an AI research company, has released a new tool that allows its AI models to control a user’s mouse cursor and perform basic computer tasks. The tool, called “Computer Use,” is currently available with the company’s Claude 3.5 Sonnet model through an API. Users can give Claude multi-step instructions to accomplish tasks on their computer.
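For developers, the feature is exposed as a beta tool in the API. The sketch below only builds a request payload (no request is sent); the tool type and model name follow Anthropic's published computer-use beta at launch, while the display dimensions and the helper function are illustrative assumptions.

```python
# Sketch: a Messages API payload that enables the computer-use tool.
# No network call is made here. Tool type and model name follow the
# computer-use beta at launch; display sizes are illustrative.

def build_computer_use_payload(instruction: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",   # beta tool type
            "name": "computer",
            "display_width_px": 1280,      # illustrative screen size
            "display_height_px": 800,
            "display_number": 1,
        }],
        "messages": [{"role": "user", "content": instruction}],
    }

payload = build_computer_use_payload("Open the browser and check my calendar")
```

Sending such a request also requires opting into the beta (via the `anthropic-beta: computer-use-2024-10-22` header at the time of launch).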

Claude analyzes screenshots of what is visible on the user’s screen and calculates the precise pixel movements needed to click in the right place. The tool has some limitations: it operates by capturing rapid successive screenshots rather than working with a live video stream.

This means it can miss fleeting notifications or other changes. It also cannot yet perform certain common tasks like drag and drop. During testing, the tool sometimes made errors.
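The screenshot-based approach implies a coordinate-mapping step on the client side: the model picks a target in screenshot space, and the controlling program converts it to real screen pixels before moving the cursor. A minimal sketch of that step, with hypothetical function and sizes (not Anthropic's implementation):

```python
# Sketch of the coordinate mapping behind the screenshot-and-click
# loop described above. Names and sizes are illustrative.

def scale_to_screen(x: int, y: int,
                    shot_size: tuple[int, int],
                    screen_size: tuple[int, int]) -> tuple[int, int]:
    """Map a point from screenshot space to real screen pixels.

    If screenshots are downscaled before being sent to the model,
    the model's chosen click target must be scaled back up before
    the cursor is moved.
    """
    sx = screen_size[0] / shot_size[0]
    sy = screen_size[1] / shot_size[1]
    return round(x * sx), round(y * sy)

# e.g. the model targets (640, 400) on a 1280x800 screenshot of a
# 2560x1600 display:
print(scale_to_screen(640, 400, (1280, 800), (2560, 1600)))  # (1280, 800)
```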

In one case, it abandoned a coding task midway to browse photos of Yellowstone National Park. The tool is now in public beta after limited testing by partner organizations, including employees at companies like Amazon, Canva, Asana, and Notion.


Other AI companies like OpenAI are also developing similar tools but have not yet made them publicly available. These tools are projected to generate substantial revenue in the coming years.

They have the potential to automate many tasks in office jobs. Anthropic has long emphasized to investors that its AI tools could execute large portions of some office jobs more efficiently than humans. The public testing of the Computer Use feature aligns with this goal.

The introduction of this technology has sparked debate about its potential implications. Some argue that such tools will make jobs easier, while others fear they could replace workers across various industries. To address these concerns, Anthropic has implemented several safeguards.

The company has developed methods to flag and mitigate potential abuse. With US elections approaching, Anthropic is on high alert for attempted misuse that could undermine public trust in electoral processes. While the tool’s current capabilities are not advanced enough to pose heightened risks, Anthropic has measures in place to monitor and guide its activity, especially around sensitive areas.

Anthropic is testing Computer Use in the public sphere to identify and address any issues that arise. The company is collaborating with developers to enhance the tool’s capabilities and find positive uses.
