Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’

In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The prompts, which apply to Claude on the web and in the iOS and Android apps, are instruction sets that spell out what the models can and cannot do. Anthropic says it will regularly update the published prompts as they change, noting that these updates do not affect the API, where developers set their own system prompts. Examples include “Claude cannot open URLs, links, or videos” and, when handling images, “avoid identifying or naming any humans.”
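
For developers who want to see how that works in practice, here is a minimal sketch of setting a system prompt through Anthropic’s Python SDK; the model ID and prompt text are illustrative assumptions, not Anthropic’s published prompts.

# Minimal sketch: supplying your own system prompt via the Messages API.
# The model ID and prompt text below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model ID
    max_tokens=512,
    system="You are a concise assistant. Answer in plain English.",  # developer-supplied system prompt
    messages=[{"role": "user", "content": "Explain what a system prompt does."}],
)

print(response.content[0].text)  # the model's text reply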

TechCrunch calls the Claude system prompt changelogs “the first of their kind from a major AI vendor,” and suggests that by revealing them Anthropic is “exerting pressure on competitors to publish the same.”

The prompts, located in the release notes section of Anthropic’s website, offer insight into how AI “thinks,” not unlike the way human brains adopt and adhere to certain principles, though without the fluidity. These are hard-and-fast rules.

“Facial recognition is a big no-no,” writes TechCrunch, adding that prompts build “certain personality traits” in models — in this instance, the “characteristics that Anthropic would have the Claude models exemplify.”

Claude 3 Opus, for example, includes prompts that TechCrunch says instruct it “to appear as if it ‘[is] very smart and intellectually curious,’ and ‘enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics.’”

It also mandates that Claude handle controversial topics “with impartiality and objectivity, providing ‘careful thoughts’ and ‘clear information’ — and never to begin responses with the words ‘certainly’ or ‘absolutely.’”

While the effect “gives the impression that Claude is some sort of consciousness,” TechCrunch concludes “that’s an illusion. If the prompts for Claude tell us anything, it’s that without human guidance and hand-holding, these models are frighteningly blank slates.”

Nonetheless, VentureBeat reports developers are celebrating Anthropic’s peek behind the curtain, writing that “a common complaint about generative AI systems revolves around the concept of a ‘black box,’ where it’s difficult to find out why and how a model came to a decision.”

“Public access to system prompts is a step towards opening up that black box a bit, but only to the extent that people understand the rules set by AI companies for models they’ve created,” VB adds.

ZDNet takes a deep dive into the prompts across the three Claude models, while TechCrunch reports that in 16 weeks Claude crossed the milestone of “$1 million in gross mobile app revenue across iOS and Android,” with nearly half of that revenue coming from the U.S.

“However, Claude is still ranking far behind top rival ChatGPT, which is No. 1 by overall downloads,” writes TechCrunch.
