Every major generative AI vendor, including OpenAI and Anthropic, uses system prompts to guide model behavior and tone.
These prompts are designed to keep models from behaving inappropriately and to set guidelines for their responses, such as being polite without being overly apologetic, or admitting when they don’t know something.
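To make this concrete, here is a minimal sketch of how a system prompt is typically supplied alongside user input in an OpenAI-style chat request. The model name and prompt text below are illustrative placeholders, not any vendor's actual values, and the function only assembles the request body rather than calling a real API.

```python
# Sketch: a system prompt is sent as the first message in the conversation,
# before any user input, so it shapes the model's behavior for every turn.
# Values here are illustrative, not any vendor's actual system prompt.

def build_chat_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat-completion request body with the system prompt first."""
    return {
        "model": "gpt-4",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

request = build_chat_request(
    "You are a helpful assistant. Be polite, but do not apologize excessively. "
    "If you do not know an answer, say so.",
    "What is the capital of France?",
)
print(request["messages"][0]["role"])  # the system message leads the conversation
```

Because the system prompt occupies a dedicated role, vendors can update tone and safety guidelines without changing user-facing behavior of the API itself.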
Secrecy of System Prompts:
Most vendors keep their system prompts confidential, likely for competitive reasons and to make it harder for users to find ways of bypassing them.
For instance, GPT-4’s system prompt can only be exposed through a prompt injection attack, though even then, the results may not be entirely reliable.
Anthropic’s Approach:
Anthropic is taking a different approach by openly publishing the system prompts for its latest models.
The prompts for Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3.5 Haiku are available in the Claude iOS and Android apps, as well as on the web.
This move aligns with Anthropic’s commitment to positioning itself as a more ethical and transparent AI vendor.