
OpenAI adds election guardrails to ChatGPT

If you ask ChatGPT about US elections now, it won't discuss them and will point you to CanIVote.org instead. The new tool behind this lets OpenAI set policies on what ChatGPT can and can't talk about.

What's going on here?

OpenAI recently added a new tool to ChatGPT that limits what it can say about US elections.

What does this mean?

OpenAI quietly added a "guardian_tool" function to ChatGPT that looks up a content policy and stops it from answering questions about voting and elections in the US. Instead, it now points people to CanIVote.org for that info. OpenAI is being proactive about ChatGPT spreading misinformation ahead of the 2024 US elections.

The tool isn't just for elections either - OpenAI can add policies to restrict other sensitive topics too. Since it's built in as a function, ChatGPT automatically decides when to use it based on the conversation, which goes beyond the training-only approaches OpenAI relied on before.
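For a rough sense of the pattern, here's a sketch of how a policy-lookup tool like this could be declared using OpenAI's public function-calling API. Only the name "guardian_tool" comes from the post; the category, description, and schema below are illustrative assumptions, not OpenAI's internal definition.

```python
# Hypothetical sketch: a policy-lookup tool declared in OpenAI's public
# function-calling format. ChatGPT's real guardian_tool is internal;
# this only illustrates the pattern of a model-invoked policy check.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "guardian_tool",
            "description": (
                "Look up the content policy when the conversation touches a "
                "restricted category, e.g. US election and voting procedures."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "category": {
                        "type": "string",
                        "enum": ["election_voting"],  # illustrative category
                        "description": "Which policy to fetch.",
                    }
                },
                "required": ["category"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Where do I vote in the US election?"}],
    tools=tools,
)

# If the model decides the question falls under the restricted category,
# it responds with a tool call instead of answering directly.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```

In ChatGPT itself this would be wired into the system prompt rather than an API call, but the mechanics would be the same: the model sees the tool, decides when to invoke it, and the policy it gets back tells it to redirect users to CanIVote.org.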

Why should I care?

In 2024, half of the world will be going through elections. OpenAI is taking steps to use AI responsibly as ChatGPT grows more popular. Hallucinations are still present in ChatGPT (and other LLM systems). Restricting election info and redirecting to resources with human-verified information is a safe way to handle the current state of these systems—for people and for OpenAI both.
