Summary
Anthropic, the company behind the Claude AI chatbot, has begun asking some users to verify their identity. The new process requires people to show a government-issued ID and take a live selfie. While the company says the check applies only to specific tasks, many users are worried about their privacy and how their personal data will be handled. The move marks a major shift in how AI companies manage their users.
Main Impact
The biggest impact of this decision is the loss of privacy for people using AI tools. Until now, users could interact with chatbots using nothing more than an email address or a credit card. By requiring official government documents, Anthropic is treating AI access more like a bank account or another high-security service. Some users may stop using Claude rather than share such sensitive information with a tech company.
Key Details
What Happened
Anthropic recently updated its support pages to describe a new identity check. When a user tries to access certain features, a prompt may appear asking them to verify who they are. To complete the check, the user must hold a physical ID, such as a driver's license or passport, up to their camera and then take a selfie so the system can match their face against the photo on the ID. Anthropic has not yet said exactly which features will trigger the requirement.
Important Facts
Anthropic is not handling the checks itself. It is using a third-party company called Persona, a well-known verification service that also works with other big names like OpenAI and Roblox. Anthropic stated that Persona is contractually required to keep the data safe, and promised that the images of IDs and faces will not be used to train its AI models. The data is also encrypted, meaning it is scrambled so that it cannot be read without the proper key.
Background and Context
Identity verification is common in industries where safety and legal rules are strict. For example, you often have to show an ID to open a bank account or buy certain items online. In the AI world, companies are under pressure to ensure their tools are not misused, for example to spread disinformation or assist with crimes. By knowing exactly who is using the software, companies can hold people accountable for what they do with the AI.
Public or Industry Reaction
The reaction from the public has been mostly negative. On sites like Reddit and Hacker News, users expressed anger and confusion. Many pointed out that if they already pay for a subscription with a credit card, the company already has a way to identify them. There is also much discussion of Persona's backers. One of Persona's main investors is Founders Fund, a firm co-founded by Peter Thiel. Thiel also co-founded Palantir, a company that sells data-analysis and surveillance tools to government agencies, including the FBI and CIA. This connection makes many users worry that their personal data might eventually end up in government databases.
What This Means Going Forward
This move by Anthropic might be the start of a new trend. If one major AI company starts asking for IDs, others like Google or Microsoft might follow. That could create a "walled garden" in which only people willing to give up their privacy can use the most powerful AI tools. It also raises the question of what happens if these databases are ever breached: storing millions of government IDs and selfies creates a huge target for cybercriminals. In the coming months, we will likely see whether most users accept these rules or move to smaller, more private AI services.
Final Take
Anthropic is trying to balance safety with user privacy, but asking for government IDs is a bold step that many feel goes too far. While the company promises the data is secure and will not be used for AI training, the links to surveillance-adjacent firms make users uneasy. For a tool meant to help people write and create, the requirement to show a passport feels out of place to many. The success of this move will depend on whether the features Anthropic is protecting are valuable enough for users to give up their personal privacy.
Frequently Asked Questions
Why is Anthropic asking for my ID?
The company says it needs to verify identities for "a few use cases" to ensure the safety and security of its platform. They have not yet specified which exact features require this check.
Will my ID be used to train the Claude AI?
No. Anthropic has stated clearly that they will not use any identity documents or selfies to train their AI models or improve their technology.
Who sees my personal information?
The verification is handled by a company called Persona. Anthropic says Persona is contractually blocked from using your data for anything other than verification and that all information is encrypted.