From Login to Likeness: How Identity Broke in the AI Era

  • Feb 9

The Historical Assumption: Identity as Access


For most of the internet’s history, identity was simple: you logged in, the system verified your credentials, and whatever happened next was assumed to be legitimate. This model worked because a person’s unique physical and behavioral traits—voice, face, and style—were difficult to replicate at scale. Traditional systems were built around Access Control, not Identity Control.


AI has quietly broken that assumption.


When Access Stopped Being the Problem


Modern identity systems have become excellent at verifying access through Multi-factor Authentication (MFA), biometrics, and Zero Trust architectures. All of these tools answer one question: “Should this request be allowed into the system?”


But AI has introduced a new failure mode: the problem is no longer unauthorized access; it is authorized access paired with unauthorized identity use. As explored in [Identity Governance vs. Identity Verification: What’s the Difference?], systems can now successfully authenticate while interacting with a synthetic entity that is not human at all.
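The gap can be made concrete with a small sketch. The function names, token values, and grant structure below are illustrative assumptions, not a real API; the point is that a credential check and an identity-use check are two separate questions:

```python
# Hypothetical sketch: a valid credential says nothing about whether the
# entity presenting it is the person it represents. All names are illustrative.

def authenticate(request: dict, valid_tokens: set) -> bool:
    """Classic access control: is the credential itself valid?"""
    return request.get("token") in valid_tokens

def identity_use_authorized(actor: str, subject: str, grants: dict) -> bool:
    """The missing check: did the subject permit this actor to act as them?"""
    return actor in grants.get(subject, set())

valid_tokens = {"tok-123"}
grants = {"alice": {"alice"}}  # alice has authorized only herself

# A voice clone holding a stolen (but still valid) token:
request = {"token": "tok-123", "actor": "voice-clone-bot", "subject": "alice"}

print(authenticate(request, valid_tokens))          # True  -- access succeeds
print(identity_use_authorized(
    request["actor"], request["subject"], grants))  # False -- identity use fails
```

Both checks pass or fail independently, which is exactly the failure mode described above: the first succeeds while the second was never asked.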


The Shift from User to Likeness


AI systems no longer just access accounts; they act as people. They can:

  • Speak in a person’s voice using clones.

  • Generate video that appears to depict a real individual.

  • Mimic tone, style, and behavioral patterns in messaging.


At this stage, the system isn’t just verifying a user—it’s enabling the use of a person’s Likeness. Identity has shifted from “Who can log in?” to “Who is being represented?”—a question most current systems were never designed to answer.


Why Likeness Changes Everything


Unlike credentials (passwords or tokens), a likeness cannot be rotated, revoked, or reissued.


When AI replicates a face, voice, or personality, identity becomes:

  • Persistent & Portable: It can be used across any platform.

  • Detached from Human Presence: It operates autonomously without the person's involvement.


As discussed in [Synthetic Identity Is the New Attack Surface], AI-driven impersonation transforms identity from a one-time check at login into a resource that is continuously exploited.


The Identity "Break": Authenticated but Not Authentic

Identity failed gradually as AI invalidated core assumptions. Today, a deepfake video call can pass facial recognition, and a synthetic voice can bypass voice biometrics. Authentication succeeds—but authenticity fails. This is the "Identity Break."


The Missing Transition: From Access to Authorization


What broke identity was the gap between access and authorization. Traditional systems assume that if access is allowed, all identity use is acceptable.


AI demands a new rule: access does not automatically grant permission to use a person’s identity. As explored in [What Does It Mean to “Authorize” an Identity in the AI Era?], identity must evolve from being verified to being Governed.


This transition introduces:

  • Explicit Permission: Defining exactly what an AI can do with your likeness.

  • Enforcement Before Use: Preventing unauthorized representation before it happens.

  • Auditability: Tracking how and where your identity is being used.
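The three properties above can be sketched together in a minimal, hypothetical policy object. The class name, grant shape, and action strings are assumptions made for illustration; real governance systems would be far richer, but the structure of the checks is the same:

```python
# Hypothetical sketch of likeness governance: explicit grants, enforcement
# before use, and an audit trail. All identifiers here are illustrative.
from dataclasses import dataclass, field

@dataclass
class LikenessPolicy:
    # Explicit Permission: subject -> set of (actor, action) pairs they allow
    grants: dict = field(default_factory=dict)
    # Auditability: every attempted use is recorded, allowed or not
    audit_log: list = field(default_factory=list)

    def grant(self, subject: str, actor: str, action: str) -> None:
        self.grants.setdefault(subject, set()).add((actor, action))

    def request_use(self, subject: str, actor: str, action: str) -> bool:
        # Enforcement Before Use: the check runs before anything is rendered
        allowed = (actor, action) in self.grants.get(subject, set())
        self.audit_log.append((subject, actor, action, allowed))
        return allowed

policy = LikenessPolicy()
policy.grant("alice", "podcast-app", "voice_synthesis")

print(policy.request_use("alice", "podcast-app", "voice_synthesis"))  # True
print(policy.request_use("alice", "unknown-bot", "deepfake_video"))   # False
print(len(policy.audit_log))  # 2 -- both attempts recorded, including the denial
```

Note that the denied request still lands in the audit log: auditability covers how identity use was *attempted*, not only how it was permitted.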


Looking Ahead


The internet didn’t break identity overnight; AI simply exposed existing cracks. The next generation of systems won't just verify who can log in—they will govern who can be represented. That shift, from Login to Likeness, is where identity must go next.
