Friday, August 08, 2008, 10:28 PM

Identifiers are Negative Authenticators

I was just responding to a friend's question about using biometrics, and I realized that one good way of looking at certain class of identifiers is as negative authenticators...

In a separate blog entry ( http://blog.onghome.com/2003/12/problems-with-biometrics.htm ), I pontificate on why biometrics should not be used as authenticators.

At best, biometrics can be used as identifiers (negative authenticators) -- if you don't have the biometric, you'll not be authenticated. But just because you have the biometric does not mean that you're authenticated... something like how social security numbers should be treated.

Perhaps we can label this class of identifiers Private Identifiers ... identifiers that you should try to keep as private as possible, but should expect that some group of people would have them. Private identifiers (your social security number), compared to public identifiers (your name), are expected to be more confidential... But I guess it is a matter of degree of public-ness we are talking about. A 100% private identifier is a secret that is never shared -- and thus, is pretty useless for identifying an entity.
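To make the distinction concrete, here is a minimal sketch (all names, templates, and credentials are invented for illustration) of a biometric used only as an identifier: a match narrows down who you might be, while access still hinges on a separate credential.

```python
# Hypothetical sketch (names and templates invented): a biometric serves as
# an identifier -- a lookup key -- while authentication needs a separate proof.

enrolled = {"thumb-template-042": "alice"}      # biometric -> identity
secrets = {"alice": "s3cret-credential"}        # identity -> real credential

def identify(template):
    # Negative authenticator: no matching biometric, no access at all.
    # But a match alone only tells the system WHO this claims to be.
    return enrolled.get(template)

def authenticate(user, credential):
    # Actual authentication still rests on something only the user controls.
    return secrets.get(user) == credential

who = identify("thumb-template-042")
assert who == "alice"                           # identified...
assert authenticate(who, "s3cret-credential")   # ...and only now authenticated
```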

Saturday, May 10, 2008, 8:57 PM

Access Agents

Access agents, which are a form of personal directories, are required to solve multiple problems in digital identity. Access agents should perform the user-centric, end-point management of user-id/password pairs, personal private keys, OTP (one-time password) seeds, OpenID tokens, etc. -- all the credentials an end-user possesses (and is expected to manage). Access agents should follow end-users around to all the end-points where humans come into contact with cyberspace. (I like to think of end-points as the 4 P's -- PCs, PDAs, phones, and portals.)

There are multiple reasons for end-point access agents:

1. Simplification of the user's world
2. Migration to multi-factor authentication
3. Integration

But the bottom-line is control. Control for the end-user in that he/she can finally stop worrying about dozens of access codes. And with better control comes the possibility of increasing security. Which also results in control for the enterprise in better security and more auditability. (Yes, the access agent can act as big brother for the enterprise.)
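As a rough illustration of the idea (the class, method names, and services below are my invention, not any real product's API), an access agent is essentially a user-side vault that recalls the right credential for each end-point, so the human no longer has to:

```python
# Minimal, hypothetical sketch of an end-point access agent: one user-side
# store managing all credential types. All names here are assumptions.

class AccessAgent:
    def __init__(self):
        self._vault = {}   # service -> (credential_type, credential)

    def enroll(self, service, cred_type, credential):
        self._vault[service] = (cred_type, credential)

    def credential_for(self, service):
        # The agent, not the human, recalls the right credential -- which is
        # what lets users stop worrying about dozens of access codes.
        return self._vault.get(service)

agent = AccessAgent()
agent.enroll("mail.example.com", "password", "hunter2")
agent.enroll("vpn.example.com", "otp-seed", "JBSWY3DPEHPK3PXP")
assert agent.credential_for("vpn.example.com")[0] == "otp-seed"
```

An enterprise-controlled build of the same vault is also what enables the auditing ("big brother") role mentioned above.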

Dave Kearns has written a bunch on the need for personal directories. He sees most of the work on identity management, including OpenID and InfoCard, leading to a logical conclusion - the personal directory system.

Links to Dave's Articles
o May 2002, The need for a personal directory (http://www.networkworld.com/newsletters/dir/2002/01331333.html)
o January 2007, Someone else wants a personal directory! (http://vquill.com/labels/personal%20directory.html)

Monday, May 14, 2007, 3:42 AM

The Turing Event

A few (10~15) years from now, I will get a phone call from my friend's assistant suggesting that since we have not touched base in a while, we should meet up over dinner. I think it's a good idea, pull out my PDA/calendar, and start working out a meeting time and place with his assistant. In the course of our interaction, we joke about the kinds of food my friend detests and make casual chatter about the weather. After I hang up the phone, I would realize that I have no idea if I just talked to a human being or a machine.

Alan Turing proposed that the way we measure machine intelligence is by comparing an interaction with a machine to our interaction with humans. And if we can't tell them apart, then the machine can be labelled as "intelligent". (This test is known as the Turing Test.)

The first time in history when society can't tell the difference between machines and humans is what I refer to as the Turing Event.

Think about the impact of machines in a post Turing Event world... think seriously, because most of us will still be alive and kicking when we get there. How will economies be impacted? Which occupations will be considered "suitable" for humans, and which not? How much social unrest will there be?

Think about what identity would mean in that world. Do our assistants assume our identities, or do we give them their own? What are the questions we should be asking today that we're not asking?

I didn't write this article to give answers; just to ask questions.

What do you think?

P.S. Mitch Kapor has a bet with Ray Kurzweil that this will not happen by 2029.

Sunday, September 03, 2006, 11:35 PM

At the Core of Authentication

Authentication is the process of an entity proving its identity to a system, typically to get access to certain resources managed by the system.

The industry typically talks about authentication in terms of:
     o  what you know
     o  what you have, and,
     o  who you are
and, occasionally,
     o  how you do something
is also included.

In this article, I want to get to the real core operation of authentication, and make the case, again, for focusing on asymmetric key exchanges for strong authentication. If you look at what constitutes authentication, it is as simple as proof of identity based on information exchange.

.  "What you know" is, of course, information. However, "what you have", "who you are", and "how you do something" are also information in the following senses:

.  "What you have" is information stored in an object (e.g. a smart card), as opposed to your brain.

.  "Who you are" is information stored somewhere in/on your body (e.g. your thumb, your retina), as opposed to the neurons in your head.

.  "How you do something" is a reflection of a learned or innate pattern in your muscular system (e.g. your typing cadence). It is less direct, but authentication in this form is just the computer extracting your body's parameters for the action you are taking.

Conclusion #1: Authentication can be reduced to using "the information you have" to identify yourself to a system.

(BTW, "you" could be an entity other than a human.)


There are two fundamental ways you can use information to uniquely prove an entity's identity to a system:
     o  Shared secrets
     o  Asymmetric key exchange

The bulk of authentication systems use shared secrets. From passwords (shared between the system and your brain), to thumbprint readers (the system and your thumb), to most card key systems (the system and the access card). The biggest problem with shared secrets is that the identifying secret needs to be exchanged during the authentication process. This means that it is vulnerable to attacks that can sniff out the shared secret during the exchange.

The advantage of asymmetric key exchange (i.e. PKI) is that it is the only way we know to establish the identity of an entity (i.e. that the entity has a certain unique secret, a private key in this case) without the exchange of the secret. The identifying secret never has to be exposed by the entity (see Physicalization).
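A toy illustration of the difference, using textbook RSA over tiny primes (for illustration only -- real systems use vetted crypto libraries, large keys, and proper padding): the prover answers a random challenge using its private key, and the verifier checks the answer using only the public key, so the identifying secret never crosses the wire.

```python
import os

# Toy challenge-response with textbook RSA over tiny primes. Illustration
# only: real systems use vetted libraries, large keys, and padding schemes.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, kept by the prover

def respond(challenge):
    # Prover: transform the challenge with the PRIVATE key (a signature).
    return pow(challenge, d, n)

def check(challenge, response):
    # Verifier: undo the transform using only the PUBLIC key.
    return pow(response, e, n) == challenge

challenge = int.from_bytes(os.urandom(2), "big") % n   # verifier's random nonce
response = respond(challenge)
assert check(challenge, response)    # identity proven; d was never transmitted
```

Contrast this with a password exchange, where the secret itself must travel to the verifier and is therefore exposed to anyone who can observe the exchange.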

Therefore...

Conclusion #2: The most secure form of authentication has to utilize asymmetric key exchange.

Wednesday, August 30, 2006, 1:50 AM

Anonymity - A Binary Switch?

There's been a slew of postings on the topic of anonymity, so I thought I'd jot down a few of my thoughts too... and collect the links here.

Key Points:
  1. Norlin’s Maxim: Your personal data is shifting from private to public.
  2. What becomes public stays public.
  3. If the default for digital identities is anonymity, it will give the user more control.
  4. The default in most systems is not anonymity.
  5. Anonymity and strong identity should be orthogonal issues, and can be technically.
  6. Anonymity is not typically supported in most systems, so the stronger your identity, the less anonymous it is.

Binary Switch? Eric Norlin critiques Dave Weinberger: Eric believes that there is a spectrum of choices from anonymous, through a range of pseudonymity, to unanonymous identities. Eric asserts that "... online identity is *not* a binary issue." I wonder. If you believe in "Norlin's Maxim", then so long as there is some small piece of information that links a pseudonym to the user, sooner or later a pseudonymous identity becomes an unanonymous identity. I believe that anonymity is a binary decision. If your digital identity is not fully anonymous, then it is (or soon will be) unanonymous.
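The linkage argument fits in a few lines (all data below is invented): a single attribute shared between a pseudonymous record and a public one is enough to collapse the pseudonym.

```python
# Sketch of the linkage argument (all data invented): one attribute shared
# between a pseudonymous record and a public record collapses the pseudonym.

pseudonymous_posts = [{"pseudonym": "nightowl", "signup_email": "jdoe@example.com"}]
public_directory = [{"name": "John Doe", "email": "jdoe@example.com"}]

def deanonymize(posts, directory):
    by_email = {rec["email"]: rec["name"] for rec in directory}
    # Join on the one linking attribute; per Norlin's Maxim, sooner or
    # later such a link exists somewhere.
    return {p["pseudonym"]: by_email[p["signup_email"]]
            for p in posts if p["signup_email"] in by_email}

assert deanonymize(pseudonymous_posts, public_directory) == {"nightowl": "John Doe"}
```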

Resources:
  1. Ben Laurie, Anonymity is the Substrate (http://www.links.org/?p=123). August 24, 2006.
  2. Akma Adam, Plus Ça Change (http://akma.disseminary.org/archives/2006/08/plus_a_change.html). August 20, 2006.
  3. David Weinberger, Anonymity as the default, and why digital ID should be a solution, not a platform (http://www.hyperorg.com/blogger/mtarchive/anonymity_as_the_default_and_w.html). August 16, 2006.
  4. Dave Kearns, Yet more on anonymity (http://vquill.com/2006/08/yet-more-on-anonymity.html). August 15, 2006.
  5. Eric Norlin, Should the online world reflect the "real" world? (http://blogs.zdnet.com/digitalID/?p=61). August 15, 2006.
  6. Bavo De Ridder, Do you really think you are anonymous? (http://bderidder.wordpress.com/2006/08/15/do-you-really-think-you-are-anonymous/). August 15, 2006.
  7. Kim Cameron, Dave Kearns takes on anonymity (http://www.identityblog.com/?p=530). August 14, 2006.
  8. Dave Kearns, More on Privacy vs Anonymity (http://vquill.com/2006/08/more-on-privacy-vs-anonymity.html). August 14, 2006.
  9. Tom Maddox, Ben Laurie on Anonymity (http://blog.opinity.com/2006/08/ben_laurie_on_a.html). August 14, 2006.
  10. Dave Kearns, Anonymity, identity - and privacy (http://www.vquill.com/2006/08/anonymity-identity-and-privacy.html). August 14, 2006.
  11. Kim Cameron, Norlin’s Maxim (http://www.identityblog.com/?p=525). August 12, 2006.
  12. Willliam Beem, Security by Obscurity (http://william.beem.us/2006/08/security_by_obscurity.html). August 10, 2006.
  13. Eric Norlin, Anonymity and identity (http://blogs.zdnet.com/digitalID/?p=60). August 10, 2006.
  14. David Weinberger, Transparency and Shadows (http://www.strumpette.com/archives/162-Cluetrain-author-dispels-absolute-transparency-myth.html). August 8, 2006.
  15. P.T. Ong, Strong Identities Can Be Anonymous (http://blog.onghome.com/2005/03/strong-identities-can-be-anonymous.htm). March 11, 2005.
  16. P.T. Ong, Support for Anonymity (http://blog.onghome.com/2005/01/support-for-anonymity.htm). January 30, 2005.

Monday, August 28, 2006, 10:11 PM

OpenSSO Available

Noted. I was browsing through Pat Patterson's blog and noticed his posting on the release of OpenSSO. OpenSSO source code, released on August 17, 2006, is now available at https://opensso.dev.java.net/public/use/.

The cost of deploying backend-based SSO systems has traditionally not been in the cost of the software itself. Netegrity (now CA) and Oblix (now Oracle) both had technology similar to OpenSSO. The biggest challenge in rolling out these systems was that you had to integrate them with the backend servers, resulting in very slow deployment projects. It also meant that most companies couldn't really achieve Single Sign-On. Hence, the term Reduced Sign-On (RSO) was born.

I'm unclear as to how OpenSSO will affect the industry. What do you think?

Friday, August 04, 2006, 1:46 AM

Recent Articles of Interest

Noted. Haven't had much time to write my own thoughts ... so here are a few of the more interesting articles I've read over the last few months:

The identity silo paradox. Eric Norlin points out the reality that the organizations that have the large identity silos of internet users have very little business incentive to share that information -- i.e. to be identity providers. Bavo De Ridder responds in Is there an identity silo paradox?.

The Long View of Identity. Andy Oram gives a good overview of the major issues surrounding the issue of identity -- I tried to point out the key issues in a mushier way in Painting the Future: Panopticons and Choice.

Top 5 Identity Fallacies [#1] [#2] [#3] [#4] [#5]. Phil Becker writes eloquently about the misunderstandings of options we have when we build digital systems.

Credit Bureau as Identity Provider? Pete Rowley talks about credit bureaus as future identity providers. Similar to my thoughts about how credit card companies could serve a similar role.

Tuesday, May 16, 2006, 3:19 PM

Much Ado About Nothing?

Been busy. Six months without a post ... thought I'd better either shut the blog down, or start posting again. I decided in favor of the latter. And it just so happens that there is interesting stuff to post about...

"51% oppose NSA database" was USA Today's headline on Monday (at least it was on the copy I picked up in Hong Kong). Interesting. So I read through all the related articles.

The long and short of it is that the NSA has been collecting phone call records directly from most phone companies. Qwest, according to USA Today, was the only one that didn't release its customers' records. 51% of the 809 people USA Today polled were against the idea. (Not sure how representative that is -- I always like to know how a poll was conducted.) USA Today's editorial (written by Keith Simmons) agreed with the majority view.

I think we could get a little bit more practical about the problem, and move away from the privacy debate -- which typically degenerates to a religious debate based on one's normative beliefs on the relationship between the individual and society. Huh? :-) Right.

Why collect the data? To catch the bad guys, right?

Well, if you assume that the bad guys are stupid, they will register phones under their real names and use their personal credit cards to pay the bills. Everything traceable.

However, if the bad guys are a bit smarter, they would go out to the nearest Best Buy (Dixons if they're in the UK) and get a pre-paid phone, using cash... buy lots of pre-paid vouchers (again, with cash)... and voila! anonymous calling on a mobile phone. This might be a bit more expensive than regular phones, but a few bucks more on the phone bill is not a major consideration for these bad guys. And sure, if they are dumb enough to add credit to their phone with a personal credit card, or set up their phone through an ISP which can link the connection to them, then they might be hosed.

So, assuming a modicum of smarts in the bad guys, what is the reason for amassing personal phone records? I can't think of one. Can you?

Postscript: Here's one suggested by a friend: If you have a phone# linked to a well-known bad guy, the patterns of numbers the well-known phone calls might be useful information, even if there are anonymous phones involved. Well... serves them right for calling anonymous phones with well-known phones!

Wednesday, May 10, 2006, 2:27 AM

What Must Happen

The future of digital identity is set in the context of the evolution of digital systems. This article might be a bit off topic (in that it is not specifically about digital identity), but I think it's important for us to consider the bigger context of the evolution of digital systems.

WHAT MUST HAPPEN

When trying to figure out what technology to build, answering the question "what must happen" is a necessity. Not what would be good to happen, but what must happen...

Software that Runs Software: Software to date has been built for human use. But because of the sheer number of systems we are exposed to, the next generation of software needs to be software that runs software -- for humans. Agents, or meta-applications, if you will.

Dominant Systems Define Standards: All these attempts to define standards just result in a mishmash of "standards". Just about the only way to create widely adopted protocols is to create a dominant system, and then open it up. For example, Skype has a tremendous opportunity to set an industrial standard, if they open up fast enough and flexibly enough.

Sandboxes vs Always-On: (i.e. P2P vs Client/Server). Because the physical still matters, and ownership still matters, sandboxes are still needed, and will always be needed. Even if it is possible to be always on the network, the user might not choose to refer to a network resource, but rather, have a copy of it he/she manages. For example, instead of pointing to a web page on a website owned by someone else, the user might want a copy kept in his/her own blog or wiki -- just in case the owner changes it, or stops exporting it.

ASP systems (e.g. Salesforce.com) ultimately will reach full functionality only if they provide P2P facilities.

Synchronization Must Be Done Right: A corollary to the sandboxing trend is that synchronization as a science and engineering technique must be done right.

Lego My Servers: Servers are too complicated to set up and to run. Future servers will come in "Lego" building block format. Run out of disk space on your email server? Plug another email server "brick" next to your first, and the problem is solved. Want redundancy? Buy another two bricks, put them elsewhere, point them to the first pair, and you will have a hot-fail-over system. The bricks will be very specialized: email servers, web servers, directory servers, file servers, system admin servers, data servers, etc.

Of course strong security, including strong digital identity, is required in server bricks.

Evolutionary Revolutions: Respect Legacy. Systems that do not respect and work with legacy systems will fail (unless they perform a function that heretofore did not exist). That's why, also, the next generation of software will be meta-applications.


WHAT SHOULD HAPPEN (Normative Statements)

Here are a couple of things I believe should happen, but might not because short term commercial drivers might not be there to make them happen ...

Software for the Long Haul: All too often, we design software without thinking about the long haul. For example, 4-byte IP address space (which has long since run out of room) and 32-bit time integer in Unix (which will expire in 2038). See http://blog.onghome.com/2005/06/long-lived-software.htm.
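The Unix time example above is easy to check for yourself: a signed 32-bit counter of seconds since the 1970 epoch runs out in January 2038.

```python
import datetime

# The 2038 problem: Unix time kept in a signed 32-bit integer overflows
# at 2**31 - 1 seconds past the 1970 epoch.
limit = 2**31 - 1
rollover = datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```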

Basic Software Engineering: Professional software engineering means that we hold ourselves to the highest engineering standards. Basic issues like designing for testability, internationalization, code coverage, error handling, UI usability, etc. need to be part of what we do day-to-day in software engineering -- otherwise, we should just call it hacking.

[This article was initially written on December 2005.]

Sunday, October 09, 2005, 10:44 PM

If a Tree Falls ...

Johannes' post, "Phil Windley puts his finger on why defining 'Digital Identity' is hard", asserts that an identity is more than a set of claims.

If there is an entity, and there are no claims made about it, does it still have an identity?

If a tree falls in the forest, and no one hears it, does it make a sound?

Ah, semantics!

From a materialistic perspective, define "sound" and you've answered the second question. Define "identity" and you've answered the first.

This is why Dave and Timothy (and I, to some extent) are on a rant about ontology and semantics. If you don't get definitions right, it's hard to have lucid thoughts, let alone unambiguous communications.

"Do identical twins have different identities even if we can't tell them apart?" Define what you mean by "identity" and I'll answer your question.

We can't even answer basic questions about the "things" we are talking about because we don't have common definitions of them. Convinced yet about the importance of a well defined ontology for the digital identity community?

Wednesday, September 28, 2005, 12:10 PM

Identity or Persona?

I recently posted to the idworkshop list some thoughts on the terms Identity vs Persona. But I've just noticed a strong bias expressed by two bloggers whose opinions I respect: Timothy Grayson and Dave Kearns.

Both have been very clear in their statement that each person has exactly one identity in the following articles:
o Piling on: "The importance of [the word] identity"
o Piling on 2: "The Importance of Identity" Online and off
o Crying in the wilderness, again.

They both prefer the classical (philosophical) definition of identity -- identity is the thing that is you. So, by definition, one person can only have one identity. (BTW, Tim, I don't think your identity goes away when you die -- but perhaps that's not what you meant.) The other "identities" that people are talking about are actually personas.

While I agree with both Tim and Dave in their desire to be precise in discussions, I do think the train has left the station on how the word identity is understood. By popular usage, folks such as Phil Windley and Esther Dyson (as pointed out by Tim and Dave) use the term identity imprecisely to mean persona. Frankly, I think the term identity is so overused in both technical and pop culture that it has been rendered not-very-useful for technical discussions -- it might actually be a source of confusion. I would suggest, when we need more exact terms, we should use words with less cultural burden -- like persona; and, we need to find a word/phrase to refer to these unique things that are people (and objects) -- perhaps entity.


PS. I'm still swamped with work, so my postings will be haphazard, at best.



Date: Fri, 23 Sep 2005 14:36:59 -0700
To: idworkshop@googlegroups.com
From: "P.T. Ong" <p.t.ong@onghome.com>
Subject: Re: persona/identity

Strangely enough, I was just doing a systems design / object decomposition exercise last week, and decided to ditch "digital identity" and use "digital persona" instead; specifically because the phrase avoids the broader meanings of "identity" ... like "sense of self", "roots".

I think it's easier to understand "my persona for Acme Bank" than "my identity for Acme Bank". The term "persona" is less personal, so the user is more able to disassociate himself from the "persona" -- as it should be...

Getting more philosophical, I might never really know your true identity, but I can always use personas to point to the entity that is you.

Also, the discussion on anonymity gets easier. People can get confused when we talk about "anonymous identities" as the phrase is, superficially, a contradiction in terms -- "identity" might imply the lack of anonymity because it is tied closely with "sense of self". (http://blog.onghome.com/2005/03/strong-identities-can-be-anonymous.htm)

In the real world, words have associated meanings, connotations, emotional baggage, etc.; and it's confusing to the rest of the world (and to us too) when we technical folks try to use them in ways that conflict (or are in dissonance) with their commonplace uses.

pt

PS. I do realize that marketing-wise, it's too late to move from the use of "identity".


At 08:39 AM 9/23/2005, Dave Kearns wrote:
>From: "Luke Razzell" <luke@i-together.net>
>>
>> My dramatherapist girlfriend, Charla, pointed out to
>> me that "persona" is from the Greek for "mask":
>>
>
>That's where the usage came from. The "persona" in a Greek play
>represented the "role" that the actor was playing. Which, in today's
>usage (as opposed to, say, the arcane world of 1999) really confuses
>the issues of identity, persona and role.
>
>In fact, what we're calling "digital identity" used to be referred
>to as "digital persona"
>(http://www.networkworld.com/columnists/2000/1106kearns.html) (And I
>still have the outline of a book I wanted to write with that title.
>Until a biometrics company came along and took the name.)
>
>At the time, the few people involved in "digital identity"
>deliberately chose the term "digital persona" so as not to confuse
>people with the "I" word. From the discussion we've had here, it
>does seem that the confusion still rages. So I can heartily agree
>with Luke when he says:
>
>> In this way, deprecating "digital identity" in favour of the
>> synonymous "persona" helps to disambiguate the discussion:
>> we are left with comparisons of "personas" and "identities"
>> rather than the supremely confusing "digital identities" and
>>"identities"!
>>
>
>-dave


Update (Oct 1, 2005):
I forgot to cross-reference Luke Razzell's post on Persona and identity (http://www.i-together.net/weaverluke/2005/09/persona-and-identity.html).

Update (Oct 8, 2005):
Here are a few more follow-on posts on the topic:
o Timothy Grayson, The living language of identity
o Phil Windley, On the Word 'Identity'
o Johannes Ernst, Phil Windley puts his finger on why defining "Digital Identity" is hard

Friday, September 02, 2005, 12:49 PM

Stupid Users?!

Valerie Steeves has just posted an article about her observations at the World Summit on the Information Society meeting on cybersecurity. She expressed concern about how a certain European delegate said, "It's the stupid users. If we could just get them to use the technology properly, then we wouldn't have a problem."

I've been reading Tom Peters' recent book(let) on Design. When talking about technology (and every tool we use was at some point "technology"), we tend to blame the user when problems come up. In reality, most of these problems arise because the technology was not designed for the parameters of human capability.

For example, as I like to say, there is an impedance mismatch between digital security requirements and human brains. Specifically, human brains are not configured to remember and precisely reproduce many sequences of complex symbols -- so we should not be surprised when we discover that passwords (managed by humans) are one of the weakest links in computer security.
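A back-of-envelope calculation makes the mismatch concrete: the entropy of a password a human can plausibly memorize falls far short of the key sizes machines routinely exchange.

```python
import math

# Back-of-envelope entropy comparison for the impedance mismatch between
# human memory and digital security requirements.

def password_bits(charset_size, length):
    # Entropy in bits of a uniformly random password drawn from the charset.
    return length * math.log2(charset_size)

human = password_bits(26, 8)   # 8 random lowercase letters: ~37.6 bits
machine_key = 128              # a modest symmetric key, trivial for machines
assert human < machine_key / 3
```

(And real human-chosen passwords are far from uniformly random, so their effective entropy is lower still.)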

Valerie went on to talk about how people use the need for security as a way to justify compromising privacy of end-users. I agree. It is all too tempting to "solve" problems using brute force.

Thursday, August 25, 2005, 12:06 AM

Humans as Smart Cards

Valery pointed to a great quote in Network Security – Private Communication in a Public World by Kaufman, Perlman and Speciner (Prentice Hall, 1995, ISBN 0-13-061466-1):
Humans are incapable of securely storing high-quality cryptographic keys, and they have unacceptable speed and accuracy when performing cryptographic operations. (They are also large, expensive to maintain, difficult to manage, and they pollute the environment. It is astonishing that these devices continue to be manufactured and deployed. But they are sufficiently pervasive that we must design our protocols around their limitations.)
The way I talk about it is that there is an impedance mismatch between the human brain and digital security requirements.

Tuesday, August 09, 2005, 8:37 PM

Identity and Privacy in Security

As I reread my post on the problems with RFID passports (http://blog.onghome.com/2005/04/sanity-around-rfid-passports.htm), it occurred to me that there is a more fundamental observation that needs to be made here...

When designing security systems based on strong authentication and identities, privacy is an important dimension to consider. The US State Department thought we could have better security by introducing strong(er) digital identities in passports via RFID tags. They forgot (or didn't realize) that without privacy considerations, the strong identity could be used, perhaps lethally, against the identity owner.

This reinforces my belief in the importance of privacy (and the works of individuals like Stefan Brands) to ensure the digital identity systems we build are actually usable.

Sunday, July 10, 2005, 9:39 PM

InfoCard is Not the Identity Metasystem

Noted. Just been catching up on the chatter on InfoCard.

Most notable is the point that Johannes Ernst, Doc Searls and Dave Kearns are making that Microsoft's InfoCard is not the identity metasystem. At best, it is a component of the metasystem.

o Johannes Ernst, More on the relationship between InfoCard and the Identity Metasystem.
o Doc Searls, Distinguishing between the Identity Metasystem and InfoCard.
o Dave Kearns, Identity metamagic.
o Johannes Ernst, What might an "Identity Meta-System" be?.
o Doc Searls, Some questions about the Identity Metasystem.

See Also
o P.T. Ong, More on InfoCards.