Some collective feedback from UW-Madison on the SirTfi
draft:
We responded to this draft positioning ourselves primarily in
the role of an IdP. Some of the terminology was not easy to
translate into our world: for example, DITI was a
challengingly abstract term, and the glossary did not include
a definition of 'claims processor,' which came up in a couple
of sections of the draft.
We looked over each of the six operational areas:
- Operational Security
The draft suggests a four-level self-rating for how well
an organization meets each specific requirement within the six
operational areas. In our IdP and its supporting environment,
we have different segments of users that fall under different
procedures, and each segment might earn a different rating on
requirements OS1 through OS4. More loosely affiliated users
might rate a 0 on OS1, the general NetID population might be
at 1, and only specifically identified sub-populations might
reach level 2, depending on the identity proofing performed
and the credential or credentials issued. In short, it would
not be possible for us to give a single meaningful rating for
the operational security requirements. It seems it will be
necessary to specify, for a given user and a given session, a
level of assurance that implies one of the four levels.
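To make that concrete, here is a minimal sketch in Python of
what per-session assurance signaling might look like. The
segment names, numeric levels, and context URIs below are our
own hypothetical examples, not anything defined in the draft:

    # Hypothetical: derive a per-session assurance level from the
    # user's segment and the credential actually used, instead of
    # one IdP-wide rating. Names and values are illustrative only.

    SEGMENT_CEILING = {
        "loosely-affiliated": 0,   # e.g. OS1 rating of 0
        "netid": 1,                # general NetID population
        "id-proofed": 2,           # identified sub-populations
    }

    # Assurance level -> authentication context URI asserted to
    # the SP (URIs are made up for illustration).
    LEVEL_TO_CONTEXT = {
        0: "urn:example:assurance:level0",
        1: "urn:example:assurance:level1",
        2: "urn:example:assurance:level2",
    }

    def session_assurance(segment, credential_level):
        # The session can be no stronger than the weaker of the
        # segment's ceiling and the credential used this session.
        level = min(SEGMENT_CEILING.get(segment, 0), credential_level)
        return LEVEL_TO_CONTEXT[level]

    # An ID-proofed user logging in with an ordinary password
    # credential is still asserted at level 1 for this session:
    print(session_assurance("id-proofed", 1))
    # -> urn:example:assurance:level1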
On requirement OS2, concerning patching, the phrase
'verified, recorded and communicated to the appropriate
contacts' seems to set a high bar.
- Incident Response
Our roadmap includes support for authentication via
external providers (Google, Twitter, etc.) through a
campus-level gateway. If such externally authenticated users
access federated services, security incident response raises
any number of challenges. Again, at a minimum, SPs will need
authentication context information that allows them to decide
whether or not to accept a given provider's assertion.
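As a hedged sketch of the SP-side decision we have in mind
(the context URIs and acceptance policy are hypothetical, not
drawn from the draft):

    # Hypothetical: an SP filters incoming assertions on the
    # authentication context the campus gateway asserts, so it
    # can decline externally authenticated sessions.

    ACCEPTED_CONTEXTS = {
        "urn:example:ac:campus-netid",  # local NetID credential
        "urn:example:ac:campus-mfa",    # NetID plus second factor
    }

    def accept_assertion(authn_context):
        return authn_context in ACCEPTED_CONTEXTS

    # Social-gateway sessions carry a different context and are
    # rejected by this particular SP's policy:
    assert accept_assertion("urn:example:ac:campus-mfa")
    assert not accept_assertion("urn:example:ac:social-gateway")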
One approach under consideration at UW-Madison is to carry
stub identity records for federated users (including those of
external AuthN providers). If we become aware of a compromised
credential of this kind, we can't 'turn it off' at the source,
but we can break the link to the internal user record, giving
us a means to head off those users' access to SPs.
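A minimal sketch of the stub-record idea (the field names and
flow are hypothetical):

    # Hypothetical: stub identity records for externally
    # authenticated users. We can't disable a Google or Twitter
    # credential at its source, but severing the stub's link to
    # the internal record stops the gateway from asserting
    # anything to SPs for that user.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StubIdentity:
        external_id: str              # id from external provider
        provider: str                 # e.g. "google", "twitter"
        internal_record: Optional[str] = None  # campus identity link

        def break_link(self):
            # Invoked when the external credential is compromised.
            self.internal_record = None

    def assert_to_sp(stub):
        # The gateway refuses to assert for an unlinked stub.
        return stub.internal_record is not None

    stub = StubIdentity("ext-user-42", "google", "uw123456")
    assert assert_to_sp(stub)
    stub.break_link()              # compromise reported
    assert not assert_to_sp(stub)  # SP access is headed off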
- Traceability
TR1: Who is in a position to release what? We can't get
logs from Google and Facebook. The language in this draft is
fairly general; it seems it will eventually need at least to
spell out a minimum retention period. The locale at which user
registration took place could also be an important item of
information (see the sketch following TR3 below).
TR3: On identifying users: the bare logs won't suffice,
since AD GUIDs won't do the log recipient any good. Going
beyond that, by resolving those identifiers to actual people,
imposes a significant workload on the IdP side.
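A hedged sketch of the extra resolution step TR3 seems to
imply (the log fields, identifiers, and mapping are all made
up for illustration):

    # Hypothetical: a log recipient sees only opaque AD GUIDs;
    # someone on the IdP side must resolve them against internal
    # identity stores the recipient does not have. That lookup
    # is where the extra workload comes in.

    RAW_LOG = [
        {"ts": "2014-10-01T12:00:05Z",
         "guid": "9f1c0d2e-0000-4abc-8000-0123456789a2",
         "sp": "sp.example.org"},
    ]

    GUID_TO_PERSON = {
        "9f1c0d2e-0000-4abc-8000-0123456789a2":
            {"netid": "bbadger", "reg_locale": "Madison, WI"},
    }

    def enrich(entry):
        # Attach the person (and, per TR1, registration locale)
        # so the recipient gets something actionable.
        person = GUID_TO_PERSON.get(entry["guid"], {})
        return {**entry, **person}

    print(enrich(RAW_LOG[0]))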
- Participant Responsibilities
PRU1: We have an AUP for NetID holders, but with social
credentials, how do we catch users and deliver the AUP? (One
possible gateway-level approach is sketched after this
section.)
PRC3: Our legal staff come down firmly against any
implication that we are liable for the behavior of individual
users. SPs, of course, retain the right to discontinue access
for an individual or group to protect the integrity of the
service for others.
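On PRU1, a hedged sketch of one way the campus gateway could
'catch' social-credential users for AUP delivery (the flow and
storage are hypothetical):

    # Hypothetical: the gateway interposes a click-through AUP
    # acceptance step before releasing any assertion for an
    # externally authenticated user.

    AUP_VERSION = "2014-10"
    accepted = {}   # external user id -> AUP version accepted

    def gateway_login(external_id):
        if accepted.get(external_id) != AUP_VERSION:
            return "redirect: show AUP, require acceptance"
        return "proceed: issue assertion to SP"

    def record_acceptance(external_id):
        accepted[external_id] = AUP_VERSION

    print(gateway_login("google:ext-user-42"))  # first visit
    record_acceptance("google:ext-user-42")
    print(gateway_login("google:ext-user-42"))  # now proceeds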
- Legal/Management Issues
The statement leading into the individual requirements sets
a high bar with the language '...policies and procedures
appropriately communicated to all participants, that address
legal issues including but not limited to...'
LI2: The focus on making participants aware of their
obligations is a sound one (compare with our comments on PRC3
above).
- Protection and processing of Personal Data/Personally
Identifiable Information
Lots of jurisdictional sticky wickets in this area.
In general we recognize the significant challenge of
establishing a trust framework that can function in an
international context. But we found ourselves wondering
whether there are ways to leverage well-defined existing
frameworks such as PCI, FISMA, and others, granted that this
would likely mean different reference points for the US, the
EU, etc. The reference to the Traffic Light Protocol in the
Incident Response section of this draft is one example of this
kind of approach.
-- Tom Jordan (IAM), Jeff Savoy (security), Keith
Hazelton (architecture)