

Subject: Per-Entity Metadata Working Group


Re: [Per-Entity] Some thoughts about availability and scalability


  • From: David Walker <>
  • To: <>
  • Subject: Re: [Per-Entity] Some thoughts about availability and scalability
  • Date: Tue, 2 Aug 2016 15:15:00 -0700

Good discussion today.

I've added "Requirements for availability and scalability" to tomorrow's agenda after we finish the discussion of risks.  We can at least start the discussion; I'm sure it'll be more than a single call's topic.

FYI, I've also added some thoughts I've had on risks.  I'd like to finish tomorrow's call with a reasonably complete inventory of the risks, even the ones we don't think require additional mitigation, so please add your thoughts, as well.

Looking forward to tomorrow's discussion.

David


On 08/02/2016 10:45 AM, Chris Phillips wrote:
Tom:

I see my faux examples have highlighted the no-'requirements' problem.
It may be better to phrase things as 'recommendations on what to do when
writing an MDQ client'. :)

If I go back to this WG's charter, calling out requirements is going to
be critical to hit 3a), b), c) and item 4 explicitly. One could interpret
items 1 and 4, while referring to InCommon, as essentially applying to
'any federation operator'.

Sounds like there need to be requirements for the different roles
called out in the charter, too.


To Scott's points:

+1 to alluding to DNS. The techniques to resolve, cache, expire, refer,
forward-reference, and in general 'be resilient' are all evident there. I
think we are of the same mind: borrow as much of the technique and
architecture as makes sense instead of creating anew. There's a reason
there are DNS libraries on machines instead of inside the software that
runs on them.

I do stop short of saying:
 'Stuff the signed per-entity record in a DNS TXT entry and let DNS ship
it'
Or
'Put it as a TXT entry in a DNSSEC-signed domain
(some-serviceprovider.com.incommon.org) and then rely on DNS to ship
the metadata around, letting existing libraries extract things.'

This would likely just shift the whole size issue from a metadata
aggregate to 'my DNS server is mysteriously 25 MB larger now and behaves
differently', making it harder to diagnose what the heck is going on.
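For contrast with the DNS idea above, the plain HTTP retrieval pattern the MDQ protocol uses can be sketched as follows. The base URL and entityID here are hypothetical, not real InCommon endpoints; the sketch assumes the common MDQ convention of requesting a single percent-encoded entityID.

```python
# Sketch of HTTP-based per-entity (MDQ-style) retrieval, as opposed to
# shipping metadata via DNS TXT records. MDQ_BASE is a hypothetical
# endpoint, not a real federation service.
import urllib.parse
import urllib.request

MDQ_BASE = "https://mdq.example.org/entities/"  # hypothetical MDQ server

def mdq_url(entity_id: str) -> str:
    """Build the per-entity request URL: the entityID is the identifier
    and must be fully percent-encoded, including ':' and '/'."""
    return MDQ_BASE + urllib.parse.quote(entity_id, safe="")

def fetch_entity_metadata(entity_id: str) -> bytes:
    """GET the signed EntityDescriptor for one entity."""
    req = urllib.request.Request(
        mdq_url(entity_id),
        headers={"Accept": "application/samlmetadata+xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

print(mdq_url("https://sp.example.edu/shibboleth"))
```

A client resolves exactly one entity per request, which is what keeps the per-request payload small relative to a full aggregate.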

To Michael on workaround vs redundancy by design:

I think what we are gravitating to is that good design IS redundancy by
design, but with a pragmatic eye to the effort/value trade-off.
SAMLbits will only work as well as the underlying origin data
(md.incommon.org, caf-shib2ops.ca, etc.) for aggregates being 'available',
and it smooths things over for a period of time by acting as a caching
layer that affords less traffic to the origin of truth about who is a
valid member. That cache layer will also be 'wrong' for a little while as
the data becomes eventually consistent. A cache in Brazil will have an
aged item that differs from what a North American one has -- and is it
acceptable to get two different signature-validated answers? In short,
yes, but I think the recommendations document needs a section about the
consequences or risks of a given architecture, and this is one aspect.
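The staleness window described above can be illustrated with a toy TTL cache: two regional caches refreshed at different times can both hold signature-valid answers that nonetheless differ until their entries expire. All names and values here are illustrative, not from any real deployment.

```python
# Toy caching layer illustrating eventual consistency between regional
# caches and an origin of truth. Purely illustrative.
import time

class MetadataCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # entity_id -> (fetched_at, metadata)

    def get(self, entity_id, fetch_from_origin):
        """Return cached metadata if still fresh; otherwise refetch.
        Between refreshes this cache may lag behind the origin."""
        now = time.monotonic()
        hit = self._store.get(entity_id)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # possibly stale, but still trusted
        fresh = fetch_from_origin(entity_id)
        self._store[entity_id] = (now, fresh)
        return fresh

# Two regional caches, same origin: after the origin changes, each cache
# keeps serving its old copy until its own TTL runs out.
origin = {"https://sp.example.edu": "v1"}
brazil = MetadataCache(ttl_seconds=3600)
north_america = MetadataCache(ttl_seconds=3600)

brazil.get("https://sp.example.edu", origin.get)   # Brazil caches "v1"
origin["https://sp.example.edu"] = "v2"            # origin updates
print(north_america.get("https://sp.example.edu", origin.get))  # "v2"
print(brazil.get("https://sp.example.edu", origin.get))         # still "v1"
```

Both answers validate against the signature they were published with; the question raised in the thread is whether that divergence window is acceptable, and for how long.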

I also think the model that Roland H. exemplified in writing the tests for
the reference function of an OIDC client would be valuable here. Once the
'requirements' are written, an MDQ test suite (service?) to validate
against will provide a self-verifying way to show that MDQ clients (the
IdP/SP and anything else) are provably doing the optimal MDQ behaviour.
This will be invaluable for offloading effort, and it will enable others
like Microsoft, Oracle, etc. to test theirs too and gain a pass/fail state.


C


On 2016-08-02, 12:30 PM, 
 wrote:

On Tue, Aug 2, 2016 at 11:03 AM, Chris Phillips
 wrote:
Can someone weigh in on the qualities REQUIRED of an MDQ consumer and
the place on the wiki they exist?
E.g. 'Consumer MUST be able to retrieve MDQ ack/nack of a record in X ms'
AFAIK, no one has voiced such a requirement. In any case, I'm not sure
how valuable such a requirement would be. It's unrealistic to think a
server will never fail. The question is: what happens when it *does*
fail.

'Consumer MUST respect validUntil timestamp and cache MDQ response until
then'
No, you're confusing validUntil with cacheDuration, but in any case
this is purely a client-side issue. Both Shibboleth and SSP respect
these XML attributes, AFAIK. Other clients may be lacking; I don't
know.
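The distinction drawn here can be sketched in client pseudologic: cacheDuration is a relative refresh hint, while validUntil is an absolute expiry after which the metadata must not be used. The tiny duration parser below handles only simple forms like "PT24H" or "P1D" and is illustrative, not a full ISO 8601 implementation; the function names are hypothetical.

```python
# Sketch of client-side handling of cacheDuration (relative refresh
# hint) vs. validUntil (absolute expiry). Minimal duration parser;
# illustrative only.
import re
from datetime import datetime, timedelta, timezone

def parse_duration(value: str) -> timedelta:
    """Parse simple ISO 8601 durations such as 'PT24H' or 'P1D'."""
    m = re.fullmatch(
        r"P(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?", value)
    if not m:
        raise ValueError(f"unsupported duration: {value}")
    days, hours, minutes, seconds = (int(g or 0) for g in m.groups())
    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

def next_refresh(fetched_at, cache_duration=None, valid_until=None):
    """When should a client refetch? No later than validUntil, and after
    cacheDuration if one was given; fall back to 24h if neither exists."""
    candidates = []
    if cache_duration:
        candidates.append(fetched_at + parse_duration(cache_duration))
    if valid_until:
        candidates.append(valid_until)
    if not candidates:
        candidates.append(fetched_at + timedelta(hours=24))
    return min(candidates)

fetched = datetime(2016, 8, 2, 12, 0, tzinfo=timezone.utc)
print(next_refresh(fetched, cache_duration="PT24H",
                   valid_until=datetime(2016, 8, 16, tzinfo=timezone.utc)))
```

In XML attribute terms, the 24-hour value floated later in the thread would appear as cacheDuration="PT24H" on the published element, while validUntil carries an absolute timestamp.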

Btw, InCommon metadata has never had an explicit cacheDuration value.
Should the MDQ server include one? A realistic value would be 24
hours. Not sure if this is useful or not.

Federation operators around the globe operate this now, and MDQ 'answers'
are not very different from serving up a metadata aggregate at all -- are
they?
I claim they are definitely different (but at least Scott thinks
otherwise).

Tom

    

Attachment: signature.asc
Description: OpenPGP digital signature



