
Subject: InCommon metadata support


Re: [Metadata-Support] Extending Metadata Query Protocol


  • From: Ian Young <>
  • To:
  • Subject: Re: [Metadata-Support] Extending Metadata Query Protocol
  • Date: Mon, 23 Mar 2015 15:53:05 +0000


> On 18 Mar 2015, at 22:37, Tom Scavo
> <>
> wrote:
>
> On Wed, Mar 18, 2015 at 9:25 AM, Jaime Perez Crespo
> <>
> wrote:
>> I understand the first one could be easily disregarded by using the MDX as
>> a standard metadata feed, that is, fetching the whole metadata set it
>> serves, processing, caching, and then proceeding onwards by leveraging the
>> second one.
>
> That was my first thought as well. Before you brought it up, I've
> always thought an implementation would fetch the entire aggregate
> while booting up and then use the MDQ protocol to keep up to date.

As Scott points out, there isn't much point to this approach. It may reduce
the amount of data you're transferring from the server to the client, but it
does nothing to address the storage requirements at the client. Although
moving less data around is always good, the thing we're running into
problems with today is in-memory storage, and reducing that is one of the big
wins of per-entity metadata. An IdP that fully starts up before you can type
"tail -f" is a joy to behold.

The fundamental idea of the query protocol is that a client entity only needs
to retrieve and store metadata for entities it interacts with, and that the
basic model is an on-demand one. An end entity would be expected to start up
ignorant of all metadata and fetch only what it needs, when it needs it.
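Concretely, an on-demand lookup is a single HTTP GET per entity. A minimal sketch of request construction, assuming the draft's `/entities/{identifier}` pattern with a percent-encoded entityID (the function name and example URLs are illustrative, not from any implementation):

```python
from urllib.parse import quote

def mdq_request_url(base_url: str, entity_id: str) -> str:
    """Build an MDQ lookup URL: GET {base}/entities/{percent-encoded id}.

    The entityID is encoded with no characters treated as "safe", so the
    ':' and '/' inside the identifier don't collide with URL syntax.
    """
    return f"{base_url.rstrip('/')}/entities/{quote(entity_id, safe='')}"

# A client would issue this request only when it first needs the entity's
# metadata, typically with an Accept header of application/samlmetadata+xml.
url = mdq_request_url("https://mdq.example.org", "https://idp.example.org/idp")
print(url)
# https://mdq.example.org/entities/https%3A%2F%2Fidp.example.org%2Fidp
```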

One client-side improvement one might consider would be to re-fetch popular
entities before they fall out of the cache, as I described in the other mail.
Another would be to retain a list of "frequently used entities" between runs
and prefetch those at startup in a background thread.

Neither of these changes the fundamental model, which is essentially that of a
DNS query from a stub resolver.

The MDQ protocol as currently drafted is intended for use between such a
"stub resolver" analogue in a client entity and a relatively local fast
oracle taking the same role as something like a campus DNS server. Where the
local oracle gets metadata from is out of scope for the MDQ protocol
specification. It probably isn't MDQ, though. In the current environment,
it's probably pulling aggregates from elsewhere and adding some local
information, which you could analogise to a combination of DNS zone transfers
and locally authoritative zones.
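That division of labour could be sketched as an oracle that indexes pulled aggregates by entityID and layers local entries on top (a toy model only; a real oracle would also verify signatures, filter, and check validity windows):

```python
def build_oracle_index(aggregates, local_entries):
    """Merge aggregate metadata into one entityID -> metadata index.

    Local entries win over aggregate entries, analogous to a locally
    authoritative DNS zone taking precedence over transferred zone data.
    """
    index = {}
    for aggregate in aggregates:      # each aggregate: entityID -> metadata
        index.update(aggregate)
    index.update(local_entries)       # local information takes precedence
    return index
```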

Longer term, we probably do need an additional protocol to describe how
actors other than end entities exchange metadata for later presentation to
clients. That's probably not MDQ, which is why I was so emphatic about not
calling the query protocol MDX. That's the place in the model where
incremental updates and the like make most sense, because there you probably
*are* legitimately dealing with the movement of large amounts of data.

-- Ian




Attachment: smime.p7s
Description: S/MIME cryptographic signature



