Subject: Per-Entity Metadata Working Group
- From: Nick Roy <>
- To: Scott Koranda <>
- Cc: Patrick Radtke <>, "Cantor, Scott" <>, Rhys Smith <>, Thomas Scavo <>, Per-Entity Metadata Working Group <>
- Subject: Re: [Per-Entity] distribution of aggregate metadata
- Date: Thu, 11 Aug 2016 16:51:39 +0000
- Accept-language: en-US
And if so, could we do that in a way that still leverages the geo IP
capabilities of the multiple CDNs? Straight DNS round-robin will be
problematic in a number of ways:
1) No easy way to detect down CDN nodes and take them out of the mix
(this could probably be scripted, but the script itself would be a
significant investment and possibly a point of failure)
2) Might not be compatible with geo IP (I don’t know - does anyone here know?)
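For point 1, a health-check script of the kind described might look roughly like the sketch below. This is a hypothetical illustration, not anything deployed: the hostnames and health path are made up, and a real setup would also need to push the surviving list into the DNS zone via the provider's API, which is where much of the investment (and the failure risk) would actually live.

```python
# Hypothetical sketch: probe each candidate CDN endpoint and keep only
# the ones that answer, yielding the set to publish in a DNS round-robin.
# Hostnames and the /metadata/health path are illustrative assumptions.
from urllib.request import Request, urlopen
from urllib.error import URLError


def is_up(host, path="/metadata/health", timeout=3):
    """Return True if the endpoint answers an HTTP HEAD with a 2xx/3xx status."""
    try:
        req = Request(f"https://{host}{path}", method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False


def healthy_endpoints(hosts, probe=is_up):
    """Filter a candidate host list down to the hosts that pass the probe."""
    return [h for h in hosts if probe(h)]


if __name__ == "__main__":
    cdns = ["cdn1.example.org", "cdn2.example.org"]  # hypothetical names
    print(healthy_endpoints(cdns))
```

The injectable `probe` argument is there so the filtering logic can be exercised without live endpoints; it doesn't address the harder operational questions (who runs the prober, and what happens when the prober itself is down).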
Thanks,
Nick
> On Aug 11, 2016, at 10:47 AM, Scott Koranda
> <>
> wrote:
>
>> On Wed, Aug 10, 2016 at 5:12 PM, Cantor, Scott
>> <>
>> wrote:
>>
>>> Of course, usually one extended outage is much, much better than
>>> constant short ones. I'm more interested in monthly or weekly than annual.
>>
>> CloudHarmony monitors most providers and gives a dashboard for the
>> last week or month.
>> Here is the CDN dashboard: https://cloudharmony.com/status-of-cdn
>>
>
> Thanks Patrick.
>
> Naive question: could an organization like InCommon publish
> across multiple, different CDN providers in order to decrease
> still further the likelihood of going completely dark?
>
> Scott K
>
>