
metadata-support - RE: [Metadata-Support] Shibboleth cannot download InCommon metadata XML consistently



  • From: Brent Wygant <>
  • To: "" <>
  • Cc: Stephan Fix <>
  • Subject: RE: [Metadata-Support] Shibboleth cannot download InCommon metadata XML consistently
  • Date: Fri, 5 Feb 2016 21:42:56 +0000
  • Accept-language: en-US

Thanks for the input, Tom!

We've actually begun implementing a different config change suggested by
another user, adding the following inside the MetadataProvider tag:
<TransportOption provider="CURL" option="13">120</TransportOption>
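For context, CURL option 13 is CURLOPT_TIMEOUT, libcurl's total-transfer timeout in seconds. A minimal sketch of where the element sits in shibboleth2.xml (the URI is real; the reloadInterval and backingFilePath values are illustrative, not taken from our actual config):

```xml
<MetadataProvider type="XML"
     uri="http://md.incommon.org/InCommon/InCommon-metadata.xml"
     backingFilePath="InCommon-metadata.xml"
     reloadInterval="7200">
    <!-- CURL option 13 = CURLOPT_TIMEOUT: abort the fetch if the
         whole transfer takes longer than 120 seconds -->
    <TransportOption provider="CURL" option="13">120</TransportOption>
</MetadataProvider>
```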

This seems to be working as expected in our Stage environment; we've seen
zero connection errors since it was put in place. It's good to know the
metadata will nearly double in size. Hopefully the two-minute timeout
for cURL will still accommodate the larger file.

Regards,
Brent

-----Original Message-----
From: [mailto:] On Behalf Of Tom Scavo
Sent: Friday, February 05, 2016 4:28 PM
To:
Cc: Stephan Fix
Subject: Re: [Metadata-Support] Shibboleth cannot download InCommon metadata
XML consistently

Hi Brent,

On Wed, Feb 3, 2016 at 4:28 PM, Brent Wygant
<>
wrote:
>
> We are running Shibboleth SP version 2.4.3. The application is configured
> to download (cURL) the IdP metadata file from the URL
> http://md.incommon.org/InCommon/InCommon-metadata.xml. This is where the
> problem comes in. Once or twice per day, we see the following error in the
> logs that indicates a problem with connecting a socket...

As Scott already concluded, that's a network issue. Let me ask, though: is
there a temporal pattern to the failures? That is, does it fail at the same
time each day, at different times, or only on certain days of the week?
The reason I ask: the file changes at roughly the same time every business
day, so if you're seeing systematic failures at a fixed time each day, that
might be something for me to look at more closely.
(The problem is not likely on my end, but I can't rule it out.)

There's not much to do except to configure your SP to automatically retry
after failure (as Alex mentioned). FWIW, here's a link to our Shib config
page: https://spaces.internet2.edu/x/XAQjAQ
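One standard way to make a failed refresh survivable is a backing file: the SP writes each successful download to local disk and falls back on that copy if a later fetch fails, so an intermittent network error doesn't leave you without metadata. A minimal sketch for SP 2.x (attribute values are illustrative; the path is an assumption, not from the wiki page above):

```xml
<MetadataProvider type="XML"
     uri="http://md.incommon.org/InCommon/InCommon-metadata.xml"
     backingFilePath="/var/cache/shibboleth/InCommon-metadata.xml"
     reloadInterval="3600">
    <!-- On a failed download, the SP keeps serving the metadata
         cached in backingFilePath until the next successful refresh -->
</MetadataProvider>
```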

If you find some other configuration that works better, I'd love to hear
about it.

Now here's the bad news. On Feb 15, the size of the aggregate will grow from
~17MB to ~32MB, so if you're having intermittent failures now, it's only
going to get worse. All I can say is: configure your SP properly and keep an
eye on it. If you observe a pattern, let us know.

Hope this helps,

Tom



Archive powered by MHonArc 2.6.16.
