Re: [Per-Entity] Latency figures for CDNs


  • From: Tom Scavo <>
  • To: Patrick Radtke <>
  • Cc: Per-Entity Metadata Working Group <>
  • Subject: Re: [Per-Entity] Latency figures for CDNs
  • Date: Tue, 6 Sep 2016 13:44:15 -0400

On Tue, Sep 6, 2016 at 1:02 PM, Patrick Radtke <> wrote:
> On Tue, Sep 6, 2016 at 8:15 AM, Tom Scavo <> wrote:
>>> 175-200 ms from home.
>>
>> I'd be interested to know how much of that is DNS resolution, how much
>> is connection time, and how much is actually due to data preparation
>> (as opposed to data transfer).
>
> Here is the data from a couple of runs, using these curl timing output variables:
> %{time_namelookup}:%{time_connect}:%{time_pretransfer}:%{time_starttransfer}:%{time_total}
>
> 0.004:0.089:0.089:0.300:0.302
> 0.005:0.091:0.091:0.181:0.182
> $ Flush dns cache
> 0.132:0.216:0.216:0.301:0.304
> 0.005:0.088:0.088:0.174:0.175
> $ Flush dns cache
> 0.135:0.219:0.219:0.305:0.307
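
(For context, timing output in that format comes from curl's -w/--write-out option; a command along the following lines would reproduce it. The target URL below is only a placeholder, since the actual endpoint being tested is not shown in this thread.)

    # -s silences the progress meter, -o /dev/null discards the body,
    # -w prints the selected timing variables after the transfer completes
    curl -s -o /dev/null \
      -w '%{time_namelookup}:%{time_connect}:%{time_pretransfer}:%{time_starttransfer}:%{time_total}\n' \
      https://mdq.example.org/entities/sample-entity-id   # placeholder URL, not from the thread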

Okay, testing my theory...the diff (time_starttransfer -
time_pretransfer) in each case is:

0.211
0.090
0.085
0.086
0.086
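
(Those differences can be reproduced from the raw timing lines with a one-liner along these lines, assuming the field order matches the format string quoted above: field 3 is time_pretransfer and field 4 is time_starttransfer.)

    # feed the five quoted timing lines into awk and subtract
    # time_pretransfer (field 3) from time_starttransfer (field 4)
    printf '%s\n' \
      0.004:0.089:0.089:0.300:0.302 \
      0.005:0.091:0.091:0.181:0.182 \
      0.132:0.216:0.216:0.301:0.304 \
      0.005:0.088:0.088:0.174:0.175 \
      0.135:0.219:0.219:0.305:0.307 |
      awk -F: '{ printf "%.3f\n", $4 - $3 }'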

The first one is clearly an outlier...any idea why?

Tom


