
assurance - Re: [Assurance] Counting Failed Logins Update

  • From: "Joe St Sauver" <>
  • To:
  • Subject: Re: [Assurance] Counting Failed Logins Update
  • Date: Tue, 25 Jun 2013 12:36:25 -0700 (PDT)

Brendan commented:

#One way to thwart brute force attacks without adversely affecting account
#owners is to exponentially slow down the responses to bad password attempts,
#rather than all password attempts. There is no need to lock out the account
#in this case. This also prevents the accidental lockouts that would be
#caused by the "hyperactive email checkers" you refer to.

Couple of further thoughts on this...

1) If the bad guys employ a breadth-first attack against a host using this
approach, is there a risk of suffering state exhaustion attacks as more
and more connections pile up, each trying to login but stuck in an
exponential backoff wait state, waiting to count down?

Normally state exhaustion attacks target firewalls and similar stateful
security appliances, but anything that's processing and maintaining
state (and has finite capacity) is potentially at risk, I think, including
systems processing logins.

I think that if you want to avoid state exhaustion attacks, you *can't*
"hold onto" connections while they count down... you need to triage those
connections, dropping them just as quickly as you can.
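One way to sketch that triage (all names and the in-memory store here are my
own assumptions, not anyone's production design): keep only a tiny per-account
record of failure count and last-failure time, and reject too-early attempts
immediately, rather than parking the connection while a timer counts down.

```python
import time

# Hypothetical sketch: per-account failure state is a small tuple, and the
# check is O(1) with no waiting, so no connection is ever "held" while a
# backoff timer counts down -- the caller just drops it and moves on.
_failures = {}  # account -> (failure_count, time_of_last_failure)

def backoff_seconds(count, base=2.0):
    """Delay required after `count` consecutive failures."""
    return base ** count

def attempt_allowed(account, now=None):
    """True if a login attempt may proceed right now; otherwise the
    caller should drop the connection immediately."""
    now = time.time() if now is None else now
    count, last = _failures.get(account, (0, 0.0))
    return count == 0 or (now - last) >= backoff_seconds(count)

def record_failure(account, now=None):
    now = time.time() if now is None else now
    count, _ = _failures.get(account, (0, 0.0))
    _failures[account] = (count + 1, now)

def record_success(account):
    _failures.pop(account, None)
```

The point is that the server's only per-attacker cost is a dictionary entry,
not a parked connection, so a breadth-first attack exhausts far less state.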

2) I also worry that an exponential backoff policy might increase help desk
loads ("I try to login, but my account just hangs -- it doesn't fail,
but it doesn't let me login, either, it just seems to get stuck or
something...")

I suppose the login process could explicitly provide user feedback ("Too many
login failures. Please wait at least N seconds before you try to login
again."), but of course, providing that sort of feedback will allow the
bad guys to get a pretty good idea of just how aggressive your anti-brute
force algorithm is, thereby helping them to schedule/optimize their
attack.

There's also the issue that non-interactive logins (think POP/IMAP clients)
may not have a clean way of exposing that sort of feedback to the user.

3) Coming back to the issue of "how aggressive should the anti-brute force
algorithm be," how aggressive an exponential backoff *do* we need?

[warning, geek out, hit delete now]

Assume 30 days/month, 24 hours/day, 3600 seconds/hour ==> 2592000 secs/month

By the 50th attempt (or 100th attempt, or whatever threshold you want to
specify), we need a delay that ensures login won't be possible until "next
month". If we assume the bad guys attack at 12:01 on the first day of the
month, this means that by the 50th attempt, we need to impose a delay of
at least 2592000...

What if (as a simple rule) we simply try doubling the delay after each
login failure? (Those of you who recall the so-called "wheat and chessboard"
story, see http://en.wikipedia.org/wiki/Wheat_and_chessboard_problem ,
already know where this is going...)

This rule would generate pretty aggressive delays that look like...

2 seconds, 4 seconds, 8 seconds, 16 seconds, 32 seconds, 64 seconds, 128
seconds, 256 seconds, 512 seconds,

[somewhere about this point, I think folks are going to begin to be unwilling
to continue to wait for things to count down... 512/60=8.5 minutes...]

1024 seconds, 2048 seconds, 4096 seconds, 8192 seconds, 16384 seconds,
32768 seconds, 65536 seconds, 131072 seconds, 262144 seconds, 524288
seconds, 1048576 seconds...
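For the sake of checking the arithmetic, here's a quick sketch (taking the
delay after the n-th failure to be 2^n seconds, purely for illustration) that
regenerates the doubling delays and finds the first failure count whose delay
already exceeds the month:

```python
# Quick arithmetic check of the doubling rule: delay after the n-th
# failure taken to be 2**n seconds (an assumption for illustration).
MONTH_SECONDS = 30 * 24 * 3600  # 2,592,000

delays = [2 ** n for n in range(1, 21)]  # 2, 4, 8, ..., 1048576 seconds
first_over_month = next(n for n in range(1, 64)
                        if 2 ** n >= MONTH_SECONDS)
print(delays[-1], delays[-1] / 3600)   # 1048576 seconds, ~291.27 hours
print(first_over_month)                # 22
```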

Thus, after just the 20th failure, a user would effectively be facing a
1048576/3600=291.27 hour wait... from a practical POV, I think I'd call
that "indistinguishable from being locked out" (and many would probably
set that perceptual threshold somewhere *far* lower than that).

Arguably, this means the suggested exponent (2.0) is far *too* aggressive.

Put another way, what is X, if X^50 must be >= 2,592,000 ? Testing a
couple of values:

1.3^50 = 497,929.22
1.4^50 = 20,248,916.23

so clearly the required exponent is somewhere between 1.3 (too small)
and 1.4 (*way* more than necessary). Refining that further via the good
old brute force method, what do we see if we try 1.35, and 1.34, maybe?

1.34^50 = 2,265,895.71
1.35^50 = 3,286,157.87

We've now bracketed the required value (2,592,000) pretty closely.

Just for the sake of argument/to beat a dead horse, what if we bisect
those values?

1.345^50 = 2,729,695.58

I think that's probably good enough for government work, or if you just
want to do this the exact way, take the 50th root of 2592000, which
works out to 1.3436083679306952 (there's an N-th root calculator at
http://www.basic-mathematics.com/nth-root-calculator.html if you don't
have Mathematica or a good scientific calculator).
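Any language with floating point can also do this directly; a one-liner
sketch in Python:

```python
# Double-precision 50th root of a month of seconds: the multiplier X
# with X**50 == 2,592,000.
MONTH_SECONDS = 30 * 24 * 3600  # 2,592,000
x = MONTH_SECONDS ** (1 / 50)
print(x)  # ~1.3436083679306952
```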

If we just use the 1.345 approximation, that would imply wait times that
look like:

1.345 seconds, 1.809 seconds, 2.433 seconds, 3.272 seconds, 4.401 seconds,
5.920 seconds, 7.962 seconds, 10.709 seconds, 14.404 seconds, 19.374 seconds,
26.058 seconds, 35.048 seconds, 47.140 seconds, 63.403 seconds, 85.277 seconds,
114.698 seconds, 154.269 seconds, 207.491 seconds, 279.076 seconds,
375.358 seconds (i.e., over a five minute wait as of the 20th failure)
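That table is easy to regenerate if you'd rather tweak the base than trust
my arithmetic; a minimal sketch:

```python
# Regenerate the table above: delay after the n-th failure is 1.345**n
# seconds (the values listed are truncated to three decimals).
delays = [1.345 ** n for n in range(1, 21)]
print(round(delays[-1] / 60, 2))  # minutes of wait at the 20th failure
```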

Is that what folks would like to see, in terms of the user-experienced
exponential delays?

Regards,

Joe


