Hello all!
I'm new here and have a simple question (the subject line) that I can't get a solid answer to. Please correct me where I'm wrong.
- Brute force attacks involve multiple attempts using automated trial-and-error.
- No human typing manually could come anywhere near the speed of a brute force attack.
I understand why automated access via password would require brute force protection, but not why human access would. If a password is expected to be entered by a human (e.g. the master password for LastPass), a simple 5-second delay between attempts (or even a hard cap of a dozen attempts per hour) would barely inconvenience the human but would make automated brute force attacks impractical. Why can't systems limit the frequency of attempts? I appreciate that this may require tying the code to the data (i.e. making the raw data accessible only through the code), but I assume someone has tackled that problem.
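The throttling idea above is easy to sketch. This is just a toy in-memory example (the 5-second delay comes from my question; the function and variable names are mine, not any real product's API):

```python
import time

ATTEMPT_DELAY = 5.0  # seconds a client must wait between guesses (from the post)
_last_attempt = {}   # username -> timestamp of last attempt (toy in-memory store)

def check_password(username, guess, real_password):
    """Reject any attempt that arrives sooner than ATTEMPT_DELAY after the last one."""
    now = time.monotonic()
    last = _last_attempt.get(username)
    if last is not None and now - last < ATTEMPT_DELAY:
        return False  # throttled: too soon since the previous attempt
    _last_attempt[username] = now
    return guess == real_password
```

With this in place, even a script guessing as fast as it can gets at most one real check every 5 seconds per account, so the delay only helps while this code sits between the attacker and the data.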
Thank you in advance for your thoughtful responses!
Thank you for your response. If I understand correctly, we need brute force protection not against access to the data (which software can control), but against decryption of the data file once it has already been accessed, i.e. stolen, because then there is no code left to enforce a delay. That makes sense. The logical follow-up question is: why do services (like LastPass) store our data protected only by the user's password? Wouldn't it make sense to layer on additional encryption keys? Stolen data would then require not just the user's password but several other keys that only the service provider holds, stored in such a way that a hacker would be very unlikely to steal both the user data and the provider keys.
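The layered-key idea I'm describing could look something like the sketch below: stretch the user's password with a slow, salted derivation, then mix in a secret held only by the provider (sometimes called a "pepper"). This is purely illustrative, assumed on my part, and not how LastPass or any particular service actually does it:

```python
import hashlib
import hmac

def derive_vault_key(password: bytes, salt: bytes, server_key: bytes,
                     iterations: int = 600_000) -> bytes:
    """Derive a vault encryption key from a password plus a provider-held secret.

    An attacker who steals only the vault file (salt + ciphertext) would also
    need server_key, stored separately, before an offline brute-force attack
    could even begin. All names and parameters here are illustrative.
    """
    # Slow, salted stretching makes each individual password guess expensive.
    stretched = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    # Mix in the provider-side key, which never lives alongside user data.
    return hmac.new(server_key, stretched, hashlib.sha256).digest()
```

The point of the sketch: the same password and salt produce a completely different key if the server-side secret differs, so the stolen file alone is not enough to start guessing.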
Basically, I'm just annoyed that service providers put the onus of encryption (complex passwords) on users instead of having better protection techniques. Relying on a single user-generated key (that can be brute-forced) seems lazy.