Monday's issue of Peter Neumann's Risks Digest highlighted a subtle security problem that is rarely discussed and is probably slipping under the radar of most site administrators.
Earlier this month, the University of Texas discovered that the Social Security numbers and other personal information of approximately 55,200 people had been illegally retrieved by an outside attacker.
How the attacker got in is an instructive lesson for us all. There was no buffer overflow, no low-level attack—in fact, no subverting of security measures at all.
The attacker discovered a publicly accessible Web application that allowed a user to query a database using only a Social Security number and then returned student and staff data for a person with the matching number.
At this point, harvesting the data was only a matter of data-entry tedium. To eliminate that tedium, the attacker wrote a program to automatically scan through a range of SSN values and save the returned data. Over a period of five days, the attacker or attackers scanned through three ranges of SSNs, making about 2.7 million queries to this application in the process.
There are a number of interlocking problems here:
1) Using the SSN as a database key (there are better approaches);
2) Using only the structured, guessable SSN as both the authentication and the lookup key—asking for a single extra piece of information, such as a last name, would have blocked this attack (see the sketch after this list);
3) Not having a mechanism in place to notice highly unusual application usage.
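The second point is easy to illustrate. Here is a minimal sketch (the table and column names are hypothetical, not the university's actual schema) of a lookup that returns a record only when the SSN and an independently supplied last name both match, so a script that iterates over SSNs alone gets nothing back:

import sqlite3

def lookup_person(conn: sqlite3.Connection, ssn: str, last_name: str):
    """Return a directory record only if both identifiers match."""
    row = conn.execute(
        "SELECT name, department, email FROM people "
        "WHERE ssn = ? AND lower(last_name) = lower(?)",
        (ssn, last_name),
    ).fetchone()
    if row is None:
        # Same response for "no such SSN" and "wrong last name",
        # so the form leaks nothing about which field was wrong.
        return None
    return {"name": row[0], "department": row[1], "email": row[2]}

The check costs one extra form field and one extra WHERE clause, yet it turns blind enumeration into guesswork across two unrelated values.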
This last point is subtle. Computers are stupid creatures and will happily do what they are designed to do, as fast as resource constraints allow, for as long as they are asked to do so.
In February, I wrote about the Open Web Application Security Project's new Top Ten list of application security mistakes. One phrase buried in its section on handling application errors (on Page 16 of the report) came back to me when I heard about the University of Texas case.
“Very few sites have any intrusion detection capabilities in their Web application, but it is certainly conceivable that a Web application could track repeated failed attempts and generate alerts. Note that the vast majority of Web application attacks are never detected because so few sites have the capability to detect them. Therefore, the prevalence of Web application security attacks is likely to be seriously underestimated.”
This is a warning we need to take to heart. Do your IT systems use any kind of internal event logging mechanism to record their actions? Are security events like failed log-ins recorded somewhere? Are logs centralized and inspected regularly?
Note that just recording security errors isn't enough: Too many things done right are a problem, too.
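What would such a check look like? A minimal sketch, and not anything the university actually ran: count successful lookups per client address over a sliding window and flag any source whose volume is far beyond what a human user could plausibly generate. The window length and threshold below are illustrative.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600        # look at the last hour
MAX_LOOKUPS_PER_WINDOW = 50  # hypothetical limit for one legitimate user

_history = defaultdict(deque)

def record_successful_lookup(client_ip, now=None):
    """Record one successful query; return True if this client looks automated."""
    now = time.time() if now is None else now
    window = _history[client_ip]
    window.append(now)
    # Drop entries that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    # In a real deployment this would raise an alert or block the source;
    # here we just signal the anomaly to the caller.
    return len(window) > MAX_LOOKUPS_PER_WINDOW

Notice that the trigger is a flood of successes, not failures; a conventional failed-login counter would have stayed silent through all 2.7 million queries.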
In one of Orson Scott Card's rare cyberpunk stories, "Dogwalker" (the full text is available online, and the story is now being made into a movie), the protagonist cracks into a government database by logging in using a system administrator's password.
He gets busted because the G-man always deliberately fails his first log-in attempt and logs in correctly on his second try: "The system knew the pattern, that's what. Jesse H. is so precise he never changed a bit, so when we came in on the first try, that set off alarms."
Now that's effective pattern recognition.
We're not there yet in the security field, but putting systems in place to detect massive changes in typical system usage is certainly possible. It's clear what the consequences of not having these kinds of checks in place can be.
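Even a crude baseline comparison would have caught this case. The sketch below, with purely illustrative numbers, compares today's query volume against the recent daily average and alerts when it jumps by more than a chosen factor; 2.7 million queries over five days works out to roughly 540,000 a day against a baseline of a few hundred.

from statistics import mean

def unusual_volume(daily_counts, today, factor=10.0):
    """True if today's volume dwarfs the recent daily average."""
    if not daily_counts:
        return False
    baseline = mean(daily_counts)
    return baseline > 0 and today > factor * baseline

# Example: ~400 queries a day is normal; 540,000 in one day is not.
history = [380, 420, 395, 410, 402, 388, 417]
print(unusual_volume(history, 540_000))  # -> True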
West Coast Technical Director Timothy Dyck can be reached at timothy_dyck@ziffdavis.com.