Do we ever learn? This question might sound a bit harsh, but it seems justified when looking at the size and scope of the recently discovered Java vulnerability. Over the last few weeks, we’ve heard that a small piece of software installed on nearly every modern PC has had a fatal back-door for many, many years.
We all know that software development is a complex matter and that building secure applications is like chasing your own shadow – you might seem to get close, but you never really catch up. Still, we should be aware of the long history of software blunders, and try to ensure that what we build now is better than what came before.
After all, other people's mistakes are the most painless way to learn.
That's why I've decided to compile a list of 5 notorious security flaws spotted in the last few years – with suggestions as to what we could learn from them.
1) The mighty Java bug
Let’s start with the thing that inspired this article. At the end of 2012, we heard the news of a spectacular Java vulnerability that had been there for a really long time. It affected not only version 7 of the Java plug-in, but also versions 5 and 6. That’s every Java release in the past eight years! The vulnerability made it fairly easy to break out of the sandbox created by the plug-in and run malicious code on a victim’s computer. Potential victims? One billion users.
Hackers of all sorts were quick to weaponize this exploit when it became publicly known. Soon, it was featured in two of the most popular Web threat tools – the BlackHole Exploit Kit and the Cool Exploit Kit.
Oracle, the company behind Java, scrambled to provide a fix. It issued a security patch on January 14, but many claimed it was rushed and incomplete. According to Ars Technica, a member of one of the underground hacking communities offered to reveal a way to re-open the hole in the patched Java plug-in for the reasonable price of $5,000.
He might have been a simple scammer, but security consultants agree that patching this exploit might prove more complicated than Oracle would have us believe. CMU/SEI CERT experts suggest disabling the Java plug-in even after installing the patch. The chief security officer of Rapid7 claims it can take Oracle as much as two years to prepare a really secure version of the plug-in.
We’re not in the clear yet, and neither is Oracle.
What have we learned? Even if a piece of software has been used and scrutinized for many years, it may still contain surprising backdoors. Nothing we use is 100% hack-proof, so we should reduce the number of active runtimes and code execution environments on our systems to the bare minimum.
2) The XP zero-day conundrum
This one is a bit older, but nevertheless interesting, due to a simple paradox. All too often, people receive software in an unpolished state. It happens mostly because of those nasty things called deadlines, which upper management at software development houses treats very seriously. This has pushed many companies into the habit of fixing certain problems post-release, with patches and updates. And that’s all fine – until you realize that in order to download those updates, you first have to connect to the Internet with a system perforated like a grater.
If you think that this window of opportunity is too small for someone to take advantage of, think again. That’s exactly what happened to many Windows XP users who were compromised before they even had a chance to download security updates or install antivirus software. To make things even worse, there is a large group of late adopters who seem to have never installed those pesky patches. They keep using software in its zero-day state for years, oblivious to any threats.
The most recent zero-day vulnerability in Windows XP was found in 2010 and allowed hackers to install applications and malware on the system remotely, thanks to a gap in the Help & Support Center module. Shortly after the bug became public, Microsoft reported as many as 10,000 attacks being conducted from all over the world on fresh or non-updated copies of Windows XP.
So here’s a mind-boggling paradox: how do you hack-proof a system when merely turning it on gets you hacked?
What have we learned? Be nice to people who point out big flaws in your security. The man who discovered the recent XP bug, Tavis Ormandy of Google, published it after spending 60 days straight trying to get Microsoft to plug the gap. Had the software giant reacted faster, it would have saved itself some shame, and saved 10,000 people from a data breach. Also, before you install any system or piece of software, it’s worth checking for already known zero-day vulnerabilities, downloading offline patches, and applying them before you put the Ethernet cord into the socket.
3) A game company gamed by hackers
On April 20, 2011, much to the surprise of its customers, Sony announced that its PlayStation Network had been compromised. Soon, the company pulled the plug on the entire network – and everyone realized this was much more than an ordinary hacking attempt. After all, the move deprived millions of PS3 and PSP owners of online functionality for many days.
A Sony spokesperson blamed an external intrusion, but as we all know, “external intrusions” mostly succeed because of security flaws and vulnerabilities.
Just like the Java plug-in flaw, the PlayStation Network flaw was fundamental, and proved both expensive and time-consuming to fix. It took Sony seven days to find out what happened, how it happened, and what exactly was stolen.
As it turned out, the data of 77 million people had been accessed and downloaded. This included login details, credit card numbers, transaction histories and real-world addresses. The network went offline for 24 long days while IT staff tried to remove the vulnerability.
What was the cause? Sony is very tight-lipped about the incident, but fortunately we’ve got reports from security companies that investigated the hack. According to them, Sony’s PlayStation service and authentication service ran on old, unpatched Apache web servers with known security issues. The servers were connected to the Internet without proper firewalls, and their configuration was insecure.
And the final straw? Sony employees knew about the weakness but chose to ignore it, presumably because fixing the entire architecture would have required too much work. Most likely, they also didn’t want their superiors to know that the servers were prone to attack in the first place.
What have we learned? Update your stuff. Your platform is only as secure as its weakest link. Know that databases of extraordinary value will attract extraordinary efforts from hackers. If you learn about a potential exploit, do everything you can to patch it ASAP – it’s almost certain someone else knows about it too. And don’t rely on internal assessments: if you want to know whether something is really secure, hire a third party to test it.
4) An 8-year old Web form leads to a disaster
The Sony hack was arguably the most famous in recent years. It was widely covered in both the gaming media and the business press. That’s because the company in question was so recognizable, and the people affected were mostly young and outspoken tech-savvy individuals.
But it was not the largest security breach by far. That crown goes to a simple exploit that let unwelcome guests into the database of Heartland Payment Systems – a leading US payment processing company – and led to the theft of 130 million records. That’s almost twice the amount of data extracted from Sony.
To this day it is regarded as the single largest data breach in the history of IT.
And it all happened thanks to an eight-year-old Web form that languished, covered in cobwebs, somewhere on the corporate website. Maybe because it was created in a different day and age, and maybe due to a simple oversight by the programmer, this form was vulnerable to SQL injection. In plain English: if, instead of normal text, you pasted a specially crafted SQL fragment into the text box, the server would execute it, giving you access to things you shouldn’t have access to.
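We obviously don’t know what Heartland’s form looked like, but the pattern behind every SQL injection is the same. Here’s a minimal sketch in Python with an in-memory SQLite database – the table, names and payload are made up purely for illustration:

```python
import sqlite3

# Toy database standing in for the real back end (illustrative only).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, card TEXT)")
db.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")
db.execute("INSERT INTO users VALUES ('bob',   '5500-0000-0000-0004')")

def lookup_unsafe(name):
    # VULNERABLE: user input is pasted straight into the SQL string.
    query = "SELECT name, card FROM users WHERE name = '%s'" % name
    return db.execute(query).fetchall()

def lookup_safe(name):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return db.execute(
        "SELECT name, card FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic payload turns the WHERE clause into a tautology,
# so the query matches every row instead of one name.
payload = "' OR '1'='1"
```

Feed the payload to `lookup_unsafe` and it dumps every record in the table; the parameterized version returns nothing, because no user is literally named that. This is exactly why every database driver ships with parameterized queries.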
During those eight years, Heartland conducted several security audits, but most of them focused on the payment processing modules. No one ever bothered to take a look at the small Web forms buried deep in the byzantine structure of Heartland’s front-end.
Now, there are many automated tools you can use to scan your Web app for potential SQL injection targets. Heartland’s staff never used them. Some enterprising hackers did. After gaining access to the corporate network, they were able to crack the payment processing systems from the inside and steal a lot of data.
That’s how one piece of obsolete code led to 130,000,000 lost records and $3.5 billion in damages.
What have we learned? Be aware of all the legacy architecture in your system. A long-forgotten search window, an unused protocol, a legacy API – under the right circumstances, they all become entry points for unwanted guests. Stay on top of the tools used by hackers. Make sure your system doesn’t have a huge “attack me” sign written all over it. And when testing your systems for security, test all of them, not only the priority ones – they might be connected in ways you wouldn’t even imagine.
5) Republican Wunderwaffe falls flat
It was not a vulnerability or a hack, but it deserves an honorable mention. ORCA, a Web application prepared by Mitt Romney’s staff, was a secret weapon meant to give Republican volunteers a significant edge on election day by pointing them to people who still hadn’t voted.
Except, it crashed and burned in a spectacular way, leaving users across the country scratching their heads, because of something that seems almost unbelievable.
The ORCA Web app used the HTTPS protocol, and no one bothered to set up an automatic redirect from the HTTP address. On election day, when people tried to access the Web app, their Android and iOS mobile browsers would default to the HTTP address and show a generic server error message.
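Fixing this is usually a one-line rule in the Web server’s configuration. We don’t know how ORCA’s servers were actually set up, but for illustration, here is a minimal sketch of such a redirect using nothing but Python’s standard library:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answer every plain-HTTP request with a permanent redirect to HTTPS."""

    def do_GET(self):
        host = self.headers.get("Host", "localhost")
        self.send_response(301)  # 301 Moved Permanently
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging in this example

# In production this would listen on the plain-HTTP port and bounce
# every visitor to the HTTPS address, e.g.:
#   HTTPServer(("", 80), RedirectToHTTPS).serve_forever()
```

With a rule like this in place, a browser that lands on the `http://` address is silently retried at `https://`, and the user never sees an error page at all.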
You can read more about the entire fiasco here.
It’s almost scary that a custom application developed for a wealthy political party in one of the mightiest countries on earth could fall victim to such a basic failure.
Scarier still, the tests conducted prior to deployment didn’t catch this flaw. It appears not a single person took out his or her personal smartphone and tried to access ORCA outside of scheduled tests. Not that it was easy: most end users didn’t even get a chance to see the app until election day.
What have we learned? Sometimes people forget about the most basic things. Be sure to check the obvious stuff, and do it just like an end user would. Don’t settle for scripts and automated testing tools – let real users check the app or platform on their own hardware, on their own network. In ORCA’s case, the simple fact that a tester had used the same browser to access the app before was enough to skew the results, as it would auto-complete the HTTPS address.
And that concludes my personal list of top five vulnerabilities in recent history. What’s most striking is that nearly all of them could have been easily avoided if someone, somewhere, at some point had shown a little extra care.
That's one of the reasons why Xebia recently co-hosted the Quality in IT conference in six major Polish cities. That's also why we've acquired a small but very successful company, TestBenefit, which specializes in software testing, security and quality assurance. TestBenefit specialists now cooperate with us on most major projects. We also take care to educate ourselves about current and past software security issues.
That’s something all software developers should keep in mind, lest they end up on a list just like this one – and we all have to read through yet another story of “surprising backdoors” and “avoidable flaws”.