DEFENSE EVALUATION

We evaluated three defensive methods with respect to their effectiveness against these malicious servers: blacklisting, patching, and using a different browser.

Blacklisting

First, we evaluated blacklisting as a method of defense. We inspected 27,812 URLs of known bad sites. These URLs originate from the well-known hosts file from www.mvps.org, used for DNS blackholing on Windows machines, and from the clearinghouse of stopbadware.org. Of these, 1,937 URLs (6.96%) successfully attacked our client honeypot running Internet Explorer 6 SP2. The providers of these lists clearly know about malicious URLs that exist on the Internet; however, they know only a minority of them. Checking which of the servers we identified as malicious appear in these lists of known bad sites reveals that only 12% of the malicious servers are known.
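To illustrate how such a blacklist can be assembled and consulted, the following Python sketch loads hosts-file entries (of the kind distributed by www.mvps.org) and a plain list of bad URLs (of the kind provided by stopbadware.org) into a set of blocked host names and checks a candidate URL against it. This is only a minimal sketch of the idea, not the code used by our client honeypot; the file names and function names are hypothetical.

    # Minimal sketch only -- not the client honeypot's actual code.
    # The file names "mvps_hosts.txt" and "stopbadware_urls.txt" are hypothetical.
    from urllib.parse import urlparse

    def load_hosts_file(path):
        """Parse hosts-file entries such as '0.0.0.0 badhost.example'."""
        blocked = set()
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()  # drop comments and blanks
                parts = line.split()
                if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
                    blocked.add(parts[1].lower())
        return blocked

    def load_url_list(path):
        """Extract host names from a plain list of known-bad URLs."""
        blocked = set()
        with open(path) as f:
            for line in f:
                host = urlparse(line.strip()).hostname
                if host:
                    blocked.add(host.lower())
        return blocked

    def is_blacklisted(url, blocked_hosts):
        """Return True if the URL's host appears in the combined blacklist."""
        host = (urlparse(url).hostname or "").lower()
        return host in blocked_hosts

    blocked = load_hosts_file("mvps_hosts.txt") | load_url_list("stopbadware_urls.txt")
    print(is_blacklisted("http://example.com/exploit.html", blocked))

In a DNS blackholing setup, the same set of hosts is simply mapped to an unroutable address in the hosts file, so requests to them never leave the machine.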

Does this mean that blacklisting is an ineffective method? To answer this question, we repeated our analysis of the 306 malicious URLs on a client honeypot that uses a DNS blackhole list containing the servers in the hosts file from www.mvps.org and the servers in the clearinghouse of stopbadware.org. Considering that only 12% of the servers we identified as malicious were included in this blacklist, one would expect a high number of remaining malicious classifications by our client honeypot. Surprisingly, only one URL remained malicious. We conclude that blacklisting is indeed a very effective method to thwart these attacks.

To understand why blacklisting is an effective method, one needs to take a closer look at the exploits deployed on the URLs. In most cases, exploits are not hosted directly on a page; rather, they are imported (for example, via iframes, redirects, or client-side JavaScript redirects) from a server that specializes in providing exploits, as shown in Figure 5. These are centralized machines that are likely to be referenced from numerous URLs. The blacklist that we used seems to include most of these centralized exploit servers, but not the malicious web servers that the client initially contacted. Despite this lack of knowledge of the malicious web servers, the exploit can still be blocked via the blacklist, because the follow-up request to the exploit server that results from the redirect is blocked. In the side boxes on in-depth analysis, we take a closer look at the relationship between the input URL and the imported exploits of a specific site.

Figure 5 - Central Exploit Server
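The sketch below illustrates this blocking mechanism. Given the HTML of a landing page, it extracts the URLs the page imports (iframes, frames, and external scripts) and filters out any whose host appears on the blacklist, so the follow-up request to a centralized exploit server is never made even when the landing page itself is not blacklisted. Again, this is an illustrative sketch under the assumption of a simple blocklist check, not our honeypot's implementation; the class and function names are hypothetical, and HTTP or JavaScript redirects would be filtered analogously before the request is issued.

    # Illustrative sketch of blocking follow-up requests to blacklisted hosts.
    # The names ImportExtractor and allowed_requests are hypothetical.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class ImportExtractor(HTMLParser):
        """Collect URLs that a page pulls in via iframes, frames, and scripts."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.imports = []

        def handle_starttag(self, tag, attrs):
            if tag in ("iframe", "frame", "script"):
                src = dict(attrs).get("src")
                if src:
                    self.imports.append(urljoin(self.base_url, src))

    def allowed_requests(landing_url, page_html, blocked_hosts):
        """Yield only those imported URLs whose host is not blacklisted."""
        parser = ImportExtractor(landing_url)
        parser.feed(page_html)
        for url in parser.imports:
            host = (urlparse(url).hostname or "").lower()
            if host not in blocked_hosts:
                yield url
            # A blocked host here stops the exploit even if the landing page
            # itself was never blacklisted.

Because most of the malicious pages we observed merely import the exploit from one of a few centralized servers, blocking those servers is enough to break the chain, which is consistent with the drop from 306 malicious URLs to a single one.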

Patching

Second, we evaluated whether patching is an effective method to defend against malicious web servers. All malicious URLs that we have encountered to date (including 47 malicious URLs that were submitted to the client honeypot via our web site) were evaluated once again with a fully patched (as of May 10, 2007) version of Internet Explorer 6. A total of 2,289 URLs were inspected; this resulted in zero successful compromises.

Patching does seem to be very successful as a defense measure. This comes as no surprise, because attackers tend to rely on known exploits as long as there is a chance of success. And the odds of success with these older exploits appear to be high, because a large user base still does not seem to use automatic patching, leaving their systems vulnerable. The current browser distribution from w3schools.com reveals that Internet Explorer 6 is still used by 38.1% of Internet users worldwide as of May 2007. Considering that Internet Explorer 7 has been pushed as a high-priority update by Microsoft for several months, this indicates that a large number of these users probably do not have automatic updates turned on. Some portion of the 38.1% who do have automatic updates turned on have probably made a conscious decision not to upgrade to Internet Explorer 7, but rather to accept only Internet Explorer 6 patches. Nevertheless, we suspect that many simply do not have automatic updates enabled.

Despite its great success as a defensive approach, patching still does not provide complete protection. There is certainly the risk of zero-day exploits that can successfully attack a fully patched version of a browser. In addition, there is the risk of exploits targeting publicly disclosed vulnerabilities for which no patch is yet available. Users of Internet Explorer 6 were in this position recently: in September 2006 due to the VML vulnerability and in March 2007 due to the ANI vulnerability. These vulnerabilities, which allowed remote code execution, were publicly disclosed and left unpatched for about one week. We suspect that malicious servers are quick to distribute such exploits, as they are very effective and easily obtainable. To confirm this, we will need to collect data with our client honeypot farm once a situation similar to the VML or ANI vulnerability arises in the future.

Different Browser

Internet Explorer 6 SP2 is known to be at risk of attack by malicious web servers. As the final piece of this study, we evaluated whether using a different browser would be an effective means of reducing this risk. We compared three browsers: Internet Explorer 6 SP2, Firefox 1.5.0, and Opera 8.0.0. The common perception is that Firefox is safe and Internet Explorer is unsafe. However, a review of the remote code execution vulnerabilities that were publicly disclosed for Firefox 1.5 and Internet Explorer 6 SP2 reveals that, in fact, more were disclosed for Firefox 1.5 (see Figure 6), indicating that, if anything, the opposite is true.

Figure 6 - Remote code execution vulnerabilities per browser

To determine which browser is actually safer to use, we set up our client honeypots to use these browsers to interact with the servers. Due to time constraints, we were not able to re-evaluate all 300,000 URLs with each browser, but we did reinspect the highly malicious adult-content category, comprising approximately 30,000 URLs. As shown in Figure 7, these input URLs, which resulted in successful compromises of Internet Explorer 6 SP2 at a rate of 0.5735%, did not cause a single successful attack on Firefox 1.5.0 or Opera 8.0.0. The results for Firefox 1.5.0 are particularly surprising, considering the number of remote code execution vulnerabilities that were publicly disclosed for this browser and the fact that Firefox is also a popular browser. We can only speculate about why Firefox was not targeted. We suspect that attacking Firefox is a more difficult task, as it uses an automated and "immediate" update mechanism. Since Firefox is a standalone application that is not as integrated with the operating system as Internet Explorer, we suspect that users are more likely to have this update mechanism turned on. Firefox is truly a moving target. An attack on a user of Internet Explorer 6 SP2 is therefore more likely to succeed than one on a Firefox user, and so attackers target Internet Explorer 6 SP2.

Figure 7 - Malicious classifications of adult content URLs per browser