WannaCry Ransomware Is Self-Inflicted and Entirely Predictable

Drew Wilson | May 19, 2017

As the situation surrounding the WannaCry ransomware evolves, the security nightmare should be raising some important questions.

Earlier this month, the WannaCry ransomware began taking the world by storm. This is malware typically described as spreading through e-mail: as soon as one person opens an infected message, every computer on the network is put at risk. The malware installs itself on an infected computer and encrypts the files on it. A screen then pops up telling users that their files have been encrypted, demanding payment of a few hundred dollars' worth of Bitcoin, and warning that they have only a limited time to comply. The malware is indiscriminate and has hit numerous countries around the world. Media outlets have been reporting on the evolving story ever since.

One common line of reasoning behind the attack is that "cyber criminals" are getting smarter and the weapons they use are getting better. If one digs deeper into the facts, however, a very different story emerges. Where did WannaCry come from? Did it spring from the twisted mind of a highly skilled hacker wanting to make a fast buck off the backs of others? Not exactly. As Ars Technica notes, if you want to find the source of the attack, look no further than the American government:

Another cause for concern: wcry copies a weapons-grade exploit codenamed Eternalblue that the NSA used for years to remotely commandeer computers running Microsoft Windows. Eternalblue, which works reliably against computers running Microsoft Windows XP through Windows Server 2012, was one of several potent exploits published in the most recent Shadow Brokers release in mid-April.
The Wcry developers have combined the Eternalblue exploit with a self-replicating payload that allows the ransomware to spread virally from vulnerable machine to vulnerable machine, without requiring operators to open e-mails, click on links, or take any other sort of action. So-called worms, which spread quickly amid a chain of attacks, are among the most virulent forms of malware. Researchers are still investigating how Wcry takes hold. The awesome power of worms came to the world's attention in 2001 when Code Red managed to infect more than 359,000 Windows computers around the world in 14 hours.

"The initial infection vector is something we are still trying to find out," Adam Kujawa, a researcher at antivirus provider Malwarebytes, told Ars. "Considering that this attack seems targeted, it might have been either through a vulnerability in the network defenses or a very well-crafted spear phishing attack. Regardless, it is spreading through infected networks using the EternalBlue vulnerability, infecting additional unpatched systems."

Ars Technica also notes that while WannaCry did use NSA tools, it's unclear whether those tools are the only thing that makes up the malware as a whole.

The question then becomes: why are government organizations building malware to begin with? Is this a new thing? For the latter question, the answer is actually no. Governments around the world have been pushing to allow their spies to build malware for years. As one example, we can point to the LOPPSI 2 debate in France back in 2009. While the debate is old and, in some respects, seemingly out of date, many of the questions raised remain the same. From my report back then:

Last month, we broke the news for English speakers about this legislation, and now a French cybercrime expert has been able to discuss various aspects of the law in the French newspaper Le Monde (Google translation). There were some interesting points made throughout the numerous responses.
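The EternalBlue exploit described above targets Microsoft's SMBv1 file-sharing service, which listens on TCP port 445. As a purely defensive illustration, here is a minimal Python sketch (the host list is a hypothetical placeholder) that checks whether that port is reachable on a machine – a quick way for administrators to inventory systems that are at least exposed to this class of worm. It checks network reachability only; it does not test for the vulnerability itself.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    # Hypothetical inventory; replace with the machines you administer.
    for host in ("127.0.0.1",):
        status = "reachable" if smb_port_open(host) else "closed/filtered"
        print(f"{host}:445 is {status}")
```

Actually determining whether a host is vulnerable requires verifying that the MS17-010 patch is installed, which a port check like this deliberately does not attempt.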
The first response noted that, traditionally, surveillance involved microphones and video cameras. Since it takes a lot of time and money to install them covertly on someone, this approach isn't scalable – that is to say, you can't spy on tens of thousands of people because it requires too much time and money. The same cannot be said for installing key loggers and trojan horses on people's computers for covert surveillance purposes: once someone creates a trojan or a piece of spyware, it can theoretically be installed on thousands of machines at no extra cost, because the scalability is far greater. This means the legislation paves the way for unprecedented surveillance powers for police and the government.

Another point is that people with malicious intent, or criminals for that matter, use precisely the same kind of technology that is supposed to be used by police. That matters because anti-virus and anti-spyware technology is specifically designed to block such tools. It leads to a more disturbing question: are anti-virus companies going to be ordered by the French government to create white-lists for trojans and spyware? Not mentioned in the response is whether, should that happen, someone would create their own programs to detect and remove such technology.

In one part of the conversation, there was the question of whom these viruses and spyware are intended for in terms of geography. The legislation is meant for traditional criminals on French soil. Not mentioned in the response is that, given how networked today's society is on the internet, confining malware to one particular country is going to be an extremely difficult proposition in and of itself.
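The scalability point above – physical surveillance costs scale with every target, while spyware pays its cost largely once – can be made concrete with some back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions, not numbers from the report:

```python
def physical_cost(targets: int, per_install: float = 10_000.0) -> float:
    """Covert microphones/cameras: cost scales linearly with each target."""
    return targets * per_install

def malware_cost(targets: int, development: float = 500_000.0,
                 per_install: float = 1.0) -> float:
    """Spyware: one large up-front development cost, near-zero marginal cost."""
    return development + targets * per_install

# At small scale, physical surveillance is cheaper; at tens of
# thousands of targets, the malware model wins by orders of magnitude.
print(physical_cost(10), malware_cost(10))
print(physical_cost(50_000), malware_cost(50_000))
```

Whatever the exact figures, the crossover is what matters: once the tool exists, each additional target is effectively free, which is precisely why the excerpt warns of "unprecedented surveillance powers."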
Still, in another response, Lovet discussed the fact that the legislation is intended to stop child pornography and terrorists – yet, in practice, that turned out not to be the case in countries like Australia, England and Thailand, where legitimate websites wound up on the blocklist as well. Both Australia and Thailand had sites on the blocklist for nothing more than political purposes.

To be fair, France is far from the only country whose government has expressed interest in creating malware over the years. Another part of the debate is the concept of malware leaking to the public: if government agencies are building the malware in the first place, what's to stop that malware from getting into the wrong hands? That question obviously applies both to the debate back then and to the debate now regarding WannaCry. Of course, many people back then were much more willing to dismiss such lines of questioning as little more than foolish paranoia; yet here we are today, watching that line of questioning become reality.

In the excerpt above from my earlier piece, I touched on the topic of vulnerabilities. Should anti-virus vendors whitelist certain forms of malware to aid the government? In the general philosophical sense, this is basically a question of whether systems should be deliberately left open to certain forms of attack. For many activists, the answer is a straight-up "no", because deliberate security holes are simply problematic. How does one ensure that a vulnerability will only ever be used by the government? The simple answer is: you can't.

Fast forward to today, and we see the consequences of setting something like this up. Neowin noted last week that Microsoft has patched the security vulnerability. Still, users have to actually download and install the patch in question. Given the infamous Windows 10 rollout, some users might be reluctant to do so for fear of a forced OS upgrade.
That alone presents a whole host of problems in the grand scheme of things. Right now, Microsoft finds itself in an awkward situation because of reports that the company withheld security patches that would have protected users from the WannaCry virus. From CNET:

Microsoft could have slowed the devastating spread of ransomware WannaCry to businesses, the Financial Times reports. Instead, it held back a free repair update on machines running older software like Windows XP. Microsoft wanted hefty fees of up to $1,000 a year from businesses for "custom" support and protection against attacks like WannaCry, which locks your computer unless you pay the hackers in bitcoin, said the publication.

Microsoft and the NSA are basically pointing fingers at each other over the malware attack at this point. In an article on NetworkWorld, Microsoft places the blame for the WannaCry mess squarely on the NSA:

Microsoft's top lawyer has blamed the government's stockpiling of hacking tools as part of the reason for the WannaCry attack, the worldwide ransomware that has hit hundreds of thousands of systems in recent days. Brad Smith, president and chief legal officer, pointed out that WannaCrypt is based on an exploit developed by the National Security Agency (NSA) and renewed his call for a new "Digital Geneva Convention," which would require governments to report vulnerabilities to vendors rather than stockpile, sell, or exploit them. Smith said: "The governments of the world should treat this attack as a wake-up call. They need to take a different approach and adhere in cyber space to the same rules applied to weapons in the physical world.
We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits." Smith said he hopes the recent WannaCry attack will change the minds of government agencies, leading them to stop developing hacking tools in secret and holding them for use against adversaries, especially since the technology behind WannaCry was stolen from the NSA.

As TechCrunch notes, some are defending the NSA's use of malware:

Alexander was asked how much responsibility the NSA bears for the WannaCrypt virus – given reports have indicated the virus utilizes an exploit that was stolen from the NSA. Yesterday Microsoft also explicitly called out government agencies for undermining global cyber security by stockpiling exploits. "The NSA didn't use the WannaCry, criminals did – someone stole it," he shot back on that. "This WannaCry starts to split [government agencies and industry] apart but our nation needs industry and government to work together," he added. He also implicitly defended the NSA's use of exploits – saying the agency needs "capabilities" to allow it to know what adversaries are doing, and should not be required to release all the exploits it finds. "We've got to have tools," he said. "[NSA] don't hoard exploits; they release 90+ percent of what they get but to go after a terrorist you need an exploit." Alexander's big pitch was for government and industry to work together to try to de-risk these intelligence agency "tools" – i.e. to patch up and firefight critical scenarios whereby an intelligence agency exploit has been leaked and is in the hands of cyber criminals.

So, who is really to blame for all of this? When it comes to Microsoft and the NSA, there really are no winners. For Microsoft's part, the fact that vulnerability patches were withheld for the NSA's benefit suggests that the company was, at least at one point, complicit in this scheme.
The patch was only released after the tools were leaked and the vulnerability exposed. For the NSA's part, it built the tools that exploited these vulnerabilities in the first place. The bottom line is that this ransomware attack is an entirely self-inflicted problem.

So, what's the solution in all of this? While it may sound like siding with Microsoft in this debate, companies should simply go back to patching any vulnerability that crops up. As for governments, there are better ways to investigate threats than creating malware. As long as government-created malware tools are being made and the software industry remains complicit, there will always be more WannaCry-style viruses making their way into the wild. It's not a matter of if, but a matter of when.

Drew Wilson on Twitter: @icecube85 and Google+.