Dec 12 2012
 

Forget Disclosure — Hackers Should Keep Security Holes to Themselves

WIRED
By Andrew Auernheimer, aka “weev”
November 29, 2012

 

Editor’s Note: The author of this opinion piece, aka “weev,” was found guilty last week of computer intrusion for obtaining the unprotected e-mail addresses of more than 100,000 iPad owners from AT&T’s website, and passing them to a journalist. His sentencing is set for February 25, 2013.

Right now there’s a hacker out there somewhere producing a zero-day attack. When he’s done, his “exploit” will enable whatever parties possess it to access thousands — even millions — of computer systems.

But the critical moment isn’t production — it’s distribution. What will the hacker do with his exploit? Here’s what could happen next:


The hacker decides to sell it to a third party.

The hacker could sell the exploit to unscrupulous information-security vendors running a protection racket, offering their product as the “protection.” Or the hacker could sell the exploit to repressive governments who can use it to spy on activists protesting their authority. (It’s not unheard of for governments, including that of the U.S., to use exploits to gather both foreign and domestic intelligence.)


The hacker notifies the vendor, who may — or may not — patch.
The vendor may patch mission-critical customers (read: those paying more money) before other users. Or, the vendor may decide not to release a patch because a cost/benefit analysis conducted by an in-house MBA determines that it’s cheaper to simply do … nothing.

The vendor patches, but pickup is slow. It’s not uncommon for large customers to do their own extensive testing — often breaking software features that couldn’t have been anticipated by the vendor — before deploying improved patches to their employees. All of this means that vendor patches can be left undeployed for months (or even years) for the vast majority of users.

 

The vendor creates an armored executable with anti-forensic methods to prevent reverse engineering. This is the right way to deploy a patch. It’s also manpower-intensive, which means it rarely happens. So discovering vulnerabilities is often as easy as loading the old and new executables into the IDA Pro disassembler and comparing the disassembled code with BinDiff to see what’s changed. Like I said: easy.
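The core idea behind patch diffing can be sketched in a few lines. Real tools like IDA Pro with BinDiff compare programs at the function and control-flow level, but the principle is the same: the patch itself points you at the vulnerable code. This toy illustration (the byte strings are invented stand-ins for two builds of the same program) just compares raw bytes and reports where they diverge:

```python
# Toy illustration of patch diffing: compare an old and a new build of the
# same program byte-by-byte and report the offsets where they diverge.
# Real binary-diffing tools match disassembled functions, not raw bytes,
# but the lesson is identical: the changed region marks the fixed bug.

def diff_regions(old: bytes, new: bytes):
    """Return (start, end) offset pairs where the two byte strings differ."""
    regions = []
    start = None
    for i in range(min(len(old), len(new))):
        if old[i] != new[i]:
            if start is None:
                start = i          # a new differing region begins
        elif start is not None:
            regions.append((start, i))   # region ended at offset i
            start = None
    if start is not None:
        regions.append((start, min(len(old), len(new))))
    return regions

old_build = b"\x55\x89\xe5\x83\xec\x10\xc3"   # hypothetical old binary
new_build = b"\x55\x89\xe5\x83\xec\x20\xc3"   # one patched byte
print(diff_regions(old_build, new_build))      # -> [(5, 6)]
```

An attacker running this kind of comparison against an unarmored patch learns exactly which code changed, and therefore where to look for the vulnerability the patch fixed.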

Basically, exploiting the vast unpatched masses is an easy game for attackers. Everyone has their own interests to protect, and they aren’t always the best interests of users.

Things Aren’t So Black and White

Vendors are motivated to protect their profits and their shareholders’ interests over everything else. Governments are motivated to value their own security interests over the individual rights of their citizens, let alone those of other nations. And for many information security players, it’s far more lucrative to sell incrementally improved treatments of a disease’s symptoms than it is to sell the cure.

Clearly, not all the players will act ethically, or capably. To top it all off, the original hacker rarely gets paid for his or her highly skilled application of a unique scientific discipline towards improving a vendor’s software and ultimately protecting users.

So who should you tell? The answer: nobody at all.

White hats are the hackers who decide to disclose: to the vendor or to the public. Yet the so-called whitehats of the world have been playing a role in distributing digital arms through their disclosures.

Researcher Dan Guido reverse-engineered all the major malware toolkits used for mass exploitation (such as Zeus, SpyEye, Clampi, and others). His findings about the sources of exploits, as reported through the Exploit Intelligence Project, are compelling:

  • None of the exploits used for mass exploitation were developed by malware authors.
  • Instead, all of the exploits came from “Advanced Persistent Threats” (an industry term for nation states) or from whitehat disclosures.
  • Whitehat disclosures accounted for 100 percent of the logic flaws used for exploitation.

Criminals actually “prefer whitehat code,” according to Guido, because it works far more reliably than code provided by underground sources. Many malware authors actually lack the sophistication to alter even existing exploits to increase their effectiveness.

 

Navigating the Gray

 

A few farsighted hackers of the EFnet-based computer underground saw this morally conflicted security quagmire coming 14 years ago. Uninterested in acquiring personal wealth, they gave birth to the computational ethics movement known as Anti Security or “antisec.”

Antisec hackers focused on exploit development as an intellectual, almost spiritual discipline. Antisec wasn’t — isn’t — a “group” so much as a philosophy with a single core position:

An exploit is a powerful weapon that should only be disclosed to an individual whom you know (through personal experience) will act in the interest of social justice.

After all, dropping an exploit to unethical entities makes you a party to their crimes: It’s no different than giving a rifle to a man you know is going to shoot someone.


Though the movement is over a decade old, the term “antisec” has recently come back into the news. But now, I believe that state-sanctioned criminal acts are being branded as antisec. For example: LulzSec’s Sabu was first arrested last year on June 7, and his criminal actions were labeled “antisec” on June 20, which means everything Sabu did under this banner was done with the full knowledge, and possibly the condonation, of the FBI. (This included the public disclosure of tables of authentication data that compromised the identities of possibly millions of private individuals.)

This version of antisec has nothing in common with the principles behind the antisec movement I’m talking about.

But the children entrapped into criminal activity — the hackers who made the morally bankrupt decision of selling exploits to governments — are beginning to publicly defend their egregious sins. This is where antisec provides a useful cultural framework, and guiding philosophy, for addressing the gray areas of hacking. For example, a core function of antisec was making it unfashionable for young hackers to cultivate a relationship with the military-industrial complex.


Clearly, software exploitation brings society human rights abuses and privacy violations. And clearly, we need to do something about it. Yet I don’t believe in legislative controls on the development and sale of exploits. Those who sell exploits should not be barred from their free trade — but they should be reviled.

In an age of rampant cyber espionage and crackdowns on dissidents, the only ethical place to take your zero-day is to someone who will use it in the interests of social justice. And that’s not the vendor, the governments, or the corporations — it’s the individuals.

In a few cases, that individual might be a journalist who can facilitate the public shaming of a web application operator. However, in many cases the harm of disclosure to the un-patched masses (and the loss of the exploit’s potential as a tool against oppressive governments) greatly outweighs any benefit that comes from shaming vendors. In these cases, the antisec philosophy shines as morally superior and you shouldn’t disclose to anyone.

So it’s time for antisec to come back into the public dialogue about the ethics of disclosing hacks. This is the only way we can arm the good guys — whoever you think they are — for a change.

 

Direct Link:  http://www.wired.com/opinion/2012/11/hacking-choice-and-disclosure/

 

 

Feb 04 2012
 

 

F.B.I. Admits Hacker Group’s Eavesdropping

The New York Times
By Scott Shane
February 3, 2012

 

WASHINGTON —

The international hacker group known as Anonymous turned the tables on the F.B.I. by listening in on a conference call last month between the bureau, Scotland Yard and other foreign police agencies about their joint investigation of the group and its allies.

Anonymous posted a 16-minute recording of the call on the Web on Friday and crowed about the episode via Twitter: “The FBI might be curious how we’re able to continuously read their internal comms for some time now.”

Hours later, the group took responsibility for hacking the Web site of a law firm that had represented Staff Sgt. Frank Wuterich, who was accused of leading a group of Marines responsible for killing 24 unarmed civilians in Haditha, Iraq, in 2005. The group said it would soon make public “mails, faxes, transcriptions” and other material related to the case, taken from the site of Puckett & Faraj, a Washington-area law firm. A voluminous 2.55-gigabyte file labeled as that material was later posted on The Pirate Bay, a site often used by hackers.

Somini Sengupta and Nicole Perlroth contributed reporting from San Francisco.

 

Direct Link:  http://www.nytimes.com/2012/02/04/us/fbi-admits-hacker-groups-eavesdropping.html?nl=todaysheadlines&emc=tha22

Nov 23 2011
 

14 Enterprise Security Tips From Anonymous Hacker
Former Anonymous member “SparkyBlaze” advises companies on how to avoid massive data breaches.
InformationWeek
By Mathew J. Schwartz
August 31, 2011

Want to avoid large-scale data breaches of the type served up by hacking group Anonymous, and its LulzSec and AntiSec offshoots? Start by paying attention to the security basics, including hiring good people and training employees to be security-savvy.

“Information security is a mess. … Companies don’t want to spend the time/money on computer security because they don’t think it matters,” said ex-Anonymous hacker “SparkyBlaze,” in an exclusive interview with Cisco’s Jason Lackey, published on Cisco’s website Tuesday.

Accordingly, what’s the best way for businesses to improve the effectiveness of their information security efforts? SparkyBlaze offered 14 tips, ranging from using “defense-in-depth” and “a strict information security policy” to regularly contracting with an outside firm to audit corporate security and hiring system administrators “who understand security.” Also encrypt data–“something like AES-256,” he said–and “keep an eye on what information you are letting out into the public domain.”

Other best practices: use an intrusion prevention system or intrusion detection system to detect unusual network activity. Employ “good physical security” too, he said, to ensure no one routes around your information security measures by simply walking through the front door. Finally, pay attention to employees’ security habits and keep them briefed on the threat of social engineering attacks, since all it takes is one person opening a malicious attachment to trigger a data breach of RSA-scale proportions.
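The “detect unusual network activity” advice can be illustrated with a minimal sketch. Real IDS/IPS products inspect traffic far more deeply; this toy version (log format, IP addresses, and threshold are all invented for illustration) just counts failed-login events per source and flags anything above a threshold:

```python
# Toy sketch of the idea behind intrusion detection: tally failed-login
# events per source IP in a log and flag sources that exceed a threshold.
# The log format here is hypothetical; production systems parse structured
# logs or raw traffic and apply far richer rules.
from collections import Counter

def flag_suspicious(log_lines, threshold=3):
    """Return the set of source IPs with more than `threshold` failed logins."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            ip = line.split()[-1]      # assumed format: "FAILED LOGIN from <ip>"
            failures[ip] += 1
    return {ip for ip, count in failures.items() if count > threshold}

log = [
    "FAILED LOGIN from 10.0.0.9",
    "FAILED LOGIN from 10.0.0.9",
    "FAILED LOGIN from 10.0.0.9",
    "FAILED LOGIN from 10.0.0.9",
    "OK LOGIN from 10.0.0.5",
]
print(flag_suspicious(log))  # -> {'10.0.0.9'}
```

Even a crude rule like this embodies the back-to-basics point SparkyBlaze is making: most of the breaches in question exploited ordinary lapses that routine monitoring would have surfaced.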

While SparkyBlaze’s back-to-basics guidance isn’t new, it bears repeating given the number of data breaches and releases executed by hacktivist groups in recent months. According to security experts, these attacks aren’t necessarily highly sophisticated, and most don’t make use of so-called advanced persistent threats. Rather, attackers often exploit common vulnerabilities or misconfigurations in Web applications, just as they’ve done for years.

SparkyBlaze defected from Anonymous earlier this month, saying via a Pastebin post that he was “fed up with Anon putting people’s data online and then claiming to be the big heroes.” As that suggests, there’s no clear and easy definition of what constitutes “hacktivism.” Even so, the “scope creep” in the type of data collected and released by Anonymous and its offshoots is evidently turning some people away from the collective.

“I love hacking and I believe in free speech and anti-censorship, so putting both together was easy for me. I feel that it is ok if you are attacking the governments. Getting files and giving them to WikiLeaks, that sort of thing, that does hurt governments,” said SparkyBlaze to Cisco’s Lackey.

But in his Pastebin post, SparkyBlaze said that AntiSec and LulzSec had increasingly been operating against the supposed mission statement of Anonymous, which was ostensibly formed to keep governments accountable. “AntiSec has released gig after gig of innocent people’s information. For what? What did they do? Does Anon have the right to remove the anonymity of innocent people? They are always talking about people’s right to remain anonymous so why are they removing that right?”

On a related note, the raison d’être of Anonymous–WikiLeaks–appears to have lately suffered its own data breach, or at least loss of data control. On Monday, German weekly news magazine Der Spiegel reported that a file posted by WikiLeaks supporters to the Internet included concealed, password-protected, and unexpurgated versions of the 251,000 U.S. State Department cables that WikiLeaks released–with many sources omitted–in November 2010.

Through a somewhat circuitous sequence of events, possibly involving personnel disagreements inside WikiLeaks, the existence of a 1.73-GB “cables.csv” file, which contains the uncensored cables and which is protected by a password, became publicly known. Furthermore, thanks to an “external contact” of WikiLeaks, according to Der Spiegel, the password was also publicly disclosed, enabling the file to be unlocked.

But in a statement on Twitter, WikiLeaks disputed responsibility for the leak: “There has been no ‘leak at WikiLeaks’. The issue relates to a mainstream media partner and a malicious individual.” WikiLeaks, however, didn’t name either.

Direct Link: http://www.informationweek.com/news/security/intrusion-prevention/231600561?itc=edit_in_body_cross