Main

Security Archives

October 30, 2006

Old hacker t-shirts.


[Photo: packetstorm-front.jpg, originally uploaded by Adam J. O'Donnell.]
Years of being associated with the hacker community have led me to accumulate a number of t-shirts. Elias Levy suggested that I should take pictures of them before they end up being turned into an art project.

Even more old hacker shirts.

Elias is also contributing pictures of his shirts to the OldHackerShirts tag. We must preserve these invaluable pieces of our shared heritage, even if they are sometimes horribly sweat-stained*.

* I can only attest to the state of my shirts.

November 16, 2006

Punditry, PatchGuard, and Diversity

For the past few weeks I, like many other members of the security community, have been thinking about the PatchGuard architecture that will be implemented in Vista. I resisted blogging (erg) about it because I don't want to sound like a pompous ass, but I might as well get my thoughts down on the subject rather than have them rattle around.

PatchGuard is essentially Microsoft's method for handling the volume of malware in the wild. Hooking kernel calls will become far more difficult, device drivers will have to be signed, and software that traditionally requires access beyond userland, like firewalls and AV tools, will have to go through APIs standardized out of Redmond.

Obviously, this move raised the hackles of the traditional consumer AV organizations. Any technological edge one vendor had over another that involved interfacing with the kernel, and possibly preventing more malicious software, has been eliminated. If one of the third party vendors requires an avenue into the kernel that is not provided, they have to make a formal request to Microsoft for the API feature and wait for a subsequent Service Pack to provide it.

Normalizing access to the kernel is a "good thing" from an architecture standpoint. Microsoft can't hope to manage security threats in Windows unless it reduces the attack surface, or the number of possible entry points that can be used by an attacker. Third party vendors, however, face compression of their margins as Microsoft enters the space and technological innovation in this critical area is standardized across the industry.

At face value, this leaves us with the consumer-grade security products industry on the ropes and a vastly more secure operating system, all because of interface standardization. An opposing view comes forth when we consider the issue of "software diversity". This discipline, which I spent a fair bit of time studying, asserts that populations of systems are more secure when they are "different", or do not share common faults. In non-infosec terms, this is equivalent to diversifying a financial portfolio to reduce the risk of loss associated with correlated securities. By standardizing all security software on essentially the same kernel interface, a new common fault, and a new target, is introduced. We won't know until Vista is widely deployed whether the drop in diversity incurred by standardizing security software will offset the gains made by PatchGuard.
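To make the portfolio analogy concrete, here is a minimal sketch in Python with invented numbers (nothing here comes from actual deployment data): it compares the expected fraction of hosts lost to a single exploited common fault in a monoculture against a population split across four independent implementations.

    # Minimal sketch, hypothetical numbers: expected fraction of hosts lost when
    # one randomly targeted software variant has its common fault exploited.

    def expected_loss(population_shares):
        """Each variant is equally likely to be the one targeted; only hosts
        running that variant share the exploited fault."""
        n = len(population_shares)
        return sum(population_shares) / n

    monoculture = [1.0]                      # every host shares one kernel interface
    diversified = [0.25, 0.25, 0.25, 0.25]   # four independent implementations

    print(expected_loss(monoculture))   # 1.0  -> one fault takes out everyone
    print(expected_loss(diversified))   # 0.25 -> one fault hits a quarter of hosts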

November 24, 2006

On Testing, Part 1.

Testing a security product can sometimes be a very hard job. I don't mean internal QA-type testing, where you look for logic or syntax flaws in a system; I am talking about validating that a technology is effective against a difficult-to-enumerate threat. If you are testing a flaw finder, you can create code with a specific number of flaws and then refine the code until all the flaws are detected. Likewise, if you are writing a vulnerability scanner to search for known holes (a la Nessus), you can construct a pool of example systems where the flaw can be found.

There are many situations where a known set of test vectors cannot be created, making the validation of a security technology somewhat hairy. What happens when the product you are testing is designed to catch threats that are rapidly evolving? Building corpora of known threats for testing against a live system is somewhat futile if the time between when the corpus is constructed and when it is used for testing is long enough for the threat to evolve significantly. Either the live system has to be tested against live data, or the system, which most likely has been fetching updates, has to be "rolled back" to the state it was in at the time each element in the test corpus was first detected.

Consider anti-spam systems, for example. Performing an accurate test of our software has been one of the difficulties my organization, Cloudmark, has had with potential customers. One thing we stress, over and over, is that the test environment has to be as close to the production environment as possible, especially from a temporal standpoint. Just as users don't expect their mail to be routinely delayed by 6 hours before being delivered, evaluators shouldn't run a 6 hour-old spam corpus through the system to evaluate its efficacy as a filter. As the time between when a message is received at a mail gateway and when it is scanned by the filter increases, the measured accuracy of the filter artificially approaches perfection, thus invalidating the test.
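To illustrate why delay flatters the filter, here is a toy simulation with entirely assumed parameters (a thousand spam campaigns, a two-hour mean time-to-signature; none of this is measured Cloudmark data): the older the corpus is when the test is run, the more of it the continuously updated filter has already learned.

    import random

    # Toy simulation, assumed parameters: each campaign in the corpus is covered
    # by a filter update some time after it is first seen in the wild.
    random.seed(1)
    CAMPAIGNS = 1000
    time_to_signature = [random.expovariate(1 / 2.0) for _ in range(CAMPAIGNS)]  # hours

    def measured_catch_rate(corpus_age_hours):
        caught = sum(1 for t in time_to_signature if t <= corpus_age_hours)
        return caught / CAMPAIGNS

    for age in (0.1, 1, 6, 24):
        print(f"corpus age {age:>4} h -> measured catch rate {measured_catch_rate(age):.0%}")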

The "accuracy drift" of the anti-spam system over the course of 6 hours would be insignificant if it wasn't for the fact that spam evolves so damned fast. If spam didn't evolve, then Bayesian filters and sender blacklists would have been the end-all be-all solution for the problem. The past year has seen more and more customers realize, sometimes before we even talk to them at length about testing, that a live data stream is essential for evaluating a new anti-spam solution. I suspect this is because they found their previous methodology lacking when it showed that the competitor's product they bought performed more poorly than predicted by the test.

I started out this post by saying I would discuss security products, and so far I have only mentioned anti-spam systems. The issues with testing became apparent in this area first because of the number of eyes watching the performance of anti-spam systems, i.e. every e-mail user on the planet. In a later post, I will discuss why this also matters for both other specific security systems and the security space in general. For now, I am heading off to see a good friend before he leaves on an extended trip to the .eu.

November 26, 2006

On Testing, Part 2.

I began discussing the difficulty of evaluating security products in a previous post. Essentially, the issues revolve around generating test vectors that are representative of the current threat state. If the system is attempting to counter a rapidly evolving security threat, then the time between when the test vector is generated and when the test is performed becomes critical to the fidelity of the test. For anti-spam systems, this delay determines how well the test quantifies the accuracy of the solution once it is in production; spam evolves so fast that in a matter of minutes test vectors are no longer representative of the current spam state.

What about other filtration methods? In the past, Anti-Virus systems had to contend with several hundred new viruses a year. A set of viruses could easily be created that would be fairly representative of what a typical user would face for many days or weeks, as long as the rates of emergence and propagation of new viruses were "low enough". This assumption, which no longer holds, worked very well when viruses were created by amateurs without a motive other than fame. Contemporary viruses are not written by kids screwing around, but by individuals attempting to build large networks of compromised home machines with the intention of leasing them out for profit. This profit motive drives a far higher rate of virus and malware production than previously seen, as exemplified by the volume of Stration/Warezov variants, which have been giving many AV companies fits as they attempt to prevent the program from propagating all over the place. By testing against even a slightly stale corpus, AV filter designers don't test against new variants, allowing them to claim far higher accuracy numbers than their products actually provide.

What's the big deal if people don't correctly perform testing? Well, engineers typically design and build systems to meet a specification, and they place their system under test to verify that the spec is being met. If their testing methodology is flawed, then their design is flawed. Eventually, these flaws will come to light in the public eye, as consumers start to realize that a product which claims 100% accuracy has been allowing an awfully high number of viruses to get through.

I am by no means the first person to discuss testing of security products. AV accuracy received quite a bit of attention when Consumer Reports attempted to test AV systems by using newly created viruses rather than the standard corpus. While their attempt at devising a new testing methodology was commendable, it is still not representative of how threats appear on the Internet. Using new, non-propagating viruses to test an AV system begs comparison to the proverbial tree that falls in a forest with no one around to hear it. Additionally, it isn't the incremental changes in viruses that are difficult to catch; it is the radical evolutions in viruses, as well as the time required for the AV vendors to react, that we have to be concerned about. These are things that can't be modeled via corpus testing, only via extended testing on live traffic.

We should be asking why people don't test more frequently on live data as opposed to corpus testing. I suspect it is because of two reasons: labor and repeatability. With corpus testing, you hand-verify each element in the corpus as a virus once, and that cost is amortized over every test you conduct using the corpus. This isn't an option with live testing, as every message that is either blocked or passed by the filter has to be hand-examined. There is also the issue of testing repeatability, where re-verification of previous results becomes difficult as the live feed evolves. Just because something is hard doesn't mean it shouldn't be done, however.

While systems are under live testing, the content they are filtering is being actively mutated to evade the system under test, essentially creating a multi-player noncooperative game with a limited number of participants. I will continue this discussion by examining the ramifications caused by this game in my next post.

November 30, 2006

On Testing, Part 3.

I have been commenting on the testing of security software, specifically anti-spam and anti-virus products. The main point I made in both of those posts was that testing has to be on live data feeds, regardless of how difficult the task, because the threats evolve at such a high rate that corpus-based testing quickly becomes stale and does not represent the true state of incoming traffic.

In situations where there are a limited number of security vendors and adversaries, even live testing becomes extremely difficult. Let's consider an extreme case, where there is only one security vendor and multiple adversaries. Every single system is identical, running up-to-date anti-virus packages. (Yes, I fully realize this is a completely unrealistic example, but bear with me.) From the standpoint of the testing and user community, the accuracy of the system is perfect; no viruses are seen by the system, as they don't even have an opportunity to propagate. At the same time, virus writers realize there is a huge, untapped market of machines just waiting to be compromised if they could only gain a foothold. These guys sit around and hack code until a vulnerability is found in the AV system, and upon finding one, they release a virus that exploits it in the wild.

Before the virus is released, the accuracy of the system is:


  1. 100%: it catches all known viruses.
  2. 0-100%: there is no way to test it.

After the virus is released, havoc breaks out, aircraft fall out of the sky, and dogs and cats start living together. 5% of all computers worldwide are infected before the vendor releases a patch. If the vendor had been able to move faster, the number of compromised systems would have been only 1%; left to its own devices, the virus would have compromised every system connected to the net. In this situation, the accuracy of the system is:


  1. (1 - 1/(# of old viruses))*100%: only one virus couldn't be stopped.
  2. 0%: no viruses were in circulation at the time except for the one that caused mass havoc.
  3. (1 - (# of compromised systems)/(# of total systems))*100%: one minus the expected fraction of systems compromised at the end of the virus' run.

The third of these three accuracy measures seems the most appropriate, and the most flexible given a variety of network conditions, economic conditions, and adversary styles. The measure, which is effectively the expectation of exploitation for a given host, is what is used today by anti-spam system evaluators. It is a slightly more sophisticated way of saying "what is the probability that a piece of spam will get through."
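For concreteness, here is a quick sketch of all three figures using made-up numbers for the thought experiment above (ten thousand previously known viruses, a million hosts, five percent infected):

    # Sketch of the three candidate accuracy figures; all inputs are hypothetical.
    old_viruses = 10_000         # previously known and blocked viruses
    total_systems = 1_000_000    # hosts running the single AV product
    compromised = 50_000         # 5% infected before the patch shipped

    metric_1 = (1 - 1 / old_viruses) * 100              # "only one virus got through"
    metric_2 = 0.0                                      # only the new virus circulated
    metric_3 = (1 - compromised / total_systems) * 100  # chance a given host survived

    print(f"{metric_1:.2f}% {metric_2:.2f}% {metric_3:.2f}%")   # 99.99% 0.00% 95.00%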

From a general security standpoint, however, it captures a difficult and often ignored parameter critical to the accuracy of a security product: response time. If the window of vulnerability between when the virus first appears and when signatures are issued is shrunk, the accuracy expressed by this metric improves. In fact, Zero-Hour Anti-Virus has become an emergent cottage industry in the security space. Ferris covered it back in 2004, and I talked about it at Virus Bulletin 2006.

Many of these zero-hour technologies are being used primarily in the message stream, but this probably won't last for long. I suspect the technology popped up here first because of the sheer volume of e-mail based viruses as well as the ease with which service providers, who ultimately end up spending money on these technologies, can quantify their cost. Mail providers store all mail and then forward it along, giving them an opportunity to actually examine the number of viruses, unlike web-based trojans, which just fly through on port 80. As the industry gains experience with automated means of identifying and distributing fingerprints or signatures for newly identified malware, we will see it spring up in other places as well.

December 7, 2006

Maneuver Warfare and Infosec Products

The modern practice of network security is essentially an exercise in information warfare. The two competing parties, namely the network operators and the botnet managers, are continually evolving to combat the other's tactics, each driven by economic motives. The attackers are attempting to create a distributed services platform out of the defender's systems for delivering... rich media content in the form of image spam, phishing landing pages, and DDoS packets, while the defenders are trying to keep their employer's underlying infrastructure in one piece. This is a very old analogy, one exploited heavily by individuals looking to grab funding earmarked for national defense or attempting to frighten groups with the specter of an "Electronic Pearl Harbor". The use of these analogies by demagogues does not make them any less apropos; there are many interesting conclusions that can be drawn from the application of modern military theory to the information security space.

Let's consider the somewhat popular work of John Boyd and the tenets of maneuver warfare. Maneuver warfare emphasizes rapid movement, distributed decision making, and dynamism of tactical objectives rather than the costly brute strength of an attrition campaign. This method of warfare has likely been around since the dawn of interstate combat, with Hannibal's tactics at Cannae serving as a brilliant example. In a briefing entitled Patterns of Conflict, Boyd formalized these ideas into what is now referred to as the OODA Loop. This is an embarrassingly brief description, but Boyd viewed warfare as a continuous cycle of Observation, Orientation, Decision, and Action, and held that those who succeed in warfare are those who can correctly execute the loop in the shortest period of time. Another way of viewing it: whoever can predict their opponent's next move and act/react before the other party can assess their situation will win the conflict. This can only be achieved by employing a fast operational tempo, rapidly altering tactics, obscuring your decision state from the enemy, and reducing infrastructure-based friction, such as communication cost.

The most effective infosec schemes on the market today rely upon principles that can be viewed as derived from these lessons. The effective DDoS and Anti-Virus systems available today and under development seem to work by employing:

  • Large sensor networks to reduce observation time.
  • Automated analysis schemes with either zero in-loop human interaction or a slice of massive amounts of distributed human interaction to minimize feedback time.
  • Rapid decision deployment to clients.
  • Massive monitoring to detect and correct poor decisions.
  • A large variety of detection and response tactics.
  • Ability to quickly roll out new tactics in light of effective evasion methods.

As the tempo of financially-driven security events, i.e. spyware and its ilk, increases, any security system that is solely dependent on human-scale timelines to make decisions will be labeled ineffective. Solutions dependent upon individual decision makers will have to complement their scheme with rapid reaction mechanisms or face continually decreasing accuracy figures.

December 19, 2006

Everyone point and laugh... (and why I should be faster with this site)

... at Checkpoint for buying NFR. Matasano and Tom beat me to the laugh, however. NFR is a lousy consolation prize for Sourcefire, which they attempted to buy last year. Does anyone even run NFR anymore?

December 22, 2006

NFR's Market Penetration.

I don't have any decent figures on NFR's market penetration, but I do know that the Sourcefire/Checkpoint deal was nixed because of security concerns. While the deal was canned shortly after the whole Dubai ports debacle, it was likely not due to xenophobia. The specter of a foreign government having ownership of a network monitoring technology with wide penetration in the defense sector was clearly unacceptable. I guess the implicit message here is that not many people use NFR anymore.

December 25, 2006

Ada's WaveBubble


[Photo: wavebubble, originally uploaded by ladyada.]
As a Christmas gift to the world, Lady Ada has posted the design for a microprocessor-controlled RF jammer called WaveBubble. It covers most of the important consumer bands pretty effectively, including WaveLAN and GPS. She did an excellent job with the design, especially given the relative lack of equipment available in her lab.

I may have provided some assistance with the specs and layout for the RF chain, which is something I haven't spent much time looking at since I worked here.

January 4, 2007

Cisco grabs IronPort for $830m.


[Photo: transbay motorway, originally uploaded by Richard Soderberg.]
Cisco picked up IronPort today for a nice chunk of change. This helps to plug the messaging content hole in Cisco's "security in the network" offering. Given that IronPort's revenues were probably around $100m, it keeps the multiple for acquisitions in the security and anti-spam product space up at around 8 to 10. This is a good thing.

January 12, 2007

New Blog: Matt Blaze's Exhaustive Search

UPenn Professor Matt Blaze has launched a blog on the first of the year. His cross-disciplinary writings on human-scale security are always worth reading, and it is probably worthwhile throwing his site into your RSS list.

Making money on stock spam.


[Photo: Spam Stock Symbols, originally uploaded by pjaol.]
There have been many blog posts that basically say making money on stock spam is impossible, but I have to disagree. Sure, if you were to go long on the securities, you would lose a fortune. Over the short term, however, the spammers appear to be making a mint. I wrote a short article for an upcoming issue of IEEE Security and Privacy that essentially says that there is so much money being made on thinly traded equities by spammers that it is driving innovation in spam generation. I'll throw up a post once the magazine hits the presses.

January 13, 2007

More discussion on IronPort acquisition.


[Photo: Quality of Life, originally uploaded by Telstar Logistics.]
This is a followup to a post I made earlier. Multiple analysts have chimed in on the IronPort acquisition, basically saying that all the old guard security companies are trying to grab a piece of the anti-spam pie.

January 19, 2007

90 years young.


[Photo: Ztel1b, originally uploaded by Adam J. O'Donnell.]
Today is the 90th anniversary of the transmission of the Zimmermann note.

January 31, 2007

Software diversity discussion over at nCircle


[Photo: windows of our minds, originally uploaded by xem39.]
An interesting write-up on software diversity popped up on the nCircle blog. In the past, this sort of crazy talk in industry caused authors to lose their jobs. Leveraging diversity to increase the attack tolerance of a network received attention in places relatively insulated from industry politics; I did some work for my Ph.D. showing that the allocation of diversity could be expressed as a graph theory problem, and that diversity is an effective method for slowing a virus. Tim Keanini isn't trying to point fingers but is attempting to describe economically efficient means by which diversity can be realized in today's data centers.

February 17, 2007

Small steps and shiny buttons.


[Photo: MEAT Buttons, originally uploaded by Adam J. O'Donnell.]
It's been a few weeks since I posted anything substantial here. I took on a new role at work that has cut into the time I spend abstracting random security problems into bigger conceptual issues. That hasn't prevented me from writing, however. My article on stock spam, referenced here, made it into S&P this month. This is the first magazine article that I have written professionally, and while this may sound extremely dispassionate, the experience was very enjoyable. I like to write, and I didn't feel the pressures of proof and novelty that came with many of the academic publications I worked on in the past.

I started this blog as a scratch pad that I could use to transcribe random thoughts on techniques and trends in the industry, and then bake those into full-blown articles for later consumption. The three testing articles have been repurposed for a work that will be published in Virus Bulletin shortly. The confidence that I gained by first jotting down random thoughts on the topic, sharing them with my community, then assembling them into a full-blown article was invaluable, and a great way, for me at least, to build up an idea pipeline. Making sure I keep feeding the pipeline and posting blog entries will continually be a challenge, but at least I can establish milestones that are more finely grained than "concept" and "published work".

P.S. The picture is from the vendor table at the MEAT's 5 year anniversary party, held at DNA Lounge. I will post a few more pictures when I pull them off the camera or when they pop up on the DNA page.

March 2, 2007

Stock Spam, AV Testing articles now available.


[Photo: Bird Flu Virus H5N1, originally uploaded by Worker101.]
I put up the Anti-Virus Testing and Stock Spam articles for public consumption.

May 18, 2007

What the hell have I been doing? Part 2: Data Representation

Like it or not, any analysis work that you do is pretty much worthless unless you are able to present the data effectively. Effective data presentation becomes more difficult when new data has to be consumed on a regular basis. Hand-massaging the information is forced to take a back seat to automation, otherwise you (the analyst) will spend your entire life recreating the same report. The data also has to be extremely accessible, otherwise your customers will not even bother looking at the information.

For example, let's consider the story of some data analyst named... Rudiger. Rudiger has a large volume of numbers about... virus outbreaks locked up in SQL somewhere. Using the tried and true methods acquired as a grad student, Rudiger glues some Perl scripts together, followed by smoothing and other cherry-picking using Matlab or, god forbid, Excel. As people ask for the data on a more frequent basis, our intrepid hero tries to come up with more automation to make his report generation easier, with graphs e-mailed to him and other concerned parties on a regular basis. He quickly discovers that no one is reading his data-laden e-mails anymore, leaving poor Rudiger to announce conclusions that others could draw by simply looking at the graph provided for them.

What Rudiger doesn't quite realize is that people need to feel like they own the data and can manipulate it so that it tells them a story, not just the story that Rudiger's graph wants to tell them. In much the same way that many "technical" (absurdity!) stock analysts will generate multiple forms of charts rather than looking at the standard data provided by financial news sites, data consumers want the ability to feel they can draw their own conclusions and interact in the process rather than be shown some static information. There are several interweb startups based upon this very concept.

For those of you who haven't figured this out by now, I'm Rudiger. Rather than send out static graph after static graph that no one looks at, I learned a web language and threw together an internal website that allows people of multiple technical levels to explore information about virus outbreaks. While it is nowhere near as sophisticated as ATLAS, the service tries to emulate Flickr's content and tag navigation structure, where viruses are the content and tags are what we know about the specific threat. The architecture is easy to use and provides a low barrier to entry, as everyone knows how to use a web page. Also, the "friction" associated with the data is low, as anyone who is really interested can subscribe to an RSS feed which goes right to a web page on the virus; two mouse clicks versus pulling data from SQL.
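Under the hood, that navigation scheme is just an inverted index from tags to viruses. A minimal sketch, with placeholder virus names and tags rather than anything from the real system:

    # Minimal sketch of Flickr-style navigation: viruses are the content, tags are
    # what we know about each threat. Names and tags here are placeholders.
    from collections import defaultdict

    virus_tags = {
        "Stration.A": {"mass-mailer", "botnet", "zero-hour"},
        "Stration.B": {"mass-mailer", "botnet"},
        "SomeTrojan": {"trojan", "zero-hour"},
    }

    # Invert content -> tags into tag -> content so a reader can pivot either way.
    tag_index = defaultdict(set)
    for virus, tags in virus_tags.items():
        for tag in tags:
            tag_index[tag].add(virus)

    print(sorted(tag_index["zero-hour"]))   # ['SomeTrojan', 'Stration.A']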

I am generally more accustomed to writing English or algorithms rather than web code. Frankly, I hadn't produced a web app since PHP 3.x was the hotness. After consulting with some of my coworkers and my old friend Jacqui Maher, I decided to throw the site together using Ruby on Rails. With Jacqui on IM and a copy of Ruby on Rails: Up and Running in hand, I went from a cold start to a functioning prototype in about 2 weeks. I was pretty surprised with how far web development has come since 2000, as ad-hoc methods for presenting data from a table have been replaced with formalized architectures integrated deeply into the popular coding frameworks.

Moral(s) of the story?: Reduce the cost and barriers to analyzing your own data. Put your data in the hands of the consumer in a digestible, navigable form. Remove yourself from the loop. Don't worry, you will still be valuable even when you aren't the go-to guy for generating graphs, as there is plenty of work to go around right now.

[Sidenote: The sad thing is I learned this lesson about reducing the burden of analyzing regularly generated data once before. The entire motivation behind a project I consulted on many moons ago, namely Sourcefire's RNA Visualization Module, was to provide attack analysts with an easy-to-absorb presentation of underlying complex data.]

May 31, 2007

Botnets and Emissions Trading

Many of the customers I engage with at work have been struggling with how to identify and handle botnet drones. Now, I am going to assume that everyone who either reads or stumbles upon this page has some understanding of botnets and their impact. Over the past several weeks, Estonians have become very familiar with the effects botnet-enabled DDoS attacks can have on everyday life. These networks are also the prime source of spam. There is common agreement that yes, botnets are a problem and yes, they need to go away. Who should actually bear the burden of de-fanging these networks?

Disarming the actors behind these attacks involves dismantling the botnets themselves, which is itself an increasingly challenging problem. Older-style bots used IRC servers as a central command-and-control mechanism, making them vulnerable to decapitation attacks by security personnel. Newer systems use P2P-style C&C protocols adapted from guerilla file-sharing systems that are notoriously difficult to control. Other than traffic and content mitigation, which several organizations have proven to be extremely effective, the solution is to take down botnets node-by-node.

So who should eliminate botnets? End users don't feel responsible or even recognize that there is a problem; all they know is that they are using their computer and then someone comes along and tells them they are infected with a virus. Service providers (telephone and cable companies) with infected customers aren't held responsible, and the cost they pay, through outbound bandwidth charges and outbound MTA capacity, is relatively minor compared to what the targets of the attacks pay. Operating system vendors aren't responsible, because once they sell the product to the customer, they are no longer liable for if, when, or how the customer becomes compromised. Ultimately, the people who bear the largest cost are the ones who are least capable of remediating the source of the spam, namely the service providers of the attack recipients. These actors have to pay for bandwidth for inbound attacks, storage for spam, and support calls from customers asking why their computer is slow when it is, in reality, a botted system.

In many ways, we have a classic Tragedy of the Commons-type issue. The communal grazing areas, or shared resources that were critical for the working class' ability to make a living, have been replaced by today's fiber lines. Currently the "tragedy" is solved by bandwidth providers through embargoes of one another: if one service provider gets out of line, the others will block all mail originating from the offender. Recently I have been pondering another possible solution, one based upon financial mechanisms.

While it would likely be impossible to implement, a Cap-and-Trade-style trading system seems extremely appropriate. Similar to carbon trading schemes, a cap-and-trade system for malicious content established between providers would create economic incentives to correctly monitor and reduce the volume of unwanted content that flows between their networks. The system would involve a cap on how much malicious content the parties deem acceptable to send to one another. Providers who are able to better control the amount of malicious traffic, through expenditures on personnel and products, can recoup those costs through the sale of credits associated with the difference between their level of outbound malicious content and the agreed-upon cap. Providers who don't police their traffic are forced to buy credits from those who do, which in turn puts a price on their lack of responsibility.
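A back-of-the-envelope sketch of how the accounting might work, with an entirely invented cap, credit price, and pair of providers:

    # Cap-and-trade sketch; every number and provider name below is made up.
    CAP = 1_000_000        # allowed malicious messages per provider per month
    CREDIT_PRICE = 0.001   # assumed dollars per message of headroom

    measured_outbound_malice = {
        "CleanISP": 200_000,      # invests in monitoring and filtering
        "SloppyISP": 2_500_000,   # does not police its customers
    }

    for provider, volume in measured_outbound_malice.items():
        balance = (CAP - volume) * CREDIT_PRICE
        side = "sells credits worth" if balance >= 0 else "must buy credits worth"
        print(f"{provider} {side} ${abs(balance):,.2f}")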

Eventually, the provider may choose to expose this cost of security to the end user, with rebates or special offers extended to users who keep their systems clean and never cause a problem. The end users in turn are incented to keep their machines clean, the Internet would return to the pre-fall-from-eden utopia that it once was, and the world would be a happy place once again.*

* Having providers buy into this concept, building a monitoring infrastructure, setting prices, assembling a market, and maintaining a clearinghouse for credit trades would be pretty damned hard. While I don't think this is a practical idea, it does make for a fun thought experiment.

June 1, 2007

Who cares if a spammer is arrested?

I was quoted in the USA Today regarding the spammer who was recently charged with multiple counts of being a general pain in the rear and being an accessory to being a pain in the rear. I talked to several reporters about this yesterday, and here are some of my soundbites which may or may not have made it into print:

  • Spam, like most forms of organized crime, is too profitable to end by arresting single individuals.
  • This arrest solves a spam problem from four years ago, and not today's issues.
  • The spammers are manipulating equities markets and compromising financial accounts. Anti-spam regulations are the least of their concern.

June 9, 2007

Testing shows AV sucks.

The sorry state of industry-accepted anti-virus tests is gathering some attention in the technical hobbyist press. New, independent testing organizations are getting into the act as well. Eventually someone in the mainstream press will pick up on the topic, but I doubt that it will lead consumers to purchase AV products that actually work. Viruses and trojans have become far better at hiding their presence from the end user; unlike 10 years ago, we rarely hear about systems being wiped out by a virus. Most infected consumers don't realize it, and may not feel they need to remediate the issue. After all, if their mp3s aren't being deleted, who cares? Infected systems affect the people around the user more than they do the user him/herself. The spam they send out goes to other people, and not to their own inbox.

Sidenote: I am very surprised at the low numbers quoted by av-comparatives for Kaspersky's scanner.

June 10, 2007

Quote on Defense Technology

“No single defensive technology is forever. If they were, we would all be living in fortified castles with moats.”

-- Michael Barrett, CISO @ Paypal; via an article by Brad Stone on CAPTCHAs.

June 13, 2007

Second Life explains Defense in Depth

This has to be the greatest thing I have ever seen on youtube. Link found on Schneier's Blog.

June 15, 2007

DEFCON and the TCP/IP Drinking Game

I will be standing in for Mudge for this year's TCP/IP Drinking Game. Drop me a line if you have suggestions for panelists.

June 19, 2007

Baysec 2 - The Baysec-ening Tomorrow, June 20th!

For those of you who live in the Bay Area, tomorrow is the second monthly meetup of security professionals known as BaySec. This month's will be held at 21st Amendment, located on 2nd between Brannan and Bryant in San Francisco. Much thanks go to Nate Lawson for promoting this event!

August 9, 2007

The Security Innovator's Dilemma (Part 1)

The most common themes I heard during this year's BlackHat conference were driven by the implications of the underground economy. Monetization of the attack space has dramatically changed how the information security community handles emerging threats. Practitioners no longer talk about 100% effectiveness and other meaningless metrics and instead focus on minimization of harm. I have been toeing this line myself for some time, and I would like to share with you the general framework in which I think about security in this current context.

Five years ago or so, Dan Geer and several others put forth the concept that the root cause of infosec issues was the monoculture of Microsoft systems. No longer a controversial idea in the community, the statement caused a gigantic uproar at the time, leading to Dr. Geer's departure from @Stake. The paper was a milestone for those working in the security economics field, as its basic postulate linked the creation of individual exploits to the value that can be derived from an exploit. In other words, people exploited Windows because the work would create far more value for its author, as it could be applied to the vast majority of computer systems in the world.

We can formalize this concept as a zero-sum non-cooperative game. Consider two players, the Attacker (A) and Defender (D). A and D can either attack/defend one of two classes of system, denoted 1 and 2. Systems 1 and 2 cover assets valued at v1 and v2. A given system may be the entire class of Microsoft OS's, a class of messaging technologies (e-mail vs. SMS), processor architectures, Anti-Virus products, etc. The value associated with a class of systems is what the attacker assumes the monetization rate to be for that class of products: a block of ATM machines versus several hundred spam generating home computers. I digress.

During each iteration of the game, the defender can invest his energy into defending either of the two systems. If the defender chooses the same system n as the attacker, then he has a probability p of success, giving the attacker an expected payoff of (1-p)vn. If the attacker and defender choose different systems, then the payoff to the attacker is vn, as the system is undefended.

One of the implications of the model is that there are situations where it is never the best decision to attack the system that covers the least assets, even if it is undefended. If we consider two system classes n and m, if the value of attacking the defended system is greater than that of attacking an undefended system ((1-p)vn > vm), then the strategy of attacking vn strictly dominates the strategy of attacking vm. In other words, a rational attacker will ignore an unprotected system if he or she can profit by attacking a far more valuable but defended system.
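A small sketch of the game with hypothetical asset values and defense probability makes the dominance condition easy to check:

    # Two-system attacker/defender game; v and p are assumed, not measured.
    v = {1: 100.0, 2: 5.0}   # assets covered by system classes 1 and 2
    p = 0.75                 # chance the defense succeeds when both pick the same system

    def attacker_payoff(attack_sys, defend_sys):
        if attack_sys == defend_sys:
            return (1 - p) * v[attack_sys]   # attack pays off only if the defense fails
        return v[attack_sys]                 # an undefended system yields full value

    # Attacking system 1 strictly dominates attacking system 2 when even its
    # worst case (defended) beats system 2's best case (undefended).
    print(attacker_payoff(1, 1), attacker_payoff(2, 1))   # 25.0 5.0
    print((1 - p) * v[1] > v[2])                          # True: ignore the small system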

This appears to be a validation of the concept of software diversity, but I consider this model to be interesting for a very different reason: it effectively segments the market for both attacks and defenses based upon what I call quantifiable rationality, or whether or not someone can put a dollar value on the work that is being done. Attackers and defenders who choose to go after systems which are either minimally valued or difficult to value are doing so for publicity, which is notoriously difficult to economically quantify, or in the expectation that the future will shift the relative valuations of the protected systems. Likewise, attackers and defenders who focus on the highest-valued systems are the same individuals who are able to truly quantify their market. Consider the iPhone browser vulnerabilities and SMS spam, Hypervisor Rootkits and Detection and actual working AV Technologies, and Network-layer Firewalls and Application Layer Protections: each of these pairings consists of a concept that dominates either the mind-share or the security market, while the problems that cause true financial pain remain unaddressed.

As we will see in a later post, the two halves of the security market act in very different ways, necessitating different technologies and business practices.

August 14, 2007

Next Baysec: Next Monday! (8/20)

Nate Lawson has posted the next Baysec date. Hint: it's Monday the 20th.

August 26, 2007

Monocultures Abound

Everyone probably saw the two items I'm mentioning, but if Windows Update == a DDoS against Skype, then you've just proven the monoculture conjecture. Similarly, if you can slow down the entire Internet with a 9mm, then you've just proven the fragility conjecture.

http://heartbeat.skype.com/2007/08/what_happened_on_august_16.html
http://it.slashdot.org/article.pl?sid=07/08/21/1531216

-- Dan Geer on the DailyDave mailing list. Via Ralph Logan.

September 8, 2007

Renaming the Gartner Magic Quadrant

If you are involved in enterprise software development, you have heard of the Gartner Magic Quadrant. The vendor is charted on "completeness of vision", or how well they understand where the space is going, and "ability to execute", or how well they can complete that vision. Organizations that have both are in the "Magic Quadrant."

Enterprise software vendors have MarCom groups which are almost solely tasked with putting their company in the Magic Quadrant. This usually has nothing to do with the quality of the solution or the vision of the organization, but with the MarCom group's ability to sell people on the solution or on the organization's vision. If the group successfully puts the organization in the Magic Quadrant, they all earn their bonuses for the year.

As a result, I hereby re-christen the Magic Quadrant the Gartner G-Spot.

Keep that in mind every time people panic over Gartner ratings, and you will feel a little better.

November 29, 2007

Security Implications of "Two Chicks, One Cup": not a joke.

I can't believe I am writing this... but...

Several weeks ago, a video entitled "Two Chicks, One Cup"/"Two Girls, One Cup" was posted on the Internet. To describe it mercifully briefly, it is a segment of film clipped from a coprophilia porn. The emergence of this video has been regarded as Web 2.0's equivalent of goatse and tubgirl, single images of... let's say sexual activity that is several sigmas away from the norm. Exposure to this remarkable product of human ingenuity has increased dramatically since BoingBoing, one of the most popular blogs on the net, started referring to it recently. Reaction videos of people watching the video for the first time are more popular than the video itself. This meme seems to have a good bit of life in it, as semi-professional spoofs (sfw) and follow-up videos, such as "Two Chicks, One Finger" (NO WAY SFW), have started appearing on the net.

Websites have started sprouting up that claim to host the video, but actually host malware. If you attempt to search for either "Two Chicks(Girls), One Cup" or "Two Chicks(Girls), One Finger", you may end up at malware sites like these. This is similar to the codec attacks recently described by Sunbelt Software. I am concerned that... I can't believe I am writing this... security vendors will be loath to post warnings regarding malicious versions of the content because the content itself is so wretched. Users who become infected won't want to admit they were infected with malware while watching two women smear each other with shit and/or vomit.

You have been forewarned. Now I have to go bleach my eyes.

December 5, 2007

Next BaySec Tomorrow, 6 Dec 07

Yep, BaySec is tomorrow night, December 6th at 7pm. See you at O'Neils.

December 8, 2007

Symantec marketing is getting better.

This Norton Fighter shtick is far better than the Symantec Revolution jingle.

December 22, 2007

Army Fights the Monoculture

A colleague from graduate school, Nick Kirsch, sent me this article that discusses the military's plan to incorporate Macs into their IT infrastructure. He said it was evidence that someone read my thesis, but I suspect Dan Geer's work had far more influence.

January 14, 2008

BaySec: Thursday, 1/17 @ Pete's Tavern

BaySec is going to be at Pete's Tavern this month, just down the block from O'Neill's. NYSEC (NYC) and BeanSec (Boston) are on Tuesday and Wednesday, respectively. I dare you to hit all three.

January 19, 2008

Yes, Virginia, there is a Santa Claus SCADA Attack

Long-predicted attacks against infrastructure control systems (SCADA) have arrived, according to the CIA. Bejtlich doubts its authenticity, but I have every reason to believe it to be true for the following reasons:
  • Bellovin correctly pointed out that maintaining the air gap between critical networks and non-critical networks is nearly impossible, making the likelihood that at least a few critical networks are somehow connected to the public internet extremely high. Information behaves like heat, in that it leaks out unless tightly constrained, like hot coffee in a Dewar flask.
  • My old business partner Ralph Logan was quoted in the article. Given the work we did together and the work that he does now, I consider him to be an absolute authority on the topic.
  • The early monetization techniques employed by attackers whenever they discover a tool are usually extortion-related schemes. The first botnet business model was based upon DDoS extortion, where victims were taken off of the network if they didn't pay the attacker protection money. Here we have attackers demanding protection money in exchange for not taking down the power grid. Botnets evolved into spam and phishing engines. I am willing to bet that the next step in the racket will involve selling the attacks to nation states now that infrastructure attacks have been reduced to practice.

January 31, 2008

Computer security solved, let's all write Sex Advice.

Eric Rescorla guest-wrote this week's Savage Love column.

February 7, 2008

RIAA's Slippery Slope

Gizmodo reported today that the RIAA has been asking AV vendors to filter for pirated content. You are walking down a slippery slope if you conflate arbitrary content filtering with information security. Anyone who is savvy enough to bootleg media is also savvy enough to disable their AV filter, which would quickly cause the system to become compromised. Additionally, these users are not likely to detect any infection that does occur, leading to yet another system sending out spam and malware. In short, BAD IDEA.

I want names.

Holy Jesus, who is responsible for this:

Having CheckPoint sing the first few lines from My Way is hilarious.

Via the Hoff.

February 27, 2008

ISOI4

I will be at ISOI 4 presenting a completed and extended version of this discussion. Slides to follow...

February 29, 2008

I am so Web 2.0 I am Web 2.5

Keynote apparently allows you to send your slides straight to YouTube, so here are my slides for today. I also opened up a Twitter account after hearing about its use amongst the other the security bloggers.

Note: slides are back!

March 12, 2008

Wireless attack against a heart device? Duh.

So someone announced a wireless attack against an implantable cardiac device. While it does make for good press, I can see many valid arguments against the required remediation step, namely authentication of cardiac device programmers. Authentication of the programmers may impede their use in an emergency by an ambulance crew, for example. Additionally, key revocation would require surgery. This would be bad news. Long story short: interesting class of attacks, but don't freak out about it to your cardiologist; you could give yourself a heart attack that way.

As a side note, medical devices have a long history of spoofing attacks. I remember Joe Grand building a Palm Pilot program to control IV drug infusers maybe a decade ago.

March 14, 2008

Dan Geer's SOURCEBoston Keynote

This is the best security-related talk I have heard in many years. Read it.

Moving RSS to FeedBurner

I am moving my RSS Feed over to FeedBurner. Please tell me if it breaks for anybody.

RSS Feed Redirect: Complete!

Okay readers, if you are reading this via RSS, the cutover to the FeedBurner Feed should be complete.

March 15, 2008

SOURCE Boston 2008 Wrap-up

SOURCE Boston 2008 was a huge success. We could not have hoped for a better outcome from a first-year conference. The conference hit great niches, namely application security and the business of security, as evidenced by our attendees' responses.

Some important points:

  • Dan Geer's talk made my trip. In what was probably the most intellectually stimulating hour I have had in a long time, Dan examined the current and future state of network security leveraging lessons from evolutionary biology and economics. It is a must read.
  • The L0pht panel was hugely successful, and it was probably the first time I have seen a standing-room only crowd at the last talk of a conference. Here are some solid pictures of the event.
  • All the attendees had a blast, as evidenced by multiple Flickr photo pools.
  • Twitter was the communication mechanism for the conference. Jennifer Leggio herded the numerous security cats into using it, and it worked extremely well. She has been continuously updating a list of security twits, many of whom you may know, if you want to get into the game. Here is my feed.

March 17, 2008

Best meme to come from SOURCE Boston...

Certified Pre-0wned. Think malware-infected picture frames.

March 18, 2008

Macs and AV Software

Mogull published an article on TidBITS discussing the issues surrounding Mac AV. It is a solid read. I threw some quotes his way based on some of my recent game theory work.

USA Today, KOMO Interview...

I was interviewed in the USA Today this week, along with friends Rick Wesson, Jose Nazario, and a large group of security researchers who are all far more intelligent than myself. The article led to a radio interview for KOMO 1000 in Seattle. I slapped a photo onto the interview and voila, web 2.0 magic:

March 19, 2008

We will pay you to host malware.

Apparently InstallsCash's business model is to pay people to host malware. Fantastic. Thanks to a friend for the heads up.

Keynoting 2008 MIT Spam Conference

It appears that I will be giving the keynote of the 2008 MIT Spam Conference. Drop me a line if you will be in attendance.

March 22, 2008

Unusual blog spam vector exploited

Security Blog MCWResearch was hit by a large number of spammy posts over the past day. It turns out the blog allowed posting via e-mail, and this feature has since been disabled. I wouldn't be surprised if we see an enterprising spammer search for populations of e-mail-to-blog gateways. They can use their preexisting infrastructure to push spam in a new direction. Remediation for the population would be trivial, as e-mail-to-post functionality is not critical for the functioning of blogs.

Lesson learned: don't allow unauthenticated access to services unless you are required to do so (inbound MTAs, public web servers, etc).

March 27, 2008

Social Network Phishing

Phishing doesn't just happen against banks. It also hits social networks, including MySpace and Facebook. Phishing only occurs if the target can be monetized; in other words, the phishers have to make money. Early social networking phish were likely extensions of the ransomware methodology, where money would have to be exchanged for the account to be turned back over to the phished user. Nowadays these phished accounts are being used to send spam and phish to social network users, propagating the problem.

March 29, 2008

MIT Spam Conference 2008 Followup

Here are my slides from the spam conference keynote I mentioned earlier. These are a refinement of the ISOI slides I posted back in February.

It seems that SlideShare produced far nicer results with this type of content than YouTube.

April 1, 2008

Security Marketing: Hugs for Hackers

AVG's Hugs for Hackers is definitely less mean-spirited than Palo Alto's Security Idol.

Applied Security Visualization gets a bookcover

Raffael Marty's upcoming book, Applied Security Visualization, now has book cover art.

Judging from the cover art, I think the book has something to do with applied security visualization and dinosaurs with targeting reticles etched into their eyeballs.

April 2, 2008

CEAS CFP Extended

If you were planning on submitting a paper to CEAS, the Conference on E-Mail and Anti-Spam, you now have a few more days. Although it is not yet reflected on the website, the CFP has been extended to April 10th.

April 3, 2008

Biological Niches and Malware Prevalence

During a recent presentation I was asked a rather astute and interesting question. The audience member compared the information security world to the biological world, and wanted to know why, when parasites fill every biological niche in the ecosphere, the niche of Macs has not been infested with malware. I have now forgotten what I said in response, but I do remember thinking at the time my answer was bullshit.

The correct answer is as follows: the biological analogy frays at the edges when you consider monetized malware. Parasites inhabit every biological niche because their only goal is to propagate the species, not to be the biggest species out there. Malware writers' goal is to make the most money, and they will spend their energy creating attacks that allow them to do so. The motive of profit maximization causes them to abandon portions of the target space entirely. In terms of the biological argument, consider a parasite that is rewarded not for continuing its species, but for the number of hosts it infects. If the parasite had to make a split decision between producing offspring that can infect coelacanths or offspring that can infect beetles, which would be the better strategy?
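The expected-value arithmetic behind that question is trivial, which is the point. A toy sketch with invented population sizes and a common per-host success rate:

    # Toy framing of the niche question above; populations and rate are invented.
    hosts = {"coelacanths": 1_000, "beetles": 10_000_000_000}
    infection_rate = 0.01   # assume the same per-host success rate in either niche

    expected_infected = {species: n * infection_rate for species, n in hosts.items()}
    print(expected_infected)   # the beetle strategy wins by seven orders of magnitude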

April 5, 2008

BusinessWeek covers Apple Security

This article on the potential emergence of Macintosh malware appears with auspicious timing.

April 7, 2008

RSA hates the Irish

Because I have an apostrophe in my last name, I attempt a SQL injection attack every time I fill out a form. The RSA conference is aware of this, and requires everyone who has an apostrophe in their last name to stand in a separate line. Apparently they have not yet learned that it is possible to secure a webapp against the dreaded ' without blacklisting the content.
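For the record, the fix is ancient: bind the apostrophe-laden value as a query parameter instead of splicing it into the SQL string. A minimal sketch using Python's standard-library sqlite3 purely for illustration (I have no idea what stack the RSA registration site actually runs):

    # Parameterized queries handle the dreaded ' without any blacklisting.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE attendees (last_name TEXT)")

    last_name = "O'Donnell"

    # The driver quotes the value safely; no string concatenation, no injection.
    conn.execute("INSERT INTO attendees (last_name) VALUES (?)", (last_name,))

    rows = conn.execute(
        "SELECT last_name FROM attendees WHERE last_name = ?", (last_name,)
    ).fetchall()
    print(rows)   # [("O'Donnell",)]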

I find this to be equivalent to segregation against those of us who have apostrophes in our name, and by the principle of transitivity, RSA is attempting to segregate out the Irish without posting an "Irish Need Not Apply" sign. Mark my words, first they will come for our crypto keys, and then they will come for our potatoes.

April 8, 2008

Maybe RSA doesn't hate the Irish.

Bono was walking the RSA floor last night. He was there for Nokia, which rocks security apparently. I guess RSA doesn't hate the Irish too much.

April 10, 2008

RSA still hates the Irish.

Nokia, the phone company that doesn't do security but does OEM SourceFire and CheckPoint technology, brought in the fake Bono.

Why Google is Brilliant; Case Study: Google App Engine

Let's say you are a startup and you choose to use the Google App Engine for your infrastructure. If Google buys you out, they don't have to port the code. They directly quantify your company's technology opex and revenue, since they see both the CPU overhead and the eyeball count via Google Analytics. Brilliant.

April 11, 2008

Malware shifts and value chains.

Amrit Williams is calling me out on predicting malware emergence. His assertion is that by the time AV improves enough to push attackers onto Macs at their current market share, attackers will shift to another layer altogether and abandon the idea of monetized malware. I had always assumed that the value chain established by attackers would be largely preserved, but he may be right: there could be a point where AV is so good that attackers will just move to popping webmail accounts and routers rather than attacking client systems. Now wouldn't that be nice.

April 14, 2008

What the hell have I been doing? Part $e^{j\pi}$

I just submitted an article for IEEE Security and Privacy and spent the past week attending RSA. I did do a podcast for Schwartz PR during their RSA party that is available here.

April 17, 2008

How Storm Communicates

Thorsten Holz and team put together a fantastic paper on how the Storm Worm communicates and how it can be infiltrated. Thanks go to Jose Nazario for the heads up.

April 23, 2008

Storm Defeated?

Apparently if you have kernel-level and below control of every Windows PC out there, you can pull out a botnet infestation. Let's see how long it takes for either the botters to be caught or for a new infection to come out that disables Windows Update. Thanks go to Bryan and Jose for the heads up.

April 29, 2008

Kraken Reveng

There is a solid writeup by Pedram Amini @ TippingPoint on the Kraken RevEng here and here. Thanks to Richard Soderberg for the heads up.

May 3, 2008

Spam is now 30.

Spam is now 30. Frankly, if spam still bothers you after all this time, buy a better filter.

May 12, 2008

Processing ported to Javascript.

The domain-specific visualization language Processing has been ported to Javascript. This is a "good thing". Thanks to Raffael Marty for the heads up.

Also, this is my 100th post.

May 22, 2008

Game Theory of Malware article online.

I wrote an article on the game theory of emergent threats that is now online. It is based on presentations from earlier this year. You can grab the article here.

As a sidenote, Amrit thinks that security people like to talk about game theory because they like to play video games. I of course strongly disagree. I will have you know the only video game I still like to play is portfolio explosion on e-trade.

May 26, 2008

Dino launches a blog.

Dino Dai Zovi has joined the great effluent of the blogosphere. Yay for Dino!

May 30, 2008

Bad marketing department! Bad! No bagel day!

Hoff had a post about a VirtSec startup known as Hyperbole. Their product/feature names include such gems as HyperTension, HyperSensitivity, and HyperVentilated.

...

All I can think is of some CSO three years from now muttering "I bought HyperTension and all I got was hypertension."

June 9, 2008

Amrit: iPhone creates mobile malware tipping point.

Amrit Williams gets the first post on how the iPhone 2.0 will create the domestic mobile malware tipping point. What is a malware tipping point you may ask? Well, you can read about that here.

Sidenote: I believe SymbianOS 2nd Edition may have created the international one some time ago.

June 10, 2008

Don't look at me, I'm hideous!

I did a local news interview on social networking spam.

June 17, 2008

The highest form of flattery.

Juan Caballero, Theocharis Kampouris, Dawn Song, and Jia Wang published some interesting extensions at this year's NDSS of the work presented by Harish Sethu and me at CCS '04.

Both papers examine the software diversity problem by relating it to the graph coloring problem. Software diversity holds that networks of systems would be more secure if they minimized the number of possible common mode faults by running different software and operating systems. The thesis of both papers is that diversity can be improved by using graph coloring algorithms to spread software allocations across a network as diversely as possible.
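
For the curious, here is a minimal sketch of the flavor of the idea, not the algorithm from either paper: greedily assign software "colors" to hosts so that no two adjacent hosts on the network graph run the same stack, which removes that common mode fault along each edge. The topology and stack names below are invented for illustration.

network = {                    # toy adjacency list; edges are potential propagation paths
    "web1": ["web2", "db1"],
    "web2": ["web1", "db1"],
    "db1":  ["web1", "web2"],
}
stacks = ["stack-A", "stack-B", "stack-C"]   # hypothetical software stacks

assignment = {}
for host in network:
    used_by_neighbors = {assignment.get(n) for n in network[host]}
    # pick the first stack that no neighbor is already running
    assignment[host] = next(s for s in stacks if s not in used_by_neighbors)

print(assignment)   # e.g. {'web1': 'stack-A', 'web2': 'stack-B', 'db1': 'stack-C'}

A greedy pass like this needs at most one more stack than the highest node degree, which is exactly why smarter coloring heuristics are worth researching on real networks.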

The implication of this post's title is only in jest, as I am incredibly happy to see our idea extended by the research community.

June 18, 2008

Next BaySec: 6/19/08

The next BaySec will occur tomorrow night, 6/19/2008 at Pete's Tavern in San Francisco. Thanks to Ryan for setting it up and to Nate for passively reminding me to blog it.

June 20, 2008

Best Security Marketing Video Ever.

Kaspersky did this:

Hats off to Ryan Naraine for finding it.

June 26, 2008

If you liked the game theory stuff...

... nominate the work for a pwnie. I won't nominate my own work (tacky), but I am not above shilling my own work (only slightly less tacky).

July 6, 2008

@twitterspammers.

[Screenshots: jill1194 pitch and profile; missyinpink1987 pitch and profile]

Spammers went after Twitter pretty hard this holiday weekend using the "friend invite" model that was first developed against other social networking services. Briefly, the attack involves creating a large number of spammy profiles and then inviting people to view the spam by performing a friend request, or in Twitter's case, "following" the spam target. I have included screenshots of a few of these attacks above.


An individual can remediate this attack in the short term by disabling e-mail notifications of people following you, but this is by no means an optimal solution. The only party that can really address the situation is Twitter, through a combination of blacklisting, throttling, CAPTCHAs, and content analysis.
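
To make the throttling idea a bit more concrete, here is a toy sketch; the window and threshold are invented, and this is not how Twitter actually implements anything. The point is simply to flag accounts that fire off follow requests faster than a human plausibly would.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # look at the last hour of activity
MAX_FOLLOWS_PER_WINDOW = 60    # made-up threshold for "plausibly human"

follow_log = defaultdict(deque)   # account -> timestamps of recent follow requests

def allow_follow(account, now=None):
    now = time.time() if now is None else now
    log = follow_log[account]
    log.append(now)
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    # False means hold the follow for review, a CAPTCHA, or a blacklist check
    return len(log) <= MAX_FOLLOWS_PER_WINDOW

A spam account trying to follow a few thousand people over a holiday weekend trips a limit like this almost immediately, while a normal user never notices it exists.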

July 8, 2008

reCAPTCHA launches free mail address hider.

I guess this is easier than making a little graphic of your e-mail address. The attack surface for reCAPTCHA is pretty large at this point, and web page scraping is not the only means by which a spammer can grab your address, leading me to question how effective this will be for keeping your inbox clean. Thanks to Jennifer for the heads up.

July 9, 2008

Anti-spam company employee spamming on twitter.

Hilarious. Oh yeah, this is the company in question.

July 10, 2008

Westside!

I was interviewed by SC Magazine's Dan Kaplan on the value of education in the security industry and how it is perceived differently on the west and east coasts.

July 14, 2008

CoverItLive Event on Social Networking Security

I will be co-hosting a live blogging event on social networking security tonight with Jennifer Leggio on CoverItLive. You should be able to view the content in the horrifying iframe below here:

Thanks go to Plurk's Plurkshops for sponsoring the event.

Attackers hit close to home.

My wife Sophy's gmail account started spewing spam this morning to everyone in
her sent mail folder. Given that my wife has been working in technology for
about as long as I have been in information security, and specifically three
years in anti-spam, I was both slightly intrigued and rather miffed when I
received the following message in my inbox:

[screenshot: outbound_spam]

If this were a PC laptop, I would chalk this up to a desktop compromise. There
has not been a significant number of reports of OSX malware that does address
book scraping, making this possibility rather remote. I had Sophy immediately
rotate her gmail password, log in, and pass over a screenshot of her access
history:

[screenshot: access_history]

If we take a closer look at 123.12.254.155, we can see the IP doesn't exactly
reside in San Francisco:

route:        123.8.0.0/13
descr:        CNC Group CHINA169 Henan Province Network
country:      CN
origin:       AS4837
mnt-by:       MAINT-CNCGROUP-RR
changed:      abuse@cnc-noc.net 20070111
source:       APNIC

I am pretty certain that neither of us was in China this morning, and at this
point I was confident that her desktop was safe and that the compromise affected her
webmail account only. I later discovered that Sophy had used similar passwords
on multiple websites, leading me to believe that one of the many websites she
accessed was compromised, handing the attacker a legitimate Gmail login (her
e-mail address) and password.

The moral of the story is that you absolutely have to use a different password
for each and every website you use, or at least cluster your accounts based
upon attack propagation tolerance. In other words, you can use the same
password across multiple junk message boards, but doing the same across
multiple financial websites would be Bad.
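
If you insist on the do-it-yourself route rather than a password manager, a minimal sketch looks like the following; the site names are placeholders, and the only point is that every site gets its own randomly generated credential.

import secrets
import string

# some sites restrict punctuation; trim the alphabet as needed
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# one unique password per site -- store the results somewhere safe
for site in ("bank.example.com", "webmail.example.com", "randomforum.example.org"):
    print(site, new_password())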

Oh, and the attackers didn't just send spam from her mail account, they also
deleted all her mail on Gmail. Because Sophy maintains backups of her mail, a
potentially stressful day was avoided. Oh yeah, that's the other moral of the
story: maintain good backups, please.

August 5, 2008

Vegas

I will be in Las Vegas for the Blackhat and Defcon conferences this week. I hope to see you all there!

Defcon TCP/IP Drinking Game

I will be hosting the Defcon TCP/IP Drinking Game again this year. Drop by Friday night to see your favorite information security experts make fools of themselves.

August 9, 2008

Dispatches from Blackhat/Defcon: Facebook/MySpace "Worm"

I have been at BlackHat/DefCon since Tuesday, and I have been slightly out of the loop on some recent security events. Coincident with the presentations on social network security and new XSS attacks against MySpace, reports of a worm hitting MySpace and Facebook started trickling in via SMS messages from our team back at the office. My initial concern was that this was a full-blown Samy-style worm hitting both social network sites, and some of my comments were oriented towards this threat.

It turns out that the MySpace/Facebook worm was less a worm and more a standard malware-push technique. Rather than having malware infect a system and send spam enticing other users to install the same malware, the authors had the malware hijack MySpace and Facebook profiles when the user logged in, spamming their friends with a malware download pitch. Basically this ends up being a hybrid worm, one that requires more than pure browser mechanisms, like XSS and CSRF attacks, to propagate. Good show, spammers.

The interesting part of this incident is that attackers, the media, end users, and vendors are focusing on this as a social networking story and not a desktop malware story, when it is equal parts of both. It is further evidence to me that home users treat desktops as nothing more than browser containers, with their activities focused almost completely on a handful of major (social) web properties.

August 11, 2008

What a difference a word makes.

I enjoy talking with reporters, and I do so quite frequently; it is part of my responsibilities at Cloudmark. Thankfully, most of the guys I talk to on a regular basis are extremely responsible, detail oriented, and diligent about the facts, which matters because a single omitted word can radically alter the meaning of a phrase.

Chris Hoff, a well-seasoned speaker and media contact, is now experiencing the repercussions of such an error. By dropping the word "security" from the phrase "Virtualizing security will not save you money, it will cost you more.", a reporter changed Hoff's statement from a negative statement about virtualizing security into a negative statement about his employer. As you can imagine, this has caused a massive headache for Hoff and his employer.

The only way to fix any misquote in the current media climate is to generate corrective content early and often, as I am doing with this post.

August 12, 2008

Twitter "Following" Limits: Smart.

The web has started commenting on Twitter's decision to limit the number of accounts that a given user can follow. Having a hard limit is a smart move for multiple reasons. Not only does it allow the service to more tightly bound the computational load of its message-passing architecture, it negatively impacts only two groups, namely spammers and the obsessive-compulsive.

This is a good first step, one that I have pointed out in an interview before. I suspect that Twitter will also work on a throttling policy and on IP and content blacklisting as follow-on mechanisms to continue the battle against spam.

August 14, 2008

Recent sightings of friends in the media.

September 8, 2008

ZDNet

I am now also blogging for ZDNet.

March 30, 2009

Back from ZDNet, but soon a new home.


Blog banner
Originally uploaded by Adam J. O'Donnell
After seven months of blogging at ZDNet, I am back to the personal blog. The fall-off in advertising revenue across the media space has necessitated cutbacks, and my spot on the security beat was axed.

I won't stop generating content, but I am not quite sure where it will be hosted right now. I will update you as soon as I find out.

In the meantime, here is a full list of posts I have authored on ZDNet, and I hope to see many of you at RSA. Also, here is my updated RSS feed.

Take care.

April 9, 2009

Conficker wakes up to push spam and... scareware?

The Conficker worm has woken up to... drumroll please... push fake antivirus products and spam from an older piece of spam-generating malware. It appears that like many Bay-area startups, Conficker is long on technical ability and short on innovative business models.


I am not trashing the MMBA (Malware MBA)'s ability to extract money from criminal activities. There really are only a handful of ways malware authors have shown they can successfully make money: they can sniff keystrokes, send spam, DDoS websites, or re-sell access to their software and machines to do the same work. However, for all the hype that surrounded the worm, I expected something far more sophisticated.


The story for the average consumer is pretty basic. First off, you should not be using any anti-virus product that you have never heard of and that magically pops up on your system. If you are reading this website, chances are you already know this. The spam engine sounds like a ripoff of older technology, so we should expect no dramatic shift in spam mutation techniques. We should expect an increase in spam delivered to people's inboxes due only to the increase in the volume of spam transmission attempts.


Then again, while it is unprofitable, tomorrow the Conficker writers could push down a DDoS package and melt the Internet. This isn't alarmism, it is just what is possible when a single group controls a very large botnet.

April 11, 2009

85% to 95% of all e-mail is spam? Yeah, that makes sense.

There is only one security problem that the average consumer will get visibly angry about, and that is spam. Well, that and identity theft, but spam ranks pretty far up there. When I tell people I work in anti-spam as my day job, I get a pat on the back and a comment about how they can't believe how much spam there is in their inbox. To reinforce what we already know, security companies publish statistics claiming that, depending upon the day of the week, 85% to 95% of all e-mail is spam. While this number seems unbelievable, I can guarantee that it is correct. How did we get to the point that approximately 9 out of every 10 e-mails are spam? Paradoxically, the reason we have so much spam is that our anti-spam technology is so incredibly effective today.


To understand why this number is not really that shocking, it is helpful to think of spam not as a singular entity but as a living, evolving creature that has responded to spam filters in new and unique ways. Let's imagine you are at a cocktail party in a nearly-full room with a number of people having a good time. As the evening progresses, the ambient noise in the room gets progressively louder. People respond to the increasing loudness in the room by straining their voices, and eventually the room is a 70dB cacophony of random chatter. The same kind of relationship exists between spam filters and spammers.


Spammers want to be heard, and will accept a certain rate of response to their content. Before the days of ubiquitous spam filters, they would generate content at a far lower rate, since they were getting responses at that rate. As decent spam filters became standard operating equipment on the Internet, the spammers needed to change their game to continue being heard. They did this by mutating their content and sending spam from more locations, resulting in a higher rate of delivery attempts. Again, anti-spam responded with better filters that looked at both content and the IP addresses of the sending systems, and the spammers responded in kind by pushing their mutation rates and transmission rates further up, thus leading to these almost unbelievable spam rates.
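
Here is a rough back-of-the-envelope illustration of that feedback loop; the numbers are invented, not measured. If spammers need a fixed number of messages to actually land in inboxes, the volume they must send grows as 1/(1 - catch rate), so better filters directly translate into more spam on the wire.

legit_mail = 100            # arbitrary units of legitimate mail
delivered_spam_goal = 10    # spam the spammer needs delivered to stay profitable

for catch_rate in (0.0, 0.5, 0.9, 0.99):
    sent_spam = delivered_spam_goal / (1 - catch_rate)
    spam_fraction = sent_spam / (sent_spam + legit_mail)
    print(f"filter catches {catch_rate:.0%}: spam is {spam_fraction:.0%} of all mail sent")

Push the catch rate toward 99% and the spam fraction lands right around the 90% figure in the headlines, without the spammers receiving any more responses than they did a decade ago.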


If you are a home user, you shouldn't really need to think about this too much. Your ISP or your free webmail provider has to do at least a halfway decent job of filtering spam at this point. If your provider didn't do a good job, they would have to over-provision their mail servers and mail stores by a factor of 10 or so. E-mail is a pretty cost-conscious business, and this kind of outlay would put them out of business. If your ISP is completely dropping the ball, or you have a small business domain that is getting inundated with spam, either call up the domain hosting company and complain or buy a desktop anti-spam product.

April 19, 2009

Have we reached the Mac Malware tipping point yet? Eh... maybe?

The technical media is all a-twitter over what appears to be the emergence of the first mac botnet. The infector appears to be an updated version of the trojaned copy of iWork that popped up earlier this year. Anyone who has worked as a Windows virus analyst would scoff at the relative lack of sophistication exhibited by the malware, but nevertheless, it is a piece of malware, and it is out there. I wanted to take this opportunity to answer some of the most common questions people have about mac malware.

Does this mean that Mac users should rush to buy anti-virus software and expect their machines to end up as compromised as a PC? Probably not, but soon. For now, as long as you aren't downloading pirated software, you are safe.

Does this mean mac malware is going to become endemic? Yes. If no one is running anti-virus, then there is nothing to clean up infected systems beyond end-of-life hardware replacement. Given the state of the economy and mac hardware longevity, that can take a very long time.

Does this mean we hit the mac malware tipping point? That I don't know. We can't say that we have reached the mac malware tipping point unless we come up with a definition for the tipping point itself. Dino Dai Zovi and I have been kicking around a potential "warning sign" that, when seen, indicates we are now in the mac malware epidemic state. Our current preferred indicator is the emergence of websites that perform drive-by exploits of the browser to install botnet-controllable malware, regardless of whether the exploit is a zero-day attack. In other words, when we see what happens every day on the PC side happen once on the Mac side, then we all need to run out and buy anti-virus software.

Some time ago you predicted that mac malware would hit its tipping point at 15%. Does this mean you are wrong? Well, my prediction was based on the difficulty of attacking a PC versus the market share of a Mac. I assumed that the difficulty of attacking a PC was strictly defined by the effectiveness of current anti-virus products against a new piece of malware. My back-of-the-envelope estimate put an attacker's success rate at compromising a PC at around 20%, which meant that Macs would have to reach around 16% market share before they attracted the attention of serious malware authors. If the real success rate of an attacker is lower, then you should expect a mac malware epidemic far earlier. So the answer is: maybe I'm wrong, but I don't know yet.
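
For anyone who wants the arithmetic behind that estimate, here is my own reconstruction as a sketch. The 20% figure is the estimate from above; the 80% PC market share and the 100% success rate against unprotected Macs are assumptions I am adding to make the numbers work out. The idea is that attackers switch platforms when the expected yield per attack on Macs beats the yield on PCs.

pc_share = 0.80      # assumed PC market share at the time
pc_success = 0.20    # estimated chance that AV misses a new PC sample
mac_success = 1.00   # assume essentially no AV deployed on Macs

mac_tipping_share = pc_share * pc_success / mac_success
print(f"Macs become attractive at roughly {mac_tipping_share:.0%} market share")

If the real PC success rate is lower than 20%, the threshold drops and the tipping point arrives earlier, which is exactly the "maybe I'm wrong" above.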

In short, the story for mac malware hasn't changed this week contrary to popular opinion. However, as both users and as information security professionals, we need to remain vigilant and watch for the tipping point in mac malware, and use that as the trigger to install Mac AV software.

April 22, 2009

Breaking down the "electric grid is vulnerable" stories.

We have been seeing an increasing number of stories on the vulnerability of our electric grid to outside attackers, but determining whether or not these stories are legitimate is exceedingly difficult. The reports are, understandably, short on facts and real metrics and long on anonymous quotes, speculation, and recriminations from the various involved parties. We may not be able to discern the true nature of the threat against our power grid, but we can figure out the right questions to ask so that we can cast a more critical eye on the various news reports.

When the media claims that the electric grid is compromised out the wazoo, it is important to know what exactly is compromised. We can break down the target systems into two classes, specifically non-critical and critical. The non-critical systems consist of desktops and laptops belonging to the administrative, operational, and executive staff of the firm. Anyone who provides statistics showing the percentage of total systems that are known to be compromised at a power plant is likely only providing statistics on these non-critical systems. It would be foolish to suspect that these figures are going to be any different from those of any other similarly-sized enterprise. Also, while the number of compromised non-critical systems is a proxy indicator for the general security posture of the firm, it does not tell us anything concrete about the other class of systems.

The far more important question is how many of the systems that are directly attached to industrial hardware are compromised. A compromise of a desktop or a server that is connected to a controller or a process control monitor could directly lead to blackouts and equipment destruction. Remotely enumerating these critical systems is extremely difficult, and determining their level of compromise without the explicit support of the power industry is almost impossible. Therefore, getting a third-party verification of the "power systems are compromised" story is not achievable at this time.

I am not saying that the power grid is secure or insecure. I am saying, however, that we must cast a critical eye to these stories to make sure we don't fall victim to the fear-mongering that permeates all too many security stories.

April 27, 2009

On assuming that you are owned.

Several security professionals commented at last week's RSA that organizations should assume they are currently owned by an outside attacker. While this may strike some as paranoia, it is a good assumption for minimizing impact in the event of a serious compromise.


For both individuals and businesses, determining the impact of getting owned begins with listing everything you use that is own-able and then working out a risk mitigation strategy, a containment strategy, and a recovery strategy for each system. These all boil down to a series of "what if" questions that anyone can think through. For the average user, the set of systems that can be compromised includes, but is not limited to, all physical systems, backup mechanisms, and hosted services like e-mail and social networks.
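
One way to make the "what if" exercise concrete is to keep a small inventory with a mitigation, containment, and recovery note for each asset. The sketch below only shows the shape such a list might take; the entries are examples, not a complete plan.

assets = {
    "laptop": {
        "mitigate": "full-disk encryption, patching, strong login password",
        "contain":  "unique passwords, minimal stored credentials",
        "recover":  "off-site backups, known-good reinstall image",
    },
    "webmail": {
        "mitigate": "strong unique password, avoid public terminals",
        "contain":  "no password reuse with other accounts",
        "recover":  "periodic local backup of mail",
    },
    "social network / blog": {
        "mitigate": "strong unique password",
        "contain":  "separate recovery e-mail address",
        "recover":  "export of posts and contacts",
    },
}

for asset, plan in assets.items():
    print(asset)
    for phase, action in plan.items():
        print(f"  {phase}: {action}")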


We start from the most "distant" system inwards -- the hosted services. How can an individual's hosted accounts become compromised? The easiest way an attacker could compromise your account is through weak passwords or by sniffing passwords off the wire; therefore, we can reduce the risk of compromise by using strong passwords for our accounts and not accessing them from public access terminals and insecure wireless networks. If you are using a weak password on one site, it is entirely likely you are using a weak password elsewhere. Preventing the attacker from hopping from one hosted account to another can be as simple as using a strong and unique password on every site you access. It isn't just access to the data we should be concerned about. If the service is compromised, it is possible that everything in the account could be deleted, in which case having a backup of, say, all blog posts and all e-mail transactions would be required to get back up and running.


Let's say that the attacker has moved beyond our hosted account and either remotely compromised our physical system or actually stolen the hardware. In both cases, we should expect that all of our unencrypted data is accessible to the world. Both scenarios necessitate file-by-file encryption and a combination of physically secured on-site or off-site backups. A remote compromise would be a far worse situation: even though you don't lose the hardware, the attacker has the opportunity to capture passwords used for hosted services as well as financial accounts. The only way to limit your exposure here is to use cryptographic key fobs (like an RSA SecurID token) and hope the attacker isn't controlling the entire session.


Ultimately the only way to minimize the impact of a compromise is to assume that all of your data is compromised and consequently reduce the data you keep accessible to content that would not be devastating if it were leaked. In other words, never commit anything to bytes that you don't want your spouse, children, parents, or coworkers to see; the data may only be a single attack away from leaking out into the ether.

April 30, 2009

*cough* Have to work from home *sneeze*?

Not that there is any reason to say this, but it is possible that a significant portion of the workforce will be either absent or working from home in the next few months. This could mean opening up the corporate network to far larger numbers of telecommuters whose systems may be in various states of security disrepair. IT managers should be planning how to give secure access to the corporate network to a batch of relatively untrained employees.


If you don't work in the IT department, the story is pretty simple. Get your laptop set up to connect to your work network if it cannot do so already. Laptops that are primarily home systems should be reformatted and installed from scratch if there is any concern that the machine may contain malware; just because you aren't going to work sick doesn't mean your system should.


For those of you who do work in the IT department, well, I don't envy the job ahead of you. If your network wasn't de-perimeterized before, it will be soon, whether you like it or not. Not only do you need to prep employees' personal systems to connect to the corporate infrastructure, you also need to educate them on the risks of bringing a relatively unclean personal system into the corporate environment. Given that home systems are not nearly as well looked-after as corporate systems, you are also going to be dealing with all the infections that your employees' home PCs will bring past the firewall and NAT systems and into the core network.


There aren't too many recommendations I can make that aren't common sense. For example, you can distribute more laptops to employees who don't have them. Also, you should consider extending the corporate licenses for the anti-virus products to the home systems of employees who do not possess a company-managed PC but will be expected to work remotely.


Plans similar to the one described above should be in the dusty business continuity plans that many organizations created in late 2001. It's time to update them and get ready to put them into practice.

May 5, 2009

Phishing on social networks is no real surprise

For some reason users of social networks appear surprised by the rate at which phishing attacks are appearing on services like Facebook. There is a belief among computer users that they can run from one platform, like e-mail, to the next platform, like social networking, to escape preexisting security problems. Much like social problems in the real world, movement to a new electronic location will provide only a temporary respite from endemic social ills. Rather than allowing their population to depart due to a perception of a lack of security, social networks need to make a two-pronged attack on reducing their users' vulnerability to phishing.

The first prong consists of addressing issues at what is known as "layer 8", or the human interaction layer. This consists of giving users clues as to what is good content and what is questionable content. For example, social networks can warn users when they are leaving the safety of the network's walled garden by clicking on a link that has not been explicitly vetted. They can also alert users when there is an increased risk of phishing or malware attacks based upon recent activity, and make this indicator a prominent UI element that appears when links are activated.
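
As a sketch of the "leaving the walled garden" warning, assuming the network keeps a list of domains it has vetted (the domains below are placeholders), the check itself is nothing exotic:

from urllib.parse import urlparse

VETTED_DOMAINS = {"socialnetwork.example.com", "partner.example.org"}

def needs_warning(link):
    host = urlparse(link).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in VETTED_DOMAINS)

print(needs_warning("https://socialnetwork.example.com/profile/123"))  # False: stays inside
print(needs_warning("http://phish.example.net/login"))                 # True: show an interstitial

The hard part is not the check; it is the vetting process behind the list and presenting the warning so that users do not reflexively click through it.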

The second prong involves the continual improvement of technology for the prevention of in-network phishing attacks. All of the major players already have a security team in place to address issues as they come up. Truth be told, these teams are doing a pretty decent job as it is right now, and they are far more empowered to fix problems in their network than their counterparts in almost any other part of the computing world. They have complete control of the internal architecture and are not bound by standards bodies on how they handle messaging or communication between those systems. Social networks have been able to combat abuse mostly by taking full advantage of all the information they have at their disposal regarding their users, including the IP address they connect from and a full record of their behavior inside the network. Nevertheless, phishing is a hard problem, and several of the social networks are going down the path of employing third-party solutions to address the issue.

Without a combination of user education and appropriate technology, participants will end up moving from location to location in search of a completely abuse-free environment. Much like many of the problems that society faces, however, the residents of a social network enable the attackers to take advantage of them, which makes the problem far more difficult to eliminate. If individuals didn't fall for phishing attacks, then the phishers would leave the platform altogether. Sadly, once phishers' appetites have been whetted by a few successes, they are unlikely to depart anytime soon.

January 6, 2011

Immunet Acquired by Sourcefire

I haven't been blogging much. I have been busy.

About Security

This page contains an archive of all entries posted to NP-Incomplete in the Security category. They are listed from oldest to newest.
