Companies that make anti-virus and similar software that helps internet users guard against dangerous online content must now think twice about whether they can be sued for their decisions about what content to filter. This is the result of a significant ruling last month by the United States Court of Appeals for the Ninth Circuit, in which the court held that a provider of such software is not immune from suit if it filters content produced by a competitor solely to restrict access to that competitor’s software. The decision may lead companies to design their software to filter less content. As a result, internet users who rely on anti-virus and similar filtering software may find themselves at greater risk that their software will fail to protect them from dangerous online content.
The Ninth Circuit reached this decision in Enigma Software Group USA LLC v. Malwarebytes, Inc., in which the court was asked whether Malwarebytes, a company that sells products and services to help users detect and block viruses, ransomware, and other internet-borne threats, could be held liable for designing threat-prevention software in a way that prevented its users from using products made by a competitor company, Enigma Software. In a split 2-1 decision, the court held that Malwarebytes could be liable for engaging in anticompetitive practices, assuming for purposes of the decision that Malwarebytes’ sole reason for filtering Enigma’s software was to harm a competitor. The decision threatens to upend the settled expectations of companies that provide threat-prevention software, which prior to Enigma reasonably believed that their decisions about content filtering were immunized from liability under federal law.
Section 230 of the CDA Provides Broad Immunity to Internet Platforms
At the heart of the Enigma decision lies Section 230 of the Communications Decency Act (“CDA”). Section 230 grants internet platforms, such as internet service providers, social media companies, and message boards, broad immunity from being held liable either as the “publisher or speaker” of information provided by a third party or for “restrict[ing] access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Put more simply, Section 230 protects a website or internet platform from any liability arising from its “transmitting or taking down user-generated content.”
Section 230 was intended to make the internet safer for its users. Congress enacted Section 230 in 1996, following federal and state court rulings that imposed liability on internet service providers for their decisions to filter offensive content from their networks. Members of Congress feared this result was “backward,” and warned that the outcome would be to discourage online service providers from taking steps to shield children from objectionable content. Section 230 was enacted with the specific intent of overruling these then-recent decisions – in part, as the Ninth Circuit explained in Enigma, to encourage software companies to create and provide filtering software that would empower parents to prevent their children from accessing offensive content. To achieve this outcome, Section 230(c) of the CDA provides explicit “Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material.”
Despite this intent to encourage software companies to develop filtering software, the majority of cases applying Section 230 have instead focused on whether and when an online platform can be held liable for allegedly unlawful third-party speech. Platforms are generally able to defeat efforts to hold them liable for third-party content. Accordingly, Section 230 has been found to provide immunity to an online dating service that published a false profile of the plaintiff, an internet service provider that distributed defamatory content to a chatroom, and airline ticket marketplaces that published fraudulent ticket listings, among many other examples. In each case, the court found that the underlying internet platform was immune because the allegedly unlawful content was supplied by a third party. However, in a related line of cases, courts have found exceptions to Section 230’s immunity-conferring provisions, most often where the online platform is found to “contribute materially” to the allegedly unlawful published content.
The Zango Case Applied Section 230’s Protections to Threat-Prevention Software
Yet while this prong of Section 230 has been somewhat in flux, prior to the Enigma decision, companies that provide threat-prevention software enjoyed near-limitless immunity for their decisions about content filtering. The principal case on this point is the Ninth Circuit’s 2009 opinion in Zango, Inc. v. Kaspersky Lab, Inc. At issue in that case was whether Kaspersky, an internet software company that distributed threat-prevention software, could be held liable for blocking Zango’s software. Zango provided free software to users who agreed to receive pop-up ads as they browsed the internet. Kaspersky’s software classified Zango’s products as adware and blocked them accordingly. The Zango court concluded that Section 230 protected Kaspersky’s decision to block Zango’s content. Quoting Section 230, the court explained that the Good Samaritan provision “provides protection for ‘any action taken to enable or make available . . . the technical means to restrict access’ to [obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable material].” Kaspersky fell within this protection, the court explained, because it provided software that offered a means to restrict “otherwise objectionable” content. Thus, under Section 230, it was immune from any liability related to its decision to block Zango’s software.
The Zango court left open – implicitly – whether there could still be some limit on Section 230’s grant of immunity for content-filtering decisions. The majority did not address this question at all, and even noted that there was no argument before it that Kaspersky did not consider Zango’s software to be “otherwise objectionable.” In a concurring opinion, however, Judge Raymond C. Fisher warned that pernicious consequences could follow if future courts permitted online platforms unchecked authority to define what content is “otherwise objectionable.” Explaining that he had “anticompetitive blocking in mind,” Judge Fisher wrote that “[u]nless [Section 230] imposes some good faith limitation on what a blocking software provider can consider ‘otherwise objectionable,’ or some requirement that blocking be consistent with user choice, immunity might stretch to cover conduct Congress very likely did not intend to immunize.” Yet the Zango majority was not compelled to address that concern, and for more than 10 years following the Zango decision no court found that a provider of threat-prevention software could face liability under the “good faith limitation” Judge Fisher advocated.
Section 230 Does Not Protect Anti-Competitive Content Filtering
This changed on September 12, 2019, when the Ninth Circuit issued its decision in Enigma. Much like Zango, the case concerned a threat-prevention software product (this one offered by Malwarebytes). The Malwarebytes product flagged Enigma’s software as a security risk and placed it in quarantine. Enigma sued Malwarebytes, arguing that Malwarebytes’ filtering conduct was part of a “bad faith campaign of unfair competition” and that Enigma’s software “pose[s] no security threat.” Notably, the court assumed that Malwarebytes acted in bad faith, and specifically that its blocking was anticompetitive. The court did so because the district court had not considered whether Malwarebytes could offer another reason for filtering Enigma’s products, and courts of appeals generally will not decide factual questions that a district court has not addressed in the first instance. As a result, the Ninth Circuit was required to squarely address Judge Fisher’s previously unanswered question from Zango.
By a 2-1 vote, the Enigma court held that “otherwise objectionable” content does not include content to which the provider of filtering software objects solely on anticompetitive grounds. Instead, the phrase “otherwise objectionable” “was more likely intended to encapsulate forms of unwanted online content that Congress could not identify in the 1990s.” The majority reasoned that when it enacted Section 230, “Congress wanted to give internet users tools to avoid not only violent or sexually explicit materials” – some categories of which were specifically defined in the statute – “but also harassing materials.” For this reason, Section 230 provides immunity for filtering the type of content at issue in Zango, adware, as well as other harassing content such as spam and malware. But for the first time, the court held that there are limits to the meaning of “otherwise objectionable”: it does not reach content deemed objectionable solely because it is provided by a competitor company.
Notably, Judge Johnnie B. Rawlinson dissented. In her view, Section 230 is written broadly and should be applied as such. She further argued that the facts in Enigma were not materially different from those in Zango. Judge Rawlinson’s dissent is important both for demonstrating the difficulty courts can face in interpreting Section 230 and for highlighting the potential need for Congress to revisit Section 230 and clarify the breadth of immunity it intended the provision to confer.
The Enigma decision will have implications both for the parties before the court and for other providers of threat-prevention software more broadly. For Malwarebytes and Enigma, this litigation is all but certain to continue. Under the Federal Rules of Appellate Procedure, Malwarebytes has the option to seek review from a larger panel of Ninth Circuit judges (known as en banc review, and only rarely granted). It also may ask the Supreme Court to overturn the Ninth Circuit’s decision (also an infrequent occurrence). Barring intervention from either of these courts, though, Malwarebytes and Enigma will most likely head back to the district court to determine whether Malwarebytes can identify a different or additional reason why Enigma’s software is “otherwise objectionable.” It is likely, but not certain, that identifying such a reason would entitle Malwarebytes to Section 230’s Good Samaritan immunity.
What Are the Implications for Companies That Filter Content?
More broadly, other providers of threat-prevention software are now at risk of being sued for their own decisions about what content to filter. Under the Enigma decision, it is clear that threat-prevention software providers receive Section 230 immunity if they provide software that filters or helps to filter content “that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, [or] harassing.” Equally, companies cannot claim Section 230 immunity, at least in the Ninth Circuit, if they produce threat-prevention software that filters or enables filtering solely for anticompetitive reasons. But this, of course, leaves significant space in between. And it is likely that those on the wrong end of a filtering decision will elect to sue, arguing that the filtering decision encoded into the threat-prevention software is unlawful.
In light of this uncertainty, companies that provide filtering products will have to think critically about how to avoid being sued for the filtering decisions embedded in their threat-prevention products. To be sure, these companies will still choose to filter content widely recognized to be malicious, such as viruses. But as Professor Eric Goldman has argued, it is less certain how Section 230 would apply to content that is arguably “otherwise objectionable” in the eyes of a threat-prevention vendor, where that arguably objectionable content also is produced by a competitor company. Equally, it is not clear how closely a filtering decision must relate to one of the harms specifically enumerated in Section 230. The only certain way to avoid liability in these situations will be to choose not to filter such content. For this reason, Professor Goldman observes that the “ruling makes the internet less safe.”
Courts – and perhaps Congress – likely will be called upon to resolve this uncertainty in the coming years. Until they do, companies and users alike will be wise to exercise even greater caution in their respective decisions. To this end, companies will be well served by ensuring that their filtering decisions closely track those codified in Section 230 or previously upheld by the courts. Similarly, because there is now a greater risk that threat-prevention filtering decisions will need to be defended in court, companies should make sure that they can clearly explain and document each filtering decision. Companies may also choose to make clear to users of their products what content is, and is not, being filtered. If a company is concerned about content that it nonetheless decides not to filter – perhaps because the potentially dangerous content is produced by a competitor – it risks alienating users who are not made aware of that risk. While companies navigate how to make filtering decisions in light of Enigma, users of threat-prevention software must assume that they will now be exposed to a greater volume of potentially dangerous content. They should, in turn, exercise greater diligence to ensure that they are confident in the safety of their internet activity.
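To make the documentation point concrete, what follows is a minimal, purely illustrative sketch in Python of how a vendor might record the rationale for a filtering decision at the time it is made. Everything in it is hypothetical – the class names, rationale categories, and fields are assumptions for illustration, not any vendor’s actual practice and not a legal safe harbor – but it shows the kind of audit record that could help a company later explain why particular content was blocked and whether that rationale tracks Section 230’s enumerated categories.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class FilterRationale(Enum):
    """Hypothetical rationale categories, loosely tracking Section 230(c)(2)
    and decisions like Zango; real categories would require legal review."""
    MALWARE = "malware"                # widely recognized as malicious
    HARASSING = "harassing"            # enumerated in the statute
    ADWARE = "adware"                  # the category upheld in Zango
    OTHER = "otherwise objectionable"  # the contested residual category post-Enigma

@dataclass
class FilterDecision:
    """A hypothetical audit record for one filtering decision."""
    target: str                                   # what was blocked or quarantined
    rationale: FilterRationale                    # why it was blocked
    evidence: list = field(default_factory=list)  # observed behaviors supporting the call
    disclosed_to_users: bool = False              # is this category disclosed to users?
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def summary(self) -> str:
        """A plain-language explanation the company could later point to."""
        return (f"{self.target}: blocked as '{self.rationale.value}'; "
                f"evidence: {', '.join(self.evidence) or 'none recorded'}")

# Example: documenting why a (fictional) program was quarantined.
decision = FilterDecision(
    target="example-installer.exe",
    rationale=FilterRationale.ADWARE,
    evidence=["injects pop-up ads", "resists uninstallation"],
    disclosed_to_users=True,
)
print(decision.summary())
```

The design choice worth noting is that the rationale and supporting evidence are captured when the decision is made, rather than reconstructed later under the pressure of litigation.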