California’s legislature recently passed SB-327, which is designed to require Internet of Things (IoT) and other “connected device” manufacturers to build security features into internet-connected devices. California Governor Jerry Brown signed the bill into law on September 28, 2018. While the attempt to improve the security of these devices is admirable, the law is ambiguous in many respects and will likely create significant challenges in its implementation and effectiveness.
As we have previously written, there have been a number of significant distributed denial-of-service (“DDoS”) attacks over the past few years resulting in large part from the proliferation of IoT devices. The increasing prevalence of IoT devices presents a heightened risk of DDoS attacks because many of these attacks exploit the relative security weaknesses in IoT devices to create “botnets” made up of thousands of devices, which are then used to launch powerful DDoS attacks.
DDoS attacks work by sending a high volume of data from different locations to a particular server or set of servers. Because the servers can only handle a certain amount of data at a time, these attacks overwhelm the servers, causing them to slow significantly or fail altogether. By taking control of large numbers of IoT devices, attackers can generate overwhelming internet traffic originating from various sources across widespread geographic locations, making these attacks more difficult to detect and prevent. Several variants of these attacks, including the Mirai botnet, utilize internet-connected digital video recorders (DVRs), cameras, and other IoT devices and then direct their traffic against various Internet properties and services. In many cases, these attacks are made possible by security gaps in the IoT devices that the attackers can easily exploit, such as the use of factory default or hard-coded usernames and passwords.
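The arithmetic behind this dynamic is simple, and a brief sketch illustrates why even low-powered devices are dangerous in aggregate. All of the figures below (server capacity, per-device traffic, botnet size) are illustrative assumptions, not measurements of any real attack:

```python
# Toy illustration of why a botnet of weak IoT devices can overwhelm a
# well-provisioned server. Figures are illustrative assumptions only.

SERVER_CAPACITY_GBPS = 10    # assumed server ingress capacity
DEVICE_TRAFFIC_MBPS = 1      # assumed traffic from one compromised DVR or camera
BOTNET_SIZE = 100_000        # assumed number of compromised devices

# Aggregate attack traffic across the whole botnet, in Gbps
aggregate_gbps = BOTNET_SIZE * DEVICE_TRAFFIC_MBPS / 1000

print(f"Aggregate attack traffic: {aggregate_gbps:.0f} Gbps")
print(f"Server capacity:          {SERVER_CAPACITY_GBPS} Gbps")
print(f"Capacity exceeded by a factor of {aggregate_gbps / SERVER_CAPACITY_GBPS:.0f}x")
```

Under these assumptions, a device contributing only 1 Mbps of traffic, multiplied across a hundred thousand compromised devices, exceeds the target’s capacity tenfold, which is why attackers target the weakest, most numerous devices rather than the most powerful ones.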
With this environment as a backdrop, the California legislature enacted SB-327 to address IoT security. The key provisions of SB-327 are outlined below.
- 1798.91.04(a): Requirement that connected device manufacturers equip the device with “a reasonable security feature or features” that are “(1) appropriate to the nature and function of the device, (2) appropriate to the information it may collect, contain, or transmit, and (3) designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.”
This “reasonableness” language is a likely attempt to create a fluid standard to allow evolution of the law as technology progresses. However, by using this flexible language, the statutory requirement is arguably nebulous and fails to provide a clear path to compliance. While a “reasonableness” standard has proven workable in other legal contexts (e.g., common law negligence), this is typically accomplished through evolution and further elaboration of industry standards and case law. In the IoT context, however, the pace of technological change coupled with the constant mutation of the threat environment may make the practical application of “reasonable security” very difficult. For example, if a court finds the use of a particular encryption algorithm or key length to be reasonable today, it is highly unlikely that a company can rely upon that case law for any significant period of time into the future, as increases in computing power render previously secure levels of encryption insecure.
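The erosion of key-length security can be made concrete with a back-of-the-envelope calculation. The attacker’s guess rate and the compute-doubling cadence below are illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope sketch: a key length that is "reasonable" today
# erodes as computing power grows. Guess rate is an illustrative assumption.

def years_to_brute_force(key_bits: int, guesses_per_second: float) -> float:
    """Expected years to search half the keyspace at a given guess rate."""
    keyspace = 2 ** key_bits
    seconds = (keyspace / 2) / guesses_per_second
    return seconds / (365 * 24 * 3600)

RATE_TODAY = 1e12  # assumed guesses/second for a well-resourced attacker

# A 56-bit key (DES-era) falls in well under a day at this rate;
# a 128-bit key remains astronomically hard.
print(f"56-bit key:  {years_to_brute_force(56, RATE_TODAY):.4f} years")
print(f"128-bit key: {years_to_brute_force(128, RATE_TODAY):.2e} years")

# If compute doubles roughly every two years, each doubling halves the
# brute-force time -- the "reasonable today, insecure tomorrow" problem.
print(f"56-bit key after 10 doublings: "
      f"{years_to_brute_force(56, RATE_TODAY * 2**10):.2e} years")
```

The point for compliance purposes is that “reasonable” is a moving target: a judicial finding pegged to a specific algorithm or key length has a built-in expiration date.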
As various industry experts have opined, ideally the law should include clear standards that provide a manufacturer with a road map it can follow and benchmarks it can validate through the manufacturing process.
Beyond the challenges of adhering to a reasonable security standard, experts have also explained that, unlike the European Union’s General Data Protection Regulation, the California law does not address security-by-design. Others have pointed out that the law likewise fails to address fundamental security practices and requirements, such as device attestation, code signing, or a requirement to conduct security audits of the firmware in low-level components that vendors buy in from overseas suppliers.
- 1798.91.04(b): Requirement that devices use a “preprogrammed password . . . unique to each device manufactured” or “require[ ] a user to generate a new means of authentication before access is granted to the device for the first time.”
As outlined above, the use of default or hard-coded passwords has been identified as a significant vulnerability which has led to widespread exploitation of connected devices. On its face, the requirement to use either unique passwords or user-generated passwords seems like a step in the right direction. However, the lack of specificity in this statutory language allows manufacturers to use unique but insecure passwords that still technically comply with the law. For example, a manufacturer could use easily guessable passwords (e.g., password1, password2) or repeat the errors of TP-Link and assign the password based on information (like the device’s MAC address) that is broadcast from the device.
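A short sketch shows why a MAC-derived password is “unique to each device” yet offers no real security. The derivation scheme below is invented for illustration; it is not TP-Link’s actual algorithm:

```python
# Hypothetical sketch of the flaw described above: a password that is
# technically unique per device, but derived from the device's MAC address,
# which the device broadcasts. The scheme is invented for illustration.

def factory_password(mac: str) -> str:
    """Hypothetical manufacturer scheme: last 8 hex digits of the MAC."""
    return mac.replace(":", "")[-8:].upper()

device_mac = "A4:2B:B0:12:34:56"   # broadcast in Wi-Fi beacon frames

# The manufacturer's "unique" preprogrammed password...
password = factory_password(device_mac)

# ...is trivially recomputed by anyone who can observe the MAC.
attacker_guess = factory_password(device_mac)
print(password, attacker_guess == password)  # unique, but not secret
```

Because the statute requires only uniqueness, not unpredictability, a scheme like this would appear to satisfy the letter of the law while leaving the device exposed.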
The efficacy of the second alternative of permitting a user-generated password will largely depend on how the term “access is granted to the device” is interpreted. Many IoT devices are designed for “plug-and-play” functionality, such that no user interaction or configuration is necessary. In such cases, it may be that access is never granted to the device. Consumer wireless routers provide a good example – in most cases these devices provide two types of authentication: one to connect to the wireless access point and then to the Internet, and another to connect to the device’s administrative console to change its settings or configuration. The former is something that virtually all users will do, while the latter (which is arguably more likely to be considered “access . . . to the device”) is far less frequently done by the average user. Thus, in this example, the router manufacturer might require a user-generated password upon first access to the administrative console, but the typical user may not access this console until long after the device is first put to use, if ever.
While using unique or user-generated passwords is certainly a step in the right direction, there has been much research done and advice provided regarding secure passwords that this law fails to incorporate. As a result, the law may fail to achieve the intended security benefits.
- 1798.91.05(b): Definition of “Connected device” that includes “any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”
While this law has largely been described as regulating the security of IoT devices, the definition of “connected devices” in the law is far broader. Indeed, using the definition of something “capable of connecting to the Internet” and “that is assigned an Internet Protocol address or Bluetooth address,” the law would cover not only IoT devices, but computers, tablets, smartphones, smart watches, and virtually any other computing device. This broad scope could be problematic: because the language applies to so many different kinds of devices, it may complicate the interpretation of the existing requirements and exceptions and make it difficult for future amendments to add new requirements to this statute.
- 1798.91.06(a): Exception for “unaffiliated third-party software or applications that a user chooses to add to a connected device.”
In light of the broad language of Section 1798.91.05(b), which places devices like laptop or desktop computers as well as smartphones within the scope of the law, the exception for unaffiliated third-party software or applications becomes much more significant. For example, for computers, hardware may be manufactured by one or more parties (e.g., HP, Intel, etc.), operating systems by another (e.g., Microsoft Windows), and applications by numerous others. Similarly, Android-based smartphones might have multiple layers of manufacturers involved. In the first instance, it is not clear to whom the law would actually apply. Furthermore, Section 1798.91.06(a) makes clear that the downstream software or application manufacturers fall outside of coverage. If this is the case, the original manufacturer of the hardware is unlikely to have real user authentication mechanisms, much less any that are used regularly. Rather, most of the authentication occurs at the operating system (e.g., password, PIN, or biometric login features) or further downstream at the application level. If the intended scope of the law is to capture these devices, further clarity would be required as to the applicability of the law’s requirements and exceptions to effect any significant change.
SB-327 was approved by the California legislature on September 6, 2018, and was signed into law by Governor Jerry Brown last week. The law is slated to go into effect on January 1, 2020, the same date that the California Consumer Privacy Act of 2018 (CaCPA) will come into force.