Connecticut Supreme Court Ruling Allows Private Plaintiff to Assert Negligence Claims Based on HIPAA

November 15th, 2014 by Paul Pittman

Recently, the Connecticut Supreme Court ruled that a plaintiff may assert state law negligence claims, informed by the Health Insurance Portability and Accountability Act (“HIPAA”), against a healthcare clinic that allegedly released confidential patient health data. The ruling enables private plaintiffs to use the standard of care set forth under HIPAA to support a negligence claim in Connecticut and could result in a flood of litigation if other state courts follow suit.

In Emily Byrne v. Avery Center for Obstetrics and Gynecology, Byrne asserted common law claims, including negligence, alleging that the healthcare clinic impermissibly released her medical records in response to a subpoena obtained by her ex-boyfriend, in violation of her right of confidentiality under HIPAA. The defendant argued that Byrne’s common law claims were preempted by HIPAA, and the trial court agreed.

The Connecticut Supreme Court reversed, holding that “neither HIPAA nor its implementing regulations were intended to preempt tort actions under state law arising out of the unauthorized release of the plaintiff’s medical records.” Pointing to numerous decisions by courts in other jurisdictions, the Court also held that “HIPAA and its implementing regulations may be utilized to inform the standard of care applicable to such claims arising from allegations of negligence in the disclosure of patients’ medical records pursuant to a subpoena.” The Court remanded the case to the trial court to allow Byrne to proceed on her negligence claims.

The decision by the Connecticut Supreme Court expands plaintiffs’ attorneys’ arsenal when it comes to claims for breaches of health care data by providing an avenue for plaintiffs to essentially assert a private right of action based on a violation of HIPAA – enforcement of which is traditionally reserved for federal regulators such as the Department of Health and Human Services. The ruling impacts not only healthcare providers and other covered entities, but also the “business associates” of covered entities, who are likewise subject to HIPAA’s compliance requirements. Plaintiffs’ attorneys will surely use this decision to assert similar claims, and to bolster related claims such as unfair competition and invasion of privacy, in other states.

It is worth noting that any favorable impact of this decision for plaintiffs could be short-lived if the trial court rejects Byrne’s negligence claims on remand. Byrne may have a particularly difficult time proving damages, a showing many data breach plaintiffs have traditionally struggled to establish.

Regardless of the outcome, covered entities and their business associates are best protected by taking the steps necessary to ensure compliance with HIPAA. Doing so will help them avoid not only the traditional enforcement actions and fines issued by regulators for HIPAA violations, but also damages sought under novel common law claims based on the HIPAA standard of care.

Federal Communications Commission Steps Loudly Into Realm of Data Privacy Enforcement

October 31st, 2014 by Paul Pittman

Last week, the Federal Communications Commission (“FCC”) announced its entrance into the data privacy enforcement realm by issuing a $10 million fine against two telecommunications companies for failing to adequately safeguard their customers’ sensitive personal information. In doing so, the FCC joined a growing list of regulatory bodies, such as the Federal Trade Commission (“FTC”) and the Department of Health and Human Services, as well as state attorneys general, that have asserted enforcement authority over companies and entities that fail to protect and secure consumer data.

FCC Finds Lax Data Security “Unfair and Unreasonable Practice” Under Communications Act

The two telecommunications companies subject to the $10 million FCC privacy breach fine – TerraCom Inc. and YourTel America, Inc. – were accused of storing consumers’ personally identifiable information (“PII”), including Social Security numbers, names, addresses and driver’s license numbers, on unsecured Internet servers. While the two companies’ privacy policies purported to safeguard customer information from unauthorized access or use by employing “technology and security features,” the PII was stored in a format, and at an unsecured location, that allowed easy access to the information over the Internet. As a result, the PII of up to 305,000 consumers was exposed.

In issuing the fine, the FCC determined that the telecommunications companies violated their duty to protect personal information by engaging in unfair and unreasonable practices under section 201(b) of the Communications Act. The FCC further determined that the companies engaged in unfair and unreasonable practices by making deceptive and misleading representations regarding their privacy protections and by failing to notify consumers of the breach.

The FCC’s data privacy breach fine follows a $7.4 million settlement with Verizon last month over allegations that Verizon used customers’ personal information to market other services without notifying the customers of their privacy rights or obtaining their consent to use that information.

FCC Enforcement Action Puts Companies on Notice But Provides Little Guidance

Generally, the FCC is authorized to exercise jurisdiction over communications companies, which include wireless, satellite and cable companies. As a result, the FCC’s ability to bring data privacy enforcement actions is limited. Nonetheless, the imposition of such a substantial fine for the telecommunications companies’ failure to adequately secure their customers’ data, in the absence of any actual injury to consumers, signals an aggressive approach by the regulator toward companies handling consumer data that fall within the FCC’s purview. Further, given the breadth of services provided by communications companies, enforcement actions by the FCC may overlap with those of other agencies with a broader reach, such as the FTC, potentially subjecting companies to multiple regulatory schemes and enforcement actions.

Ultimately, the FCC’s enforcement action and $10 million fine put companies on notice that there is an additional, active regulatory body that should be considered when developing privacy policies and implementing processes, standards and procedures. Yet the FCC’s reliance on the Communications Act in exercising enforcement authority here is likely to be called into question in a way similar to the challenges faced by the FTC. In fact, one of the FCC commissioners who dissented from the issuance of the proposed fine noted that the Communications Act “was never intended to address the security of the data on the Internet.”

Importantly, the FCC’s initial data privacy enforcement action creates ambiguity about which data security standards actually comply with the Communications Act, and provides little clarity about what other types of data the Act covers. Given these uncertainties, communications companies should tread carefully in handling consumer data until the FCC establishes a firm body of directives, decisions and rulemaking in the data privacy realm to provide guidance.

California Passes New Data Breach Laws: Requirement to Offer Identity Theft Protection at No Cost, New Duties Imposed on “Maintainers” of Personal Information, and Sale of Social Security Numbers Banned

October 3rd, 2014 by Nora Wetzel

California added new provisions to its data breach law on October 1, when Assembly Bill 1710 (AB 1710) was signed into law. The amendment to California’s Civil Code (1) requires entities that experience a data breach to provide identity theft prevention and mitigation services at no cost for 12 months if the notifying entity is the “source” of the breach, (2) requires entities that “maintain” personal information to implement the same safeguards to protect personal information as are already required for entities that own or license personal information, and (3) prohibits the sale of (or offer to sell) individuals’ social security numbers. These new provisions will undoubtedly affect any business that deals with computerized personal information.

Identity Theft Services                                                                 

The new law requires entities that own or license specified personal information to offer free identity theft protection and mitigation services for no less than 12 months to individuals affected by a data breach. Moreover, a data breach notice sent to affected individuals must include all information necessary to take advantage of the offer.

This new provision only applies if the notifying entity was the source of the breach and if specific personal information was involved. While the new law does not define “source,” the bill’s legislative history suggests that “source” refers to the location where the data breach occurred. To illustrate, a retailer would be the “source” of a data breach if hackers obtained consumers’ credit card information from the retailer’s computer system. It is not clear, however, whether the retailer or the vendor is the source of the breach when a retailer contracts with a third-party vendor, such as a cloud service provider, and the vendor’s system is breached. Presumably, the vendor would be the source. This could create tension surrounding the notification to affected individuals: the retailer has a strong interest in preserving its relationship with its customers and likely will want to control the notification message, yet the vendor may be charged with the duty to notify the affected customers.

Likewise, the new identity theft protection provision only applies to particular personal information – an individual’s first name or initial and last name combined with a social security number, driver’s license number or California identification card number – when either the name or the data elements are unencrypted. Personal information in this context does not include financial account information or medical information. Entities should verify that they encrypt this type of personal information to avoid application of the identity theft protection provision.
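
As a purely illustrative sketch of what encrypting a covered data element can look like in practice (in Python, assuming the third-party cryptography package is available; this is not legal or security advice, and the sample values are hypothetical), a name and social security number combination can be stored as ciphertext rather than in plain text:

```python
# Illustrative sketch only: encrypting a name/SSN combination at rest
# with the third-party "cryptography" package (assumed available).
from cryptography.fernet import Fernet

# In practice the key would live in a key-management system, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = "Jane Q. Public|123-45-6789"  # hypothetical name + SSN combination
ciphertext = cipher.encrypt(record.encode("utf-8"))

# Only the ciphertext is written to storage; the plaintext is discarded.
print(ciphertext)
print(cipher.decrypt(ciphertext).decode("utf-8"))  # recoverable only with the key
```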

Under most circumstances, HIPAA-covered entities will be exempt from this new provision. California’s existing law provides that HIPAA-covered entities complying “completely” with Section 13402(f) of the federal HITECH Act will be “deemed to have complied with” the section of California law requiring the offer of free identity theft protection services.

Safeguarding Personal Information Applies to Those Who “Maintain”

Another new provision requires entities that maintain personal information to: (1) implement and maintain reasonable security procedures and practices to protect that information from unauthorized access, destruction, use or modification, and (2) notify owners or licensees of that information “immediately following discovery” of a breach of the security of the data. The new law does not clearly define “maintain,” but the bill’s legislative history again suggests that the drafters intended “maintain” to refer to an entity that stores, gathers or holds personal information, as a retailer may do with a customer’s financial information, in contrast to the “owner” of that information, which would be the financial institution.

This new provision encompasses a broader scope of personal information than that included in the new identity theft protection provision. Personal information here includes financial information such as account, credit or debit card numbers with any required security code or password, or medical information, in addition to an individual’s first name or initial and last name combined with a social security number, driver’s license number or California identification card.

Entities that maintain personal information should review their security practices and procedures to ensure any personal information implicated by this new provision is adequately protected against unauthorized access, destruction, use or modification. The reasonableness of an organization’s data security safeguards will likely be based upon its size, complexity and capabilities in order to take into account the resource limitations of smaller entities.

No Sale of Social Security Numbers

California also added new provisions to its data breach law prohibiting the sale, advertisement for sale, or offer to sell individuals’ social security numbers. While the new provisions specifically exempt the release of individuals’ social security numbers incident to a larger transaction and necessary to identify the person in order to accomplish a legitimate business purpose, releasing individuals’ social security numbers for marketing purposes is expressly banned.

The new additions to California’s data breach law can be found at: http://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml.

California Enacts Smartphone Kill Switch Law to Promote Data Security

August 29th, 2014 by Matthew Fischer

This week California enacted Senate Bill 962, which requires all smartphones to include a “kill switch” that, when activated, renders the device inoperable. The law applies to all smartphones manufactured after July 1, 2015 and sold in the state, but exempts other mobile devices such as tablets and smartwatches.

While Minnesota passed a similar law in June, its statute (as well as comparable legislation pending in New York, Illinois and Rhode Island) does not require that the kill switch be enabled as the default setting, as mandated under S.B. 962. California has been a leader in privacy and data security legislation and has the nation’s largest economy and population of smartphone users. As a result, the law will have a sweeping impact, since it is unlikely that cell phone manufacturers will limit the kill switch feature to phones sold in California and Minnesota. The feature has enough supporters that similar federal legislation, the Smartphone Theft Prevention Act, was introduced in the U.S. Senate in February.

Apple iPhones with the iOS 7 operating system already include an “Activation Lock” feature that is largely compliant with S.B. 962 except for the fact that it is not a default setting. Google and Microsoft are expected to add kill switches in future versions of their operating systems.

California’s law and the growing popularity of kill switches are a response to the surge in smartphone thefts over the last year. A Consumer Reports survey indicated that approximately 3.1 million Americans were victims of smartphone theft in 2013, up from 1.6 million in 2012. Smartphone thefts are particularly prevalent in the tech-centric Bay Area, where a large percentage of the population carries mobile devices. Smartphones pose a significant data security risk because consumers store everything on them, from credit card numbers to passwords for accounts and websites, and even Social Security numbers.

The law is not without its detractors. CTIA, the trade association for the telecommunications industry, initially opposed the law out of concern that a patchwork of state-specific laws would increase costs without providing a comprehensive solution, while inhibiting competition and innovation. Opponents have pointed to the availability of other technological solutions such as remote wipe functionality. The Electronic Frontier Foundation (“EFF”) remains opposed due to concerns about potential civil rights abuses and the possibility of criminal exploitation. EFF representatives have expressed concern that a kill switch could be used by perpetrators of domestic violence and stalking crimes to prevent victims from reporting the abuse, and would give law enforcement a means to disable the smartphones of protestors, akin to 2011, when cell phone access in BART subway stations was shut down in response to a planned protest. Another worry is that hackers could potentially gain access to the kill switch.

Retailers could incur a civil penalty ranging from $500 to $2,500 per smartphone sold in violation of the law.

Class Action Plaintiffs Look to Fair Credit Reporting Act for Private Relief from Data Breaches Involving Health Information

August 21st, 2014 by Paul Pittman

A recent class action brought against the University of Miami (“University”) previews what could become an emerging trend among plaintiffs’ class action attorneys: seeking damages for the unauthorized disclosure of personal health information under the Fair Credit Reporting Act (“FCRA” or the “Act”). Enforcement actions for data breaches involving the unauthorized disclosure of personal health information (“PHI”) by health care systems or hospitals typically fall under the purview of the Department of Health and Human Services pursuant to the Health Insurance Portability and Accountability Act (“HIPAA”). Recently, however, class action plaintiffs’ attorneys have advanced novel arguments in an attempt to bring data breaches involving PHI under the protections afforded by the FCRA.

The FCRA governs Credit Reporting Agencies (“CRAs”) and was enacted to ensure that CRAs assemble personal information on consumers accurately and fairly while maintaining the privacy of that information. 15 U.S.C. § 1681a(f). Typically, CRAs assemble and sell “consumer reports” for businesses, such as credit card companies and banks, to use in evaluating a consumer’s eligibility for credit, insurance or employment. 15 U.S.C. § 1681a(d). The FCRA requires that CRAs follow reasonable procedures to protect the information. 15 U.S.C. § 1681e(a). Well-known CRAs include Experian, TransUnion and Equifax. Notably, the FCRA provides for statutory damages of up to $1,000 and punitive damages for willful noncompliance with the Act. 15 U.S.C. § 1681n(b). Attorney’s fees may also be collected under the Act. 15 U.S.C. §§ 1681n(c) & 1681o(b).

Class Action Claims Against the University of Miami Health System

In February, current and former patients (“Patients”) filed a class action complaint in the U.S. District Court for the Southern District of Florida against the University, alleging that the University allowed unauthorized access to confidential records of putative class members, including PHI, held by a third-party offsite records vendor, without their knowledge or consent and without sufficient security.

The Patients asserted, among other things, that the hospital violated the FCRA by failing to implement adequate safeguards to protect their personally identifiable information and PHI from a data breach suffered by the third-party vendor. The Patients argued that the hospital was a CRA that created “consumer reports” containing sensitive information, including names, dates of birth, social security numbers, billing information and confidential health records, and disseminated this information to medical service providers affiliated with the University. The Patients alleged that the University allowed employees of the outside vendor and others to gain unrestricted access to the patients’ personally identifiable information and PHI, which was allegedly misused and intentionally disclosed to third parties for profit.

The University settled these claims last week for just over $100,000, before the court could consider the viability of the plaintiffs’ arguments under the FCRA. However, a class action is currently pending in the U.S. District Court for the Middle District of Alabama in which hospital patients advance similar arguments under the FCRA regarding a hospital’s disclosure of medical and personal information. With the University’s case settled, the outcome of the Alabama case may be the first indication of how courts will treat these arguments under the FCRA.

Fair Credit Reporting Act

Plaintiffs’ theory of liability under the FCRA is likely based on the fact that the Act specifically restricts the reporting of medical information to limited purposes and only if the patient has specifically consented to the disclosure. 15 U.S.C. § 1681b(g). The Act also allows for the distribution of consumer reports for “any legitimate business need.” 15 U.S.C. § 1681b(3)(e). However, it is questionable whether hospitals and healthcare systems are CRAs that engage in the business of “regularly assembling or evaluating consumer credit information or other information on consumers for the purpose of furnishing consumer reports to third parties.” Hospitals have not traditionally been considered CRAs. Further, hospitals typically collect personal identity information and PHI for their own business and record keeping purposes, not for the purpose of creating and furnishing “consumer reports” to third parties as is required under the FCRA.

Emerging Cause of Action for Data Breach Involving Private Health Information

Importantly, the claims asserted by class plaintiffs in these cases illustrate a novel use of the FCRA in the context of private health data. Plaintiffs have traditionally looked to HIPAA to redress data breaches involving PHI. However, should courts accept the argument that hospitals and medical providers are CRAs subject to the requirements of the FCRA, plaintiffs will be able to assert claims for statutory and punitive damages themselves, rather than relying on the Department of Health and Human Services to institute enforcement actions under HIPAA when data breaches occur. As the recent data breach of 4.5 million patient records at Community Health Systems, Inc. illustrates, the number of patient records involved in a particular incident can produce very substantial, and potentially crippling, statutory damages. If plaintiffs’ claims under the FCRA find traction, hospitals, medical providers and healthcare systems can certainly expect these types of private patient actions to follow.
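
To put rough, illustrative numbers on that exposure: if each of those 4.5 million records supported even the $1,000 statutory maximum for willful noncompliance, potential statutory damages alone would approach $4.5 billion, before any punitive damages or attorney’s fees are considered.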

FTC Clarifies COPPA “Verifiable Parental Consent” Requirements

July 21st, 2014 by Afigo Fadahunsi

The Federal Trade Commission (FTC) has modified the guidelines it issues to developers who make apps specifically for children. App developers have capitalized on the lucrative and fast-growing market for apps aimed at a younger audience that not only enjoys modern technology but increasingly relies on it for educational development, as more school districts introduce tablets in the classroom. With the increasing and constant presence of children’s online activity, however, comes a greater degree of parental concern for their privacy.

The Children’s Online Privacy Protection Act (COPPA) was created primarily to protect children under the age of 13 from the collection of their personal data online for commercial use. The goal of COPPA is to keep parents in control of what their children under 13 are viewing and disclosing on the Internet. The FTC’s recent changes to its guidelines not only ensure that app developers and app stores notify parents of how their children are using apps, but also reaffirm these entities’ obligation to obtain verifiable parental consent before collecting personal information from children.

The FTC initially provided that charging a parent’s credit card was sufficient to establish parental consent, as the parent, at the very least, would see the charge on the monthly statement and would have notice of the child’s activity. In its revisions, the FTC now clarifies that a credit card need not actually be charged to obtain parental consent, so long as the collection of the credit card number is supplemented with other effective safeguards, such as questions to which only a parent would know the answer.

The FTC also revised its guidelines to establish that the developer of a child-related app may use a third party, such as an app store, to obtain parental consent on its behalf. In that instance, if the app store provides the required notice and consent verification prior to or at the time of the purchase of a mobile app for children under 13, the mobile app developer may rely on that consent.

Finally, the FTC suggested that it supports the creation of “multiple-operator” methods or common consent mechanisms – app stores that assist developers operating on their platform with providing a verifiable consent mechanism will not be held liable under COPPA so long as they do not “misrepresent the level of oversight [provided] for a child-directed app.”

Canada’s Anti-Spam Legislation (CASL) Will Impact U.S. Companies

July 7th, 2014 by Matthew Fischer

Canada’s Fighting Internet and Wireless Spam Bill, better known as Canada’s Anti-Spam Legislation (CASL), was enacted in December 2010, but enforcement of the law did not commence until Canada Day, July 1, 2014. The law impacts any U.S. company or individual sending commercial electronic messages (CEMs) to businesses in Canada, and it differs in several respects from the restrictions under the CAN-SPAM Act and the TCPA in the U.S. As a result, a “one size fits all” approach to electronic marketing campaigns that include our neighbors to the north will not work.

The law applies to CEMs sent from or to computers and devices located in Canada. It includes emails, SMS, instant messaging and certain social networking communications that are sent to email addresses, instant message accounts, phone accounts and social media accounts for the purpose of conveying commercial or promotional information to customers or prospects in Canada. Fax messages do not fall under the statute.

CASL also prohibits the altering of transmission data, and the installation of a computer program without consent, but this post will focus on the CEM aspect of the statute.

Consent

Unlike CAN-SPAM, which follows an “opt-out” model, CASL requires an “opt-in” mechanism whereby senders must first obtain either implied or express consent before sending a CEM. Accordingly, marketers cannot use a pre-checked toggle box when seeking consent.

Implied consent exists if the recipient: (1) has purchased a product or service, or entered into another business deal, agreement or membership with the sender within the last two years; or (2) has made a donation or gift to, volunteered with, or been a member of the sender within the last two years, where the sender is a registered charity or political organization.

Unlike the TCPA, express consent under CASL may be obtained orally or in writing, but it must be sought separately for each of the three acts covered by CASL (i.e., sending a CEM, altering transmission data and installing a computer program). A request for written consent must include:

  • A clear and concise description of the purpose for which consent is sought;
  • The name of the person seeking consent, or the person on whose behalf consent is sought;
  • The requestor’s contact information (mailing address, and either a telephone number, email address or website URL);
  • A statement that the recipient can withdraw consent at any time.

The Act is ambiguous with respect to a number of written consent issues, however, such as whether the person seeking consent must specify the particular device that will receive the CEMs, whether a hyperlink to the requestor’s contact information is permitted and the level of detail required for the purpose statement.

The Canadian Radio-television and Telecommunications Commission (CRTC) has provided guidance for obtaining oral consent, which it deems sufficient if the consent can be verified by an independent third party, or if a complete and unedited audio recording of the consent is retained by the person seeking consent.

A number of categories of electronic messages are exempt from CASL, including:

  • CEMs sent between businesses that have an ongoing business relationship and that are sent by an employee, representative, contractor or franchisee and that are relevant to the business, role, function or duties of the recipient. Also exempt are CEMs sent to third-party business partners.
  • Messages sent and received via an electronic messaging service, provided that (i) the information and unsubscribe mechanism that are required under the Act are conspicuously posted and readily available on the user interface through which the CEM is accessed and (ii) the recipient either expressly or implicitly consented to receive it.
  • If the sender has a personal or family relationship with the recipient.
  • Messages sent to consumers in response to requests for information, inquiries or complaints.
  • Third-party referrals, provided the sender identifies in the CEM the full name of the referring person and the referring person has a current relationship (personal or business) with the recipient.
  • CEMs regarding the delivery of a product or service in relation to a previous transaction, including messages to facilitate or complete a transaction.
  • Messages sent by telecommunications service providers for the installation of computer programs without consent in order to either (i) protect network security, (ii) upgrade or update the network, or (iii) correct a failure in the operation of a computer system or program installed on the network.
  • CEMs sent to a limited-access secure and confidential account to which messages can only be sent by the person who provides the account.
  • Messages sent by a registered charity or political organization with the primary purpose of raising funds.
  • Messages sent to satisfy a legal obligation, or to provide notice of or enforce a legal right, order, obligation or judgment.

Enforcement

Three different government agencies will share enforcement responsibility for CASL. The main enforcement body is the CRTC, which will issue administrative monetary penalties for sending non-compliant CEMs, altering transmission data (e.g., misdirecting users to a website they did not intend to visit), or installing computer programs on a system without express consent. The Competition Bureau will administer monetary penalties or criminal sanctions for false and misleading representations and deceptive marketing practices. The Office of the Privacy Commissioner will enforce against the collection of personal information through unauthorized access to computer systems and against the harvesting of electronic addresses to compile bulk email lists.

Penalties for the more serious violations can range as high as $1 million for individuals and $10 million for businesses, per violation. The law is being implemented in stages, and starting July 1, 2017, a private right of action will be permitted against violators, who could be liable for statutory damages as high as $1 million per day.

Compliance Considerations

Companies marketing to businesses in Canada should create a checklist to ascertain whether a message constitutes a CEM and, if so, whether any of the many exceptions apply. We also recommend undertaking a thorough review of existing policies and guidelines, or developing new ones, for requesting consent to send CEMs and structuring a database to maintain records of each consent obtained (whether written or verbal). Existing databases of email addresses and phone numbers must be reviewed and scrubbed, if necessary, to determine which means of contact are still valid (i.e., an existing business relationship can be verified) and whether a new consent should be obtained. Businesses should also update their CEM templates and “unsubscribe” mechanisms to ensure compliance with the CEM aspect of the new law.
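
For companies structuring that consent database, the following is a minimal, purely illustrative sketch (in Python) of what a single consent record might capture. The field names are our own assumptions for illustration; only the four written-consent elements summarized above (purpose, identity of the requester, contact information and notice of the right to withdraw) come from the statute as described in this post.

```python
# Illustrative sketch only: one possible structure for a CASL consent record.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    recipient_address: str         # email address, SMS number or social media account
    consent_type: str              # "express" or "implied"
    channel: str                   # e.g., "web form" or "oral (recorded)"
    purpose_description: str       # purpose for which consent was sought
    requested_by: str              # person or organization seeking consent
    requester_contact: str         # mailing address plus phone, email or website URL
    withdrawal_notice_given: bool  # recipient told consent can be withdrawn at any time
    obtained_at: datetime          # when consent (or the transaction supporting
                                   # implied consent) was obtained

# Hypothetical usage
record = ConsentRecord(
    recipient_address="buyer@example.ca",
    consent_type="express",
    channel="web form",
    purpose_description="Monthly product newsletter",
    requested_by="Example Corp.",
    requester_contact="123 Main St., Toronto; privacy@example.com",
    withdrawal_notice_given=True,
    obtained_at=datetime(2014, 7, 15),
)
```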

Aereo Loses Battle with Broadcasters Over Online Television Programming

June 30th, 2014 by Paul Pittman

In a highly anticipated decision, the Supreme Court ruled on Wednesday that Aereo Inc.’s online service, which streams television programming over the Internet, infringed the exclusive right of television broadcasters to publicly perform those broadcasts under the Copyright Act. The decision is a win for major broadcasters and content providers hoping to prevent online upstarts from impermissibly poaching their content. However, the Supreme Court’s ruling is limited, and its application to future cases involving television content providers and storage is uncertain because the decision does not set forth a clear standard for identifying the types of entities and services that violate the public performance provision of the Copyright Act.

Aereo’s Service

For a monthly fee, Aereo broadcast television programming online to its subscribers virtually simultaneously with the actual broadcast of the programming on television. Aereo’s programming included copyrighted works that Aereo did not own and had no license to broadcast.

Technically, Aereo’s service worked as follows: a subscriber visited Aereo’s website and selected the desired television program. Aereo’s servers selected an antenna, dedicated specifically to that subscriber, to pick up the broadcast and convert it to digital form for transmission across the Internet. The digital version of the broadcast was then sent to Aereo’s server, which saved the data into a folder assigned to that subscriber, creating a personal copy of the broadcast. Once several seconds of programming had accumulated, Aereo’s servers began streaming the programming to the subscriber until the entire show had been delivered. The digital copy was created solely for that subscriber and was not sent to other subscribers; if multiple subscribers wanted to watch the same show, Aereo created an individual copy of the program in each subscriber’s own folder and streamed the content from that copy.
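
To make those mechanics easier to follow, here is a minimal, purely illustrative simulation (in Python) of the per-subscriber flow described above. Every name in it is our own invention for illustration; it is not Aereo’s actual code.

```python
# Illustrative simulation of the per-subscriber flow described above.
STARTUP_BUFFER_CHUNKS = 3  # stand-in for "several seconds of programming"

class Subscriber:
    def __init__(self, name):
        self.name = name
        self.folder = []   # personal folder holding this subscriber's copy
        self.screen = []   # chunks actually streamed to this subscriber

def tune_dedicated_antenna(program):
    """Pretend a dedicated antenna picks up the broadcast and digitizes it."""
    return [f"{program}-chunk-{i}" for i in range(10)]

def serve(subscriber, program):
    digital_feed = tune_dedicated_antenna(program)
    for chunk in digital_feed:
        subscriber.folder.append(chunk)          # personal copy, per subscriber
        if len(subscriber.folder) >= STARTUP_BUFFER_CHUNKS:
            subscriber.screen.append(chunk)      # streaming begins after buffering

# Two subscribers requesting the same show each get their own copy and stream.
alice, bob = Subscriber("alice"), Subscriber("bob")
serve(alice, "evening-news")
serve(bob, "evening-news")
assert alice.folder is not bob.folder
```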

The copyright owners of these television programs – television producers, marketers, distributors and broadcasters – brought suit against Aereo for infringing their right to publicly perform their works under the Transmit Clause of the Copyright Act and sought a preliminary injunction against Aereo’s service. The District Court for the Southern District of New York denied the preliminary injunction, finding that Aereo’s service did not transmit the broadcasts to the public in violation of the Copyright Act, but rather sent private transmissions to individual subscribers. The Second Circuit affirmed, and the Supreme Court granted certiorari.

Does Aereo’s Service Transmit a Performance to the Public?

The “exclusive right” to perform a work is defined in the Transmit Clause of the Copyright Act as the right to “transmit or otherwise communicate a performance . . . of the copyrighted work . . . to the public, by means of any device or process.” Based on these provisions of the Copyright Act, the Supreme Court (“Court”) identified two determinative issues: (1) whether Aereo’s service “performs”; and (2) if so, whether the performance was done publicly.

Aereo’s Performance

To determine whether Aereo “performed,” the Court focused on the purpose of the Copyright Act as gleaned from its 1976 amendments. The Copyright Act was amended to specifically address early cable TV providers who carried local television broadcasts to subscribers in other cities. These early cable TV providers used antennas to receive television signals and coaxial cables to carry the signals to their subscribers’ television sets. A subscriber was free to choose the desired program by turning the knob on his or her television set. Under the law existing prior to the 1976 amendments, broadcasters “performed” because they selected the programs to be viewed and sent the programming to viewers. Viewers, however, a category that included the cable TV providers, did not “perform” because they simply carried the programs they received.

The Court explained that the Copyright Act was amended in 1976 to erase this distinction between broadcasters and viewers, clarifying that one “performs” an audiovisual work by showing its “images in any sequence” or by making “the sounds accompanying it audible.” The Copyright Act’s Transmit Clause further addressed this activity by defining the transmission of a “performance” as a communication “by any device or process whereby images or sounds are received beyond the place from which they are sent.” The Court determined that these amendments make clear that an entity that acts like the early cable TV providers “performs” when it enhances a viewer’s ability to receive broadcast television signals.

With that in mind, the Court held that Aereo “performed” under the Copyright Act because Aereo’s service was substantially similar to the service provided by the early cable TV providers that the Copyright Act sought to address. The Court dismissed the differences in underlying technology between Aereo and the early cable TV providers – Aereo’s subscribers had to initiate the transmission of a specific broadcast online, while early cable TV providers sent a continuous feed to subscribers’ televisions – finding that this difference was invisible to the consumer and that the similarity of Aereo’s service to early cable TV was controlling.

Work Performed Publicly

Aereo argued that it did not “perform” any work “publicly” because the performance it transmitted was a new performance, distinct from the original broadcast, that was created by its act of transmitting. Aereo further argued that each new performance was transmitted privately to a single subscriber, not publicly. The Court found that even accepting Aereo’s argument that its transmission was a new performance, Aereo’s service nonetheless fell within the Copyright Act because the new performance still communicated the same images and sounds contained in the original broadcast “by means of a device or process,” albeit contemporaneously.

Importantly, the Court determined that Aereo transmitted the work to the public. Although Aereo’s service stored individual copies for each subscriber, the Court considered this a technological detail that had no effect on either the ultimate viewing experience of Aereo’s subscribers or Aereo’s commercial objectives. To the Court, the nature of the service Aereo provided was indistinguishable from that of the early cable TV providers, who performed publicly. The Court found that although Aereo created and transmitted personal copies of the works to each subscriber, its service as a whole showed the same work (images and sounds) to multiple subscribers who requested the broadcast. These subscribers constitute the public, since they are unrelated and unknown to each other.

Having found that Aereo’s service “performs” the copyrighted works and transmits those performances to the “public” in a manner similar to the early cable TV providers, the Court held that Aereo infringed the plaintiffs’ exclusive rights under the Copyright Act and reversed and remanded the case.

Future Impact

In Aereo, the Court applied a standard that simply looked at the similarities between the nature of the service provided by Aereo and that of the early cable TV providers. This “cable TV-like” standard is inexact and may create uncertainty in its application going forward. The Court itself acknowledged that the Aereo decision may have limited application. Whether and to what extent Aereo can be applied beyond its limited holding may be answered soon, as Fox Broadcasting Co. is already wielding Aereo in support of its infringement claims in the Ninth Circuit against Dish Network over features of Dish’s digital video recorder (DVR) service that allow users to upload recorded broadcasts to mobile devices. (See Fox Broadcasting Co., et al. v. Dish Network, et al., Civ. Case No. 13-56818.)

The Court further noted that the decision is not intended to discourage the development of new technologies. Novel issues concerning other technologies, such as cloud computing and DVR services, will have to be addressed as they come before the Court. Despite the Court’s attempt to tread carefully, the decision to ignore the intricacies of the technology and focus on the ultimate effect of the service provided is significant and could hinder the growth of unique technologies in this area.

Ultimately, Aereo allows television broadcasters and content providers to breathe a sigh of relief knowing that the Copyright Act protects them from online entities seeking to offer television services by circumventing copyright laws and impermissibly siphoning off broadcasts. Aereo has been forced to shut down its service since the Supreme Court ruling. Any online media company or content provider hoping to broadcast television content must work with the major broadcasters, obtain licenses and pay fees to do so. Otherwise, they risk suffering the same fate as Aereo.

FTC Report Sheds Light on Dark World of Data Brokers

June 11th, 2014 by Paul Pittman

The Federal Trade Commission (“FTC”) recently issued a report entitled “Data Brokers: A Call for Transparency and Accountability” that identifies various concerns raised by the “Big Data” industry and proposes solutions for addressing those concerns. The report focuses on data brokers – entities that collect consumer personal information and share or sell it to third parties – who are largely invisible to the public. The FTC sought to illuminate the practices of the data broker business, in which consumer data is bought and sold with no direct consumer interaction. To that end, the FTC ordered nine data brokers – Acxiom, Corelogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf and Recorded Future – to provide information about the way they collect and use data, and how they enable consumers to access and control that data.

In the report, the FTC found that the data brokers have collected over a billion pieces of consumer data from a wide range of commercial, government and public sources. The data brokers create products using this raw data and the inferences made from that data to sell to clients. The FTC separated the products provided by data brokers to their clients into three categories: (1) marketing products; (2) risk mitigation products; and (3) people search products.

• With marketing products, data brokers provide clients with access to consumer contact information and interests determined through the use of cookies and other tracking mechanisms to facilitate targeted advertising of relevant products to consumers. In some cases, data brokers create products based on categories such as “Dog Owner” or “Motorcycle Lover,” for example, but also may segment consumers based on sensitive categories such as ethnicity or income level. While some data brokers may allow consumers limited access to the data collected from them, consumers are likely unaware that they have such access.

• For the risk mitigation products, data brokers provide their clients with the ability to verify a consumer’s identity and determine the existence of any fraudulent activity associated with a consumer. However, data brokers do not typically allow consumers to correct their information.

• A few data brokers also provide websites that enable people searches, through which users can obtain publicly available information about consumers. Unlike with the marketing and risk mitigation products, data brokers providing people search websites typically allow consumers to correct their information and to opt out of having it disclosed.

According to the report, the products provided by data brokers benefit consumers in many ways, by allowing businesses to tailor advertisements effectively, improve products, protect consumers from identity fraud and facilitate consumer interaction. However, the agency noted that the practices of some data brokers can also create risks for consumers if the information collected is incorrect, or if a consumer is identified in a negative way or segmented based on discriminatory criteria. In addition, the collection of such vast amounts of consumer data creates a security risk if that data is held indefinitely.

Call for Congress to Act

Based on its findings, the FTC called on Congress to enact legislation that would require data brokers to give consumers detailed access to their data and allow consumers to opt out of having their data disseminated for marketing purposes. According to the agency, any such legislation should:

1. Enable consumers to identify the data brokers who collect their information, and determine how to access and opt out of the collection of that data, such as through a centralized portal;

2. Require data brokers to disclose how they use raw data and the types of inferences that are made from the raw data;

3. Require data brokers to disclose the sources of data to allow consumers to correct their data if necessary;

4. Ensure that consumers receive notice when a company shares their data with data brokers, that consent is obtained (especially for sensitive data such as health information), and that consumers have the ability to opt out of having their data shared with data brokers; and

5. Require disclosure when a company uses risk mitigation products to limit a consumer’s ability to complete a transaction, especially where doing so adversely impacts the consumer’s ability to obtain certain benefits.

Outside of congressional action, the FTC also recommends that companies institute best practices, such as implementing privacy-by-design (considering privacy issues at every stage of product development) and refraining from collecting information from minors. Data brokers should also take steps to ensure that others who use the data they gather do not use it for eligibility determinations or to unlawfully discriminate.

Shifting Tide

The FTC report sheds light on the data brokerage industry and calls for legislation to protect consumers. So far, Congress has been slow to enact data privacy legislation despite recent high profile consumer data breaches. However, momentum is building. The White House issued a report last month focusing on how companies gather and use data about individuals online, and how those practices could be used to discriminate against certain groups. And at least one bill – the Data Broker Accountability and Transparency Act of 2014, introduced by Senators John Rockefeller (WV) and Edward Markey (MA) – would require data brokers to disclose their data collection practices and provide consumers with options to control their information.

Even in the absence of clear direction from Congress, the FTC’s enforcement authority should compel any company that collects consumer data – not just data brokers – to heed the FTC’s recommendations. As the FTC has shown, when it issues guidance on data privacy issues, it will take action against companies that do not comply. At a minimum, companies collecting consumer data should notify their customers that their data may ultimately be sold to data brokers, obtain consent, and provide customers with the opportunity to opt out of such disclosure. Doing so will help ensure that companies meet the FTC’s expectations for consumer data use and collection.

California AG’s Guidelines for CalOPPA and Do Not Track Disclosures

June 1st, 2014 by Matthew Fischer

California Attorney General Kamala Harris recently released guidelines for compliance with the California Online Privacy Protection Act (“CalOPPA”), entitled Making Your Privacy Practices Public. Attorney General Harris stated that the guide can be used as “a tool for businesses to create clear and transparent privacy policies that reflect the state’s privacy laws and allow consumers to make informed decisions.”

While the guide addresses the entire statute, it has been eagerly awaited for the insight it provides into the most recent amendments to CalOPPA under Assembly Bill 370, which requires all privacy policies to describe how website operators respond to Do Not Track signals. The amendments have created a great deal of uncertainty because there is no general consensus even as to the definition of Do Not Track. AB 370 compels website operators to disclose: (1) how they respond to a consumer’s Do Not Track signal or other similar mechanism if they collect Personally Identifiable Information (PII) about individual consumers’ online activities across time and websites, and (2) whether they allow other parties to collect PII about an individual consumer’s online activities when a consumer uses the operator’s website.
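
For context on what “responding” to the signal involves technically, the Do Not Track preference is transmitted by the browser as a simple HTTP request header (DNT: 1). Below is a minimal sketch, assuming a Python/Flask web application chosen purely for illustration, of detecting that header so a site can honor whatever response it chooses to disclose in its privacy policy.

```python
# Minimal sketch, assuming a Flask application, of detecting the browser's
# Do Not Track signal; how an operator responds (and then describes that
# response in its privacy policy) is the operator's own decision.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    do_not_track = request.headers.get("DNT") == "1"
    if do_not_track:
        # e.g., skip setting third-party tracking cookies for this visit
        return "Tracking disabled for this visit."
    return "Standard tracking applies, as disclosed in our privacy policy."

if __name__ == "__main__":
    app.run()
```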

The Online Tracking and Do Not Track section of the guide makes three key recommendations:

  • Make it easy for consumers to find the section in the privacy policy regarding online tracking by using clear labels and headers, such as: “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures.”
    • This suggestion exceeds the literal requirements under CalOPPA which only obligates a website operator to include its tracking disclosures within its privacy policy and does not address the placement of the disclosure or formatting issues.
  • Describe how the operator responds to a browser’s Do Not Track signal or other such mechanism, which is more transparent and therefore preferable to simply providing a link to a related program or protocol.
    • This recommendation also surpasses the statutory requirements.  An operator can satisfy its disclosure obligations regarding its own tracking policies by including a link to a program that offers consumers a choice about online tracking in lieu of describing how it responds to a Do Not Track signal. If a website operator does include a link to another program to which it adheres, the guide encourages operators to provide a general description of what that program does.  CalOPPA does not require this added step if an operator decides to use a link.  
  • State whether other parties are or may be collecting PII of consumers while they are on your site or service. 
    • Compliance with this aspect of CalOPPA may require the website operator to ensure that only approved third parties collect PII from consumers on the site and to verify that those third parties, in fact, comply with the operator’s Do Not Track policy.

Highlights of the guide that address other aspects of CalOPPA include the following:

  • Use plain, straightforward language that avoids technical or legal jargon and that is easy to understand through the use of a layered or other clear format.
  • Explain the site’s uses of PII beyond what is necessary for fulfilling a consumer transaction.
  • Whenever possible, provide a link to the privacy policies of third parties with whom you share PII.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Tell your customers whom they can contact with questions or concerns about your privacy policies and practices.

California has been at the forefront of online privacy protections for consumers, and Harris has already demonstrated her willingness to enforce CalOPPA: she sued Delta Air Lines under the statute in December 2012 for failing to include a privacy policy in its mobile app. With the issuance of these long-awaited guidelines, companies should review their online privacy policies to ensure that they meet the Do Not Track disclosure requirements, as well as the other provisions of CalOPPA, as more enforcement measures by the California AG can be expected.
