DAA RELEASES AD MARKER IMPLEMENTATION GUIDELINES FOR MOBILE

April 13th, 2014 by Matthew Fischer

On April 7, the Digital Advertising Alliance (DAA) announced the release of its Ad Marker Implementation Guidelines for Mobile (Ad Marker Guidelines) at the Interactive Advertising Bureau’s (IAB) Mobile Marketplace conference. The DAA is a consortium of national advertising and marketing trade groups that acts as an industry self-regulatory body. While the DAA traditionally focused on online advertising, the surge in mobile advertising in the last few years has caused it to increasingly address issues unique to the mobile ad space. The Ad Marker Guidelines follow on the heels of the DAA’s publication last summer of a policy guidance document on mobile advertising titled, “Application of Self-Regulatory Principles to the Mobile Environment.”

The DAA’s AdChoices (Ad Marker) icon is the blue triangular image that serves as the centerpiece of the organization’s AdChoices program and is often delivered in or alongside interest-based ads in both the online and mobile environments. Approved text accompanying the icon includes any of the following:

  • Why did I get this ad?
  • Interest Based Ads
  • AdChoices

When a consumer clicks on the Ad Marker, they receive information about the targeted nature of the advertisement and guidance on how to opt out of behaviorally targeted advertising. The Ad Marker Guidelines “address use cases in which consumers interact with the screen without using a cursor, as is the case when they use mobile devices such as smart phones and tablets.”

The Ad Marker Guidelines cover both in-ad implementation (i.e., size, touchpad area, in-ad placement and in-ad user experience) and app developer and publisher implementation (i.e., ad marker placement and flow for developers and publishers). Below are some of the key takeaways.

In-Ad Implementation

Size: The smaller screen size and ad creative sizes associated with mobile devices justify implementation of the Ad Marker through the icon itself, provided it is at least 12 pixels by 12 pixels in size.

Touchpad Area: The Ad Marker should include an invisible touch pad area of between 20×20 and 40×40 pixels, providing enough room for the user to easily interact with the Ad Marker on a mobile device.

In-Ad Placement: For an in-ad placement, the entity serving the notice may position the Ad Marker in any one of the four corners of the ad, although placement in the upper right-hand corner is discouraged because that is where the close button for ads is normally located. When the icon is used concurrently with approved text, the Ad Marker Guidelines recommend placing the icon in the immediate corner of the ad with the approved text adjacent to the icon.
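In front-end terms, these size, touch-area and placement specifications map onto a few lines of layout code. Below is a minimal, hypothetical TypeScript/DOM sketch; the function name, notice URL and icon asset are placeholder assumptions, not anything the DAA guidelines prescribe.

```typescript
// Hypothetical sketch: render an AdChoices-style marker inside an ad
// container per the guidance above. Asset paths and URLs are placeholders.
function addAdMarker(adContainer: HTMLElement): void {
  // Invisible touch target within the recommended 20x20-40x40 px range.
  const touchTarget = document.createElement("a");
  touchTarget.href = "https://example.test/ad-choices"; // placeholder notice URL
  Object.assign(touchTarget.style, {
    position: "absolute",
    bottom: "0",
    left: "0", // a lower corner; the upper right is discouraged (close button)
    width: "40px",
    height: "40px",
    display: "flex",
    alignItems: "flex-end",
    justifyContent: "flex-start",
  });

  // The icon itself must be at least 12x12 pixels.
  const icon = document.createElement("img");
  icon.src = "adchoices-icon.png"; // placeholder asset
  icon.alt = "AdChoices";
  icon.style.width = "12px";
  icon.style.height = "12px";

  touchTarget.appendChild(icon);
  adContainer.style.position = "relative";
  adContainer.appendChild(touchTarget);
}
```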

In-Ad User Experience: Tapping on the Ad Marker results in any one of the following four experiences:

  • A direct link to a notice containing a mechanism that allows users to exercise their interest-based advertising preferences, or to instructions for device-specific advertising preferences.
  • An interstitial that gives the user the choice to access a preference mechanism, access a privacy policy, go back to the ad, or close the interstitial.
  • A first tap on the icon expands the notice to show the approved text, and a second tap brings the user to the preference mechanism or to instructions for device-specific controls (see the sketch after this list).
  • When the user taps the Ad Marker in a rich media ad that is in a collapsed state, the Ad Marker icon expands to provide the user with the option to: (i) close the in-ad interstitial to view the ad; (ii) access the privacy policy; or (iii) access a preference mechanism or instructions for device-specific controls.
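The third experience, in particular, reduces to a small piece of tap-handling state. A hedged sketch, with every name and URL invented for illustration:

```typescript
// Hypothetical two-tap flow: the first tap expands the marker to show the
// approved text; the second tap navigates to the preference mechanism.
let expanded = false;

function onAdMarkerTap(marker: HTMLElement): void {
  if (!expanded) {
    // First tap: reveal the approved text alongside the icon
    // (simplified here to appending a text label).
    marker.append(" AdChoices");
    expanded = true;
  } else {
    // Second tap: hand off to the preference mechanism or to
    // device-specific instructions.
    window.location.href = "https://example.test/preferences"; // placeholder
  }
}
```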

App Developer and Publisher Implementation

The Ad Marker Guidelines advise that “[w]hen implementing the DAA Ad Marker, application developers and mobile Web publishers need to consider both the placement of the Ad Marker and user access to the notice and choice it provides.”

Mobile publisher notices should use any of the three approved texts, and when the icon accompanies an approved text, it should be at least 12 pixels by 12 pixels in size.

In apps, the notice should be accessible from the app’s Settings menu; on the mobile web, the best placement for the notice is in the page footer.

The Ad Marker Guidelines provide practical, easy-to-understand directions that will allow those serving ads in the mobile environment, including those on the creative side, to utilize the Ad Marker icon consistently. Use of the Ad Marker helps facilitate compliance with the enhanced notice requirements set forth in the DAA’s Application of Self-Regulatory Principles to the Mobile Environment.

FTC Jurisdiction Now Includes Whether Your Data Security Protocols Are Reasonable

April 8th, 2014 by Jia-Ming Shang

When it rains, it pours. This morning I posted a piece on the Fourth Circuit’s recent decision in FTC v. Ross, 2014 WL 703739, No. 12-2340 (4th Cir. Feb. 25, 2014) as a precursor to a case before a district court in New Jersey (FTC v. Wyndham Worldwide Corporation, et al.) on the scope of the FTC’s jurisdiction over businesses’ data security efforts. This afternoon, the New Jersey district court issued its opinion in Wyndham.

Nearly three months after briefing was completed, the United States District Court for the District of New Jersey today ruled in a 42-page opinion that the Federal Trade Commission (“FTC”) had jurisdiction to investigate and punish data security breaches at private corporations under its statutory authority to prevent unfair business practices. FTC v. Wyndham Worldwide Corporation, et al., No. 13-1887(ES), Docket No. 181 (D.N.J. April 7, 2014). Under the mantle of its consumer protection mandate, the FTC has recently pursued civil penalties against companies that fail to adequately protect consumers’ personal information and has publicly lobbied for exclusive authority over the storage of consumer information online. According to a December 3, 2013 article in the National Law Journal:

[FTC Chair Edith] Ramirez said she favors making the FTC the sole federal agency in charge of enforcing a uniform set of national data breach notification requirements. Such requirements would compel businesses to notify consumers of a data breach promptly, and also to notify credit bureaus. The FTC has urged Congress to give the agency civil penalty authority against companies that fail to maintain reasonable security.

The FTC sued Wyndham Worldwide for unfair and deceptive practices based on Wyndham’s representation to customers that it used industry-standard practices to protect consumer online data when in fact, the FTC says, it did not. Over the course of two years, hackers from Russia repeatedly exploited weak spots in Wyndham’s networks and accessed credit card and other personal information of thousands of Wyndham customers. Fraudulent credit card charges to Wyndham customers exceeded $10 million, according to the FTC.

The fact of the data breaches was undisputed, but Wyndham moved to have the FTC complaint dismissed in part on the grounds that the agency’s consumer protection jurisdiction did not extend to the adequacy of its Internet security systems, and that the agency lacked authority to dictate how companies store consumer information. Wyndham argued that Congress did not intend to broadly extend jurisdiction over all information security, instead limiting the FTC to specific subjects such as the protection of children in online activity and the regulation of how financial institutions use consumer information. The FTC countered that its broad consumer protection mandate naturally extended to corporate Internet security, even in the absence of explicit Congressional authority and particularly in cases causing actual consumer injury. The FTC claimed jurisdiction because Wyndham’s allegedly weak data security standards, in combination with its claim of using industry-standard security, constituted “unfair or deceptive acts or practices” subject to FTC regulation under 15 U.S.C. § 45(a)(1).

Wyndham relied almost exclusively on FDA v. Brown & Williamson Tobacco Corp., 529 U.S. 120 (2000), which ruled that the Food and Drug Administration lacked authority to ban cigarettes or otherwise exercise significant policymaking authority over tobacco products where Congress had passed extensive, tobacco-specific legislation that “created a distinct regulatory scheme for tobacco products” at odds with concurrent FDA jurisdiction.

In sweeping language today, the Court distinguished Brown & Williamson and “reject[ed] [Wyndham’s] invitation to carve out a data-security exception to the FTC’s unfairness authority” where FTC authority over data security was neither “incompatible with legislation” nor would “plainly contradict congressional policy.” The Court noted that there was no comprehensive legislative or regulatory scheme over data-security storage, and that “the FTC’s unfairness authority over data security can coexist with the existing data-security regulatory scheme.”

Although it presented a relatively pedestrian question of jurisdiction, the case attracted intense scrutiny from business groups. Virtually every consumer-oriented business stores consumer data online in some form and faces data security risk. Today’s ruling recognizing FTC jurisdiction over the storage of consumer data online gives the agency authority to determine data security standards and punish businesses whose security standards fall short, regardless of industry. It opens an entirely new, yet ubiquitous, aspect of doing business on the Internet – one which theoretically might involve no misrepresentation to the consumer – to regulation by the FTC. Businesses looking for a concrete data security standard will be disappointed. The FTC has avoided bright-line rules and has taken the position that “in the data-security context, reasonableness is the touchstone and that unreasonable data security practices are unfair.”

 

FTC v. Ross – Nudging Closer to FTC Jurisdiction Over Internet Data Storage

April 7th, 2014 by Jia-Ming Shang

Data privacy practitioners continue to wait in suspense for the decision of the District Court of New Jersey in FTC v. Wyndham regarding whether the FTC has jurisdiction to regulate the storage and security of consumer information in the Internet space, with the defendants there arguing that the FTC lacks explicit jurisdiction over cyberspace matters. Oral argument was heard on November 7, 2013, and additional briefing was submitted on January 21, 2014, but no opinion has yet issued.

In the interim, other decisions are inching toward establishing the FTC’s jurisdiction over Internet matters. FTC v. Ross, 2014 WL 703739, No. 12-2340 (4th Cir. Feb. 25, 2014) upheld the FTC’s authority over a “scareware” scheme in which the defendants’ software encouraged consumers to conduct a “system scan” that would locate and isolate viruses and other malware on the customer’s computer. Despite the presence of graphics suggesting a “scan” taking place, no actual “scan” occurred. Instead, computer users were told that their computers were infected with viruses, Trojan horses, and other malware and told to purchase software that would remove the malware. Requests for refunds were refused even after users realized they had been duped.

On appeal, one of the defendants contended, similar to Wyndham, that the FTC lacked statutory authority because the FTC Act did not expressly authorize consumer redress in cyber cases.   The Fourth Circuit agreed that there was no explicit authority but upheld jurisdiction, reasoning that “Congress was aware of the court’s equitable jurisdiction to decide all relevant matters in dispute and to award complete relief,” even if the alleged unfair practice occurred in cyberspace.

Although the FTC’s jurisdiction over unfair trade practices taking place via the Internet is less controversial than jurisdiction over the strength and sufficiency of a company’s Internet security protocols, Ross further cements the FTC’s jurisdiction over technology matters and strongly suggests that courts will not exempt new markets and technologies from FTC oversight simply because they were not explicitly mentioned in the FTC Act.

 

Recent US-EU Safe Harbor Enforcement Actions and International Data Security Programs Signal Increased Focus on Cross-Border Data Transfers

March 12th, 2014 by Matthew Fischer

On March 6, the U.S. Federal Trade Commission (FTC) signed a memorandum of understanding (MOU) with the UK Information Commissioner’s Office (ICO), Great Britain’s data protection authority, to promote increased cooperation and the sharing of information between the two agencies to bolster their data protection efforts. This objective would be achieved through the following means: (1) sharing information, including complaints; (2) providing investigative assistance where appropriate, such as obtaining evidence in the local jurisdiction on behalf of the other agency; (3) conducting joint training and exchanging staff; and (4) coordinating enforcement actions for privacy violations that constitute breaches in both countries. The FTC and the ICO have coordinated privacy investigations and promoted joint anti-spam initiatives over the last several years.

The MOU comes in the wake of increased criticism regarding the US-EU Safe Harbor program (Safe Harbor) by European Union (EU) countries. The EU Data Protection Directive (95/46/EC) bars the transfer of personal data from within the European Economic Area to third countries unless they have established acceptable levels of protection. The Safe Harbor provides a self-certification program that requires U.S. companies to protect data containing personal information received from EU countries pursuant to an agreed upon set of seven privacy principles that are enforceable under U.S. law.

On July 19, 2013, the EU Commissioner responsible for data protection, Viviane Reding, stated that the European Commission (EC) would be reviewing its Safe Harbor Agreement with the U.S., in part due to the scandal surrounding Edward Snowden’s leak of top secret data collected under the U.S. National Security Agency’s (NSA) Internet surveillance program called PRISM. Commissioner Reding cited the PRISM controversy as a “wake-up call” which necessitated “data protection reform” from the EC. The EC also expressed concerns over the self-certification nature of the Safe Harbor program, which it viewed as susceptible to lapses in compliance by participants. Shortly thereafter, on July 24, German data protection authorities announced they would not issue new permissions for data transfers to countries outside the EU and were considering whether data transfers conducted on the basis of the Safe Harbor should be suspended altogether. The German authorities also cited concerns over reports of the NSA’s PRISM program.

On November 27, 2013, the EC published the results of its Safe Harbor review and reported that it had identified “a number of weaknesses” which caused it to opine that “the current implementation of Safe Harbor cannot be maintained.” The EC listed 13 recommendations for the U.S. to consider and implement by summer 2014. The recommendations included greater transparency on the part of participating companies, ensuring a right of redress for data subjects, increased investigation and reporting of non-compliance by the U.S. Department of Commerce to applicable EU data protection authorities, and restriction of the national security exception to only those circumstances where it is strictly necessary or proportionate.

FTC Commissioner Julie Brill responded on December 11 that the Safe Harbor program is “a very effective tool for protecting the privacy of EU consumers” and asserted, not surprisingly, that it should be neither suspended nor renegotiated. In addressing the EC’s criticism of the Safe Harbor framework, Commissioner Brill argued that the U.S. had undertaken numerous Safe Harbor compliance investigations, which have resulted in 10 enforcement actions since 2009.

The FTC flexed its enforcement muscles further when it announced on January 21, 2014 that it had settled claims against 12 different companies that allegedly falsely claimed to have been in compliance with the Safe Harbor program. The FTC complaints charged that the companies had represented, in their privacy policies or through the display of the Safe Harbor certification mark, that they held current Safe Harbor certifications, despite having allowed their certifications to lapse. That same month, the Department of Commerce’s International Trade Administration (ITA) posted a document entitled “Key Points Concerning the Benefits, Oversight, and Enforcement of Safe Harbor.” The Key Points document defends the Safe Harbor program by denoting the following advantages of the program:

• The program provides important economic benefits to the EU and Swiss economies, as well as to the U.S. economy;
• Claims of Safe Harbor participation and certification status can be readily verified via the official Safe Harbor List(s) that are accessible online;
• The ITA plays an important oversight role that balances the self-certification aspect of the program;
• Safe Harbor requires that there be “readily available and affordable” dispute resolution for data subjects; and
• The FTC has brought 10 enforcement actions in recent years, resulting in consent decrees (and that number has since skyrocketed to 22 enforcement actions following the FTC’s January 21 announcement).

More recently, officials with the FTC, EU and Asia-Pacific Economic Cooperation (APEC) economies announced the execution of a joint agreement designed to facilitate companies’ compliance efforts for cross-border data transfers. The agreement – called a “referential” – is intended to serve as an “informal pragmatic checklist for organizations” that seek double certification under the EC’s binding corporate rules and APEC’s cross-border privacy rules. Companies involved in cross-border transfer can use the referential to design and adopt data protection policies that comply with both systems.

As large-scale data breaches continue to grab headlines in the U.S. and concerns over NSA spying persist among EU countries, companies involved in cross-border data transfers can expect increased enforcement measures from data protection authorities on both sides of the Atlantic Ocean.

State Attorneys General Emerge as Enforcers for Consumer Data Privacy

February 7th, 2014 by Paul Pittman

A recent lawsuit brought by the California attorney general accusing Kaiser Permanente of unreasonable delay in revealing a 2011 data breach to affected individuals continues a rising trend of enforcement of consumer data privacy protection laws by state attorneys general. Traditionally, consumer online and data privacy protection enforcement has been dominated by the Federal Trade Commission (“FTC”). However, state attorneys general have become increasingly involved in filing actions on behalf of consumers whose privacy rights have been impacted. With the FTC facing stiff challenges to its authority to bring consumer data privacy enforcement actions in the Wyndham and LabMD cases, state attorneys general are poised to take on a more prominent role in protecting consumer data privacy online. In many ways, state attorneys general possess more power to enforce consumer online and data privacy protections than the FTC and represent a formidable authority that organizations must consider when engaging in e-commerce.

The Kaiser Permanente case is just the latest example of how active state attorneys general are in data privacy protection enforcement actions nationwide. In The People of the State of California v. Kaiser Foundation Health Plan, Inc., California’s attorney general, Kamala Harris, settled state unfair competition claims with Kaiser for $150,000, and Kaiser agreed to make improvements to its data security system. The settlement resolved claims that Kaiser waited four months to notify more than 20,000 current and former employees that their personally identifiable information (“PII”) had been compromised when an unencrypted hard drive containing the PII was purchased at a thrift shop in 2011. The court found that Kaiser had gathered sufficient information to notify some of the individuals after the recovery of the hard drive in December 2011 and prior to the end of its investigation in February 2012.

Many of the high-profile consumer data privacy enforcement actions brought in the past year have been initiated by state attorneys general. In November 2013, 37 state attorneys general and the District of Columbia settled with Google for $17 million over Google’s alleged violations of various state consumer protection and privacy laws when it allowed third-party cookies on Apple’s Safari browser after it told users that Safari’s default settings would block such cookies. In addition, state attorneys general for 38 states and the District of Columbia settled with Google for $7 million over claims that Google collected personal consumer data from unsecured Wi-Fi networks through its Street View vehicles. Other states have initiated investigations and enforcement actions against both nationwide entities such as LivingSocial and local entities that fail to maintain consumer privacy online.

Notably, state attorneys general from around the nation (along with the United States Attorney General) are currently investigating the Target data breach incident in December 2013 that affected over 110 million consumers. This investigation may be a precursor to significant enforcement actions by the state attorneys general collectively (or individually) against Target. Such a coordinated and widespread action by the state attorneys general would certainly signal to companies and businesses that state attorneys general are on the front line in enforcing consumer data privacy protections.

In general, state attorneys general have more tools available to them to protect consumer data privacy than the FTC, since Section 5 of the FTC Act limits the FTC to pursuing causes of action for “unfair or deceptive practices.” Most states already have analogous consumer protection statutes that allow actions to address unfair business practices, but many do not contain the same limitations on the recovery of monetary and civil penalties as the FTC Act. In addition, state attorneys general also have access to state-specific consumer data privacy laws (e.g., the Maryland Personal Identification Protection Act, the Massachusetts Data Privacy Act and the California Online Privacy Protection Act) that focus on the use and protection of consumer data, and notification when consumer data has been compromised. Further, state attorneys general may even assert claims for consumer data protection violations under various federal laws, such as the Health Insurance Portability and Accountability Act, which the FTC cannot.

Armed with these tools, state attorneys general have signaled their intent to increase their enforcement efforts and attention on matters involving Internet privacy and to use their arsenal to hold companies accountable when they breach consumer trust by mishandling consumers’ PII or misrepresenting their privacy practices. To that end, many state attorneys general have dedicated units that are responsible for investigations of consumer internet privacy actions. In addition, state attorneys general collaborate and coordinate efforts frequently on privacy enforcement matters that impact consumers in multiple states, as the prior Google settlements and current Target inquiries show.

Although enforcement actions by state attorneys general do not fundamentally differ in effect from enforcement actions by the FTC, the differences that do exist merit consideration by businesses engaging in e-commerce. For instance, actions by state attorneys general involve full-blown litigation, which can be considerably more expensive than administrative actions or investigations brought by the FTC. In addition, state attorneys general actions subject companies to liability under a wider array of state and federal laws, thus increasing a company’s exposure. Further, state attorneys general may seek damages and other forms of relief that are not available to the FTC and are not limited by the FTC Act. Significantly, companies could face individual lawsuits by the state attorneys general in each state in which they operate, if state actions are not consolidated. As a result, understanding the laws and regulations that impact consumer data protection in each state where the company operates is critical to protecting against unwanted attention from state attorneys general.

While state attorneys general have demonstrated a willingness to work with companies that are transparent and forthcoming in their privacy policies and operations, companies that delay data breach notification, forgo implementing adequate data security measures, or misrepresent or lack transparency with respect to their privacy practices may draw the ire of state attorneys general. Ultimately, companies should consider how their data privacy practices might come under scrutiny, not only from the FTC but from the various state attorneys general. Companies that take a transparent and proactive approach to working with state attorneys general may prevent damage to their company and brand that could result from the mishandling of consumer personal data.

LinkedIn Sues Unknown Defendants Operating Bots for Stealing Members’ Data

January 10th, 2014 by Nora Wetzel

LinkedIn sued unknown defendants for employing automated software programs known as bots to copy members’ data and to register thousands of fake member profiles on the social network’s site.  In addition to its free networking benefits, LinkedIn also provides a subscription Recruiter service which permits headhunters to search for potential candidates. 

LinkedIn filed a complaint on January 6, 2014 in the Northern District of California, alleging violations of the Computer Fraud and Abuse Act, California’s Comprehensive Computer Access and Fraud Act, the Digital Millennium Copyright Act, breach of contract, misappropriation, and trespass.  At the heart of LinkedIn’s claims is that unknown defendants accessed LinkedIn’s website and servers without permission by using bots to circumvent existing technical barriers.

LinkedIn asserts the bots registered thousands of fake profiles on the site to view thousands of members’ profiles each day and copy data from their profiles. This automated copying is referred to as scraping. LinkedIn’s User Agreement prohibits scraping and bars members from registering more than one profile. The User Agreement also requires members to use their real names and provide accurate information. All members agree to the User Agreement as a condition of joining the website.

LinkedIn believes the data copied by the bots was used by the unknown defendants to compete with LinkedIn or was even sold to LinkedIn competitors. As a result of the bots’ unauthorized copying of members’ data, LinkedIn contends its Recruiter service is devalued. LinkedIn also asserts the bots’ fake profiles cause the non-subscription portion of its site to lose its integrity. For example, LinkedIn claims that legitimate members tried to connect with fake members after seeing that a fake member viewed the legitimate member’s profile.

When LinkedIn discovered bots registering fake profiles and copying members’ data, it disabled the fake member profiles and implemented additional safeguards to try to prevent unauthorized access by bots. The company plans to subpoena Amazon Web Services, which the bots used to access LinkedIn’s website and servers, in order to determine the identity of the unknown defendants. LinkedIn’s litigation strategy is not without precedent: Facebook and Craigslist have filed lawsuits against scrapers in the past, and Facebook even won over $3 million in damages for resulting spam messages.

It will be interesting to see if LinkedIn members take any action either against LinkedIn or the defendants, once identified, in connection with unauthorized copying of the members’ data.

Facebook Hit With New Lawsuit Over User Information Privacy

January 6th, 2014 by Jia-Ming Shang

Perennial issues with Facebook’s data use policies have spawned another class action lawsuit in the Northern District of California. Plaintiffs in Campbell et al. v. Facebook Inc., No. 13-CV-05996, make two distinct claims in their December 30, 2013 complaint, one of which might surprise even longtime Facebook users.

The first and less controversial claim is that Facebook monitors users’ “private” messages with other Facebook users to provide data to marketers and target advertisements. The complaint cites third-party research, including a report by a Swiss firm that sent Facebook private messages embedded with unique, hidden URL addresses to ascertain whether any of the test URLs would be “clicked.” According to the complaint, “Facebook was one of the Web Services that was caught scanning URLs despite such activity remaining undisclosed to the user.” The plaintiffs allege that this practice is contrary to Facebook’s public statements touting the privacy of its messaging service and violates an alleged promise by Facebook “that only the sender and the recipient or recipients will be privy to the private message’s content, to the exclusion of any other party, including Facebook.” Google and Yahoo have faced similar lawsuits alleging that users of their respective email services have their mail “scanned” for advertising and marketing purposes. E.g., Dunbar v. Google, No. 10-CV-00194 (E.D. Tex.).
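The methodology is easy to picture: because the embedded URL is unique and published nowhere else, any request that reaches it must originate from something scanning the message. Below is a minimal sketch of such a test harness, assuming a hypothetical domain and server; it illustrates the technique, not the Swiss firm’s actual code.

```typescript
// Minimal sketch of a "canary URL" test server (hypothetical names and
// domain). A unique token is embedded in a private message; any request
// to that path is logged as evidence the message was scanned.
import { randomUUID } from "crypto";
import http from "http";

const token = randomUUID();
const canaryPath = `/canary/${token}`;

http
  .createServer((req, res) => {
    if (req.url === canaryPath) {
      // The URL exists nowhere else, so any hit implies message scanning.
      console.log(`Canary hit at ${new Date().toISOString()}`);
      console.log(`From: ${req.socket.remoteAddress}, UA: ${req.headers["user-agent"]}`);
    }
    res.end();
  })
  .listen(8080);

console.log(`Embed this in a private message: http://example.test${canaryPath}`);
```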

The second, more startling claim is unique to Facebook.  According to Plaintiffs, if the private messages contain a URL address, Facebook crawls the linked page to see if it contains one of Facebook’s “Like” buttons. If it does, Plaintiffs allege that Facebook registers that private-message link as a “Like” on the linked site’s Facebook page, effectively and automatically clicking “like” links within Facebook messages on behalf of its users without their knowledge.
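For concreteness, the flow the complaint describes would look something like the sketch below; every function name and markup check is a hypothetical illustration of the allegation, not Facebook’s actual code or API.

```typescript
// Illustrative sketch of the alleged auto-"Like" mechanism. All names
// and the widget-detection heuristic are hypothetical.
async function processPrivateMessage(messageBody: string): Promise<void> {
  // Extract any URLs embedded in the message text.
  const urls = messageBody.match(/https?:\/\/\S+/g) ?? [];
  for (const url of urls) {
    const page = await fetch(url).then((r) => r.text());
    // If the linked page embeds a "Like" widget, the complaint alleges
    // the private-message link is counted as a Like of that page.
    if (page.includes("facebook.com/plugins/like")) {
      await registerLike(url); // hypothetical internal helper
    }
  }
}

async function registerLike(url: string): Promise<void> {
  console.log(`Counting a "Like" for ${url}`); // stand-in for the alleged counter
}
```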

A frequent issue in these types of lawsuits is whether the defendant’s data use practices have been adequately disclosed to users and what expectations reasonable consumers would have as to how their information is used. With respect to Facebook’s scanning and use of information in private messages, plaintiffs acknowledge that Facebook’s data use policy discloses that Facebook will “receive” information from the users, but plaintiffs allege that Facebook’s scanning, mining and “manipulat[ion]” of private message content goes well beyond the scope of that disclosure.

The Campbell plaintiffs’ claim of Facebook auto-liking a web page based on the inclusion of a link to that web page in a private message is more problematic and may present novel legal issues. A credible argument can be made that Facebook’s liking of a webpage discussed in a private message makes the message (albeit a small part of it) public, raising questions of whether this use of nominally private data rises to the level of a public disclosure. The complaint also suggests (but does not directly allege) that automatic “likes” generated by a private message URL might be traced back to the message sender or recipients. If true, tort and fraud issues abound: imagine the consequences if a job applicant had sent a white supremacist organization’s URL in a private message only to later find that he or she had inadvertently and publicly “liked” the page as well.

This suit is the latest in a string of similar class actions that together represent a backlash against how tech companies use, share and profit from personal data. In addition to the lawsuits involving Gmail and Yahoo Mail, LinkedIn has recently been accused of scanning emails to spam users’ contacts. Third-party research reports that the Campbell plaintiffs relied on have also accused Twitter, Google+, Formspring, bit.ly and goo.gl of misusing user data without users’ consent or knowledge. Whether these concerns will be resolved by the courts or through legislation is unclear, but it is increasingly apparent that consumers and plaintiffs’ lawyers are paying more attention to how personal information is being used without users’ knowledge.

California’s SB 46 Amends Data Breach Notification Law

January 3rd, 2014 by Matthew Fischer

With the start of 2014 come several new privacy laws in California. In an earlier post we discussed A.B. 370, which requires companies that collect personally identifiable information about consumers’ online activities across time and across websites to disclose how they respond to consumers’ “do not track” signals. On the data breach side, California S.B. 46 expands breach notification requirements in 2014 to individual online user accounts.

California’s breach notification requirements apply to persons and entities that conduct business in the state and own or license computerized data that includes personal information. Notification must be provided to any California resident whose unencrypted personal information was, or is reasonably believed to have been, accessed by an unauthorized person.

S.B. 46 amends California Civil Code § 1798.82, which currently defines personal information as an individual’s first name or first initial and last name, combined with one or more of the following data elements, when either the name or the data elements are not encrypted: social security number; driver’s license number or California Identification Card number; account number, or credit or debit card number, in combination with a required security code, access code or password that enables access to an individual’s financial account; medical information; or health insurance information. Cal. Civ. Code § 1798.82(h)(1). The broadened definition now includes “[a] user name or email address, in combination with a password or security question and answer that would permit access to an online account.” Cal. Civ. Code § 1798.82(h)(2).

To pair with the addition of online account information to the definition of personal information, the amendment also changes the method by which notice must be provided, depending upon the type of information compromised. Notice of a breach that only involves a username or email address, in combination with a password or security question and answer that would permit access to an online account, can be made in electronic form. The notice must direct the affected consumer “to change his or her password and security question or answer, as applicable, or to take other steps appropriate to protect the online account with the person or business and all other online accounts for which the person whose personal information has been breached uses the same user name or email address and password or security question or answer.”

However, if a breach involves login credentials for an email account, notice cannot be made to the affected email address. Cal. Civ. Code § 1798.82(h)(5). Instead, notice must be given by one of the accepted methods delineated in the statute for breaches of other personal information, or by providing “clear and conspicuous notice delivered to the resident online when the resident is connected to the online account from an Internet Protocol address or online location from which the person or business knows the resident customarily accesses the account.” This bar is a common-sense approach, since the possibility exists that an unauthorized person may have assumed control of the user’s email account.
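Reduced to a decision rule, the statute’s notice-selection logic branches on what was compromised. The sketch below is only a hypothetical illustration of that branching, with invented types and labels, not a compliance tool.

```typescript
// Hypothetical illustration of the notice-method logic in amended
// Cal. Civ. Code § 1798.82. Types and strings are invented for clarity.
interface Breach {
  onlineCredentialsOnly: boolean; // username/email plus password or security Q&A
  emailAccountCredentials: boolean; // the breached login is for an email account
}

function selectNoticeMethod(breach: Breach): string {
  if (!breach.onlineCredentialsOnly) {
    // Other personal information: the pre-existing statutory methods apply.
    return "standard statutory notice";
  }
  if (breach.emailAccountCredentials) {
    // Notice may not be sent to the compromised email address itself.
    return "other statutory method, or clear and conspicuous in-account notice";
  }
  // Online credentials only: electronic notice directing a password and
  // security question/answer change is permitted.
  return "electronic notice directing credential changes";
}
```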

The statutory requirements for the content to be included in the notification have not been altered. With California often at the forefront of privacy law, those states whose breach notification laws do not already address online user account information may soon follow.

Notice and Consent Under Scrutiny by FTC

December 21st, 2013 by Paul Pittman

Recently, the FTC settled a suit with a mobile app creator over charges that the app developer deceived consumers about whether and how their geolocation information would be collected and shared with third party advertisers.

The app, known as “Brightest Flashlight Free,” activated all of the lights on a user’s device, but simultaneously collected geolocation information that was sent to third parties and advertisers. The mobile app creator, Goldenshores, represented in its notice that users had the option to refuse the collection and sharing of their data. The FTC alleged, however, that although a consent page was displayed to the user, the app began collecting and sharing geolocation information before consent was given and regardless of whether consent was given. The FTC contended that Goldenshores’ practices were deceptive, and false and misleading. Under the terms of the settlement, Goldenshores agreed to delete any information already collected from consumers and to provide explicit descriptions to consumers of how their data is collected and shared. The settlement also requires Goldenshores to provide notice and details about the collection of geolocation data “immediately prior” to the collection of the data.
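The compliance takeaway reduces to an ordering constraint: disclosure and consent must come before any collection begins, and a refusal must actually stop collection. A minimal sketch, with all APIs assumed for illustration:

```typescript
// Hypothetical consent-gated collection flow. UI and collection APIs are
// placeholders; the point is that collection cannot start before consent.
let consentGiven = false;

function showGeolocationDisclosure(onAccept: () => void, onDecline: () => void): void {
  // Present notice "immediately prior" to collection and wait for an
  // explicit answer; nothing is collected in the background meanwhile.
  /* ...render prompt, wire buttons to onAccept/onDecline... */
}

function startGeolocationCollection(): void {
  if (!consentGiven) {
    throw new Error("collection attempted before consent");
  }
  /* ...begin collecting and sharing only from this point on... */
}

showGeolocationDisclosure(
  () => {
    consentGiven = true;
    startGeolocationCollection();
  },
  () => {
    /* refusal honored: collect and share nothing */
  }
);
```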

The decision is consistent with the FTC’s recent reaffirmations that it will enforce notice and consent requirements when it comes to the collection and sharing of consumer data. Companies and app developers would do well to take heed. This case illustrates that the FTC will not just take procedural notice and consent implementation at face value, but will look at the substance and veracity of the notice provided and the effectiveness and validity of the consent given. Where critical information regarding the collection and sharing of information is excluded from a notice, the FTC may bring enforcement actions under Section 5(a) for deceptive practices. Where the request for consent is “illusory,” as the FTC termed it, and personal data is collected and shared regardless of whether consent is requested or given, the FTC may institute Section 5(a) enforcement actions for false or misleading practices. Taking care to provide transparent, complete and detailed disclosures regarding the data collection and sharing aptitude of one’s mobile apps will go a long way towards avoiding FTC scrutiny.

Click here for a summary of the case, and here for copies of the FTC’s complaint and settlement.

FTC’s Internet of Things Workshop Draws a Crowd

November 21st, 2013 by Paul Pittman

Yesterday, the Federal Trade Commission (“FTC”) held a long-awaited workshop on the “Internet of Things” (“IoT”) where nearly 200 data privacy and security professionals, device and appliance manufacturers such as Microsoft and GE, and lawyers and lawmakers engaged in a roundtable discussion about the evolution of connected devices and the data privacy and security perils it presents. The IoT refers to the technological ecosystem of the future, where the devices we use on a daily basis, such as cars, appliances, pacemakers, and smart phones, are interconnected in ways that create many efficiencies and benefits in our daily lives, but that also result in the collection of a tremendous amount of data by these devices that paints a valuable picture of consumers for businesses and advertisers. As Carolyn Nguyen, Director of the Technology Policy Group at Microsoft, described it, the IoT consists of sensors (devices) that act as intelligent agents for individuals and are ubiquitously present to collect and transmit data about your every move.

One segment of the IoT workshop focused on “The Smart Home,” which illustrated the increased connectivity of our homes: from refrigerators that collect data about their contents and how long, and at what hour, you spend perusing them for a midnight snack, to smart meters that collect data about the amount of electricity you use and your periods of high and low usage, to heating and cooling systems that collect data on the number of people occupying a room and adjust lighting and temperature automatically.

Other segments included “Connected Health & Fitness” which described connected devices such as pacemakers that instantaneously transmit health information to your doctor and “Connected Cars” that collect data on your driving tendencies and locations, while controlling your speed at critical moments. The paramount concern that emerged from these discussions was the concept of providing notice and consent to consumers regarding the collection, storage and use of consumer data in this unprecedented, complex and connected environment. Opinions differed about whether effective notice and privacy are even possible to achieve in the IoT environment.

Despite these concerns, participants were uniformly of the opinion that regulation is not appropriate yet, since the field is still evolving. In his keynote address, Vint Cerf, Internet pioneer and VP at Google, suggested that he might be uncomfortable developing regulations for the IoT because of the uncertainty about the types of problems that could emerge. Cerf stated that “before we write regulations, we need to understand the problems more deeply.” He also identified seven technical challenges facing the IoT:

• Need to standardize interfaces;
• Difficulty in configuring massive numbers of devices;
• Developing strong access control and authentication;
• Privacy and safety;
• Instrumentation and feedback;
• Dealing with software errors, vulnerabilities and software updates; and
• Potential opportunities for third party businesses.

Regulators staked out their positions on the IoT at the workshop. Edith Ramirez, Chairwoman of the FTC, stated that the IoT will accelerate the disappearance of the boundaries between the virtual and physical worlds, thereby ensuring that our personal data will infiltrate every facet of our lives. Nonetheless, Ramirez emphasized the FTC’s expectation that businesses will adhere to the agency’s core principles with respect to the IoT:

• Privacy and security by design;
• Simplified consumer choice; and
• Transparency.

Chairwoman Ramirez added that companies will need to build security into their products and that the FTC will enforce this requirement. Notably, she stressed the need to ensure the security of patient health care information in the IoT to safeguard against unauthorized disclosures.

At the midpoint of the workshop, FTC Commissioner Maureen K. Ohlhausen stated that the FTC will be policing those who collect data, with a focus on data security, mobile privacy and “Big Data.” Ohlhausen cited the FTC’s data security enforcement action against TRENDnet, which settled in September, as well as the FTC’s mobile privacy enforcement action against Path, Inc., which settled in February, as examples of the types of actions it will take to ensure companies comply with their privacy policies and government regulations.

Jessica Rich, Director of the Bureau of Consumer Protection at the FTC, provided closing remarks for the workshop wherein she reiterated the importance of privacy and security and emphasized that the necessary protections must be built into the products and “nailed down before companies can come into your home” or vehicle to collect data. Director Rich noted the challenge of providing effective notice when no interconnection exists between many devices and where in some cases data is collected passively without the knowledge of the consumer. Director Rich concluded by announcing that the IoT workshop is not a prelude to regulation, but rather it is the first conversation between regulators, businesses and the public about the issues presented by the IoT and that the FTC will issue a report of best practices based on the information received at the workshop.

The comments by the FTC’s leadership indicate that, for the moment, the agency will refrain from imposing potentially stifling regulations.

As a result, it will be incumbent on privacy professionals and practitioners to engage in self-regulation in line with the principles set forth by the FTC at the IoT workshop and in its upcoming report, especially since the FTC has made clear that it will enforce company privacy policies and existing regulations to protect consumers and their data in the IoT.
