FTC Clarifies COPPA “Verifiable Parental Consent” Requirements

July 21st, 2014 by Afigo Fadahunsi

The Federal Trade Commission (FTC) has modified the guidelines it issues to developers who make apps specifically for children. App developers have capitalized on the soaring, lucrative app market aimed at a younger audience that not only enjoys the fast pace of modern technology but increasingly relies on it for educational development, as more school districts introduce tablets in the classroom. With the constant and growing presence of children online, however, comes a greater degree of parental concern for their privacy.

The Children’s Online Privacy Protection Act (COPPA) was created primarily to protect children under the age of 13 from the collection of their personal data online for commercial use. The goal of COPPA is to keep parents in control of what their children under 13 are viewing and disclosing on the Internet. The FTC’s recent changes to its guidelines not only ensure that app developers and app stores notify parents of how their children are using apps; they also reaffirm these entities’ obligation to obtain verifiable parental consent before collecting personal information from children.

The FTC initially provided that charging a parent’s credit card was sufficient to satisfy parental consent, as the parent, at the very least, would see the charge on the monthly statement and would have notice of the child’s activity on the website. In its revisions, the FTC now clarifies that a credit card need not be charged to obtain parental consent, so long as the collection of the credit card number is supplemented with other effective safeguards, such as questions to which only parents would know the answer.

The FTC also revised its guidelines to establish that the developer of a child-related app may use a third party, such as an app store, to obtain parental consent on its behalf. In that instance, if the app store provides the required notice and consent verification prior to or at the time of the purchase of a mobile app for children under 13, the mobile app developer may rely on that consent.

Finally, the FTC suggested that it supports the creation of “multiple-operator” methods or common consent mechanisms – app stores that assist developers operating on their platform with providing a verifiable consent mechanism will not be held liable under COPPA so long as they do not “misrepresent the level of oversight [provided] for a child-directed app.”

Canada’s Anti-Spam Legislation (CASL) Will Impact U.S. Companies

July 7th, 2014 by Matthew Fischer

Canada’s Fighting Internet and Wireless Spam Bill, better known as Canada’s Anti-Spam Legislation (CASL), was enacted in December 2010, but enforcement of the law did not commence until Canada Day, July 1, 2014. The law impacts any U.S. company or individual sending commercial electronic messages (CEMs) to businesses in Canada, and it differs in several respects from the restrictions under CAN-SPAM and the TCPA in the U.S. As a result, a “one size fits all” approach to electronic marketing campaigns that include our neighbors to the north will not work.

The law applies to CEMs sent from or to computers and devices located in Canada. It includes emails, SMS, instant messaging and certain social networking communications that are sent to email addresses, instant message accounts, phone accounts and social media accounts for the purpose of conveying commercial or promotional information to customers or prospects in Canada. Fax messages do not fall under the statute.

CASL also prohibits the altering of transmission data, and the installation of a computer program without consent, but this post will focus on the CEM aspect of the statute.

Consent

Unlike CAN-SPAM, which follows an “opt-out” model, CASL requires an “opt-in” mechanism whereby senders must first procure either implied or express consent before sending a CEM. Accordingly, marketers cannot use a pre-checked box when seeking consent.

Implied consent exists if the recipient: (1) has purchased a product or service from, or entered into another business deal, agreement or membership with, the sender within the last two years; or (2) has made a donation or gift to, volunteered with, or been a member of the sender within the last two years, where the sender is a registered charity or political organization.

Unlike the TCPA, express consent under CASL may be obtained orally or in writing, but it must be sought separately for each of the three acts covered by CASL (i.e., sending a CEM, altering transmission data and installing a computer program). A request for written consent must include:

  • A clear and concise description of the purpose for which consent is sought;
  • The name of the person seeking consent, or the person on whose behalf consent is sought;
  • The requestor’s contact information (mailing address, and either a telephone number, email address or website URL);
  • A statement that the recipient can withdraw consent at any time.
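
The bulleted requirements above lend themselves to a simple validation routine. The sketch below is illustrative only – the record keys are invented for this example and are not drawn from CASL itself.

```python
# Sketch of checking that a written consent request covers the items
# CASL requires; the record keys here are illustrative assumptions.
REQUIRED_FIELDS = {
    "purpose_description",  # clear and concise purpose of the consent
    "requestor_name",       # person seeking consent (or on whose behalf)
    "contact_info",         # mailing address plus phone, email or URL
    "withdrawal_notice",    # statement that consent can be withdrawn
}

def missing_fields(request: dict) -> set:
    """Return the required items that are absent or empty in a request record."""
    return {f for f in REQUIRED_FIELDS if not request.get(f)}
```

A marketer could run such a check against each consent-request template before deployment.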

The Act is ambiguous with respect to a number of written consent issues, however, such as whether the person seeking consent must specify the particular device that will receive the CEMs, whether a hyperlink to the requestor’s contact information is permitted and the level of detail required for the purpose statement.

The CRTC has provided guidance for obtaining oral consent, which it deems sufficient if it can be verified by an independent third party, or where a complete and unedited audio recording of the consent is retained by the person seeking consent.

A number of categories of electronic messages are exempt from CASL, including:

  • CEMs sent between businesses that have an ongoing business relationship and that are sent by an employee, representative, contractor or franchisee and that are relevant to the business, role, function or duties of the recipient. Also exempt are CEMs sent to third-party business partners.
  • Messages sent and received via an electronic messaging service, provided that (i) the information and unsubscribe mechanism that are required under the Act are conspicuously posted and readily available on the user interface through which the CEM is accessed and (ii) the recipient either expressly or implicitly consented to receive it.
  • If the sender has a personal or family relationship with the recipient.
  • Messages sent to consumers in response to requests for information, inquiries or complaints.
  • Third-party referrals, provided the sender identifies in the CEM the full name of the referring person and the referring person has a current relationship (personal or business) with the recipient.
  • CEMs regarding the delivery of a product or service in relation to a previous transaction, including messages to facilitate or complete a transaction.
  • Messages sent by telecommunications service providers for the installation of computer programs without consent in order to either (i) protect network security, (ii) upgrade or update the network, or (iii) correct a failure in the operation of a computer system or program installed on the network.
  • CEMs sent to a limited-access secure and confidential account to which messages can only be sent by the person who provides the account.
  • Messages sent by a registered charity or political organization with the primary purpose of raising funds.
  • Messages sent to satisfy a legal obligation, or to provide notice of or to enforce a legal right, order, obligation or judgment.

Enforcement

Three government agencies will share enforcement responsibility for CASL. The main enforcement body is the Canadian Radio-television and Telecommunications Commission (CRTC), which will issue administrative monetary penalties for sending non-compliant CEMs, altering transmission data (e.g., misdirecting users to a website they did not intend to visit), or installing computer programs on a system without express consent. The Competition Bureau will administer monetary penalties or criminal sanctions for false and misleading representations and deceptive marketing practices. The Office of the Privacy Commissioner will enforce against the collection of personal information through unauthorized access to computer systems and the harvesting of electronic addresses to compile bulk email lists.

Penalties for the more serious violations can range as high as $1 million for individuals and $10 million for businesses, per violation. The law is being implemented in stages: starting July 1, 2017, a private right of action will be permitted against violators, who will be liable for statutory damages that could run as high as $1 million per day.

Compliance Considerations

Companies marketing to businesses in Canada should create a checklist to ascertain whether a message constitutes a CEM and, if so, whether any of the many exceptions apply. We also recommend undertaking a thorough review of existing policies and guidelines, or developing new ones, for requesting consent to send CEMs and structuring a database to maintain records of each consent obtained (whether written or verbal). Existing databases of email addresses and phone numbers must be reviewed and scrubbed, if necessary, to determine which means of contact are still valid (i.e., an existing business relationship can be verified) and whether a new consent should be obtained. Businesses should also update their CEM templates and “unsubscribe” mechanisms to ensure compliance with the CEM aspect of the new law.
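
The checklist approach recommended above can be made concrete in code. The sketch below is a simplified illustration – the field names and the two-year implied-consent window are assumptions for this example, not a statement of what CASL requires in every case.

```python
# Illustrative pre-send checklist for a CEM under CASL; field names and
# the exemption handling are simplified assumptions, not legal advice.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Recipient:
    address: str
    express_consent: bool = False
    last_transaction: Optional[datetime] = None  # basis for implied consent
    exempt_category: Optional[str] = None        # e.g. "family", "inquiry_response"

def may_send_cem(r: Recipient, now: datetime) -> bool:
    """Rough pre-send check: exemption, express consent, or implied consent."""
    if r.exempt_category is not None:   # message falls under a CASL exemption
        return True
    if r.express_consent:               # opt-in consent already obtained
        return True
    # Implied consent: a business transaction within the last two years
    if r.last_transaction is not None and now - r.last_transaction <= timedelta(days=730):
        return True
    return False
```

A function like this, run against a scrubbed contact database, would flag the recipients for whom a fresh consent must be sought.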

Aereo Loses Battle with Broadcasters Over Online Television Programming

June 30th, 2014 by Paul Pittman

In a highly anticipated decision, the Supreme Court ruled on Wednesday that Aereo Inc.’s online service, which streamed television programming over the Internet, infringed the exclusive right of television broadcasters to perform their broadcasts publicly under the Copyright Act. The decision is a win for major broadcasters and content providers hoping to prevent online upstarts from impermissibly poaching their content. However, the Supreme Court’s ruling is limited, and its application to future cases involving television content providers and storage is uncertain because the decision does not set forth a clear standard for identifying the types of entities and services that violate the public performance provision of the Copyright Act.

Aereo’s Service

For a monthly fee, Aereo broadcast television programming online to its subscribers virtually simultaneously with the actual broadcast of the programming on television. Aereo’s programming included copyrighted works that Aereo did not own and did not have a license to broadcast.

Technically, Aereo’s service worked as follows: a subscriber visited Aereo’s website and selected the desired television program. Aereo’s servers then selected an antenna, dedicated specifically to that subscriber, to pick up the broadcast and convert it to digital form for transmission across the Internet. The digital version of the broadcast was sent to Aereo’s server, which saved the data into a folder assigned to the subscriber, creating a personal copy of the broadcast. Once several seconds of programming had accumulated, Aereo’s servers began streaming the program to the subscriber until the entire show had been delivered. Each digital copy was sent solely to the subscriber for whom it was created; if multiple subscribers wanted to watch the same show, Aereo created an individual copy of the program in each subscriber’s folder and streamed the content from that copy.
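
The per-subscriber flow the Court described can be sketched in simplified form. The class and method names below are hypothetical, chosen only to mirror the steps above; they do not reflect Aereo’s actual implementation.

```python
# Hypothetical sketch of Aereo's per-subscriber pipeline as described by
# the Court; all names and data here are illustrative, not Aereo's code.
class AereoService:
    def __init__(self):
        self.folders = {}  # one storage folder per subscriber

    def watch(self, subscriber_id: str, program: str) -> list:
        antenna = self._assign_antenna(subscriber_id)  # dedicated antenna
        digital = self._digitize(antenna, program)     # tune and convert
        # Save a personal copy in a folder assigned to this subscriber only
        self.folders.setdefault(subscriber_id, {})[program] = digital
        # Stream the subscriber's own copy back to the subscriber
        return self._stream(self.folders[subscriber_id][program])

    def _assign_antenna(self, subscriber_id: str) -> str:
        return f"antenna-for-{subscriber_id}"

    def _digitize(self, antenna: str, program: str) -> list:
        return [f"{program}-segment-{i}" for i in range(3)]

    def _stream(self, personal_copy: list) -> list:
        return personal_copy  # no copy is shared across subscribers
```

The point the sketch illustrates is the one the Court found immaterial: two subscribers watching the same show receive identical content even though each is served from a separate personal copy.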

The copyright owners of these television programs – television producers, marketers, distributors and broadcasters – brought suit against Aereo for infringing their right to publicly perform their works under the Transmit Clause of the Copyright Act and sought a preliminary injunction against Aereo’s service. The District Court for the Southern District of New York denied the request, finding that Aereo’s service did not transmit the broadcasts to the public in violation of the Copyright Act, but rather sent private transmissions to individual subscribers. The Second Circuit affirmed, and the Supreme Court granted certiorari.

Does Aereo’s Service Transmit a Performance to the Public?

The “exclusive right” to perform a work is defined in the Transmit Clause of the Copyright Act as the right to “transmit or otherwise communicate a performance . . . of the copyrighted work . . . to the public, by means of any device or process.” Based on these provisions of the Copyright Act, the Supreme Court (“Court”) identified two determinative issues: (1) whether Aereo’s service “performs”; and (2) if so, whether the performance was done publicly.

Aereo’s Performance

To determine whether Aereo “performed,” the Court focused on the purpose of the Copyright Act as gleaned from its 1976 amendments. The Copyright Act was amended to specifically address early cable TV providers who carried local television broadcasts to subscribers in other cities. These early cable TV providers used antennas to receive television signals and coaxial cables to carry the signals to their subscribers’ television sets. A subscriber was free to choose the desired program by turning the knob on the television set. Under the law existing prior to the 1976 amendments, broadcasters “performed” because they selected the programs to be viewed and sent the programming to viewers. Viewers, a category that included the cable TV providers, did not “perform” because they simply carried the programs they received.

The Court explained that the 1976 amendments erased this distinction between broadcasters and viewers, clarifying that one “performs” an audiovisual work by showing its images in any sequence or making the sounds accompanying it audible. The Copyright Act’s Transmit Clause further addressed this activity by defining the transmission of a “performance” as the communication “by any device or process whereby images or sounds are received beyond the place where they are sent.” The Court determined that these amendments make clear that an entity that acts like the early cable TV providers “performs” when it enhances a viewer’s ability to receive broadcast television signals.

With that in mind, the Court held that Aereo “performed” under the Copyright Act because Aereo’s service was substantially similar to the service provided by the early cable TV providers that the Copyright Act sought to address. The Court dismissed the differences in the underlying technology between Aereo and the early cable TV providers – subscribers to Aereo have to initiate the transmission of the specific broadcast online while early cable TV providers sent a continuous feed to the subscriber’s televisions – finding that this difference was invisible to the consumer, and that Aereo’s service’s similarity to early cable TV providers was controlling.

Work Performed Publicly

Aereo argued that it did not “perform” any work “publicly” because the performance it transmitted was a new performance, distinct from the original broadcast, created by its act of transmitting, and that each new performance was transmitted privately to a single subscriber, not publicly. The Court found that even accepting Aereo’s argument that its transmission was a new performance, Aereo’s service nonetheless fell within the Copyright Act because the new performance still communicated, nearly contemporaneously, the same images and sounds contained in the original broadcast “by means of a device or process.”

Importantly, the Court determined that Aereo transmitted the work to the public. Although Aereo’s service stored individual copies for each subscriber, the Court considered this a technological detail that had no effect on the ultimate viewing experience of Aereo’s subscribers or on Aereo’s commercial objectives. To the Court, the nature of the service Aereo provided was indistinguishable from that of the early cable TV providers, who performed publicly. The Court found that although Aereo created and transmitted personal copies of the works to each subscriber, overall its service showed the same work (images and sounds) to multiple subscribers who requested the broadcast. These subscribers constitute the public, since they are unrelated and unknown to each other.

Having found that Aereo’s service “performs” the copyrighted work and transmits that performance to the “public,” much like the service provided by the early cable TV providers, the Court held that Aereo infringed the plaintiffs’ exclusive rights under the Copyright Act and reversed and remanded the case.

Future Impact

In Aereo, the Court applied a standard that simply looked at the similarities between the nature of the services provided by Aereo and the early cable TV providers. This “cable TV like” standard is inexact and may create uncertainty in cases going forward. The Court even acknowledged that the Aereo case could have limited application. Whether and to what extent Aereo can be applied beyond its limited holding may be answered soon, as Fox Broadcasting Co. is already wielding Aereo in support of its infringement claims in the Ninth Circuit against Dish Network over features of Dish’s digital video recorder (DVR) service that allow users to upload recorded broadcasts to mobile devices (see Fox Broadcasting Co., et al v. Dish Network et al, Civ. Case No. 13-56818).

The Court further noted that the decision is not intended to discourage the development of new technologies. Novel issues concerning other technologies, such as cloud computing and DVR services, will have to be addressed as they come before the Court. Despite the Court’s attempt to tread carefully, the decision to ignore the intricacies of the technology and focus on the ultimate effect of the service provided is significant and could hinder the growth of unique technologies in this area.

Ultimately, Aereo allows television broadcasters and content providers to breathe a sigh of relief knowing that the Copyright Act protects them from online entities seeking to offer television services by circumventing copyright laws and impermissibly siphoning off broadcasts. Aereo has been forced to shut down its service since the Supreme Court ruling. Any online media company or content provider hoping to broadcast television content must work with the major broadcasters, obtain licenses and pay fees to do so. Otherwise, they risk suffering the same fate as Aereo.

FTC Report Sheds Light on Dark World of Data Brokers

June 11th, 2014 by Paul Pittman

The Federal Trade Commission (“FTC”) recently issued a report entitled “Data Brokers: A Call for Transparency and Accountability” that identifies various concerns raised by the “Big Data” industry and provides solutions for addressing those concerns. The report focuses on data brokers – entities that collect consumer personal information and share or sell the information to third parties – who are largely invisible to the public. The FTC sought to illuminate the practices of the data broker business, where consumer data is bought and sold with no direct consumer interaction. To that end, the FTC ordered nine data brokers – Acxiom, CoreLogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf and Recorded Future – to provide information about the way they collect and use data, and how they enable consumers to access and control that data.

In the report, the FTC found that the data brokers have collected over a billion pieces of consumer data from a wide range of commercial, government and public sources. The data brokers create products using this raw data and the inferences made from that data to sell to clients. The FTC separated the products provided by data brokers to their clients into three categories: (1) marketing products; (2) risk mitigation products; and (3) people search products.

• With marketing products, data brokers provide clients with access to consumer contact information and interests determined through the use of cookies and other tracking mechanisms to facilitate targeted advertising of relevant products to consumers. In some cases, data brokers create products based on categories such as “Dog Owner” or “Motorcycle Lover,” for example, but also may segment consumers based on sensitive categories such as ethnicity or income level. While some data brokers may allow consumers limited access to the data collected from them, consumers are likely unaware that they have such access.

• For the risk mitigation products, data brokers provide their clients with the ability to verify a consumer’s identity and determine the existence of any fraudulent activity associated with a consumer. However, data brokers do not typically allow consumers to correct their information.

• A few data brokers also provide websites that enable people searches to obtain publicly available information about consumers. Unlike the marketing and risk mitigation products, data brokers providing people search websites typically allow for consumers to correct information and also allow consumers to opt out of having their information disclosed.

According to the report, the products provided by data brokers benefit consumers in many ways, by allowing businesses to effectively tailor advertisements, improve products, protect consumers from identity fraud and facilitate consumer interaction. However, the agency noted that the practices of some data brokers can also create risks for consumers if the information collected is incorrect, or if a consumer is identified in a negative way or segmented based on discriminatory criteria. In addition, the collection of such vast amounts of consumer data creates a security risk if that data is held indefinitely.

Call for Congress to Act

Based on its findings, the FTC called on Congress to enact legislation that would require data brokers to give consumers detailed access to their data and allow consumers the ability to opt out of having their data disseminated for marketing purposes. According to the agency, any such legislation should:

1. Enable consumers to identify the data brokers who collect their information, and determine how to access and opt out of the collection of that data, such as through a centralized portal;

2. Require data brokers to disclose how they use raw data and the types of inferences that are made from the raw data;

3. Require data brokers to disclose the sources of data to allow consumers to correct their data if necessary;

4. Ensure that consumers receive notice that their data will be shared with data brokers, that consent is obtained (especially for the collection of sensitive data such as health information) and that consumers have the ability to opt out of having their data shared with data brokers; and

5. Identify when a company uses risk mitigation products to limit a consumer’s ability to complete a transaction, especially where it adversely impacts a consumer’s ability to obtain certain benefits.

Outside of congressional action, the FTC also recommends that companies institute best practices, such as implementing privacy-by-design (considering privacy issues at every stage of product development) and refraining from collecting information from minors. Data brokers should also take steps to ensure that others who use the data they gather do not use it for eligibility determinations or to unlawfully discriminate.

Shifting Tide

The FTC report sheds light on the data brokerage industry and calls for legislation to protect consumers. So far, Congress has been slow to enact any data privacy legislation despite recent high-profile consumer data breaches. However, momentum is building. The White House issued a report last month that focuses on how companies gather and use data online about individuals, and how those practices could be used to discriminate against certain groups. And at least one bill – the Data Broker Accountability and Transparency Act of 2014, introduced by Senators John Rockefeller (WV) and Edward Markey (MA) – would require data brokers to be more transparent about their practices and provide consumers with options to control their information.

Even in the absence of clear direction by Congress, the FTC’s enforcement authority should compel any company that collects consumer data to heed the FTC’s recommendations, not just data brokers. As the FTC has shown, when it issues guidance on data privacy issues it will take action against companies that do not comply. At a minimum, companies collecting consumer data should ensure that their customers are notified that their data may ultimately be sold to data brokers, obtain consent and provide them with the opportunity to opt out of such disclosure. Doing so will ensure that companies meet the FTC’s consumer data use and collection expectations.

California AG’s Guidelines for CalOPPA and Do Not Track Disclosures

June 1st, 2014 by Matthew Fischer

California Attorney General Kamala Harris recently released guidelines for compliance with the California Online Privacy Protection Act (“CalOPPA”) entitled, Making Your Privacy Practices Public.  Attorney General Harris stated that the guide can be used as “a tool for businesses to create clear and transparent privacy policies that reflect the state’s privacy laws and allow consumers to make informed decisions.”

While the guide addresses the entire statute, it has been eagerly awaited for the insights it provides into the most recent amendments to CalOPPA under Assembly Bill 370, which requires all privacy policies to describe how website operators respond to Do Not Track signals.  The amendments have created a great deal of uncertainty since there is no general consensus even as to the definition of Do Not Track.  AB 370 compels website operators to disclose: (1) how they respond to a consumer’s Do Not Track signals or other similar mechanisms if they collect Personally Identifiable Information (PII) about an individual consumer’s online activities across time and websites, and (2) whether they allow other parties to collect PII about an individual consumer’s online activities when a consumer uses the operator’s website.

The Online Tracking and Do Not Track section of the guide makes three key recommendations:

  • Make it easy for consumers to find the section in the privacy policy regarding online tracking by using clear labels and headers, such as: “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures.”
    • This suggestion exceeds the literal requirements of CalOPPA, which only obligates a website operator to include its tracking disclosures within its privacy policy and does not address the placement of the disclosure or formatting issues.
  • Describe how you respond to a browser’s Do Not Track signal or other such mechanisms; this is more transparent and therefore preferable to simply providing a link to a related program or protocol.
    • This recommendation also surpasses the statutory requirements.  An operator can satisfy its disclosure obligations regarding its own tracking policies by including a link to a program that offers consumers a choice about online tracking in lieu of describing how it responds to a Do Not Track signal. If a website operator does include a link to another program to which it adheres, the guide encourages operators to provide a general description of what that program does.  CalOPPA does not require this added step if an operator decides to use a link.  
  • State whether other parties are or may be collecting PII of consumers while they are on your site or service. 
    • Compliance with this aspect of CalOPPA may require the website operator to ensure that only approved third parties collect PII from consumers on the site and to verify that those third parties, in fact, comply with the operator’s Do Not Track policy.
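
For context, a browser expresses a Do Not Track preference by sending a simple HTTP request header, `DNT: 1`. A minimal, illustrative sketch of server-side detection might look like the following; the function names and response strings are invented for this example.

```python
# Minimal sketch of detecting a browser's Do Not Track signal from
# request headers; handler names and responses are hypothetical.
def honors_dnt(headers: dict) -> bool:
    """Return True if the request carries a DNT: 1 header."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("dnt") == "1"

def handle_request(headers: dict) -> str:
    if honors_dnt(headers):
        return "serve page without third-party tracking"
    return "serve page with usual analytics"
```

CalOPPA, as amended, does not require an operator to honor the signal – only to disclose in its privacy policy how the operator responds to it.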

Highlights of the guide that address other aspects of CalOPPA include the following:

  • Use plain, straightforward language that avoids technical or legal jargon and that is easy to understand through the use of a layered or other clear format.
  • Explain any uses of PII beyond what is necessary for fulfilling a consumer’s transaction on your site or service.
  • Whenever possible, provide a link to the privacy policies of third parties with whom you share PII.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Tell your customers whom they can contact with questions or concerns about your privacy policies and practices.

California has been at the forefront of online privacy protections for consumers, and Attorney General Harris has already demonstrated her willingness to enforce CalOPPA: she sued Delta Air Lines under the statute in December 2012 for failing to include a privacy policy in its mobile app.  With the issuance of these long-awaited guidelines, companies should review their online privacy policies to ensure that they meet the Do Not Track disclosure requirements, as well as other provisions of CalOPPA, as more California AG enforcement measures can be expected.

HHS Continues to Obtain Record Settlements for Data Breach in the Healthcare Industry

May 14th, 2014 by Paul Pittman

As we reported in a prior post, the U.S. Department of Health and Human Services (“HHS”) has stepped up its efforts to enforce the use of adequate privacy and security measures to protect electronic protected health information (“ePHI”) found on laptops and other electronic devices, by seeking settlements from the companies it investigates. Last week, HHS secured its largest settlement to date, agreeing to a $4.8 million settlement with Columbia University and New York Presbyterian Hospital over allegations that they allowed the disclosure of 6,800 patients’ ePHI. The breach occurred after a computer server containing patients’ ePHI was deactivated, making the ePHI accessible on the Internet.

The settlement further illustrates the HHS’ trend toward seeking monetary settlements to resolve its enforcement actions, with settlement amounts seemingly increasing with each new action. Covered entities and business associates under HIPAA would be wise to periodically review their policies and procedures to ensure that they employ effective data security and encryption technology, provide proper training and properly dispose of electronic storage devices to avoid any inadvertent disclosure of personal health information.

Denial of Hulu’s Motion for Summary Judgment Paves Way for More Lawsuits Under the Video Privacy Protection Act

May 5th, 2014 by Nora Wetzel

A lawsuit filed in 2011 against Hulu, an online video content provider, claims the company violated the Video Privacy Protection Act (“VPPA”) by wrongfully disclosing users’ video viewing selections and personally identifiable information (PII) to third parties, comScore and Facebook.  On April 24, 2014, a federal court in the Northern District of California ruled that Hulu may have violated the VPPA by sharing user identifiers with Facebook.  Facebook could combine those user identifiers from Hulu with other information provided by cookies from a Facebook “like” button on Hulu’s web page to reveal a user’s Facebook identity, as well as the video content the user viewed on Hulu.  The court’s decision paves the way for privacy plaintiffs to bring suit against businesses that derive information from users’ viewing histories, with costly consequences given statutory damages of up to $2,500 per violation.

The VPPA prohibits a video service provider from knowingly disclosing PII of a consumer of the provider to third parties.  Under the VPPA, PII includes information that identifies a person as having requested or obtained specific video materials or services from a provider.  The VPPA prohibits disclosures that tie specific people to the videos they view.  The court found that disclosure of PII is not limited to a person’s actual name, but also consists of information that can identify a specific person and a specific transaction.  The court affirmed that a unique, anonymized ID alone is not PII, but “context could render it not anonymous and the equivalent of the identification of a specific person.”

Hulu’s Facebook disclosures included sufficient facts to potentially link the name of a video to an identified Facebook user, which would violate the VPPA. A Facebook “like” button on Hulu’s web pages sent Facebook the title of the video being watched, the IP address of the registered user’s computer, and cookies that could contain the user’s Facebook ID. Hulu did not send Facebook the Hulu user’s ID or name when the user’s browser executed code to load the “like” button. Nevertheless, the information provided to Facebook revealed both what the Hulu user watched and the user’s name on Facebook.
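The linkage the court described can be sketched in a few lines. In the sketch below, the function name, request structure, and all values are invented for illustration; “c_user” is the cookie name Facebook has used to identify logged-in users, and the key point is simply that a user-identifying cookie plus the watch-page URL is enough to tie a specific person to a specific video:

```python
def link_viewer_to_video(like_button_request):
    """Join a user-identifying cookie with the watch-page URL (illustrative only)."""
    user_id = like_button_request["cookies"].get("c_user")   # identifies the person
    watch_url = like_button_request["referer"]               # sent when the "like" button loads
    video_title = watch_url.rstrip("/").split("/")[-1]       # video title embedded in the URL
    if user_id:
        return (user_id, video_title)  # a specific person tied to a specific video
    return None

# Example request with invented values:
req = {
    "cookies": {"c_user": "100004321"},
    "referer": "https://example-video-site.com/watch/some-film-title",
}
```

Neither piece of data names the viewer outright, which is why the court’s focus on context, rather than literal names, matters: the combination, not either datum alone, performs the identification.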

In contrast, Hulu’s disclosures to comScore did not potentially violate the VPPA, because comScore could only have “hypothetically” linked a user’s name to a viewing history. Hulu provided comScore with users’ unique Hulu user IDs, an alphanumeric string Hulu assigned at random to differentiate between web browsers, a Hulu Ad ID identifying an advertisement, and the name of the video content program along with any season or episode number. Because comScore had the Hulu user ID, it possessed the “key” to locating users’ names, but there was no evidence it ever did so.

This ruling is significant because it further opens the door to class actions alleging VPPA violations: “anonymous” data may be considered PII under the statute when viewed in the context of other data points. Privacy plaintiffs will undoubtedly seek to apply this narrower interpretation of what constitutes “anonymous” data in other lawsuits implicating different state and federal privacy laws.

See Order here.

Do-Not-Track Group on The Verge of Success

April 29th, 2014 by Afigo Fadahunsi

The World Wide Web Consortium (commonly known as “W3C”), an organization tasked with developing a method for ensuring online user privacy, finally released a model for a feature that will tell advertisers not to track a web user’s activity.  Consisting of more than 100 members from a wide swath of the digital advertising world, including web publishers, browser manufacturers, advertising networks and privacy advocates, the group has engaged in complex and sometimes contentious disputes over its own purpose and objectives.  This development signals a major push forward in the arduous effort to balance online data privacy protection with healthy, yet relatively unintrusive, digital advertising.  Some, however, view this as the easiest part of a very complex, multifaceted process.

Offering consumers meaningful control over who tracks their online activities, and transparency in online behavioral monitoring, strikes at the heart of what W3C seeks to accomplish with Do Not Track (DNT) technology.  Some opponents of the DNT system are concerned that independent ad networks will bear the greatest financial impact: powerhouse web publishers like Google, Yahoo and Facebook, which operate their own ad networks, may gather information already obtained from their members and send targeted ads to them, thereby circumventing the restrictions imposed by DNT.

W3C formed the “Tracking Protection Working Group” in 2011 to define what it actually means for a browser to communicate a Do Not Track signal to advertisers and how those advertisers should respond.  As early as 2013, members of W3C clashed over issues as fundamental as the definition of “tracking.” Some balked at the supposed one-sidedness of the DNT tool, questioning whether Do Not Track would mean eliminating data gathering altogether or merely require that advertisers substantially limit sending targeted ads based on that data.  Either option, many believe, will inevitably suppress online advertising and the ability of ad networks to make money.  In September 2013, the Digital Advertising Alliance (DAA), an industry group whose members include Yahoo, Google and other major advertising players, very publicly withdrew from the working group and sought to develop its own DNT scheme.  The advertising industry, estimated to be worth $40 billion, has made it clear that its members will not be forced into a DNT standard that works directly against their interests.

Establishing a viable DNT system is still, quite frankly, at a rudimentary stage, as the issue of compliance will be the most challenging. The recently released DNT model, the “tracking preference expression” specification, while a small step forward, is considered a success achieved after years of wrangling among W3C members. The group must not only finalize this proposed signal; it must then turn to defining “tracking compliance and scope” – how websites will be required to respond to users’ tracking preferences and what types of information can be collected and retained even after a server receives a DNT signal. At the moment, consumers can either pursue opt-out mechanisms made available through web publishers’ privacy policies, which are occasionally ignored, or take the more drastic step of blocking all ads using anti-tracking add-on software.  That tool, however, only blocks third-party vendor ads, not targeted ads sent by “first party” advertisers – the web publisher giants and the ad networks they own.
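Mechanically, the signal the group has been standardizing is a one-character HTTP request header. The minimal server-side sketch below uses the header names from the W3C Tracking Preference Expression draft (“DNT” in requests, “Tk” in responses); the function names and handling logic are illustrative assumptions, not a compliance recipe:

```python
def tracking_allowed(headers):
    """Interpret the DNT request header.

    Under the W3C Tracking Preference Expression draft, a browser sends
    "DNT: 1" when the user opts out of tracking, "DNT: 0" when the user
    expressly permits it, and no header when no preference is expressed.
    """
    return headers.get("DNT") != "1"

def tk_response_header(headers):
    """Acknowledge the signal with the draft's "Tk" response header.

    "N" signals that the server is not tracking; "T" that it is or may be.
    """
    return {"Tk": "N"} if not tracking_allowed(headers) else {"Tk": "T"}
```

A server honoring the signal would suppress tracking whenever `tracking_allowed` returns `False` and echo that decision in the `Tk` header. The checks themselves are trivial; the working group’s unresolved disputes concern what “not tracking” must actually mean on the server side, which no header check can settle.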

HHS Obtains Monetary Settlement for Theft of Patient Health Data

April 28th, 2014 by Paul Pittman

Concentra Health Services Inc. and QCA Health Plan Inc. have agreed to pay a total of nearly $2 million to settle claims involving violations of the Health Insurance Portability and Accountability Act (“HIPAA”) stemming from the theft of unencrypted laptop computers. The settlement represents the culmination of an investigation by the U.S. Department of Health and Human Services (“HHS”) into violations of HIPAA’s privacy and security rules, and continues the trend by HHS of seeking monetary settlements following investigations into HIPAA violations.

Concentra filed a breach report in December 2011 regarding the theft of unencrypted laptops from a company facility in Springfield, Mo., prompting an investigation by the HHS Office for Civil Rights (“OCR”) – the primary entity responsible for enforcing HIPAA’s Privacy and Security Rules. OCR discovered that Concentra had conducted several previous risk analyses indicating that it lacked encryption on its laptops, medical equipment and other devices containing electronic protected health information (“ePHI”), but had been slow to implement encryption technology. In addition, the measures Concentra took to address its data security inadequacies were inconsistent and insufficient to protect patient data. Concentra’s violations existed from October 2008 to June 2012. Concentra is responsible for $1,725,229 of the settlement and, under the agreement, must institute a corrective action plan that includes security management processes, encryption of devices, employee training and report updates to HHS.

QualChoice filed a breach notice in February 2012, also relating to the theft of an unencrypted laptop, this one containing the ePHI of nearly 150 people. An OCR investigation revealed that QualChoice failed to comply with HIPAA privacy and security rules from April 2005 to June 2012, as it neither maintained a strong security policy nor restricted access to private health records to authorized personnel. QualChoice is responsible for $250,000 of the settlement and must update its security and risk management plan, provide training to employees and submit annual reports to HHS.

As with prior settlements between HHS and entities such as WellPoint, Inc. and Affinity Health Plan, Inc., HHS is increasingly enforcing the implementation and maintenance of adequate privacy and security measures to protect ePHI found on laptops and other devices. As a result, covered entities and business associates under HIPAA should periodically review their policies and procedures to ensure that they utilize the most current data security and encryption technology, restrict access to private patient health data and provide adequate training to their employees, to avoid drawing the ire of HHS.

DAA RELEASES AD MARKER IMPLEMENTATION GUIDELINES FOR MOBILE

April 13th, 2014 by Matthew Fischer

On April 7, the Digital Advertising Alliance (DAA) announced the release of its Ad Marker Implementation Guidelines for Mobile (Ad Marker Guidelines) at the Interactive Advertising Bureau’s (IAB) Mobile Marketplace conference. The DAA is a consortium of national advertising and marketing trade groups that acts as an industry self-regulatory body. While the DAA traditionally focused on online advertising, the surge in mobile advertising in the last few years has caused it to increasingly address issues unique to the mobile ad space. The Ad Marker Guidelines follow on the heels of the DAA’s publication last summer of a policy guidance document on mobile advertising titled “Application of Self-Regulatory Principles to the Mobile Environment.”

The DAA’s AdChoices (Ad Marker) icon is the blue triangular image that is the centerpiece of the organization’s ad choices program and is often delivered in or alongside interest-based ads in the online and mobile environments. Approved text accompanying the icon includes any of the following:

  • Why did I get this ad?
  • Interest Based Ads
  • AdChoices

When consumers click on the Ad Marker, they receive information about the targeted nature of the advertisement and guidance on how to opt out of behaviorally targeted advertising. The Ad Marker Guidelines “address use cases in which consumers interact with the screen without using a cursor, as is the case when they use mobile devices such as smart phones and tablets.”

The Ad Marker Guidelines cover both in-ad implementation (i.e., size, touchpad area, in-ad placement and in-ad user experience) and app developer and publisher implementation (i.e., ad marker placement and flow for developers and publishers). Below are some of the key takeaways.

In-Ad Implementation

Size: The smaller screen size and ad creative sizes associated with mobile devices justify implementation of the Ad Marker through the icon itself, provided it is at least 12 pixels by 12 pixels in size.

Touchpad Area: The Ad Marker should include an invisible touchpad area between 20×20 and 40×40 pixels, large enough to allow the user to easily interact with the Ad Marker on a mobile device.

In-Ad Placement: For an in-ad placement, the entity serving the notice may position the Ad Marker in any one of the four corners of the ad, although placement in the upper right hand corner is discouraged because that is where the close button for ads is normally located. When the icon is used concurrently with approved text, the Ad Marker Guidelines recommend placing the icon in the immediate corner of the ad with the approved text adjacent to the icon.

In-Ad User Experience: Tapping on the Ad Marker results in any one of the following four experiences:

  • A direct link to a notice containing a mechanism that allows users to exercise their interest-based ad preferences, or to instructions for device-specific advertising preferences.
  • An interstitial opens up that provides the user a choice to access a preference mechanism, access a privacy policy, go back to the ad, or close the interstitial.
  • Tapping on the icon the first time expands the notice to show the approved text; a second tap brings the user to the preference mechanism or to instructions for device-specific controls.
  • When the user taps the Ad Marker in a rich media ad that is in a collapsed state, the Ad Marker icon expands to provide the user with the option to: (i) close the in-ad interstitial to view the ad; (ii) access the privacy policy; or (iii) access a preference mechanism or instructions for device-specific controls.

App Developer and Publisher Implementation

The Ad Marker Guidelines advise that “[w]hen implementing the DAA Ad Marker, application developers and mobile Web publishers need to consider both the placement of the Ad Marker and user access to the notice and choice it provides.”

Mobile publisher notices should use any of the three approved texts and when the icon accompanies an approved text, it should be at least 12 pixels by 12 pixels in size.

The in-app notice should be accessible from the app’s Settings menu; on the mobile web, the best placement for the notice is the page footer.

The Ad Marker Guidelines provide practical, easy-to-understand directions that will allow those serving ads in the mobile environment, including those on the creative side, to consistently utilize the Ad Marker icon. Use of the Ad Marker helps facilitate compliance with the enhanced notice requirements set forth in the DAA’s Application of Self-Regulatory Principles to the Mobile Environment.

About Us
Sedgwick provides trial, appellate, litigation management, counseling, risk management and transactional legal services to the world’s leading companies. With more than 350 attorneys in offices throughout North America and Europe, Sedgwick's collective experience spans the globe and virtually every industry.
