
While many companies are still working out their next steps following the decision by EU regulators to strike down the U.S. Privacy Shield, Facebook, the company at the center of the case that led to that decision, is now challenging EU antitrust regulators. This time, however, the dispute is not about the Privacy Shield but about a request by EU antitrust regulators for documents, which Menlo Park, Calif.-based Facebook says would force it to reveal a substantial amount of its employees' personal data.

Facebook says that, to date, it has cooperated with the European Commission's antitrust investigations, but argues that the way the request was formulated would require it to hand over information that includes employees' private messages, among other things. Even so, Facebook expects to hand over hundreds of thousands of documents to the regulator. "The exceptionally broad nature of the commission's requests means we would be required to turn over predominantly irrelevant documents that have nothing to do with the commission's investigations," Tim Lamb, Facebook's associate general counsel for competition, said in response to an inquiry from the French news agency AFP.

Among the documents covered by the request, according to Lamb, are documents containing sensitive personal information such as employees' medical records, personal financial documents, and private information about employees' family members. Facebook believes such requests should be reviewed by EU courts, according to Lamb, and is asking the court to weigh in on broad search terms such as "applause" or "for free," which could easily appear in personal email messages or other exchanges well beyond the scope of antitrust matters.

Investigations such as this can involve requests for messages or documents containing certain words or phrases and are designed to scoop up as much information as possible.

Meanwhile, a U.S. antitrust hearing featuring the top executives of four Big Tech firms, originally set for this week, has been postponed. A notice filed by the House Judiciary Committee set no new date for the hearing, titled "Examining the Dominance of Amazon, Apple, Facebook, and Google."

Chief executives Tim Cook of Apple, Jeff Bezos of Amazon, Mark Zuckerberg of Facebook and Sundar Pichai of Google and its parent firm Alphabet had agreed to participate in the session.

Google Faces Australian Courts Over Data Privacy

Elsewhere, the Australian Competition and Consumer Commission (ACCC) has launched court proceedings against Alphabet’s Google, accusing it of misleading Australian consumers in order to obtain their consent to expand the scope of personal information that Google could collect and combine about consumers’ internet activity, for use by Google, including for targeted advertising.

While the complaint may seem a distant problem in a distant country, the case has widespread implications for other jurisdictions. Although no similar cases appear to be on the books elsewhere, it would be logical to expect them if this complaint is upheld.

In court documents, the ACCC accused Google of misleading consumers when it failed to get their explicit informed consent for its move in 2016 to start combining personal information in consumers’ Google accounts with information about those individuals’ activities on non-Google sites that used Google (formerly DoubleClick) technology to display ads. “We are taking this action because we consider Google misled Australian consumers about what it planned to do with large amounts of their personal information, including internet activity on websites not connected to Google,” ACCC Chair Rod Sims said in a statement. “Google significantly increased the scope of information it collected about consumers on a personally identifiable basis. This included potentially very sensitive and private information about their activities on third party websites. It then used this information to serve up highly targeted advertisements without consumers’ express informed consent.”

For its part, Google said the change was optional and that consumer consent was sought through prominent and easy-to-understand notifications. “If a user did not consent, their experience of our products and services remained unchanged,” a Google spokesman said in an email to the Reuters news agency, adding that the company intends to defend its position.

Before 28 June 2016, Google stated in its privacy policy: “[We] will not combine DoubleClick cookie information with personally identifiable information unless we have your opt-in consent.”

On 28 June 2016, Google deleted this statement and inserted the following statement: “[d]epending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google’s services and the ads delivered by Google.”

Google acquired DoubleClick, a supplier of ad-serving technology services to publishers and advertisers, in 2008. It now supplies DoubleClick’s services through its Google Ad Manager and Google Marketing Platform brands.

Combined with the personal data stored in Google accounts, this provided Google with valuable information with which to sell even more targeted advertising, including through its Google Ad Manager and Google Marketing Platform brands, the ACCC claims. The regulator is seeking a fine “in the millions.”

Okta Assesses Data Privacy Concerns

These are not just legal battles; they are issues that resonate with the public and with those who work in digital workplaces. To find out how deep this goes, San Francisco-based Okta, which develops an enterprise-grade identity management service, commissioned Juniper Research to conduct an online survey of over 12,000 people between the ages of 18 and 75 in six countries: Australia, France, Germany, the Netherlands, the United Kingdom, and the United States. The research, carried out between 20 and 31 January 2020 and 27 April and 6 May 2020, produced four principal findings.

1. Consumers Underestimate Data Tracking

A significant number of respondents believe companies are not collecting data about their online and offline activities. 42% of Americans do not think online retailers collect data about their purchase history, and 49% do not think their social media posts are being tracked by social media companies.

2. Consumers Say Privacy Outweighs All

While technology companies have launched initiatives to track the spread of COVID-19, many consumers are not buying it. 84% of Americans are worried that data collection for COVID-19 containment will sacrifice too much of their privacy, and 74% of Australians say the same.

3. Distrust in Government Is High

While social media companies are the least trusted overall, global respondents made it clear that trust in government also remains low. 70% of Americans are uncomfortable with the government tracking their data, and less than a quarter (24%) of US respondents are willing to share their data to help law enforcement.

4. Consumers Are Sitting on Gold Mines of Data

37% of consumers would not sell their personal data, and another 27% are unsure whether any payment would be worth the sacrifice. When it comes to specific types of data, 76% of all respondents are unwilling to sell some portion of their data.

Private And Public Sector Divided Over Data Privacy

However, it would be a mistake to suggest there is a single view on data and the way it is accessed. A recently released Ernst & Young report, entitled "Bridging AI's Trust Gaps," shows significant differences in how the public and private sectors view the future of ethics, governance, privacy, policy and regulation of artificial intelligence (AI) technologies.

The EY web-based survey, conducted between 2019 and early 2020, drew responses from 71 policymakers and 284 companies across 55 countries. It showed that discrepancies over AI exist in four key areas: fairness and avoiding bias, innovation, data access, and privacy and data rights. There were three major findings:

1. Policymakers Have Specific Priorities, Private Sector Lacks Consensus

Policymakers' responses show widespread agreement on the ethical principles most relevant for different applications of AI. The private sector's top choices, by contrast, were principles already prioritized by existing regulations such as GDPR, rather than emerging issues such as fairness and non-discrimination.

2. Disagreement About the Future Direction of Governance

While both policymakers and companies agree that a multi-stakeholder approach is needed to guide the direction of AI governance, the results show disagreement on what form it will take: 38% of the organizations surveyed expect the private sector to lead a multi-stakeholder framework, but only 6% of policymakers agree.

3. Overcoming Differences Through Collaboration

The survey also found blind spots among stakeholder groups when it comes to implementing ethical AI: 69% of companies agreed that regulators understand the complexities of AI technologies and business challenges, while 66% of policymakers disagreed.

Nigel Duffy, EY Global Artificial Intelligence Leader, pointed out that as AI transforms business and industries, this poor alignment diminishes public trust in AI and slows the adoption of critical applications. For those efforts to be fruitful, companies and policymakers need to be aligned.