People hold miniature EU flags in a protest.
PHOTO: Kzenon

On April 21, 2021, the European Commission officially proposed the Artificial Intelligence Act, a framework for monitoring, regulating and banning certain uses of machine learning technology. 

The goal, according to officials, is to invest in and accelerate the use of AI in the EU, bolstering the economy while also ensuring consistency, addressing global challenges and establishing trust with human users. 

The Act categorizes AI use-cases into three levels: unacceptable risk, high risk and low/minimal risk. 

AI use cases with unacceptable risk will be banned outright. These are applications that violate fundamental human rights, such as:

  • Manipulation through subliminal techniques
  • Exploitation of specific vulnerable groups, such as children
  • Social scoring done by public authorities (like China’s social credit system)
  • Real-time remote biometric identification in public spaces by law enforcement (though exemptions exist)

High-risk applications pose serious threats to health, safety and fundamental rights, though the debate around the definition of “high risk” has been raging since last year, with more than 300 organizations weighing in. These AI applications are allowed on the market only if certain safeguards are in place, such as human oversight, transparency and traceability. 

To Whom Does This Law Apply?

These policies, like GDPR, will affect any company that targets the EU, not just those based there. And they aren't aimed solely at organizations deploying high-risk AI, such as for infrastructure or law enforcement. If you use chatbots to handle inquiries, machine learning to extract customer sentiment or insights, or any type of content-generating or content-altering bots, these regulations apply to you. 

As of this writing, the EU's AI Act has not been passed into law, and no implementation date has been set. We're unlikely to see any updates on the Act before 2023. 

Once passed, lack of adherence to the AI Act could result in monetary penalties; in the case of high-risk applications, oversight bodies will also have the power to require that AI systems be retrained or destroyed.

Related Article: GDPR Compliance: What Marketers Can Expect in 2022

What Does the AI Act Mean for Marketers?

Most marketers don’t have to worry about creating high-risk AI systems that are subject to rigid government oversight. But that doesn't mean they’re out of the woods.

Some non-high-risk AI systems will also face transparency obligations, specifically those that:

  • Interact with humans (such as chatbots)
  • Detect emotions (including sentiment)
  • Categorize based on biometric data
  • Generate or manipulate content (like deepfakes)

Deepfakes (digitally altered videos or photos) have seen an uptick in popularity in the last few years. Mark Zuckerberg, for instance, is a popular target of deepfake content, with the “Sassy Justice” video (from the makers of “South Park”) becoming an internet favorite.

Unfortunately, not all deepfakes are benign. Tim Hayden, CEO and managing partner of Brain+Trust, points out how bad actors could easily create deepfake videos of CEOs saying something that would hurt their brand. 

“There's a real threat with deepfakes where you can not very difficultly go and build and create videos of a CEO saying something," Hayden said. "You can do it by just finding that they wear the same shirt six different times over the last year, and you can take snippets of each time they were on CNBC or at a shareholder meeting or something and be able to piece together a message.”

Hayden, who previously worked in media intelligence, explained how these fake videos (or other faux headlines) often correlate to stock market action, or, more specifically, people shorting the targeted brand’s stock. 

“And what they were doing,” added Hayden, “they were buying guns, they were trafficking people or drugs. They were planning terrorist activity, but they were making money.”

Related Article: Growing Data Privacy Concerns in the Age of Digital Transformation

The Focus: Data Privacy and Transparency

The biggest takeaway for marketers should be this: data privacy. It’s all the rage, and it’s not going away. 

Brands — many of which already follow GDPR requirements for collecting, storing, sharing and using data — will face even more stringent regulations when it comes to handling consumer data. 

“Most of the privacy laws passed over the last five or so years have aimed to … give consumers the right to see what data/information a brand has on them and how those data are processed and used,” Hayden said. “This [Act] goes further to govern and monitor more advanced technologies, automation, artificial intelligence and machine learning.”

Hayden doesn't see data privacy requirements as anything more than this: a directive for brands to personalize their programmatic advertising and inbound customer experiences and reduce waste and noise across media networks. “I see it as similar to the FCC soliciting a new television standard in the late 1980s to give all of us the opportunity to trade our CRT television sets for thinner, widescreen HDTVs," Hayden said.

AI Act Criticisms and Compliments

Since the EU announced its proposed AI legislation, there’s been a frenzy of feedback online, both positive and negative. 

Some critics say the Act’s oversight expectations are too broad to apply to all AI applications alike, as regulating products released on the market, online platforms and critical city infrastructure are completely different processes. 

Another shortfall critics cite is the lack of recourse for actual humans to raise complaints about AI systems that affect them or a group of people personally. This stands in stark contrast to GDPR, which enables individual complaints and allows for collective remedy. 

Others say the Act could be a good thing. Dr. Phillip A. Laplante, professor of software and systems engineering at Penn State University, said, “I think this is going to slow progress in a good way. I think it will force companies that use these technologies to conduct rigorous verification and validation and be able to provide justifiable explainability (that is defensible in court) before releasing the product to the public.”

He added that, when it comes to enforcing this legislation, “I think government entities have to harness their own expertise and include expertise from academia, industry and groups that represent the public and users in evaluating and regulating AI.”

If passed, Laplante said this Act will require “widespread (and continuous) regulatory and technical evaluation of products by providers and users of AI technology. I think this continuous evaluation is a good thing.”

Related Article: Balancing the Opportunities and Risks of Machine Learning

Will the EU AI Act Become Law — or Lose Steam? 

The EU’s AI Act is still in its infancy, with no definitive date for when it could become law. 

Still, with technology rapidly evolving, and the debate around consumer privacy and protection gaining momentum, such a law seems inevitable. 

According to Hayden, the Act leaves plenty of room open for interpretation. And, if passed, “there will be 10 or more high-profile cases that set early precedents, perhaps even causing amendments to be made to the law.”

It’s very likely, however, that this Act will see new iterations and updates as industry leaders and tech experts continue to speak out.