
The EU AI Law Just Passed — What Marketers Need to Know

The world's first comprehensive AI law just passed. Find out more about the specifics of this historic legislation and how they might impact your team.

Published on Mar 14, 2024

After three years of negotiations, the European Parliament gave its final approval to the AI Act: rules meant to create trustworthy AI systems and use cases, and to mitigate the safety and ethical risks posed by powerful AI models.

The Act is the world’s first formal legal framework on AI, and many experts believe it could set a precedent for how other countries approach AI legislation. Because the EU has roughly 450 million citizens, the legislation is expected to shape the global AI market: large AI companies will be incentivized to comply so they can continue offering their products in the EU. Enforcement of certain aspects of the Act is set to kick in this year, with most provisions going into effect in 2025 and the final rules taking effect in 2026.

This initiative is a landmark piece of legislation. Even so, for marketers using AI platforms like Jasper (which, as a platform built on general-purpose generative models, is likely categorized as “limited risk”), most changes revolve around transparency and copyright compliance, with minimal impact on everyday operations. However, notable requirements include disclosures for AI-driven interactions and AI-generated content, particularly to prevent deep fakes. Jasper is committed to adhering to the new regulations, signaling a step forward in responsible AI usage. Keep reading to learn more about the EU’s AI Act.

A risk-based approach

At the heart of the Act is its risk categorization system, which regulates AI systems based on four risk levels that range from unacceptable to minimal/none. AI models and the products they power are split into these categories based on the risks they may pose to health, safety, and the fundamental rights of the EU’s citizens. Certain products may fall into multiple categories. 

  • Unacceptable risk: Applications with the potential to manipulate people through subconscious messaging or stimuli, or by exploiting vulnerabilities like socioeconomic status, disability, or age. Social scoring AI systems, which evaluate individuals based on social behavior, are prohibited. Also banned is the use of real-time remote biometric identification systems by law enforcement in public spaces.
  • High risk: These include applications related to biometrics, critical infrastructure, education and vocational training, employment, worker management and access to self-employment, access to essential services, law enforcement, and others. The Act outlines requirements for developers of high-risk AI systems that encompass risk management practices, data governance, monitoring, record-keeping, detailed documentation, transparency, human oversight obligations, and standards for accuracy, robustness, and cybersecurity. Additionally, high-risk AI systems must be registered in an EU-wide public database.
  • Limited risk: These refer to AI systems that are subject to specific transparency obligations. Large language models, like those used by Jasper, fall under the 'limited risk' category. We cover this important category in more detail below.
  • Minimal risk: These applications are already widely deployed and constitute a significant portion of the AI systems we engage with today. Examples include spam filters, AI-enabled video games, inventory-management systems, and other implementations that pose little to no risk to humans.

Limited risk, general-purpose AI models

The AI Act also addresses the limited risks that may arise from general-purpose AI models, a category that includes the large generative models powering chatbots and platforms; Jasper, for example, is likely defined as limited risk. These models have a wide range of applications and are increasingly foundational to AI systems in the EU. According to the Act, highly capable or widely used models could cause serious accidents or be misused in cyberattacks, and harmful biases spread across many applications could affect large numbers of people.

The legislation emphasizes transparency and accountability for these general-purpose models, requiring technical documentation, summaries of training data, and compliance with EU copyright law. Certain models will also require risk assessments, adversarial testing, and incident reporting, similar to Jasper's existing security protocols.

The Act features two important callouts related to transparency for applications and users leveraging these limited-risk, general-purpose AI models: 

  1. Providers need to build AI chatbots and other systems meant to engage with humans in such a way that those systems inform users that they’re interacting with AI. This will ensure humans can make an informed decision on whether they want to continue that interaction.
  2. Users of AI systems that generate or manipulate images, audio, or video are required to disclose that the content has been artificially created or altered. This rule applies when the AI-produced content closely resembles existing individuals, objects, places, or events and could mislead a person into believing it is genuine, aka a deep fake. (One lightweight way to handle this labeling is sketched right after this list.)
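
As a rough illustration of that second point, here is a minimal sketch in Python of how a team might attach a visible disclosure to AI-generated media before publishing it. The Asset class, the disclosure wording, and the workflow are hypothetical examples for illustration, not part of the Act's text or of any Jasper tooling:

    # Minimal sketch: add an "AI-generated" disclosure to content before publishing.
    # The Asset class and disclosure text below are illustrative assumptions.
    from dataclasses import dataclass

    AI_DISCLOSURE = "This content was generated or altered with AI."

    @dataclass
    class Asset:
        title: str
        caption: str
        ai_generated: bool

    def with_disclosure(asset: Asset) -> Asset:
        """Append the disclosure to the caption of any AI-generated asset."""
        if asset.ai_generated and AI_DISCLOSURE not in asset.caption:
            asset.caption = f"{asset.caption}\n\n{AI_DISCLOSURE}"
        return asset

    # Example usage
    hero = Asset(title="Spring campaign hero", caption="Our new collection.", ai_generated=True)
    print(with_disclosure(hero).caption)

How you surface the label (a caption, an on-image watermark, or embedded metadata) will depend on the channel, so treat this as a starting point rather than a compliance checklist.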

What does the EU’s AI Act mean for marketers using AI (and Jasper)?

Most of the EU AI Act deals with systems deemed high risk to EU citizens. As mentioned previously, LLMs like those used by Jasper are likely deemed “limited risk.” This means they’re subject to lighter-touch obligations, such as meeting transparency requirements and complying with EU copyright law before a new foundation model is released.

In short, the Act imposes very little regulation on end users of AI. However, there are two important points to note:

  • You are not permitted to use AI to create deep fakes or falsely impersonate someone. This is prohibited in Jasper’s terms of service. 
  • If you are using Jasper's API or something similar to power a chatbot, for example, then by 2025 you should provide clear disclosure in the chatbot experience to inform users that they are engaging with AI and not another person (one way to do this is sketched below).
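
For the chatbot case, here is a minimal sketch in Python of one way to make sure the disclosure appears before any AI-generated reply. The generate_reply() function is a stub standing in for whatever API your chatbot actually calls (it is not a real Jasper API method), and the notice wording is only an example:

    # Minimal sketch: open every chat session with an AI disclosure,
    # then route user messages to the AI backend.
    AI_NOTICE = "You're chatting with an AI assistant, not a human."

    def generate_reply(user_text: str) -> str:
        # Stub standing in for a real call to your AI provider's API.
        return f"(AI reply to: {user_text})"

    def run_chat(messages: list[str]) -> list[str]:
        """Return a session transcript that starts with the AI disclosure."""
        transcript = [AI_NOTICE]  # disclosure shown before any generated content
        for user_text in messages:
            transcript.append(generate_reply(user_text))
        return transcript

    # Example usage
    for line in run_chat(["What's your return policy?"]):
        print(line)

Whether you show the notice as a banner, a first message, or a persistent label is a product decision; the point the Act cares about is that users know they are talking to AI.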

If you’re unsure about how the Act’s reach and implications may impact your work or business, reach out to us. We also recommend contacting your legal team to get more insight.

The EU AI Act’s implementation timeline

The AI Act will officially become law after all EU member states sign off, which is expected sometime this spring.

Rule enforcement will take effect in stages: 

  • Six months after official publication, EU member states will phase out prohibited systems in the unacceptable-risk category.
  • At 12 months, rules for general-purpose AI systems like chatbots (aka Jasper users) will come into effect.
  • By 24 months, all regulations of the AI Act will be fully applicable, including obligations for specific high-risk systems.

How will the AI Act be enforced?

Enforcing the AI Act will primarily come down to each EU member state. States will appoint one or more national authorities to supervise AI market activities. They are encouraged to designate a national supervisory authority, which will also serve as that country’s representative in the European Artificial Intelligence Board.

An advisory forum will also provide additional technical expertise by including a diverse range of stakeholders, like representatives from the AI industry, start-ups, SMEs, civil society, and academia. Additionally, the EU Commission will create the European AI Office to oversee general-purpose AI models, collaborate with the European Artificial Intelligence Board, and benefit from guidance provided by a scientific panel of independent experts.

How is AI innovation impacted by the law?

The Act includes measures like regulatory sandboxes and provisions for real-world testing that are meant to help balance innovation with regulation. These initiatives are intended to benefit SMEs and startups, offering them the flexibility to experiment and refine their AI systems in a controlled environment before wider deployment.

Ultimately, this legislation is a significant step forward in establishing a common framework for AI regulation. Jasper’s platform is built on the principles of security, transparency, and responsibility, and we welcome additional guidance from legislators as we continue to serve the European market. Alongside our US team, our European office is dedicated to ensuring Jasper stays compliant with all EU regulations as the AI Act is implemented. 

*This story is not intended to be legal advice to our customers or any third parties. 
The White House issued its own Executive Order on AI. Learn more about its major mandates and their release dates here.


Meet The Author:

Alton Zenon III

Jasper Content Marketing Manager
