
Liability in limbo: legal and ethical challenges of Generative AI

5 min read
Wojciech Gruszczyk
Chief Blog Officer

The latest developments in Artificial Intelligence have initiated heated discussions about the future of AI: a mixture of excitement, fear, hope, and everything in between. The European Commission has proposed a new legal framework for AI, which may shortly become law. Some big names (including Elon Musk) have urged a pause in the development of large models; the open letter had been signed over 30,000 times as of August 28, 2023 [1]. At the same time, adoption of the technology is growing rapidly and doubts are piling up. In this post, I summarize my thoughts on how to approach the challenges and operate safely.

Prosecuted AI - absurd today, but who knows what the future holds.

Ethical concerns

The ethical concerns have a dual nature. On the one hand, the technology (like OpenAI's ChatGPT) is so good at conversation that we have started to attribute human characteristics to it; on the other hand, the authors of the original content used by the algorithms are neither credited nor paid for their work. Let's take a closer look at both groups.

AI as a human

Even though we do not fully control how content is produced by generative models, we are fully aware of their architecture, and we understand how they represent memory and make decisions. To the best of my knowledge, no mechanism exists that might give them consciousness or feelings. From a formal perspective, what we currently observe is so-called Narrow (or Weak) AI: specialized in a single task, yet far behind the human brain's capabilities. For that reason, moral concerns about the AI itself are premature.

AI as a content creator

Content generation strongly depends on the data the models were trained on. Even though the final output is not a copy of the original work, it usually contains many similarities or characteristic features (take a look at one of my previous articles, where I used Midjourney to generate a Banksy-style gorilla). As long as the training data is publicly available or comes from one's own legal sources, everything is fine. The problem arises when the data is scraped from the Internet without proper permission (consciously or not). My main concerns relate to the following questions:

  • What if the training data contained characteristic (copyrighted) materials and the generated content is used for commercial purposes, violating the copyright? It might be as subtle as a painter's style or as obvious as the characteristic guitar riff of 'Smoke on the Water'.
  • What if the generated content contains a piece of code distributed under a viral license? Should the final code be open-sourced? What if the generated code is used in a commercial product? Is the code generated by Copilot safe to use?

My general advice: in the case of texts, always run the output through dedicated plagiarism-detection tools, and at the very least be protected by a contract with the tool's provider stating that the generated content is free of any third-party rights. A minimal sketch of what such a check does is shown below.
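To make the idea concrete, here is a toy Python sketch of a similarity check. It is purely illustrative: the reference corpus, the threshold, and the use of `difflib` are my own assumptions, and real plagiarism tools use far more robust techniques (shingling, semantic embeddings, massive indexed corpora).

```python
# Toy plagiarism heuristic: flag generated text that closely matches
# known reference material. Illustrative only -- dedicated tools are
# far more robust than character-level similarity.
from difflib import SequenceMatcher

# Hypothetical reference corpus; in practice this would be a large,
# indexed collection of protected material.
REFERENCE_CORPUS = [
    "The quick brown fox jumps over the lazy dog.",
    "To be, or not to be, that is the question.",
]

SIMILARITY_THRESHOLD = 0.8  # assumed cut-off; tune for your use case


def flag_similar(generated: str) -> list[tuple[str, float]]:
    """Return reference passages whose similarity exceeds the threshold."""
    hits = []
    for reference in REFERENCE_CORPUS:
        ratio = SequenceMatcher(None, generated.lower(), reference.lower()).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            hits.append((reference, ratio))
    return hits


if __name__ == "__main__":
    for text, score in flag_similar("The quick brown fox jumped over a lazy dog."):
        print(f"Possible overlap ({score:.0%}): {text}")
```

A check like this only catches near-verbatim reuse; it says nothing about stylistic imitation, which is exactly why the contractual protection mentioned above matters.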

In my eyes, the big battle in this area is about to start, and the first lawsuits have already been filed [2].

Legal concerns

The European Commission has proposed a new legal framework for AI [3]. The draft is under public consultation and is expected to be adopted in 2024. The EU wants to ensure that AI systems are safe and respect human rights and values. At the same time, lawmakers are aware that the technology is still in its infancy and that the law should not be too restrictive, so as to allow further development.

The document distinguishes four risk levels of AI systems:

  • unacceptable risk - AI systems that are prohibited on the EU market:
    • Cognitive behavioral manipulation of people or specific vulnerable groups of people (think of kids consuming content generated by AI that may lead to depression or harmful behaviors)
    • Social scoring
    • Real-time and remote biometric identification systems
  • high risk - AI systems that are subject to strict obligations before they can be put on the market
    • AI systems in products falling under the EU's product safety legislation [4]
    • AI systems in specific areas that have to be registered in an EU database:
      • Biometric identification and categorization of natural persons
      • Management and operation of critical infrastructure
      • Education and vocational training
      • Employment, worker management, and access to self-employment
      • Access to and enjoyment of essential private services and public services and benefits
      • Law enforcement
      • Migration, asylum, and border control management
      • Assistance in legal interpretation and application of the law
  • limited risk - AI systems that are subject to specific transparency obligations
    • systems interacting with humans (like chatbots)
    • systems used to generate or manipulate content (like deep fakes)
  • minimal risk - AI systems that are subject to no obligations

As the summary above is rather limited, I encourage you to take a look at the overview of the act provided on the EU pages [5], on which this summary is based.
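For those who, like me, think more easily in code than in legal prose, here is a toy sketch of the four risk tiers as a data model. It is purely illustrative and emphatically not legal advice; the example systems and their assignments are my own assumptions, not classifications taken from the act:

```python
# Toy model of the proposed act's four risk tiers -- illustrative only.
# The example mappings are my own reading, not the act itself.
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited on the EU market"
    HIGH = "strict obligations before market entry"
    LIMITED = "specific transparency obligations"
    MINIMAL = "no obligations"


# Assumed, hand-picked examples for illustration.
EXAMPLE_SYSTEMS = {
    "real-time remote biometric identification": RiskLevel.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskLevel.HIGH,
    "customer-support chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for system, level in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {level.name} ({level.value})")
```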

Summary

As you can see, the legal aspects of Generative AI are still in limbo: the technology is developing rapidly, and the law is trying to catch up. The best advice I can give you is to be aware of the risks and use the technology responsibly. If you are a content creator, make sure that the generated content does not violate any third-party rights. As a consumer, be aware that the content you receive might be generated by an AI and that what you see may be untrue or manipulated.

What are your thoughts on the topic? Share your opinion in the comments below!

Footnotes

  1. Pause Giant AI Experiments: An Open Letter

  2. Getty Images Statement

  3. EU Briefing: Artificial intelligence act

  4. EU’s product safety legislation

  5. EU AI Act: first regulation on artificial intelligence