
Cloudyna MeetUp 2023: Generative AI - a summary

Wojciech Gruszczyk · 4 min read

On the 28th of September '23, I had the pleasure of speaking at the Cloudyna MeetUp 2023. The event took place at Leśniczówka, the rock'n'roll hub of Katowice, where the electric atmosphere matched the iconic venue. Beyond the great talks, the event was an opportunity to meet the community and discuss the latest in generative AI, and the discussions sometimes got quite heated. In this post, I'm sharing a summary of the event, focusing on the wow moments I experienced.

Albert Einstein at Woodstock '69 wearing Ray-Ban® glasses - envisioned by Midjourney

Common challenges

Even though the speakers represented companies of various sizes, ranging from startups to global players like Deloitte and Microsoft, several common threads emerged. The most frequently discussed topics included:

  • Generative AI: It's clear that generative AI is reaching a tipping point in adoption. The technology is becoming increasingly accessible, with a growing number of use cases. AI adoption is no longer limited to tech giants, and the next few years will likely witness a sustained AI revolution, much like what happened with blockchain technology.

  • Hallucinations: A significant challenge with generative AI lies in the prevalence of hallucinations. Reliable generative AI models remain a future goal: today's models often exhibit biases and generate inaccurate information. They are trained to respond but cannot recognize when they lack the necessary information. While statistically plausible responses may seem trustworthy, their stochastic nature often leads to inaccuracies, especially when models are trained on biased datasets. Attempts to address this issue usually involve a technique called "grounding," which supplies the model with relevant context to enable more informed answers. RAG (Retrieval-Augmented Generation) is a good example of such an approach.

  • Technical Challenges: Performance and cost remain the most common technical challenges. As models and datasets continue to grow, training becomes more demanding. While big players have managed to reduce costs, smaller challengers, mainly startups, still face competition challenges in this field. Response times are also a concern, especially in real-time applications or batch processing, which can be time-consuming and expensive. Classical approaches like caching or pre-computation are often applied to mitigate these issues, but the problem persists.
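To make the grounding idea above concrete, here is a minimal sketch of the RAG pattern: retrieve the documents most relevant to a question, then prepend them to the prompt so the model answers from supplied facts rather than guessing. Everything here is illustrative, not a real API: the keyword-overlap retriever stands in for embedding search, and a production system would pass the resulting prompt to an actual LLM.

```python
# Hypothetical sketch of "grounding" via Retrieval-Augmented Generation (RAG).
# The retriever below uses naive keyword overlap purely for illustration;
# real systems typically use vector embeddings and a similarity index.

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Prepend retrieved context and instruct the model to stay within it."""
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Toy corpus (hypothetical content for the example).
docs = [
    "Cloudyna MeetUp 2023 took place at Lesniczowka in Katowice.",
    "Retrieval-Augmented Generation grounds model answers in retrieved documents.",
]
prompt = build_grounded_prompt("Where did Cloudyna MeetUp 2023 take place?", docs)
print(prompt)
```

The "answer only from the context, otherwise say you don't know" instruction is the part that directly targets hallucinations: it gives the model an explicit way out when the retrieved context does not contain the answer.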

Legal challenges

I touched on the legal side of generative AI briefly in my previous post titled Liability in limbo: legal and ethical challenges of Generative AI. However, Piotr Kaniewski, an Attorney at Law from Osborne Clarke, shed new light on the subject. In addition to the widely discussed issues around the ownership of training data and the implications of IP violations by big tech companies, Piotr delved into the nuances of license agreements and terms of use. The subject is too complex to fully cover in a short paragraph, but the key takeaways are as follows:

  • Service providers typically offer their services under non-negotiable license agreements that often don't align with applicable IP laws.
  • In contrast, big tech companies usually present more acceptable terms. However, complications often arise with plugins, which are frequently developed by third parties and may have terms that don't align with the main service or are too radical for most businesses to accept.


Ethics

Ethical concerns surrounding AI were a prominent discussion point at the meet-up, with participants expressing a wide range of opinions. These ranged from the radical stance that we should refrain from using large models altogether, as they might be based on stolen IP (as discussed in the legal section), to more conservative views holding that the focus should be on the future and on ensuring that original authors receive due compensation for their work. Whether a consensus was reached remains uncertain.

Nevertheless, it is clear that the topic is well understood by major tech companies. Michał Furmankiewicz of Microsoft highlighted the Microsoft Responsible AI Standard, which outlines a framework for building AI systems responsibly and places equal emphasis on responsible development practices. Much like the WCAG for the web, this standard serves as an excellent starting point for any organization seeking to build AI systems responsibly.


Summary

The meetup was a great opportunity to meet the community and discuss the hot topic of generative AI. The atmosphere was electric, and the venue was a perfect match. On a personal note, it was a chance to stand on stage as a speaker again, a welcome change after the COVID-19 years. I'm looking forward to the next edition. If you have any thoughts on the topic, don't hesitate to share them in the comments below.