AI<>BA - Conference takeaways

4 min read
Wojciech Gruszczyk

Another month, another conference. On October 25th, I had the pleasure of speaking at AI<>BA. The main focus of the conference was on business applications of AI. I was asked to give an introductory talk and define a common language - not an easy task, especially if you are used to speaking to a more technical audience. At the end of the day, I think it was beneficial for both the audience and myself to have a chance to rethink the obvious and ask ourselves whether we truly understood:

I learned very early the difference between knowing the name of something and knowing something.

Richard P. Feynman

In this post, I will not summarize the conference itself but rather share my practical observations.

AI<>BA Conference - Practical Applications of AI for Businesses on the Crossroads

Everyone is learning

The conference was an opportunity to meet people from different industries and backgrounds. What they all had in common was that they saw generative AI as a technology that would disrupt their businesses. For that reason, they were all looking for ways to use it to their advantage and stay ahead of the competition. Everyone is learning and looking for new ways to apply the technology, even those who have been in the AI business for a while. Companies experienced in AI usually come from the fields of ML (Machine Learning) and NLP (Natural Language Processing). Fortunately, the new methods are fairly straightforward to apply if you are skilled in traditional programming and DevOps. Building your own model is much more difficult and resource-intensive, but for business applications of generative AI, you don't need to dig into the nitty-gritty details of the technology.

GPT stigma

Companies interested in building a solution based on generative AI are often afraid of the GPT stigma: using the most popular model feels cheap and not innovative. At a glance, this looks reasonable, but frankly speaking, solutions based on GPT models and supported by well-thought-out data pipelines and proper grounding are often very effective and can be built quickly. Fine-tuning, RAG (Retrieval-Augmented Generation), and other techniques can further improve the results GPT models provide.
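To make the grounding idea concrete, here is a minimal sketch of the retrieval half of RAG: rank a handful of documents against the question and inline the best matches into the prompt before it ever reaches a model. The documents, the word-overlap scoring, and the prompt template are illustrative assumptions, not a production pipeline - real systems typically use embedding-based retrieval.

```python
import re


def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))


def retrieve(question, documents, k=2):
    """Rank documents by naive word overlap with the question."""
    q = tokens(question)
    scored = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:k]


def build_grounded_prompt(question, documents):
    """Inline the retrieved context so the model answers from it."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )


docs = [
    "Our refund policy allows returns within 30 days.",
    "Shipping to the EU takes 3-5 business days.",
    "Support is available on weekdays from 9 to 17 CET.",
]
prompt = build_grounded_prompt("What is the refund policy?", docs)
```

The resulting prompt carries the relevant document verbatim, which is what keeps the model's answer anchored to your data rather than its training corpus.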

In my eyes, alternatives should be considered if:

  • the price tag is too high,
  • performance is not sufficient,
  • legal issues arise.

Copilots - the way to go?

The last thought I'd like to share is a bit futuristic and touches on the area of copilots. By copilots I mean software agents that work alongside humans and support them in their area of expertise. The idea is not new and has been around for a while. The most popular example is GitHub Copilot - an AI pair programmer - but it is quite easy to envision other copilots. For example:

  • legal copilot supporting lawyers in writing contracts,
  • medical copilot supporting doctors in diagnosing patients,
  • financial copilot supporting families in saving,
  • you name the rest.

With the current state of the technology, it is hard to imagine a copilot that could work independently. The most likely scenario is that copilots will support humans in their work, while the final decision will remain with the first pilot - the human.

Copilots are also a great way of modularising AI solutions. Instead of building one big model that does everything, we can create smaller models that work on specific, specialized tasks yet provide results jointly. This approach is more flexible, easier to maintain, and may yield great results.
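The modular "many small copilots" idea can be sketched as a simple router that dispatches each request to a specialized agent instead of one monolithic model. The agent names and keyword-based routing below are illustrative assumptions; a real system might route with a classifier or an LLM.

```python
# Each copilot is a small, specialized component with one job.
def legal_copilot(request):
    return f"[legal] draft clause for: {request}"


def medical_copilot(request):
    return f"[medical] differential notes for: {request}"


def financial_copilot(request):
    return f"[financial] savings plan for: {request}"


ROUTES = {
    "contract": legal_copilot,
    "diagnosis": medical_copilot,
    "budget": financial_copilot,
}


def route(request):
    """Dispatch to the first specialized copilot whose keyword matches;
    the human pilot still reviews every suggestion."""
    for keyword, copilot in ROUTES.items():
        if keyword in request.lower():
            return copilot(request)
    return f"[general] no specialist matched: {request}"
```

Because each copilot is independent, one can be retrained, swapped, or retired without touching the others - which is exactly the maintainability argument above.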


I enjoyed the conference and believe there was enough quality content to please everyone. I'm already looking forward to the next edition. If you have comments or want to discuss the topic, do not hesitate to leave a comment below or contact me directly.