
7 Things That Always Surprise People in Our Intro to AI Course

Lysanne van Beek

November 13, 2025
7 minutes

Every time I teach our Introduction to Generative AI, there’s at least one moment when the room goes quiet, someone frowns, and then says, “Wait… what?”

After twenty sessions, I've noticed the same seven surprises keep coming back, and they reveal a lot about how people think about AI. Which ones did you already know about?

The field of AI is older than you might think

During introductions, someone almost always says they joined because “AI has really come out of nowhere in the last couple of years.” It does feel that way. But AI is much older than you think, and ChatGPT is far from the only type of AI out there.


Artificial Intelligence (AI) is “the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.”

The research field was founded in 1956, not long after the first electronic computers appeared. Back then, computers were mainly used for heavy calculations and for following fixed sets of rules. But researchers soon started wondering: could a computer not just calculate, but also learn or reason?

It turned out to be harder than they hoped, but from the 1980s onward, progress picked up. This is when machine learning took off: instead of following fixed rules, a model learns patterns from data and applies them to new situations. Suddenly, we could train computers to predict the weather, estimate house prices, or even play chess.
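To make that concrete, here's a toy sketch of the idea, assuming the scikit-learn library and entirely made-up prices: nobody writes down pricing rules, the model infers the pattern from the examples it's given.

```python
# Toy sketch: "learning" house prices from examples instead of fixed rules.
# The numbers are invented purely for illustration.
from sklearn.linear_model import LinearRegression

areas = [[50], [70], [90], [120], [150]]   # floor area in m²
prices = [200, 260, 320, 410, 500]         # price in thousands of euros

model = LinearRegression()
model.fit(areas, prices)                   # learn the pattern from the data

# Apply the learned pattern to a future scenario: a 100 m² house.
print(model.predict([[100]]))
```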

Over the past two decades, things have accelerated rapidly, leading to Generative AI (GenAI). This newest branch of machine learning can generate brand-new content such as text, music, or images. For most people, GenAI seemed to appear out of nowhere when ChatGPT launched, but it actually builds on more than sixty years of research into language models, work that really started taking off in the mid-2010s.

AI has been around for almost seventy years, and who knows what the next exciting breakthrough will be.

You’re already using AI every day (even if you don’t notice)

Once people realise AI didn't just appear overnight, I ask them to think about where they already use it in daily life. After a little thought, the list grows fast.


Netflix recommendations. Apps that recognise birds from their songs or photos. Lane assist in cars. Supermarkets predicting how much bread to bake each day. Smart thermostats learning your schedule.

It quickly becomes clear that AI has already entered our everyday routines, quietly, helpfully, and often invisibly.

You’re surrounded by AI, even when you’re not aware of it.

ChatGPT never gives the same answer twice

In one of the training exercises, I ask participants to try a short prompt in ChatGPT and compare results with their neighbour. To their surprise, the answers don’t match, even when they try again themselves. Why does that happen?


All answers are different, just like these apples.

Tools like ChatGPT, Microsoft Copilot and Google Gemini all run on Large Language Models (LLMs). These models are incredibly good at predicting what the next logical word should be based on the input they receive.

However, the model doesn’t always pick the single most likely next word. A bit of randomness is built in to make the results more natural and creative. Without it, every response would sound robotic and identical.

That small element of chance means that even when you give the exact same input, you’ll get slightly different output each time.
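For the technically curious, here's a tiny Python sketch of that idea, using invented probabilities rather than anything from a real model: the next word is drawn from a probability distribution instead of always taking the top pick, so repeated runs can differ.

```python
import random

# Invented next-word probabilities a model might assign after "The sky is".
next_word_probs = {"blue": 0.70, "clear": 0.15, "grey": 0.10, "falling": 0.05}

def sample_next_word(probs, temperature=1.0):
    """Sample the next word instead of always picking the most likely one."""
    # Temperature reshapes the distribution: low values favour the top word,
    # higher values give less likely words more of a chance.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# The "same prompt" five times: the built-in randomness means the outputs
# can differ, which is why two neighbours rarely get identical answers.
for _ in range(5):
    print(sample_next_word(next_word_probs, temperature=0.8))
```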

Every conversation is unique, and that’s exactly the point.

You shouldn’t believe everything ChatGPT says

LLMs are fantastic storytellers. The problem is that their stories aren’t always true.

If you’ve ever seen ChatGPT make something up that sounded perfectly plausible but wasn’t, you’ve witnessed what’s known as a hallucination: when a model confidently generates information that simply isn’t correct.


When I once asked ChatGPT what it knew about me, it told me I was a poet who worked at the University of Amsterdam. Funny, but not true at all.

One of my students had a more serious experience. She inherited a house in Kazakhstan and asked ChatGPT for tax advice. Everything it said sounded convincing, but turned out to be completely wrong, and it ended up costing her money.

People are usually stunned when they hear this. The text looks polished and professional, so it feels trustworthy. But remember: the model only cares about predicting the next logical word, not whether what it says is factually correct.

That’s why it’s so important to stay critical. LLMs don’t have a built-in fact checker.

AI can sound confident even when it’s completely wrong.

ChatGPT is good with words but bad with numbers

Ever tried to get ChatGPT to do a calculation and gotten an odd result? Once you understand how LLMs work, it makes sense.

Under the hood, they’re not actually calculating anything. They’re guessing what a correct-looking answer might be based on patterns they’ve seen before.


This surprises people, because we associate computers with being good at math. But LLMs are language models, not calculator models.

With extra training, they've become better at basic arithmetic. The newest models break numbers down into place-value parts, like 300 + 10 + 7 for 317, and have many small multiplications memorised, so they can piece together an answer. In some cases, the model will hand the problem off to an external calculator tool instead. Things are improving, but for anything complex, your trusty calculator still wins.
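Here's a rough Python illustration of that place-value trick, with invented numbers; it's a sketch of the idea, not how any particular model is implemented.

```python
# Break a number into place-value parts, e.g. 317 -> [300, 10, 7], then piece
# a multiplication together from small, "memorisable" partial products.
def place_value_parts(n):
    digits = str(n)
    return [int(d) * 10 ** (len(digits) - i - 1)
            for i, d in enumerate(digits) if d != "0"]

print(place_value_parts(317))       # [300, 10, 7]

a, b = 23, 317
partials = [x * y for x in place_value_parts(a) for y in place_value_parts(b)]
print(sum(partials), "vs", a * b)   # 7291 vs 7291
```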

If it’s about words, ask ChatGPT. If it’s about numbers, check twice.

Meta-prompting is the cheat code for better AI results

In the course, we cover fifteen tips to write better prompts. My personal favourite, and one I use constantly, is meta-prompting.

It’s simple: instead of writing the perfect prompt yourself, you ask the AI to help you do it.


For example, if I ask ChatGPT for “20 varied recipes,” it might give me lots of meat dishes, even though I’m vegetarian. I forgot to include that context.

Context is the background the model needs to give a useful answer, but figuring out what to include can take effort.

With meta-prompting, you can ask the model to help:
“Before generating an answer, ask me any questions you need to clarify the task.”

It might ask what meals I usually make, what “more varied” means to me, and what equipment I have in my kitchen. Then I just answer the questions and the AI builds a much better prompt for me.
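If you prefer working in code to typing in the chat window, the same trick looks something like the sketch below. It assumes the OpenAI Python SDK and an API key in your environment; the model name is only an example, and the idea works just as well in ChatGPT itself.

```python
# Minimal meta-prompting sketch using the OpenAI Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment; the model name is an example.
from openai import OpenAI

client = OpenAI()

META_INSTRUCTION = (
    "Before generating an answer, ask me any questions "
    "you need to clarify the task."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": META_INSTRUCTION},
        {"role": "user", "content": "Give me 20 varied recipes."},
    ],
)

# Instead of 20 recipes, the first reply is typically a set of clarifying
# questions (dietary preferences, cuisine, kitchen equipment, ...).
print(response.choices[0].message.content)
```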

You can also use meta-prompting to refine prompts you’ve already written. For example:

  • Here’s my draft prompt. How can I make it more specific or effective?
  • Help me write a clear, structured prompt for [task].

It’s a small trick that makes a big difference in the quality of your results.

Meta-prompting is your shortcut to better, more personalised AI output.

There are some amazing tools out there that you haven’t heard of yet

Besides text generation, GenAI can do much more, and this part of the course always gets the biggest reactions.

We do a short “tool speed-date” session, where participants try out a few lesser-known tools and share what they’ve created.



Most people already know about generating text, images or videos. But how about making entire songs from just a few keywords? Tools like Suno can do that in thirty seconds. My students have made songs about their dinner, their company, even Pokémon, often with ridiculously catchy results.

Another favourite is NotebookLM by Google. It lets you upload documents and instantly turn them into a mind map, a quiz, or even a podcast where two AI voices discuss the material you uploaded. It’s like having a personal study buddy that can read your notes and talk them through with you. Perfect for people who’d rather listen than read.

The joke that everyone has a podcast these days might actually come true.

Curious? Join the course and see what surprises you

Generative AI is full of unexpected lessons that are best discovered by trying it yourself.

You can explore these and more in our Introduction to Generative AI course. Join Xebia and learn how to use, understand, and innovate with AI responsibly and creatively.

Ready to see what AI can really do? Sign up now for Xebia’s Introduction to Generative AI course.

