What is AI?

Kamil · 3 min read

Everybody knows that AI stands for “Artificial Intelligence.” But most people don’t really know what’s behind it. And what exactly do we call “AI” anyway? Why are chat, image editing, video generation, and voice cloning all lumped under the same term?

Well, simply put, at some point the term “AI” basically replaced the term “neural networks.” Let’s break that down a bit.

Neural networks, deep learning, and all that

Neural networks are just a family of machine-learning algorithms; stacking them into many layers is what's usually called deep learning. What makes them different from classic machine learning? Classic ML requires humans to hand-engineer the rules — you tell the algorithm which features to look for. Neural networks, on the other hand, figure out the rules themselves by training on massive amounts of data.
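To make that contrast concrete, here's a toy sketch in pure Python (not how real systems are built — a real network has billions of weights, not three). The spam rule and the AND-learning neuron are both made up for illustration:

```python
# Classic ML mindset: a human hand-writes the rule.
def is_spam_by_rule(text):
    # Hand-engineered feature: the word "free" signals spam.
    return "free" in text.lower()

# Neural-network mindset: the rule is learned from examples.
# Here, a single artificial neuron learns the logical AND function.
def train_neuron(examples, epochs=50, lr=0.1):
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            out = 1.0 if w1 * x1 + w2 * x2 + bias > 0 else 0.0
            error = target - out
            # Nudge the weights toward the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

examples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # AND truth table
w1, w2, bias = train_neuron(examples)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

print([predict(x1, x2) for x1, x2, _ in examples])  # → [0, 0, 0, 1]
```

Notice that nobody told the neuron what AND means — it found weights that reproduce the answers purely from the examples. That's the whole shift in a nutshell.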

ChatGPT is an LLM (Large Language Model) — just another neural network, built on an architecture called the Transformer. Don’t be scared — it’s just a cool name for a type of neural network, and it has nothing in common with Megatron and Optimus Prime.

The original paper on Transformers was published back in 2017, but the first mainstream chatbot built on them went public only in November 2022 (ChatGPT, from OpenAI).

Midjourney, which can produce mind-blowing, ultra-realistic images, is based on another neural-network technique called a diffusion model.

So if neural networks have been around since the 80s — why is everyone losing their minds about AI only now? Two words: scale and data. Turns out, when you feed these algorithms insane amounts of text, images, and video — and throw enough computing power at them — something clicks. The results went from “meh” to “okay this is actually wild” pretty fast.

Algorithm + Data + Compute = the AI revolution

So why is everyone scared?

So if AI is just neural networks, and neural networks are just another family of mathematical algorithms that has been around for decades — why is everyone suddenly talking about it, and why are people scared?

I think there are two main reasons:

1. New chatbots are freakishly “smart.”

ChatGPT and other LLMs aren’t just more of the dumb chatbots people were used to seeing on websites across the internet. They seem genuinely intelligent — capable of reasoning, analysis, and holding a conversation on virtually any topic, like a knowledgeable adult. The gap between new LLM-based chatbots and the old ones built on storing huge databases of question-answer pairs is staggering.

And it can be frightening if you don’t understand that the model is, technically, just predicting the next word (token) based on your input. Like, you type “The sky is” and the model predicts “blue”. Then “blue and” → “cloudy”. One word at a time, billions of times, very fast.

How an LLM predicts the next word — one token at a time
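The prediction loop itself is simple enough to sketch in a few lines of Python. A real LLM scores every possible next token using billions of learned parameters; here a tiny hand-made lookup table (entirely made up for illustration) stands in for all of that:

```python
# Toy next-word predictor. This table is the stand-in for an LLM's
# billions of learned parameters — everything in it is hypothetical.
NEXT_WORD = {
    "the": "sky",
    "sky": "is",
    "is": "blue",
    "blue": "and",
    "and": "cloudy",
}

def generate(prompt, steps=4):
    words = prompt.lower().split()
    for _ in range(steps):
        last = words[-1]
        if last not in NEXT_WORD:
            break  # no prediction available — stop generating
        # Predict one word, append it, and feed the result back in.
        words.append(NEXT_WORD[last])
    return " ".join(words)

print(generate("The sky is"))  # → "the sky is blue and cloudy"
```

The loop is the same in spirit as the real thing: predict one token, append it to the input, repeat. The magic is entirely in how good the predictions are, not in the loop.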

The algorithm is conceptually simple — but that core misunderstanding, combined with the chat’s remarkable capabilities, is what scares people.

2. Future uncertainty.

Not understanding how the technology works, combined with the scary associations the term “AI” carries, creates real anxiety. Will AI replace me at my job? Are robot uprisings actually possible now? What will the future look like for my kids? These are pretty common questions.

Look, AI is going to affect every layer of society — no doubt. Probably more than the internet did. And yes, some jobs will change, some will disappear. That’s real.

So — what are you going to do?

Here’s the thing though — people who figure out how to use these tools are going to have a serious edge. Doesn’t matter if you’re a lawyer, designer, teacher, or run a small business. The question isn’t really “will AI affect me” — it’s “how do I make it work for me.”

Ignoring it or hoping it goes away is probably not the move. At this point, that’s a bit like refusing to use the internet in 2005.

This blog is here to help with exactly that. Let’s figure it out together.
