
Ever wonder how ChatGPT is able to remember its conversation with you?
Or how TikTok knows you’ll watch 47 videos of raccoons stealing food?
The sorcery behind this madness is Linear Algebra.
Linear Algebra is like the duct tape of Machine Learning — everything sticks together because of it.
Every fancy AI model you’ve heard about—ChatGPT, image recognition, recommendation systems—runs on the quiet power of Linear Algebra.
Let's see why Machine Learning would be crying itself to sleep without it.

🤔 What Is Linear Algebra?
In terms of Machine Learning, think of Linear Algebra as the language of data.
Basically, Linear Algebra is what allows machine learning models to read, understand, and manipulate the data it's given.
Linear Algebra is the branch of math that deals with vectors (arrows of numbers) and matrices (grids of numbers), and how they transform, combine, and interact.
If regular algebra is about solving for x, linear algebra is about organizing and moving entire armies of x’s at once. Kind of makes Linear Algebra sound cool, eh?
So, why does this matter in ML?
Well, ML is basically a giant data-wrangling operation. All your data, whether it's text, images, or videos, ultimately becomes numbers stored in those vectors and matrices I mentioned above.
Linear Algebra then allows us to:
Transform and rotate that data (kind of like squishing millions of messy variables into a set of key features).
Calculate distances, similarities, and relationships (see the quick example after this list).
Power the matrix multiplications inside neural networks (which is how models like ChatGPT actually “think”).
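Here's a quick taste of that second bullet, as a minimal NumPy sketch (the two vectors are made up): cosine similarity measures whether two vectors point in a similar direction.

```python
import numpy as np

# Cosine similarity: do two data points "point" in a similar direction?
a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])

cos_sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_sim)  # 0.96 -- very similar direction
```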
Basically, Linear Algebra is what makes your ML models react like this:

When you finally get that all your data is just a bunch of numbers in a fancy spreadsheet.

➡️ Vectors
So what exactly is a vector?
Simply put, a vector is a list of numbers (e.g., [3, 4]) that represents something with both magnitude (size) and direction.
Vectors can also represent a point in space or a specific piece of data.
Think of it like an arrow pointing somewhere in space, but instead of "go north 3 miles," it might be [3, 4] (which means go 3 steps in the x-direction and 4 steps in the y-direction).
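If you want to see that magnitude idea in code, here's a quick sanity check in NumPy (good old Pythagoras hiding inside np.linalg.norm):

```python
import numpy as np

v = np.array([3, 4])

# Magnitude (length) of the arrow: sqrt(3^2 + 4^2) = 5
print(np.linalg.norm(v))  # 5.0
```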
They can be written as a Column Vector:
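For example, a column vector with 3 entries might look like this (the numbers are just for illustration):

$$\mathbf{v} = \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}$$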

Column Vector. 3 numbers means this vector could represent a point in 3D space.
They can also be written as a Row Vector:
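For example, with 4 illustrative entries:

$$\mathbf{w} = \begin{bmatrix} 3 & 4 & 5 & 6 \end{bmatrix}$$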

Row Vector. 4 numbers means a point in 4D space. Whatever the hell that's supposed to look like.
In the context of machine learning, vectors are everywhere. It's how machines “see” the data. For example:
An image can be turned into a giant vector of pixel values.
A sentence can be turned into a vector of numbers.
Even you (yes, YOU, reading this sentence) can be represented as a vector through something like:
Awesome Gradient Descent Reader = [height, weight, age, shoe size, number of Netflix binges, number of exes blocked]
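In code, you're just an array. Here's a toy NumPy sketch (your stats are entirely made up, no offense):

```python
import numpy as np

# [height (cm), weight (kg), age, shoe size, Netflix binges, exes blocked]
# -- all values hypothetical, obviously
reader = np.array([175, 70, 29, 10, 42, 3])

print(reader.shape)  # (6,) -> one reader, six features
```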
Why It Matters In ML
Think of a vector as a container of features — each dimension holds a different piece of information about something.
Vectors are how data gets into ML’s brain. They’re the language ML speaks.

🔢 Matrices
A matrix is a rectangular grid of numbers, symbols, or expressions arranged in rows and columns. Think of a Google Spreadsheet!
Here’s what a simple matrix looks like:
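$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$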

Behold: Sudoku’s lazy cousin, the Matrix of 1–9.
In the context of machine learning, a matrix is usually your dataset. Each row represents a single data point (a vector), and each column represents a feature.
For example, here’s a small dataset with 3 houses:
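If you want to see it in code, here's a minimal NumPy version (the numbers are invented for illustration):

```python
import numpy as np

# Rows = houses (data points), columns = features:
# [square footage, bedrooms, bathrooms] -- values are illustrative
houses = np.array([
    [1500, 3, 2],
    [2100, 4, 3],
    [ 900, 2, 1],
])

print(houses.shape)  # (3, 3) -> 3 houses, 3 features each
```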

In this matrix, each row is a house (each house being represented by a row vector in this case), and the columns represent features like square footage, number of bedrooms, and number of bathrooms.
Why It Matters In ML:
Representing Data
The most fundamental application of matrices is to represent data. Almost all data is converted into a matrix format for a model to process.
Matrix Operations
This is a big one! Matrix operations are the "verbs" that allow machine learning algorithms to work. A critical one is matrix multiplication, which is the core of how a neural network processes information.
Matrix Operations are important enough to have an entire post or two dedicated to them, so I'll cover them in more detail in the future. For now, consider this an introduction.
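As a tiny teaser of what's coming, here's one neural-network-style matrix multiplication in NumPy (the weights are made-up numbers, not from any real model):

```python
import numpy as np

# One "layer" of a neural network is, at its core, a matrix multiplication:
# inputs (1 x 3) @ weights (3 x 2) -> outputs (1 x 2)
x = np.array([[1.0, 2.0, 3.0]])   # one data point with 3 features
W = np.array([[0.1, 0.4],
              [0.2, 0.5],
              [0.3, 0.6]])        # illustrative weights

print(x @ W)  # [[1.4 3.2]]
```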

🪄 Linear Transformations
Now this is where things get really cool.
We’ve got vectors (our data points) and matrices (our number grids). But what do matrices actually do?
Well, matrices put vectors through something called linear transformations.
Linear Transformations are just math’s way of saying: “rotate, stretch, shrink, or flip this thing until it looks cooler.”
The magic is that every linear transformation can be represented as a matrix.
Think of it like this:
A vector is you on Day 1 at the gym.
A matrix is the workout plan.
A linear transformation is the before-and-after photo — suddenly you’ve got bigger biceps, a six pack, and pecs that can crush a can of soda (thanks to our matrix).
Linear transformations let ML models actually see patterns by reshaping messy data into something usable.
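To make that concrete, here's a minimal NumPy sketch of one classic linear transformation, a rotation (the vector and angle are chosen just for illustration):

```python
import numpy as np

# Rotation by 90 degrees is a linear transformation, and like all of them,
# it can be represented by a matrix.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, 4.0])  # our Day 1 gym-goer
print(R @ v)              # ~[-4.  3.] -- same vector, rotated 90 degrees
```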

🌟 Conclusion
At the end of the day, Linear Algebra is the love language of Machine Learning — every algorithm whispers it sweet nothings.
Next time, we’ll talk calculus, the overachiever of the family.
P.S. I'm writing a book on Linear Algebra! If you want to be the first to hear about my Linear Algebra e-book, reply (or comment) ‘Linear Algebra’.

