Previously, in our first episode of Supervised Learning’s Got Talent, we were introduced to our debut performer, Linear Regression.

Linear Regression is the impressively correct oversimplifier. Today, however, we take a look at Mr. Linear’s quirky cousin. (Yes, they’re relatives!)

Introducing, Logistic Regression!

Logistic Regression: The Probability Bouncer. He either lets you in or turns you away.

This guy’s specialty isn’t drawing lines like Linear Regression; his real talent is making decisions.

Let’s get into it.

📈 Logistic Regression - “The Probability Bouncer”

Now despite having “Regression” in its name (it’s a bit of a mathematical inside joke, really), this algorithm is actually a master of classification.

Specifically, he’s good at deciding Yes or No on something. Gotta appreciate that type of straightforwardness.

Logistic Regression isn’t trying to predict a continuous number like Linear Regression does. Instead, it’s the ultimate answerer of yes/no (0/1) questions.

Can you think of someone you know who’s good at making decisions? Perhaps they’re good at answering questions like:

  • “Will this email be spam?”

  • “Should I post this selfie?”

  • “Is The Gradient Descent awesome?” (hint: YES).

Well, if you could magically transform that person into an ML algorithm, they’d turn into Logistic Regression.

🤖 The Sigmoid Function

Logistic Regression works its magic by drawing an S-shaped curve instead of a straight line. This is called the Sigmoid Function.

For example, imagine we’re trying to decide if someone is on a diet based on how many donuts they eat per day.

When that person eats fewer donuts, the chance of a diet is high. If they eat lots of donuts, that chance drops precipitously.

The sigmoid curve for that decision would look something like this:

[Figure: Donuts vs Discipline: An Epic Battle]

Notice how those purple x’s sit at either 0 or 1? That’s basically saying yes, this person is on a diet and deserves a pat on the back (0), or no, this person’s not on a diet and deserves a pat on the stomach (1).

The smooth orange S-shaped curve is the sigmoid curve. See how it starts to curve upwards as more donuts are eaten? That’s basically showing the increasing probability that this person is not dieting.
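If you’d like to see this in action, here’s a minimal sketch using scikit-learn. The donut counts and labels below are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: donuts eaten per day, and whether the person is dieting.
# Label 0 = on a diet, 1 = not on a diet (matching the plot above).
donuts = np.array([[0], [1], [1], [2], [3], [4], [5], [6], [7], [8]])
not_dieting = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

model = LogisticRegression()
model.fit(donuts, not_dieting)

# predict_proba returns [P(class 0), P(class 1)] for each input.
print(model.predict_proba([[2]]))  # a 2-donut day: probably still dieting
print(model.predict_proba([[7]]))  # a 7-donut day: probably not
```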

Threshold

So how does Logistic Regression actually make the decision to say “yes” or “no” to something? It uses the Threshold.

The threshold is the cutoff point we set to decide when a probability flips from “No” to “Yes”.

By default (as you can see in the graph above), the threshold is usually 0.5:

  • If the model’s prediction is greater than or equal to 0.5, we say “Yes”.

  • If the model’s prediction is less than 0.5, we say “No”.

This is also where the ‘S’ shape comes from: the sigmoid squeezes every prediction into the 0-to-1 range, so the curve flattens out near 0 and 1 and rises steeply in the middle.

The cool part? You can change the threshold depending on your problem. For example, you may want to lower the threshold to catch more spam emails, or for medical tests, you might raise it to be extra sure before saying someone has a disease.
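Here’s how that threshold tweak might look in code, reusing the invented donut data from above. `predict_proba` hands you the raw probabilities, so you can apply whatever cutoff suits your problem:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Same invented donut data as before (0 = on a diet, 1 = not on a diet).
donuts = np.array([[0], [1], [1], [2], [3], [4], [5], [6], [7], [8]])
not_dieting = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
model = LogisticRegression().fit(donuts, not_dieting)

# Probability of class 1 ("not dieting") for each row.
probs = model.predict_proba(donuts)[:, 1]

print((probs >= 0.5).astype(int))  # the default 0.5 cutoff
print((probs >= 0.8).astype(int))  # stricter: only say "not dieting" when very sure
```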

🧠 The Math of the Sigmoid Function

With a name like “Sigmoid Function”, of course there has to be math involved, right? Don’t worry, the equation is friendlier than it looks. Let’s break it down:

σ(x) = 1 / (1 + e^(−x))

The Sigmoid: squishing inputs into a nice “maybe”!

σ(x) (sigma of x): This is the output of the function, and it will always be a value between 0 and 1. This is your probability!

e: This is Euler's number, an important mathematical constant, approximately 2.71828. It's the base of the natural logarithm.

−x: This is the exponent on e. It means you take your input value x and flip its sign.

👉 When x is big, σ(x) gets close to 1 (a strong “YES”).
👉 When x is small (or negative), σ(x) drops close to 0 (a strong “NO”).
👉 When x is around zero, σ(x) hovers near 50% (it could go either way).

Think of the sigmoid like a soft yes/no switch - not a hard cutoff. It smoothly shifts from “probably no” to “probably yes” as the input changes.
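If you want to convince yourself of those three behaviors, here’s a tiny sigmoid you can run yourself, just plain NumPy:

```python
import numpy as np

def sigmoid(x):
    """Squash any real-valued input into the (0, 1) range."""
    return 1 / (1 + np.exp(-x))

print(sigmoid(6))   # ~0.998: a strong "YES"
print(sigmoid(-6))  # ~0.002: a strong "NO"
print(sigmoid(0))   # exactly 0.5: could go either way
```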

Conclusion

And there you have it — Logistic Regression’s big audition! It showed up and flexed its probability skills like a pro. Not bad for a math model, huh?

But don’t pack up your popcorn just yet. In the next episode, we have another contestant stepping onto our Supervised Learning’s Got Talent stage: Decision Trees.

Stay tuned, because things are about to branch out.
