A Simple Explanation of Gini Impurity

What Gini Impurity is (with examples) and how it's used to train Decision Trees.

If you look at the documentation for the DecisionTreeClassifier class in scikit-learn, you’ll see something like this for the criterion parameter:

Screenshot: the scikit-learn documentation for the criterion parameter

The RandomForestClassifier documentation says the same thing. Both mention that the default criterion is “gini” for the Gini Impurity. What is that?!
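For context, here's what that looks like in code. Since `"gini"` is the default value of `criterion`, spelling it out is optional:

```python
from sklearn.tree import DecisionTreeClassifier

# These two classifiers are equivalent: "gini" is the default criterion.
clf = DecisionTreeClassifier(criterion="gini")
clf = DecisionTreeClassifier()
```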

TLDR: Read the Recap.

Decision Trees 🌲

Training a decision tree consists of iteratively splitting the current data into two branches. Say we had the following datapoints:

The Dataset

Right now, we have 1 branch with 5 blues and 5 greens.

Let’s make a split at $x = 2$:

A Perfect Split

This is a perfect split! It breaks our dataset perfectly into two branches:

  • Left branch, with 5 blues.
  • Right branch, with 5 greens.

What if we’d made a split at $x = 1.5$ instead?

An Imperfect Split

This imperfect split breaks our dataset into these branches:

  • Left branch, with 4 blues.
  • Right branch, with 1 blue and 5 greens.

It’s obvious that this split is worse, but how can we quantify that?

Being able to measure the quality of a split becomes even more important if we add a third class, reds. Imagine the following split:

  • Branch 1, with 3 blues, 1 green, and 1 red.
  • Branch 2, with 3 greens and 1 red.

Compare that against this split:

  • Branch 1, with 3 blues, 1 green, and 2 reds.
  • Branch 2, with 3 greens.

Which split is better? It’s no longer immediately obvious. We need a way to quantitatively evaluate how good a split is. (We’ll come back to this comparison once we have one.)

Gini Impurity

This is where the Gini Impurity metric comes in.

Suppose we

  1. Randomly pick a datapoint in our dataset, then
  2. Randomly classify it according to the class distribution in the dataset. For our dataset, we’d classify it as blue $\frac{5}{10}$ of the time and as green $\frac{5}{10}$ of the time, since we have 5 datapoints of each color.

What’s the probability we classify the datapoint incorrectly? The answer to that question is the Gini Impurity.
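This two-step experiment is easy to simulate. Here's a minimal sketch (the `dataset` literal and `estimate_gini` helper are my own names, just for illustration):

```python
import random

# The toy dataset from above: 5 blues and 5 greens.
dataset = ["blue"] * 5 + ["green"] * 5

def estimate_gini(labels, trials=100_000):
    """Estimate Gini Impurity by running the two-step experiment many times."""
    wrong = 0
    for _ in range(trials):
        picked = random.choice(labels)      # 1. randomly pick a datapoint
        classified = random.choice(labels)  # 2. classify it per the class distribution
        if classified != picked:
            wrong += 1
    return wrong / trials

print(estimate_gini(dataset))  # ≈ 0.5 for this dataset
```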

Example 1: The Whole Dataset

Let’s calculate the Gini Impurity of our entire dataset. If we randomly pick a datapoint, it’s either blue (50%) or green (50%).

Now, we randomly classify our datapoint according to the class distribution. Since we have 5 of each color, we classify it as blue 50% of the time and as green 50% of the time.

What’s the probability we classify our datapoint incorrectly?

| Event | Probability |
| --- | --- |
| Pick Blue, Classify Blue | 25% |
| Pick Blue, Classify Green ❌ | 25% |
| Pick Green, Classify Blue ❌ | 25% |
| Pick Green, Classify Green | 25% |

We only classify it incorrectly in 2 of the events above. Thus, our total probability is 25% + 25% = 50%, so the Gini Impurity is $\boxed{0.5}$.

The Formula

If we have $C$ total classes and $p(i)$ is the probability of picking a datapoint with class $i$, then the Gini Impurity is calculated as

$$G = \sum_{i=1}^C p(i) * (1 - p(i))$$

For the example above, we have $C = 2$ and $p(1) = p(2) = 0.5$, so

$$\begin{aligned} G &= p(1) * (1 - p(1)) + p(2) * (1 - p(2)) \\ &= 0.5 * (1 - 0.5) + 0.5 * (1 - 0.5) \\ &= \boxed{0.5} \end{aligned}$$

which matches what we calculated!
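The formula is one line of code. Here's a minimal sketch (the function name `gini` is my own):

```python
def gini(probs):
    """Gini Impurity: the sum over all classes of p(i) * (1 - p(i))."""
    return sum(p * (1 - p) for p in probs)

print(gini([0.5, 0.5]))  # 0.5, matching the calculation above
```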

Example 2: A Perfect Split

Let’s go back to the perfect split we had. What are the Gini Impurities of the two branches after the split?

A Perfect Split

Left Branch has only blues, so its Gini Impurity is

$$G_{left} = 1 * (1 - 1) + 0 * (1 - 0) = \boxed{0}$$

Right Branch has only greens, so its Gini Impurity is

$$G_{right} = 0 * (1 - 0) + 1 * (1 - 1) = \boxed{0}$$

Both branches have $0$ impurity! The perfect split turned a dataset with $0.5$ impurity into 2 branches with $0$ impurity.

A Gini Impurity of 0 is the lowest and best possible impurity. It can only be achieved when everything is the same class (e.g. only blues or only greens).
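We can sanity-check that edge case with the `gini` sketch from earlier:

```python
def gini(probs):  # same sketch as above
    return sum(p * (1 - p) for p in probs)

print(gini([1.0, 0.0]))  # 0.0: a single-class branch is perfectly pure
```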

Example 3: An Imperfect Split

Finally, let’s return to our imperfect split.

An Imperfect Split

Left Branch has only blues, so we know that $G_{left} = \boxed{0}$.

Right Branch has 1 blue and 5 greens, so

$$\begin{aligned} G_{right} &= \frac{1}{6} * \left(1 - \frac{1}{6}\right) + \frac{5}{6} * \left(1 - \frac{5}{6}\right) \\ &= \frac{5}{18} \\ &= \boxed{0.278} \end{aligned}$$
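Plugging the right branch's class probabilities into the same `gini` sketch confirms this:

```python
def gini(probs):  # same sketch as above
    return sum(p * (1 - p) for p in probs)

print(gini([1/6, 5/6]))  # 0.2777..., i.e. 5/18
```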

Picking The Best Split

It’s finally time to answer the question we posed earlier: how can we quantitatively evaluate the quality of a split?

Here’s the imperfect split yet again:

An Imperfect Split

We’ve already calculated the Gini Impurities for:

  • Before the split (the entire dataset): $0.5$
  • Left Branch: $0$
  • Right Branch: $0.278$

We’ll determine the quality of the split by weighting the impurity of each branch by how many elements it has. Since Left Branch has 4 elements and Right Branch has 6, we get:

$$(0.4 * 0) + (0.6 * 0.278) = 0.167$$

Thus, the amount of impurity we’ve “removed” with this split is

$$0.5 - 0.167 = \boxed{0.333}$$

I’ll call this value the Gini Gain. This is what’s used to pick the best split in a decision tree! Higher Gini Gain = Better Split. For example, it’s easy to verify that the Gini Gain of the perfect split on our dataset is $0.5 > 0.333$.
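Here's that computation as a sketch, working from raw per-class counts (`gini_from_counts` and `gini_gain` are my own helper names). As a bonus, it also settles the three-class comparison from earlier:

```python
def gini_from_counts(counts):
    """Gini Impurity of a branch, given per-class datapoint counts."""
    n = sum(counts)
    return sum((c / n) * (1 - c / n) for c in counts)

def gini_gain(parent_counts, branch_counts):
    """Parent impurity minus the size-weighted impurity of the branches."""
    n = sum(parent_counts)
    weighted = sum(sum(b) / n * gini_from_counts(b) for b in branch_counts)
    return gini_from_counts(parent_counts) - weighted

# Counts are [blues, greens]:
print(gini_gain([5, 5], [[4, 0], [1, 5]]))  # ≈ 0.333, the imperfect split
print(gini_gain([5, 5], [[5, 0], [0, 5]]))  # 0.5, the perfect split

# The two three-class splits from earlier (counts are [blues, greens, reds]):
print(gini_gain([3, 4, 2], [[3, 1, 1], [0, 3, 1]]))  # ≈ 0.164
print(gini_gain([3, 4, 2], [[3, 1, 2], [0, 3, 0]]))  # ≈ 0.235
```

So by Gini Gain, the second three-class split (the one that isolates the 3 greens into a pure branch) is the better of the two.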

Recap

Gini Impurity is the probability of incorrectly classifying a randomly chosen element in the dataset if it were randomly labeled according to the class distribution in the dataset. It’s calculated as

$$G = \sum_{i=1}^C p(i) * (1 - p(i))$$

where $C$ is the number of classes and $p(i)$ is the probability of randomly picking an element of class $i$.

When training a decision tree, the best split is chosen by maximizing the Gini Gain, which is calculated by subtracting the weighted impurities of the branches from the original impurity.
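Putting it all together, here's a small end-to-end sketch that scans candidate thresholds on a 1-D version of the toy dataset and keeps the split with the highest Gini Gain. The exact coordinates are assumptions (made up to match the figures); only the 5-blue/5-green layout matters:

```python
# Hypothetical 1-D coordinates, chosen so x = 2 is the perfect split
# and x = 1.5 is the imperfect one from the figures.
xs     = [0.3, 0.7, 1.1, 1.4, 1.8, 2.2, 2.6, 3.0, 3.4, 3.8]
labels = ["blue"] * 5 + ["green"] * 5

def gini(labels):
    """Gini Impurity of a list of class labels."""
    n = len(labels)
    return sum((labels.count(c) / n) * (1 - labels.count(c) / n)
               for c in set(labels))

def gini_gain(threshold):
    """Gini Gain of splitting the dataset at x = threshold."""
    left  = [l for x, l in zip(xs, labels) if x < threshold]
    right = [l for x, l in zip(xs, labels) if x >= threshold]
    n = len(labels)
    weighted = len(left) / n * gini(left) + len(right) / n * gini(right)
    return gini(labels) - weighted

# Candidate thresholds: midpoints between adjacent x values.
candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
best = max(candidates, key=gini_gain)
print(best, gini_gain(best))  # 2.0 0.5 — the perfect split wins
```

A real decision tree implementation does essentially this at every node, across every feature.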

Want to learn more? Check out my explanation of Information Gain, a similar metric to Gini Gain, or my guide Random Forests for Complete Beginners.
