Entropy


Entropy is rarely mentioned when discussing Machine Learning. However, the concept forms the basis of many of the Machine Learning algorithms that we see today.

In terms of a formal definition, the entropy of a random variable $X$ is defined by

$$H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)$$

But there are better ways to grasp the concept of Entropy.

Definition 1:

The amount of information gained on receiving a sample from a distribution. This information is measured in bits (if the log is taken to base 2).

Fair coin with two equally probable outcomes : $H = 1$ bit

Throwing $M$ fair coins : $H = M$ bits

Fair die with $M$ faces : $H = \log_2 M$ bits

Event with probability distribution $\{0.5, 0.25, 0.125, 0.125\}$ : $H = 0.5 \times 1 + 0.25 \times 2 + 0.125 \times 3 + 0.125 \times 3 = 1.75$ bits

For the last example, the entropy of 1.75 implies that each sample we receive from this distribution gives us, on average, 1.75 bits of information.
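As a quick sanity check, here is a minimal Python sketch (my own addition, not from the original post) that computes the entropy of these example distributions directly from the formula above:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))                 # fair coin          -> 1.0 bit
print(entropy([1 / 8] * 8))                # 3 fair coins       -> 3.0 bits
print(entropy([1 / 6] * 6))                # fair 6-sided die   -> ~2.585 bits (log2 6)
print(entropy([0.5, 0.25, 0.125, 0.125]))  # the last example   -> 1.75 bits
```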

Another interesting way of looking at entropy is as the average number of bits required to convey an event from this distribution to whoever is receiving the information.

We start by assigning each event a specific code, the codes being 0, 10, 110 and 111 for the four events respectively. The average code length of a message would then be $0.5 \times 1 + 0.25 \times 2 + 0.125 \times 3 + 0.125 \times 3$, which equals $1.75$. Therefore, on average, 1.75 bits are sent each time an event from this variable is communicated.

This is the same value that we got from the entropy formula. It turns out that the entropy of a distribution is the minimum number of bits required, on average, to communicate samples of the variable. Therefore the encoding used above, where the event with probability 0.5 is assigned the code 0, the event with probability 0.25 is assigned the code 10 and so on, is the most efficient encoding possible, as its average code length equals the entropy of the distribution. No other encoding can achieve an average code length below 1.75 bits.
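To see the connection concretely, here is a short sketch (again my own, not part of the original article) that computes the average code length of the 0 / 10 / 110 / 111 encoding and compares it with the entropy of the distribution:

```python
import math

# Probabilities of the four events and the prefix codes assigned to them
probs = [0.5, 0.25, 0.125, 0.125]
codes = ["0", "10", "110", "111"]

avg_code_length = sum(p * len(code) for p, code in zip(probs, codes))
entropy = -sum(p * math.log2(p) for p in probs)

print(avg_code_length)  # 1.75 bits per event
print(entropy)          # 1.75 bits -> this encoding achieves the entropy, so it is optimal
```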

Take another example where an event has two outcomes, A and B, with probabilities of 1 and 0 respectively. In this case, the entropy of the distribution turns out to be 0. This implies that with every new sample from the distribution, we receive no new information. This makes sense, as we know for sure that every sample will be the outcome A.

NOTE: One thing to notice here is that the example taken above is an ideal case where the probabilities are all powers of 2 ($1/2$, $1/4$, $1/8$ and so on), and hence the calculated entropy and the average code length are exactly the same. In cases with arbitrary probabilities, the average code length would not be equal to the entropy of the distribution (it would actually be greater).

Let's take an example of a variable with a distribution of $\{0.3, 0.7\}$.

The code lengths for the two events, as given by the entropy equation ($\log_2(1/p)$ for an event with probability $p$), are:

Event A with probability 0.3 : $\log_2(1/0.3) \approx 1.737$ bits

Event B with probability 0.7 : $\log_2(1/0.7) \approx 0.515$ bits

The entropy of the distribution : $0.3 \times 1.737 + 0.7 \times 0.515 \approx 0.881$ bits

The encoding system that we have used till now (Huffman encoding) does not handle fractional bits well, so we end up using a single bit for each of the two events. To understand how we can still approach the average code length suggested by the entropy of the distribution, check the section on Fractional Bits in this blog. The basic idea is to increase the number of events encoded by a single code. Right now we have one code for event A and one for event B; instead, we can assign codes to pairs of events, i.e. codes for AA, AB, BA and BB. The section in the above-mentioned blog shows how this approach reduces the average code length per event from the current 1 bit to approximately 0.9 bits. In theory, if we keep increasing the number of events encoded by a single code, we can get arbitrarily close to the minimum average length given by the entropy ($\approx 0.881$ bits).
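To make the block-coding idea concrete, below is a rough sketch (my own, assuming the $\{0.3, 0.7\}$ distribution above) that builds a Huffman code over pairs of events and reports the average code length per event; it is only meant to illustrate the trend towards the entropy limit:

```python
import heapq
import math
from itertools import product

def huffman_lengths(probs):
    """Huffman coding: return {symbol: code length in bits} for a {symbol: probability} dict."""
    heap = [(prob, i, {sym: 0}) for i, (sym, prob) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, lengths1 = heapq.heappop(heap)
        p2, _, lengths2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper
        merged = {sym: depth + 1 for sym, depth in {**lengths1, **lengths2}.items()}
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

p = {"A": 0.3, "B": 0.7}

# Coding single events: both A and B get 1-bit codes -> 1.0 bit per event
singles = huffman_lengths(p)
print(sum(p[s] * l for s, l in singles.items()))            # 1.0

# Coding pairs of events (AA, AB, BA, BB) -> ~0.905 bits per event
pairs = {a + b: p[a] * p[b] for a, b in product(p, repeat=2)}
pair_lengths = huffman_lengths(pairs)
print(sum(pairs[s] * l for s, l in pair_lengths.items()) / 2)

# The entropy is the limit as we encode longer and longer blocks
print(-sum(q * math.log2(q) for q in p.values()))           # ~0.881 bits per event
```

Encoding triples, quadruples and so on pushes the average length closer and closer to the entropy.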

Definition 2:

Entropy can also be seen as a measure of the uncertainty in a variable. It reflects how ‘pure’ or ‘homogeneous’ a distribution is.

As seen in the case of the fair coin, a uniform distribution has an entropy of 1 bit, which is the maximum achievable for a distribution with two outcomes. The same holds for a distribution with any number of outcomes $n$: the uniform distribution has the highest entropy, namely $\log_2 n$ bits. This agrees with the second definition of entropy, since a uniform distribution carries the highest uncertainty.

The following distributions and their entropies would make this definition a bit more clear.

[Figure: Distributions 1–4 and their entropies]

As can be seen from the above examples, the farther a distribution is from the uniform distribution, the lower the uncertainty and hence the lower the entropy.
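The exact distributions shown in the figure are not reproduced here, but an illustrative sketch of my own with a few two-outcome distributions shows the same pattern, from maximum uncertainty down to none:

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# From uniform (most uncertain) to completely certain
for dist in ([0.5, 0.5], [0.7, 0.3], [0.9, 0.1], [1.0, 0.0]):
    print(dist, round(entropy(dist), 4))

# [0.5, 0.5] 1.0     <- uniform: maximum entropy
# [0.7, 0.3] 0.8813
# [0.9, 0.1] 0.469
# [1.0, 0.0] 0.0     <- no uncertainty: zero entropy
```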

Information Gain

Information Gain is commonly used in decision trees as a measure for determining the relevance of a particular variable. In simple terms, it refers to the gain in information, or the reduction in entropy, obtained when one variable is conditioned on another: $IG(Y, X) = H(Y) - H(Y \mid X)$.

The following example on the splitting of a node in a decision tree would better illustrate the application of Information Gain.

[Figure: Distribution 5]

Let's consider this distribution, which consists of two target classes, denoted by circles and squares. The task of the decision tree is to differentiate between the two target classes using the given variables. In this case, the variables provided are the color of the samples and the presence/absence of a border around the samples.

We therefore evaluate the classifying ability of these two variables by estimating the Information Gain of each.

Splitting on Border

In the above case, the distribution is split based on the presence/absence of a border in the samples. The resulting split gives two child nodes with entropies of 0.7642 and 0.8113. The information gain is then the entropy of the parent node minus the weighted average of the child node entropies, where each child is weighted by the fraction of samples it receives.

Therefore the information gained or the overall reduction in entropy of the distribution by splitting on the border variable is 0.2117 bits.

Splitting on Color

The distribution is now split based on the color of the samples. The resulting split gives two child nodes with entropies of 0.9911 and 1.0, and the information gain is calculated in the same way.

The information gained or the overall reduction in entropy of the distribution by splitting on the color variable is 0.0022 bits.
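The exact sample counts come from the figures and are not quoted in the text, so the sketch below (my own) uses hypothetical counts chosen to be consistent with the child entropies above: a parent node of 9 squares and 8 circles, split 7/2 vs 2/6 on border and 5/4 vs 4/4 on color. Under that assumption it reproduces the color figure exactly and the border figure up to rounding (≈ 0.211 vs the quoted 0.2117):

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a list of class counts, e.g. [7, 2]."""
    total = sum(counts)
    return sum(-c / total * math.log2(c / total) for c in counts if c > 0)

def information_gain(parent, children):
    """IG = H(parent) - weighted average of the child entropies."""
    total = sum(sum(child) for child in children)
    weighted = sum(sum(child) / total * entropy(child) for child in children)
    return entropy(parent) - weighted

# Hypothetical counts [squares, circles], consistent with the entropies quoted above
parent = [9, 8]
border_split = [[7, 2], [2, 6]]   # child entropies ~0.7642 and ~0.8113
color_split = [[5, 4], [4, 4]]    # child entropies ~0.9911 and 1.0

print(information_gain(parent, border_split))  # ~0.211  -> large reduction in entropy
print(information_gain(parent, color_split))   # ~0.0022 -> almost no reduction
```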

Since the border variable results in a higher information gain, it is the better candidate to split the node on. This result can easily be verified visually. In the child nodes created by splitting on the border variable, the left child has a strong concentration of squares while the right child has a strong concentration of circles, which shows that the variable differentiates the two classes well. In contrast, the child nodes created by splitting on color each contain a roughly equal mix of the two target classes, so color proves to be a poor differentiator.

This visual difference that we observe in the classification power of variables is quantified using Information Gain. The higher the ‘purity’ of the resulting child nodes, the higher the Information Gain of the splitting variable.