
Intuition behind Cross Entropy In Logistic Regression

Logistic regression helps us find how the probability of an outcome changes with the input features. It involves fitting a "best fit" S-curve to the data.

P(yᵢ) = 1 / (1 + e^(-(A + Bxᵢ)))

Where,
y: hit or miss? (0 or 1 for a binary classifier)
x: input features
P(y): the probability that y = 1, obtained when we apply logistic regression to our x data
A: the intercept
B: the regression coefficient
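A quick sketch of this formula in Python (using NumPy; the values of x, A, and B below are made up purely for illustration, not fitted to any real data):

```python
import numpy as np

def logistic_probability(x, A, B):
    # P(y = 1) for input x, given intercept A and regression coefficient B
    return 1.0 / (1.0 + np.exp(-(A + B * x)))

# Illustrative inputs only
x = np.array([-2.0, 0.0, 2.0])
print(logistic_probability(x, A=0.5, B=1.2))  # probabilities rise along the S-curve
```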


Cross Entropy: Loss Function

Cross entropy is the loss function we minimize to improve the model parameters and obtain a robust model. In Linear Regression, the cost function we use to find the best fit line is Mean Squared Error (which measures how far the line is from the actual points), while in Logistic Regression the cost function we use is Cross Entropy.

The output of our classification model is a probability score, and cross entropy measures how well the estimated probabilities of our model match the actual labels of the original categories to which the data points belong.
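As a concrete sketch, binary cross entropy can be written out in NumPy as follows; the label and probability arrays are invented examples, and the small eps constant is an assumption added to keep log(0) out of the computation:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Average cross entropy between actual labels and predicted probabilities
    y_pred = np.clip(y_pred, eps, 1 - eps)  # keep log() finite
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
print(binary_cross_entropy(y_true, np.array([0.9, 0.1, 0.8, 0.7])))  # low: in sync
print(binary_cross_entropy(y_true, np.array([0.2, 0.9, 0.3, 0.1])))  # high: out of sync
```

The two calls preview the next two sections: probabilities that agree with the actual labels give a low loss, while probabilities that contradict them give a high loss.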

Low Cross Entropy

It means that the probability distributions of the actual Y and the predicted Y are in sync.

High Cross Entropy

It is the opposite: the distributions of the actual Y and the predicted Y are not in sync. So, when we are building a classifier, we try to minimize the cross entropy, because we want the actual Y and the predicted Y to be in sync. In other words, when we apply logistic regression, we want to find the best S-curve that fits our underlying data, the one that minimizes the cross entropy between our actual labels and the labels predicted by our model.
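As a minimal sketch of this in practice (assuming scikit-learn is available), LogisticRegression fits exactly such an S-curve by minimizing cross entropy, reported here via log_loss; the tiny one-dimensional dataset below is made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Toy data: y tends toward 1 as x grows
x = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float).reshape(-1, 1)
y = np.array([0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression()  # fitted by minimizing cross entropy (log loss)
model.fit(x, y)

probs = model.predict_proba(x)[:, 1]  # P(y = 1) along the fitted S-curve
print("intercept A:", model.intercept_[0])
print("coefficient B:", model.coef_[0][0])
print("cross entropy:", log_loss(y, probs))
```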
