Posts

Showing posts with the label data imputation

Multivariate Imputation by Chained Equations

We have already studied many techniques for Missing Data Imputation. Most of them are, or can be, used in a final production-ready model. But when it comes to imputing values, there is always room to do better, because we can never be sure that the imputed values are correct. To improve the imputation we use Multiple Imputation, i.e. predicting the missing values in more than one way and then averaging (or otherwise combining) the results to obtain the most suitable value. We have already seen a technique built on similar logic, KNN Imputation, which uses the K-Nearest Neighbours algorithm to find the best suitable value. Such techniques are better known as "Multivariate Imputation". Now we would like to introduce you to a newer and better technique, which has become a principal technique for Missing Data Imputation, known as MICE (Multivariate Imputation by Chained Equations). Multivariate Imputation by Chained Equa…
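As a minimal sketch of the chained-equations idea (with hypothetical toy values, not a real dataset), scikit-learn's `IterativeImputer` implements a MICE-style imputer: each feature with missing values is modelled as a function of the other features, cycling round-robin until the estimates stabilise.

```python
import numpy as np
# IterativeImputer is still experimental; this enabling import is required.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data with missing entries (np.nan); the columns are correlated,
# which is exactly what chained equations exploit.
X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [np.nan, 8.0]])

# Each feature with missing values is regressed on the other features,
# cycling for up to max_iter rounds.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
print(X_filled)  # no NaNs remain
```

Because the imputer is fitted once, the same trained object can later fill holes in new data with `imputer.transform(...)`, which is what makes it usable inside a production pipeline.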

Multiple Imputation

Imputation seems to be a simple term: "replacing missing data". We have also learned quite a few techniques to perform such imputation in a few lines of code. So let me now ask you a question. Do you think that in practical scenarios, where we have very sensitive information like medical data, imputing missing values from some random data would suffice? Would it impact the end analysis? Before reading ahead, think about this question and try to answer it for yourself. Coming to the answer: there is a high probability that we bias the dataset with a static-value imputation. Imputation is never a simple job; it takes a lot of time and expertise to impute correct values, and even then you cannot be sure how your end model will perform or whether you imputed the right values. Thus, there was a need for a technique that could generate several plausible values and impute the best one. So far, all the imputation techniques we saw…

KNN Imputation

Talking about multivariate imputation, one technique that is very common and familiar to every data scientist is KNN Impute. Though KNN Impute might be a new term, KNN itself is not, and is familiar to everyone in this field. Even if it is new to you, don't worry: we have defined it for you in the next section. Let's define KNN and make it familiar to the new aspirants.
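As a small sketch with made-up numbers, scikit-learn's `KNNImputer` fills each missing entry with the average of that feature over the nearest rows, where distance is computed on the features both rows have observed:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy matrix: NaN marks the missing entries.
X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

# Each missing value is replaced by the mean of that feature taken
# over the n_neighbors rows closest in the jointly-observed features.
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled)
```

The key design choice is `n_neighbors`: a small value keeps the imputation local (sensitive to noise), while a large value smooths it towards the column mean.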

Imputation Using SimpleImputer

Welcome back, friends..!!! Till now we have seen quite a few techniques that we can use for imputing missing values in a dataset. We have studied the theory behind them and seen basic code for applying them. But, as mentioned, there are other libraries we can use to implement these techniques. So we are going to study one such library, scikit-learn (sklearn), for imputation. Let's begin, get our hands dirty, and learn this library. *Please note: the theory is already covered in previous articles, so we will move directly to using the library for each method. For demo purposes, we will be using the COVID-19 cases dataset.
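As a minimal sketch of the sklearn workflow (using a hypothetical two-column frame in place of the article's COVID-19 dataset), `SimpleImputer` learns a statistic per column with `fit` and fills the holes with `transform`:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical stand-in for the COVID-19 cases dataset.
df = pd.DataFrame({"cases":  [100.0, np.nan, 250.0, 400.0],
                   "deaths": [2.0,   5.0,   np.nan, 9.0]})

# strategy can be "mean", "median", "most_frequent", or "constant".
imputer = SimpleImputer(strategy="median")
filled = imputer.fit_transform(df)  # returns a NumPy array
print(filled)
```

Here the median of the observed `cases` values (100, 250, 400) is 250, so that is what replaces the NaN in that column; `deaths` gets its own median, 5.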

Missing Indicator Imputation

Welcome back, friends..!! We are back with another imputation technique, one that is a bit different from the previous techniques we have studied so far, and that serves an important role we have knowingly or unknowingly been skipping throughout. We studied many techniques: Mean/Median, Arbitrary Value, CCA, Missing Category, End of Tail, and Random Sample imputation. All of these were good enough to impute the missing values, but most of them failed to mark/flag the observations whose values were imputed. Thus we bring in the Missing Indicator technique, designed with the sole purpose of marking or denoting the observations that had a missing value. This technique is mostly used together with one of the previously described imputation techniques. In simple terms, in this technique we use another column/variable to maintain a flag (a binary value, 0/1 or true/false), mos…
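As a small sketch of pairing an indicator with a regular imputation (toy values, assumed here), `SimpleImputer(add_indicator=True)` appends one binary flag column per feature that had missing values, so the model can still see *where* the imputation happened:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan],
              [np.nan, 3.0],
              [5.0, 6.0]])

# add_indicator=True appends, after the imputed features, one 0/1
# column per feature that contained missing values.
imputer = SimpleImputer(strategy="mean", add_indicator=True)
X_out = imputer.fit_transform(X)
print(X_out)  # 2 imputed columns + 2 indicator columns
```

Row 1's first entry is filled with the column mean of the observed values (1 and 5, so 3.0), and the third column flags that this row was imputed.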

Missing Category Imputation

Till now, we have seen imputation techniques that apply only to numerical variables, and said nothing about categorical variables/columns. So now we are going to discuss a technique that is mostly used for imputing categorical variables. Missing Category Imputation is the technique in which we add an additional category, "Missing", for the missing values in the variable/column. In simple terms, we do not take on the load of predicting or calculating a value (as we did for Mean/Median or End of Tail Imputation); we simply put "Missing" as the value. Now a doubt may arise: if we are only replacing the value with "Missing", why is it said that this method can be used for categorical variables only? Here is the answer: we can use it for numerical variables too, but since we can't introduce a categorical value into a numerical variable/column, we would be required to introduce some numerical value that is unique for the va…
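The technique really is this simple; as a one-line sketch with hypothetical city names, pandas' `fillna` turns missingness itself into a category:

```python
import numpy as np
import pandas as pd

# Hypothetical categorical column with missing entries.
df = pd.DataFrame({"city": ["Delhi", np.nan, "Mumbai", np.nan]})

# Treat the fact of being missing as its own category.
df["city"] = df["city"].fillna("Missing")
print(df["city"].tolist())
```

An advantage of this approach is that it preserves the information that the value was absent, which a mean- or mode-style replacement silently throws away.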

End of Tail Imputation

End of Tail Imputation is another important imputation technique. It was developed as an enhancement to, and to overcome the problems of, the Arbitrary Value Imputation technique. In Arbitrary Value Imputation the biggest problem was choosing the arbitrary value to impute for a variable; this made it hard for the user, and with a large dataset it became even more difficult to pick an arbitrary value every time, for every variable/column. So, to overcome this, End of Tail Imputation was introduced, where the value is selected automatically from the variable's distribution. Now the questions are: how do we select the value, and how do we impute the end value? There are simple rules for selecting it, given below. If the variable is normally distributed, we can use the mean plus/minus 3 times the standard deviation. If the variable is skewed,…
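As a minimal sketch of the normal-distribution rule (with made-up values), the replacement value is computed as mean + 3 × standard deviation, which lands past the far end of the observed tail:

```python
import numpy as np
import pandas as pd

# Hypothetical, roughly normal variable with gaps.
s = pd.Series([4.0, 5.0, 6.0, np.nan, 5.5, np.nan, 4.5])

# End-of-tail value for a normal-ish distribution:
# mean plus 3 times the (sample) standard deviation.
end_of_tail = s.mean() + 3 * s.std()
s_filled = s.fillna(end_of_tail)
print(end_of_tail, s_filled.tolist())
```

Because the imputed value sits beyond the observed range, downstream models can treat the imputed rows as clearly atypical rather than confusing them with ordinary observations.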

Imputation Techniques

Welcome to a series of articles about imputation techniques. We will be publishing short articles (quick notes) about the various imputation techniques: their advantages, disadvantages, when to use them, and the coding involved. Not sure what Imputation is? Or what Missing Data is? Or why they are important? Click on the links to learn more about them.

1. Mean or Median Imputation
2. End of Tail Imputation
3. Missing Category Imputation
4. Random Sample Imputation
5. Missing Indicator Imputation
6. Mode Imputation
7. Arbitrary Value Imputation
8. Complete Case Analysis (CCA)

Python libraries used for quick and easy imputation:

9. SimpleImputer
10. Feature Engine
11. Multi-Variate Imputation

Mean or Median Imputation

To understand Mean or Median Imputation, we first need to revise the concepts of mean and median. Then it will be easy to see why this is such a widely used imputation method, and to identify its issues. We already studied Missing Data and defined Imputation and its basics in previous articles. What is the mean? The mean is simply the arithmetic average of a set of numbers, which is why it is also referred to as the average or arithmetic average. Finding the average/mean is quite simple: we add all the given values, keeping their signs (+/-), and then divide the total sum by the number of observations.

                   Sum of all the observations
   Average/Mean = -----------------------------
                      No. of observations
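As a minimal sketch with toy numbers, the whole technique is one `fillna` call: compute the mean (or median) of the observed values and substitute it for every gap.

```python
import numpy as np
import pandas as pd

s = pd.Series([10.0, np.nan, 30.0, 40.0, np.nan])

# Mean of the observed values (10, 30, 40) is 80/3 ≈ 26.67;
# the median of those same values is 30.
mean_filled = s.fillna(s.mean())
median_filled = s.fillna(s.median())
print(mean_filled.tolist())
print(median_filled.tolist())
```

The median variant is usually preferred when the variable is skewed or has outliers, since the mean gets pulled towards the extreme values while the median does not.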

Missing Data -- Understanding The Concepts

Introduction. Machine Learning seems to be a big, fascinating term that attracts a lot of people towards it, and knowing all that we can achieve through it makes our sci-fi imagination jump to another level. No doubt it is a great field: we can build everything from automated reply systems to house-cleaning robots, from recommending a movie or a product to helping detect disease. Most of the things we see today have already started using ML to better themselves. Though building a model is quite easy, the most challenging task is preprocessing the data and filtering out the data of use. So here I am going to address one of the biggest and most common issues we face at the start of the journey of making a good ML model: Missing Data. Missing Data can cause many issues and can lead to wrong predictions from our model, making it look as though our model has failed and we must start over again. If I have to explain it in simple terms, data is like the fuel of our mo…