Posts

Showing posts with the label ML

Pandas Profiling -- A Unique Way to Do Data Analysis

Pandas Profiling is an open-source Python library. It focuses on easing the process of initial data analysis by providing a tool to explore our data quickly and easily. It is also considered a major EDA library, creating visuals, graphs, and data-profiling reports within seconds, in just a line of code. It saves a lot of time that is usually lost in visualizing and understanding the data. It extends the pandas DataFrame to create a report for quick and easy data analysis.
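As a minimal sketch of that "one line of code" (the tiny dataset here is made up for illustration; note that pandas-profiling is published as `ydata-profiling` in recent versions, so the snippet falls back to pandas' built-in summary if the library isn't installed):

```python
import pandas as pd

# a tiny illustrative dataset (hypothetical values)
df = pd.DataFrame({
    "age": [25, 32, None, 41],
    "city": ["Pune", "Delhi", "Pune", None],
})

try:
    # pandas-profiling is now published as ydata-profiling
    from ydata_profiling import ProfileReport
    # one line generates a full HTML profiling report
    ProfileReport(df, title="Quick EDA Report").to_file("report.html")
except ImportError:
    # fallback: pandas' own quick summary if the library isn't installed
    print(df.describe(include="all"))
```

Opening `report.html` in a browser shows per-column statistics, missing-value counts, correlations, and distributions, all from that single call.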

EDA -- Exploratory Data Analysis

EDA -- Exploratory Data Analysis -- is the technique of defining, analyzing, and investigating a dataset. It is used by most data scientists and engineers, and by everyone who works with or wants to analyze data. In fact, it includes the majority of us: at any point in time we are dealing with data, and we unknowingly perform an initial analysis of it, which in technical terms is referred to as "Exploratory Data Analysis". Here is a formal definition of EDA: in statistics, exploratory data analysis is an approach to analyzing data sets to summarize their main characteristics, often using statistical graphics and other data visualization methods. Still confused about how every one of us uses this process? Let me explain it with a simple example. Suppose you and your group plan for lunch at a restaurant... as soon as we hear "lunch" and "restaurant", our mind starts creating a list of all the known places; next, as someon
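A first EDA pass over a small table can be sketched in plain pandas (the dataset and column names here are hypothetical):

```python
import pandas as pd

# a made-up dataset of restaurant bills
df = pd.DataFrame({
    "restaurant": ["A", "B", "A", "C", "B", "A"],
    "bill": [250, 400, 300, None, 500, 275],
})

# summarize the main characteristics of the dataset
print(df.describe(include="all"))        # central tendency, spread, counts
print(df.isnull().sum())                 # where data is missing
print(df["restaurant"].value_counts())   # category frequencies
```

Even these three calls answer the basic EDA questions: what the typical values look like, what is missing, and how the categories are distributed.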

One-Click Data Visualization

What is Data Visualization? Data visualization, as the name suggests, is creating clear, attractive, and informative visuals from our data, which helps us get more insight from it. It also helps anyone who reads our analysis or report to understand it better. Creating good visualizations helps us understand the data better and supports our machine learning journey. The data visualization process uses various graphs, graphics, and plots to explain the data and extract insights. Data visualization is important for simplifying complex data, making it more accessible, understandable, and usable for its end users. If you want to know more about data visualization, you can Read IT Here .
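A minimal visualization sketch with matplotlib (assuming it is installed; the sales numbers are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display, e.g. on a server
import matplotlib.pyplot as plt

# hypothetical monthly sales data
months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 150, 90, 180]

fig, ax = plt.subplots()
ax.bar(months, sales)           # a simple bar chart of the data
ax.set_title("Monthly Sales")   # a title makes the chart self-explanatory
ax.set_ylabel("Units sold")     # labeled axes leave nothing to guess
fig.savefig("sales.png")
```

The title and axis labels are what make the chart readable to a third person, which is exactly the point made above.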

Anaconda -- How to Install in 5 Steps on Windows

An easy-to-follow guide for installing Anaconda on Windows 10.

1. Prerequisites

Hardware Requirements
* RAM: min. 8 GB; if you have an SSD in your system, 4 GB of RAM would also work.
* CPU: min. quad-core, with at least 1.80 GHz.

Operating System
* Windows 8 or later

System Architecture
* Windows 64-bit x86, 32-bit x86

Space
* Minimum 5 GB of disk space to download and install Anaconda

We need to download Anaconda from HERE . On opening the link we are greeted by the download page. Now click on "Get Started" to continue. The next step is to click on "Download Installer" to proceed. Select the correct version based on your system's architecture; I will be using the 64-bit installer (477 MB). Your download should start now; it will take some time. Let's catch up in the 2nd section (Unzip and Install).
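To pick the right installer for your system's architecture, a couple of lines of standard-library Python (for example, from an existing Python prompt) will tell you whether you are on a 32-bit or 64-bit system:

```python
import platform
import struct

# the pointer size reveals whether this Python/OS is 32- or 64-bit
bits = struct.calcsize("P") * 8
print(f"{platform.system()} {platform.machine()}, {bits}-bit")
```

A 64-bit result means you should download the 64-bit installer mentioned above.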

Defining, Analyzing, and Implementing Imputation Techniques

What is Imputation? Imputation is a technique for replacing missing data with some substitute value so as to retain most of the data/information in the dataset. These techniques are used because removing data from the dataset every time is not feasible and can reduce the size of the dataset to a large extent, which not only raises concerns about biasing the dataset but also leads to incorrect analysis. Fig 1: Imputation. Not sure what missing data is, how it occurs, and what its types are? Have a look HERE to know more about it. Let's understand the concept of imputation from Fig 1 above. In the image, I have tried to represent the missing data in the left table (marked in red); by using imputation techniques we have filled the missing values in the right table (marked in yellow), without reducing the actual size of the dataset. If we notice, we have increased the column count, which is possible in imputation (adding a "Missing" category imputation)
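A minimal sketch of the two ideas above in pandas (the column names and values are hypothetical): mean imputation for a numeric column, and the "Missing" category imputation for a categorical one:

```python
import pandas as pd

# a small dataset with gaps in both a numeric and a categorical column
df = pd.DataFrame({
    "salary": [30000, None, 45000, 52000],
    "dept": ["HR", "IT", None, "IT"],
})

# numeric column: replace missing values with the column mean
df["salary"] = df["salary"].fillna(df["salary"].mean())

# categorical column: add an explicit "Missing" category instead of dropping rows
df["dept"] = df["dept"].fillna("Missing")

print(df)  # same number of rows as before; no missing values remain
```

Note that no rows were dropped, which is exactly the point of imputation: the dataset keeps its original size while the gaps are filled.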