
An Introduction to Sampling and its Types


Sampling is a technique in which a predefined number of observations is taken from a large population for the purpose of statistical analysis and research.

There are two types of sampling techniques:-

Random Sampling

Random sampling is a sampling technique in which each observation has an equal probability of being chosen. This kind of sample should be an unbiased representation of the population.

Types of random sampling

  1. Simple Random Sampling:- A technique in which any observation can be chosen and each observation has an equal probability of being selected.
  2. Stratified Random Sampling:- In this technique we create sub-groups of the population with similar attributes and characteristics, and then include each sub-group in our sample, with every observation within a sub-group having an equal probability of being chosen.

  3. Systematic Sampling:- A technique where the first observation is selected randomly and thereafter every kth element is included in the sample.
    For k = 2, the first observation is selected randomly and after that every second element is included in the sample.

  4. Cluster Sampling:- A technique in which the data is grouped into small sub-groups called clusters, and then random observations are selected from those clusters to be included in the sample.

Here, two clusters are created, from which random observations are then chosen to form the sample.
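The random sampling schemes above can be sketched in a few lines of Python. This is a minimal illustration on a made-up population of 100 observations; the variable names and strata are invented for the example:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible
population = list(range(1, 101))  # a toy population of 100 observations

# Simple random sampling: every observation has an equal chance of selection
simple_sample = random.sample(population, 10)

# Systematic sampling: random start, then every k-th element (here k = 10)
k = 10
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified sampling: split the population into sub-groups (strata),
# then draw an equal-probability sample from within each stratum
strata = {"low": population[:50], "high": population[50:]}
stratified_sample = [obs for group in strata.values()
                     for obs in random.sample(group, 5)]

print(len(simple_sample), len(systematic_sample), len(stratified_sample))
```

Each scheme here returns a sample of size 10; the difference is purely in how the observations are picked.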

Non-Random Sampling:- A sampling technique in which an element of bias is introduced, meaning that an observation is selected for the sample on the basis of choice rather than probability.

Types of non-random sampling:-

  1. Convenience Sampling:- When a sample observation is drawn from the population based on how convenient it is for you to take the observation, it is called convenience sampling. For example, when you have a survey sheet that is to be filled in by students from all the departments of your college but you only ask your friends to fill it in.
  2. Judgment Sampling:- When the sample observation drawn from the population is based on your professional judgment or past experience, it is called judgment sampling.
  3. Quota Sampling:- When you draw a sample observation from the population based on some specific attribute, it is called quota sampling. For example, taking a sample of people aged 50 and above.
  4. Snowball Sampling:- When survey subjects are selected based on referral from other survey respondents, it is called snowball sampling.

Sampling and Non-sampling errors

Sampling error:- Occurs when the sample is not representative of the entire population. For example, a sample of 10 people with or without COVID-19 cannot tell whether or not the entire population of a country is COVID positive.

Non-sampling error:- This kind of error occurs during data collection. For example, if you record a name incorrectly during data collection, it is considered a non-sampling error.

So, with that, this discussion on sampling wraps up. Hopefully you have learned what sampling is, what its variations are and how they all work. If you need further clarification, check out our video tutorial on sampling attached below the blog. DexLab Analytics provides the best data science course in gurgaon; keep following the blog section to stay updated.



Hypothesis Testing: An Introduction


You must be familiar with the phrase hypothesis testing, but might not have a very clear notion of what it is all about. Basically, the term refers to testing a new theory against an old one. But you need to delve deeper to gain in-depth knowledge.

Hypotheses are tentative explanations of a principle operating in nature. Hypothesis testing is a statistical method which helps you prove or disprove a pre-existing theory.

Hypothesis testing can be done to check whether the average salary of all employees has increased based on the previous year's data, to check whether the percentage of business-class passengers increased due to the introduction of a new service, or to check for differences in the productivity of varied plots of land.

There are two key concepts in the testing of hypotheses:-

Null Hypothesis:- It means the old theory is correct, nothing new is happening, the system is in control, the old standard is correct, etc. This is the theory you want to check for truth. For example, if an ice-cream factory owner says that their ice-cream contains 90% milk, this can be written as H0: milk content = 90%.

Alternative Hypothesis:- It means the new theory is correct, something is happening, the system is out of control, there are new standards, etc. This is the theory you check against the null hypothesis. For example, if you say that the ice-cream does not contain 90% milk, this can be written as H1: milk content ≠ 90%.

Two-tailed, right-tailed and left-tailed tests

Two-tailed test:- When the alternative can take any value greater or less than 90% (H1 ≠ 90%), it is called a two-tailed test, i.e. you do not care whether the true value is more or less; all you want to know is whether it is equal to 90% or not.

Right-tailed test:- When your alternative hypothesis can take any value greater than 90% (H1 > 90%), it is called a right-tailed test.

Left-tailed test:- When your alternative hypothesis can take any value less than 90% (H1 < 90%), it is called a left-tailed test.

Type I error and Type II error

-> When we reject the null hypothesis when it is true, we commit a Type I error. The probability of committing a Type I error is called the significance level.

-> When we accept the null hypothesis when it is false, we commit a Type II error.

Steps involved in hypothesis testing

  1. Build a hypothesis.
  2. Collect data.
  3. Select a significance level, i.e. the probability of committing a Type I error.
  4. Select a testing method, i.e. testing of mean, proportion or variance.
  5. Based on the significance level, find the critical value, which is nothing but the value dividing the acceptance region from the rejection region.
  6. Based on the hypothesis, build a two-tailed or one-tailed (right or left) test graph.
  7. Apply the statistical formula.
  8. Check whether the test statistic falls in the acceptance region or the rejection region, and accordingly accept or reject the null hypothesis.

Example:- Suppose the average annual salary of the employees in a company in 2018 was 74,914. Now you want to check whether the average salary of the employees has increased in 2019. A sample of 112 employees was taken, and the average annual salary of the employees in 2019 was found to be 78,795, with σ = 14,530.

We will apply hypothesis testing of the mean with σ known, at a 5% significance level.

The test statistic of 2.75 falls beyond the critical value of 1.96, so we reject the null hypothesis, which basically means that the average salary has increased significantly in 2019 compared to 2018.
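As a cross-check, the test statistic can be computed in a few lines of Python. Reading the σ above as 14,530, i.e. in the same units as the salaries (an assumption about the blog's original notation), gives z ≈ 2.83, close to the quoted value; the small gap likely comes from the exact σ and rounding used. Either way the statistic clears the 5% critical value:

```python
import math

# Figures from the example above; sigma = 14,530 is an assumed reading,
# in the same units as the salaries.
mu_0 = 74914          # 2018 average annual salary (null hypothesis value)
x_bar = 78795         # 2019 sample mean
sigma = 14530         # population standard deviation (assumed reading)
n = 112               # sample size

# z = (sample mean - hypothesized mean) / (sigma / sqrt(n))
z = (x_bar - mu_0) / (sigma / math.sqrt(n))
print(round(z, 2))
```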

So, now that we have reached the end of the discussion, you must have grasped the fundamentals of hypothesis testing. Check out the video attached below for more information. You can find more informative posts on Data Science course topics on the Dexlab Analytics blog.



ANOVA PART I: The Introductory Guide to ANOVA


In this blog, we are going to discuss ANOVA, a statistical technique used for comparison.

The basic principle of ANOVA is to test for differences among the means of different samples. It examines the amount of variation within each sample and the amount of variation between the samples. ANOVA is important in all those situations where we want to compare more than two samples, as in comparing the yield of crop from several varieties of seed.

The essence of ANOVA is that the total amount of variation in a set of data is broken into two parts:-

  1. The amount that can be attributed to chance.
  2. The amount which can be attributed to specified cause.

One-way ANOVA

Under one-way ANOVA we compare the samples based on a single factor, for example the productivity of different varieties of seed.

The stepwise process involved in the calculation of one-way ANOVA is as follows:-

  1. Calculate the mean of each sample (X̄).
  2. Calculate the grand mean.
  3. Calculate the sum of squares between samples (SSB).
  4. Divide SSB by the degrees of freedom between the samples to obtain the mean square between samples (MSB).
  5. Calculate the variation within the samples, i.e. the sum of squares within (SSW).
  6. Divide SSW by the degrees of freedom within the samples to obtain the mean square within samples (MSW).
  7. Calculate the F-ratio.
  8. Last but not least, calculate the total variation in the given samples, i.e. the sum of squares for total variance.

Let's now solve a one-way ANOVA problem.

A, B and C are three different varieties of seed, and we need to check whether there is any variation in their productivity. We will use one-way ANOVA, as there is a single factor of comparison involved, i.e. the variety of seed.

The F-ratio is 1.53, which lies within the critical value of 4.26 (obtained from the F-distribution table).

Conclusion:- Since the F-ratio lies within the acceptance region, we can say that there is no difference in the productivity of the seeds, and the little variation we do see is caused by chance.
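The eight steps above can be reproduced in plain Python. The yield numbers below are illustrative (the blog's original seed table was shown as an image), so the resulting F-ratio differs from the 1.53 quoted; the mechanics are the same:

```python
# Hypothetical yields for three seed varieties; illustrative numbers only
samples = {
    "A": [20, 21, 23, 16, 20],
    "B": [18, 20, 17, 15, 25],
    "C": [25, 28, 22, 28, 32],
}

k = len(samples)                               # number of samples
n = sum(len(v) for v in samples.values())      # total observations

# Step 1: mean of each sample; Step 2: grand mean
means = {g: sum(v) / len(v) for g, v in samples.items()}
grand_mean = sum(x for v in samples.values() for x in v) / n

# Steps 3-4: sum of squares between samples, then mean square between
ssb = sum(len(v) * (means[g] - grand_mean) ** 2 for g, v in samples.items())
msb = ssb / (k - 1)

# Steps 5-6: sum of squares within samples, then mean square within
ssw = sum((x - means[g]) ** 2 for g, v in samples.items() for x in v)
msw = ssw / (n - k)

# Step 7: F-ratio; Step 8: total sum of squares
f_ratio = msb / msw
sst = sum((x - grand_mean) ** 2 for v in samples.values() for x in v)

print(round(f_ratio, 2))
```

A useful sanity check is that the total sum of squares always equals SSB + SSW, which is exactly the "total variation broken into two parts" idea the post opens with.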

Two-way ANOVA will be discussed in the next blog, so do come back for the update.

Hopefully you have found this blog informative; for more clarification watch the video attached below the blog. You can find more such posts on Data Science course topics, just keep on following the DexLab Analytics blog.



Introducing Automation: Learn to Automate Data Preparation with Python Libraries

Introducing automation

In this blog we are discussing automation: a function for automating data preparation using a mix of Python libraries. So let's start.

Problem statement

A dataset containing the following observations is given to you, in which the first row contains column headers and all other rows contain data. Some of the rows are faulty: a row is faulty if it contains at least one cell with a NULL value. You are supposed to delete all the faulty rows containing a NULL value.

In the table given below, the second row is faulty: it contains a NULL value in the salary column. The first row is never faulty, as it contains the column headers. In the data provided to you, every cell contains a single word, and each word may contain digits between 0 and 9 or lowercase and uppercase English letters. For example:

In the above example after removing the faulty row the table looks like this:

The order of the rows cannot be changed, but the number of rows and columns may differ across test cases.

The data after preparation must be saved in CSV format. Every two successive cells in a row are separated by a single comma ',' symbol, and every two successive rows are separated by a newline '\n' symbol. For example, the first table from the task statement saved in CSV format is the single string 'S. No., Name, Salary\n1,Niharika,50000\n2,Vivek,NULL\n3,Niraj,55000'. The only assumption in this task is that every row contains the same number of cells.

Write a Python function that converts the above string into the given format.

Write a function:

def Solution(s)

which, given a string S of length N, returns the table without the faulty rows in CSV format.

Given S=‘S. No., Name, Salary\n1,Niharika,50000\n2,Vivek,NULL\n3,Niraj,55000’

The table with data from string S looks as follows:

After removing the rows containing the NULL values the table should look like this:

You can try a number of strings to cross-validate the function you have created.

Let’s begin.

  • First we will store the string in a variable s.
  • Now we will start by declaring the function name and importing all the necessary libraries.
  • Creating a pattern to split the string on '\n'.
  • Creating a loop to create multiple lists within a list.

In the above code the list is converted to an array and then used to create a dataframe, which is stored as a CSV file in the default working directory.

  • Now we need to split the string to create multiple columns.

The above code creates a dataframe with multiple columns.

Now, after dropping the rows with NaN values, the data looks like this:

To reset the index we can now use the .reset_index() method.

  • Now, the problem with the dataframe created above is that the NULL values are in string format, so first we need to convert them into NaN values; only then will we be able to drop them. For that we will use the following code.

Now we can easily drop the NaN values using the .dropna() method.

In the above code we first dropped the NaN values, then used the first row of the dataset to create the column names and dropped that original row. We also made the first column the index.


Hence we have managed to create a function that gives us the above data. Once created, this function can be used to convert any string with a similar pattern into a dataframe.
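Putting the steps described above together, a full version of the function might look like this. The blog's own code appeared as images, so this is a reconstruction of the described approach rather than the original code:

```python
import numpy as np
import pandas as pd

def Solution(s):
    """Remove faulty rows (rows containing NULL) from the CSV-formatted string s."""
    # Split the string on '\n' to get rows, then on ',' to get cells
    rows = [row.split(",") for row in s.split("\n")]
    # The first row holds the column headers and is never faulty
    df = pd.DataFrame(rows[1:], columns=rows[0])
    # The NULLs arrive as the literal string "NULL"; convert them to real NaN
    # values first, since .dropna() only removes genuine missing values
    df = df.replace("NULL", np.nan).dropna().reset_index(drop=True)
    # Re-assemble the cleaned table as a CSV string
    return "\n".join([",".join(df.columns)] +
                     [",".join(r) for r in df.values.tolist()])

s = "S. No., Name, Salary\n1,Niharika,50000\n2,Vivek,NULL\n3,Niraj,55000"
print(Solution(s))
```

On the example string this drops the Vivek row (its salary cell is NULL) and returns the header plus the two clean rows, in their original order.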

Hopefully you found the discussion informative. For further clarification, watch the video attached below the blog. To access more informative blogs on Data science using python training related topics, keep following the Dexlab Analytics blog.

Here's a video introduction to Automation. You can check it out down below to develop a considerable understanding of the same:



How The Industries Are Being Impacted By Data Science?


The world has finally woken up and smelled the power of data science and now we are living in a world that is being driven by data. There is no denying the fact that new technologies are coming to the fore that are born out of data-driven insight and numerous sectors are also turning towards data science techniques and tools to increase their operational efficiency.

This in turn is pushing demand in various sectors for skilled people armed with a Data Science course or Retail Analytics Courses, who can sift through mountains of data to clean, sort and analyze it and uncover valuable information. Decisions that were earlier often taken on the basis of erroneous data or assumptions can now be more accurate thanks to the application of data science.

Now let's take a look at which sectors are benefiting the most from data science.

Healthcare

The healthcare industry has adopted data science techniques, and the benefits can already be perceived. Not only is keeping track of healthcare records easier, but digging through piles of patient data and analyzing it actually helps hint at health issues that might crop up in the near future. Preventive care is now possible, and monitoring patient health is easier than ever before.

Developments in the field can also predict which medication would be suitable for a particular patient. The application of data analytics and data science is also enabling professionals in this sector to offer better diagnostic results.

Retail

This is one industry reaping huge benefits from the application of data science. By sorting through customer and survey data, it is now easier to gauge the customers' mindset. Predictive analysis is helping experts in this field anticipate the personal preferences of consumers, letting them come up with personalized recommendations that are bound to help retain customers. Not just that, they can also find the problem areas in their current marketing strategy and make changes accordingly.

Transport

Transport is another sector using data science techniques to its advantage, and in turn increasing its service quality. Both public and private transportation service providers are keeping track of customer journeys and gathering the details necessary to develop personalized information; they are also helping people prepare for unexpected issues and, most importantly, reach their destinations without a glitch.

Data Science Machine Learning Certification

Finance

If so many industries are reaping benefits, finance is sure to follow suit. This sector deals with valuable data on banking transactions and credit history. Based on data insight it is possible to offer customers personalized financial advice, and credit risk can be minimized thanks to the insight derived from a particular customer's credit history, allowing the financial institution to make informed decisions. However, credit risk analytics training would be required for personnel working in this field.

Telecom

Telecom is surely a busy sector that has to deal with tons of valuable data. With the application of data science, companies are now able to find smart solutions for processing the data they gather from call records, messages and social media platforms, in order to design and deliver services in accordance with customers' individual needs.

Harnessing the power of data science is definitely going to impact all industries in the future. The data science domain is expanding, and soon there will be more miracles to observe. Data Science training can help upskill employees and reduce the skill gap that is bugging most sectors.



Probability PART-II: A Guide To Probability Theorems


This is the second part of the probability series, in the first segment we discussed the basic concepts of probability. In this second part we will delve deeper into the topic and discuss the theorems of probability. Let’s find out what these theorems are.

Addition Theorem

    • If A and B are two events, not necessarily mutually exclusive, then the probability of occurrence of at least one of the two events, i.e. P(AUB), is given by

P(AUB) = P(A) + P(B) ─ P(A∩B)

Removing the intersection gives the probability of A or B or both.
Example:- From a deck of cards one card is drawn; what is the probability that the card is a king or a heart or both?

Total cards: 52

P(KingUHeart) = P(King) + P(Heart) ─ P(King∩Heart) = 4/52 + 13/52 ─ 1/52 = 16/52 = 4/13
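The computation can be checked in Python using exact fractions:

```python
from fractions import Fraction

# P(King U Heart) = P(King) + P(Heart) - P(King n Heart)
p_king = Fraction(4, 52)    # 4 kings in a 52-card deck
p_heart = Fraction(13, 52)  # 13 hearts
p_both = Fraction(1, 52)    # the king of hearts is counted twice, subtract it once

p_union = p_king + p_heart - p_both
print(p_union)  # 4/13
```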

  • If A and B are two mutually exclusive events, then the probability that either A or B will occur is the sum of the individual probabilities of the two events:

P(AUB) = P(A) + P(B)

Here the combined probability of the two gives either P(A) or P(B).

  • If A and B are two mutually exclusive and collectively exhaustive (complementary) events, then the probability of occurrence of event A is given by

P(A) = P(B′) = 1 ─ P(B)

where B′ is the complement of B, i.e. P(B′) = 1 ─ P(B).

Multiplication Law

The law of multiplication is used to find the joint probability, or intersection, i.e. the probability of two events occurring together at the same point in time.

In the graph above we see that the bill and the tip are paid at the same time, and the intersection of the two can be seen in the graph.

Joint probability table

A joint probability table displays the intersection (joint) probabilities along with the marginal probabilities of a given problem, where a marginal probability is computed by dividing a subtotal by the whole.

Example:- Given the following joint probability table, find the probability that an employee is female or a professional worker.
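Since the original table was shown as an image, here is a worked sketch with hypothetical joint probabilities, chosen only to illustrate how the marginals and the addition theorem combine:

```python
# Hypothetical joint probabilities: sex (first key) x worker type (second key).
# These numbers are illustrative only; the blog's original table was an image.
joint = {
    ("female", "professional"): 0.10,
    ("female", "clerical"):     0.15,
    ("female", "blue_collar"):  0.05,
    ("male",   "professional"): 0.20,
    ("male",   "clerical"):     0.10,
    ("male",   "blue_collar"):  0.40,
}

# Marginal probabilities: sum a row or a column of the joint table
p_female = sum(p for (sex, _), p in joint.items() if sex == "female")
p_professional = sum(p for (_, job), p in joint.items() if job == "professional")
p_female_and_prof = joint[("female", "professional")]

# Addition theorem: P(Female U Professional)
p_union = p_female + p_professional - p_female_and_prof
print(round(p_union, 2))  # 0.5
```

With these made-up numbers, P(Female) = 0.30 and P(Professional) = 0.30, and subtracting the overlap of 0.10 gives 0.50.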

Watch the video down below, which further explains the theorems.

By the end of this blog you must have grasped the basics of the theorems discussed here. Keep tracking the Dexlab Analytics blog, where you will find more discussions on topics related to Data Science training.



What Role Does A Data Scientist Play In A Business Organization?


The job of a data scientist is challenging, exciting and crucial to an organization's success. So it's no surprise that there is a rush to enroll in a Data Science course to be eligible for the job. But while you are at it, you also need to be aware of the job responsibilities usually bestowed upon data scientists in a business organization, and you might be surprised to learn that the responsibilities of a data scientist differ from those of a data analyst or a data engineer.

So, what is the role and responsibility of a data scientist?  Let’s take a look.

The common idea regarding a data scientist's role is that they analyze huge volumes of data to find patterns and extract information that helps organizations move ahead by developing strategies accordingly. This surface-level idea cannot sum up the way a data scientist navigates the data field. Breaking the responsibilities down into segments will help you get the bigger picture.

Data management

The data scientist, after assuming the role, needs to be aware of the goal of the organization in order to proceed. He needs to stay aware of the top trends in the industry to guide his organization, collect data, and decide which methods are to be used for the purpose. The most crucial part of the job is developing knowledge of the problems the business is trying to solve and of the relevant data available to achieve that goal. He has to collaborate with other departments, such as analytics, to get the job of extracting information from data done.

Data analysis

Another vital responsibility of the data scientist is to assume the analytical role: building the models best fit for the purpose and implementing them to solve issues. The data scientist has to resort to data mining and text mining techniques. Doing a text mining with python course can really put you in an advantageous position when you actually get to handle complex datasets.

Developing strategies

The data scientists need to devote themselves to tasks like data cleaning, applying models, and wading through unstructured datasets to derive actionable insight into customer behavior and market trends. These insights help a business organization decide its future course of action and measure product performance. A Data analyst training institute is the right place to pick up the skills required for performing such nuanced tasks.

Collaborating

Another vital task a data scientist performs is collaborating with stakeholders, data engineers and data analysts, communicating with them to share findings or discuss certain issues. However, to communicate effectively, data scientists need to master the art of data visualization, which they could learn while pursuing big data courses in delhi along with a deep learning for computer vision course. The key issue here is to make the presentation simple yet effective enough that people from any background can understand it.


The above-mentioned responsibilities of a data scientist just scratch the surface, because a data scientist's job role cannot be limited to or defined by a couple of tasks. The data scientist needs to stay in sync with the implementation process to understand and analyze how the data-driven insight is shaping strategies, and to what effect. Most importantly, they need to evaluate the company's current data infrastructure and advise on future improvements. A data scientist needs a keen knowledge of Machine Learning Using Python to be able to perform the complex tasks the job demands.



Step-by-step guide to building a career in Data Science


With 2.5 quintillion bytes of data being created every day, companies are scrambling to build models and hire experts to extract the information hidden in massive unstructured datasets, and data scientists have become the most sought-after professionals in the world. The job portals are full of postings looking for data scientists whose resumes show the perfect combination of skill and experience. In this world driven by the data revolution, achieving your big data career dreams needs a little planning and strategizing. So here is a step-by-step guide for you.

Grabbing a high-paying, skilled data job is not going to be easy; industries will only invest money in individuals with the right skillset. Your job responsibilities will involve wading through tons of unstructured data to find patterns and meaning, making forecasts regarding market trends and customer behavior, and delivering the insight in a presentable format, on the basis of which the company will strategize.

So, before you even begin, make sure you have the tenacity and enthusiasm required for the job. You will need to undergo Data science using python training to gain the necessary skills and knowledge, and since this is an evolving field you should be ready to constantly upskill yourself and stay updated about the latest developments in the field.

Are you ready? If it’s a resounding yes, then, without wasting any more time let’s get straight to the point and explore the steps that will lead you to become a data scientist.

Step 1: Complete education

Before you pursue data science you must complete your bachelor's degree; coming from computer science, applied mathematics or economics could give you a head start. However, you need to undergo Data Science training after that to acquire the required skillset.

Step 2: Gain knowledge of Mathematics and statistics

You do not need a PhD in either, but since both are at the core of data science you must have a good grasp of applied mathematics and statistics. Your tasks will require knowledge of linear algebra and probability & statistics. So your first step is to familiarize yourself with these concepts, especially if you hail from a non-science background, so that you can sail through the rest of the journey.

Step 3: Get ready to do programming

Just like mathematics and statistics, having a grip on a programming language, preferably Python, is essential. Now, why do you need to learn coding? Coding is important as you will have to work with large datasets comprising mostly unstructured data, and it will help you clean, organize, read and process that data. The stress is on Python because it is one of the most widely used languages in the data science community and is comparatively easy to pick up.

Step 4: Learn Machine Learning

Machine learning plays a crucial role in data science as it helps find patterns in data and make predictions. Mastering machine learning techniques would enable you to develop algorithms for your models and create automated systems that make predictions in real time. Consider undergoing a Machine Learning training in gurgaon.

Step 5: Learn Data Munging, Visualization, and Reporting

As mentioned before, you will mostly be handling unstructured data, which means that to process it you must transform it into a format that is easy to work with; data munging helps you achieve that. Data visualization is again a must-have skill for a data scientist, as it allows you to present your findings visually through easy-to-understand graphs and charts, while data reporting lets you prepare and present reports for businesses.

Step 6: Be certified

Now that the field has advanced so much, there is demand for professionals who have undergone a Data Science course. A certification course would upskill you and arm you with industry knowledge. Reputed institutes like Dexlab Analytics offer cutting-edge courses such as Python for data science training. The best part of getting your training is that you will be taught everything from scratch, so no need to fret if you do not know a programming language. Your learning will be aided by hands-on training.

Step 7: Practice your skills

You need to test and hone the skills you have acquired. Explore Kaggle, which lets you access the resources you need; the platform also allows you to take part in competitions that further sharpen your abilities. Keep practicing by doing projects, putting the theories into action.

Step 8: Work on your soft skills

To be a professional data scientist you must acquire soft skills as well. Along with working on your communication skills, you need to develop problem-solving skills while learning how business organizations function, so you understand what will be required of you when you assume the role of a data scientist.

Step 9: Get an internship

Now that you have the skills and certification, you need experience to get hired. Build a resume stressing the skills you have acquired and search the job portals to land an internship. It will not only enhance your resume but also give you exposure to real projects; the more projects you handle the better, and you will also learn from the experts there.

Step 10: Apply for a job

Once you have gathered enough experience, start applying for full-time positions, as now you have both skill and experience. But do not stop learning once you land a job: this field keeps growing and changing, so you will have to mold yourself accordingly. Be a part of the community, network with people, keep exploring GitHub and find out what other skills you need.


So, those were the steps you need to follow to build a rewarding career in data science. The job opportunities are plenty, and to grab the right job you should do big data training in gurgaon. These courses are designed to prepare individuals for the industry, so get ready for an exciting career!



Probability PART-I: Introducing The Concept Of Probability


Today we begin our discussion of a significant concept, probability, which measures the likelihood of the occurrence of an event. This is the first part of the series, where you will be introduced to the core concepts. So, let's begin.

What is probability?

It is a measure of quantifying the likelihood that an event will occur and it is written as P(x).


Key concepts of probability


 
 
  • Union:- The union of two events, AUB, comprises only the unique values of the two sets.
  • Intersection:- The intersection, A∩B, comprises the common values of the two sets.
  • Mutually Exclusive Events:- If the occurrence of one event precludes the occurrence of the other event(s), the events are called mutually exclusive. For such events P(A∩B) = 0.
  • Independent Events:- If the occurrence or non-occurrence of an event has no effect on the occurrence or non-occurrence of other event(s), the events are called independent. For example, drinking tea is independent of going shopping.
  • Collectively Exhaustive Events:- A set of collectively exhaustive events comprises all possible elementary events for an experiment. Therefore, all sample spaces are collectively exhaustive sets.
  • Complementary Events:- The complement of event A is A′, i.e. P(A′) = 1 ─ P(A)

Properties of probability

  • Probabilities are non-negative values ranging between 0 & 1.
  • P(Ω) = 1, i.e. the combined probability of the sample space is 1.
  • If A & B are two mutually exclusive events then P(A U B) = P(A) + P(B).
  • The probability of an event not happening is P(A′) = 1 ─ P(A).
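The set operations and properties above can be verified on a toy sample space; here we use one roll of a fair die, with illustrative events:

```python
# Toy sample space: one roll of a fair die
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # event: roll an even number
B = {1, 3, 5}   # event: roll an odd number
C = {4, 5, 6}   # event: roll a number greater than 3

union = A | C          # unique values of both sets: {2, 4, 5, 6}
intersection = A & C   # common values of the two sets: {4, 6}
exclusive = A & B      # mutually exclusive events share nothing: set()

def p(event):
    """Probability of an event under equally likely outcomes."""
    return len(event) / len(sample_space)

print(p(A | B) == p(A) + p(B))          # addition rule for mutually exclusive events
print(p(sample_space))                   # P(sample space) = 1
print(1 - p(A) == p(sample_space - A))   # complement rule
```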

Rules of Counting the possibilities

  • The mn counting rule:- When a customer has a set of combinations to choose from, like two different engines, five different paint colors and three different interior packages, how does he calculate the total number of options available? The answer is the "mn counting rule": simply multiply the given options. In our case 2 * 5 * 3 gives 30, meaning the customer has 30 combinations to choose from when purchasing a car.
  • Sampling from a population with replacement:- Suppose you roll a die three times, i.e. the number of trials is 3. To check how many combinations are possible in this experiment we use N^n = 6^3 = 216.
  • Sampling from a population without replacement:- When the sample space shrinks after each trial, the number of possible ordered samples is N!/(N ─ n)!, and the number of possible unordered samples (combinations) is N!/(n!(N ─ n)!).
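All three counting rules can be checked with Python's math module:

```python
import math

# mn counting rule: multiply the number of options at each step
engines, colors, interiors = 2, 5, 3
car_options = engines * colors * interiors
print(car_options)  # 30 possible car configurations

# Sampling with replacement: N^n possibilities (a die rolled 3 times)
with_replacement = 6 ** 3
print(with_replacement)  # 216

# Sampling without replacement: the option pool shrinks after each draw
N, n = 6, 3
print(math.perm(N, n))  # 120 ordered samples:   N!/(N-n)!
print(math.comb(N, n))  # 20 unordered samples:  N!/(n!(N-n)!)
```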


Conclusion

There is a video covering the same concepts attached below the blog; go through it to get more clarity on all of this.

So, with this we wrap up our discussion on the concept of probability. If you want more informative blogs on Data Science training, then follow the Dexlab Analytics blog. Dexlab Analytics provides machine learning certification courses in gurgaon as well.


