Machine Learning Courses Archives - Page 6 of 9 - DexLab Analytics | Big Data Hadoop SAS R Analytics Predictive Modeling & Excel VBA

Artificial Intelligence: Let’s Crack the Myths and Unfold the Future to You


A lot of myths are going around about Artificial Intelligence.

In a recent interview, Alibaba founder Jack Ma said AI could pose a massive threat to jobs around the world and even trigger World War III. His reasoning: in 30 years, humans will be working only four hours a day, four days a week.

Fuelling this, Recode founder Kara Swisher vouched for Ma’s prediction, saying he is “a hundred percent right,” and adding that “any job that’s repetitive, that doesn’t include creativity, is finished because it can be digitized” and that “it’s not crazy to imagine a society where there’s very little job availability.”

Frankly, I find all this quite baffling. If AI is going to be the driving force behind innovation and a new technological revolution, it is up to us to curate the opportunities that will create new jobs. Apocalyptic predictions just don’t help.


Let’s highlight the myths and the logical equations:

Myth 1: AI is going to kill our jobs. It can never happen.

Remember, it’s humans who created robots. We excel at mechanizing, systematizing and automating. We spurred the automation drive while infusing intelligence into machines.

The present objective is to create AIs that work together with human intelligence to develop new narratives for problems we have yet to solve. Solving these new problems will require new kinds of jobs – there is great scope for opportunity here, so let’s not believe that AI will kill our jobs.

DexLab Analytics is here with its comprehensive machine learning courses.

Myth 2: Robots are AI. Not at all.

From drones to self-organizing shelves in warehouses to machines sent to Mars, these are all just machines programmed to perform a function.

Myth 3: Big Data and Analytics are AI. Who said that?

Data mining, Data Science, Pattern Recognition – these are just human-created models. They might be intricate or complicated, but they are not AI. Data and AI are two entirely different concepts.

Myth 4: Machine Learning and Deep Learning are AI. Again a big NO.

Though Machine Learning and Deep Learning are part of the enormous AI toolkit, they are not AI in themselves. They are merely tools for programming computers to tackle complex patterns – the way your email filters out spam by “learning” what hundreds of thousands of users have flagged as spam. Undeniably, they look uber smart, even scary at times, as when a computer beats a renowned expert at the game of Go, but they are definitely not AI.

Myth 5: AI includes Search Engines. Definitely NO.

Search Engines have undoubtedly made our lives easier. The way you can search for information now was impossible a few years back, but as the searcher, you contribute the intelligence too. All the computer does is identify patterns in what people search for and suggest them to others. From a macro perspective, it doesn’t actually know what it finds; in the end, it’s dumb. We feed it intelligence; otherwise it is nothing.

So, instead of panicking about the uncertainties AI may bring into our lives, we should take a bow and appreciate the effort humans have put into creating something as huge and complex as AI.

And remember, automation has always created jobs in the past rather than taking them away. So, be hopeful!

For best data analytics training in Gurgaon, consider DexLab Analytics! Follow us to get feeds regularly.

 

Interested in a career in Data Analyst?

To learn more about Data Analyst with Advanced excel course – Enrol Now.
To learn more about Data Analyst with R Course – Enrol Now.
To learn more about Big Data Course – Enrol Now.

To learn more about Machine Learning Using Python and Spark – Enrol Now.
To learn more about Data Analyst with SAS Course – Enrol Now.
To learn more about Data Analyst with Apache Spark Course – Enrol Now.
To learn more about Data Analyst with Market Risk Analytics and Modelling Course – Enrol Now.

5 New-Age IT Skill Sets to Fetch Bigger Paychecks in 2017

Technology is king. It is steadily extending its presence across workplaces, and it is one of the chief reasons companies are laying off employees. Adoption of cutting-edge technologies is widely blamed for job cuts, and if professional techies don’t equip themselves with newer technologies now, the future of the human workforce looks bleak.

 
 

DexLab Analytics offers the best R language certification in Delhi.

 

A recent report says India could lose about 69,000 jobs by 2021 due to the adoption of IoT. So do you really think human intelligence is losing its edge? Will AI finally surpass brain power?

Continue reading “5 New-Age IT Skill Sets to Fetch Bigger Paychecks in 2017”

The Timeline of Artificial Intelligence and Robotics


Cities have been constructed sprawling over the miles, heaven-piercing skyscrapers have been built, mountains have been cut through to make way for tunnels, and rivers have been redirected to erect massive dams – in less than 250 years, we have gone from primitive horse-drawn carts to autonomous cars running on highly integrated GPS systems, all thanks to state-of-the-art technological innovation. The internet has transformed all our lives forever. Be it artificial intelligence or the Internet of Things, these technologies have shaped our society and amplified the pace of high-tech breakthroughs.

One of the most significant and influential developments in the field of technology is the notion of artificial intelligence. Dating back to the 5th century BC, when Greek myths of Hephaestus incorporated the idea of robots, though it could not be executed until the Second World War, artificial intelligence has indeed come a long way.

 

Come and take a look at this infographic blog to view the timeline of Artificial Intelligence:

 

Evolution of Artificial Intelligence Over the Ages from Infographics

 

In the near future, AI will become a massive sector brimming with promising financial opportunities and unabashed technological superiority. To find out more about AI and how it is going to impact our lives, read our blogs published at DexLab Analytics. We offer excellent Machine Learning training in Gurgaon for aspiring candidates, who want to know more about Machine Learning using Python.

 

Interested in a career in Data Analyst?

To learn more about Data Analyst with Advanced excel course – Enrol Now.
To learn more about Data Analyst with R Course – Enrol Now.
To learn more about Big Data Course – Enrol Now.

To learn more about Machine Learning Using Python and Spark – Enrol Now.
To learn more about Data Analyst with SAS Course – Enrol Now.
To learn more about Data Analyst with Apache Spark Course – Enrol Now.
To learn more about Data Analyst with Market Risk Analytics and Modelling Course – Enrol Now.

Indian Startups Relying on Artificial Intelligence to Know Their Customers Better


Artificial Intelligence has been around for decades, but of late everyone in India’s startup ecosystem is talking about AI and Big Data.

Budding startups are looking for new talent with AI expertise to inspect and evaluate consumer data and provide customized services to users. At the same time, tech honchos such as Apple have discovered the huge potential hidden within Indian companies that help their clients with data processing and image and voice recognition, and no wonder investors are hopeful about Indian AI startups.

Discover an intensive and student-friendly Machine Learning course in Delhi. Visit us at DexLab Analytics.

 

Here is a slew of Indian unicorns – companies valued at $1 billion or more – that are putting the exploding technology of AI to use in the best way possible:


Paytm

After a striking transformation from e-wallet to selling flight and movie tickets, Paytm is now implementing machine learning to bring order into chaos. The company’s chief technology officer, Charumitra Pujari, said, “You could Google and try to look for something. But a better world would be when Google could on its own figure out Charu is looking for ‘x’ at this time. That’s exactly what we’re doing at Paytm.” He added, “If you’ve come to buy a flight ticket, because I understand your purchase cycle, I show that instead of a movie ticket or transactions.”

In order to identify and prevent fraudulent activities, machines constantly assess illicit accounts that sign up purposefully to take advantage of promo codes or to launder money. The fraud-detection engine is extremely efficient, leaving no room for human error, Pujari stated.

The team at Paytm is versatile – machine learning engineers, software engineers, and data scientists are in action in Toronto, Canada, as well as in Paytm’s headquarters in Noida, India. Currently, they have 60 people working for them in each location – “We know the future is AI and we will need a lot more people,” said Pujari.

Ola cabs

One of the most successful ride-hailing apps in India, Ola uses machine learning to track traffic, analyze driver habits, improve customer experience and extend the life of each vehicle in its fleet. AI plays a consequential role in interpreting day-to-day variations in demand, deciphering how much supply is required to meet it, how variable traffic predictions are, and how rainfall affects the productivity of vehicles.


“AI is understanding what is the behavioral profile of a driver partner and, hence, in which way can we train him to be a better driver partner on (the) platform,” co-founder and chief technology officer Ankit Bhati said. The algorithms behind the car-pooling service work well at pulling down travel times by coordinating multiple pick-up points and destinations while sharing a single vehicle, he added.

Power yourself with top-notch Machine Learning training.

Flipkart

According to a report in Forbes, Flipkart – India’s largest domestic e-commerce player – has already re-designed its app’s home screen to give a more personalized version of its services to its mushrooming base of 120 million patrons. Machine learning models work out each customer’s gender, brand preference, store affinity, price range, volume of purchases and more. In future, the company plans to figure out when and why returns are made, and to try to reduce them.


A squad of 25 data scientists at Flipkart has started using AI to observe past buyer behavior and predict future purchases. “If a customer keys in a query for running shoes, we show only the category landing pages of the particular brand the customer wants to see, in the price point and styles that (are) preferred, as gauged by previous buying behaviour, therefore ensuring a faster, smoother checkout process,” Ram Papatla, vice president of product management at Flipkart, said recently in an interview with a leading daily.

ShopClues, InMobi, SigTuple and EdGE Network are among the many other Indian startups making it big by harnessing the power of AI and machine learning.

For more such interesting feeds on artificial intelligence and machine learning, follow us at DexLab Analytics. We offer India’s best Machine Learning Using Python courses.  

 

Interested in a career in Data Analyst?

To learn more about Data Analyst with Advanced excel course – Enrol Now.
To learn more about Data Analyst with R Course – Enrol Now.
To learn more about Big Data Course – Enrol Now.

To learn more about Machine Learning Using Python and Spark – Enrol Now.
To learn more about Data Analyst with SAS Course – Enrol Now.
To learn more about Data Analyst with Apache Spark Course – Enrol Now.
To learn more about Data Analyst with Market Risk Analytics and Modelling Course – Enrol Now.

Google Is All Set to Wipe Off Artificial Stupidity


Well, the human-AI relationship needs to improve. Amazon’s Alexa personal assistant operates in one of the world’s largest online stores and deserves accolades for the way it pulls information from Wikipedia. But what if it can’t play that rad pop banger you just heard, and responds with “I’m sorry, I don’t understand the question”? Disappointing, right?

All the revered digital helpmates, including Google’s Google Assistant and Apple’s Siri, are capable of frustrating failures that can feel like artificial stupidity. Against this backdrop, Google has started a new research push to understand and improve the relationship between humans and AI. PAIR, short for the People + AI Research initiative, was announced this Monday, and it will be shepherded by two data-visualization crackerjacks, Fernanda Viégas and Martin Wattenberg.


Get Machine Learning Certification today. DexLab Analytics is here to provide encompassing Machine Learning courses.

Virtual assistants are frustrating when they fail to perform a given task. In this context, Viégas says she is keen to study how people form expectations about what systems can and cannot do – which is to say, how virtual assistants should be designed to nudge us toward asking only for things they can actually perform, leaving no room for disappointment.

Making Artificial Intelligence more transparent to people at large, and not just professionals, is going to be a major initiative of PAIR. It has also released two open-source tools to help data scientists understand the data they are feeding into machine learning systems. Interesting, isn’t it?

The deep learning programs that have recently won acclaim for analyzing our personal data or diagnosing life-threatening diseases are often dubbed ‘black boxes’ by critical researchers, meaning it can be tricky to see why a system churned out a specific decision, such as a diagnosis. Therein lies the problem. In life-and-death situations inside clinics, or on the road in autonomous vehicles, such opaque algorithms may pose real risks. As Viégas says, “The doctor needs to have some sense of what’s happening and why they got a recommendation or prediction.”


Google’s project comes at a time when the human consequences of AI are being questioned the most. Recently, the Ethics and Governance of Artificial Intelligence Fund, in association with the Knight Foundation and LinkedIn cofounder Reid Hoffman, announced $7.6 million in grants to civil society organizations to review the changes AI will cause in labor markets and criminal justice systems. Similarly, Google says most of PAIR’s work will take place in the open. MIT and Harvard professors Hal Abelson and Brendan Meade are joining forces with PAIR to study how AI can improve education and science.


Closing Thoughts – If PAIR can integrate AI seamlessly into prime industries like healthcare, it will pave the way for new customers to reach Google’s AI-centric cloud business. Viégas reveals she would also like to work closely with Google’s product teams, like the one responsible for Google Assistant. According to her, such collaborations come with an added advantage: they keep people hooked to the product, feeding back into the company’s broader services. PAIR is a necessary shot not only at helping society understand what’s going on between humans and AI, but also at boosting Google’s bottom line.

DexLab Analytics is your gateway to a great career in data analytics. Enrol in a Machine Learning course online and ride on.

 

Interested in a career in Data Analyst?

To learn more about Data Analyst with Advanced excel course – Enrol Now.
To learn more about Data Analyst with R Course – Enrol Now.
To learn more about Big Data Course – Enrol Now.

To learn more about Machine Learning Using Python and Spark – Enrol Now.
To learn more about Data Analyst with SAS Course – Enrol Now.
To learn more about Data Analyst with Apache Spark Course – Enrol Now.
To learn more about Data Analyst with Market Risk Analytics and Modelling Course – Enrol Now.

Let’s Make Visualizations Better In Python with Matplotlib


Learn the basics of effective graphic design and create good-looking plots using matplotlib. In fact, not just matplotlib – the insights here apply equally to R/ggplot2, Matlab, Excel, and any other graphing tool you use, and will help you grasp the concepts of graphic design better.

Simplicity is the ultimate sophistication

To begin with, remember: less is more when it comes to plotting. Neophyte graphic designers sometimes think that putting a visually appealing, semi-related picture in the background of a data visualization will make the presentation look better; they are wrong. If not that, they may fall prey to subtler design flaws, like using a little too much chartjunk.

 

Data always looks better naked. Try to strip it down instead of dressing it up.


“Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away.” – Antoine de Saint-Exupéry said it best.
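As a minimal sketch of this stripping-down idea (a toy example of my own, not a figure from the original post), here is how the frame lines and redundant ticks come off a matplotlib plot:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 6))
ax.plot([2010, 2011, 2012, 2013], [3, 5, 4, 6], lw=2.5, color="#3F5D7D")

# Strip the chartjunk: hide the top/right frame lines and keep
# tick marks only on the bottom and left.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.xaxis.set_ticks_position("bottom")
ax.yaxis.set_ticks_position("left")

fig.savefig("naked-plot.png", bbox_inches="tight")
```

The same four lines reappear in the larger scripts below; the data remains untouched, only the decoration goes.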

Color rules the world

The default color configuration of Matlab – largely inherited by matplotlib – is quite awful. Matlab/matplotlib stalwarts may not find the colors that ugly, but it’s undeniable that Tableau’s default color configuration is way better than matplotlib’s.

Get Tableau certification Pune today! DexLab Analytics offers Tableau BI training courses to the aspiring candidates.

Make use of the established default color schemes from leading software famous for offering gorgeous plots. Tableau ships an incredible set of color schemes, from grayscale and colored to colorblind-friendly.

Plenty of graphic designers forget to pay heed to color blindness, which affects over 5% of viewers. For example, a person with red-green color blindness will find it practically impossible to tell apart two categories depicted by red and green plots. How is he supposed to read the chart?

 

For them, it is better to rely upon colorblind friendly color configurations, like Tableau’s “Color Blind 10”.
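The "Color Blind 10" palette can be used exactly like the "Tableau 20" list in the script below. The RGB values here are my own transcription of Tableau's palette (an assumption to verify against Tableau itself), scaled to the [0, 1] range matplotlib expects:

```python
# Tableau's "Color Blind 10" palette as RGB (0-255).
# NOTE: transcribed values -- verify against Tableau before relying on them.
color_blind_10 = [(0, 107, 164), (255, 128, 14), (171, 171, 171),
                  (89, 89, 89), (95, 158, 209), (200, 82, 0),
                  (137, 137, 137), (163, 200, 236), (255, 188, 121),
                  (207, 207, 207)]

# Scale each channel to [0, 1], the format matplotlib accepts.
color_blind_10 = [(r / 255., g / 255., b / 255.)
                  for r, g, b in color_blind_10]
```

Any `color=` argument in the plotting calls below can then take `color_blind_10[i]` instead of `tableau20[i]`.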

 

To run the codes, you need to install the following Python libraries:

 

  1. Matplotlib
  2. Pandas

 

Now that we are done with the fundamentals, let’s get started with the coding.

 

percent-bachelors-degrees-women-usa

 

import matplotlib.pyplot as plt
import pandas as pd

# Read the data into a pandas DataFrame.  
gender_degree_data = pd.read_csv("http://www.randalolson.com/wp-content/uploads/percent-bachelors-degrees-women-usa.csv")  

# These are the "Tableau 20" colors as RGB.  
tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),  
             (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),  
             (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),  
             (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),  
             (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]  

# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.  
for i in range(len(tableau20)):  
    r, g, b = tableau20[i]  
    tableau20[i] = (r / 255., g / 255., b / 255.)  

# You typically want your plot to be ~1.33x wider than tall. This plot is a rare  
# exception because of the number of lines being plotted on it.  
# Common sizes: (10, 7.5) and (12, 9)  
plt.figure(figsize=(12, 14))  

# Remove the plot frame lines. They are unnecessary chartjunk.  
ax = plt.subplot(111)  
ax.spines["top"].set_visible(False)  
ax.spines["bottom"].set_visible(False)  
ax.spines["right"].set_visible(False)  
ax.spines["left"].set_visible(False)  

# Ensure that the axis ticks only show up on the bottom and left of the plot.  
# Ticks on the right and top of the plot are generally unnecessary chartjunk.  
ax.get_xaxis().tick_bottom()  
ax.get_yaxis().tick_left()  

# Limit the range of the plot to only where the data is.  
# Avoid unnecessary whitespace.  
plt.ylim(0, 90)  
plt.xlim(1968, 2014)  

# Make sure your axis ticks are large enough to be easily read.  
# You don't want your viewers squinting to read your plot.  
plt.yticks(range(0, 91, 10), [str(x) + "%" for x in range(0, 91, 10)], fontsize=14)  
plt.xticks(fontsize=14)  

# Provide tick lines across the plot to help your viewers trace along  
# the axis ticks. Make sure that the lines are light and small so they  
# don't obscure the primary data lines.  
for y in range(10, 91, 10):  
    plt.plot(range(1968, 2012), [y] * len(range(1968, 2012)), "--", lw=0.5, color="black", alpha=0.3)  

# Remove the tick marks; they are unnecessary with the tick lines we just plotted.  
plt.tick_params(axis="both", which="both", bottom=False, top=False,
                labelbottom=True, left=False, right=False, labelleft=True)

# Now that the plot is prepared, it's time to actually plot the data!  
# Note that I plotted the majors in order of the highest % in the final year.  
majors = ['Health Professions', 'Public Administration', 'Education', 'Psychology',  
          'Foreign Languages', 'English', 'Communications\nand Journalism',  
          'Art and Performance', 'Biology', 'Agriculture',  
          'Social Sciences and History', 'Business', 'Math and Statistics',  
          'Architecture', 'Physical Sciences', 'Computer Science',  
          'Engineering']  

for rank, column in enumerate(majors):  
    # Plot each line separately with its own color, using the Tableau 20  
    # color set in order.  
    plt.plot(gender_degree_data.Year.values,  
            gender_degree_data[column.replace("\n", " ")].values,  
            lw=2.5, color=tableau20[rank])  

    # Add a text label to the right end of every line. Most of the code below  
    # is adding specific offsets y position because some labels overlapped.  
    y_pos = gender_degree_data[column.replace("\n", " ")].values[-1] - 0.5  
    if column == "Foreign Languages":  
        y_pos += 0.5  
    elif column == "English":  
        y_pos -= 0.5  
    elif column == "Communications\nand Journalism":  
        y_pos += 0.75  
    elif column == "Art and Performance":  
        y_pos -= 0.25  
    elif column == "Agriculture":  
        y_pos += 1.25  
    elif column == "Social Sciences and History":  
        y_pos += 0.25  
    elif column == "Business":  
        y_pos -= 0.75  
    elif column == "Math and Statistics":  
        y_pos += 0.75  
    elif column == "Architecture":  
        y_pos -= 0.75  
    elif column == "Computer Science":  
        y_pos += 0.75  
    elif column == "Engineering":  
        y_pos -= 0.25  

    # Again, make sure that all labels are large enough to be easily read  
    # by the viewer.  
    plt.text(2011.5, y_pos, column, fontsize=14, color=tableau20[rank])  

# matplotlib's title() call centers the title on the plot, but not the graph,  
# so I used the text() call to customize where the title goes.  

# Make the title big enough so it spans the entire plot, but don't make it  
# so big that it requires two lines to show.  

# Note that if the title is descriptive enough, it is unnecessary to include  
# axis labels; they are self-evident, in this plot's case.  
plt.text(1995, 93, "Percentage of Bachelor's degrees conferred to women in the U.S.A."  
       ", by major (1970-2012)", fontsize=17, ha="center")  

# Always include your data source(s) and copyright notice! And for your  
# data sources, tell your viewers exactly where the data came from,  
# preferably with a direct link to the data. Just telling your viewers  
# that you used data from the "U.S. Census Bureau" is completely useless:  
# the U.S. Census Bureau provides all kinds of data, so how are your  
# viewers supposed to know which data set you used?  
plt.text(1966, -8, "Data source: nces.ed.gov/programs/digest/2013menu_tables.asp"  
       "\nAuthor: Randy Olson (randalolson.com / @randal_olson)"  
       "\nNote: Some majors are missing because the historical data "  
       "is not available for them", fontsize=10)  

# Finally, save the figure as a PNG.  
# You can also save it as a PDF, JPEG, etc.  
# Just change the file extension in this call.  
# bbox_inches="tight" removes all the extra whitespace on the edges of your plot.  
plt.savefig("percent-bachelors-degrees-women-usa.png", bbox_inches="tight")

 

chess-number-ply-over-time
 

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import sem

# This function takes an array of numbers and smooths them out.
# Smoothing is useful for making plots a little easier to read.
def sliding_mean(data_array, window=5):
    data_array = np.array(data_array)
    new_list = []
    for i in range(len(data_array)):
        indices = range(max(i - window + 1, 0),
                        min(i + window + 1, len(data_array)))
        avg = 0
        for j in indices:
            avg += data_array[j]
        avg /= float(len(indices))
        new_list.append(avg)

    return np.array(new_list)

# Due to an agreement with the ChessGames.com admin, I cannot make the data
# for this plot publicly available. This function reads in and parses the
# chess data set into a tabulated pandas DataFrame.
chess_data = read_chess_data()

# These variables are where we put the years (x-axis), means (y-axis), and error bar values.
# We could just as easily replace the means with medians,
# and standard errors (SEMs) with standard deviations (STDs).
years = chess_data.groupby("Year").PlyCount.mean().keys()
mean_PlyCount = sliding_mean(chess_data.groupby("Year").PlyCount.mean().values,
                             window=10)
sem_PlyCount = sliding_mean(chess_data.groupby("Year").PlyCount.apply(sem).mul(1.96).values,
                            window=10)

# You typically want your plot to be ~1.33x wider than tall.
# Common sizes: (10, 7.5) and (12, 9)
plt.figure(figsize=(12, 9))

# Remove the plot frame lines. They are unnecessary chartjunk.
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)

# Ensure that the axis ticks only show up on the bottom and left of the plot.
# Ticks on the right and top of the plot are generally unnecessary chartjunk.
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()

# Limit the range of the plot to only where the data is.
# Avoid unnecessary whitespace.
plt.ylim(63, 85)

# Make sure your axis ticks are large enough to be easily read.
# You don't want your viewers squinting to read your plot.
plt.xticks(range(1850, 2011, 20), fontsize=14)
plt.yticks(range(65, 86, 5), fontsize=14)

# Along the same vein, make sure your axis labels are large
# enough to be easily read as well. Make them slightly larger
# than your axis tick labels so they stand out.
plt.ylabel("Ply per Game", fontsize=16)

# Use matplotlib's fill_between() call to create error bars.
# Use the dark blue "#3F5D7D" as a nice fill color.
plt.fill_between(years, mean_PlyCount - sem_PlyCount,
                 mean_PlyCount + sem_PlyCount, color="#3F5D7D")

# Plot the means as a white line in between the error bars. 
# White stands out best against the dark blue.
plt.plot(years, mean_PlyCount, color="white", lw=2)

# Make the title big enough so it spans the entire plot, but don't make it
# so big that it requires two lines to show.
plt.title("Chess games are getting longer", fontsize=22)

# Always include your data source(s) and copyright notice! And for your
# data sources, tell your viewers exactly where the data came from,
# preferably with a direct link to the data. Just telling your viewers
# that you used data from the "U.S. Census Bureau" is completely useless:
# the U.S. Census Bureau provides all kinds of data, so how are your
# viewers supposed to know which data set you used?
plt.xlabel("\nData source: www.ChessGames.com | "
           "Author: Randy Olson (randalolson.com / @randal_olson)", fontsize=10)

# Finally, save the figure as a PNG.
# You can also save it as a PDF, JPEG, etc.
# Just change the file extension in this call.
# bbox_inches="tight" removes all the extra whitespace on the edges of your plot.
plt.savefig("chess-number-ply-over-time.png", bbox_inches="tight");
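For readers who prefer not to hand-roll the smoothing loop, pandas' built-in rolling window gives the same result – a sketch, under the assumption that `sliding_mean`'s centred window of `2*window - 1` points (shrunk at the edges) matches the intent above:

```python
import numpy as np
import pandas as pd

def sliding_mean(data_array, window=5):
    # Same smoothing as the hand-rolled version: a centred window of
    # (2*window - 1) points, shrunk near the edges via min_periods=1.
    s = pd.Series(np.asarray(data_array, dtype=float))
    return s.rolling(window=2 * window - 1, center=True,
                     min_periods=1).mean().values

smoothed = sliding_mean([1, 2, 3, 4, 5], window=2)
# [1.5, 2.0, 3.0, 4.0, 4.5]
```

Being vectorized, this version is also considerably faster on long series than the explicit Python loop.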

Histograms

 
chess-elo-rating-distribution

 

import pandas as pd
import matplotlib.pyplot as plt

# Due to an agreement with the ChessGames.com admin, I cannot make the data
# for this plot publicly available. This function reads in and parses the
# chess data set into a tabulated pandas DataFrame.
chess_data = read_chess_data()

# You typically want your plot to be ~1.33x wider than tall.
# Common sizes: (10, 7.5) and (12, 9)
plt.figure(figsize=(12, 9))

# Remove the plot frame lines. They are unnecessary chartjunk.
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)

# Ensure that the axis ticks only show up on the bottom and left of the plot.
# Ticks on the right and top of the plot are generally unnecessary chartjunk.
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()

# Make sure your axis ticks are large enough to be easily read.
# You don't want your viewers squinting to read your plot.
plt.xticks(fontsize=14)
plt.yticks(range(5000, 30001, 5000), fontsize=14)

# Along the same vein, make sure your axis labels are large
# enough to be easily read as well. Make them slightly larger
# than your axis tick labels so they stand out.
plt.xlabel("Elo Rating", fontsize=16)
plt.ylabel("Count", fontsize=16)

# Plot the histogram. Note that all I'm passing here is a list of numbers.
# matplotlib automatically counts and bins the frequencies for us.
# "#3F5D7D" is the nice dark blue color.
# Make sure the data is sorted into enough bins so you can see the distribution.
plt.hist(list(chess_data.WhiteElo.values) + list(chess_data.BlackElo.values),
         color="#3F5D7D", bins=100)

# Always include your data source(s) and copyright notice! And for your
# data sources, tell your viewers exactly where the data came from,
# preferably with a direct link to the data. Just telling your viewers
# that you used data from the "U.S. Census Bureau" is completely useless:
# the U.S. Census Bureau provides all kinds of data, so how are your
# viewers supposed to know which data set you used?
plt.text(1300, -5000, "Data source: www.ChessGames.com | "
         "Author: Randy Olson (randalolson.com / @randal_olson)", fontsize=10)

# Finally, save the figure as a PNG.
# You can also save it as a PDF, JPEG, etc.
# Just change the file extension in this call.
# bbox_inches="tight" removes all the extra whitespace on the edges of your plot.
plt.savefig("chess-elo-rating-distribution.png", bbox_inches="tight");

Here Goes the Bonus

It takes just one more line of code to transform your matplotlib plot into a phenomenal interactive.

 

 

Learn more such tutorials only at DexLab Analytics. We make data visualization easier by providing excellent Python courses in India. In just a few months, you will cover advanced topics and more, which will help you build a career in data analytics.

 

Interested in a career in Data Analyst?

To learn more about Machine Learning Using Python and Spark – click here.
To learn more about Data Analyst with Advanced excel course – click here.
To learn more about Data Analyst with SAS Course – click here.
To learn more about Data Analyst with R Course – click here.
To learn more about Big Data Course – click here.

More Powerful and Soon To Be Everywhere, Here’s All You Need to Know about AI

 

This Wednesday, at the Google I/O keynote, there wasn’t just one major revelation but a series of incremental improvements across several of Google’s product portfolios. The best part of the story is that all the improvements are driven by discoveries in artificial intelligence – the intelligence exhibited by machines.

Continue reading “More Powerful and Soon To Be Everywhere, Here’s All You Need to Know about AI”

Pentagon Fights Off ISIS with Machine Learning and Big Data

Machine learning and big data are the new BIG things going around the world. They are being used for myriad purposes – better AI, apt malware detection, smarter messenger apps, and much more. On top of that, they are being utilized by the Pentagon to erode the foundations of Islamic State militants and make the world a safer (and better) place to live in.

 
 

This May, the Pentagon announced its newly minted Algorithmic Warfare Cross-Functional Team (AWCFT), codenamed Project Maven, which will introduce Big Data and Machine Learning to speed up the discovery of actionable intelligence from aerial imagery. “We’re not going to solve it by throwing more people at the problem…That’s the last thing that we actually want to do. We want to be smarter about what we’re doing,” Air Force Lt. Gen. John N.T. “Jack” Shanahan, director for defence intelligence for war fighter support, told a leading defence news magazine.

Continue reading “Pentagon Fights Off ISIS with Machine Learning and Big Data”

When Machines Do Everything – How to Survive?

In the coming years, jobs and businesses are going to be impacted; the reason is AI. Today’s generation is very much concerned that the bots will consume everything; from jobs to skills, the smart machines will spare nothing! It is true that machines are going to replace some human jobs – with robots, mundane tasks can be performed in the blink of an eye, freeing people in bigger organisations to innovate and succeed.

 


Continue reading “When Machines Do Everything – How to Survive?”

Call us to know more