Deep Learning — Applications and Techniques

Deep learning is a subset of machine learning, the branch of artificial intelligence that enables computers to learn tasks from experience. While classic machine-learning algorithms solve many problems well, they struggle with unstructured data such as images, video, sound files, and free-form text.

Deep-learning algorithms tackle these problems using deep neural networks, a software architecture loosely inspired by the human brain (though artificial neurons differ considerably from biological ones). The inspiration comes from our understanding of the brain's biology, with its dense interconnections between neurons. Unlike a biological brain, however, where any neuron can connect to any other neuron within a certain physical distance, artificial neural networks have discrete layers, connections, and directions of data propagation.

Data enters the first layer of the network, where individual neurons process it and pass their results to the second layer. The second layer does its own work, and so on, until the final layer produces the output. Each neuron assigns a weight to its inputs, reflecting how relevant each input is to the task at hand, and the final output is determined by the combination of those weighted signals.

Deep Learning Use Case Examples

Robotics

Many of the recent developments in robotics have been driven by advances in AI and deep learning. These advances mean we can expect the robots of the future to be used increasingly as human assistants. They will not only understand and answer questions, as some do today; they will also act on voice commands and gestures, and even anticipate a worker's next move. Today, collaborative robots already work alongside humans, with humans and robots each performing the separate tasks best suited to their strengths.

Agriculture

AI has the potential to revolutionize farming. Today, deep learning enables farmers to deploy equipment that can see and differentiate between crop plants and weeds. This capability allows weeding machines to spray herbicides selectively on weeds and leave other plants untouched. Farming machines that use deep learning–enabled computer vision can even tend to individual plants in a field, selectively applying herbicides, fertilizers, fungicides and insecticides.

Medical Imaging and Healthcare

Deep learning has been particularly effective in medical imaging, thanks to the availability of high-quality data and the ability of convolutional neural networks to classify images. Several vendors have already received FDA approval for deep-learning algorithms used for diagnostic purposes, including image analysis for oncology and retinal diseases. Deep learning is also making significant inroads into improving healthcare quality by predicting medical events from electronic health record data. Earlier this year, computer scientists at the Massachusetts Institute of Technology (MIT) used deep learning to create a new computer program for detecting breast cancer.

Here are some basic techniques that allow deep learning to solve a variety of problems.

Fully Connected Neural Networks

Fully connected feedforward neural networks are the standard network architecture used in most basic neural network applications.

In a fully connected layer each neuron is connected to every neuron in the previous layer, and each connection has its own weight. This is a totally general purpose connection pattern and makes no assumptions about the features in the data. It’s also very expensive in terms of memory (weights) and computation (connections).
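To make that memory cost concrete, here is a minimal sketch that counts the parameters of a single fully connected layer. It assumes PyTorch purely for illustration; the article itself does not prescribe a framework, and the layer sizes are arbitrary example values.

```python
import torch.nn as nn

# One fully connected layer from 784 inputs (e.g. a flattened 28x28 image) to 256 neurons.
fc = nn.Linear(784, 256)

# Every input connects to every neuron, so the parameter count grows multiplicatively.
num_params = sum(p.numel() for p in fc.parameters())
print(num_params)  # 784*256 weights + 256 biases = 200,960 parameters for a single layer
```

Even this one modest layer holds roughly 200,000 parameters, which is why fully connected networks scale poorly to large inputs such as images.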

Each neuron in a neural network applies an activation function that transforms the neuron's input into its output. Common activation functions include:

  • Linear function: a straight line that simply multiplies the input by a constant value.
  • Sigmoid function: an S-shaped curve ranging from 0 to 1.
  • Hyperbolic tangent (tanh) function: an S-shaped curve ranging from -1 to +1.
  • Rectified linear unit (ReLU) function: a piecewise function that outputs 0 for negative inputs and the input itself (a linear multiple) for positive inputs.

Each type of activation function has pros and cons, so we use them in various layers in a deep neural network based on the problem. Non-linearity is what allows deep neural networks to model complex functions.
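For reference, the four activation functions listed above can be evaluated in a few lines. This is a minimal sketch assuming PyTorch as the framework (an illustrative choice, not one made by the article), and the constant 2.0 in the linear case is arbitrary:

```python
import torch

x = torch.linspace(-3.0, 3.0, steps=7)   # a few sample inputs from -3 to 3

linear = 2.0 * x                # linear: a constant multiple of the input
sigmoid = torch.sigmoid(x)      # S-shaped curve squashed into (0, 1)
tanh = torch.tanh(x)            # S-shaped curve squashed into (-1, 1)
relu = torch.relu(x)            # 0 for negative inputs, the input itself for positive inputs
```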

Convolutional Neural Networks

Convolutional neural networks (CNNs) are a type of deep neural network architecture designed for specific tasks like image classification. CNNs were inspired by the organization of neurons in the visual cortex of the animal brain. As a result, they provide some very interesting properties that are useful for processing certain types of data such as images, audio and video.

Three main types of layers are used to build ConvNet architectures: the convolutional layer, the pooling layer, and the fully-connected layer (exactly as seen in regular neural networks). We stack these layers to form a full ConvNet architecture. A simple ConvNet for CIFAR-10 classification could have the architecture [INPUT – CONV – RELU – POOL – FC], with the layers working as follows:

  • INPUT [32x32x3] will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B.
  • CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in volume such as [32x32x12] if we decided to use 12 filters.
  • RELU layer will apply an elementwise activation function, such as max(0, x), thresholding at zero. This leaves the size of the volume unchanged ([32x32x12]).
  • POOL layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in volume such as [16x16x12].
  • FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of size [1x1x10], where each of the 10 numbers correspond to a class score, such as among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.

In this way, ConvNets transform the original image layer by layer from the original pixel values to the final class scores. Note that some layers contain parameters and others don’t. In particular, the CONV/FC layers perform transformations that are a function of not only the activations in the input volume, but also of the parameters (the weights and biases of the neurons). On the other hand, the RELU/POOL layers will implement a fixed function. The parameters in the CONV/FC layers will be trained with gradient descent so that the class scores that the ConvNet computes are consistent with the labels in the training set for each image.
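As an illustration, a minimal version of the [INPUT – CONV – RELU – POOL – FC] stack described above could be sketched as follows. The framework (PyTorch), the 3×3 kernel size and the padding are assumptions made for the example, not details given in the text:

```python
import torch
import torch.nn as nn

class SimpleConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 12, kernel_size=3, padding=1)  # CONV: 32x32x3 -> 32x32x12 (12 filters)
        self.relu = nn.ReLU()                                    # RELU: elementwise max(0, x)
        self.pool = nn.MaxPool2d(2)                              # POOL: 32x32x12 -> 16x16x12
        self.fc = nn.Linear(16 * 16 * 12, num_classes)           # FC: 10 class scores for CIFAR-10

    def forward(self, x):
        x = self.pool(self.relu(self.conv(x)))
        return self.fc(x.flatten(start_dim=1))

model = SimpleConvNet()
scores = model(torch.randn(1, 3, 32, 32))  # one dummy 32x32 RGB image -> tensor of shape [1, 10]
```

Training this model with gradient descent on labelled CIFAR-10 images would adjust the CONV and FC parameters so that the output scores agree with the labels, exactly as described above.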

Convolution is a technique that allows us to extract visual features from an image in small chunks. Each neuron in a convolution layer is responsible for a small cluster of neurons in the preceding layer. CNNs work well for a variety of tasks including image recognition, image processing, image segmentation, video analysis, and natural language processing.

Recurrent Neural Network

Unlike feedforward neural networks, recurrent neural networks (RNNs) can operate effectively on sequences of data with variable input length.

The idea behind RNNs is to make use of sequential information. A traditional neural network assumes that all inputs (and outputs) are independent of each other, but for many tasks that is a poor assumption: if you want to predict the next word in a sentence, you need to know which words came before it. RNNs are called recurrent because they perform the same task for every element of a sequence, with each output depending on the previous computations. Another way to think about RNNs is that they have a “memory” that captures information about what has been computed so far, essentially giving the network a short-term memory. This makes RNNs very effective for data that occurs in sequences over time, such as time-series data like changes in stock prices, or a stream of characters being typed into a mobile phone.

Two variants of the basic RNN architecture help solve a common problem with training RNNs: gated RNNs (gated recurrent units, or GRUs) and long short-term memory networks (LSTMs). Both variants use a form of memory to help make predictions in sequences over time. The main difference is that a gated RNN has two gates to control its memory, an update gate and a reset gate, while an LSTM has three gates: an input gate, an output gate, and a forget gate.
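As a rough illustration of the two variants, the sketch below (assuming PyTorch; the layer sizes and the one-dimensional input are arbitrary choices for the example) builds a GRU and an LSTM over a toy sequence and predicts the next value from the last hidden state:

```python
import torch
import torch.nn as nn

seq = torch.randn(8, 20, 1)   # a batch of 8 sequences, 20 time steps, 1 feature (e.g. a price)

gru = nn.GRU(input_size=1, hidden_size=32, batch_first=True)    # gated RNN: update + reset gates
lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)  # LSTM: input, output, forget gates

out_gru, h_gru = gru(seq)            # out_gru: (8, 20, 32) hidden states, one per time step
out_lstm, (h, c) = lstm(seq)         # the LSTM additionally carries a cell state c

head = nn.Linear(32, 1)
next_value = head(out_lstm[:, -1])   # predict the next element from the final hidden state
```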

RNNs work well for applications that involve a sequence of data that change over time. These applications include natural language processing, speech recognition, language translation, image captioning and conversation modeling.

Conclusion

This article covered several deep learning techniques. Each is useful in its own way and is put to practical use in applications every day. Although deep learning is currently the most advanced artificial intelligence technique, it is not the AI industry's final destination; the evolution of deep learning and neural networks may yet give us entirely new architectures. That is why more and more institutes across the world, and in India as well, are offering courses on AI and deep learning. One of the best and most competent artificial intelligence certifications in Delhi NCR is offered by DexLab Analytics, which has an array of courses worth exploring.



Applications of Artificial Intelligence: Agriculture

This article, the first in a series, is on the application of artificial intelligence in agriculture. Popular applications of AI in agriculture can be grouped into three areas: AI-powered robots, computer vision and seasonal forecasting.

Robots

Firstly, companies are now gradually adopting AI powered machines to automate agricultural tasks such as harvesting larger volumes of crops faster than human workers. For instance, companies are using robots to remove weeds and unwanted plants from fields.

Computer Vision

Secondly, companies are using computer vision and deep learning algorithms to process and study crop and soil health. For instance, farmers are using unmanned drones to survey their lands in real time to identify problem areas and areas of potential improvement. Farms can be monitored far more frequently with these machines than with farmers walking the fields on foot.

Seasonal Forecasting

Thirdly, AI is used to track and predict environmental impacts such as weather changes. “Seasonal forecasting is particularly valuable for small farms in developing countries as their data and knowledge can be limited. Keeping these small farms operational and growing bountiful yields is important as these small farms produce 70% of the world’s crops,” says a report .

The India story

In India, for instance, farmers are gradually working with technology to predict weather patterns and crop yield. Since 2016, Microsoft and a non-profit have together developed an AI sowing application which is used to guide farmers on when to sow seeds based on a study of weather patterns, local crop yield and rainfall.

In 2017, the pilot project was broadened to encompass over 3,000 farmers in Andhra Pradesh and Karnataka, and it was found that farmers who received the AI sowing app's advisory text messages benefitted, reporting 10–30% higher yields per hectare.

Chatbots

Moreover, farmers across the world have begun to turn to chatbots for assistance and help, getting answers to a variety of questions and queries regarding specific farm problems.

Precision Farming

Research predicts that the precision agriculture market will reach $12.9 billion by 2027. Precision agriculture or farming, also called site-specific crop management or satellite farming, is a concept of farm management that uses information technology to ensure the optimum health and productivity of crops.

With this increase in the volume of satellite farming, there is bound to be an increase in the demand for sophisticated data-analysis solutions. One such solution has been developed by the University of Illinois. The system developed aims to “efficiently and accurately process precision agricultural data.”

A professor of the University says, “We developed methodology using deep learning to generate yield predictions…”

Conclusion

The application of artificial intelligence to analyze data from precision agriculture is a nascent development, but a growing one. Environmental vagaries and factors like food security concerns have forced the agricultural industry to search for innovative solutions to protect and improve crop yield. Consequently, AI is steadily emerging as the game changer in the industry's technological evolution.

It is no surprise then that AI training institutes are mushrooming all across the world, especially in India. For the best artificial intelligence certification in Delhi NCR, do check out the DexLab Analytics site today.



Deep Learning and Computer Vision – A study – Part II

In the first part of this article we saw what computer vision is and briefly reviewed its applications; you can read the first part here. We also looked at the contribution of deep learning to computer vision, focusing in particular on image classification and the deep learning architecture used for it. In this part we focus on other applications, including image localization, object detection and image segmentation, and walk through the deep learning architectures used for them.

Image classification with Localization

Similar to classification, localization finds the location of a single object inside the image. Localization can be used for lots of useful real-life problems. For example, smart cropping (knowing where to crop images based on where the object is located), or even regular object extraction for further processing using different techniques. It can be combined with classification for not only locating the object but categorizing it into one of many possible categories.

A classical dataset for image classification with localization is the PASCAL Visual Object Classes datasets, or PASCAL VOC for short (e.g. VOC 2012). These are datasets used in computer vision challenges over many years.

Object detection

Iterating on the problem of localization plus classification, we end up needing to detect and classify multiple objects at the same time. Object detection is the problem of finding and classifying a variable number of objects in an image. The important difference is the “variable” part: in contrast with problems like classification, the output of object detection is variable in length, since the number of objects detected may change from image to image.

The PASCAL Visual Object Classes datasets, or PASCAL VOC for short (e.g. VOC 2012), is a common dataset for object detection.

Deep learning for Image Localization and Object Detection

There is nothing exotic about the architectures we are going to discuss; they rely on a few clever ideas to make the system agnostic to the number of outputs and to reduce its computational cost. We do not know the exact number of objects in our image, yet we want to classify all of them and draw a bounding box around each. That means the number of coordinates the model should output is not constant: if the image has 2 objects, we need 8 coordinates; if it has 4, we need 16. So how do we build such a model?

One key idea from traditional computer vision is region proposals. We generate a set of windows that are likely to contain an object using classic CV algorithms, such as edge and shape detection, and apply only these windows (or regions of interest) to the CNN. To see how region proposals are used, we start with an architecture called R-CNN.

R-CNN

Given an image with multiple objects, we generate some regions of interest using a proposal method (in R-CNN's case, selective search) and warp the regions to a fixed size. We forward each region to a convolutional neural network (such as AlexNet), which uses an SVM to make a classification decision for each region and predicts a regression for each bounding box. This prediction comes as a correction of the proposed region, which may be in roughly the right position but not at the exact size and orientation.

Although the model produces good results, it suffers from a major issue: it is quite slow and computationally expensive. Imagine that in an average case we produce 2,000 regions, which we need to store on disk, and we forward each one of them through the CNN for multiple passes until it is trained. To fix some of these problems, an improvement of the model comes into play, called Fast R-CNN.

Fast RCNN

The idea is straightforward. Instead of passing all regions into the convolutional layer one by one, we pass the entire image once and produce a feature map. Then we take the region proposals as before (using some external method) and sort of project them onto the feature map. Now we have the regions in the feature map instead of the original image and we can forward them in some fully connected layers to output the classification decision and the bounding box correction.

Note that the projection of region proposals onto the feature map is implemented using a special layer (the ROI layer), which is essentially a type of max-pooling with a pool size dependent on the input, so that the output always has the same size.

Faster RCNN

And we can take this a step further. Using the feature maps produced by the convolutional layer, we infer region proposals using a Region Proposal Network rather than relying on an external system. Once we have those proposals, the remaining procedure is the same as in Fast R-CNN (forward to the ROI layer, classify, and predict the bounding box correction). The tricky part is how to train the whole model, as we have multiple tasks that need to be addressed:

  • The region proposal network should decide for each region if it contains an object or not.
  • It needs to produce the bounding box coordinates.
  • The entire model should classify the objects to categories.
  • And again predict the bounding box offsets.

As the name suggests, Faster RCNN turns out to be much faster than the previous models and is the one preferred in most real-world applications.
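For a sense of how this is used in practice, the sketch below runs a pretrained Faster R-CNN from torchvision on a dummy image; the library choice and the random input are assumptions for illustration, and a real application would load an actual photograph:

```python
import torch
import torchvision

# Pretrained Faster R-CNN with a ResNet-50 FPN backbone (COCO weights).
# Newer torchvision releases use a `weights=` argument instead of `pretrained=True`.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = torch.rand(3, 480, 640)       # dummy RGB image with values in [0, 1]
with torch.no_grad():
    predictions = model([image])      # a list with one dict per input image

print(predictions[0]["boxes"].shape)  # [N, 4] bounding boxes (variable N, as discussed above)
print(predictions[0]["labels"][:5])   # predicted class indices
print(predictions[0]["scores"][:5])   # confidence scores
```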

Localization and object detection is a super active and interesting area of research due to the urgency of real-world applications that demand excellent performance on computer vision tasks (self-driving cars, robotics). Companies and universities come up with new ideas on how to improve accuracy on a regular basis.

There is another class of models for localization and object detection, called single-shot detectors, which have become very popular in the last few years because they are even faster and generally require less computation. They are somewhat less accurate, but they are ideal for embedded systems and similarly resource-constrained applications.

Object segmentation

Going one step further from object detection we would want to not only find objects inside an image, but find a pixel by pixel mask of each of the detected objects. We refer to this problem as instance or object segmentation.

Semantic Segmentation is the process of assigning a label to every pixel in the image. This is in stark contrast to classification, where a single label is assigned to the entire picture. Semantic segmentation treats multiple objects of the same class as a single entity. On the other hand, instance segmentation treats multiple objects of the same class as distinct individual objects (or instances). Typically, instance segmentation is harder than semantic segmentation.

In order to perform semantic segmentation, a higher level understanding of the image is required. The algorithm should figure out the objects present and also the pixels which correspond to the object. Semantic segmentation is one of the essential tasks for complete scene understanding. This can be used in analysis of medical images and satellite images. Again, the VOC 2012 and MS COCO datasets can be used for object segmentation.

Deep Learning for Image Segmentation

Modern image segmentation techniques are powered by deep learning technology. Here are several deep learning architectures used for segmentation.

Convolutional Neural Networks (CNNs) 

Image segmentation with CNN involves feeding segments of an image as input to a convolutional neural network, which labels the pixels. The CNN cannot process the whole image at once. It scans the image, looking at a small “filter” of several pixels each time until it has mapped the entire image. To learn more see our in-depth guide about Convolutional Neural Networks.

Fully Convolutional Networks (FCNs)

Traditional CNNs have fully-connected layers, which can’t manage different input sizes. FCNs use convolutional layers to process varying input sizes and can work faster. The final output layer has a large receptive field and corresponds to the height and width of the image, while the number of channels corresponds to the number of classes. The convolutional layers classify every pixel to determine the context of the image, including the location of objects.

DeepLab

One main motivation for DeepLab is to perform image segmentation while helping control signal decimation—reducing the number of samples and the amount of data that the network must process. Another motivation is to enable multi-scale contextual feature learning—aggregating features from images at different scales. DeepLab uses an ImageNet pre-trained residual neural network (ResNet) for feature extraction, and atrous (dilated) convolutions instead of regular convolutions. The varying dilation rates of the convolutions enable the ResNet block to capture multi-scale contextual information. DeepLab comprises three components:

  • Atrous convolutions—with a factor that expands or contracts the convolutional filter’s field of view.
  • ResNet—a deep convolutional network (DCNN) from Microsoft. It provides a framework that enables training thousands of layers while maintaining performance. The powerful representational ability of ResNet boosts computer vision applications like object detection and face recognition.
  • Atrous spatial pyramid pooling (ASPP)—provides multi-scale information. It uses a set of atrous convolutions with varying dilation rates to capture long-range context. ASPP also uses global average pooling (GAP) to incorporate image-level features and add global context information.
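As a hedged, practical sketch, torchvision ships a pretrained DeepLabV3 model (trained on a Pascal-VOC-style label set of 21 classes); the snippet below shows how a per-pixel mask can be obtained from it, with the random input standing in for a real, normalized image:

```python
import torch
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True)
model.eval()

image = torch.rand(1, 3, 384, 384)   # dummy RGB image batch of size 1
with torch.no_grad():
    out = model(image)["out"]        # (1, 21, 384, 384): one score map per class
mask = out.argmax(dim=1)             # (1, 384, 384): a class label for every pixel
```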

SegNet neural network

SegNet is an architecture based on deep encoders and decoders, an approach also known as semantic pixel-wise segmentation. It involves encoding the input image into a low-dimensional representation and then recovering it, with orientation-invariance capabilities, in the decoder. This generates a segmented image at the decoder end.

Conclusion

In this post we discussed several applications of computer vision, including image localization, object detection and image segmentation, along with the deep learning architectures used for them.



Commercial Uses of Deep Learning

Deep Learning has its limitations, scientists argue.

“We have machines that learn in a very narrow way,” Yoshua Bengio, deep learning pioneer, said in his keynote address at NeurIPS in December, 2019. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”

Unarguably, deep learning is an imperfect framework for intelligence. It does not think abstractly, does not comprehend causation, and struggles with out-of-distribution generalization. For a deeper understanding of its limitations, a brilliant paper on the science and its shortcomings is available on the internet.

However, despite numerous shortcomings, the commercial uses of deep learning are only just being mined and its capabilities to automate and transform industries still abound. AI and deep learning capabilities, as developed as they are today, are sufficiently mature to spearhead transformation, innovation, and value creation across industries like agriculture, healthcare and construction. “For the most part, these transformative opportunities have not yet been operationalized at scale.”

Radiology

For instance, in the radiology industry, something as extreme and point-blank as this was declared in 2016 by AI luminary Geoff Hinton: “It’s quite obvious that we should stop training radiologists now.” Hinton’s comments drew heated reactions in the medical community, but his statement was based on strong data showing that neural networks can identify medical conditions from X-rays with better accuracy than human radiologists can.

Yet, years after Hinton foresaw the removal of the need of human radiologists from the medical science field, no clinic in the world has deployed AI-driven radiology tools at scale. Only a few health organizations have begun using it in limited settings. But more and more organizations are slowly adopting deep learning in radiology.

Off Road Autonomous Vehicles

In another instance, the off-road autonomous vehicle industry is seeing a slow move towards tapping the massive unrealized commercial potential of AI. Construction, agriculture and mining are some of the largest industries in the world. If these industries start deploying deep learning powered automated machines to do work that human hands are trained to do, a massive pool of cost, productivity and safety benefits could be tapped.

Energy

In the field of energy, leading players like BP are using deep learning to innovate and transform work conditions on site. “It uses technology to drive new levels of performance, improve the use of resources and safety and reliability of oil and gas production and refining. From sensors that relay the conditions at each site to using AI technology to improve operations, BP puts data at the fingertips of engineers, scientists and decision-makers to help drive high performance.”

Retail

Burberry, a luxury fashion brand, uses big data and AI to fight counterfeit products. It is also trying to enhance sales and customer relationships by initiating a loyalty program that creates data to help personalize the shopping experience for each customer.

Social Media

Both Twitter and Facebook are tapping into structured and unstructured sets of big data for understanding user behavior and using deep learning to check for communal or racist comments and user preferences.

Deep Learning and Artificial Intelligence is the future and it is here to stay. No wonder then, that more and more professionals are opting to train themselves through deep learning courses. DexLab Analytics is one of the best Deep Learning training institutes in Delhi. Do go through its website for more details.

 


Computer Vision and Image Classification – A study

Computer vision is the field of computer science that focuses on replicating parts of the complexity of the human vision system, enabling computers to identify and process objects in images and videos in the same way that humans do. With computer vision, a computer can extract, analyze and understand useful information from an individual image or a sequence of images, and then provide the appropriate output.

Initially computer vision worked only in a limited capacity, but thanks to advances in deep learning and neural networks the field has taken great leaps in recent years and has been able to surpass humans in some tasks related to detecting and labeling objects.

The Contribution of Deep Learning in Computer Vision

While there are still significant obstacles in the path of human-quality computer vision, Deep Learning systems have made significant progress in dealing with some of the relevant sub-tasks. The reason for this success is partly based on the additional responsibility assigned to deep learning systems.

It is reasonable to say that the biggest difference with deep learning systems is that they no longer need to be programmed to specifically look for features. Rather than searching for specific features by way of a carefully programmed algorithm, the neural networks inside deep learning systems are trained. For example, if cars in an image keep being misclassified as motorcycles then you don’t fine-tune parameters or re-write the algorithm. Instead, you continue training until the system gets it right.

With the increased computational power offered by modern-day deep learning systems, there is steady and noticeable progress towards the point where a computer will be able to recognize and react to everything that it sees.

Application of Computer Vision

The field of computer vision is too expansive to cover in depth. Its techniques help a computer extract, analyze, and understand useful information from a single image or a sequence of images. There are many advanced techniques like style transfer, colorization, action recognition, 3D objects, human pose estimation, and much more, but in this article we will focus only on the commonly used techniques of computer vision. These techniques are:

  • Image Classification
  • Image Classification with Localization
  • Object Segmentation
  • Object Detection

In this article we will go through the techniques listed above and see in detail how deep learning is used for each of them. To keep things manageable, the article is split into a series of blogs. In this first blog we look at the first technique, image classification, and explore how deep learning is used for it.

Image Classification

Image classification is the process of predicting a specific class, or label, for something that is defined by a set of data points. Image classification is a subset of the classification problem, where an entire image is assigned a label. Perhaps a picture will be classified as a daytime or nighttime shot. Or, in a similar way, images of cars and motorcycles will be automatically placed into their own groups.

There are countless categories, or classes, in which a specific image can be classified. Consider a manual process where images are compared and similar ones are grouped according to like-characteristics, but without necessarily knowing in advance what you are looking for. Obviously, this is an onerous task. To make it even more so, assume that the set of images numbers in the hundreds of thousands. It becomes readily apparent that an automatic system is needed in order to do this quickly and efficiently.

There are many image classification tasks that involve photographs of objects. Two popular examples include the CIFAR-10 and CIFAR-100 datasets that have photographs to be classified into 10 and 100 classes respectively.

Deep learning for Image Classification

The deep learning architecture for image classification generally includes convolutional layers, making it a convolutional neural network (CNN). A typical use case for CNNs is where you feed the network images and the network classifies the data. CNNs tend to start with an input “scanner” which isn’t intended to parse all the training data at once. For example, to input an image of 100 x 100 pixels, you wouldn’t want a layer with 10,000 nodes.

Rather, you create a scanning input layer of say 10 x 10 which you feed the first 10 x 10 pixels of the image. Once you passed that input, you feed it the next 10 x 10 pixels by moving the scanner one pixel to the right. This technique is known as sliding windows.

The following layers are used to build convolutional neural networks:

  • INPUT [32x32x3] will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B.
  • CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in volume such as [32x32x12] if we decided to use 12 filters.
  • RELU layer will apply an elementwise activation function, such as max(0, x), thresholding at zero. This leaves the size of the volume unchanged ([32x32x12]).
  • POOL layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in volume such as [16x16x12].
  • FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of size [1x1x10], where each of the 10 numbers correspond to a class score, such as among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.

In this way, ConvNets transform the original image layer by layer from the original pixel values to the final class scores. Note that some layers contain parameters and others don't. In particular, the CONV/FC layers perform transformations that are a function of not only the activations in the input volume, but also of the parameters (the weights and biases of the neurons). On the other hand, the RELU/POOL layers implement a fixed function. The parameters in the CONV/FC layers will be trained with gradient descent so that the class scores that the ConvNet computes are consistent with the labels in the training set for each image.
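In practice, one often starts from a pretrained classifier rather than training a ConvNet from scratch. As a quick, illustrative sketch, the snippet below runs a pretrained ImageNet classifier from torchvision; the choice of ResNet-18, the dummy input and the framework itself are assumptions for the example rather than anything prescribed in this article:

```python
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)   # ImageNet-trained classifier (1000 classes)
model.eval()

image = torch.rand(1, 3, 224, 224)   # dummy RGB image; real use would load and normalize a photo
with torch.no_grad():
    logits = model(image)            # (1, 1000) class scores
predicted_class = logits.argmax(dim=1)   # index of the most likely class
```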

Conclusion

The above content focuses on image classification only and the deep learning architecture used for it. But there is more to computer vision than the classification task alone: the detection, segmentation and localization of classified objects are equally important. We will cover these in the next blog.

 


G-Suite and Office 365: A Comparison of Spreadsheets for Your Business Needs

When Google introduced its G-Suite in 2006, Microsoft Office was a world leader with virtually no competition. Over time, however, Google Suite (initially known as Google Docs and Spreadsheets) has managed to draw a loyal user base despite its subscription fee. Faced with this new challenge, Microsoft went ahead and introduced in the market what it calls Office 365, a subscription based version of its Office, which is regularly updated with new features.

Common features

G-Suite and Office 365 have many features and aspects in common. For instance, both work offline and both offer Android and iOS apps. The suites offer the same core utilities: word processing, spreadsheet, presentation, email, calendar, contacts and video conferencing, along with programs such as messaging and note-taking software. The two suites also have cloud storage, though these differ from each other in the tools used to manage them.

When thinking about which suite to use for your business needs, compare the prices of the two. But before that, it is important that you know that you can use several online apps from the two suites – like Google Docs, Sheets and Slides as well as Microsoft Word Online, Excel Online and Powerpoint Online. In India, Microsoft is still a little more popular and it is not surprising that institutes aiming at excellence in computing like DexLab Analytics prefer to have a separate course on excel certification in gurgaon.

Cost and Word Processors

The G-Suite comes in three ranges – Basic, Business and Enterprise with the prices starting from $6 to $25 per month. Office 365 is a little expensive, its cost ranging from $5 to $35 per month. When it comes to their word processing software, Google Docs is better for collaboration while Word has numerous editing features and templates to offer.

Spreadsheets and Presentations

When it comes to Google Sheets versus Microsoft Excel, again, collaboration is the deciding factor. If users tend to work on spreadsheets by themselves without collaborating with others, then Microsoft Excel is the application for them; for collaborating with fellow workers, Google Sheets is the better option. Similarly, collaboration is easier on Google Slides than on PowerPoint, but for almost every other reason, be it charts or slide layouts, PowerPoint is the winner.

Email

If you prefer simplicity over clutter, then Gmail is your go-to mail service. Gmail has succeeded in cutting the clutter and offers a really fast and streamlined mail service, although Microsoft Outlook is also trying to be as straightforward as possible. Whether it is composing a mail, managing it or responding to it, Gmail's system is intuitive and understands user behavior. As for Microsoft Outlook, power features like the Focused Inbox are a great draw: Focused Inbox helps prioritize important mails and enables you to read and respond to them first.

And, “because the contacts and calendar functions are part of Outlook itself, they’re well integrated with email. Gmail relies on the separate Google Contacts and Calendar apps, which can be a bit more cumbersome to navigate”, says a handy report on the comparison of the two suites.

 


AI joins the fight against Cancer

Cancer is the emperor of all maladies, and finding a cure for it is one of the biggest challenges in the world of medicine. More and more people are falling prey to the disease: worldwide, one in five men and one in six women is likely to be afflicted. This has spurred on the fight against the disease even more intensely. AI and machine learning have increased the scope of groundbreaking research in the field, and it is worth knowing a little about.

One reason why AI, which has made inroads into numerous sectors of the economy, has made immense advancements in the field of medical oncology is the vast amount of data generated during cancer treatment. With the assistance of AI, say scientists, this vast trove of data can be mined and worked to improve methods of diagnosis and preventive cures and treatments.

Detection of Cancer

Machine learning can lead to early detection and timely treatment in many cases. Because cancer, unlike many other diseases, is treated in stages, machine learning can come in handy when it comes to detecting precancerous lesions in tissue.

AI-based tools can assist radiologists in graphically and visually studying images by revealing suspicious lesions. This process not only reduces the workload of radiologists but also makes possible the detection of minuscule lesions that could otherwise be overlooked.

Detection of Breast Cancer

“DeepMind and Google Health collaborated to develop a new AI system that helps in detecting breast cancer accurately at a nascent stage. Being the most common cancer in women, breast cancer, has seen an alarming rise over the past few years. Though early detection can improve a patient’s prognosis significantly, mammography, which is the best screening test currently available, is not entirely error-proof”, says a report.

To address this, researchers at DeepMind and Google Health trained an algorithm on mammogram images and observed that the AI system reduced the recurrence of errors; they found that it performed better than human radiologists. A few startups in India are also working in the area of cancer detection.

Predicting Cancer Evolution

Besides detection, AI is useful in the treatment of cancer as well. It is critical to the survival of patients in that it is used to predict growth and evolution of cancers which could help doctors prepare a treatment plan and save lives.

Identifying Effective Treatments

AI can play a significant role in the overall treatment of the patient, especially precision medicine which is the administering of personalized medicine from a pool of generic medication beneficial to the patient. AI can also be used to design new drugs.

Thus, AI has created a huge potential for changing the mode of treatment of cancer patients. According to the report, Exscientia is the first company, globally, to have overtaken conventional drug designing processes by automating the whole process using AI. Another company is trying to do the same in Bangalore.

It is no surprise then that AI is being even more widely adopted across sectors of healthcare and medicine. More and more professionals, the world over, are enrolling in courses teaching AI, deep learning and machine learning. For the best such institute in India, or for the best artificial intelligence training institute in Gurgaon, do not forget to visit the DexLab website today.

 


How AI and Deep Learning Help in Weather Forecasting

The world’s fight against extreme weather conditions and climate change is at the forefront of all discussions and debates on the environment. In fact, climate change is the biggest concern we are faced with today, and studying the climate has increasingly become the primary preoccupation of scientists and researchers. They have received a shot in the arm with the increase in the scope of artificial intelligence and deep learning in predicting weather patterns.

Take, for instance, the super cyclone Amphan that ravaged West Bengal and Orissa. Had it not been for weather forecasting techniques, meteorologists would never have predicted the severity of the cyclone, the precautionary evacuation of thousands of people from coastal areas would not have happened, and lives would have been lost on a massive scale. This is where the importance of weather forecasting lies.

Digitizing the prediction model

Traditionally, weather forecasting depends on a combination of observations of the current state of the weather and data sets from previous observations. Meteorologists prepare weather forecasts by collecting a wealth of data and running it through prediction models. These data sets come from hundreds of observations, such as temperature, wind speed, and precipitation, produced by weather stations and satellites across the globe. With the digitization of these weather models, accuracy has improved greatly over what it was a few decades ago, and with the recent introduction of machine learning, forecasting has become an even more accurate and exact science.

Machine Learning

Machine learning can be utilized to make comparisons between historical weather forecasts and observations in real time. Also, machine learning can be used to make models account for inaccuracies in predictions, like overestimated rainfall.

At weather forecasting institutions, prediction models use gradient boosting, a machine learning technique for building predictive models. It is used to correct errors that creep into traditional weather forecasts.
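The idea of using gradient boosting to correct forecast errors can be sketched as below; the data is synthetic and the scikit-learn model is just one common implementation, not the exact system any forecasting institution uses:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 4))                      # e.g. temperature, wind speed, humidity, pressure
observed = 2.0 * features[:, 0] + rng.normal(size=500)    # what actually happened (synthetic)
model_forecast = observed + 1.5                           # a biased physical forecast (overestimates rainfall)

# Learn the forecast error from the weather features, then correct future forecasts with it.
corrector = GradientBoostingRegressor()
corrector.fit(features, observed - model_forecast)

corrected = model_forecast + corrector.predict(features)
```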

Deep Learning

Machine learning and deep learning are increasingly being used for nowcasting, a model of forecasting in real time, traditionally within a two-hour time span, that provides precipitation forecasts by the minute. With deep learning, nowcasting becomes available to a meteorologist anywhere within the coverage of a weather satellite (which runs on deep learning technology), rather than only to those near the radar stations used in traditional forecasting.

Extreme Weather Events

Deep learning is being used not only for predicting usual weather patterns but for predicting extreme weather conditions as well. Rice University engineers have designed a deep learning computer system that has trained itself to predict, in accurate terms, extreme weather conditions like heat waves or cold waves up to five days in advance. The most fascinating part is that it uses minimal information about current weather conditions to make its predictions.

This system could effectively guide NWP (numerical weather prediction) that currently does not have the ability to predict extreme weather conditions like heat waves. And it could be a super cheap way to do so as well.

According to sciencedaily.com, with further development, the system could serve as an early warning system for weather forecasters, and as a tool for learning more about the atmospheric conditions that lead to extreme weather, said Rice’s Pedram Hassanzadeh, co-author of a study about the system published online in the American Geophysical Union’s Journal of Advances in Modeling Earth Systems.

Thus, it is no surprise that machine learning and deep learning are being widely adopted the world over. In India, they are being taken up as fields of study and training in metropolitan areas like Delhi and Gurgaon. For the best Machine Learning course in Delhi and deep learning course in delhi, check out the DexLab Analytics website today.

 


Budget 2020 Focuses on Artificial Intelligence in a Bid to Build Digital India

The Indian technology industry has welcomed the 2020 budget for its outreach to the sector, especially the Rs 8000 crore mission on Quantum Computing over the next five years. The budget has been praised in general for its noteworthy allocation of funds for farming, infrastructure and healthcare to revive growth across sectors in the country.

According to an Economic Times report, Debjani Ghosh, President, NASSCOM, reacting to the budget, said, “Budget 2020 and the finance minister’s speech has well-articulated India’s vision on not just being a leading provider of digital solutions, but one where technology is the bedrock of development and growth”.

Industry insiders lauded the budget for the allocation on Quantum Computing, the policy outline for the private sector to construct data center parks and the abolition of the Dividend Distribution Tax. The abolition of the Tax had been a long standing demand of the industry and the move has been welcomed. The building of data parks will help retain data within the country, industry experts said.

Moreover, while announcing the budget this year, Finance Minister Nirmala Sitharaman spelt out the government's intention to make more intensive use of technology, especially artificial intelligence and machine learning.

These will be used for the purposes of monitoring economic data, preventing diseases and facilitating healthcare systems under Ayushman Bharat, guarding intellectual property rights, enhancing and improving agricultural systems and sea ports and delivery of government services.

Governments the world over have been emphasising the deployment of AI for digital governance and research. As per reports, the US government plans and intends to spend nearly 1 billion US dollars on AI-related research and development this year.

The Indian government has also planned to make available digital connectivity to citizens at the gram panchayat level under its ambitious Digital India drive with a focus on carrying forward the benefits and advantages of a digital revolution by utilizing technology to the fullest. One lakh gram panchayats will be covered under the Rs 6000 crore Bharat Net project wherein fibre connectivity will be made available to households.  

“While the government had previously set up a national portal for AI research and development, in the latest announcement, the government has continued to offer its support for tech advancements. We appreciate the government’s emphasis on promoting cutting-edge technologies in India,” Atul Rai, co-founder & CEO of Staqu said in a statement, according to a report by Live Mint.

The Finance Minister also put forward a plan to give a fillip to the manufacturing of mobiles, semiconductor packaging and electronic equipment, stating that there will be cost benefits to manufacturing electronics in India.

Thus, this article shows how much the government of India is concentrating on artificial intelligence and machine learning with a push towards digital governance. It shows that the government is recognising the need to capitalise on the “new oil” that is data, as the saying goes. So it is no surprise then that more and more professionals are opting for Machine Learning Course in India and artificial intelligence certification in delhi ncr. DexLab Analytics focuses on these technologies to train and skill professionals who want to increase their knowledge base in a digital first economy.

 

