Introduction to Supervised Learning

January 17, 2022

What better example of supervised learning is there than us? Remember how, in school, we were taught a concept, then solved examples related to that concept, and were finally given unseen examples to check our understanding. We have been supervised through many such examples by our parents, teachers, friends and so on.


We use a similar approach to teach machine learning models to predict or classify things. As in the example above, in supervised machine learning we give the machine some input data along with the correct output for that data. The machine then uses a supervised learning algorithm to discover patterns in the data and builds a model that can produce matching results on new data. In other words, given input data and the correct output, the machine tries to learn the mapping function that correctly predicts the output for a given input.

Let’s see how supervised learning works with the example below. Suppose you have pictures of cats and dogs as the input data: one folder labeled ‘cat’ contains pictures of cats and another folder labeled ‘dog’ contains pictures of dogs. These pictures, along with their labels, are fed to a machine learning model which learns to distinguish cats from dogs. Then, to check whether the model has learned anything, we give it test data containing pictures of cats or dogs and it tries to predict whether each picture is of a cat or a dog. If it has correctly learned the mapping function, it will return the label ‘dog’ when it sees a picture of a dog and the label ‘cat’ when it sees a picture of a cat.



Below are the steps involved in supervised learning (a minimal code sketch follows the list):

1) Collecting the dataset

2) Splitting the dataset into input data (X) and output data (Y)

3) Splitting the above dataset further into training (X_train, Y_train) and testing (X_test, Y_test) sets

4) Using the training dataset to train the machine learning model

5) Testing the accuracy of the machine learning model using the testing dataset
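Here is a minimal sketch of those five steps in Python using scikit-learn; the bundled iris dataset and the choice of a decision tree classifier are stand-ins for your own data and model, not part of the original example:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1) Collect the dataset (here: scikit-learn's bundled iris data)
data = load_iris()

# 2) Split into input data (X) and output data (Y)
X, Y = data.data, data.target

# 3) Split further into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)

# 4) Train the machine learning model on the training set
model = DecisionTreeClassifier()
model.fit(X_train, Y_train)

# 5) Test the model's accuracy on the testing set
Y_pred = model.predict(X_test)
print("Test accuracy:", accuracy_score(Y_test, Y_pred))
```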

Supervised learning is further divided into two subcategories:

  • Regression
  • Classification

1) Regression


Regression is the ‘Hello, world’ of machine learning algorithms. It is the simplest supervised machine learning technique, used to find the trendline that best describes the data. In other words, it finds a curve that best captures the relationship between the input data and the output data. It is used to predict continuous variables.

Let’s try to understand this with an example. Suppose we have to predict this year’s crop yield in a certain region; this helps farmers decide what to grow and when to grow it. We are provided with statistics from many years. The data contains important information about weather conditions, fertilizers used, amount of water used, money invested, crop grown, timing and so on, with the yield as the output. Using this data we need to predict which crop will give the maximum yield with minimal resources.

In order to achieve that, we will try to find a relationship between the resources used and the yield. We will look for patterns in the data that let us predict things more accurately, and then conclude which crop will give the maximum yield. This is a very common practice our ancestors used to predict crop yields in their fields; the only difference is that we now have more resources and data to make accurate predictions.


This is exactly what a machine learning model does when it predicts the yield using regression. When data is given to the model, it uses regression to find the relationship between the input data and the target variable (in this case, the yield). Once it has that relationship, it can predict which crop will give the maximum yield for the given resources.
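To make this concrete, here is a minimal sketch of regression on a made-up version of the yield problem; the features, numbers and the use of scikit-learn’s LinearRegression are illustrative assumptions, not real agricultural data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: each row is one season
# columns: rainfall (mm), fertilizer (kg/acre), water (litres/acre)
X = np.array([
    [650, 40, 12000],
    [720, 55, 15000],
    [580, 35, 10000],
    [810, 60, 17000],
])
# Yield (tonnes/acre) recorded for each of those seasons
y = np.array([2.1, 2.8, 1.9, 3.1])

# Fit a linear relationship between the resources used and the yield
model = LinearRegression()
model.fit(X, y)

# Predict the yield for a new season's planned resources
print(model.predict([[700, 50, 14000]]))
```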

Regression is easy to understand and is very important as it provides the basis for more advanced machine learning techniques.

Regression has applications in a range of disciplines such as finance, business, investing, social media trends, GDP growth and more.

Some algorithms used in regression are:

  • Linear regression
  • Non-linear regression
  • Regression trees
  • Bayesian linear regression
  • Polynomial regression

2) Classification


Remember the example we used to understand supervised machine learning, where we classified cats versus dogs? Classification is a big topic, and we classify things in our day-to-day lives: we classify clothes in our cupboard, we arrange varieties of lentils in different jars, we arrange dishes on the kitchen rack, we have classified regions according to culture, and so on. Do read our previous article to learn more about classification. Here we will quickly try to understand how classification works in machines.

We love to shop online, and we also give feedback on the products we purchase. If a product is good we write good things about it, but if it is bad we criticise it and return it. Now let’s assume there is a person sitting in Amazon’s office separating good comments from bad ones. It is easy for him to recognize sarcasm, positive reviews, negative reviews and various other human sentiments. He reads each review and puts it in the box for positive reviews, or, if it is negative, in the box for negative reviews. This is how he classifies reviews as positive or negative. But a huge number of such reviews come in every day. Even if he classified one review every 10 seconds, it would take him more than 300 years to classify 1 billion reviews.

But classification algorithms can do the job in merely a few hours. This is how they do it: first we feed the machine some data containing reviews along with their sentiment, positive or negative. With the help of this data we train the machine to learn human sentiment. After that we test whether the machine has learned anything by feeding it some unseen data. If the machine successfully distinguishes positive reviews from negative reviews in the test data with good accuracy, we deploy the model to finally do the job.
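As a rough illustration (not Amazon’s actual system), here is a minimal sentiment-classification sketch in Python; the reviews are made up, and turning text into word counts before fitting a logistic regression is just one common approach:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up labeled reviews: 1 = positive, 0 = negative
reviews = [
    "loved it, works perfectly",
    "excellent quality, highly recommend",
    "terrible, broke after one day",
    "waste of money, do not buy",
]
labels = [1, 1, 0, 0]

# Turn the text into word counts, then fit a logistic regression classifier
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# Classify an unseen review
print(model.predict(["really bad product, very disappointed"]))
```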

Some algorithms used to perform classification are given below:

  • Logistic regression
  • K-nearest neighbors
  • Decision trees
  • Support vector machines
  • Naive Bayes

Do follow our LinkedIn page for updates: [ Myraah IO on LinkedIn ]

Machine Learning – A Simple Introduction

October 29, 2021

In this article, I will walk you through the concept of machine learning. My attempt is to explain the concepts without complex mathematics or code. After reading it, you will understand the key ideas and hopefully be able to apply them to solve problems.

Machine Learning Is Everywhere

You may not have noticed, but machine learning is all around you. When you read your email, you don’t see spam, because machine learning filters it out for you. When you type a query into Google, machine learning decides which results to show you. When you receive a book recommendation from Amazon or a movie recommendation from Netflix, it is machine learning at play. Your Facebook and Instagram feeds are curated using machine learning. ML is responsible for automatically approving or declining credit card transactions, and for continuously monitoring accounts for signs of fraud. Whenever you use a computer, chances are machine learning is involved somewhere.

What is Machine Learning?

Once upon a time, when we wanted a computer to perform a task, we needed to provide it with a detailed set of instructions, i.e., a program. The computer then followed those instructions to execute the task at hand. For every different task, we painstakingly had to write down the steps in detail.

Machine learning is different. In the case of machine learning, algorithms (learners, for short) can figure out the instructions themselves. We don’t need to program the computer anymore; it programs itself by making inferences from data. The more data it has, the better it gets.

[Image: difference between classical programming and machine learning]

No one programmed your tastes into Amazon’s recommendation engine; learners figured them out on their own by inferring from your past purchase data. Tesla’s self-driving car taught itself how to stay on the road; no programmer wrote instructions for it. ML is something new under the sun: a technology that builds itself.

[Image: teaching machines to make a decision is ML]

Machine learning is a process in which the computer learns to solve problems and make decisions like humans do. Humans make decisions based on intuition, which is built from experience. In the case of ML, computers learn to make decisions based on experience (data) rather than instructions. Simply put, ML is about teaching computers how to think like humans.

How Humans Make Decisions and How Computers Do the Same

Humans make decisions mainly in two ways:

1) By reasoning and logic

2) By using experience

Similar to humans, machines can also be taught to make decisions in the two ways mentioned above. In short, when machines make decisions using both ways 1 and 2, we call it Artificial Intelligence (AI). When a computer focuses only on the second way, it falls under machine learning. In the case of computers, experience equates to data. So, by this logic, machine learning is a subset of a broader field called AI.

[Image: ML is a subset of AI]

In order to understand how a machine can make a decision using data, let’s first try to understand how humans make decisions based on experience.

We humans use a remember–model–predict framework to make decisions. Let me explain:

We remember the experiences and situations we have been through in our lives.

Based on those experiences, we formulate models to generalize.

We use those models to predict what is likely to happen in a particular situation.

[Image: how humans make decisions]

For example, if the question is “Will India win the match against Bangladesh today?”

We will evaluate this question using the remember–model–predict framework:

We remember that India won most of its matches against Bangladesh in the last two years.

We model that the Indian team is stronger than the Bangladeshi team.

Hence, we predict that India is likely to win the match today.

We may well be wrong in this prediction, but this is the thought process we use to make predictions.

This is challenging for machines, as all they do is store numbers and perform operations on them; programming them to mimic human-level thought is hard. Following the remember–model–predict framework, it is clear that ‘remember’ for a machine corresponds to ‘data’.

[Image: data is nothing but information in a table]

Data is simply information. Any time we have a table with information, we have data. Normally each row is a data point. For example, if we have a dataset of fruits, each row represents a different fruit. Each fruit is described by certain features, which are the columns. In our fruit dataset example, the features would be color, size and shape. Features describe the data. Some features are special, and we call them labels.

Which feature is the label depends upon the context of the problem we are solving. For example, if we are trying to predict the type of fruit from the given data, then the type of fruit is the label.

For the purposes of machine learning, data comes in two flavors: labeled data and unlabeled data.

When data comes with a label attached to it, it is called labeled data; when it doesn’t have a label attached, it is called unlabeled data.

[Image: labeled versus unlabeled data example]
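Although this article deliberately avoids code, readers who want to see the distinction concretely can picture it as two tables; here is a minimal sketch using pandas, with made-up fruit values:

```python
import pandas as pd

# Labeled data: features plus a label column ("fruit")
labeled = pd.DataFrame({
    "color": ["red", "green", "purple"],
    "size":  ["medium", "small", "small"],
    "shape": ["round", "round", "oval"],
    "fruit": ["apple", "grape", "grape"],   # the label
})

# Unlabeled data: the same features, but no label column
unlabeled = labeled.drop(columns=["fruit"])

print(labeled)
print(unlabeled)
```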

The better the quality and quantity of data fed into the machine, the better the prediction will be. But before we get into prediction, let’s explore models.

Machine learning models can be broadly categorized into three different types.

Supervised Learning

Unsupervised Learning

Reinforcement Learning

Supervised Learning

Supervised learning is the most natural way to start the machine learning journey, and it is the most commonly used type of ML. Supervised learning algorithms take labeled data as input, where the labels are exactly the predictions we wish the model to make.

Let me explain this with an example. Recalling our fruits dataset, if we feed the data, with labels ‘apple’ and ‘grape’, into a supervised learning model, the model will use that data to learn to tell the two apart. This means that when we bring in a new image of a fruit, it can predict whether the new image is that of an ‘apple’ or a ‘grape’.

Recall the human decision-making framework, remember–model–predict: this is precisely how supervised learning models work. The model first remembers the data we feed into it, then it builds rules (a model) for what each class looks like, and finally it predicts the class of the new image.

[Image: machine learning example]

Labeled data comes in two types: data can be labeled with a number, such as the weight of the fruit, or with a class, such as apple, orange, banana, etc. Based on the type of label, there are two types of supervised learning models:

Regression models: These are the models which can predict a number as an output.

Classification models: These are the models which can predict the class as an output.

Note that the output of a regression model is continuous, while the output of a classification model is discrete, since it can only predict from a finite set of output values.

Let’s take a few examples to understand them better.

Regression models are designed to predict numbers. These numbers are predicted from the features. Examples include

Predicting house prices based on features of a house such as number of bedrooms, location, etc.

Predicting the expected duration a user will spend on a website based on the source of traffic, etc.

Predicting the lifetime value or the amount a customer is expected to spend with the company based on features such as customer type, consumption level etc.

Predicting the price of a particular stock based on other market signals or company specific features or other stock prices.

As you may notice, the result of such prediction is expected to be a number.

[Image: regression versus classification models]

On the other hand, classification models predict a class. They help us answer questions such as: looking at a picture of a fruit, which fruit is it, an apple or an orange?

Examples may include:

Image recognition: Predict the content of the image. Is there an airplane in the picture or not?

User behavior: Predict whether a user will click the Add to Cart button based on the user’s browsing history – yes or no.

Social media: Will the user like a picture or post, based on their demographics, history and friends? Now you know how Facebook shows you posts you are expected to engage with.

Unsupervised Learning

Unsupervised learning models take ‘unlabeled data’ as input, which means the data has no labels attached to it, only features. An example would be a housing dataset which doesn’t have prices but only details such as room sizes, location, etc.

The obvious question, then, is what can you infer from such data? It is true that unlabeled data has limited use, but we can still draw some useful insights from it. For example, if we have sets of pictures of different fruits without labels, we can use unsupervised models to break them into groups, such as pictures of apples and pictures of oranges. This is called clustering: the task of grouping our data into clusters based on similarity.
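For readers who want a concrete picture, here is a minimal clustering sketch using scikit-learn’s KMeans; the points and the choice of two clusters are made up for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two features per point, no labels attached
X = np.array([
    [1.0, 0.9], [1.1, 1.0], [0.9, 1.1],   # one natural group
    [5.0, 5.2], [5.1, 4.9], [4.8, 5.0],   # another natural group
])

# Ask KMeans to split the points into 2 clusters based on similarity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)

print(cluster_ids)  # e.g. [0 0 0 1 1 1] -- which cluster each point fell into
```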

[Image: clustering with unlabeled data]

Examples where clustering is used include:

Genetics: Clustering species into groups based on similarity

Market segmentation: Grouping customers into different segments based on features such as company size, number of employees, etc.

Let’s look at the data table below; grouping along the data (row) side is called clustering.

[Image: what is clustering]

How about grouping along the feature (column) side? That is called dimensionality reduction. It is a very useful data preprocessing step used to simplify data before it is fed into models.

[Image: what is dimensionality reduction]

Example of dimensionality reduction:

[Image: dimensionality reduction example]
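Here is a minimal sketch of dimensionality reduction using scikit-learn’s PCA; the four-feature dataset is made up, and compressing it to two components is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 5 samples described by 4 features each
X = np.array([
    [2.5, 2.4, 0.5, 1.1],
    [0.5, 0.7, 2.2, 1.9],
    [2.2, 2.9, 0.4, 1.0],
    [1.9, 2.2, 0.6, 1.2],
    [0.3, 0.5, 2.5, 2.1],
])

# Compress the 4 features into 2 new features that keep most of the variation
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (5, 2) -- same rows, fewer columns
print(pca.explained_variance_ratio_)  # how much variation each new feature retains
```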

And finally there is a third type of machine learning.

Reinforcement Learning

Simply put, this type of machine learning model solves problems without being fed any data. Ouch!

Instead of data, these models are given an environment and an agent that is expected to navigate within that environment. The agent may have a goal or a set of goals. On taking a right decision towards the goal the agent is rewarded, and on taking a wrong decision it is punished. Hence the learning is reinforced.

In the case of reinforcement learning, an agent needs to take a sequence of actions within some environment. These actions influence the information that the environment provides to the agent in the next step, and the agent receives direct feedback on the actions it takes. In supervised or unsupervised learning, by contrast, the model never impacts the underlying data; it simply consumes it.

For example, let’s assume that the agent is driving the car and its goal is to take the car from some point A to point B.

The actions the agent can take could be steering, accelerating or pressing the brakes.

The environment in this case is the real world, consisting of roads, traffic, pedestrians etc.

The reward in this case is every meter the car travels towards the destination, and the punishment could be minor (a traffic infraction) or major (in the event of a collision).
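As a rough illustration of the agent–environment–reward loop (not of a real self-driving system), here is a minimal sketch in plain Python; the toy one-dimensional ‘road’, the reward numbers and the simple Q-learning update are all invented for illustration:

```python
import random

# Toy environment: the "car" starts at position 0 and must reach position 5.
# Actions: 0 = move back, 1 = move forward.
# Reward: +1 for every step towards the destination, -1 for every step away.
GOAL = 5
ACTIONS = [0, 1]

# Q-table: how good each action looks in each position, learned from experience
Q = {pos: [0.0, 0.0] for pos in range(GOAL + 1)}

for episode in range(200):
    pos = 0
    while pos < GOAL:
        # Explore occasionally, otherwise take the action that looks best so far
        if random.random() < 0.2:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[pos][a])
        new_pos = max(0, min(GOAL, pos + (1 if action == 1 else -1)))
        reward = 1 if new_pos > pos else -1
        # Reinforce actions that led to reward (standard Q-learning update)
        Q[pos][action] += 0.1 * (reward + 0.9 * max(Q[new_pos]) - Q[pos][action])
        pos = new_pos

# After training, the learned policy is "move forward" at every position
print({p: max(ACTIONS, key=lambda a: Q[p][a]) for p in range(GOAL)})
```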

[Image: AI can learn to drive a car]

Reinforcement learning has cutting edge applications such as:

Self-driving car: Here the agent is expected to plan the route, control the car, keep the car on track etc.

Industrial robotics: Here the agent’s task is to learn how to pick up a box, how to walk, or how to handle a particular task in the manufacturing process.

Gaming: Agents need to learn how to play Chess and so on.

Do follow our LinkedIn page for updates: [ Myraah IO on LinkedIn ]