Overfitting and Underfitting in Machine Learning

Zoe Larkin

Content Queen

September 30, 2024

Machine Learning, also known as ML, is the process of teaching computers to learn from data, without being explicitly programmed. It’s becoming more and more important for businesses to be able to use Machine Learning in order to make better decisions.

In fact, statistics show that 48% of businesses use Deep Learning, Machine Learning, Natural Language Processing, and data analysis to integrate big data effectively.

That being said, the secret to effective Machine Learning lies in finding the balance between overfitting and underfitting. But what are they? And what do you need to know about them when implementing Machine Learning in your business?

The deeper you go into Machine Learning and its terminology, the more there is to learn and understand. We’re here to simplify it with this guide to overfitting and underfitting. This article will help you understand what overfitting and underfitting are, and how to spot and avoid each.

Definitions: Overfitting

So, what is overfitting? Overfitting occurs when a model learns the intricacies and noise in the training data to the point where it hurts its performance on new data. In other words, the model picks up random fluctuations as if they were real patterns. Put simply, an overfitted model is learning too much from its training data.

The issue is that these learned quirks do not carry over to new data, which limits the model's ability to generalize. In short, overfitting means the model fits its training data too closely.

The model performs exceptionally well in its training set, but it does not generalize effectively enough when used for predictions outside of that training set.

Nonparametric and nonlinear models, which have more flexibility when learning the target function, are more likely to overfit. Many nonparametric Machine Learning algorithms therefore include parameters or methods to restrict how much detail the model learns.

Graph showing an example of overfitting in Machine Learning

Why is Overfitting Bad?

With this in mind, you may be starting to realize that overfitting isn’t something that you want to happen. So, why is it bad? It is dangerous in Machine Learning since no sample of the population can ever be truly unbiased.

Overfitted models generate parameters that are strongly reliant and biased towards the sample rather than being representative of the entire population.

Overfitting may be compared to learning how to play a single song on the piano. While you can develop considerable skill in playing that one specific song, attempting to perform a new tune will not provide the same level of mastery.

As an example, overfitting might cause your AI model to predict that every person coming to your site will purchase something, simply because every person in the dataset it was trained on did.

How to Detect Overfitting

If you're wondering how you can detect whether a Machine Learning model has overfitted, you can compare a model's performance on the training set to its performance on a holdout test set.

It's critical to understand that detecting overfitting is practically impossible before you actually test the model on unseen data. To do this, you can divide the data into two subsets: a training set and a test set.

Tarang Shah does a great job of explaining this concept in this article. They provide an example where the training set is made up of the bulk of the available data (80%), which is used to train the model. The test set, meanwhile, is a small section of the data (about 20%), and it's used to check how well the model performs on input it has never seen before.

By separating the data into these subsets, we can examine the model's performance on each one to identify overfitting and see how well the training process is working.
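
As a minimal sketch of this split (assuming scikit-learn and a synthetic dataset, neither of which comes from the article itself), you could compare training and test accuracy like this:

```python
# Hypothetical sketch: compare training vs. test accuracy after an 80/20 split.
# The dataset and model here are illustrative, not from the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# 80% of the data trains the model; 20% is held out as a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# A large gap between these two scores is the classic symptom of overfitting.
print("Training accuracy:", model.score(X_train, y_train))
print("Test accuracy:    ", model.score(X_test, y_test))
```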

How to Prevent Overfitting

If you are wondering how you can prevent overfitting, here are four things that you can do:

1. Train With More Data

The more data you train on, the less likely it is that your model will overfit. More data makes it easier for the algorithm to find the signal, which lowers errors. As the user adds more training data, the model becomes unable to overfit all of the samples and is forced to generalize in order to obtain results.

Gathering more data is a good way to improve the accuracy of the model going forward. However, this approach is expensive, so users must be sure that the data being used is relevant and clean.

2. Data Augmentation

A less expensive alternative to collecting more data is data augmentation. If you don't have enough data to train on, you can apply techniques that make the existing samples appear more diverse.

The data is augmented by techniques that alter each sample's appearance slightly every time it is used by the model. This makes each sample look unique to the model on every pass, preventing the model from memorizing the characteristics of individual samples.

Adding noise to the input and output data is another technique that accomplishes the same goal as data augmentation. Adding noise to the input makes the model more stable without affecting data quality or privacy, whereas adding noise to the output increases data variety. This may seem counterintuitive, but adding a controlled amount of noise to your dataset can reduce your model's generalization error and make it more robust.

However, the addition of noise should be done in moderation so that the data is not incorrect or too diverse as an unintended consequence.

An example of an image before and after Gaussian noise was added.
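
To make the noise idea concrete, here is a rough sketch of input-noise augmentation using NumPy; the function name and the noise scale are illustrative choices, not recommendations from the article:

```python
# Illustrative sketch: add small Gaussian noise to input features so the model
# sees a slightly different variant of each sample. The 0.1 scale is arbitrary.
import numpy as np

def augment_with_gaussian_noise(X, scale=0.1, rng=None):
    """Return a copy of X with zero-mean Gaussian noise added to every feature."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return X + rng.normal(loc=0.0, scale=scale, size=X.shape)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
print(augment_with_gaussian_noise(X))
```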

3. Data Simplification

Overfitting can happen for a variety of reasons, the most common being that the model is complex enough to overfit even when there are massive amounts of data. Reducing the model's complexity until it is simple enough not to memorize the training data prevents this.

Pruning a decision tree, reducing the number of parameters in a Neural Network, and applying dropout are just a few examples of what can be done. Simplifying the model also makes it lighter and faster to run.
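
As an illustration of simplification, the sketch below (assuming scikit-learn and its built-in breast cancer dataset, both our choices rather than the article's) caps a decision tree's depth so it cannot memorize every quirk of the training data:

```python
# Illustrative sketch: cap a decision tree's depth so it cannot memorize
# every quirk of the training data. max_depth=3 is an arbitrary choice.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Fewer leaves means a simpler model that is less prone to overfitting.
print("Leaves without a depth limit:", unpruned.get_n_leaves())
print("Leaves with max_depth=3:     ", pruned.get_n_leaves())
```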

4. Ensembling

Ensembling is a Machine-Learning method in which two or more separate models' predictions are combined. Boosting and bagging are two of the most widely used ensembling techniques.

The notion of boosting, as it applies to Machine Learning and AI, is relatively straightforward: it entails increasing the aggregate complexity of a collection of simple base models. It trains a large number of weak learners in sequence, so that each one learns from the mistakes of the one before it.

At the end of this sequence, boosting combines all of the weak learners to produce one strong learner.

Bagging, on the other hand, is a different strategy for organizing data. This procedure entails training a large number of strong learners in parallel and then combining them to improve their predictions.
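
As a rough sketch of both techniques (using scikit-learn's off-the-shelf implementations on a synthetic dataset, which are our assumptions rather than the article's), you could compare bagging and boosting like this:

```python
# Illustrative sketch: bagging (parallel learners) vs. boosting (sequential
# learners) using scikit-learn on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# Bagging: many learners trained in parallel on bootstrap samples of the data.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: many weak learners trained in sequence, each correcting the last.
boosting = GradientBoostingClassifier(n_estimators=50, random_state=0)

print("Bagging accuracy: ", cross_val_score(bagging, X, y, cv=5).mean())
print("Boosting accuracy:", cross_val_score(boosting, X, y, cv=5).mean())
```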

Techniques to Reduce Overfitting

When attempting to achieve greater consistency across larger sets of data, two crucial techniques for evaluating Machine Learning algorithms to avoid overfitting are:

  • Use a resampling technique to estimate model accuracy.
  • Hold back a validation dataset.

K-fold cross-validation is the most commonly used resampling technique. It enables you to train and evaluate your model ‘k’ times on distinct subsets of training data in order to generate an estimate of a Machine Learning model's performance on unseen data.

A validation data set is a subset of your training data that you withhold from your Machine Learning models until the very end of your project.

After you've chosen and tuned your Machine Learning algorithms on the training set, you may evaluate the learned models on the validation set to obtain a final, objective idea of how well they'll perform on previously unseen data.

Cross-validation is a gold standard in applied Machine Learning for predicting model accuracy on unseen data. Using a test set is also a good technique if you have the data.
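
Below is a minimal sketch combining the two: k-fold cross-validation on the training portion plus a held-back validation set. It assumes scikit-learn, a synthetic dataset, and a logistic regression model, all of which are illustrative choices:

```python
# Illustrative sketch: k-fold cross-validation plus a held-back validation set.
# The dataset and model are arbitrary stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Hold back a validation set the model never sees while being tuned.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation estimates performance on unseen data.
print("Cross-validated accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())

# Only at the very end do we touch the held-back data.
model.fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_holdout, y_holdout))
```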

Now that you know what overfitting is, and how to detect, prevent, and reduce overfitting, let’s discuss underfitting in Machine Learning.

Definitions: Underfitting

When a Machine Learning model is underfitting, it means it is learning very little from the training data.

The disadvantage of underfit models is that they don't have enough information about the target variable. The objective of any Machine Learning technique is to acquire, or "learn", the trends in the data from the examples it is shown, without those trends being spelled out explicitly.

If no such patterns exist in our data (or if they are too weakly defined), the machine can only fabricate things that aren't there and create predictions that don't hold true in reality.

Graph showing an example of underfitting in Machine Learning

Why is Underfitting Bad?

You already know why overfitting is bad, but what about underfitting? Well, when a model is underfitting, it is failing to detect the main trend within the data, leading to training mistakes and poor performance of the model.

If a model's ability to generalize to new data is limited, it can't be used for classification or predictive tasks.

How to Detect Underfitting

As with overfitting, and with Machine Learning in general, we can't know how well our model will perform on new data until we put it to the test.

To address this, we may divide our entire dataset into two subsets: a training subset and a test subset. This method can give us an indication of how well our model will perform on new data. (This is the same train/test split we used above to detect overfitting.)

If our model does considerably better on the training set than the test set, we may be overfitting. If, for example, our model achieved 95 percent accuracy on the training set but only 48 percent accuracy on the test set, that would be a big overfitting red flag.

However, if your results show a high level of bias and a low level of variance, these are good indicators of a model that is underfitting.

Since you don’t want either, it’s important to keep in mind these overfitting vs underfitting ratios.
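
As a quick, hedged illustration of the check described above, the sketch below fits a deliberately simple linear model to a non-linear problem (scikit-learn and the toy make_moons dataset are our assumptions); both scores coming out low relative to what the task allows is the underfitting signal:

```python
# Illustrative sketch: a deliberately simple linear model on a non-linear
# problem. Training and test accuracy both come out mediocre, which points
# to underfitting rather than overfitting.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))
print("Test accuracy:    ", model.score(X_test, y_test))
```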

How to Prevent Underfitting

If you feel for any reason that your Machine Learning model is underfitting, it's important for you to understand how to prevent that from happening.

To prevent underfitting, you will need to maintain an adequate complexity of data for your machine to learn from. This will allow you to avoid an underfitting model, as well as make more accurate predictions going forward.

The following are a few strategies for reducing underfitting:

Techniques to Reduce Underfitting

When attempting to achieve better and more comprehensive results, reducing underfitting comes down to giving the model more to work with: richer data and greater model complexity.

Use the following three techniques to help you reduce underfitting:

1. Increasing the Model Complexity

It's possible that your model is underfitting because it isn't robust enough to capture trends in the data. Using a more sophisticated model, for example by changing from a linear to a non-linear approach or by adding hidden layers to your Neural Network, may be very beneficial in this situation.

A good example is how some banks continue to utilize credit scoring. In many cases, financial institutions provide reports on a customer's creditworthiness based on variables like income and debt repayment history.

We can create potentially useful models for predicting customer credit risk using traditional statistical approaches such as linear regression, but because they lack the required domain knowledge about human behavior and decision-making processes in finance, these models frequently fail.
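
A minimal sketch of this idea, assuming scikit-learn and a toy non-linear dataset (illustrative choices, not from the article), compares a linear model against a small Neural Network on the same task:

```python
# Illustrative sketch: swap a linear model for a small Neural Network on the
# same non-linear task. The layer sizes are arbitrary choices.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

linear = LogisticRegression()
nonlinear = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)

print("Linear model accuracy:    ", cross_val_score(linear, X, y, cv=5).mean())
print("Non-linear model accuracy:", cross_val_score(nonlinear, X, y, cv=5).mean())
```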

2. Reducing Regularization

By default, many of the algorithms you employ include regularization parameters to prevent overfitting, and these can sometimes hinder the algorithm from learning. Slightly relaxing these settings usually helps when you're trying to reduce underfitting.
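
For example, in scikit-learn's LogisticRegression the parameter C controls regularization strength, where a larger C means less regularization; the sketch below (with illustrative values) shows the kind of adjustment meant here:

```python
# Illustrative sketch: in scikit-learn's LogisticRegression, a larger C means
# weaker regularization, so raising it can help an underfitting model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_informative=10, random_state=0)

heavily_regularized = LogisticRegression(C=0.001, max_iter=1000)
lightly_regularized = LogisticRegression(C=1.0, max_iter=1000)

print("Strong regularization:", cross_val_score(heavily_regularized, X, y, cv=5).mean())
print("Weaker regularization:", cross_val_score(lightly_regularized, X, y, cv=5).mean())
```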

3. Adding Features to Training Data

As opposed to overfitting, your model may be underfitting if the training data is too limited or simple. If your model is underfitting, it may not have the characteristics required to identify key patterns and make accurate forecasts and predictions.

However, underfitting can be alleviated by adding features and complexity to your data.
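
One simple, hedged way to do this is to derive polynomial and interaction features from the existing columns; the sketch below assumes scikit-learn and a toy dataset:

```python
# Illustrative sketch: add polynomial and interaction features so a simple
# linear model can capture richer patterns. degree=2 is an arbitrary choice.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

plain = LogisticRegression()
enriched = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression())

print("Original features:  ", cross_val_score(plain, X, y, cv=5).mean())
print("With added features:", cross_val_score(enriched, X, y, cv=5).mean())
```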

Summary

Overfitting and underfitting can pose a great challenge to the accuracy of your Machine Learning predictions. If overfitting takes place, your model is learning ‘too much’ from the data, as it’s taking into account noise and fluctuations. This means that even though the model may be accurate, it won’t work accurately for a different dataset.

When your Machine Learning model is underfitting, it means that the model isn’t learning enough from the data provided. This will result in inaccurate and over-generalized predictions.

When it comes to picking a model, the goal is to find the correct balance between overfitting and underfitting. Identifying that perfect spot between the two lets Machine Learning models produce accurate predictions.

One of the main things to take from this article is that the quality and quantity of your data are essential: better, more plentiful data directly improves the accuracy of your Machine Learning model's predictions. If you have a reason to think your model is either underfitting or overfitting, take a look at the data and apply some of the measures mentioned above.

We hope that you found this article useful!

If you would like to learn more about how you can leverage Machine Learning in your business and understand the intricacies of AI and no-code solutions, be sure to give our other blog posts a read.

Now that you're here

Levity is a tool that allows you to train AI models on images, documents, and text data. You can rebuild manual workflows and connect everything to your existing systems without writing a single line of code. If you liked this blog post, you'll love Levity.

Sign up

