Thursday, February 29, 2024

Difference between machine learning and neural networks in 2022!


As AI increasingly influences our lives, you may have heard the terms “machine learning” and “neural networks” in business or at school.

Do you understand the difference and the relationship between “machine learning” and “neural networks”?

The two terms are easy to confuse, so it is important to have a firm grasp of each.

Therefore, in this article, we will deepen your understanding by explaining in detail the differences between “machine learning” and “neural networks” and their respective algorithms.

Furthermore, in the second half of the article, we will introduce books and application examples for learning about each, so please read to the end.

Table of Contents

  • Review of machine learning
  • An easy-to-understand explanation of neural networks
  • What is the relationship between machine learning, neural networks, and deep learning?
  • What is the difference between machine learning and neural networks?
  • There are three types of machine learning
    • ① What is supervised learning?
    • ② What is unsupervised learning?
    • ③ What is reinforcement learning?
  • Four typical learning methods used in machine learning
    • ① Classification
    • ② Regression
    • ③ Clustering
    • ④ Dimensionality reduction
  • Five representative neural networks
    • ① Convolutional Neural Network (CNN)
    • ② Recurrent Neural Network (RNN)
    • ③ LSTM
    • ④ Autoencoder
    • ⑤ Generative Adversarial Network (GAN)
  • Four use cases for machine learning
    • ① Image recognition
    • ② Stock price prediction
    • ③ Advertisement production
    • ④ Traffic control
  • Three use cases for neural networks
    • ① Object detection
    • ② Natural language processing
    • ③ Deepfake
  • Books to learn about machine learning
    • 1. Machine Learning at Work
    • 2. The Easiest Machine Learning Project Textbook
  • Books to learn about neural networks
    • 1. Make Your Own Neural Network
    • 2. Technology Supporting Deep Learning (2): The Biggest Mystery of Neural Networks
  • In closing

Review of machine learning

First, let us review the definition of machine learning.

Machine learning is a technology that learns rules and patterns from given data in order to classify and predict.

It is known as one branch of artificial intelligence (AI) and is used in a wide range of fields. Specific use cases are introduced in the second half of this article.

An easy-to-understand explanation of neural networks

Next, let us also define a neural network.

A neural network is a model that represents the nerve cells (neurons) of the human brain and the connections between them.

The model consists of an “input layer”, “hidden layers”, and an “output layer”, a structure that resembles the brain’s network of neurons and synapses.

A neural network with many hidden layers is used in deep learning, so called because of the depth of its layers.

Historically, the arrival of neural networks and deep learning made it possible to perform complex tasks, such as speech recognition, that conventional machine learning could not handle.

What is the relationship between machine learning, neural networks, and deep learning?

The relationship between machine learning, neural networks, and deep learning can be visualized as in the following image.

As the figure shows, the three are not entirely separate things: machine learning includes neural networks, which in turn include deep learning.

What is the difference between machine learning and neural networks?

Based on the above, the difference between machine learning and neural networks can be read from the following definitions.

Machine learning: a system of mathematical theory for a specific task; a concrete instance is also referred to as a “machine learning model”.

Neural network: a mathematical learning model that imitates the connections between nerve cells (neurons).

In other words, machine learning and neural networks have both differences and similarities.

The difference is that “machine learning” refers to a system of theory, while “neural network” refers to a learning model.

What they have in common is that machine learning also produces learning models, so both serve as learning models for a task.

Therefore, machine learning models and neural network models can be chosen depending on the task, taking advantage of their respective strengths.

Based on this, we will explain each learning method and use cases from the next chapter.

There are three types of machine learning

  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning

I will explain each.

① What is supervised learning?

Supervised learning is a method of training a model with data that contains the correct answers (labeled data).

In supervised learning, labeled data is used to train the model, but the ultimate goal is to make correct predictions on unlabeled data that does not contain the answers.

As an example, let us use supervised learning to classify images of oranges and apples. Humans label each image in advance as “orange” or “apple”.

The model learns which images show oranges and which show apples from the relationship between the images and their labels. Learning is successful if the model can determine whether an image shows an orange or an apple without seeing the label.

In the next chapter, we will explain in detail the two methods of supervised learning, “classification” and “regression”.

② What is unsupervised learning?

Unsupervised learning is a mechanism by which an algorithm extracts the essential structure and regularities of the given data.

In other words, unlike the supervised learning described above, humans do not label the data with correct answers before letting the machine learn. The goal is achieved if the model learns to capture the characteristics of the data.

As an example, suppose we divide images of mandarin oranges and apples into two groups by unsupervised learning. Unlike the supervised case, humans do not pre-label the images as oranges or apples.

The machine learns the characteristics of each image, such as color and size. If it can use those features to divide the images into two groups, as a human would, learning is successful.

In the next chapter, we will explain in detail “clustering” and “dimensionality reduction”, the methods of unsupervised learning.

③ What is reinforcement learning?

Reinforcement learning is a mechanism in which a machine learns to take optimal actions through repeated trial and error, without being given correct answers.

Supervised learning has clear answers, but reinforcement learning does not. Instead, reinforcement learning scores how good each action was with a reward and encourages actions that increase the reward.

Unsupervised learning also has no correct answers, but its nature is completely different from reinforcement learning: the former learns the features of the data, while the latter learns optimal behavior.

Reinforcement learning is one of the learning methods attracting the most attention at the moment, as in “AlphaGo”, which applied this know-how to the game of Go.
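To make the reward-driven trial and error concrete, here is a minimal tabular Q-learning sketch in Python with NumPy. The five-state corridor, rewards, and hyperparameters are made up for illustration, and this is of course far simpler than what systems like AlphaGo use:

```python
import numpy as np

# Hypothetical toy environment: a corridor of 5 states; moving right from state 3
# reaches the goal (state 4) and earns reward 1. No labeled answers are given;
# the agent improves purely by trial and error.
n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))        # the table of action values being learned
alpha, gamma, epsilon = 0.5, 0.9, 0.3      # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(300):
    s = 0
    while s != 4:                          # an episode ends at the goal
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# After training, "right" should be the preferred action in every non-goal state.
print([int(Q[s].argmax()) for s in range(4)])  # → [1, 1, 1, 1]
```

The agent is never told that “right” is correct; the preference emerges only because rewarded action sequences accumulate higher Q-values.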

Four typical learning methods used in machine learning

There are four main learning methods used in machine learning:

  1. Classification
  2. Regression
  3. Clustering
  4. Dimensionality reduction

Of these, the supervised learning introduced earlier divides into classification and regression, while unsupervised learning covers clustering and dimensionality reduction.


① Classification

Classification is characterized by answers that fall into categories, such as “mandarin orange / apple” or “deputy manager / section manager / department manager / president”, as in the supervised learning example above.

As a concrete method, we first attach non-continuous numbers (discrete values) such as 0 and 1 to the oranges and apples and let the machine learn.

A newly loaded image is then classified as an orange or an apple depending on whether its data is closer to 0 or to 1.
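As a toy sketch of this idea in Python with NumPy, the feature vectors below are hypothetical values invented for illustration, and the nearest-centroid rule is just one simple way to decide “closer to 0 or closer to 1”:

```python
import numpy as np

# Hypothetical feature vectors for fruit images: [redness, diameter in cm].
# We attach the discrete labels 0 (mandarin orange) and 1 (apple) as described above.
X = np.array([[0.2, 6.0], [0.3, 5.5], [0.9, 8.0], [0.8, 7.5]])
y = np.array([0, 0, 1, 1])

# One simple way to realize "closer to 0 or closer to 1": compare a new point
# against the mean (centroid) of each labeled class.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(x):
    # Assign the new image to whichever class centroid is nearer.
    return int(np.linalg.norm(centroids - x, axis=1).argmin())

print(classify(np.array([0.85, 7.8])))  # a large, red fruit → 1 (apple)
```

The output is always one of the discrete labels, which is exactly what distinguishes classification from the regression described next.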


② Regression

Regression, on the other hand, is characterized by answers that are continuous numbers (continuous values). For example, stock price prediction must work even when the answer is a fractional value such as 98,765.4 yen, so regression is used there.

Furthermore, while classification sorts newly input data into one of several groups, regression analyzes where the input data falls within a group.

This means newly entered data can be analyzed within its group and used to predict values such as stock prices.
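A minimal regression sketch in Python with NumPy, using made-up numbers for a hypothetical price series (real stock prediction would use far richer features and models):

```python
import numpy as np

# Made-up series: a hypothetical stock price observed at times 0..4.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
price = np.array([100.0, 102.0, 104.0, 106.0, 108.0])

# Least-squares line fit: regression outputs a continuous value, not a category.
slope, intercept = np.polyfit(t, price, 1)
predicted = slope * 5.0 + intercept        # predict the next time step
print(round(predicted, 1))  # → 110.0
```

Unlike the classifier above, the prediction here can take any value on a continuum, including fractional ones.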

③ Clustering

Clustering, as explained under unsupervised learning, is a task in which the machine itself divides data with similar characteristics into groups, without a human providing the answers as in classification.

In the unsupervised learning example, it corresponds to working out from which perspective the oranges and apples can best be separated.

There are two main types of clustering: hierarchical clustering and non-hierarchical clustering.

Hierarchical clustering first groups the most similar data into clusters, then joins the clusters one by one, repeating until everything forms a single large cluster.

The following image illustrates the result.

Representing this structure as a hierarchy yields a tree diagram like the following:

Non-hierarchical clustering is a method in which the number of clusters is set first, and the data is then divided into that number of clusters as well as possible.

A typical non-hierarchical clustering algorithm is “k-means”.
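Here is a minimal k-means sketch in Python with NumPy on synthetic 2-D points (the data and the simple one-point-per-blob initialization are chosen purely for illustration; library implementations such as scikit-learn's KMeans use more careful initialization):

```python
import numpy as np

# A minimal k-means sketch on made-up 2-D points. The number of clusters k is
# fixed first, then assignment and update steps alternate.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),    # blob around (0, 0)
               rng.normal(3.0, 0.3, (20, 2))])   # blob around (3, 3)

k = 2
centers = X[[0, 20]].astype(float)               # start with one point from each blob
for _ in range(10):
    # Assignment step: each point joins its nearest center.
    labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
    # Update step: each center moves to the mean of its assigned points.
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

# The two well-separated blobs end up in different clusters.
print((labels[:20] == 0).all() and (labels[20:] == 1).all())  # → True
```

Note that no labels were ever provided: the grouping emerges from the structure of the data itself.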

④ Dimensionality reduction

Dimensionality reduction is, alongside clustering, a typical task of unsupervised learning; it is the process of reducing the number of dimensions of the data.

Its purposes are to avoid the curse of dimensionality, to compress data, and to visualize data.

The curse of dimensionality refers to the fact that, in machine learning, when data has too many features to compare, it becomes difficult to tell items apart. Dimensionality reduction therefore reduces the number of dimensions of the data; in other words, it summarizes the data.

The most widely used technique for dimensionality reduction is principal component analysis.

Principal component analysis compresses multi-dimensional, varying data into fewer dimensions while losing as little of the original information as possible, reducing correlated features to fewer dimensions as in the following image.
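A small principal component analysis sketch in Python with NumPy, on synthetic strongly correlated 2-D data (the data is made up; the point is only that one direction captures almost all of the variance):

```python
import numpy as np

# PCA via the covariance matrix's eigenvectors: project correlated 2-D data onto
# the single direction that preserves the most variance (a 2-D → 1-D compression).
rng = np.random.default_rng(1)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + rng.normal(scale=0.1, size=100)])  # strongly correlated

Xc = X - X.mean(axis=0)                       # center the data first
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
pc1 = eigvecs[:, -1]                          # direction with the largest eigenvalue
projected = Xc @ pc1                          # 1-D representation of each sample

# The first component should capture almost all of the variance here.
print(eigvals[-1] / eigvals.sum() > 0.99)     # → True
```

Because the two original features were almost perfectly correlated, a single projected coordinate preserves nearly all the information, which is exactly the compression described above.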

Five representative neural networks

From here, we will introduce five representative examples of neural networks.

  1. Convolutional Neural Network (CNN)
  2. Recurrent Neural Network (RNN)
  3. LSTM
  4. Autoencoder
  5. Generative Adversarial Network (GAN)

I will explain each.

① Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a neural network specialized for processing multidimensional array data, such as images.

Unlike conventional machine learning, it processes a multidimensional array while preserving the positional relationships between its pixels, which enables highly accurate image recognition.

A CNN mainly consists of three kinds of layers: convolutional layers, pooling layers, and fully connected layers.
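To show what the first two layer types compute, here is a hand-rolled sketch in Python with NumPy on a toy 4x4 “image”. The image and the 2x2 filter are made-up values; a real CNN learns its filters during training and stacks many such layers before a fully connected head:

```python
import numpy as np

# Toy 4x4 "image" with a diagonal block pattern, and a tiny 2x2 filter.
image = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, 0],
                   [0, 1]], dtype=float)

# Convolutional layer: slide the filter over the image; each output value keeps
# the positional relationship of the pixels it was computed from.
n = image.shape[0] - kernel.shape[0] + 1   # feature-map size: 3
conv = np.array([[np.sum(image[i:i + 2, j:j + 2] * kernel)
                  for j in range(n)] for i in range(n)])

# Pooling layer: summarize each 2x2 region of the feature map by its maximum.
pooled = np.array([[conv[i:i + 2, j:j + 2].max()
                    for j in range(n - 1)] for i in range(n - 1)])
print(conv.tolist())    # → [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
print(pooled.tolist())  # → [[2.0, 2.0], [2.0, 2.0]]
```

The feature map responds most strongly where the image matches the filter's diagonal pattern, and pooling keeps the strongest responses while shrinking the map.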

② Recurrent Neural Network (RNN)

A recurrent neural network (RNN) is characterized by its ability to make predictions that take into account the order of input data, such as audio or text, rather than producing independent outputs as in image recognition.

For example, consider predicting word order. To predict the word “go” after “tomorrow, to a friend’s house”, you need to feed in the word sequence “tomorrow / friend / ’s / house / to”.

Handling such data requires that (1) the number of inputs is not fixed, (2) long input sequences can still be handled, and (3) the order of the sequence is preserved.

The recurrent neural network (RNN) satisfies all of these conditions and is used for natural language processing and prediction of time-series data.
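A minimal recurrent step in Python with NumPy illustrates all three conditions. The weights are random, untrained values used purely for illustration; training them is out of scope here:

```python
import numpy as np

# The hidden state h carries information from earlier inputs forward, so the
# final output depends on the whole sequence and its order.
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.5, size=(3, 4))   # input (dim 4) -> hidden (dim 3)
W_h = rng.normal(scale=0.5, size=(3, 3))   # hidden -> hidden (the recurrence)

def rnn(sequence):
    h = np.zeros(3)
    for x in sequence:                     # any sequence length is accepted
        h = np.tanh(W_x @ x + W_h @ h)     # h_t depends on x_t and h_{t-1}
    return h

seq = [rng.normal(size=4) for _ in range(5)]
# Reversing the word order changes the final state: the RNN is order-sensitive.
print(np.allclose(rnn(seq), rnn(seq[::-1])))  # → False
```

The same weights handle a sequence of any length, and because each step feeds the previous hidden state back in, the order of the inputs is preserved in the result.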


③ LSTM

LSTM (Long Short-Term Memory) is an algorithm that solves a problem of RNNs.

The problem was that an RNN cannot focus on data that was input long ago.

LSTM was therefore created by introducing a mechanism called a forget gate, which computes how much of the previous information to discard, making it possible to focus on the old data that was kept.

This LSTM is used for natural language processing and prediction of time-series data, just like the RNN mentioned earlier.
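The forget-gate idea alone can be sketched in a few lines of Python with NumPy. This is only the gating mechanism, not a full LSTM cell (which also has input and output gates), and the weights are fixed, made-up values:

```python
import numpy as np

# The cell state c is multiplied by a gate f in (0, 1) at every step, so the
# network can decide how much old information to keep versus discard.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(c, x, w_f, b_f):
    f = sigmoid(w_f * x + b_f)   # forget gate, computed from the current input
    return f * c + (1 - f) * x   # keep fraction f of the old state, blend in the new

# With the gate biased open (f ≈ 1), early information survives many steps;
# with it biased shut (f ≈ 0), the cell forgets almost immediately.
c_keep, c_drop = 1.0, 1.0
for x in [0.0] * 20:
    c_keep = step(c_keep, x, w_f=0.0, b_f=6.0)    # f = sigmoid(6) ≈ 0.998
    c_drop = step(c_drop, x, w_f=0.0, b_f=-6.0)   # f = sigmoid(-6) ≈ 0.002
print(c_keep > 0.9, c_drop < 0.01)  # → True True
```

In a trained LSTM, the gate parameters are learned, so the network itself decides which old inputs are worth remembering.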

④ Autoencoder

An autoencoder is a neural network trained so that its output data reproduces its input data.

At first glance, a learning model whose output equals its input seems worthless, so the focus here is on the autoencoder’s middle layer.

The hidden layer of an autoencoder has fewer units than its input and output layers, so the data is forced through a narrow bottleneck.

As a result, like the dimensionality reduction introduced earlier, an autoencoder can compress data.

Taking advantage of this property, it is used for tasks such as removing noise from images and generating new images from the intermediate representation.
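The bottleneck structure can be sketched shape-wise in Python with NumPy. The weights here are random and untrained, so the reconstruction is meaningless; the point is only the 6 → 2 → 6 shape that forces compression:

```python
import numpy as np

# Shape-only sketch of an autoencoder: a narrow hidden layer forces 6-D input
# through a 2-D bottleneck and back out to 6-D.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 6))   # encoder: 6 -> 2 (the compressing middle layer)
W_dec = rng.normal(size=(6, 2))   # decoder: 2 -> 6 (reconstruction)

x = rng.normal(size=6)            # an input sample
code = np.tanh(W_enc @ x)         # compressed representation (cf. dimensionality reduction)
x_hat = W_dec @ code              # training would push x_hat toward x

print(code.shape, x_hat.shape)    # → (2,) (6,)
```

Training would adjust W_enc and W_dec to minimize the difference between x and x_hat, at which point the 2-D code becomes a useful compressed summary of the input.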

⑤ Generative Adversarial Network (GAN)

A generative adversarial network (GAN) is an algorithm that lets a machine generate realistic-looking data, such as images, by learning from data.

The algorithm consists of a “generator” and a “discriminator” and works by having the two repeatedly compete.

In image generation, for example, the generator creates a realistic image of something that does not exist, and the discriminator judges whether the image is real or generated.

By repeating this process, the accuracy of the generated image is gradually increased, creating an image that looks as if it actually exists.

Four use cases for machine learning

There are four use cases for machine learning:

  1. Image recognition
  2. Stock price prediction
  3. Advertisement production
  4. Traffic control

I will explain each.

① Image recognition

In general, image recognition requires complex processing, so deep learning is used; with recent technological advances, however, machine learning is sometimes used for image recognition as well.

In 2020, Feature Co., Ltd. announced an independently developed in-vehicle machine learning algorithm, which was highly evaluated for accurately identifying objects such as people and signs.

Since it can perform image recognition without an expensive image-processing chip (GPU), it has the advantage of significantly reducing costs.

② Stock price prediction

Machine learning is good at computing predictions from data, so it is a natural fit for stock price prediction; the regression model introduced earlier is used.

For example, by training on past price trends and current economic conditions, machine learning predicts the movements of stock prices, which fluctuate in real time, and buys and sells at the optimal timing.

Recently, Green Monster, which operates an investment app, has installed a prediction function for stock prices in its smartphone app, and its use is expanding.

③ Advertisement production

Since 2020, CyberAgent, Japan’s largest advertising production company, has provided a service that uses machine learning to give feedback on the optimal layout and design based on past advertising response data.

The service, called “Kiwami Prediction AI”, is used to select and arrange materials and to compare products, helping to create so-called “sticky” advertisements.

➃ Traffic control

Traffic control is the management and control of traffic volume to prevent road congestion and danger. As with stock price prediction, machine learning excels at making predictions from given data, so it is used for traffic control as well.

For example, based on real-time traffic data, congestion can be minimized by predicting the route and time required for each vehicle to reach its destination and optimizing signal control.

In fact, in a test in downtown Pittsburgh, the system reduced travel time by up to 25% and idle time by more than 40%.

In Japan, efforts toward commercialization are progressing, such as the success of a traffic control demonstration experiment conducted by Sumitomo Electric Industries and the New Energy and Industrial Technology Development Organization (NEDO) in 2022.

Three use cases for neural networks

There are three use cases for neural networks.

  1. Object detection
  2. Natural language processing
  3. Deepfake

I will explain each.

① Object detection

A neural network can perform object detection: identifying “what” exists “where” in an image, and with what percentage of confidence.

Object detection mainly uses the CNN-based image recognition technology introduced earlier, and it is far more accurate than conventional machine learning techniques.

The technology is also used for object detection in autonomous driving; in 2020, Renesas attracted attention by announcing that it would release a chip handling autonomous-driving processing, including object detection, on a single chip.

② Natural language processing

Using neural networks, it is possible to perform syntactic and semantic analysis of the natural languages, such as Japanese and English, that humans use every day.

The RNN and LSTM introduced earlier are used for natural language processing, and in recent years BERT, a natural language processing model announced by Google in 2018, has also been widely used.

Familiar examples include machine translation services such as Google Translate and DeepL, as well as predictive text conversion, all of which rely on the principles of neural networks.

③ Deepfake

The generative adversarial network (GAN) introduced earlier is also used in deepfake technology, which has become a problem in recent years because it has been abused for fake pornography.

A deepfake is a high-precision synthetic video created with a GAN, designed to make it appear as if a person is doing or saying things they never actually did.

An example is a video that makes it look like President Obama said, “President Trump is a hopeless idiot.” This video was published on Facebook in 2018 and became a hot topic at the time.

Books to learn about machine learning

Two representative books for learning about machine learning are:

1. Machine Learning at Work

  • Content

This book organizes, from the perspective of practical use, how to apply machine learning and data analysis tools in business and how to run highly uncertain machine learning projects.

It focuses on the questions readers actually face, such as “What do you actually do?”: how to start a project, how to configure the system, and how to collect resources for training.

  • Reader review

As the title suggests, this book gives an overview and the key points for bringing machine learning and data analysis into practice. The outline of each method and the flow of introducing it are organized systematically and clearly.

The second half covers the actual analysis work and the report that is its product, making it easy to picture the real work.

2. The Easiest Machine Learning Project Textbook

  • Content

This book is packed with know-how for growing a business with machine learning, understandable without IT or mathematical knowledge. It explains a wide range of topics, from the basics of AI and machine learning to planning and executing a strategy for adopting them in business, giving a project leader all the knowledge needed in a single book.

  • Reader review

I had a vague understanding of how to run a machine learning project and picked up this book to learn more. It comprehensively and carefully explains the points you worry about when actually running a machine learning project.

Even if you are not normally involved in machine learning projects, it is worth reading. It would be wonderful if it sparked readers’ interest in machine learning projects and led them to actually start one.

From Amazon: The Easiest Machine Learning Project Textbook


Books to learn about neural networks

The following two books are representative for learning about neural networks.

1. Make Your Own Neural Network

  • Content

In this book, you build your own neural network in the Python programming language while touching, step by step, on the mathematics that neural networks require, as if on a journey.

The purpose of the book is to convey to as many readers as possible, in an easy-to-understand way, how to build your own neural network.

  • Reader review

Easy to understand even for programming beginners.

The final part looks quite difficult to code, but the author writes out the theory carefully, step by step, so I think I can get a computer to recognize images.

A perfect entry point: even if you later study something more applied, beginners should read this first.

Amazon: Make Your Own Neural Network

2. Technology Supporting Deep Learning (2): The Biggest Mystery of Neural Networks

  • Content

This book focuses on the big mysteries of neural networks: “Why can they learn?” and “Why do they generalize?”

At the same time, it covers two major topics with the potential for future innovation: generative models and deep reinforcement learning.

  • Reader review

It describes in detail the mathematical background behind current AI applications.

Illustrations appear throughout and make it easy to understand. Many IT books fail to explain the difficult parts even with figures, but the illustrations in this book do explain them, which makes it easier to read. Markers at the important places would have been a nice addition.

In closing

This time, we introduced the difference between machine learning and neural networks.

First, it is very important to understand what machine learning and neural networks each are.

After that, compare their algorithms and use cases to better understand their differences.

Also, if you get stuck in learning machine learning or neural networks, it would be a good idea to refer to the books introduced in this article and proceed with your learning.


