A Primer on Machine Learning
Fall 2017
UCI DCE instructor Amit Manghani talks about Machine Learning:
Question: What is Machine Learning?
Simply put, Machine Learning is a form of data analysis. Using algorithms that continuously learn from data, Machine Learning allows computers to recognize hidden patterns without being explicitly programmed to do so. The key aspect of Machine Learning is that as models are exposed to new data sets, they adapt to produce reliable and consistent output.
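To make that idea concrete, here is a minimal sketch (not from the interview; the data and the rule are hypothetical) of a program that learns a simple numeric rule from example pairs instead of having the rule programmed in:

```python
# Hypothetical training pairs; the hidden rule (y = 2x + 1) is never
# written into the program itself.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0           # model parameters, initially arbitrary
learning_rate = 0.05

for _ in range(2000):     # repeatedly nudge w and b to shrink the error
    for x, y in data:
        error = (w * x + b) - y
        w -= learning_rate * error * x   # adjust the slope
        b -= learning_rate * error       # adjust the intercept

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0, learned from the data
print(round(w * 10 + b, 2))       # prediction for an unseen input, about 21.0
```

Nothing in the loop encodes the rule directly; the parameters drift toward it only because the example data pulls them there, which is the "learning from data" the answer describes.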
Question: What is the difference between Artificial Intelligence (AI), Machine Learning, and Deep Learning?
The terms AI, Machine Learning, and Deep Learning are often used interchangeably, but they are nested rather than identical: Machine Learning is a sub-discipline of AI, and Deep Learning is in turn a branch of Machine Learning.
Artificial Intelligence:
AI can be thought of as the broad effort to build computer intelligence. The premise of AI is that every aspect of intelligence can be defined precisely enough that a machine can be programmed to simulate it.
Machine Learning:
Machine Learning is a sub-discipline
of Artificial Intelligence. The core of
Machine Learning revolves around
a computer system consuming
data and learning from the data.
Once trained on large data sets,
the system can be leveraged to
perform a myriad of tasks ranging
from natural language processing
to predicting outcomes to
proactive/preventive maintenance.
In traditional programming, a computer system completes tasks based on explicit instructions, whereas in Machine Learning the system continuously learns from data and uses that knowledge to uncover patterns and make predictions.
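As an illustration of that contrast, the sketch below (deliberately simplistic; the messages, labels, and function names are hypothetical) puts a hand-written rule next to a rule derived from labeled examples:

```python
# Traditional programming: the decision logic is written out by hand.
def is_spam_by_rules(message):
    return "free money" in message.lower() or "winner" in message.lower()

# Machine Learning flavour: the logic is derived from labeled examples.
examples = [
    ("free money now", True),
    ("team meeting at noon", False),
    ("you are a winner", True),
    ("lunch tomorrow?", False),
]

spam_words = set()
for text, is_spam in examples:
    if is_spam:                        # remember words seen in spam examples
        spam_words.update(text.lower().split())

def is_spam_learned(message):
    # Flag a message if it contains words associated with the spam examples.
    return any(word in spam_words for word in message.lower().split())

print(is_spam_by_rules("Claim your free money"))   # True: a hand-written rule fires
print(is_spam_learned("Another winner today"))     # True: a learned association fires
```

The first function only catches what the programmer anticipated; the second changes its behavior whenever the labeled examples change, without anyone rewriting its code.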
Deep Learning:
Deep Learning is a branch of Machine Learning focused on algorithms called Artificial Neural Networks, which try to mimic the structure and functioning of the brain.
Whereas traditional programming uses a set of instructions to perform a task, Artificial Neural Networks use a network of nodes to recognize patterns. Many layers of software neurons are utilized to identify patterns of great complexity. Let's say you want a computer system to recognize an object. The Artificial Neural Network is blitzed with digital images containing that object.
Each individual layer of software
neurons learns to recognize a
specific feature. For example: the
first layer may recognize primitive
features like an edge in an image.
Once a layer has learned to recognize a feature, its output is fed to the next layer, which trains itself to recognize more complex patterns, such as a corner in an image. This
“divide and conquer” process is
repeated in each layer until the
system can reliably recognize
the object.
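Real image recognition needs far more data and far deeper networks than an interview answer can show, but the layered idea itself can be sketched with a tiny network. The example below (NumPy, with hypothetical layer sizes; not the interviewee's code) trains one hidden layer whose learned features are combined by the next layer, echoing the "divide and conquer" process described above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four tiny inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR labels

# Layer 1 learns simple intermediate features; layer 2 combines them.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)        # features the first layer has learned
    output = sigmoid(hidden @ W2 + b2)   # the second layer's combination of them

    # Backward pass: push the prediction error back through both layers.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```

No single layer of this kind can represent the XOR pattern on its own; it only emerges once the second layer combines the features the first layer has learned, which is the layer-by-layer behavior the answer describes for edges and corners in images.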
Read more about machine
learning at
ce.uci.edu/machinelearning