It is a common misconception to conflate the concepts of “algorithm”, “method”, “model” and “implementation” in ML. Most things defined in the ML community are models or methods, not algorithms or implementations. Roughly speaking:
- a model is a way of representing an actual, real-world process in the form of mathematical equations/formulas. Examples include the nearest neighbour classifier and the linear classifier
- a method is an approach to finding the parameters of a model from data (for example, gradient-based optimization methods, often used to train ML models)
- an algorithm is a set of instructions, often in pseudo-code, specifying the exact operations one needs to perform in order to realize a given method
- finally, an implementation is one particular piece of code that realizes an abstract algorithm
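To make the distinction concrete, here is a minimal sketch (the data and function name are illustrative, not from the answer above): the 1-nearest-neighbour classifier is the *model*, "predict the label of the closest training point" is the *algorithm*, and this particular numpy code is one *implementation* of it.

```python
import numpy as np

def nearest_neighbour_predict(X_train, y_train, x):
    # one concrete implementation of the 1-NN model: return the label
    # of the training point closest to x in Euclidean distance
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

# toy data: two points per class
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])

print(nearest_neighbour_predict(X_train, y_train, np.array([0.05, 0.1])))  # → 0
print(nearest_neighbour_predict(X_train, y_train, np.array([0.95, 1.0])))  # → 1
```

Note that the same model and algorithm could just as well be implemented in C++ or with a KD-tree; those would be different implementations of the same thing.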
So deep learning is just a general concept in ML which does not yet have a clear definition, although it is most often used to refer to models involving hierarchical abstraction of data representations, as well as to methods of training such models.
The most common DL models are deep neural networks, in other words neural networks with multiple (how many? it is an open debate; some say 5, others 10 or 30) nonlinear hidden layers. Some of these models include:
- Deep Boltzmann Machines (DBM)
- Deep Autoencoder (DAE)
- Deep Convolutional Neural Network (DCNN)
- Recurrent Neural Networks (RNN)
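What all of these share is the stacking of nonlinear layers, each building a representation on top of the one below. A minimal sketch of that idea (layer sizes and the tanh nonlinearity are arbitrary choices here, not prescribed by any of the models listed):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    # pass the input through a stack of nonlinear (tanh) layers --
    # each layer transforms the representation produced by the previous one
    h = x
    for W, b in weights:
        h = np.tanh(h @ W + b)
    return h

# a small "deep" feed-forward net: input dim 3, four hidden layers of width 8
dims = [3, 8, 8, 8, 8]
weights = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
           for m, n in zip(dims[:-1], dims[1:])]

out = forward(rng.standard_normal(3), weights)
print(out.shape)  # (8,)
```

Without the nonlinearity, the whole stack would collapse into a single linear map, which is why the layers being *nonlinear* is part of the usual description.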
In general, models can be deep, and there can be methods and algorithms for deep learning, as well as implementations of those algorithms. Two such algorithms are
- Contrastive Divergence (CD)
- Persistent Contrastive Divergence (PCD)

both of which are used to train DBMs.
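As a sketch of what such an algorithm looks like in code, here is one CD-1 update for a single binary RBM (the building block stacked when pre-training a DBM). This is a simplified illustration, with biases omitted and all sizes chosen arbitrarily, not a production training loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1):
    # one CD-1 step for a binary RBM (biases omitted for brevity):
    # compare data-driven statistics <v h> with those after a single
    # Gibbs step, and move the weights toward the difference
    ph0 = sigmoid(v0 @ W)                    # positive phase: p(h|v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T)                  # negative phase: one Gibbs step
    ph1 = sigmoid(pv1 @ W)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    return W

W = 0.01 * rng.standard_normal((6, 4))       # 6 visible, 4 hidden units
v = rng.integers(0, 2, size=6).astype(float)
W = cd1_update(W, v)
print(W.shape)  # (6, 4)
```

PCD differs only in the negative phase: instead of restarting the Gibbs chain from the data each update, it keeps a persistent chain running across updates.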