
Convolutional Neural Networks

Convolutional neural networks are very similar to ordinary neural networks. They are made of neurons with learnable weights and biases, where each neuron receives some inputs, performs a dot product of those inputs with its weights, and optionally follows it with a non-linearity.


This is reflected in the architecture of the model: the inputs are transformed through a series of hidden layers. Each hidden layer consists of neurons that are fully connected to all neurons in the previous layer, while neurons within a single layer operate independently and do not share any connections. The last fully-connected layer is called the "output layer", and in a classification setting it represents the class scores. For example, for a 32×32×3 image, a single fully-connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3072 weights.
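As a minimal sketch of what such a neuron computes (plain NumPy; the ReLU choice and random values are illustrative assumptions, not from the text):

```python
import numpy as np

# A 32x32 RGB image flattened into a vector of 32*32*3 = 3072 values.
x = np.random.rand(32 * 32 * 3)

# One fully-connected neuron: 3072 learnable weights plus a bias.
w = np.random.randn(32 * 32 * 3) * 0.01
b = 0.0

# Dot product of weights and inputs, followed by a non-linearity (ReLU here).
activation = max(0.0, np.dot(w, x) + b)
print(w.size)  # 3072, matching the weight count in the text
```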

There are three main hyperparameters that control the output volume of the convolution layer. They are:

1. Depth

2. Stride

3. Zero padding

Given an input volume of size W, a kernel (receptive field) size K, a stride S, and an amount of zero padding P, the number of neurons that fit along a given dimension is (W − K + 2P)/S + 1. The main advantage of convolutional neural networks is that the inputs are treated as images, which lets the architecture encode image-specific structure and makes it a more sensible design than an ordinary neural network for visual data.
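As a quick check of that formula (a small sketch; the function name and example values are my own):

```python
def conv_output_size(w: int, k: int, s: int, p: int) -> int:
    """Number of positions a kernel of size k fits along a dimension
    of size w, with stride s and zero padding p: (W - K + 2P)/S + 1."""
    assert (w - k + 2 * p) % s == 0, "hyperparameters do not tile the input evenly"
    return (w - k + 2 * p) // s + 1

# Example: a 32-wide input, 5-wide kernel, stride 1, padding 2 keeps the
# spatial size at 32, a common "same"-convolution configuration.
print(conv_output_size(32, 5, 1, 2))  # 32
```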

Some applications of convolutional neural networks are:

1. Image recognition

2. Video analysis

3. Checkers

4. Go

5. Fine-tuning

Deep Belief Networks

Deep belief networks are probabilistic generative models that contain many layers of hidden variables. Each layer captures high-order correlations between the activities of the hidden features in the layer below. Their structure has two characteristic points:

1. The top two layers of a deep belief network form an undirected bipartite graph, resulting in what is called a Restricted Boltzmann Machine (RBM).

2. Whereas the lower layers form a directed sigmoid belief network.

A Boltzmann machine is a network of symmetrically coupled stochastic binary units whose variables take values in {0, 1}. The restricted Boltzmann machine is an extension of the Boltzmann machine with the added condition that there are no hidden-to-hidden and no visible-to-visible connections.

The top layer is a vector of stochastic binary hidden units h, whereas the bottom layer is a vector of stochastic binary visible variables v.
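For reference (this is standard RBM background rather than something spelled out in the original), the RBM assigns each joint configuration of visible and hidden units an energy, and configurations with lower energy receive higher probability:

E(v, h) = -b^\top v - c^\top h - v^\top W h,        P(v, h) = e^{-E(v, h)} / Z

where b and c are the visible and hidden bias vectors, W is the weight matrix, and Z is the partition function that normalizes the distribution.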

Exact maximum-likelihood learning in a restricted Boltzmann machine is intractable because of the expectation operator E_{P_model}, which requires summing over an exponential number of configurations of the units.
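Concretely (again standard theory, not stated in the original), the log-likelihood gradient contrasts a tractable data-driven expectation with that intractable model expectation, which is why sampling-based approximations such as contrastive divergence are used in practice:

\partial \log P(v) / \partial W_{ij} = E_{P_data}[v_i h_j] - E_{P_model}[v_i h_j]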

Deep belief networks are trained by greedily pre-training one layer at a time with an unsupervised algorithm, stacking one layer on top of another and always starting with the first layer. After a number of layers have been initialized this way, the whole network can be fine-tuned with respect to a supervised training criterion.
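A minimal sketch of that unsupervised step for a single layer, using one step of contrastive divergence (CD-1) to train an RBM in NumPy; the function name and hyperparameters are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden=64, lr=0.1, epochs=10, seed=0):
    """Train one RBM layer with CD-1. `data` is a (n_samples, n_visible)
    array of binary values."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
    b = np.zeros(n_visible)   # visible biases
    c = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: hidden probabilities and a sample driven by the data.
            ph0 = sigmoid(v0 @ W + c)
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            # Negative phase: one Gibbs step (the CD-1 approximation).
            pv1 = sigmoid(h0 @ W.T + b)
            ph1 = sigmoid(pv1 @ W + c)
            # Update: data-driven term minus reconstruction-driven term.
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
            b += lr * (v0 - pv1)
            c += lr * (ph0 - ph1)
    return W, b, c

# To stack layers into a deep belief network, the hidden activations
# sigmoid(data @ W + c) of a trained layer become the input "data"
# for training the next RBM, one layer at a time.
```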

Global strategies

Deep learning provides two main improvements over traditional machine learning. They are:

1. It reduces the need for hand-crafted, engineered feature sets to be used for training.

2. It increases the accuracy of the prediction model for larger amounts of data.

Both improvements are made practical by back-propagation, which allows such models to be trained end-to-end. As a result, most companies today employ deep learning for particular applications.

Some of the strategies applied at various big international companies are listed below:

1. Facebook's artificial intelligence lab adopted deep learning and performs tasks such as automatically tagging uploaded pictures with the names of the people in them.

2. Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only the pixels as input data. Google's translation system uses an LSTM-based method to translate between more than 100 languages.

3. In 2015, a company named Blippar demonstrated an augmented reality application that uses deep learning to identify objects in real time.

Automotive Deep Learning

Deep learning has a number of use cases that can be applied in the automotive industry, listed below:

1. Visual inspection in manufacturing

2. Social media analytics

3. Autonomous driving

4. Robots and Smart machines

5. Conversational user interface

Tools for deep learning

1. Pylearn2

2. Theano

These two deep learning tools were first developed at the University of Montreal, with most developers coming from the LISA group led by Yoshua Bengio. Theano is a Python library that is best thought of as a mathematical expression compiler.

3. Torch – it provides a MATLAB-like environment for state-of-the-art machine learning algorithms.

4. TensorFlow – it is basically an open-source software library for numerical computation using data flow graphs (a small sketch follows this list).

5. MXNet – it is basically a deep learning framework designed for both efficiency and flexibility.

6. DeepMat – it is a MATLAB-based collection of deep learning algorithms.

 

7. Nengo – Nengo is a graphical and scripting package used for simulating large-scale neural systems.

8. EBLearn – it is basically a C++ machine learning library, provided under a BSD license, that is used for energy-based learning, convolutional networks, vision and recognition applications, etc. EBLearn is now primarily maintained by Pierre Sermanet at NYU.

9. CUDAMat – it is a GPU-based matrix library for Python; its bundled examples include neural networks.

10. OpenNN – it is basically an open-source class library written in C++ that implements neural networks, a main area of deep learning research.
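As a tiny illustration of the data-flow-graph idea behind TensorFlow mentioned above (a sketch against the public TensorFlow 2 API; the specific values are my own):

```python
import tensorflow as tf

# Each operation is a node in a data flow graph; tensors flow along the edges.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])

# Matrix multiply followed by an element-wise non-linearity.
y = tf.nn.relu(tf.matmul(a, b))
print(y.numpy())  # [[3.], [7.]]
```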

