Implementation of a generic neural network on Zynq

The project's goal is to implement an SoC architecture that can compute a wide variety of deep-learning algorithms based on Convolutional Neural Networks in a fast, dynamic, and configurable way.

Deep Learning is a relatively new area of Machine Learning (ML) research, introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence.

One family of deep-learning architectures is the Convolutional Neural Network (CNN), which has been applied to fields such as computer vision, speech recognition, natural language processing, and bioinformatics, where it has been shown to produce excellent results.

These algorithms use artificial neural networks, which mimic the behavior of biological neural networks. The basic elements of a network are neurons and synapses. A neuron is a unit that computes a mathematical function over its connected inputs (usually applying an activation function to their weighted sum) and transmits the result to its connected outputs. A synapse is the basic unit that connects neurons; it stores a parameter called a "weight" that scales the data passing through it.
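As a minimal illustration of the neuron model described above (weighted sum of inputs followed by an activation function), here is a sketch in Python; the sigmoid activation and the specific weight values are illustrative assumptions, not part of the project:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs (plus a bias) passed through a
    sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Example: two inputs, two synapse weights, one bias term
out = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
```

Here the weights play the role of the synapses: each one scales the data flowing into the neuron before the summation.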

A typical network consists of an input layer, a number of intermediate (hidden) layers, and an output layer. Each neuron in one layer is connected to neurons in the following layer, and a weight is assigned to each connection.
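The layered structure can be sketched as repeated application of the single-neuron computation, one layer feeding the next. The 2-3-1 topology and the weight values below are hypothetical, chosen only to show the data flow:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: every neuron takes a weighted sum of
    all inputs from the previous layer and applies a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(x, network):
    """Propagate an input vector through each layer in turn."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# Hypothetical network: 2 inputs -> 3 hidden neurons -> 1 output neuron
net = [
    ([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.5, -0.3, 0.2]], [0.05]),
]
y = forward([1.0, 0.5], net)
```

Each tuple in `net` holds one layer's weight matrix (one row per neuron) and bias vector, mirroring the description above of weights assigned per connection.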

Our project deals with supervised offline algorithms, which have an initial training stage of feature learning from labelled examples (deriving the weights) and afterwards an operation stage of feature extraction from unknown samples (using the learned weights to draw conclusions).
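The two stages can be sketched with a toy supervised learner. A single perceptron trained on the logical AND function stands in for a full network here; the learning rate, epoch count, and task are illustrative assumptions:

```python
def predict(x, w, b):
    """Operation stage: apply the learned weights to a sample."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Training stage: derive the weights from labelled examples
    using the classic perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            err = t - predict(x, w, b)  # supervised error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy dataset: logical AND of two binary inputs
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train(X, y)                      # stage 1: learn the weights
preds = [predict(x, w, b) for x in X]   # stage 2: use the weights
```

In the project itself, the weights derived offline in the first stage would be loaded into the SoC, which then performs only the operation stage on incoming samples.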