Search images with deep learning (torch)

Two images that look alike can still be very different when compared at the pixel level, but they become comparable once processed by a deep learning model. We convert each image into a feature vector extracted from an intermediate layer of the network; similar images then produce nearby vectors.
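To make this concrete, here is a minimal sketch of how nearness between feature vectors is usually measured, with cosine similarity. The three vectors are hypothetical stand-ins for real network outputs:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature vectors for three images: the first two are close,
# the third points in a different direction.
img1 = np.array([0.9, 0.1, 0.4])
img2 = np.array([0.8, 0.2, 0.5])
img3 = np.array([0.1, 0.9, 0.0])

print(cosine_similarity(img1, img2))  # close to 1: similar images
print(cosine_similarity(img1, img3))  # much smaller: dissimilar images
```

Searching for images similar to a query then amounts to ranking the stored vectors by this score.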

Get a pre-trained model

We choose the model described in the paper SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.

The model is stored here:

PyTorch's design relies on two methods, forward and backward, which implement the forward pass and the backpropagation of the gradient. The network structure is not declared statically and can even be dynamic, which is why it is difficult to define a fixed number of layers.


We collect images from Pixabay.

Raw images

torch, through the torchvision package, implements optimized functions to load and process images.

We can multiply the amount of data by implementing a custom sampler, or simply by concatenating datasets so that one loader iterates over all of them.
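The concatenation route can be sketched with torch's ConcatDataset; the two random TensorDatasets below are placeholders for, say, the original images and a transformed copy:

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Placeholder datasets standing in for two image collections.
ds1 = TensorDataset(torch.randn(10, 3, 224, 224))
ds2 = TensorDataset(torch.randn(6, 3, 224, 224))

# One dataset that chains both; a single DataLoader sees all 16 samples.
combined = ConcatDataset([ds1, ds2])
loader = DataLoader(combined, batch_size=4, shuffle=True)

print(len(combined))                                # 16
print(sum(batch[0].shape[0] for batch in loader))   # 16
```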