## Wednesday, September 11, 2019

### Organizing movie covers with Neural Networks

In this post we will see how to organize a set of movie covers by similarity on a 2D grid using a particular type of Neural Network called Self Organizing Map (SOM). First, let's load the movie covers of the top 100 movies according to IMDB (the files can be downloaded here) and convert the images in samples that we can use to feed the Neural Network:
```
import numpy as np
import imageio
from glob import glob
from sklearn.preprocessing import StandardScaler

# covers of the top 100 movies on www.imdb.com/chart/top
# (the 13th of August 2019)
data = []
all_covers = glob('movie_covers/*.jpg')
for cover_jpg in all_covers:
    cover = imageio.imread(cover_jpg)
    data.append(cover.reshape(np.prod(cover.shape)))

# keep the shape of the images to reconstruct them later
original_shape = imageio.imread(all_covers[0]).shape

scaler = StandardScaler()
data = scaler.fit_transform(data)
```
In the snippet above we load every image and, for each of them, stack the color values of its pixels into a one-dimensional vector. After loading all the images, a standard scaling is applied so that every value has mean 0 and standard deviation 1. This scaling strategy often turns out to be quite successful when working with SOMs. Now we can train our model:
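To see what the scaling does, here is a minimal sketch on synthetic data (random values standing in for the actual covers, which are not included here): after `fit_transform`, each feature has mean 0 and standard deviation 1, and `inverse_transform` recovers the original values, which is what we will rely on later when plotting.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 100 fake "images" of 12 values each, in the 0-255 range
fake_covers = rng.integers(0, 256, size=(100, 12)).astype(float)

scaler = StandardScaler()
scaled = scaler.fit_transform(fake_covers)

print(scaled.mean(axis=0).round(6))  # ~0 for every feature
print(scaled.std(axis=0).round(6))   # ~1 for every feature

# inverse_transform maps the scaled values back to the originals
recovered = scaler.inverse_transform(scaled)
print(np.allclose(recovered, fake_covers))  # True
```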
```
from minisom import MiniSom

w = 10
h = 10
som = MiniSom(h, w, len(data[0]), learning_rate=0.5,
              sigma=3, neighborhood_function='triangle')

som.train_random(data, 2500, verbose=True)
win_map = som.win_map(data)
```
Here we use MiniSom, a lean implementation of the SOM, to build a 10-by-10 map of neurons. Each movie cover is mapped to a neuron and we can display the results as follows:
```
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid

fig = plt.figure(figsize=(30, 20))
grid = ImageGrid(fig, 111, nrows_ncols=(h, w), axes_pad=0)

def place_image(i, img):
    # undo the scaling to get back the original color values
    img = scaler.inverse_transform(img.reshape(1, -1)).astype(int)
    grid[i].imshow(img.reshape(original_shape))
    grid[i].axis('off')

to_fill = []
collided = []

for i in range(w*h):
    position = np.unravel_index(i, (h, w))
    if position in win_map:
        img = win_map[position][0]
        collided += win_map[position][1:]
        place_image(i, img)
    else:
        to_fill.append(i)

collided = collided[::-1]
for i in to_fill:
    position = np.unravel_index(i, (h, w))
    img = collided.pop()
    place_image(i, img)

plt.show()
```
Since several images can be mapped to the same neuron, we first draw the covers picking only one per neuron, then we fill the empty cells of the grid with covers that have been mapped to nearby neurons but have not been plotted yet.
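The fill strategy above can be sketched in isolation with a toy `win_map` (hypothetical labels instead of cover images): one item per winning cell is placed first, the leftover "collided" items are collected, and they then fill the cells that had no winner.

```python
import numpy as np

w = h = 2
# toy win_map: cells (0, 1) and (1, 0) have no winner
win_map = {(0, 0): ['a', 'b', 'c'], (1, 1): ['d']}

placed = {}
to_fill, collided = [], []

for i in range(w * h):
    position = tuple(np.unravel_index(i, (h, w)))
    if position in win_map:
        placed[i] = win_map[position][0]   # one item per neuron
        collided += win_map[position][1:]  # keep the rest for later
    else:
        to_fill.append(i)

collided = collided[::-1]
for i in to_fill:
    placed[i] = collided.pop()             # reuse leftovers in empty cells

print(placed)  # every cell of the 2x2 grid is now filled
```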

This is the result:

Where to go next: