Wednesday, November 11, 2020

Visualize the Dictionary of Obscure Words with T-SNE

I recently published a Python wrapper around The Dictionary of Obscure Words (originally from this website http://phrontistery.info), and in this post we'll see how to create a visualization that highlights a few entries from the dictionary using the dimensionality reduction technique called T-SNE. The dictionary is available on github at this address https://github.com/JustGlowing/obscure_words and can be installed as follows:
pip install git+https://github.com/JustGlowing/obscure_words
We can now import the dictionary and create a vectorial representation of each word:
import matplotlib.pyplot as plt
import numpy as np
from obscure_words import load_obscure_words
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

obscure_dict = load_obscure_words()
words = np.array(list(obscure_dict.keys()))
definitions = np.array(list(obscure_dict.values()))

vectorizer = TfidfVectorizer(stop_words=None)
X = vectorizer.fit_transform(definitions)

projector = TSNE(random_state=0)
XX = projector.fit_transform(X)
In the snippet above, we compute a Tf-Idf representation using the definition of each word. This gives us a vector for each word in our dictionary, but each of these vectors has as many elements as the total number of distinct words used in all the definitions. Since we can't plot all the features extracted, we reduce our data to 2 dimensions using T-SNE. We now have a mapping that allows us to place each word at a point in a bi-dimensional space. One problem remains: how can we plot the words in a way that we can still read them? Here's a solution:
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import pairwise_distances

def textscatter(x, y, text, k=10):
    X = np.array([x, y]).T
    clustering = KMeans(n_clusters=k)
    scaler = StandardScaler()
    clustering.fit(scaler.fit_transform(X))
    centers = scaler.inverse_transform(clustering.cluster_centers_)
    selected = np.argmin(pairwise_distances(X, centers), axis=0)
    plt.scatter(x, y, s=6, c=clustering.predict(scaler.transform(X)), alpha=.05)
    for i in selected:
        plt.text(x[i], y[i], text[i], fontsize=10)

plt.figure(figsize=(16, 16))
textscatter(XX[:, 0], XX[:, 1], 
            [w+'\n'+d for w, d in zip(words, definitions)], 20)
plt.show()
In the function textscatter we segment all the points created in the previous steps into k clusters using K-Means, then for each cluster we plot the word closest to its center (together with its definition). Given the properties of K-Means we know that the centers are far apart from each other, so with the right choice of k we can maximize the number of words that we can display without overlaps. This is the result of the snippet above:
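To double check that words with related definitions end up close to each other, we can also query the nearest neighbours of a single entry in the bi-dimensional embedding. Here's a quick sketch ('galimatias' is just an example word, assumed to be in the dictionary):
def closest_words(word, n=5):
    # index of the target word and distances to all the other points of the embedding
    idx = np.where(words == word)[0][0]
    dists = pairwise_distances(XX[idx].reshape(1, -1), XX)[0]
    # skip the word itself and return the n closest entries
    return words[np.argsort(dists)[1:n+1]]

print(closest_words('galimatias'))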

Monday, June 29, 2020

Solving the Travelling Salesman Problem with MiniSom

Have you ever heard of the Travelling Salesman Problem? I'm pretty sure you have, but let's refresh our minds by looking at its formulation: "Given a list of points and the distances between each pair of points, what is the shortest possible path that visits each point and returns to the starting point?".
What makes this problem so famous and so studied is the fact that it has no "quick" solution, as the complexity of computing the best path grows as more points are added. And it grows so fast that, even with modern hardware, it can be impossible to compute an exact solution in a reasonable time. In more rigorous terms, it is an NP-hard problem. Many heuristics are known to solve this problem and in this post we will see a solution based on Self-Organizing Maps (SOM). A SOM is a Neural Network that is capable of mapping input points onto a bi-dimensional space, placing points that are close to each other in the same area. Hence, the idea to solve our problem is to train the SOM to map the points to visit onto a one-dimensional map, and then visit the points in order, from the one mapped to the first cell (the one on the left) to the one mapped to the last cell (on the right). Points that are mapped to the same cell are visited consecutively.


Let's generate a set of points to test this idea:
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(10)  # fix the seed for reproducibility
N_points = 20
N_neurons = N_points*2
t = np.linspace(0, np.pi*2, N_points)
x = np.cos(t)+(np.random.rand(N_points)-.5)*.3
y = np.sin(t)*.8+(np.random.rand(N_points)-.5)*.2
points = np.array([x,y]).T
plt.scatter(x, y)



We can now import MiniSom, our favorite implementation of Self-Organizing Maps, and see what path it's able to produce:
from minisom import MiniSom

som = MiniSom(1, N_neurons*2, 2, sigma=10,
              neighborhood_function='gaussian', random_seed=50)
max_iter = 2000
som.pca_weights_init(points)

paths_x = []
paths_y = []
for i in np.arange(max_iter):
    i_point = i % len(points)
    som.update(points[i_point], som.winner(points[i_point]), i, max_iter)
    visit_order = np.argsort([som.winner(p)[1] for p in points])
    visit_order = np.concatenate((visit_order, [visit_order[0]]))
    paths_x.append(points[visit_order][:,0])
    paths_y.append(points[visit_order][:,1])
    
plt.scatter(x, y, label='point to visit')
plt.plot(paths_x[-1], paths_y[-1],
         'C3', linewidth=2, label='path')



In the snippet above we initialized the SOM and ran 2000 training iterations (check this out to discover how that works). At each iteration we saved the path found, and here we visualized the last solution. As we can see, the line covers all the points and it's easy to see at a glance that it's a very good path. However, it's interesting to see how the solution evolves at each iteration:
from matplotlib.animation import FuncAnimation
from IPython.display import HTML

fig, ax = plt.subplots()
plt.scatter(x, y, label='point to visit')
ln, = plt.plot([], [], 'C3', linewidth=2, label='path')
plt.legend()

def update(frame):
    ln.set_data(paths_x[frame], paths_y[frame])
    plt.title('iteration = %d' % frame)
    return ln,

ani = FuncAnimation(fig, update, frames=np.arange(max_iter),
                    interval=10, repeat=False, blit=False)
HTML(ani.to_html5_video())



Here we note that the initial path is very messy, with various loops, and that the more the network is trained, the better the solution becomes. Notice that the snippet above uses the HTML object from the IPython library, which automatically displays the video when the code is run in a Jupyter notebook. The video can also be saved to a specific location using ani.save('filename.mp4').
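As a final check, we can also compute the total length of the path found. Here's a quick sketch reusing paths_x and paths_y from the training loop above:
# total length of the final path, useful to compare different runs or parameter choices
final_path = np.array([paths_x[-1], paths_y[-1]]).T
path_length = np.sum(np.linalg.norm(np.diff(final_path, axis=0), axis=1))
print('total length of the final path: %.3f' % path_length)
The lower this value, the shorter the tour.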

Sunday, May 17, 2020

Neural Networks Regularization made easy with sklearn and matplotlib

Using regularization has many benefits, the most common being the reduction of overfitting and the mitigation of multicollinearity issues. All of this is covered very well in the literature, especially in (Hastie et al.). However, without touching too many details, we can give a very straightforward interpretation of regularization: it is a way to constrain a model so that it learns less from the data. In this post we will experimentally show the effects of regularization on a Neural Network (Multilayer Perceptron), validating this interpretation.

Let's define a goal for our Neural Network. We have a dataset H and we want to build a model able to reconstruct the same data. More formally, we want to build a function f that takes H as input and returns the same values, or an approximation as close as possible to H. We can say that we want f to minimize the following error

E = ||f(H) - H||^2
To begin our experiment we build a data matrix H that contains the coordinates of a stylized star:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

x= [np.cos(np.pi/2), 2/5*np.cos(np.pi/2+np.pi/5), np.cos(np.pi/2+2*np.pi/5), 
    2/5*np.cos(np.pi/2+3*np.pi/5), np.cos(np.pi/2+4*np.pi/5), 
    2/5*np.cos(3*np.pi/2), np.cos(np.pi/2+6*np.pi/5), 
    2/5*np.cos(np.pi/2+7*np.pi/5), np.cos(np.pi/2+8*np.pi/5),
    2/5*np.cos(np.pi/2+9*np.pi/5), np.cos(np.pi/2)]

y=[np.sin(np.pi/2), 2/5*np.sin(np.pi/2+np.pi/5), np.sin(np.pi/2+2*np.pi/5),
   2/5*np.sin(np.pi/2+3*np.pi/5), np.sin(np.pi/2+4*np.pi/5),
   2/5*np.sin(3*np.pi/2), np.sin(np.pi/2+6*np.pi/5),
   2/5*np.sin(np.pi/2+7*np.pi/5), np.sin(np.pi/2+8*np.pi/5),
   2/5*np.sin(np.pi/2+9*np.pi/5), np.sin(np.pi/2)]

xp = np.linspace(0, 1, len(x))
x = np.log(np.interp(np.linspace(0, 1, len(x)*10), xp, x)+1)

yp = np.linspace(0, 1, len(y))
y = np.interp(np.linspace(0, 1, len(y)*10), yp, y)
#y[::2] += .1

H = np.array([x, y]).T
plt.plot(H[:,0], H[:,1], '-o')


Now the matrix H contains the x coordinates of our star in the first column and the y coordinates in the second. With the help of sklearn, we can now train a Neural Network and plot the result:
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import minmax_scale

H = minmax_scale(H)

plt.figure()
f = MLPRegressor(hidden_layer_sizes=(200, 200, 200), alpha=0)
f.fit(H, H)
result = f.predict(H)
plt.plot(result[:,0], result[:,1], 'C3', linewidth=3, label='Neural Network')
plt.plot(H[:,0], H[:,1], 'o', alpha=.3, label='original')
plt.legend()
#plt.xlim([-0.1, 1.1])
#plt.ylim([-0.1, 1.1])
plt.show()



In the snippet above we created a Neural Network with 3 hidden layers of 200 neurons each. Then, we trained the model using H as both input and output data. In the chart we compare the original data with the estimation. It's easy to see that there are small differences between the two stars, but they are very close.
Here it's important to notice that we initialized MLPRegressor using alpha=0. This parameter controls the regularization of the model: the higher its value, the more regularization we apply. To understand how alpha affects the learning of the model, we need to add a term to the error introduced above:

E = ||f(H) - H||^2 + alpha*||W||^2

where W is a matrix of all the weights in the network. The error now takes into account not only the difference between the output of the function and the data, but also the magnitude of the weights of the connections of the network. Hence, the higher alpha, the less the model is free to learn. If we set alpha to 20 in our experiment we have the following result:
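Here's a minimal sketch of this variant, reusing H and the imports from the snippet above (everything else stays the same):
plt.figure()
f_reg = MLPRegressor(hidden_layer_sizes=(200, 200, 200), alpha=20)
f_reg.fit(H, H)
result_reg = f_reg.predict(H)
plt.plot(result_reg[:,0], result_reg[:,1], 'C3', linewidth=3, label='Neural Network (alpha=20)')
plt.plot(H[:,0], H[:,1], 'o', alpha=.3, label='original')
plt.legend()
plt.show()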



We still achieve an approximation of the star, but the output of our model is smaller than before and the edges of the star are smoother.
Increasing alpha gradually is a good way to understand the effects of regularization:
from matplotlib.animation import FuncAnimation
from IPython.display import HTML

fig, ax = plt.subplots()
ln, = plt.plot([], [], 'C3', linewidth=3, label='Neural Network')
plt.plot(H[:,0], H[:,1], 'o', alpha=.3, label='original')
plt.legend()

def update(frame):
    f = MLPRegressor(hidden_layer_sizes=(200, 200, 200), alpha=frame)
    f.fit(H, H)
    result = f.predict(H)
    ln.set_data(result[:,0], result[:,1])
    plt.title('alpha = %.2f' % frame)
    return ln,

ani = FuncAnimation(fig, update, frames=np.linspace(0, 40, 100), blit=True)
HTML(ani.to_html5_video())



Here we vary alpha from 0 to 40 and plot the result for each value. We notice that not only does the star become smaller and smoother as alpha increases, but also that the network tends to preserve the long lines as much as possible, moving away from the edges, which weigh less on the error function. Finally, we see that the result degenerates into a single point when alpha is too high.
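To make the effect of the penalty term more tangible, here's a minimal sketch (reusing H and MLPRegressor from above) that prints the overall magnitude of the weights for a few values of alpha; the exact numbers will vary from run to run:
for a in [0, 1, 5, 20, 40]:
    f = MLPRegressor(hidden_layer_sizes=(200, 200, 200), alpha=a)
    f.fit(H, H)
    # total L2 norm of all the connection weights of the trained network
    w_norm = np.sqrt(sum(np.sum(w**2) for w in f.coefs_))
    print('alpha = %5.1f  ->  ||W|| = %.2f' % (a, w_norm))
As expected, the norm of the weights shrinks as alpha grows.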

Friday, May 1, 2020

Tornado plots with matplotlib

Lately there's been a bit of attention on charts where the values of a time series are plotted against their point-by-point change, thanks to this rather colorful and cluttered Tornado plot.

In this post we will see how to make one of those charts with our favorite plotting library, matplotlib, and we'll also try to understand how to read them.
Let's start by loading the records of the concentration of CO2 in the atmosphere and aggregating the values at the month level. After that, we can plot the value of the concentration against the change compared to the previous month straight away:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

data_url = 'ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_weekly_mlo.txt'
co2_data = pd.read_csv(data_url, sep='\s+', comment='#', na_values=-999.99,
                       names=['year', 'month', 'day', 'decimal', 'ppm', 
                       'days', '1_yr_ago',  '10_yr_ago', 'since_1800'])
co2_data = co2_data.groupby([co2_data.year, co2_data.month]).mean()
idx = co2_data.index.to_flat_index()
co2_data['timestamp'] = [pd.Timestamp(year=y, month=m, day=1) for y, m in idx]
co2_data.set_index('timestamp', inplace=True)
co2_data = co2_data['2018-04-01':]


plt.plot(co2_data.ppm.diff(), co2_data.ppm, '-o')


The result is nicely loopy. This is because we plotted data for 2 years and the time series presents a yearly seasonality. Still, apart from the loop, it's quite hard to understand anything without adding a few details. Let's print the names of the months and highlight the beginning and the end of the time series:
from calendar import month_abbr
import numpy as np

plt.figure(figsize=(8, 12))
diffs = co2_data.ppm.diff()
plt.plot(diffs, co2_data.ppm, '-o', alpha=.6, zorder=0)
plt.scatter([diffs[1]], [co2_data.ppm[1]],
            c='C0', s=140)
plt.scatter([diffs[-1]], [co2_data.ppm[-1]],
            c='C0', s=140)

for d, v, ts in zip(diffs,
                    co2_data.ppm,
                    co2_data.index):
    plt.text(d, v-.15, '%s\n%d' % (month_abbr[ts.month], ts.year),
             horizontalalignment='left', verticalalignment='top')

plt.xlim([-np.abs(diffs).max()-.1,
          np.abs(diffs).max()+.1])
plt.ylabel('CO2 concentrations (ppm)')
plt.xlabel('change from previous month (ppm)')


The situation is much improved. Now we can see that from June to September (values on the left) the concentrations decrease, while from October to May (values on the right) they increase. If we look at the points where the line of zero change is crossed, we can spot when there's a change of trend. We see that the trend changes from negative to positive from September to October, and the opposite happens from May to June.

Would it have been easier to observe this by simply plotting the concentration against time? That's easy to judge looking at the chart below.
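A minimal sketch of that comparison chart, reusing co2_data from the snippets above:
plt.figure(figsize=(10, 4))
plt.plot(co2_data.index, co2_data.ppm, '-o')
plt.ylabel('CO2 concentrations (ppm)')
plt.xlabel('month')
plt.show()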

Tuesday, April 14, 2020

Recoloring NoIR images on the Raspberry Pi with OpenCV

Not too long ago I was gifted a Raspberry Pi camera; after taking some pictures I realized that it produced very weird colors, and I discovered that it was a NoIR camera! It means that it has no infrared filter and that it can take pictures in the dark using an infrared LED. Since I never found an application that required taking pictures without proper lighting, I started wondering if I could recolor the images by processing them with some Python magic. It's clear that the problem is ill posed: once the camera takes a picture sensing the wrong colors, the original colors are lost. But it's also true that it's possible to transfer the coloring from one image to another. This, old but gold, paper shows a technique that is simple enough to be implemented on a pi. Hence the idea: create a script that recolors the images from the NoIR camera using the colors from images taken with a proper infrared filter.
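The technique in that paper essentially matches the per-channel mean and standard deviation of the two images in a Lab-like color space. Here's a minimal sketch of the idea (not the exact implementation of the package used below, which also offers clipping and scaling options):
import cv2
import numpy as np

def naive_color_transfer(source, target):
    # move to the LAB color space, where matching statistics works better than in BGR
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        # give each channel of the target the mean and standard deviation of the source
        tgt[..., c] = (tgt[..., c] - tgt[..., c].mean()) / (tgt[..., c].std() + 1e-6)
        tgt[..., c] = tgt[..., c] * src[..., c].std() + src[..., c].mean()
    recolored = np.clip(tgt, 0, 255).astype(np.uint8)
    return cv2.cvtColor(recolored, cv2.COLOR_LAB2BGR)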

Here are the elements I gathered to start experimenting:
  • A nice implementation of the color transfer algorithm, easy to install and run on the pi.
  • An installation of OpenCV on the pi. It's possible to have a fully optimized OpenCV installation for your pi by building it from source, but for this project it's okay to install the library from binaries (this command will do the trick: sudo apt-get install python-opencv).
  • A camera stand that I built myself recycling the components of an unused usb fan.


The Python script to acquire and recolor the images turned out to be pretty compact:
from picamera.array import PiRGBArray
from picamera import PiCamera
from sys import argv
# get this with: pip install color_transfer
from color_transfer import color_transfer 

import time
import cv2

# init the camera
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# camera to warmup
time.sleep(0.1)

# capture
camera.capture(rawCapture, format="bgr")
captured = rawCapture.array

# import the color source
color_source = cv2.imread(argv[1])

# transfer the color
result = color_transfer(color_source, captured,
                        clip=True, preserve_paper=False)

cv2.imwrite(argv[2], result)
This script captures an image from the camera and reads another image, which will be the color source, from disk. Then, it recolors the captured image and saves the result. The script takes two parameters as input: the color source and the name of the output file. Here's an example of how to run the script on the pi:
$ python capture.py color_source.jpg result.jpg
Here are some sample pictures that were recolored. In each of the figures below the color source is on the left, the image from the NoIR camera is in the middle and the final result is on the right.


Here the source has vivid colors and the details are nice and sharp, while the image from the NoIR camera is almost monochromatic. In the recolored image the color of the curtain and the wall were recovered, though the image still has quite a low contrast.

This time the result is much sharper and the colors are more intense, even more intense than in the source.


This result is particularly interesting because the NoIR image shows very nasty colors, as there was quite a lot of sunlight when the picture was taken. Recoloring the image I could recover the green of some trees and the blue of the sky; however, the walls and the ground acquired a greenish appearance while some plants look purple.

In conclusion, this turned out to be a fun experiment that also produced some encouraging results. Next step? Recoloring the images with more modern Deep Learning techniques.

Sunday, April 5, 2020

What makes a word beautiful?

What makes a word beautiful? Answering this question is not easy because of the inherent complexity and ambiguity in defining what it means to be beautiful. Let's tackle the question with a quantitative approach, introducing the Aesthetic Potential, a metric that aims to quantify the beauty of a word w as follows:

AP(w) = avg( s(w, w+) ) - avg( s(w, w-) )

where w+ is a word labelled as beautiful, w- is a word labelled as ugly, and s is a similarity function between two words. In a nutshell, AP is the difference between the average similarity to beautiful words and the average similarity to ugly words. This metric is positive for beautiful words and negative for ugly ones.
Before we can compute the Aesthetic Potential we need a similarity function s and a set of words labeled as beautiful and ugly. The similarity function that we will use considers the similarity of two words as the maximum Lin similarity between all the WordNet synsets of the two words in input (I will not introduce WordNet or the Lin similarity for brevity, but the curious reader is invited to follow the links above). Here's the Python implementation:
import numpy as np
from itertools import product
from nltk.corpus import wordnet, wordnet_ic
brown_ic = wordnet_ic.ic('ic-brown.dat')

def similarity(word1, word2):
    """
    returns the similarity between word1 and word2 as the maximum
    Lin similarity between all the synsets of the two words.
    """
    syns1 = wordnet.synsets(word1)
    syns2 = wordnet.synsets(word2)
    sims = []
    for sense1, sense2 in product(syns1, syns2):
        if sense1._pos == sense2._pos and not sense1._pos in ['a', 'r', 's']:
            d = wordnet.lin_similarity(sense1, sense2, brown_ic)
            sims.append(d)            
    if len(sims) > 0 and not np.all(np.isnan(sims)):
        return np.nanmax(sims)
    return 0 # no similarity

print('s(cat, dog) =', similarity('cat', 'dog'))
print('s(cat, bean) = ', similarity('cat', 'bean'))
print('s(coffee, bean) = ', similarity('coffee', 'bean'))
s(cat, dog) = 0.8768009843733973
s(cat, bean) = 0.3079964716744931
s(coffee, bean) = 0.788150820826125
This function returns a value between 0 and 1, where high values indicate that the two words are highly similar and low values indicate that they are not. Looking at the output of the function for the three pairs of test words, we note that it considers "cat" and "dog" fairly similar, while "cat" and "bean" are not similar. Finally, "coffee" and "bean" are considered similar, but not as similar as "cat" and "dog".
Now we need some words labeled as beautiful and some as ugly. Here I propose two lists of words inspired by the ones used in (Jacobs, 2017) for the German language:
beauty = ['amuse',  'art', 'attractive',
          'authentic', 'beautiful', 'beauty',
          'bliss', 'cheerful', 'culture',
          'delight', 'emotion', 'enjoyment',
          'enthusiasm', 'excellent', 'excited',
          'fascinate', 'fascination', 'flower',
          'fragrance', 'good', 'grace',
          'graceful', 'happy', 'heal',
          'health', 'healthy', 'heart',
          'heavenly', 'hope', 'inspire',
          'light', 'joy', 'love',
          'lovely', 'lullaby', 'lucent',
          'loving', 'luck', 'magnificent',
          'music', 'muse', 'life',
          'paradise', 'perfect', 'perfection',
          'picturesque', 'pleasure',
          'poetic', 'poetry', 'pretty',
          'protect', 'protection',
          'rich', 'spring', 'smile',
          'summer', 'sun', 'surprise',          
          'wealth', 'wonderful']

ugly = ['abuse', 'anger', 'imposition', 'anxiety',
        'awkward', 'bad', 'unlucky', 'blind',
        'chaotic', 'crash', 'crazy',
        'cynical', 'dark', 'disease',
        'deadly', 'decrepit', 'death',
        'despair', 'despise', 'disgust',
        'dispute', 'depression', 'dull',
        'evil', 'fail', 'hate',
        'hideous', 'horrible', 'horror',
        'haunted', 'illness', 'junk',
        'kill', 'less',
        'malicious', 'misery', 'murder',
        'nasty', 'nausea', 'pain',
        'piss', 'poor', 'poverty',
        'puke', 'punishment', 'rot',
        'scam', 'scare', 'shame',
        'spoil', 'spite', 'slaughter',
        'stink', 'terrible', 'trash',
        'trouble', 'ugliness', 'ugly',
        'unattractive', 'virus']
A remark is necessary here. The AP strongly depends on these two lists, and the fact that I made them on my own biases the results towards my personal preferences. If you're interested in a more general approach to labelling your data, the work published by Westbury et al. in 2014 is a good place to start.
We now have all the pieces to compute our Aesthetic Potential:
def aesthetic_potential(word, beauty, ugly):
    """
    returns the aesthetic potential of word
    beauty and ugly must be lists of words
    labelled as beautiful and ugly respectively
    """
    b = np.nanmean([similarity(word, w) for w in beauty])
    u = np.nanmean([similarity(word, w) for w in ugly])
    return (b - u)*100

print('AP(smile) =', aesthetic_potential('smile', beauty, ugly))
print('AP(conjuncture) =', aesthetic_potential('conjuncture', beauty, ugly))
print('AP(hassle) =', aesthetic_potential('hassle', beauty, ugly))
AP(smile) = 2.6615214570040195
AP(conjuncture) = -3.418813636728729e-299
AP(hassle) = -2.7675826881674497
It is a direct implementation of the equation introduced above; the only difference is that the result is multiplied by 100 to express the metric as a percentage, for readability. Looking at the results, we see that the metric is positive for the word "smile", indicating that the word tends toward the beautiful side. It's negative for "hassle", meaning it tends toward the ugly side. It's practically 0 for "conjuncture", meaning that we can consider it a neutral word. To better understand these results we can compute the metric for a set of words and plot it against the probability of observing that value of the metric:
test_words = ['hate', 'rain',
         'earth', 'love', 'child',
         'sun', 'patience',
         'coffee', 'regret',
         'depression', 'obscure', 'bat', 'woman',
         'dull', 'nothing', 'disillusion',
         'abort', 'blurred', 'cruelness', #'hassle',
         'stalking', 'relevance',
         'conjuncture', 'god', 'moon',
         'humorist', 'idea', 'poisoning']

ap = [aesthetic_potential(w.lower(), beauty, ugly) for w in test_words]

from scipy.stats import norm
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex, LinearSegmentedColormap, Normalize
%matplotlib inline

p_score = norm.pdf(ap, loc=0.0, scale=0.7) #params estimated on a larger sample
p_score = p_score / p_score.sum()

normalizer = Normalize(vmin=-10, vmax=10)
colors = ['crimson', 'crimson', 'silver', 'deepskyblue', 'deepskyblue']
cmap = LinearSegmentedColormap.from_list('beauty', colors=colors)

plt.figure(figsize=(8, 12))
plt.title('What makes a word beautiful?',
          loc='left', color='gray', fontsize=22)
plt.scatter(p_score, ap, c='gray', marker='.', alpha=.6)
for prob, potential, word in zip(p_score, ap, test_words):
    plt.text(prob, potential, word.lower(),
             fontsize=(np.log10(np.abs(potential)+2))*30, alpha=.8,
             color=cmap(normalizer(potential)))
plt.text(-0.025, 6, 'beautiful', va='center',
         fontsize=20, rotation=90, color='deepskyblue')
plt.text(-0.025, -6, 'ugly', va='center',
         fontsize=20, rotation=90, color='crimson')
plt.xlabel('P(Aesthetic Potential)', fontsize=20)
plt.ylabel('Aesthetic Potential', fontsize=20)
plt.gca().tick_params(axis='both', which='major', labelsize=14)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.show()



This chart gives us a better insight into the meaning of the values we just computed. We note that high probability values are found around 0, hence most words in the vocabulary are neutral. Values above 2 and below -2 have quite a low probability, which tells us that words associated with these values have a strong Aesthetic Potential. From this chart we can see that the words "idea" and "sun" are considered beautiful, while "hate" and "poisoning" are ugly (who would disagree with that? :)
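As a complement to the chart, we can also rank the test words by their Aesthetic Potential with a quick sketch that reuses the ap values computed above:
# rank the test words from the most beautiful to the ugliest
for word, potential in sorted(zip(test_words, ap), key=lambda t: t[1], reverse=True):
    print('%15s  %+6.2f' % (word, potential))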

Tuesday, March 17, 2020

Ridgeline plots in pure matplotlib

A Ridgeline plot (also called a Joyplot) allows us to compare several statistical distributions. In this plot each distribution is shown as a density curve; all the distributions are aligned to the same horizontal axis and, sometimes, presented with a slight overlap.

There are many options to make a Ridgeline plot in Python (joypy being one of them), but I decided to make my own function using matplotlib to have full flexibility and minimal dependencies:
from scipy.stats import gaussian_kde, norm
import numpy as np
import matplotlib.pyplot as plt

def ridgeline(data, overlap=0, fill=True, labels=None, n_points=150):
    """
    Creates a standard ridgeline plot.

    data, list of lists.
    overlap, overlap between distributions. 1 max overlap, 0 no overlap.
    fill, matplotlib color to fill the distributions.
    n_points, number of points to evaluate each distribution function.
    labels, values to place on the y axis to describe the distributions.
    """
    if overlap > 1 or overlap < 0:
        raise ValueError('overlap must be in [0 1]')
    xx = np.linspace(np.min(np.concatenate(data)),
                     np.max(np.concatenate(data)), n_points)
    curves = []
    ys = []
    for i, d in enumerate(data):
        pdf = gaussian_kde(d)
        y = i*(1.0-overlap)
        ys.append(y)
        curve = pdf(xx)
        if fill:
            plt.fill_between(xx, np.ones(n_points)*y, 
                             curve+y, zorder=len(data)-i+1, color=fill)
        plt.plot(xx, curve+y, c='k', zorder=len(data)-i+1)
    if labels:
        plt.yticks(ys, labels)
The function takes as input a list of datasets, where each dataset contains the values used to estimate a single distribution. Each distribution is estimated using Kernel Density Estimation, just as we've seen previously, and plotted with an increasing vertical offset.

Let's generate data from a few normal distributions with different means and have a look at the output of the function:
data = [norm.rvs(loc=i, scale=2, size=50) for i in range(8)]
ridgeline(data, overlap=.85, fill='y')


Not too bad, we can clearly see that each distribution has a different mean. Let's apply the function to some real world data:
import pandas as pd
data_url = 'ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_weekly_mlo.txt'
co2_data = pd.read_csv(data_url, sep='\s+', comment='#', na_values=-999.99,
                       names=['year', 'month', 'day', 'decimal', 'ppm', 
                       'days', '1_yr_ago',  '10_yr_ago', 'since_1800'])
co2_data = co2_data[co2_data.year >= 2000]
co2_data = co2_data[co2_data.year != 2020]

plt.figure(figsize=(8, 10))
grouped = [(y, g.ppm.dropna().values) for y, g in co2_data.groupby('year')]
years, data = zip(*grouped)
ridgeline(data, labels=years, overlap=.85, fill='tomato')
plt.title('Distribution of CO2 levels per year since 2000',
          loc='left', fontsize=18, color='gray')
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.xlabel('ppm')
plt.xlim((co2_data.ppm.min(), co2_data.ppm.max()))
plt.ylim((0, 3.1))
plt.grid(zorder=0)
plt.show()


In the snippet above we downloaded the measurements of the concentration of CO2 in the atmosphere (the same data was also used here) and grouped the values by year. Then, we generated a Ridgeline plot that shows the distribution of CO2 levels for each year since 2000. We easily note that the average concentration went from about 370 ppm to over 410 ppm, gradually increasing over the years observed. We also note that the span of each distribution is approximately 10 ppm.
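As a quick numeric check of these observations, we can also summarize the grouped data directly (a minimal sketch reusing co2_data from the snippet above):
# yearly mean and span of the CO2 concentrations
yearly = co2_data.groupby('year').ppm.agg(['mean', 'min', 'max'])
yearly['span'] = yearly['max'] - yearly['min']
print(yearly.round(1))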