Wednesday, May 30, 2018

Visualizing UK Carbon Emissions

Have you ever wanted to check carbon emissions in the UK and never had an easy way to do it? Now you can use the Official Carbon Intensity API developed by the National Grid. Let's see an example of how to use the API to summarize the emissions for the month of May. First, we download the data with a request to the API:
import urllib.request
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

period = ('2018-05-01T00:00Z', '2018-05-28T00:00Z')
url = 'https://api.carbonintensity.org.uk/intensity/%s/%s'
url = url % period
response = urllib.request.urlopen(url)
data = json.loads(response.read())['data']
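If you're curious about the raw structure returned by the API, pretty-printing the first record shows the fields we are about to extract:
print(json.dumps(data[0], indent=2))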
We organize the result in a DataFrame indexed by timestamps:
carbon_intensity = pd.DataFrame()
carbon_intensity['timestamp'] = [pd.to_datetime(d['from']) for d in data]
carbon_intensity['intensity'] = [d['intensity']['actual'] for d in data]
carbon_intensity['classification'] = [d['intensity']['index'] for d in data]
carbon_intensity.set_index('timestamp', inplace=True)
From the classification provided we extract the thresholds used to label emissions as low, moderate or high:
thresholds = carbon_intensity.groupby(by='classification').min()
threshold_high = thresholds[thresholds.index == 'high'].values[0][0]
threshold_moderate = thresholds[thresholds.index == 'moderate'].values[0][0]
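As a sanity check we can print the thresholds just extracted (the exact values depend on the data returned for the chosen period):
print('moderate above:', threshold_moderate)
print('high above:', threshold_high)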
Now we group the data by hour of the day and create a boxplot that shows some interesting facts about carbon emissions in May:
hour_group = carbon_intensity.groupby(carbon_intensity.index.hour)

plt.figure(figsize=(12, 6))
plt.title('UK Carbon Intensity in May 2018')
plt.boxplot([g.intensity for _,g in hour_group], 
            medianprops=dict(color='k'))

ymin, ymax = plt.ylim()

plt.fill_between(x=np.arange(26), 
                 y1=np.ones(26)*threshold_high, 
                 y2=np.ones(26)*ymax, 
                 color='crimson', 
                 alpha=.3, label='high')

plt.fill_between(x=np.arange(26), 
                 y1=np.ones(26)*threshold_moderate, 
                 y2=np.ones(26)*threshold_high, 
                 color='skyblue', 
                 alpha=.5, label='moderate')

plt.fill_between(x=np.arange(26), 
                 y1=np.ones(26)*threshold_moderate, 
                 y2=np.ones(26)*ymin, 
                 color='palegreen', 
                 alpha=.3, label='low')

plt.ylim(ymin, ymax)
plt.ylabel('carbon intensity (gCO2/kWh)')
plt.xlabel('hour of the day')
plt.legend(loc='upper left', ncol=3,
           shadow=True, fancybox=True)
plt.show()

We notice that the medians almost always fall in the moderate emissions region, and in two cases they even fall in the low region. In the early afternoon the medians reach their minimum, while the maximum is reached in the evening. It's nice to see that most of the hours present outliers in the low emissions region and only a few outliers are in the high region.
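To double check these observations numerically, a small sketch like the following labels each hourly median with the same thresholds used in the chart:

hourly_medians = hour_group.intensity.median()
labels = pd.cut(hourly_medians,
                bins=[-np.inf, threshold_moderate, threshold_high, np.inf],
                labels=['low', 'moderate', 'high'])
print(pd.concat([hourly_medians, labels.rename('label')], axis=1))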

Do you want to know more about boxplots? Check this out!

Monday, October 9, 2017

Spotting outliers with Isolation Forest using sklearn

Isolation Forest is an algorithm to detect outliers. It partitions the data using a set of trees and provides an anomaly score based on how isolated each point is in the resulting structure; the anomaly score is then used to tell outliers apart from normal observations. In this post we will see an example of how IsolationForest behaves in a simple case. First, we will generate 1-dimensional data from a bimodal distribution, then we will compare the anomaly score with the distribution of the data and highlight the regions where the outliers fall.

To start, let's generate the data and plot the histogram:
import numpy as np
import matplotlib.pyplot as plt

x = np.concatenate((np.random.normal(loc=-2, scale=.5,size=500), 
                    np.random.normal(loc=2, scale=.5, size=500)))

plt.hist(x, density=True)
plt.xlim([-5, 5])
plt.show()

Here we note that there are three regions where the data has a low probability to appear: one on the right side of the distribution, another on the left and a third around zero. Let's see if using IsolationForest we are able to identify these three regions:

from sklearn.ensemble import IsolationForest

isolation_forest = IsolationForest(n_estimators=100)
isolation_forest.fit(x.reshape(-1, 1))

xx = np.linspace(-6, 6, 100).reshape(-1,1)
anomaly_score = isolation_forest.decision_function(xx)
outlier = isolation_forest.predict(xx)

plt.plot(xx, anomaly_score, label='anomaly score')
plt.fill_between(xx.T[0], np.min(anomaly_score), np.max(anomaly_score), 
                 where=outlier==-1, color='r', 
                 alpha=.4, label='outlier region')
plt.legend()
plt.ylabel('anomaly score')
plt.xlabel('x')
plt.xlim([-5, 5])
plt.show()

In the snippet above we have trained our IsolationForest using the data generated, computed the anomaly score for each observation and classified each observation as an outlier or a non-outlier. The chart shows the anomaly scores and the regions where the outliers are. As expected, the anomaly score reflects the shape of the underlying distribution and the outlier regions correspond to low probability areas.
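Once trained, the same model can score observations it has never seen; here's a minimal sketch (the test values are arbitrary):

new_points = np.array([-2.0, 0.0, 4.5]).reshape(-1, 1)
print(isolation_forest.predict(new_points))  # -1 flags an outlier, 1 an inlier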

Thursday, July 13, 2017

Dates in Pandas Cheatsheet

Lately I've been working a lot with dates in Pandas so I decided to make this little cheatsheet with the commands I use the most.

Importing a csv using a custom function to parse dates

import pandas as pd
import numpy as np
from datetime import datetime

def parse_month(month):
    """
    Converts a string in the format "yyyyMmm" into a datetime.
    Example: parse_month("2007M02") returns datetime(2007, 2, 1)
    """
    return datetime(int(month[:4]), int(month[-2:]), 1)

temperature = pd.read_csv('TempUSA.csv', parse_dates=['Date'], 
                          date_parser=parse_month, 
                          index_col=['Date'], # will become an index
                          # use a subset of the columns
                          usecols=['Date', 
                                   'LosAngelesMax', 'LosAngelesMin'])
print(temperature)
            LosAngelesMax  LosAngelesMin
Date                                    
2000-01-01           19.6           10.0
2000-02-01           18.9           10.1
2000-03-01           18.6           10.1
2000-04-01           20.2           12.5
2000-05-01           21.9           14.2

Format the dates in a chart

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
plt.plot(temperature['LosAngelesMax'])
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
plt.show()
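If the automatic tick placement is not satisfying, a locator can be set as well; a small sketch that puts a tick at the start of every third month (MonthLocator also comes from matplotlib.dates):

plt.plot(temperature['LosAngelesMax'])
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=3))
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
plt.show()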

Here's the reference of the date format directives. ISO compliant format: %Y-%m-%dT%H:%M:%S.

Group the DataFrame by month

print(temperature.groupby([temperature.index.month]).mean())
      LosAngelesMax  LosAngelesMin
Date                              
1         20.092308       8.992308
2         19.223077       9.276923
3         19.253846      10.492308
4         19.992308      11.461538
5         21.076923      13.761538
6         22.123077      15.800000
7         23.892308      17.315385
8         24.246154      17.530769
9         24.384615      16.846154
10        23.330769      14.630769
11        21.950000      11.241667
12        19.241667       8.683333
The resulting DataFrame is indexed by month.
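Note that grouping by index.month aggregates the same month across different years. If we want one row per calendar month instead, a resample sketch like the following keeps the years separate ('MS' is the month-start frequency):

print(temperature.resample('MS').mean())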

Merging two DataFrames indexed with timestamps that don't match exactly

date_range_a = pd.date_range('2007-01-01 01:00', 
                             '2007-01-01 3:00', freq='1h')
date_range_b = date_range_a + pd.Timedelta(10, 'm')
df_a = pd.DataFrame(np.arange(len(date_range_a)), 
                    columns=['a'], index=date_range_a)
df_b = pd.DataFrame(['x', 'y', 'z'], 
                    columns=['b'], index=date_range_b)

print('left DataFrame')
print(df_a)
print('\nright DataFrame')
print(df_b)
print('\nmerge_AsOf result')
print(pd.merge_asof(df_a, df_b, direction='nearest', 
                    left_index=True, right_index=True))
left DataFrame
                     a
2007-01-01 01:00:00  0
2007-01-01 02:00:00  1
2007-01-01 03:00:00  2

right DataFrame
                     b
2007-01-01 01:10:00  x
2007-01-01 02:10:00  y
2007-01-01 03:10:00  z

merge_AsOf result
                     a  b
2007-01-01 01:00:00  0  x
2007-01-01 02:00:00  1  y
2007-01-01 03:00:00  2  z
The DataFrames have been aligned according to the index on the left.
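merge_asof also accepts a tolerance parameter to discard matches that are too far apart; here's a sketch that only accepts matches within 15 minutes (the threshold here is arbitrary):

print(pd.merge_asof(df_a, df_b, direction='nearest', 
                    left_index=True, right_index=True, 
                    tolerance=pd.Timedelta(minutes=15)))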

Aligning two DataFrames

aligned = df_a.align(df_b)

print('left aligned')
print(aligned[0])
print('\nright aligned')
print(aligned[1])
print('\ncombination')
aligned[0]['b'] = aligned[1]['b']
print(aligned[0])
left aligned
                       a   b
2007-01-01 01:00:00  0.0 NaN
2007-01-01 01:10:00  NaN NaN
2007-01-01 02:00:00  1.0 NaN
2007-01-01 02:10:00  NaN NaN
2007-01-01 03:00:00  2.0 NaN
2007-01-01 03:10:00  NaN NaN

right aligned
                      a    b
2007-01-01 01:00:00 NaN  NaN
2007-01-01 01:10:00 NaN    x
2007-01-01 02:00:00 NaN  NaN
2007-01-01 02:10:00 NaN    y
2007-01-01 03:00:00 NaN  NaN
2007-01-01 03:10:00 NaN    z

combination
                       a    b
2007-01-01 01:00:00  0.0  NaN
2007-01-01 01:10:00  NaN    x
2007-01-01 02:00:00  1.0  NaN
2007-01-01 02:10:00  NaN    y
2007-01-01 03:00:00  2.0  NaN
2007-01-01 03:10:00  NaN    z
The timestamps are now aligned across both DataFrames and the unknown values have been filled with NaNs. When working with numeric values, the missing values can be filled with interpolation:
print(aligned[0].a.interpolate())
2007-01-01 01:00:00    0.0
2007-01-01 01:10:00    0.5
2007-01-01 02:00:00    1.0
2007-01-01 02:10:00    1.5
2007-01-01 03:00:00    2.0
2007-01-01 03:10:00    2.0
Name: a, dtype: float64
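Since the gaps between the aligned timestamps are not equally spaced, it might be worth weighting the interpolation by the time elapsed rather than by position; pandas supports this through the method parameter:

print(aligned[0].a.interpolate(method='time'))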
Categorical values can be filled using the fillna method:
print(aligned[1].b.fillna(method='bfill'))
2007-01-01 01:00:00    x
2007-01-01 01:10:00    x
2007-01-01 02:00:00    y
2007-01-01 02:10:00    y
2007-01-01 03:00:00    z
2007-01-01 03:10:00    z
Name: b, dtype: object
The method bfill propagates the next valid observation backward, while ffill propagates the last valid observation forward.
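For completeness, here's the forward fill counterpart on the same column (note that the first value stays NaN, since there's no previous valid observation to propagate):

print(aligned[1].b.fillna(method='ffill'))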

Convert a Timedelta to hours

td = pd.Timestamp('2017-07-05 16:00') - pd.Timestamp('2017-07-05 12:00')
print(td / pd.Timedelta(1, unit='h'))
4.0
To convert to days, minutes and so on, one just needs to change the unit. Here are the accepted values: D, h, m, s, ms, us, ns.
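For example, the same four hour difference expressed in minutes:

print(td / pd.Timedelta(1, unit='m'))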

Convert pandas timestamps to unix timestamps

unix_ts = pd.date_range('2017-01-01 1:00', 
                        '2017-01-01 2:00', 
                        freq='30min').astype(np.int64) // 10**9
print(unix_ts)
Int64Index([1483232400, 1483234200, 1483236000], dtype='int64')
To convert to milliseconds, divide by 10**6 instead of 10**9.

Convert unix timestamps to pandas timestamps

print(pd.to_datetime(unix_ts, unit='s'))
DatetimeIndex(['2017-01-01 01:00:00', '2017-01-01 01:30:00',
               '2017-01-01 02:00:00'],
              dtype='datetime64[ns]', freq=None)
To convert from timestamps in milliseconds change the unit to 'ms'.
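A quick sketch of the millisecond case, converting the same range there and back (multiplying by 1000 just fakes a millisecond timestamp):

unix_ms = unix_ts * 1000
print(pd.to_datetime(unix_ms, unit='ms'))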