
Pandas pipelines

Method chaining is a great way of writing pandas code, as it allows us to go from:

raw_data = pd.read_parquet(...)
data_with_types = set_dtypes(raw_data)
data_without_outliers = remove_outliers(data_with_types)

to

data = (
    pd.read_parquet(...)
    .pipe(set_dtypes)
    .pipe(remove_outliers)
)

But it does come at a cost, mostly to our ability to debug long pipelines: if there’s a mistake somewhere along the way, we can only inspect the end result and lose the ability to inspect intermediate results. One way to mitigate this is to add decorators to our pipeline functions that log common attributes of the dataframe at each step:

Logging in method chaining

In order to use the logging capabilities we first need to ensure we have a proper logger configured. We do this by running logging.basicConfig(level=logging.DEBUG).

[1]:
from sklego.datasets import load_chicken
from sklego.pandas_utils import log_step
chickweight = load_chicken(give_pandas=True)
[2]:
import logging

logging.basicConfig(level=logging.DEBUG)

If we now add a log_step decorator to our pipeline function and execute the function, we see that we get some logging statements for free.

[3]:
@log_step
def set_dtypes(chickweight):
    return chickweight.assign(
        diet=lambda d: d['diet'].astype('category'),
        chick=lambda d: d['chick'].astype('category'),
    )
[4]:
chickweight.pipe(set_dtypes).head()
INFO:__main__:[set_dtypes(df)] time=0:00:00.007444 n_obs=578, n_col=4
[4]:
weight time chick diet
0 42 0 1 1
1 51 2 1 1
2 59 4 1 1
3 64 6 1 1
4 76 8 1 1
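To make the mechanics concrete, a log_step-style decorator could be hand-rolled roughly as follows. This is an illustrative sketch only, not sklego’s implementation; the name log_shape is our own:

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)


def log_shape(func):
    """Sketch of a log_step-like decorator: log the execution time
    and output shape of a dataframe pipeline step."""
    @functools.wraps(func)
    def wrapper(df, *args, **kwargs):
        tic = time.perf_counter()
        result = func(df, *args, **kwargs)
        toc = time.perf_counter()
        # Log the same kind of attributes sklego's log_step reports
        logger.info(
            "[%s(df)] time=%.6fs n_obs=%d, n_col=%d",
            func.__name__, toc - tic, result.shape[0], result.shape[1],
        )
        return result
    return wrapper
```

Because the wrapper returns the dataframe unchanged, such a decorator composes freely with .pipe chains.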

We can choose to log at different log levels. For example, if we have a remove_outliers function that calls different outlier removal functions for different types of outliers, we might in general only be interested in the total number of outliers removed. To get that, we set the log level for the specific implementations to logging.DEBUG.

[5]:
@log_step(level=logging.DEBUG)
def remove_dead_chickens(chickweight):
    dead_chickens = chickweight.groupby('chick').size().loc[lambda s: s < 12]
    return chickweight.loc[lambda d: ~d['chick'].isin(dead_chickens)]


@log_step
def remove_outliers(chickweight):
    return chickweight.pipe(remove_dead_chickens)
[6]:
chickweight.pipe(set_dtypes).pipe(remove_outliers).head()
INFO:__main__:[set_dtypes(df)] time=0:00:00.003233 n_obs=578, n_col=4
DEBUG:__main__:[remove_dead_chickens(df)] time=0:00:00.011910 n_obs=519, n_col=4
INFO:__main__:[remove_outliers(df)] time=0:00:00.043610 n_obs=519, n_col=4
[6]:
weight time chick diet
0 42 0 1 1
1 51 2 1 1
2 59 4 1 1
3 64 6 1 1
4 76 8 1 1

We can now easily switch between log levels to get either the full detail or the general overview.

[7]:
logging.getLogger(__name__).setLevel(logging.INFO)
chickweight.pipe(set_dtypes).pipe(remove_outliers).head()
INFO:__main__:[set_dtypes(df)] time=0:00:00.001635 n_obs=578, n_col=4
INFO:__main__:[remove_outliers(df)] time=0:00:00.004753 n_obs=519, n_col=4
[7]:
weight time chick diet
0 42 0 1 1
1 51 2 1 1
2 59 4 1 1
3 64 6 1 1
4 76 8 1 1

The log_step function has some settings that let you tweak what exactly to log:

- time_taken: log the time it took to execute the function (default True)
- shape: log the output shape of the function (default True)
- shape_delta: log the difference in shape between input and output (default False)
- names: log the column names of the output (default False)
- dtypes: log the dtypes of the columns of the output (default False)

For example, if we don’t care how long a function takes, but do want to see how many rows are removed when we remove dead chickens:

[8]:
@log_step(time_taken=False, shape=False, shape_delta=True)
def remove_dead_chickens(chickweight):
    dead_chickens = chickweight.groupby('chick').size().loc[lambda s: s < 12]
    return chickweight.loc[lambda d: ~d['chick'].isin(dead_chickens)]

chickweight.pipe(remove_dead_chickens).head()
INFO:__main__:[remove_dead_chickens(df)] delta=(-59, 0)
[8]:
weight time chick diet
0 42 0 1 1
1 51 2 1 1
2 59 4 1 1
3 64 6 1 1
4 76 8 1 1

We can also define custom logging functions by using log_step_extra. This takes one or more functions, each of which receives the output dataframe and returns something that can be converted to a string. For example, if we want to log some arbitrary message and the number of unique chicks in our dataset, we can do:

[9]:
from sklego.pandas_utils import log_step_extra

def count_unique_chicks(df, **kwargs):
    return "nchicks=" + str(df["chick"].nunique())

def display_message(df, msg):
    return msg


@log_step_extra(count_unique_chicks)
def start_pipe(df):
    """Get initial chick count"""
    return df


@log_step_extra(count_unique_chicks, display_message, msg="without diet 1")
def remove_diet_1_chicks(df):
    return df.loc[df["diet"] != 1]

chickweight.pipe(start_pipe).pipe(remove_diet_1_chicks).head()


INFO:__main__:[start_pipe(df)] nchicks=50
INFO:__main__:[remove_diet_1_chicks(df)] nchicks=30 without diet 1
[9]:
weight time chick diet
220 40 0 21 2
221 50 2 21 2
222 62 4 21 2
223 86 6 21 2
224 125 8 21 2
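The log_step_extra pattern above could be sketched as a decorator factory along these lines. This is an illustration under our own naming (log_extra), not sklego’s implementation; note that any extra keyword arguments are passed to every note function, so functions that don’t use them should accept **kwargs:

```python
import functools
import logging

logger = logging.getLogger(__name__)


def log_extra(*note_funcs, **note_kwargs):
    """Sketch of a log_step_extra-like decorator: each note function
    receives the output dataframe (plus any keyword arguments given to
    the decorator) and returns something to include in the log line."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(df, *args, **kwargs):
            result = func(df, *args, **kwargs)
            # Collect one note per function and join them into the log line
            notes = " ".join(str(f(result, **note_kwargs)) for f in note_funcs)
            logger.info("[%s(df)] %s", func.__name__, notes)
            return result
        return wrapper
    return decorator
```

Keeping the note functions separate from the pipeline steps means the same counters and messages can be reused across many steps.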