Coloring validation metrics

I’ve been computing the validation metrics locally and keeping my local validation code in sync with all the recent changes (per-era feature-neutral mean and feature exposure, for instance). One thing that I hadn’t been able to do until last week was color the metrics the way they’re displayed on the website. I asked @master_key (MikeP on Rocket Chat) for the intervals and percentiles they use for coloring the metrics on the website, and he shared the numbers with me.

Here’s some quick and dirty Python code that I wrote based on the numbers that @master_key shared with me. I suspect there are many others who compute validation metrics locally who might benefit from this. BTW, if you aren’t already computing validation metrics locally, I’d recommend doing it. All the code needed to compute the validation metrics can be found in the example model.
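For anyone who hasn’t set this up yet, here’s a simplified sketch of the core idea behind the per-era correlation metrics. This is my own toy version, not the example script itself; the column names and the ranked-Pearson correlation follow the convention used in Numerai’s example code, and the data below is synthetic:

```python
import numpy as np
import pandas as pd

# Toy data standing in for the validation set (the real data has many
# more eras and rows; column names follow the Numerai convention)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "era": ["era121"] * 50 + ["era122"] * 50,
    "prediction": rng.random(100),
    "target": rng.random(100),
})

# One correlation per era: Pearson correlation of the percentile-ranked
# predictions with the target
corrs = pd.Series({
    era: np.corrcoef(g["prediction"].rank(pct=True), g["target"])[0, 1]
    for era, g in df.groupby("era")
})

# Two of the headline validation metrics
validation_mean = corrs.mean()
validation_sharpe = corrs.mean() / corrs.std()
```

The real metrics (feature-neutral mean, max drawdown, feature exposure, and so on) build on the same per-era pattern.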

import numpy as np
from scipy import stats

from colorama import Fore, Style

# (low, high) endpoints per metric; a reversed interval (high < low)
# means lower values are better for that metric
VALIDATION_METRIC_INTERVALS = {
    "mean": (0.013, 0.028),
    "sharpe": (0.53, 1.24),
    "std": (0.0303, 0.0168),
    "max_feature_exposure": (0.4, 0.0661),
    "mmc_mean": (-0.008, 0.008),
    "corr_plus_mmc_sharpe": (0.41, 1.34),
    "max_drawdown": (-0.115, -0.025),
    "feature_neutral_mean": (0.006, 0.022)
}

def color_metric(metric_value, metric_name):
    low, high = VALIDATION_METRIC_INTERVALS[metric_name]
    pct = stats.percentileofscore(np.linspace(low, high, 100), metric_value)
    if high <= low:  # reversed interval: lower is better for this metric
        pct = 100 - pct
    if pct > 95:  # Excellent
        return f"{Style.BRIGHT}{Fore.GREEN}{metric_value:.4f}{Style.RESET_ALL}"
    elif pct > 75:  # Good
        return f"{Fore.GREEN}{metric_value:.4f}{Style.RESET_ALL}"
    elif pct > 35:  # Fair
        return f"{metric_value:.4f}"
    else:  # Bad
        return f"{Fore.RED}{metric_value:.4f}{Style.RESET_ALL}"
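To see how an interval maps onto a percentile, here’s a quick self-contained check using two intervals from the table above (one normal, one reversed):

```python
import numpy as np
from scipy import stats

# "mean" interval (0.013, 0.028): a value near the top of the interval
# lands at a high percentile, i.e. it colors green
grid = np.linspace(0.013, 0.028, 100)
pct = stats.percentileofscore(grid, 0.027)

# "std" uses a reversed interval (0.0303, 0.0168): lower is better,
# so the percentile is flipped before comparing against the bands
rev_grid = np.linspace(0.0303, 0.0168, 100)
rev_pct = 100 - stats.percentileofscore(rev_grid, 0.02)
```

A mean of 0.027 ends up in the >75 band (green), and a std of 0.02, after the flip, also just clears the “Good” threshold.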

I use the colorama module for coloring text (and it works with Jupyter notebooks, as well as the terminal). It’s quite straightforward to use something else in its place, if needed.
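If you’d rather not add a dependency, raw ANSI escape codes are one drop-in replacement (a hypothetical sketch using the same percentile bands as `color_metric` above; `color_text` is a name I made up):

```python
# Raw ANSI escape codes covering what color_metric uses from colorama
GREEN, RED, BRIGHT, RESET = "\033[32m", "\033[31m", "\033[1m", "\033[0m"

def color_text(text, pct):
    """Color a pre-formatted metric string by its percentile band."""
    if pct > 95:    # Excellent
        return f"{BRIGHT}{GREEN}{text}{RESET}"
    elif pct > 75:  # Good
        return f"{GREEN}{text}{RESET}"
    elif pct > 35:  # Fair
        return text
    else:           # Bad
        return f"{RED}{text}{RESET}"
```

Note that plain escape codes won’t render in some environments (e.g. Windows terminals without VT support), which is exactly the portability gap colorama papers over.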