Why & When: Cross Validation in Practice

Nearly every time I lead a machine learning course, it becomes clear that there is a fundamental acceptance that cross validation should be done…and almost no understanding as to why it should be done or how it is actually done in a real-world workflow. I’ve finally decided to move my answers from the whiteboard to a blog post. Hope this helps!

Cross validation is like standing in a line in San Francisco: Everyone’s doing it, so everybody does it, and if you ask someone about it, chances are they don’t know why they’re doing it.

Fortunately for those who might be doing so without fully knowing why, there is very good reason to cross validate (which is generally not true about joining any random Bay Area queue).

So, other than our now ingrained inclination as data scientists to do the thing that everyone else is doing…why should we cross validate?

The Why & the When

To discuss why we perform cross-validation, it’s easiest to review how and when we incorporate cross validation into a data science workflow.

To set the stage, let’s pretend we’re not doing a GridSearchCV. Just good old fashioned cross validation.

And, let’s forget about time. Time of observation usually matters quite a bit…and that complicates things…quite a bit. For right now let’s pretend time of observation doesn’t matter.

First Partition of Data

Divide your data into a holdout set (maybe you prefer to call it a testing set?) and the rest of your data.

What’s a good practice here? If your data isn’t too big, import your entire_dataset and randomly sample [your chosen holdout percent]% of observation ids (or row indices). Create two new data frames, holdout_data and rest_of_data. Clear your variable entire_dataset. Export holdout_data to the file or storage type of your convenience, and clear your variable holdout_data.

If your data is too big, adapt a similar workflow by using indices instead of data frames and only loading data when necessary.

To clear a variable in python, you’re welcome to del your variable or set it to None, as you’d like. Just make sure you can’t access the data by accident.
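Here’s a minimal sketch of that first partition, assuming pandas and a CSV export (the file names and the 20% holdout fraction are just illustrative choices):

```python
import pandas as pd

# Load everything once, carve off the holdout set, then forget the full dataset.
entire_dataset = pd.read_csv("my_data.csv")  # hypothetical file name

holdout_data = entire_dataset.sample(frac=0.20, random_state=42)  # 20% is illustrative
rest_of_data = entire_dataset.drop(holdout_data.index)

del entire_dataset  # make sure we can't touch the full dataset by accident

holdout_data.to_csv("holdout_data.csv", index=False)
del holdout_data  # out of sight until the very end
```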

EDA & Feature Engineering

Let’s assume we then follow good practices around exploratory data analysis and creating a reusable feature engineering pipeline.

Notice that we’re only using the rest_of_data for EDA and designing our feature engineering pipeline. Let’s think about this for a second. Only doing this on the rest_of_data allows us to test the entire modeling process when we test on the holdout data. Clever, right?

Now, we’ve got our data ready to go and we have an automated way to process new data, via our pipeline.
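If you’re working in sklearn, that reusable pipeline might look something like the sketch below. The column names and steps are made up for illustration; your EDA on rest_of_data decides what actually goes in it.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column groups discovered during EDA on rest_of_data
numeric_cols = ["age", "income"]
categorical_cols = ["city"]

feature_pipeline = ColumnTransformer([
    ("numeric", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric_cols),
    ("categorical", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

# Fit on rest_of_data only; later, the holdout set just gets transformed.
# X_rest = feature_pipeline.fit_transform(rest_of_data[numeric_cols + categorical_cols])
```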

Specify a Model

We need to specify a model, e.g. choose to use ‘random forest’. For this non-grid-search scenario, we’re going to also say this is where tuning occurs.

Cross Validation

Now we cross-validate.

You know the idea: take the rest_of_data and partition it into k folds. For fold j, create a [j]_train_set (all the data but the jth fold) and a [j]_test_set (the jth fold). Train on the training set, test on the test set, record the test performance, and throw away the trained model. Do this for all k folds.

Or, more likely, have sklearn do it for you.
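For example, a plain k-fold run for a random forest might look like this minimal sketch (stand-in data here so it runs on its own; in practice you’d use the pipeline-processed features and target from rest_of_data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the processed rest_of_data features and target
X_rest, y_rest = make_classification(n_samples=500, random_state=0)

specified_model = RandomForestClassifier(random_state=0)

# k = 5: five fits, five test scores, and all five fitted models are thrown away
scores = cross_val_score(specified_model, X_rest, y_rest, cv=5)

print(scores)
print(f"mean: {scores.mean():.3f}, std: {scores.std():.3f}")
```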

Go ahead, we’ll wait.

Now we have k data points of out-of-sample performance for our specified model.

We inspect these results by looking at the mean (or median) score and the variance. This is the key to why we cross validate.

Why We Perform Cross Validation:

We want to get an idea of how well the specified model can generalize to unseen data and how reliable that performance is. The mean (or median) gives us something like expected performance. The variance gives us an idea of how likely the model is to actually perform near that ‘expected’ performance.

I’m not making this up–you can look at the sklearn docs and they, too, look at the variance. Some people even plot out the performances.

If you’re coming from a traditional analytics or statistics background, you might think “don’t we have theories for that? don’t we have p-values for that? don’t we have modeling assumptions for that?” The answer is, generally, no. We don’t have the same strong assumptions around the distribution of data or the performance of models in broader machine learning as we do in, say, linear models or GLMs. As a result, we need to engineer what we can’t theory [sic]. Cross validation is the engineered answer to “how well will this perform?”

Assuming the performance measures are acceptable, we’ve now found a specification of a model we like. Huzzah! (Otherwise, go back and specify a different model and repeat this process.)

Note: we did not get a trained model; we got a bunch of trained models, none of which we will actually use and all of which we will immediately discard. What we found was that the “specified model”, e.g. random forest, was a good choice for this problem.

Get the Performance Estimate

Armed with a specified model we like, we train this specified model on the entirety of rest_of_data. Sweet.

Now we load the holdout_data and run it through the feature engineering pipeline. We test our trained-on-rest-of-data specified model on the holdout_data. This gives us our performance estimate.
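In code, this step is just fit-on-everything-we’ve-seen, score-once-on-what-we-haven’t. A minimal sketch with stand-in data (in practice, X_rest and X_holdout come out of the feature engineering pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for rest_of_data and holdout_data after feature engineering
X, y = make_classification(n_samples=500, random_state=0)
X_rest, X_holdout, y_rest, y_holdout = train_test_split(
    X, y, test_size=0.20, random_state=42)

# Train the specified model on ALL of rest_of_data (no folds this time)...
specified_model = RandomForestClassifier(random_state=0)
specified_model.fit(X_rest, y_rest)

# ...and score it once on the untouched holdout set: our performance estimate.
performance_estimate = specified_model.score(X_holdout, y_holdout)
print(f"holdout performance estimate: {performance_estimate:.3f}")
```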

We hold a silent sort of intuition about performance at this step because rest_of_data is necessarily larger than any training set used during cross validation.

The intuition is that the specified model trained on this larger set should not have higher variation in performance and on average should perform at least as well as the average performance seen during cross validation. However, to check this we would need to repeat the steps from “First Partition of Data” through this one multiple times.

NOTE: We do not assume it will perform at or better than average performance seen in cross validation. That is a very different assumption.

NOTE: we did not get a trained model; we got a trained model which we will immediately discard. What we found was a better estimate for in-production performance of the “specified model”, e.g. random forest.

Train Your Final Model

Assuming our performance estimate is sufficient, now we train the specified model on the entire_dataset. This is the trained model we use for production and/or decision making. This is our final model.
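That last refit is short. A sketch with hypothetical file and column names (it assumes rest_of_data was also saved to disk earlier, and a simple scaler stands in for your real feature engineering pipeline):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Put the two pieces back together (hypothetical file names)
entire_dataset = pd.concat(
    [pd.read_csv("rest_of_data.csv"), pd.read_csv("holdout_data.csv")],
    ignore_index=True,
)
X_all = entire_dataset.drop(columns=["target"])  # hypothetical target column
y_all = entire_dataset["target"]

# Refit the whole chain (feature engineering + specified model) on everything.
# This is the one trained model we actually keep.
final_model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
final_model.fit(X_all, y_all)
```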

Monitoring & Maintenance

Until, as always, we iterate.

Hope this gave a little more insight to cross-validation in practice!

When Stack Exchange Fails

….or How to Answer Your Own Questions

This walk-through shows how to hunt down the answer to an implementation question. The example references sklearn, github, and python. The post is intended for data scientists new to coding and anyone looking for a guide to answering their own code questions.

Does sklearn’s GridSearchCV use the same cross validation train/test splits for evaluating each hyperparameter combination?

A student recently asked me the above question. Well, he didn’t pose it as formally as all that, but this was the underlying idea. I had to think for a minute. I knew what I assumed the implementation was, but I didn’t know for sure. When I told him I didn’t know but I could find out he indicated that his question was more of a moment’s curiosity than any real interest. I could’ve left it there. But now I was curious. And so began this blog post.

I began my research as any good data scientist does: I googled it.

It’s true, this question is a little bit nit-picky. So, I wasn’t completely surprised when the answer didn’t bubble right up to the top of my google results.

I had to break out my real gumshoe skills: I added “stackexchange” to my query.

Now I had googled my question, twice, and found no useful results. What was a data scientist to do?

Before losing all hope, I rephrased the question a few times (always good practice, but particularly useful in data science with its many terms for the same thing). Yet still no luck. The internet’s forums held no answers for me. This only made me more determined to find the answer.

What now?

There’s always the option to…read the documentation (which I did). But, as is often the case with questions that are not already easily google-able, the documentation did not answer the question I was asking.

“Fine,” I thought. “I’ll answer the question myself.”

How do you answer a question about implementation?

Look at the implementation. I.e., read the code.

Now, this is where many newer data scientists feel a bit uneasy. And that’s why I’m taking you on this adventure with me.

The Wonderland of Someone Else’s Code

Read the sklearn code?!?

If it feels like an invasion, it’s not. If it feels scary, it won’t be (after you get used to it). But if it sounds…dangerous? That’s because it is.

…Insofar as a rabbit hole is dangerous. Much like Alice’s journey through Wonderland, digging through someone else’s code can be an adventure fraught with enticing diversions, distracting enticements, and an entirely mad import party. It’s easy to find yourself in the Wonderland of someone else’s repo hours after you started, no closer to an answer, and having mostly forgotten which rabbit you followed here in the first place.

Much like Alice’s adventure, it’s a personal journey too. Reading someone else’s code can reveal a lot about you as a coder and as a colleague. But, that’s for another blog post.

Warned of the dangers, with our wits about us, and with our question firmly in hand (seriously, it helps to write it down), we begin our journey.

How do we find this rabbit hole of code in the first place? A simple google search of “GridSearchCV sklearn github” will get us fairly close.

Clicking through the first result gets us to the .py file we need.

We know that GridSearchCV is a class, so we’re looking for the class definition. CMD(CTRL) + F “class GridSearchCV” will take us here.

We note that GridSearchCV inherits from BaseSearchCV. Right now, chasing that could be a costly diversion. But there’s a good chance we’ll need it later, so this is a good breadcrumb to mentally pick up.

And–Great! A docstring!

Oh, wait–a 300 line docstring. If we have the time we could peruse it, but if we skim it we notice it’s very similar to the documentation (which makes sense). We don’t expect to find any new information there, so we make the judgment call and pass by this diversion.

We scroll to the end of that lengthy docstring and find there are only two methods defined for this class (__init__ and _run_search). _run_search is the one we need.


Looking at _run_search, its body is essentially a single call that hands the parameter grid to evaluate_candidates. Ok. So what’s this evaluate_candidates?

We can follow the same method of operation and search for def evaluate_candidates. With a little (ok, a lot) of scrolling we’ll see it’s a method of the BaseSearchCV class. Now is when we pause to pat ourselves on the back for noting the inheritance earlier (pat, pat, pat).

Here we can finally start to inspect the implementation.

So…that’s a lot. After a cursory read through the code, it becomes clear that the real work we’re concerned with happens here:
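Paraphrased, and very much simplified (this is a self-contained sketch of the pattern, not the verbatim sklearn source, which wraps each fit in parallel helpers), the core amounts to fitting and scoring one clone of the estimator per (parameters, split) pair:

```python
from itertools import product

from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

# Toy stand-ins for what GridSearchCV would be handed
X, y = make_classification(n_samples=200, random_state=0)
candidate_params = [{"n_estimators": 10}, {"n_estimators": 50}]
cv = KFold(n_splits=5)
base_estimator = RandomForestClassifier(random_state=0)

results = []
# One fit/score per element of the cartesian product of
# candidate parameter settings and CV splits
for parameters, (train, test) in product(candidate_params, cv.split(X, y)):
    estimator = clone(base_estimator).set_params(**parameters)
    estimator.fit(X[train], y[train])
    results.append((parameters, estimator.score(X[test], y[test])))
```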

product, if you’re not familiar, is imported from the fabulous itertools module, which provides combinatorial functions like combinations, permutations, and product (used here).

(Worthy Read for Later: itertools has far more utility than listed here and it is certainly worth getting familiar with its docs.)

There’s a single product call over the candidate_params and cv.split. What does product yield? An iterator whose elements comprise the cartesian product of the arguments passed to it.
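If product is new to you, a quick toy call shows the shape of what it yields (nothing to do with sklearn yet):

```python
from itertools import product

# Every pairing of the first iterable's elements with the second's
list(product(["rf", "gbm"], [(0, 1), (2, 3)]))
# [('rf', (0, 1)), ('rf', (2, 3)), ('gbm', (0, 1)), ('gbm', (2, 3))]
```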

Now we’ve got distractions at every corner. We could chase cv, itertools, parallel, BaseSearchCV, or any of the other fun things.

This is where having our wits about us helps. We can look back to our scribbled question: “Does sklearn’s GridSearchCV use the same cross validation train/test splits for evaluating each hyperparameter combination?” Despite the tempting distractions, we are getting very close to the answer. Let’s focus on that product call.

Ok, great. So we have the cartesian product of candidate_params and cv.split(X, y, groups).

Thank Guido for readability counting, am I right?

From here it’s a safe bet that candidate_params are indeed the candidate parameters for our grid search, cv is the cross validation splitter, and split returns precisely what we’d expect: the train/test index pairs.

It might look concerning that one of the arguments passed to product is itself a call to the split method of cv. You might wonder “Is it called multiple times? Will it generate new cv splits for every element?”

Fear not, for product and split behave just as we would hope.

split returns a generator, a discussion of which merits its own post. For these purposes it’s enough to say that what split returns can be iterated over, much like a list…

… and product treats it the same as it would a list.

Let’s say we didn’t already know that (because we probably didn’t). When it comes to many common functions, we can often quickly test behavior with toy cases to answer our questions:
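A toy check along those lines (made-up data and a made-up parameter grid; KFold stands in for whatever CV scheme GridSearchCV would use):

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

# Made-up data and a made-up "parameter grid"
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
candidate_params = [{"n_estimators": 10}, {"n_estimators": 100}]

splits = KFold(n_splits=3).split(X, y)  # a generator

pairs = list(product(candidate_params, splits))

# Collect the (train, test) splits each parameter setting was paired with
splits_per_param = {}
for params, (train, test) in pairs:
    splits_per_param.setdefault(str(params), []).append(
        (train.tolist(), test.tolist()))

# Every parameter setting sees the exact same list of train/test splits
values = list(splits_per_param.values())
print(all(v == values[0] for v in values))  # True
```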

The exact same train sets for each parameter (high five!)

Great! Now all we have to do is look at this in its larger context.

It looks like delayed and parallel will help us manage this execution. We could go hunt down how they do that, and it might be a worthy journey when time is not of the essence. For right now, we’ve got our wits about us and we’ve already answered our question:

sklearn’s GridSearchCV uses the same cross validation train/test splits for evaluating each hyperparameter combination.

There we have it. We’ve answered our question (yes!), we might’ve learned about a cool new module (itertools), we even bookmarked a few diversions for if-we-ever-have-extra-time (parallel, generators, BaseSearchCV, …), and we did it all without too much distraction. Maybe we can even take lunch today!

Special thanks to Damien Martin, a fellow Metis SDS, for giving my post a good once-over.  You can find his blog at kiwidamien.github.io where you might want to start by reading his empirical Bayes series.