One More Intelligible Model

In my most recent entry, I attempted (apparently not particularly successfully) to draw a distinction between interpretable and mechanistic models. The term interpretable appears to be in common use in statistics and machine learning, but it may be that intelligible would be more appropriate. By this I mean that humans can make sense of the specific mathematical forms employed in order to “understand” how function outputs are related to function inputs.

So what can we find intelligible? In an older post I essentially boiled this down to small combinations of simple algebraic operations. Of course, what “small” and “simple” mean here will depend on the individual in question — I’m pretty decent at understanding how exponentiation affects a quantity, but that certainly isn’t true for my introductory statistics students — but with the possible exception of a very small number of prodigies, I know nobody who can keep even tens of algebraic manipulations in their head at any one time.

There is an important extension of this: some interpretation is still possible if a relatively simple algebraic expression is embedded in a more complex model and retains its interpretation in this context. For example, a linear regression with hundreds of covariates is not particularly interpretable — there are far too many terms for a human to keep track of — but each individual term can be understood in terms of its effect on the prediction. (This is as a function, i.e. “with the other terms held constant” — I’ll post something on this weasel formulation later.) It is, of course, also possible to embed simple terms within complex models in ways that destroy this property: the relatively easy interpretation of a linear term within a neural network, for example, is lost when its effect is obscured by the more complex manipulation that is then applied to its output.
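As a minimal sketch of this kind of per-term reading (the data here are simulated purely for illustration, and the coefficient index is arbitrary):

```python
# Sketch: reading individual coefficients out of a large linear model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 200                        # many covariates
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.5, size=n)

model = LinearRegression().fit(X, y)

# No one can hold 200 coefficients in their head at once, but each one
# still reads the same way: with the other terms held constant, a unit
# increase in x_j changes the prediction by coef_[j].
j = 17
print(f"Holding the rest fixed, +1 in x_{j} shifts the prediction "
      f"by {model.coef_[j]:.3f}")
```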

For the purposes of describing some means of machine learning diagnostics, there is, however, one further class of mathematical functions that I think humans can get a handle on — those we can visualize. Here

[Figures: two plotted one-dimensional functions]

I have plotted some one- and two-dimensional functions (I’ll come back to what these are in a bit) that do not have “simple” algebraic structures. Nonetheless, understanding them is easy — just look! We can even read numbers off these plots. We also know how to plot two-dimensional functions and are pretty good at understanding contour plots, heatmaps, and three-dimensional renderings.

[Figures: a two-dimensional function shown as a contour plot, a heatmap, and a three-dimensional rendering]
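As a sketch of how pictures like these get made: the one-dimensional function below follows the mixture-of-three-normals form described later in the post, but its parameter values, and the two-dimensional function entirely, are made-up stand-ins:

```python
# Sketch: visualizing functions that have no simple algebraic form.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# A one-dimensional function: a combination of three normal densities.
def f(x):
    return (0.5 * norm.pdf(x, -2, 0.6)
            + 0.3 * norm.pdf(x, 0, 1.0)
            + 0.2 * norm.pdf(x, 3, 0.8))

# A two-dimensional function, again with no tidy closed form to "read".
def g(x, y):
    return np.sin(x) * np.exp(-(x**2 + y**2) / 8) + 0.3 * np.cos(2 * y)

x = np.linspace(-5, 5, 400)
xx, yy = np.meshgrid(x, x)

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.5))
ax1.plot(x, f(x))                                  # just look!
ax1.set_title("f(x): mixture of normals")
ax2.contour(xx, yy, g(xx, yy))                     # contour plot
ax2.set_title("g(x, y): contours")
im = ax3.imshow(g(xx, yy), extent=[-5, 5, -5, 5],  # heatmap
                origin="lower")
ax3.set_title("g(x, y): heatmap")
fig.colorbar(im, ax=ax3)
plt.tight_layout()
plt.show()
```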

If we wanted a function of three inputs, it might be possible to stack some of these, or at least lay them out somehow:

[Figures: four two-dimensional slices of a three-input function]

and nominally we could try to extend this further, but my brain is already starting to dribble when I actually want to look through these and come up with some sense of what is going on.
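For what it’s worth, this is the sort of layout I mean: fix the third input at a few values and draw a heatmap of the remaining two. A sketch, with a placeholder function:

```python
# Sketch: a three-input function shown as a row of two-dimensional
# slices, one heatmap per fixed value of z. The function h is made up.
import numpy as np
import matplotlib.pyplot as plt

def h(x, y, z):
    return np.sin(x + z) * np.cos(y) + 0.5 * z * np.exp(-x**2 / 4)

x = np.linspace(-3, 3, 200)
xx, yy = np.meshgrid(x, x)
z_values = [-1.0, 0.0, 1.0, 2.0]

fig, axes = plt.subplots(1, len(z_values), figsize=(14, 3.2))
for ax, z in zip(axes, z_values):
    ax.imshow(h(xx, yy, z), extent=[-3, 3, -3, 3], origin="lower")
    ax.set_title(f"z = {z}")
plt.tight_layout()
plt.show()
```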

None of the functions that I have just presented have algebraically simple expressions. The first is given by a combination of three normal densities (apparently I don’t have any sort of equation editor in this tool, so I can’t do square roots), which really isn’t nice, but we can examine it visually. Is this more than a special case? Only to some extent — these can be extended into more complex contexts in the same way that the interpretation of linear terms can be: so long as their effects remain the same when placed within that context. In fact, statisticians have long used generalized additive models of the form

y = g_1(x_1) + g_2(x_2) + g_3(x_3) + g_4(x_4)

precisely because of their intelligibility (and because estimating such models is more statistically stable). Even in machine learning, this is gaining some traction — see my paper with Yin Lou, who just completed his PhD in Computer Science at Cornell, explicitly looking at estimating these types of prediction functions because of their intelligibility.***
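As a rough illustration of how such models can be estimated, here is a toy backfitting loop with a crude nearest-neighbour smoother. This is a sketch only, not the estimation method from the paper, and the simulated data are made up:

```python
# Sketch: fitting y = g_1(x_1) + ... + g_p(x_p) by backfitting, using a
# crude k-nearest-neighbour running-mean smoother.
import numpy as np

def knn_smooth(x, r, k=30):
    """Smooth r against x by averaging the k nearest points in x."""
    fitted = np.empty_like(r)
    for i in range(len(x)):
        nearest = np.argsort(np.abs(x - x[i]))[:k]
        fitted[i] = r[nearest].mean()
    return fitted

def backfit(X, y, n_iter=20, k=30):
    n, p = X.shape
    g = np.zeros((n, p))                  # fitted g_j(x_j) at the data
    mean_y = y.mean()
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: y minus every component except g_j
            partial = y - mean_y - g.sum(axis=1) + g[:, j]
            g[:, j] = knn_smooth(X[:, j], partial, k)
            g[:, j] -= g[:, j].mean()     # centre g_j for identifiability
    return mean_y, g

# Simulated example: two smooth components plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1]**2 + rng.normal(scale=0.2, size=400)
mean_y, g = backfit(X, y)
# Each g[:, j] can now be plotted against X[:, j] and simply looked at.
```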

By way of example to tie in many themes from the last few posts, we might examine Newton’s law of gravity as it applies to an object near the ground on earth. In the classical form, the vertical height z of an object from the surface is described by the differential equation

D^2 z = – g/(z+c)^2

where c is the distance from earth’s center of mass to its surface and D^2 z means its acceleration. This is, of course, an approximation for many reasons, partly because earth’s gravity changes over space and is affected (in very minor ways) by other celestial bodies, so perhaps we should write

D^2 z = – g(x,y,z,t)/(z+c)^2

where x and y provide some representation of latitude and longitude and t is, of course, time. Here the mechanistic interpretation remains — the dynamics of z are governed by acceleration due to gravity — but g(x,y,z,t), unless given in an algebraically nice form, is not particularly intelligible. My larger question in this blog is “does that matter?” Of course, for most practical purposes, the first form of these dynamics predicts the trajectory of the object quite well — it is exactly the kind of simpler, intelligible approximation to the actual underlying dynamics (g is very close to constant) that humans can make use of.+++
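To put a number on “quite well”, here is a sketch integrating both forms for a ball thrown upward. The throw speed and duration are arbitrary, mu is chosen so that the two laws agree at the surface, and any spatial variation in g is ignored:

```python
# Sketch: a ball thrown upward, integrated under (a) constant gravity
# and (b) the inverse-square form D^2 z = -mu/(z + c)^2.
import numpy as np
from scipy.integrate import solve_ivp

g0 = 9.81          # surface gravity, m/s^2
c = 6.371e6        # earth's radius, m
mu = g0 * c**2     # so that mu/(z + c)^2 = g0 at z = 0

def constant_g(t, state):
    z, v = state
    return [v, -g0]

def inverse_square(t, state):
    z, v = state
    return [v, -mu / (z + c)**2]

t_eval = np.linspace(0, 10, 101)
for rhs, label in [(constant_g, "constant g"),
                   (inverse_square, "inverse square")]:
    # launch from z = 0 at 50 m/s, follow for 10 seconds
    sol = solve_ivp(rhs, (0, 10), [0.0, 50.0], t_eval=t_eval, rtol=1e-9)
    print(f"{label:>14}: height after 10 s = {sol.y[0, -1]:.4f} m")
# The two trajectories agree to within about a centimetre: near the
# ground, g really is very close to constant.
```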

This idea of producing a simpler model, along with additive models, really makes up most of the tools used — usually informally — to understand high-dimensional prediction functions, and that’s something that I’ll get to in the next post.

 

*** I must also thank Yin for pointing out in his thesis that “intelligible” might be a less ambiguous term than “interpretable”, although there is no alternative verb corresponding to “interpret”.

+++ Now I know that a physicist will object that the general law of gravitation applies to any collection of bodies if you know enough. Besides the fact that you never know enough to account for everything (and at the level of detail where you could, not everything behaves according to Newtonian dynamics anyway), I could still ask — what if the inverse square law were a more complicated function? Does the fact that it has a nice algebraic form matter?
