Saturday, July 27, 2019

Do neural networks have a point of view?

As someone once said, figures don't lie, but liars do figure.

In other words, just because something's supported by cold numbers doesn't mean it's true.  It's always good to ask where the numbers came from.  By the same token, though, you shouldn't distrust everything with numbers behind it just because numbers can be misused.  The breakdown is more or less:
  • If you hear "up" or "down" or "a lot" or anything that implies numbers, but they're aren't any numbers behind it, you really don't know if it's true or not, or whether it's significant.
  • If you hear "up X%" or "down Y%" or -- apparently a popular choice -- "up a whopping Z%" and you don't know where the numbers came from, you still don't really know if it's true or not.  Even if they are correct, you don't know whether they're significant.
  • If you hear "up X%, according to so-and-so", then the numbers are as good as so-and-so's methodology.  If you hear "down Y%, vs. Z% for last quarter", you at least have a basis for comparison, assuming you otherwise trust the numbers.
  • In all, it's a bit of a pain to figure all this out.  Even trained scientists get it wrong more than we might think (I don't have numbers on this and I'm not saying it happens a lot, but it's not zero).
  • No one has time to do all the checking for more than a small subset of things we might be interested in, so to a large extent we have to trust other people to be careful.  This largely comes down to reputation, and there are a number of cognitive biases in the way of evaluating that objectively.
  • But at least we can try to ignore blatantly bad data, and try to cross-check independent sources (and check that they're actually independent), and come up with a rough, provisional picture of what's really going on.  If you do this continually over time the story should be pretty consistent, and then you can worry about confirmation bias.
  • (Also, don't put much stock in "record high" numbers or "up (a whopping) 12 places in the rankings", but that's a different post).
I'm not saying we're in some sort of epistemological nightmare, where no one has any idea what's true and what's not, just that objectivity is more a goal to aim for than something we can generally expect to achieve.


So what does any of this amateur philosophizing have to do with neural networks?

Computers have long been associated with objectivity.  The strawman idea that "it came from a computer" is the same as "it's objectively true" probably never really had any great support, but a different form, I think, has quite a bit of currency, even to the point of becoming an implicit assumption.  Namely, that computers evaluate objectively.

"Garbage in, garbage out," goes the old saying, meaning a computed result is only as good as the input it's given.  If you say the high temperature in Buenos Aires was 150 degrees Celsius yesterday and -190 Celsius today, a computer can duly tell you the average high was -20 Celsius and the overall high was 150 Celsius, but that doesn't mean that Buenos Aires has been having, shall we say, unusual weather lately.  It just means that you gave garbage data to a perfectly good program.

The implication is that if you give a program good data, it will give you a good result.  That's certainly true for something simple, like calculating averages and extremes.  It's less certain when you have some sort of complicated, non-linear model with a bunch of inputs, some of which affect the output more than others.  This is why modeling weather takes a lot of work.  There are potential issues with the math behind the model (does it converge under reasonable conditions?), the realization of that model on a computer (are we properly accounting for rounding error?), and the particular settings of the parameters (how well does it predict weather that we already know happened?).  There are plenty of other factors; this is just scratching the surface.
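As one small illustration of the rounding-error question (this is not a weather model, just a sketch of the underlying floating-point issue): adding up many small numbers naively drifts away from the exact answer, while a compensated sum recovers it.

```python
import math

terms = [0.1] * 10
print(sum(terms))        # 0.9999999999999999 -- naive accumulation drifts
print(math.fsum(terms))  # 1.0 -- compensated summation gives the exact result
```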

A neural network is exactly a complicated, non-linear model with a bunch of inputs, but without the special attention paid to the particulars.  There is some general assurance that the tensor calculations that relate the input to the output are implemented accurately, but the real validation comes from treating the whole thing as a black box and seeing what outputs it produces from test inputs.  There are well-established techniques for ensuring this is done carefully, for example using different datasets for training the network and for testing how well the network really performs, but at the end of the day the network is only as good as the data it was given.
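As a rough sketch of that train/test discipline (the data, features, and model below are entirely made up for illustration, not anyone's actual setup), scikit-learn makes the hold-out step a one-liner:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                        # hypothetical input features
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=1000)

# Hold out data the network never sees during training...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# ...and judge the black box only by how it does on the held-out set.
print(mean_absolute_error(y_test, net.predict(X_test)))
```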

This is similar to "Garbage in, Garbage out," but with a slightly different wrinkle.  A neural net trained on perfectly accurate data and given perfectly accurate input can still produce bad results, if the context of the training data is too different from that of the input it was asked to evaluate.

If I'm developing a neural network for assessing home values, and I train and test it on real estate in the San Francisco Bay area, it's not necessarily going to do well evaluating prices in Toronto or Albuquerque.  It might, if it does a good job of taking the values of surrounding properties into account and adjusting for some areas being more expensive than others, but there's no guarantee.  Even if there is some sort of adjustment going on, it might be thrown off by any number of factors, whether housing density, the local range of variation among homes, or whatever else.
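Here's a toy version of that, with entirely invented numbers (nothing below is real housing data, and a single price-per-square-foot relationship is a deliberate oversimplification): a model fit to one market's price-to-size relationship, then asked about a market with a different one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)

def market(n, price_per_sqft, base):
    # Generate made-up homes for one market: size in square feet and a noisy price.
    sqft = rng.uniform(800, 3000, size=n)
    price = base + price_per_sqft * sqft + rng.normal(scale=50_000, size=n)
    return sqft.reshape(-1, 1), price

# "Training market": expensive, steep price per square foot (invented).
X_a, y_a = market(500, price_per_sqft=900, base=300_000)
# "Other market": cheaper, flatter relationship (also invented).
X_b, y_b = market(500, price_per_sqft=350, base=150_000)

model = LinearRegression().fit(X_a, y_a)

print("error at home:     ", mean_absolute_percentage_error(y_a, model.predict(X_a)))
print("error elsewhere:   ", mean_absolute_percentage_error(y_b, model.predict(X_b)))
```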

The network, in effect, has a point of view based on what we might as well call its experience.  This is a very human, subjective way to put it, but I think it's entirely appropriate here.  Neural networks are specifically aimed at simulating the way actual brains work, and one feature of actual brains is that their point of view depends to a significant degree on the experience they've had.  To the extent that neural networks successfully mimic this, their evaluations are, in a meaningful way, subjective.

There have been some widely-reported examples of neural networks making egregiously bad evaluations, and this is more or less why.  It's not (to my knowledge) typically because the developers are acting in bad faith, but because they failed to assemble a suitably broad set of data for training and testing.  This gave the net, in effect, a biased point of view.


This same sort of mistake can and does occur in ordinary research with no neural networks involved.  A favorite example of mine is drawing conclusions about exoplanets based on the ones we've detected so far.  These skew heavily toward large, fast-moving planets, because for various reasons those are much easier to detect.  A neural network trained on currently known exoplanets would have the same skew built in (unless the developers were very careful, and quite likely even then), but you don't need a neural network to fall prey to this sort of sampling bias.  From my limited sample, authors of papers at least try to take it into account, authors of magazine articles less so and headline writers hardly at all.
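A quick simulation shows how that kind of selection effect skews a statistic, using a made-up planet population and an invented detection rule (none of this reflects real survey data):

```python
import numpy as np

rng = np.random.default_rng(0)
radii = rng.lognormal(mean=0.5, sigma=0.8, size=100_000)  # hypothetical planet radii

# Invented rule: the bigger the planet, the more likely it is to be detected.
detected = radii[rng.random(radii.size) < np.clip(radii / 10, 0.0, 1.0)]

print("median radius, whole population:", np.median(radii))
print("median radius, detected sample: ", np.median(detected))
```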

1 comment:

  1. (attributed to) Harry Truman: "There are lies, damned lies, and statistics."
