Yesterday was a great day that didn't feel like winter at all, so I said "winter is supposed to be cold, cloudy and rainy". Today it's cold, cloudy and rainy. Let's see what happens with the statement in this post's title.
As human cultures develop, they start to work out how and why things work. For example, if you had caught a fever in the Middle Ages, people would probably have known a healing procedure, but they wouldn't have been able to tell you why it worked. Becoming a more advanced civilization involves analyzing a problem and the things that affect it, negatively or positively, finding the true reason behind them, and using that information to improve life in some way. You can imagine this happening in fighting disease, but also in technical areas like bridge building and other architecture. The formula is quite simple: take something you implicitly know is true, find out why it's true, and use that knowledge to improve something.
Interestingly enough, programmers, whom you would expect to be the first to embrace this scientific approach, are doing the exact opposite. Instead of going from something vague to something clearly defined, we resort to a simulation of the way our own vague brains work when tackling complex problems like speech or image recognition. We take that beautiful, efficient, exact world and use a neural network to simulate how a human brain would judge things. The beauty of it is that it actually works, and in a wide range of domains it gets much better results than most exact alternatives. Yet somehow that doesn't quite satisfy.
Are neural networks and genetic programming just a quick fix for problems that are too complex for us to think about? A book by Ray Kurzweil that I'm currently reading suggests otherwise. Kurzweil argues that the only problem humans are really meant to solve is creating an intelligence smarter than ourselves, and that if we cannot understand that intelligence, that's fine, because we're not supposed to. It's the whole my-children-understand-how-to-use-the-TV-remote-and-I-don't story applied on a grander scale. I don't much like it.
Take image recognition, for example (or speech recognition, if you will). I believe that an algorithm developed and tuned by hand by the smartest scientists in the field would outperform a dumb neural network trained on a pile of training data. The problem is time. For a human being to create such an algorithm may take a lifetime, or it may never happen, depending on the complexity of the task. At some level our human brains just can't keep up any more, and suddenly it pays off to brute-force the solution with a neural network or a hidden Markov model. We're working on improving our tools, but we're not working on improving ourselves.
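The contrast can be sketched in a toy example: an exact, hand-written rule next to a tiny perceptron that has to rediscover the same boundary from labelled samples. Everything here (the rule, the sampling, the training loop) is my own illustrative construction, not a real recognition system:

```python
import random

# The "exact" path: a hand-crafted rule we fully understand.
# Classify a point as 1 if it lies above the line y = x.
def exact_rule(x, y):
    return 1 if y > x else 0

# The "brute force" path: a minimal perceptron that learns the
# same boundary from examples, without encoding why it is true.
def train_perceptron(samples, epochs=50, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x, y, label in samples:
            pred = 1 if w1 * x + w2 * y + b > 0 else 0
            err = label - pred          # 0 when correct, ±1 when wrong
            w1 += lr * err * x          # nudge weights toward the label
            w2 += lr * err * y
            b += lr * err
    return w1, w2, b

random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
samples = [(x, y, exact_rule(x, y)) for x, y in points]

w1, w2, b = train_perceptron(samples)

def learned_rule(x, y):
    return 1 if w1 * x + w2 * y + b > 0 else 0

# Fraction of training points where the learned rule agrees with the exact one.
agreement = sum(learned_rule(x, y) == lbl for x, y, lbl in samples) / len(samples)
```

The perceptron ends up agreeing with the exact rule on nearly every point, yet its three learned numbers tell you nothing about *why* the boundary is where it is; that knowledge lives only in the hand-written version.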
If the human brain could work faster, or process more information simultaneously, the range of problems we could solve would grow. If we spent more time studying the fuzzy logic of our own brains, then in the long term we might be able to create better exact solutions to the problems troubling us right now. That topic would lead us in the direction of genetic engineering, but let's save that for another time.
It takes a fuzzy human brain to understand the art in exact solutions.