Ugh, that title sounds like some awful teenager. Luckily, there are no teenagers in this post.
Today I had the opportunity to listen to a guest lecture by the famous machine learning theorist Vladimir Vapnik. Since he lives locally, I've heard him talk about the same topic three times now: in this class, at the annual NYAS Machine Learning Symposium, and at a general Princeton CS lecture. (These are more-or-less the slides he used today.)
Warning: this next paragraph is geeky; skip it if you aren't interested.
The theory he presents is interesting, as are the results; he proposes that information other than the input and the result can be used in training a machine learning algorithm. The idea is that some description of how we get from input to output, even if the description isn't enough to reproduce the result exactly, helps us learn. He gives an awesome example of labeling OCR digits with what is essentially poetry, describing the personalities of the writers in flowery, adjective-heavy text; each digit in the training set had some text written exclusively for it. He shows that providing that text when training the algorithm (in addition to the input pixels and labeled outputs, of course) results in better OCR recognition than providing the standard training data alone. Permuting the text associations got rid of the improvement. Crazy stuff.
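To make the idea concrete, here's a toy sketch of the general principle. This is emphatically not Vapnik's actual SVM+ algorithm, and all the names and numbers are made up for illustration: each training example gets a clean "privileged" feature (available only at training time, like the descriptive text), and a hypothetical "teacher" uses it to down-weight training examples whose observed feature looks unreliable. A simple weighted 1-D threshold classifier stands in for the learner.

```python
import random

random.seed(0)

def make_data(n):
    """Each example: noisy observed feature x, clean privileged
    feature x_star (training-time only), and label y."""
    data = []
    for _ in range(n):
        true = random.uniform(-1, 1)
        x_star = true                      # privileged: noiseless
        x = true + random.gauss(0, 0.8)    # observed: noisy
        y = 1 if true >= 0 else -1
        data.append((x, x_star, y))
    return data

def train_threshold(points, weights):
    """Weighted 'student': pick the 1-D threshold on the observed
    feature that minimizes weighted training error."""
    best_t, best_err = 0.0, float("inf")
    for t in sorted(x for x, _, _ in points):
        err = sum(w for (x, _, y), w in zip(points, weights)
                  if (1 if x >= t else -1) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(t, points):
    return sum((1 if x >= t else -1) == y for x, _, y in points) / len(points)

train = make_data(200)
test = make_data(1000)

# Baseline: every training example weighted equally.
t_plain = train_threshold(train, [1.0] * len(train))

# Privileged variant: down-weight examples whose noisy x contradicts
# the clean privileged signal, i.e. examples the teacher flags as unreliable.
weights = [0.1 if (x >= 0) != (xs >= 0) else 1.0 for x, xs, _ in train]
t_priv = train_threshold(train, weights)

print(accuracy(t_plain, test), accuracy(t_priv, test))
```

The point of the sketch is only the information flow: the privileged feature never appears in the learned classifier itself (which sees only `x`), yet it shapes how the training data is used, which is the flavor of the paradigm described above.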
During the lecture, he said to the class several times "you don't understand." It wasn't a question, nor did he always attempt to re-explain, perhaps deeming us incapable of understanding those particular points at all. I've often found that the most brilliant people have a hard time explaining themselves so that everyone can understand--they just can't understand not understanding, and so can't see the path people need to follow in order to obtain understanding.
It seemed like Vapnik has reached a point in his life where he is comfortable with people not understanding him; he's a very well-established individual and is possibly entitled to that luxury. At this point, it's on us to try to understand him, instead of the usual, more balanced arrangement in which teacher and student both do their best to teach and understand, respectively.
That isn't to say that Vapnik isn't a good lecturer; he's fairly clear and entertaining, but there are some details that could use more illumination. Perhaps I'm not being fair, though, since everything is in contrast to the usual lecturer for the course, Rob Schapire, who is possibly the best lecturer I've ever encountered. I also contrast it with my own teaching, where I've been thinking hard about how to explain simple computer science concepts like objects or static methods to students who have never seen the material, or anything like it, before. It's a lot of fun, but it's also exhausting to some extent.
Anyway, I find it funny that I felt the need to write a commentary about the teaching style of the lecturer whose talk was entitled "Learning with Teacher: Learning using Hidden Information." Maybe there was something hidden in there...