Jan 13, 2013

Suppose you are with a native speaker in a foreign country whose language you do not know. All of a sudden, a rabbit passes by, and the speaker utters the word “gavagai”. One inference you could make is that “gavagai” means rabbit, but as the philosopher Quine pointed out, “gavagai” could have an infinite number of other meanings, such as “Let’s go hunting” or “There will be a storm tonight”. Why would “rabbit” be the correct inference to make in this case? This is just one example of how our minds must draw conclusions from very limited information. Yet our minds make such inferences continuously: determining the correct grammar when learning a language, recognizing the objects in our visual field, or generalizing novel objects into categories.
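
One influential way computational cognitive scientists have formalized this kind of inference is Bayesian: treat each candidate meaning as a hypothesis, and favour narrow hypotheses that still fit the evidence (the “size principle” from Xu and Tenenbaum’s work on word learning). Below is a minimal, purely illustrative sketch in Python; the hypotheses, priors, and “sizes” are numbers I’ve made up for the example, not estimates from any real study.

```python
# A toy Bayesian account of the gavagai problem. All numbers below are
# invented for illustration only.

# Candidate meanings for "gavagai", each with an assumed prior and an
# assumed number of situations the meaning could refer to (its "size").
hypotheses = {
    "rabbit":           {"prior": 0.4, "size": 10},
    "animal":           {"prior": 0.4, "size": 100},
    "let's go hunting": {"prior": 0.1, "size": 500},
    "storm tonight":    {"prior": 0.1, "size": 500},
}

# Size principle: every hypothesis here is consistent with the observed
# situation (a rabbit passing by), so each gets likelihood 1/size. This
# favours narrower meanings that still fit the evidence.
def posterior(hyps):
    scores = {h: v["prior"] / v["size"] for h, v in hyps.items()}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

for h, p in sorted(posterior(hypotheses).items(), key=lambda kv: -kv[1]):
    print(f"{h:>18}: {p:.3f}")
```

Under these made-up numbers, “rabbit” ends up with roughly 90% of the posterior mass, precisely because it is the most specific meaning consistent with what was observed.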

How do our minds accomplish these remarkable feats of cognition? And if we could discover the underlying principles, could we build more intelligent machines? Starting in March, I’ll be undertaking a Ph.D. in computational cognitive science to (hopefully!) shed some light on these problems. The hope is that this blog will serve as a public repository for my thoughts on this endeavour, as well as a small attempt to make cognitive science research more open and better publicized. Stay tuned!

