I am currently Director of AI at Mainstay, a startup in Boston. Before that, I was a machine learning scientist and engineer at Spotify, also in Boston, and earlier a research scientist at Amazon in Cambridge, MA, where I developed the natural language understanding behind Amazon’s Alexa products (Echo, Fire TV, Tap, Dot).
I am also an ABD Ph.D. student at the Language Technologies Institute, a department of the School of Computer Science at Carnegie Mellon University. My work there develops new techniques for more effective automatic speech recognition (ASR). This is an important area of research because spoken language is perhaps the most natural mode of communication for people, yet one of the most difficult for computers. Among other reasons, speech recognition is hard because machines have no faculty for understanding the meaning of spoken language.
My research at CMU focuses on encoding the kind of meaningful, common-sense information that can help guide automatic speech recognition. For example, “The pulp will be used to produce newsprint” and “The Pope will be used to produce newsprint” are acoustically very similar, but to most people the former is far more plausible, simply as a matter of common sense. My work asks how to encode and use some of that common sense in an ASR system.
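For the curious, here is a toy sketch of that reranking idea: score competing hypotheses with a language model and prefer the more plausible one. This is purely illustrative, not code from my thesis; it assumes the off-the-shelf GPT-2 model from the Hugging Face transformers library as the source of plausibility scores.

```python
# Illustrative sketch: rescore two acoustically similar ASR hypotheses
# with a language model so the more plausible reading wins.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(sentence: str) -> float:
    """Total log-probability the language model assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy over the predicted tokens; negate and scale
        # by the number of predictions to recover the summed log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

hypotheses = [
    "The pulp will be used to produce newsprint.",
    "The Pope will be used to produce newsprint.",
]
for h in sorted(hypotheses, key=log_likelihood, reverse=True):
    print(f"{log_likelihood(h):8.2f}  {h}")
```

The “pulp” reading should score noticeably higher, and that gap is exactly the kind of signal an ASR decoder can use to rerank its candidate transcriptions.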
In addition to speech recognition, I have research experience in natural language processing, knowledge representation, machine learning, information extraction, and information retrieval. In my spare time I enjoy softball, Mozart, and theater, among other things.