Talk title: Compositional Deep Learning
Abstract: Distributed representations of the content and structure of human language enjoyed a brief boom in the 1980s, but it quickly faded, and the past 20 years have been dominated by categorical representations of language, albeit often augmented with probabilities or weights over their elements. The last five years, however, have seen a resurgence, with highly successful use of distributed vector-space representations, often in the context of "neural" or "deep learning" models. One great success has been distributed word representations, and I will look at some of our recent work, and that of others, on better understanding word representations and how they can be viewed as global matrix factorizations, much closer in spirit to the traditional literature. But we need more than just word representations: we need to understand the larger linguistic units that are built out of words, a problem which has received much less attention. I will discuss the use of distributed representations in tree-structured recursive neural network models, showing how they can provide sophisticated linguistic models of semantic similarity, sentiment, syntactic parse structure, and logical entailment.
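The core idea behind the tree-structured recursive neural networks mentioned above can be sketched in a few lines. This is a minimal illustration, not the speaker's actual model: the dimension, the random weights, and the toy parse tree are all assumptions for demonstration, and a real model would learn `W` and `b` by backpropagation through the tree structure.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (illustrative; real models use hundreds)

# Composition parameters: parent = tanh(W [left; right] + b),
# so a phrase vector lives in the same space as a word vector.
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)

def compose(left, right):
    """Merge two child vectors into one parent vector of the same dimension."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Toy word vectors for the (hypothetical) parse tree ((very good) movie)
vecs = {w: rng.standard_normal(d) * 0.1 for w in ["very", "good", "movie"]}
phrase = compose(compose(vecs["very"], vecs["good"]), vecs["movie"])
print(phrase.shape)  # a phrase vector with the same shape as a word vector
```

Because every node, word or phrase, ends up as a vector of the same dimension, the same composition function can be applied recursively up an entire parse tree, and the resulting vectors can feed downstream classifiers for tasks such as sentiment or entailment.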