Object-Oriented Learning (OOL): Perception, Representation, and Reasoning
International Conference on Machine Learning (ICML)
Friday July 17, 2020, Virtual Workshop
We offer a systematic comparison of neural network architectures on their ability to learn relations from collections of objects. Relational inference is a crucial component of human reasoning from early development, and while the deep learning community has not ignored the subject, relational reasoning is typically studied only as embedded in a downstream task, such as visual question answering on CLEVR. We instead generate simple object representations and evaluate models on their ability to learn relations between objects in the abstract, which allows us to isolate the relevance of their inductive biases for such reasoning. Our results reveal substantial differences between architectures, suggesting there is room for further research in this domain.
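The evaluation setup described above, where models must learn a relation directly from object representations rather than from a downstream task, can be sketched as follows. This is a minimal illustration, not the authors' actual benchmark: the function name, feature dimensionality, and the "left-of" relation are all illustrative assumptions.

```python
import numpy as np

def make_relation_dataset(n_pairs=1000, dim=2, seed=0):
    """Generate pairs of simple 'objects' (random feature vectors) labeled
    with an abstract relation, here: is object a left of object b?

    Illustrative sketch only; the relation and encoding are assumptions."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(size=(n_pairs, dim))  # object a: features in [0, 1)
    b = rng.uniform(size=(n_pairs, dim))  # object b: features in [0, 1)
    labels = (a[:, 0] < b[:, 0]).astype(int)  # relation holds iff a precedes b on axis 0
    x = np.concatenate([a, b], axis=1)  # each example is the concatenated pair
    return x, labels

x, y = make_relation_dataset(n_pairs=512)
```

Any architecture under comparison (an MLP on the concatenated pair, a relation network, a graph network over the two objects, and so on) can then be trained on `(x, y)`, so that differences in test accuracy reflect inductive biases rather than task-specific machinery.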