Object-Oriented Learning: Perception, Representation, and Reasoning

International Conference on Machine Learning (ICML)

July 17-18, 2020

Vienna, Austria

ICML is now a virtual conference only


Objects, and the interactions between them, are the foundations on which our understanding of the world is built [1]. Similarly, abstractions centered around the perception and representation of objects play a key role in building human-like AI, supporting high-level cognitive abilities like causal reasoning, object-centric exploration, and problem solving [2,4,5,6]. Indeed, prior work has shown that relational reasoning and control problems can benefit greatly from object descriptions [2,7]. Yet, many current methods in machine learning take a less structured approach in which objects are only implicitly represented [3], posing a challenge for interpretability and the reuse of knowledge across tasks. Motivated by these observations, there has been a recent effort to reinterpret various learning problems from the perspective of object-oriented representations [2,4,5,6].

In this workshop, we will showcase a variety of approaches in object-oriented learning, with three particular emphases. Our first interest is in learning object representations in an unsupervised manner. Although computer vision has made an enormous amount of progress in learning about objects via supervised methods, we believe that learning about objects with little to no supervision is preferable: it minimizes labeling costs, and also supports adaptive representations that can be changed depending on the particular situation and goal. The second primary interest of this workshop is to explore how object-oriented representations can be leveraged for downstream tasks such as reinforcement learning and causal reasoning. Lastly, given the central importance of objects in human cognition, we will highlight interdisciplinary perspectives from cognitive science and neuroscience on how people perceive and understand objects.

Call for Papers

We are soliciting short, four-page papers on all aspects of object-oriented learning. We welcome submissions on:

  • Learning unsupervised object-centric representations,
  • Leveraging object-centric representations in RL,
  • The interface between objects and causal reasoning,
  • Object-centric approaches to exploration,
  • Inductive biases which favor object representations,
  • Temporal "objects" (i.e. events),
  • Datasets or environments for learning and testing object-centric reasoning,
  • Object-centric aspects of human cognition,
  • Subjects which are otherwise relevant to object-oriented learning.

We particularly encourage submissions from students belonging to groups that are underrepresented at machine learning conferences with respect to gender, gender identity, sexual orientation, race, ethnicity, nationality, disability, or institution. We are pleased to offer a limited number of travel grants to student presenters from these groups; if you would like to apply for financial assistance, please indicate this when submitting your paper.

Submission Policy:

  • Submissions should be a maximum of four pages, plus any number of pages for references and supplementary material. We ask authors to use the supplementary material only for minor details that do not fit in the main paper.
  • Papers should be fully anonymized for double-blind review.
  • Papers should use this style file.
  • Dual submission policy: we welcome submissions that have been published or are currently under review at other venues, including both full conference papers and workshops. However, all submissions should be shortened to four pages.
  • Evaluation criteria: Papers will be reviewed for topicality, clarity, correctness, and novelty; however, we welcome submissions which are still work-in-progress. Papers which are longer than four pages, clearly off-topic, or not anonymized will be rejected without review.
  • Papers accepted to the workshop will not be considered archival publications.

Important Dates and Links

UPDATE (24/04/2020): ICML is now a virtual-only conference, which has allowed us to extend the submission deadline to give workshop contributors more time.

UPDATE (22/05/2020): Because NeurIPS has postponed its submission deadline, we have decided to extend our submission deadline one final time to give workshop contributors more time.

Submission site opens: April 15, 2020
Submission site: https://cmt3.research.microsoft.com/WOR2020/
Submission deadline: June 5, 2020, midnight anywhere on Earth (extended from May 1 and May 22, 2020)
Decisions announced: June 21, 2020, tentative (originally May 29, 2020)
Day of workshop: TBD (July 17-18, 2020)

Invited Speakers

Fabien Baradel is a PhD student at INSA-Lyon. His research interests include causality, perception, and video understanding. His recent work is on counterfactual learning of physical dynamics.
Jody Culham is a Professor in the Department of Psychology at Western University in London, Ontario. Her research focuses on how vision is used for perception and to guide actions in human observers. In order to answer these questions, she makes use of several techniques from cognitive neuroscience, including functional Magnetic Resonance Imaging (fMRI) and behavioral testing.
Moira Dillon is an Assistant Professor of Psychology at New York University (NYU). Her main research question is how the physical world in which we live shapes the abstract world in which we think. She addresses this question by exploring the origin and development of uniquely human geometric understanding.
Klaus Greff is a PhD student at IDSIA. His research interests include perceptual grouping and object-centric representation learning. His recent work is on unsupervised object-based perception models.
Thomas Kipf is a research scientist at Google Brain. His research focuses on graph neural networks, reasoning, and unsupervised object-oriented representations.
Igor Mordatch is a research scientist at Google Brain. His research focuses on model-based RL, multi-agent RL, and object-based concept learning using energy-based models.
Vincent Sitzmann is a PhD student at Stanford University. His research interest lies in neural scene representations: the way neural networks learn to represent information about our 3D world. He is interested in learning to reason about the world from visual observations, such as inferring a complete model of a scene, including geometry, materials, lighting, etc., from only a few observations.
Linda Smith is a Distinguished Professor of Psychological and Brain Sciences at Indiana University. Her main research interests include the interaction of perceptual, cognitive, and linguistic factors in the psychology of objects and dimensions from a developmental perspective.

Sponsors

Organizers

References

  1. Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental science, 10(1), 89-96.
  2. Bapst, V., Sanchez-Gonzalez, A., Doersch, C., Stachenfeld, K. L., Kohli, P., Battaglia, P. W., & Hamrick, J. B. (2019). Structured agents for physical construction. ICML 2019.
  3. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
  4. Kosiorek, A., Kim, H., Teh, Y. W., & Posner, I. (2018). Sequential attend, infer, repeat: Generative modelling of moving objects. NeurIPS 2018.
  5. Lin, Z., Wu, Y.-F., Peri, S. V., Sun, W., Singh, G., Deng, F., Jiang, J., & Ahn, S. (2020). SPACE: Unsupervised object-oriented scene representation via spatial attention and decomposition. ICLR 2020.
  6. van Steenkiste, S., Chang, M., Greff, K., & Schmidhuber, J. (2018). Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. ICLR 2018.
  7. Diuk, C., Cohen, A., & Littman, M. L. (2008). An object-oriented representation for efficient reinforcement learning. ICML 2008.