Keynotes

Invited Speakers

Wednesday, September 1st, 10:00-11:00 CEST,
Session Chairs: Chris McCool, Sven Behnke

Luca Iocchi is Full Professor at Sapienza University of Rome, Italy, mainly teaching in the Master in Artificial Intelligence and Robotics, and currently Coordinator of the PhD Program in Engineering in Computer Science. His main research interests include cognitive robotics, task planning, multi-robot coordination, robot perception, robot learning, human-robot interaction, and social robotics. He is the author of over 175 refereed papers in journals and conferences in artificial intelligence and robotics. He is currently Associate Editor of the Artificial Intelligence Journal and has organized several scientific events. He has been principal investigator of several international, EU, national, and industrial projects in artificial intelligence and robotics.
He organized several international scientific robot competitions, as well as student competitions focussing on service robots and human-robot interaction (including ERL-SR, European RoboCupJunior Championship, and RoboCup@Home Education Challenges). He has also supervised the development of teams participating in robot competitions.

Cognitive Social Robots

In recent years, we have been observing a change of perspective in Artificial Intelligence & Robotics, shifting from “AI&Robot defeating humans” to “AI&Robot improving and augmenting human abilities”. Robotics research is also placing more and more emphasis on interaction with humans, in addition to interaction with the environment. When humans are involved in a task as part of the goal (not just as obstacles), robots need to demonstrate both cognitive and social abilities. In this talk, I will discuss the goals and challenges of cognitive social robots and, in particular, I will highlight knowledge representation methods and techniques for the effective integration of multiple representation layers.

Presentation

Wednesday, September 1st, 13:30-14:30 CEST,
Session Chairs: Sven Behnke, Federico Magistri

Angela Schoellig is an Associate Professor at the University of Toronto Institute for Aerospace Studies and a Faculty Member of the Vector Institute. She holds a Canada Research Chair (Tier 2) in Machine Learning for Robotics and Control and a Canada CIFAR Chair in Artificial Intelligence. She is a principal investigator of the NSERC Canadian Robotics Network and the University’s Robotics Institute. She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other. She is a recipient of the Robotics: Science and Systems Early Career Spotlight Award (2019), a Sloan Research Fellowship (2017), and an Ontario Early Researcher Award (2017). She is one of MIT Technology Review’s Innovators Under 35 (2017), a Canada Science Leadership Program Fellow (2014), and one of Robohub’s “25 women in robotics you need to know about” (2013). Her team is the four-time winner of the North American SAE AutoDrive Challenge. Her PhD at ETH Zurich (2013) was awarded the ETH Medal and the Dimitris N. Chorafas Foundation Award. She holds both an M.Sc. in Engineering Cybernetics from the University of Stuttgart (2008) and an M.Sc. in Engineering Science and Mechanics from the Georgia Institute of Technology (2007).

Robots that safely learn in a changing world

To date, robots have been primarily deployed in structured and predictable settings, such as factory floors and warehouses. The next generation of robots – ranging from self-driving and self-flying vehicles to robot assistants – is expected to operate alongside humans in complex, unknown and changing environments. This challenges current robot algorithms, which are typically based on our prior knowledge about the robot and its environment. While research has shown that robots are able to learn new skills from experience and adapt to unknown situations, these results have typically been limited to learning single tasks in simulation or in structured lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require data-efficient, online learning algorithms that guarantee safety. It will also require answering the fundamental question of how to design learning architectures for interactive agents. This talk will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.
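A common pattern in this line of work (a simplified sketch, not necessarily the exact formulation presented in the talk) is to combine a physics-based prior model with a learned residual and to express safety as a probabilistic constraint on the closed-loop states:

\[ x_{t+1} = f_{\text{prior}}(x_t, u_t) + g_\theta(x_t, u_t), \qquad \Pr\big(x_t \in \mathcal{X}_{\text{safe}}\big) \ge 1 - \delta \ \ \text{for all } t, \]

where $g_\theta$ is learned from operational data (for example with a Gaussian process or a neural network) and $\delta$ bounds the tolerated probability of leaving the safe set $\mathcal{X}_{\text{safe}}$.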

Presentation

Thursday, September 2nd, 10:00-11:00 CEST,
Session Chairs: Maren Bennewitz, Federico Magistri

Marco Hutter is an assistant professor for robotic systems at ETH Zurich and co-founder of ANYbotics AG, a Zurich-based company developing legged robots for industrial applications. Marco’s research interests are in the development of novel machines and actuation concepts together with the underlying control, planning, and learning algorithms for locomotion and manipulation. His work finds application in systems ranging from electrically actuated quadrupeds like ANYmal to large-scale autonomous excavators used for digital fabrication and disaster mitigation. Marco is part of the National Centre of Competence in Research (NCCR) Robotics and NCCR Digital Fabrication and PI in various international projects (e.g. EU Thing, NI) and challenges (e.g. DARPA SubT). He is a recipient of the Branco Weiss Fellowship, the ETH Medal, the IEEE/RAS Early Career Award, and an ERC Starting Grant.

Legged Robots for Challenging Environments

There has been tremendous progress in the field of legged robots over the last couple of years, leading to systems that are skilled and mature enough for real-world deployment. In particular, quadrupedal robots have reached a level of mobility that allows them to navigate complex environments, which enables them to take over inspection or surveillance jobs in places like offshore industrial plants, underground areas, or construction sites. In this talk, I will present our research work with the quadruped ANYmal and explain some of the underlying technologies for locomotion control, environment perception, and mission autonomy. I will show how these robots can learn to traverse very complex terrain, how they can navigate and explore unknown environments, and how they are able to carry out surveillance, inspection, or exploration missions.

Thursday, September 2nd, 13:30-14:30 CEST,
Session Chairs: Emanuele Menegatti, Jens Behley

Barbara Mazzolai is Associate Director for Robotics and Director of the Bioinspired Soft Robotics Laboratory at the Istituto Italiano di Tecnologia (IIT). From February 2011 to March 2021 she was the Director of the IIT Center for Micro-BioRobotics. She graduated in Biology (with Honours) at the University of Pisa, Italy, and received her Ph.D. in Microsystems Engineering from the University of Rome Tor Vergata. She was Deputy Director for the Supervision and Organization of the IIT Centers Network from July 2012 to 2017. From January to July 2017 she was Visiting Faculty at the Aerial Robotics Lab, Department of Aeronautics, Imperial College London. She is a member of the Scientific Advisory Board of the Max Planck Institute for Intelligent Systems (Tübingen and Stuttgart, Germany) and a member of the Advisory Committee of the Cluster on Living Adaptive and Energy-autonomous Materials Systems – livMatS (Freiburg, Germany). In 2020, she obtained the Italian National Scientific Qualification of Full Professor in Bioengineering. Her research activity is in the field of bioinspired soft robotics, combining biology and engineering to advance technological innovation and scientific knowledge. In particular, she focuses her investigations on plants and invertebrate animals. A pioneer of the field of plant-inspired robotics, she was the Coordinator of the EU FET-Open PLANTOID project and developed the first robot inspired by plants. Currently she coordinates the EU FET-Proactive projects GrowBot and I-Seed. In May 2021, she started the European Research Council (ERC) Consolidator Grant “I-Wood, Forest Intelligence: robotic networks inspired by the Wood Wide Web”. She has received various awards for her work, including the Marisa Bellisario Award and the Medal of the Italian Senate. She is author and co-author of more than 260 papers that have appeared in international journals, books, and conference proceedings. In 2019, she published her book “La Natura Geniale” (ed. Longanesi).

Towards New Paradigms of Movement for Adaptive Robots: Lessons from Plants

Plants show unique abilities of resilience and adaptation of their morphology and behaviour to different environmental conditions. Although plants have traditionally been seen as passive entities, they exhibit several strategies of movement, including those associated with their indeterminate growth, used to search for nutrients, light, or external supports; rapid movements to capture prey; and even passive movements for spreading seeds. Plants also show distributed intelligence and sensory systems, as well as interesting solutions to vary stiffness without muscles. In this talk, I will present our approach to bioinspiration using plants as a model for their adaptation and mobility abilities, towards the design of growing robots and of novel robotic systems that can deal with unstructured environments, moving purposively, effectively and efficiently.

Presentation

Friday, September 3rd, 10:00-11:00 CEST,
Session Chairs: Emanuele Menegatti, Federico Magistri

Daniel Cremers studied physics and mathematics in Heidelberg and New York. After his PhD in Computer Science, he spent two years at UCLA and one year at Siemens Corporate Research in Princeton. From 2005 until 2009 he was an associate professor at the University of Bonn. Since 2009 he has held the Chair of Computer Vision and Artificial Intelligence at the Technical University of Munich. His publications have received numerous awards, including the ‘Best Paper of the Year 2003’ (Int. Pattern Recognition Society), the ‘German Pattern Recognition Award 2004’ and the ‘2005 UCLA Chancellor’s Award for Postdoctoral Research’. For his pioneering research he received five grants from the European Research Council, including a Starting Grant, a Consolidator Grant and an Advanced Grant. In 2018 he organized the largest-ever European Conference on Computer Vision in Munich. He is a member of the Bavarian Academy of Sciences and Humanities. In December 2010 he was listed among “Germany’s top 40 researchers below 40” (Capital). In 2016 Prof. Cremers received the Gottfried Wilhelm Leibniz Award, the biggest award in German academia. He is co-founder and chief scientific officer of the high-tech startup Artisense.

Visual Simultaneous Localization and Mapping (SLAM)

Visual Simultaneous Localization and Mapping (SLAM) is of utmost importance to autonomous systems and augmented reality. I will discuss direct methods for visual SLAM (LSD-SLAM and DSO) that recover camera motion and 3D structure directly from brightness consistency, thereby providing better precision and robustness than classical keypoint-based techniques. Moreover, I will demonstrate how we can leverage the predictive power of self-supervised deep learning to significantly boost the performance of direct SLAM methods. The resulting method, D3VO, allows us to track a single camera with a precision that is on par with state-of-the-art stereo-inertial odometry methods. Lastly, I will introduce MonoRec – a deep network that can generate faithful dense reconstructions of the observed world from a single moving camera.
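To give a rough sense of what brightness consistency means here (a simplified sketch, not the exact formulation used in LSD-SLAM or DSO): direct methods estimate the relative camera pose $T$ and per-pixel inverse depths $d_p$ by minimizing a photometric error between a reference image $I_i$ and a target image $I_j$,

\[ E(T, d) \;=\; \sum_{p} \rho\Big( I_j\big(\pi(T, p, d_p)\big) - I_i(p) \Big), \]

where $\pi$ projects pixel $p$ with inverse depth $d_p$ into the target frame and $\rho$ is a robust norm (e.g. Huber); the full systems additionally model affine brightness changes and optimize over a sliding window of frames.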

Presentation

Friday, September 3rd, 13:30-14:30 CEST,
Session Chairs: Sven Behnke, Jens Behley

Ingmar Posner leads the Applied Artificial Intelligence Lab at Oxford University and is a founding director of the Oxford Robotics Institute. His research aims to enable machines to robustly act and interact in the real world – for, with, and alongside humans. With a significant track record of contributions in machine perception and decision-making, Ingmar and his team are thinking about the next generation of robots that are flexible enough in their scene understanding, physical interaction and skill acquisition to learn and carry out new tasks. His research is guided by a vision to create machines which constantly improve through experience. In 2014 Ingmar co-founded Oxbotica, a multi-award-winning provider of mobile autonomy software solutions.

Learning to Perceive and to Act – Disentangling Tales from (Structured) Latent Space

Unsupervised learning is experiencing a renaissance. Driven by an abundance of unlabelled data and the advent of deep generative models, machines are now able to synthesise complex images, videos and sounds. In robotics, one of the most promising features of these models – the ability to learn structured latent spaces – is gradually gaining traction. The ability of a deep generative model to disentangle semantic information into individual latent-space dimensions seems naturally suited to state-space estimation. Combining this information with generative world-models, models which are able to predict the likely sequence of future states given an initial observation, is widely recognised to be a promising research direction with applications in perception, planning and control. Yet, to date, designing generative models capable of decomposing and synthesising scenes based on higher-level concepts such as objects remains elusive in all but simple cases. In this talk I will motivate and describe our recent work using deep generative models for unsupervised object-centric scene inference and generation. I will demonstrate that explicitly modelling correlations in the latent space is a key ingredient required to synthesise physically plausible scenes. Furthermore, I will make the case that exploiting correlations encoded in latent space, and learnt through experience, leads to a powerful and intuitive way to disentangle and manipulate task-relevant factors of variation. I will show that this not only casts a novel light on affordance learning, but also that the same framework is capable of generating a walking gait in a real-world quadruped robot.

Presentation