PLENARIES
Simulation and Control through Contact
RUSS TEDRAKE
Toyota Professor of EECS, Aero/Astro, MechE., MIT & VP, Robotics Research, Toyota Research Institute
Time: 15:25—15:50 CEST, Sep. 28 (Tue)
Hall: Hall 1
Abstract

Very often our robots attempt to avoid contact with the world (e.g., collision-free motion planning), or attempt to constrain the locations on the robot that will make contact (e.g., "point feet" on legged robots, or impedance control at an end effector). But the research community is starting to generate examples of robots that make very rich contact with the world, showing just how beautiful and effective it can be.

In this talk, I'd like to discuss some big questions: 1) How well can we simulate contact? How important is it that we do it well? 2) How do algorithms from reinforcement learning compare with model-based optimization? I will describe some recent results that try to deepen our understanding of these questions, and provide a foundation for continuing to improve our algorithms. And, of course, I will have robot videos.
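As a rough illustration of why question 1) is subtle (a minimal, hypothetical sketch, not Drake or any method from the talk), consider a point mass bouncing on the ground under a compliant penalty contact model: the simulated outcome depends strongly on the chosen contact stiffness and time step.

```python
import numpy as np

# Toy sketch only: a point mass bouncing on the ground under a
# penalty (spring-damper) contact model, integrated with
# semi-implicit Euler. The behavior is sensitive to the contact
# stiffness k and the time step dt, which is one reason contact
# simulation is hard to get right.
def simulate(k=1e4, b=10.0, dt=1e-3, steps=2000, m=1.0, g=9.81):
    z, zd = 1.0, 0.0                   # height [m], velocity [m/s]
    traj = []
    for _ in range(steps):
        f = 0.0
        if z < 0.0:                    # penetration -> contact force
            f = -k * z - b * zd
        zd += dt * (f / m - g)         # semi-implicit Euler update
        z += dt * zd
        traj.append(z)
    return np.array(traj)

# Peak penetration depth changes by orders of magnitude with k.
print(simulate(k=1e4).min(), simulate(k=1e6).min())
```

Stiffer contact approximates rigid behavior more closely but demands smaller time steps, one of the core trade-offs in contact simulation.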

Biography

Russ Tedrake is the Toyota Professor at the Massachusetts Institute of Technology (MIT) in the Departments of Electrical Engineering and Computer Science, Mechanical Engineering, and Aero/Astro, and he is a member of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). He is also the Vice President of Robotics Research at Toyota Research Institute (TRI). He received a B.S.E. in Computer Engineering from the University of Michigan in 1999, and a Ph.D. in Electrical Engineering and Computer Science from MIT in 2004. Dr. Tedrake is the Director of the MIT CSAIL Center for Robotics and was the leader of MIT’s entry in the DARPA Robotics Challenge. He is a recipient of the NSF CAREER Award, the MIT Jerome Saltzer Award for undergraduate teaching, the DARPA Young Faculty Award in Mathematics, the 2012 Ruth and Joel Spira Teaching Award, and was named a Microsoft Research New Faculty Fellow. His research has been recognized with numerous conference best paper awards, including at ICRA, Robotics: Science and Systems, Humanoids, and Hybrid Systems: Computation and Control, as well as the inaugural best paper award from the IEEE RAS Technical Committee on Whole-Body Control.

Learning Risk and Social Behavior in Mixed Human-Autonomous Vehicles Systems
DANIELA RUS
Massachusetts Institute of Technology
Time: 15:25—15:50 CEST, Sep. 29 (Wed)
Hall: Hall 1
Abstract

Deployment of autonomous vehicles (AVs) on public roads promises increases in efficiency and safety, and requires intelligent situation awareness. We wish to have autonomous vehicles that can learn to behave in safe and predictable ways, and that are capable of evaluating risk, understanding the intent of human drivers, and adapting to different road situations. This talk describes an approach to learning and integrating risk and behavior analysis in the control of autonomous vehicles. I will introduce Social Value Orientation (SVO), which captures how an agent’s social preferences and cooperation affect interactions with other agents by quantifying the degree of selfishness or altruism. SVO can be integrated in control and decision making for AVs. I will provide recent examples of self-driving vehicles capable of adaptation.
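As a rough sketch of how an SVO angle can enter a planner's objective (an illustrative toy, not the speaker's implementation), an agent's utility can blend its own reward with another agent's reward, with the blend set by the SVO angle:

```python
import math

def svo_utility(reward_self: float, reward_other: float, svo_angle_rad: float) -> float:
    """Blend an agent's own reward with another agent's reward.

    svo_angle_rad = 0    -> purely egoistic (maximizes own reward)
    svo_angle_rad = pi/4 -> prosocial (weights both roughly equally)
    svo_angle_rad = pi/2 -> purely altruistic
    """
    return math.cos(svo_angle_rad) * reward_self + math.sin(svo_angle_rad) * reward_other

# Example: a prosocial merging car values the other driver's progress
# almost as much as its own.
print(svo_utility(reward_self=1.0, reward_other=0.5, svo_angle_rad=math.pi / 4))
```

In an SVO-aware planner, each vehicle would optimize such a blended utility, with the angle of other drivers estimated online from their observed behavior.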

Biography

Prof. Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science; Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Deputy Dean of Research for the Schwarzman College of Computing at MIT. Prof. Rus’s research interests center on the development of the science and engineering of autonomy. Her recent research focus includes developing (1) new approaches to designing robot bodies, (2) new algorithms and models for robot brains, and (3) intuitive human-machine interaction, in the context of single-robot and multi-robot systems. She is a senior visiting scholar at the MITRE Corporation, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a fellow of the Association for the Advancement of Artificial Intelligence, the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery. She was a member of the Defense Innovation Board and the President’s Council of Advisors on Science and Technology. She is a recipient of a MacArthur Fellowship, the Engelberger Prize for Robotics, the Mass TLC Innovation Catalyst Award, the IEEE RAS Pioneer Award, and the IJCAI John McCarthy Award. Rus earned her PhD in computer science from Cornell University.

Computer-aided and robot-assisted medical interventions: myths, reality and challenges
JOCELYNE TROCCAZ
Research Director, TIMC laboratory, CNRS and Grenoble Alpes University, France
Time: 15:25—15:50 CEST, Sep. 30 (Thu)
Hall: Hall 1
Abstract

Computer-Assisted Medical Intervention (CAMI) aims at assisting a physician during diagnostic or therapeutic interventions. This includes several abilities: to process and to fuse multi-modal information and prior knowledge, to plan a suitable strategy and to simulate it, and finally, to perform the strategy as planned in a safe and clinically efficient way. Operating on soft biological tissue may require iterative re-planning from intra-operative information to account for modifications of the anatomical environment. In the first part of this talk, we will describe this general context and illustrate these different aspects with a clinical example: prostate brachytherapy.
In a second part of the talk, we will focus on the specific contributions of robotic assistance. After a short history and a panorama of clinically available systems, we will try to understand why, despite an extensive amount of research, development and technical innovation in surgical robotics for more than three decades, so few robotic systems have been translated from prototypes to products. We will also discuss the cognitive abilities of such systems and the potential evolution of robotic devices towards surgical assistants.

Biography

Research Director at CNRS, working in the TIMC Laboratory, Grenoble, France. Graduated in Computer Science. PhD in robotics in 1986, Institut National Polytechnique de Grenoble. CNRS Research Fellow from 1988. Specialized in medical robotics and computer-assisted medical interventions. Her main interests are in the development of new robotic paradigms and devices and in image registration. Active in several clinical areas (urology, radiotherapy, cardiac surgery, orthopedics, etc.) in close collaboration with Grenoble University Hospital and La Pitié-Salpêtrière Hospital in Paris. Thanks to industrial transfer, hundreds of thousands of patients worldwide have benefited from technology and systems she developed. Coordinator of the national excellence laboratory CAMI since 2016. IEEE Fellow, MICCAI Fellow. Dr. Troccaz has been an associate editor of the IEEE Transactions on Robotics and Automation and of the IEEE Transactions on Robotics. Currently a member of the steering committee of the IEEE Transactions on Medical Robotics and Bionics and an editorial board member of Medical Image Analysis. Member of the French Academy of Surgery. Recipient of several awards and medals: Award from the French Academy of Surgery in 2014; Silver Medal from CNRS in 2015; Chevalier de la Légion d'Honneur in 2016.

For more details, see http://membres-timc.imag.fr/Jocelyne.Troccaz.

KEYNOTES
Broad Robot Generalization by Reusing Broad Data
CHELSEA FINN
Assistant Professor, Stanford University
Time: 15:55—16:15 CEST, Sep. 28 (Tue)
Hall: Hall 2
Abstract

Recent progress in robot learning has shown that robots can acquire complex visuomotor manipulation skills through trial and error. Despite these successes, the generalization and versatility of robots across environmental conditions, tasks, and objects remains a major challenge. And, unfortunately, our existing algorithms and training set-ups are not prepared to tackle such challenges, which demand large and diverse sets of tasks and experiences. This is because robots are typically trained using online data collection, with data gathered entirely from scratch in a single lab environment. In this talk, I will propose that we consider a paradigm for robot learning where we continuously accumulate, leverage, and reuse broad offline datasets across papers, a practice standard in the rest of machine learning but notably underexplored for robotic manipulation. To this end, I will first discuss recent results studying zero-shot robot generalization using a diverse dataset containing 100 distinct tasks. Then, I will discuss how we can overcome the challenges that arise when reusing existing broad offline datasets, including datasets spanning multiple robots and datasets of videos of humans. In all cases, the evaluation will emphasize the robots’ ability to generalize broadly, including to new objects, new scenes, new camera viewpoints, and even new tasks.

Biography

Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, where she directs the IRIS lab. Her research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. She received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley. Her research has been recognized through the ACM doctoral dissertation award, the Microsoft Research Faculty Fellowship, the ONR YIP award, and the MIT Technology Review 35 under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg.

Lessons Learned from Building Benchmarks for Visual Localization
TORSTEN SATTLER
Senior Researcher, Czech Institute of Informatics, Robotics and Cybernetics; Czech Technical University in Prague; Czech Republic
Time: 15:55—16:15 CEST, Sep. 28 (Tue)
Hall: Hall 3
Abstract

In recent years, progress in Computer Vision has to a large degree been driven by the availability of benchmark datasets, as these benchmarks have been used to develop and test new algorithms and ideas. Visual localization, i.e., the problem of estimating the 6DoF pose from which a given image was taken, is no exception. At the same time, creating a benchmark is a highly non-trivial task in itself. In this talk, we will look back at the long-term visual localization benchmark we designed and published at visuallocalization.net. In particular, we will look at the design decisions we made and what we could have done better in hindsight (and how). We will talk about updates to the benchmark as well as new datasets we recently released and plan to release. In order to scale to a meaningful size, it is necessary to use automated algorithms when constructing localization benchmarks. We will show that the choice of algorithm can have a significant impact on which localization methods perform well on a dataset. We will further point out ways to take this into account when benchmarking visual localization techniques.
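For context, most structure-based localization methods evaluated on such benchmarks share a common geometric core: estimating the 6DoF camera pose from 2D-3D matches with PnP inside a RANSAC loop. Below is a minimal sketch using OpenCV; the correspondences are random placeholders standing in for real feature matches against a 3D map.

```python
import cv2
import numpy as np

# Placeholder 2D-3D correspondences (in practice these come from
# matching image features against points in a 3D map).
points_3d = np.random.rand(50, 3).astype(np.float32)        # map points
points_2d = np.random.rand(50, 2).astype(np.float32) * 640  # pixel detections

# Assumed pinhole intrinsics (illustrative values only).
K = np.array([[500, 0, 320],
              [0, 500, 240],
              [0,   0,   1]], dtype=np.float32)
dist_coeffs = np.zeros(4, dtype=np.float32)

# Robust 6DoF pose estimation: PnP inside RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, points_2d, K, dist_coeffs,
    reprojectionError=8.0, confidence=0.999)

if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated pose
    print("inliers:", 0 if inliers is None else len(inliers))
```

One point the talk touches on: the "ground truth" poses of a benchmark are themselves produced by pipelines built on this kind of machinery, so the choice of reconstruction algorithm can bias which localization methods appear to perform well.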

Biography

Torsten Sattler is a Senior Researcher at the Czech Institute of Informatics, Robotics and Cybernetics (CIIRC) at the Czech Technical University (CTU) in Prague, where he is currently building up his own research group working on 3D computer vision and machine learning as part of a tenure-track position. Torsten joined CIIRC in July 2020. Before that, he was a tenured Associate Professor at Chalmers University of Technology in Gothenburg, Sweden, after spending five years, first as a PostDoc and later as a Senior Researcher, in the Computer Vision and Geometry group at ETH Zurich in Zurich, Switzerland. He obtained his PhD from RWTH Aachen University in Germany. He was a program chair for the German Conference on Pattern Recognition in 2020 and a workshop chair for CVPR 2021. He will be a program chair for the European Conference on Computer Vision in 2024. Furthermore, he is co-organizing workshops and tutorials on visual localization at ICCV 2021.

Wireless medical microrobots inside our body
METIN SITTI
Director, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
Time: 15:55—16:15 CEST, Sep. 28 (Tue)
Hall: Hall 4
Abstract

Wireless cell-sized medical microrobots have the unique capability of navigating, operating, and staying inside risky and currently hard- or impossible-to-reach small spaces inside our body. However, due to miniaturization limits on on-board actuation, powering, sensing, computing, and communication, new methods are needed to create them. In this direction, two alternative approaches are investigated. First, cell-driven biohybrid microswimmers are proposed for targeted active drug delivery applications. Bacteria- and algae-driven microswimmers are steered using remote magnetic fields and local chemical, oxygen, or pH gradients in a given physiological microenvironment. In vitro local and effective drug delivery demonstrations of such microswimmers are reported. Second, external light, magnetic fields, and acoustic waves are used to propel mobile microrobots remotely. Carbon nitride-based light-driven microswimmers with propulsion and responsive on-demand drug delivery in biological media are reported. Next, using rotating external magnetic fields, microrollers based on magnetic Janus microparticles are moved against the blood flow on vessel walls. They can adhere to specific cancer cells using their antibody coating and release drugs triggered by light. Also, alternating magnetic fields are used to stimulate magnetopiezoelectric nanoparticles for wireless deep brain neural stimulation to treat Parkinson's disease, which has been validated in preliminary in vivo mouse tests. Moreover, using acoustic waves, microswimmers with integrated microbubbles are propelled on tissue surfaces by fluidic flows induced by bubble oscillation. Such mobile microrobots and their swarms can be tracked inside our body using photoacoustic and x-ray medical imaging.

Biography

Metin Sitti is the director of the Physical Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. He is also a professor at ETH Zurich, Switzerland, and Koç University, Turkey. He was a professor at Carnegie Mellon University (2002-2014) and a research scientist at UC Berkeley (1999-2002) in the USA. He received BSc (1992) and MSc (1994) degrees in electrical and electronics engineering from Boğaziçi University, Turkey, and a PhD degree in electrical engineering from the University of Tokyo, Japan (1999). His research interests include small-scale mobile robotics, bio-inspiration, wireless medical robots, and physical intelligence. He is an IEEE Fellow. Among his selected awards, he received the Breakthrough of the Year Award at the Falling Walls World Science Summit in 2020, an ERC Advanced Grant in 2019, the Rahmi Koç Science Prize in 2018, the SPIE Nanoengineering Pioneer Award in 2011, and an NSF CAREER Award in 2005. He has received over 15 best paper and video awards at major conferences, including the Best Paper Award at the Robotics: Science and Systems Conference in 2019. He is the editor-in-chief of Progress in Biomedical Engineering and the Journal of Micro-Bio Robotics, and an associate editor of Science Advances and Extreme Mechanics Letters.

Keeping an Eye on Things: Can We Build a Long-Term Navigation System Based On Vision?
TIM BARFOOT
University of Toronto
Time: 15:55—16:15 CEST, Sep. 28 (Tue)
Hall: Hall 5
Abstract

Many applications of mobile robots (e.g., transportation, delivery, monitoring, guiding) require the ability to operate over long periods of time in the same environment. Over the lifetime of a robot, many factors, from lighting and weather to seasonal change and construction, influence the way sensors ‘see’ that environment. Cameras are perhaps the most challenging sensor to work with in this regard due to their passive nature; however, they are also the cheapest, lowest-power, and highest-resolution sensors we currently have. I will discuss our lab’s long-term effort to build a practical, lightweight, long-term navigation system for mobile robots based on cameras. If our robots have ‘seen further’, it is surely by standing on the shoulders of giants, and so I will reflect on the key technological ingredients from the literature that we have incorporated into our navigation system, including matchable features in the front end, and relative multi-experience map representations and robust estimation in the back end. The answer to the question posed in the title hinges on the ability to ‘match’ new images to those previously captured, and so I will discuss how we have been using deep learning to automatically figure out which features to ‘keep an eye on’ for long-term localization, with some promising results. I will also use this talk to formally announce the open sourcing of our Visual Teach and Repeat 3 (VT&R3) navigation framework for the first time, so you can try it out for yourself!
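To make the ‘matching’ step concrete, here is a minimal sketch using classical hand-crafted ORB features in OpenCV. It does not reproduce the talk's learned features or multi-experience maps, and the random images are stand-ins for a stored teach image and a live repeat image.

```python
import cv2
import numpy as np

# Random images stand in for a stored "teach" image and a live
# "repeat" image (real use would load camera frames instead).
rng = np.random.default_rng(0)
teach = rng.integers(0, 256, (480, 640), dtype=np.uint8)
repeat = rng.integers(0, 256, (480, 640), dtype=np.uint8)

# Detect keypoints and compute binary descriptors in both images.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(teach, None)
kp2, des2 = orb.detectAndCompute(repeat, None)

# Cross-checked brute-force matching keeps only mutual nearest neighbors,
# a simple way to suppress bad associations before pose estimation.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches")
```

Hand-crafted descriptors like ORB degrade under the lighting and seasonal changes mentioned above, which is exactly the gap the learned features discussed in the talk aim to close.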

Biography

Prof. Timothy Barfoot (University of Toronto Robotics Institute) works in the area of autonomy for mobile robots targeting a variety of applications. He is interested in developing methods (localization, mapping, planning, control) to allow robots to operate over long periods of time in large-scale, unstructured, three-dimensional environments, using rich onboard sensing (e.g., cameras, laser, radar) and computation. Tim holds a BASc (Aerospace Major) from the UofT Engineering Science program and a PhD from UofT in robotics. He took up his academic position in May 2007, after spending four years at MDA Robotics (builder of the well-known Canadarm space manipulators), where he developed autonomous vehicle navigation technologies for both planetary rovers and terrestrial applications such as underground mining. He was also a Visiting Professor at the University of Oxford in 2013 and recently completed a leave as Director of Autonomous Systems at Apple in California from 2017 to 2019. Tim is an IEEE Fellow, held a Canada Research Chair (Tier 2), was an Early Researcher Awardee in the Province of Ontario, and has received two paper awards at the IEEE International Conference on Robotics and Automation (ICRA 2010, 2021). He is currently the Associate Director of the UofT Robotics Institute, a Faculty Affiliate of the Vector Institute, and co-Faculty Advisor of UofT's self-driving car team, which won the SAE AutoDrive competition four years in a row. He sits on the Editorial Boards of the International Journal of Robotics Research (IJRR) and Field Robotics (FR) and the Foundation Board of Robotics: Science and Systems (RSS), and served as the General Chair of Field and Service Robotics (FSR) 2015, which was held in Toronto. He is the author of a book, State Estimation for Robotics (2017), which is free to download from his webpage (http://asrl.utias.utoronto.ca/~tdb).

Robot Audition 5.0: Listening to Several Things at Once and Beyond
KAZUHIRO NAKADAI
Principal Scientist, Honda Research Institute Japan Co., Ltd. & Specially-Appointed Professor, Tokyo Institute of Technology, Japan
Time: 15:55—16:15 CEST, Sep. 28 (Tue)
Hall: Hall 6
Abstract

In this talk, I give an overview of robot audition and focus on the calibration and adaptation of microphone positions, microphone array orientations, time offsets, sampling rates, and transfer functions between microphones and sound sources, all of which are important for robot audition systems operating in real environments. Robot audition was proposed in 2000 by Prof. Hiroshi G. Okuno and myself to realize real-world auditory functions for robots. Its research stages can be categorized as Robot Audition 1.0-5.0, in analogy to the Society 5.0 concept proposed by the Japanese government.

Robot Audition 1.0 was the dawn of this area and proceeded in cooperation with the psychological field of auditory scene analysis. Based on the idea that humans perceive each sound source as a stream, it tried to understand auditory scenes in which multiple sound sources exist simultaneously through stream segregation using various heuristic rules. Although most studies were conducted in simulated environments, they showed the effectiveness of the heuristic rules and spatial cues that humans rely on. Robot Audition 2.0 extended this to real-time binaural processing in real environments by introducing ideas from physiology and brain science. Primary robot audition functions such as sound source localization, sound source separation, and automatic speech recognition were developed, and preliminary results on speech recognition of three simultaneous utterances were reported. In Robot Audition 3.0, acoustic signal processing with a microphone array played an important role in improving the performance of robot audition. Real-time speech recognition of simultaneous utterances from 11 people was demonstrated by integrating the three primary functions, and the open-source software HARK was released as a collection of robot audition technologies; it has been downloaded over 20,000 times every year since its release in 2008. In Robot Audition 4.0, HARK led to the deployment of robot audition technology in various harsh conditions, called extreme audition. Drone audition for search-and-rescue operations at disaster sites was developed and demonstrated live outdoors, localizing human voices from the sky with a drone-embedded microphone array. Currently, in Robot Audition 5.0, we are integrating robot audition with deep learning and working on social implementations such as bird song analysis and communication assistance systems for people with hearing difficulties.
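For readers new to microphone-array processing, a classical building block behind functions like sound source localization and separation is the delay-and-sum beamformer. The sketch below is a generic textbook version, not HARK's implementation (HARK provides more sophisticated methods, such as MUSIC-based localization).

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Steer a microphone array toward a far-field source.

    signals:       (num_mics, num_samples) time-domain recordings
    mic_positions: (num_mics, 3) microphone coordinates [m]
    direction:     unit vector pointing from the array to the source
    fs:            sampling rate [Hz]; c: speed of sound [m/s]
    """
    num_mics, num_samples = signals.shape
    # A mic closer to the source (larger projection onto `direction`)
    # receives the wavefront earlier; delay it to align all channels.
    delays = mic_positions @ direction / c
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # Multiplying the spectrum by exp(-2j*pi*f*tau) delays by tau,
        # allowing fractional-sample alignment.
        shift = np.exp(-2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(np.fft.rfft(signals[m]) * shift, n=num_samples)
    return out / num_mics

# Example: two mics 5 cm apart, steering along +x.
sig = np.random.default_rng(0).standard_normal((2, 1024))
mics = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0]])
enhanced = delay_and_sum(sig, mics, np.array([1.0, 0.0, 0.0]), fs=16000)
```

The calibration issues highlighted in the abstract map directly onto this sketch: errors in microphone positions, time offsets, or sampling rates corrupt the delays, which is why calibration and adaptation matter so much in practice.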

Biography

Kazuhiro Nakadai received a B.E. in electrical engineering in 1993, an M.E. in information engineering in 1995, and a Ph.D. in electrical engineering in 2003 from the University of Tokyo. He worked at Nippon Telegraph and Telephone as a system engineer from 1995 to 1999. After that, he worked on the Kitano Symbiotic Systems Project, ERATO, JST, as a researcher from 1999 to 2003. Currently, he is a principal scientist at Honda Research Institute Japan Co., Ltd. He has held concurrent positions at Tokyo Institute of Technology as a visiting associate professor from 2006 to 2010, a visiting professor from 2011 to 2017, and a specially-appointed professor since July 2017. He also held a concurrent position as a guest professor at Waseda University from 2011 to 2018. His research interests include AI, robotics, signal processing, computational auditory scene analysis, multi-modal integration, and robot audition. He was an executive board member of JSAI from 2015 to 2016, and of RSJ from 2017 to 2018. He is a senior member of IEEE, and a fellow of RSJ.

Search-based Planning for Higher-dimensional Robotic Systems
MAXIM LIKHACHEV
Associate Professor, Carnegie Mellon University & Sr. Staff Software Engineer, Waymo
Time: 15:55—16:15 CEST, Sep. 29 (Wed)
Hall: Hall 2
Abstract

Search-based Planning refers to planning by constructing a graph from a systematic discretization of the state- and action-space of a robot and then employing a heuristic search to find an optimal path from the start to the goal vertex in this graph. This paradigm works well for low-dimensional robotic systems such as mobile robots and provides rigorous guarantees on solution quality. However, when it comes to planning for higher-dimensional robotic systems such as mobile manipulators, humanoids, and ground and aerial vehicles navigating at high speed, Search-based Planning has typically been thought of as infeasible. In this talk, I will describe some of the research that my group has done to change this thinking. In particular, I will focus on two different principles. First, constructing multiple lower-dimensional abstractions of robotic systems, solutions to which can effectively guide the overall planning process using Multi-Heuristic A*. Second, using offline preprocessing to provide online planning algorithms that provably return solutions within a (small) constant time for repetitive planning tasks. I will present algorithmic frameworks that utilize these principles, describe their theoretical properties, and demonstrate their applications to a wide range of physical high-dimensional robotic systems.
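To illustrate the first principle, here is a toy sketch of the round-robin, multi-queue idea behind Multi-Heuristic A*. It is deliberately simplified: it shares g-values across queues but omits the anchor-queue admissibility tests of the published algorithm.

```python
import heapq
import itertools
from collections import defaultdict

def multi_heuristic_astar(start, goal, neighbors, heuristics, w=2.0):
    """Toy round-robin Multi-Heuristic A* sketch.

    neighbors(s) yields (successor, edge_cost); heuristics is a list of
    functions h(s), with heuristics[0] playing the role of the anchor.
    Each heuristic gets its own priority queue; g-values are shared.
    """
    tie = itertools.count()                      # tiebreaker for the heaps
    g = defaultdict(lambda: float("inf"))
    g[start] = 0.0
    parent = {start: None}
    queues = [[(w * h(start), next(tie), start)] for h in heuristics]
    closed = set()
    while any(queues):
        for q in queues:                         # round-robin over queues
            while q and q[0][2] in closed:
                heapq.heappop(q)                 # lazily drop stale entries
            if not q:
                continue
            _, _, s = heapq.heappop(q)
            if s == goal:                        # reconstruct and return path
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            closed.add(s)
            for s2, cost in neighbors(s):
                if g[s] + cost < g[s2] and s2 not in closed:
                    g[s2] = g[s] + cost
                    parent[s2] = s
                    for q2, h2 in zip(queues, heuristics):
                        heapq.heappush(q2, (g[s2] + w * h2(s2), next(tie), s2))
    return None

# 4-connected grid example with two heuristics: Manhattan distance to the
# goal, and a zero (uninformed) heuristic.
def neighbors(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 10 and 0 <= y + dy < 10:
            yield (x + dx, y + dy), 1.0

h0 = lambda s: abs(s[0] - 9) + abs(s[1] - 9)
print(multi_heuristic_astar((0, 0), (9, 9), neighbors, [h0, lambda s: 0.0]))
```

The payoff in high-dimensional planning is that cheap, lower-dimensional heuristics can each be wrong in different regions of the space, yet still jointly guide the search when any one of them makes progress.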

Biography

Maxim Likhachev is an Associate Professor at Carnegie Mellon University, directing the Search-based Planning Laboratory (SBPL), and a Senior Staff Software Engineer at Waymo. His group at CMU researches heuristic search, decision-making, and planning algorithms, all with applications to the control of robotic systems including unmanned ground and aerial vehicles, mobile manipulation platforms, humanoids, and multi-robot systems. Maxim obtained his Ph.D. in Computer Science from Carnegie Mellon University with a thesis called “Search-based Planning for Large Dynamic Environments.” He has over 150 publications in top journals and conferences on AI and robotics and numerous paper awards. His work on the Anytime D* algorithm, an anytime planning algorithm for dynamic environments, was awarded the title of Influential 10-Year Paper at the International Conference on Automated Planning and Scheduling (ICAPS) 2017, the top venue for research on planning and scheduling. Other awards include selection for the 2010 DARPA Computer Science Study Panel, which recognizes promising faculty in computer science, being on the team that won the 2007 DARPA Urban Challenge, and being on a team that won the Gold Edison Award in 2013. Maxim founded RobotWits, a company devoted to developing advanced planning and decision-making technologies for self-driving vehicles that was recently acquired by Waymo, and co-founded TravelWits, an online travel tech company that brings AI to travel logistics. Finally, Maxim is an executive co-producer of the regional-Emmy-nominated TV series The Robot Doctor, aimed at showing the use of mathematics in robotics and inspiring high-school students to pursue careers in science and technology.

Motion Planning among Decision-Making Agents
JAVIER ALONSO-MORA
Associate Professor, Delft University of Technology
Time: 15:55—16:15 CEST, Sep. 29 (Wed)
Hall: Hall 3
Abstract

We are moving towards an era of smart cities, where autonomous vehicles will provide on-demand transportation while making our streets safer, and mobile robots will coexist with humans. The motion plans of mobile robots and autonomous vehicles must therefore account for interaction with other agents and consider that these agents are themselves decision-making entities that may cooperate. Towards this objective, I will discuss several methods for motion planning and multi-robot coordination that leverage constrained optimization and reinforcement learning, as well as ways to model and account for the inherent uncertainty of dynamic environments. The methods are broadly applicable, including to autonomous vehicles, mobile manipulators, and aerial vehicles.
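As a toy illustration of interaction-aware planning (a hypothetical sketch, not one of the methods in the talk), a robot can sample candidate velocities, discard those predicted to collide with another agent under a constant-velocity assumption, and keep the one closest to its preferred velocity:

```python
import numpy as np

def choose_velocity(p_self, v_pref, p_other, v_other,
                    radius=0.5, horizon=3.0, dt=0.1, samples=200, rng=None):
    """Pick a 2D velocity near v_pref that avoids a predicted collision
    with one other agent assumed to keep a constant velocity."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best, best_cost = v_pref, np.inf   # falls back to v_pref if all collide
    ts = np.arange(0.0, horizon, dt)
    for v in rng.uniform(-1.5, 1.5, size=(samples, 2)):
        # Relative position over the horizon under constant velocities.
        rel = (p_self - p_other)[None, :] + ts[:, None] * (v - v_other)[None, :]
        if (np.linalg.norm(rel, axis=1) < 2 * radius).any():
            continue                   # predicted collision: discard sample
        cost = np.linalg.norm(v - v_pref)
        if cost < best_cost:
            best, best_cost = v, cost
    return best

# Head-on encounter: the robot prefers to drive +x while another agent
# approaches from ahead, so a lateral dodge should be selected.
v = choose_velocity(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                    np.array([3.0, 0.0]), np.array([-1.0, 0.0]))
print(v)
```

The constant-velocity assumption is exactly what the talk's theme challenges: real agents are decision-makers whose reactions and willingness to cooperate must themselves be modeled.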

Biography

Javier Alonso-Mora is an Associate Professor at the Department of Cognitive Robotics of the Delft University of Technology, the director of the Autonomous Multi-robots Laboratory and a Principal Investigator at the Amsterdam Institute for Advanced Metropolitan Solutions. Previously, he was a Postdoctoral Associate at the Computer Science and Artificial Intelligence Lab (CSAIL) of the Massachusetts Institute of Technology. He received his Ph.D. degree in robotics from ETH Zurich, in partnership with Disney Research Zurich. He serves as associate editor for the IEEE Robotics and Automation Letters, as associate editor for Springer Autonomous Robots, as the Publications Chair for the IEEE International Symposium on Multi-Robot and Multi-Agent Systems 2021 and as associate editor for ICRA, IROS and ICUAS. He is the recipient of several prizes and grants, including the ICRA Best Paper Award on Multi-robot Systems (2019), an Amazon Research Award (2019) and a talent scheme VENI award from the Netherlands Organisation for Scientific Research (2017).

Autonomous Aerial Manipulation
H. JIN KIM
Professor, Aerospace Engineering, Seoul National University
Time: 15:55—16:15 CEST, Sep. 29 (Wed)
Hall: Hall 4
Abstract

Autonomous aerial manipulation technology using an aerial robotic platform equipped with a manipulator can offer unique capabilities to perform a wide range of challenging tasks. For example, manipulation up in the air, inspection in hard-to-reach areas, or monitoring of a large environmental structure can be very difficult or dangerous for humans or conventional robots to perform, and autonomous aerial manipulators can be promising alternatives to traditional platforms. On the other hand, in order to exploit such potential, various essential functions need to be integrated. Even basic requirements such as stability maintenance and state estimation of the platform itself can be challenging for aerial manipulation, due to the airborne nature of the platform, its limited onboard resources, and the external disturbances and uncertainty inherent to moving in close physical proximity to the surroundings. My team investigates these problems with the aim of improving the applicability of aerial manipulation. This talk will illustrate key examples from our ongoing research activities, including sensing and estimation, control, trajectory planning, and an extension to cooperative aerial manipulation. It will conclude with lessons learned from various experiments and remaining issues for future research.

Biography

H. (Hyoun) Jin Kim directs the Laboratory for Autonomous Robotics Research in the Department of Aerospace Engineering, Seoul National University, Korea. She received her B.S. from the Korea Advanced Institute of Science and Technology, Korea, and her M.S. and Ph.D. from the University of California, Berkeley, USA, all in Mechanical Engineering. She joined Seoul National University in 2004, where she is currently a Professor. Her research has been acknowledged with awards at conferences including ICRA, IROS, ACC, and the IFAC World Congress. She has served as an Associate Editor for several journals including the IEEE Transactions on Robotics and IFAC Mechatronics. She is a member of the National Academy of Engineering of Korea.

Trustworthy human-robot interaction in socially assistive scenarios
GINEVRA CASTELLANO
Professor, Intelligent Interactive Systems & Director, Uppsala Social Robotics Lab, Uppsala University, Sweden
Time: 15:55—16:15 CEST, Sep. 29 (Wed)
Hall: Hall 5
Abstract

Today we are witnessing an increased robotisation in all areas of society, from manufacturing to assistive technology, from healthcare to education. These application areas require robots to be able to interact with humans in an efficient and socially acceptable manner. At the same time, like all technologies, robots may not only bring benefits, but also change how we think and behave. This calls for human-robot interaction researchers to design and develop more human-centric and trustworthy artificial intelligence and robotics, which put humans at the center and preserve human agency and autonomy, where robots adapt to the way humans communicate in the world, rather than the other way around. My research career has been devoted to the questions surrounding whether and how machines should embody human-like qualities, in order to be truly human-centric and trustworthy. In this talk I will present examples of human-robot interaction studies in socially assistive scenarios from my research at the Uppsala Social Robotics Lab, investigating dimensions of human-likeness, trust and transparency, in the quest for more human-centric robots.

Biography

Ginevra Castellano is a Professor in Intelligent Interactive Systems at the Department of Information Technology, Uppsala University, where she leads the Uppsala Social Robotics Lab. Her research interests are in the areas of social robotics and human-robot interaction, and include social learning, personalized adaptive robots, multimodal behaviours and robot ethics. She has published more than 100 research papers on these topics. She was the coordinator of the EU FP7 EMOTE (EMbOdied-perceptive Tutors for Empathy-based learning) project (2012–2016). She was the recipient of a Swedish Research Council Starting Grant (2016–2020). She is the PI for Uppsala University of several EU and national research projects, including, among others, the EU Horizon 2020 ANIMATAS (Advancing intuitive human-machine interaction with human-like social capabilities for education in schools; 2018-2021) project, the COIN (Co-adaptive human-robot interactive systems) project, funded by the Swedish Foundation for Strategic Research (2016–2021), and the project "The ethics and social consequences of AI & caring robots. Learning trust, empathy and accountability" (2020-2024), supported by the Marianne and Marcus Wallenberg Foundation, Sweden, in the WASP-HS program (Wallenberg AI, Autonomous Systems and Software Program– Humanities and Society). Castellano was a General Chair at IVA 2017 and will be a General Chair at HRI 2023. She is an Associate Editor of Frontiers in Robotics and AI and the ACM Transactions on Human-Robot Interaction. Castellano was the recipient of the 10-Year Technical Impact Award at the ACM International Conference on Multimodal Interaction 2019.

Is Soft Robotics bringing Prosthetic Devices one step closer to their natural counterparts?
GURSEL ALICI
Professor, University of Wollongong
Time: 15:55—16:15 CEST, Sep. 29 (Wed)
Hall: Hall 6
Abstract

Soft robotics offers unprecedented solutions for applications involving safe interaction with humans and objects, and manipulating and grasping fragile objects, crops and similar agricultural products. Progress in soft robotics will have a significant impact especially on medical applications such as wearable robots, prosthetic devices, assistive devices, and rehabilitation devices.

In this talk, we aim to give an update on where we are in soft robotics in building prosthetic hands with features that bring them one step closer to their natural counterparts. The history of prosthetic hands dates back to 202 BC, and since then significant efforts have been dedicated to their development. The primary features of a prosthetic hand should be to receive and identify its user's intention noninvasively and, equally importantly, to send sensory feedback about its "state" to its user noninvasively in order to help "restore normality" for its user, i.e., bilateral control. The communication between a prosthetic device and its user (i.e., the human-machine interface) has been a challenging research problem. We will also present the progress we have made in the research theme of soft robotics for prosthetic devices, exemplified by a fully 3D-printed transradial prosthetic hand: https://electromaterials.edu.au/2020/10/29/aces-technology-showcase-the-soft-robotic-hand/. This talk is expected to stimulate discussion and interaction among the conference delegates and to emphasize the significance of multi-disciplinary research spanning robotics, upper-limb prostheses, bionics, and materials development and synthesis for prosthetic and rehabilitation devices.

Biography

Gursel Alici received his Ph.D. degree in Robotics from the Department of Engineering Science, Oxford University, Oxford, U.K., in 1994. He is currently a Senior Professor at the University of Wollongong, Australia, where he is Interim Executive Dean of the Faculty of Engineering and Information Sciences; prior to this, he was Head of the School of Mechanical, Materials, Mechatronic and Biomedical Engineering from 2011 to 2021. His research interests are soft robotics, system dynamics and control, robotic drug delivery systems, novel actuation concepts for biomechatronic applications, robotic mechanisms and manipulation systems, soft and smart actuators and sensors, prosthetic devices, and medical robotics. He has produced more than 350 refereed publications and delivered numerous invited seminars and keynote talks on his areas of research.

Dr. Alici was a Technical Editor of the IEEE/ASME Transactions on Mechatronics during 2008–2012 and a Technical Editor of IEEE Access during 2013–2020. He has been a Senior Editor of the IEEE/ASME Transactions on Mechatronics since January 1, 2020. He has served on the international program committees of numerous IEEE/ASME international conferences on robotics and mechatronics. He was the General Chair of the 2013 IEEE/ASME International Conference on Advanced Intelligent Mechatronics held in Wollongong, Australia. He is the leader of the Soft Robotics for Prosthetic Devices theme of the ARC Centre of Excellence for Electromaterials Science. He received the Outstanding Contributions to Teaching and Learning Award in 2010, the Vice-Chancellor’s Interdisciplinary Research Excellence Award in 2013, and the Vice-Chancellor’s Award for Research Supervision in 2018 from the University of Wollongong. He has held visiting professorships at the Swiss Federal Institute of Technology, Lausanne (EPFL) (2007, 2010), the City University of Hong Kong (2014), the University of Science and Technology of China (USTC) (2015), and the University of British Columbia, Canada (2019).
