Keynotes
Learning Vision-based, Agile Drone Flight: from Frames to Event Cameras
Davide Scaramuzza
Director of the Robotics and Perception Group, University of Zurich, Switzerland
Time: 1:30pm - 2:00pm, Nov. 5 (Tue)
Venue: L3-RA, Level 3
Autonomous quadrotors will soon play a major role in search-and-rescue and remote-inspection missions, where a fast response is crucial. Quadrotors have the potential to navigate quickly through unstructured environments, enter and exit buildings through narrow gaps, and fly through collapsed buildings. However, their speed and maneuverability are still far from those of birds and human pilots, and human pilots take years to learn the skills needed to fly drones at these limits. Autonomous, vision-based agile navigation through unknown indoor environments poses a number of challenges for robotics research in terms of perception, state estimation, planning, and control. In this talk, I will show how combining model-based and machine-learning methods with the power of new, low-latency sensors, such as event cameras, allows drones to achieve unprecedented speed and robustness while relying solely on passive cameras, inertial sensors, and onboard computing.
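Event cameras do not output frames; they report per-pixel brightness changes asynchronously as a stream of timestamped events. As a minimal, illustrative sketch of how such a stream can be turned into an image-like input for learning (a generic time-surface representation, not the speaker's actual pipeline; the event-tuple format and decay constant are assumptions):

```python
import numpy as np

def events_to_time_surface(events, width, height, tau=0.03):
    """Collapse an event stream into an exponentially decaying 'time
    surface': pixels that fired recently are bright, stale ones fade.
    `events` is assumed to be an iterable of (t, x, y, polarity) tuples."""
    t_last = np.full((height, width), -np.inf)   # last firing time per pixel
    t_now = 0.0
    for t, x, y, _polarity in events:
        t_last[y, x] = t                         # keep the newest event only
        t_now = max(t_now, t)
    return np.exp((t_last - t_now) / tau)        # in (0, 1]; unfired pixels -> 0
```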
Davide Scaramuzza is a professor of robotics and perception in both the Department of Neuroinformatics (University of Zurich & ETH Zurich) and the Department of Informatics (University of Zurich), where he does research at the intersection of robotics and computer vision. He did his PhD in robotics and computer vision at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis). From 2009 to 2012, he led the European project sFly, which introduced the PX4 autopilot and pioneered visual-SLAM-based autonomous navigation of micro drones. From 2015 to 2018 he was part of the DARPA FLA program. For his research contributions, he was awarded the prestigious IEEE Robotics and Automation Society Early Career Award, the Misha Mahowald Neuromorphic Engineering Award, the SNSF-ERC Starting Grant (equivalent to an NSF CAREER Award), Google, Intel, Qualcomm, and KUKA awards, as well as several conference and journal paper awards (e.g., the IEEE Transactions on Robotics Best Paper Award in 2018). He coauthored the book “Introduction to Autonomous Mobile Robots” (MIT Press) and more than 100 papers on robotics and computer vision. In 2015, he cofounded Zurich-Eye, a venture dedicated to visual-inertial navigation solutions for mobile robots, which today is Facebook-Oculus Zurich. He was also a strategic advisor to Dacuda, an ETH spinoff dedicated to inside-out VR solutions, which today is Magic Leap Zurich. Many aspects of his research have been prominently featured in the popular press, including The New York Times, Discovery Channel, BBC, IEEE Spectrum, and MIT Technology Review.
AI and Robotics Technology for Asteroid Sample Return Mission HAYABUSA2
Takashi Kubota
Professor, Graduate School of The University of Tokyo, Japan
Time: 1:30pm - 2:00pm, Nov. 5 (Tue)
Venue: L3-RB, Level 3
JAXA has earnestly studied and promoted deep-space exploration missions, and in recent years small-body exploration missions have received a lot of attention worldwide. JAXA is currently conducting the Hayabusa2 mission, the successor to Hayabusa, which includes a sample-return attempt to and from a near-Earth asteroid. The Hayabusa2 spacecraft was launched in 2014 and rendezvoused with the target C-type asteroid Ryugu on June 27, 2018. Hayabusa2 addresses very interesting questions: what were the original organic matter and water in the solar system, and how are they related to life and ocean water? Hayabusa2 succeeded in deploying two exploration robots, which could hop and perform in-situ surface exploration. Its impactor also succeeded in striking the surface and creating an artificial crater, and Hayabusa2 then successfully performed two touchdowns to collect less-altered material. This talk presents the AI and robotics technology developed in the Hayabusa2 mission, such as pin-point guidance, visual navigation, automatic sampling, and autonomous exploration rovers. The talk also presents robotics technology for future exploration plans.
Dr. Kubota is a professor at the Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Japan. He received his doctoral degree in electrical engineering from the University of Tokyo in 1991, and is also a professor in the graduate school of the University of Tokyo. He was a visiting scientist at the Jet Propulsion Laboratory in 1997 and 1998. He was in charge of guidance, navigation, and control, and of the asteroid exploration rover MINERVA, in the asteroid exploration mission HAYABUSA, and is a spokesperson for the Hayabusa2 mission. He is also the Research Director at ISAS and the Director of the Space Exploration Innovation Hub Center at JAXA. His research interests include exploration robots, AI in space, robotics, and image-based navigation.
Towards robots that teach and learn through physical human-robot interaction
Marcia O'Malley
Time: 1:30pm - 2:00pm, Nov. 5 (Tue)
Venue: L3-RC, Level 3
Robots are increasingly transitioning from factories to human environments: today we use robots in healthcare, households, and social settings. In such circumstances, where the human and the robot work in close proximity, physical interactions are almost inevitable. In the past, these physical interactions have typically been treated as a disturbance to be avoided or rejected. But physical interaction offers an opportunity for the human and robot to communicate implicitly: when the robot guides the human, or the human corrects the robot, they are leveraging physical interaction to inform each other about some aspect of the current task. This talk will explore how robots can both teach and learn from humans through physical interaction.
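One concrete way to treat physical interaction as communication rather than disturbance is to let a sensed human force locally reshape the robot's planned trajectory instead of being rejected by stiff control. The sketch below is a minimal illustration of that idea under assumed gains and a hand-picked shaping function, not the speaker's published algorithm:

```python
import numpy as np

def deform_trajectory(traj, human_force, t_idx, mu=0.05, window=20):
    """Locally deform a planned trajectory in response to a human force.

    traj:        (T, d) array of planned waypoints
    human_force: (d,)   interaction force sensed at waypoint t_idx
    mu, window:  illustrative admittance gain and deformation extent
    """
    traj = traj.copy()
    idx = np.arange(t_idx, min(len(traj), t_idx + window))
    # Smooth bump, zero at both ends, so the deformed path stays continuous.
    bump = np.sin(np.pi * np.linspace(0.0, 1.0, len(idx)))
    traj[idx] += mu * np.outer(bump, human_force)
    return traj
```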
Marcia O’Malley is the Stanley C. Moore Professor of Mechanical Engineering, of Computer Science, and of Electrical and Computer Engineering at Rice University, where she directs the MAHI (Mechatronics and Haptic Interfaces) Lab. She is also the Director of Rehabilitation Engineering at TIRR-Memorial Hermann Hospital. Her research addresses issues that arise when humans physically interact with robotic systems, with a focus on training and rehabilitation in virtual environments. She is a Fellow of the American Society of Mechanical Engineers and serves as a senior editor for both the IEEE/ASME Transactions on Mechatronics and the ACM Transactions on Human-Robot Interaction.
Interaction with Vehicular Robots
Cristina Olaverri Monreal
Time: 2:00pm - 2:30pm, Nov. 5 (Tue)
Venue: L3-RA, Level 3
Completely unmanned vehicles can be regarded as autonomous machines capable of sensing their environment, making decisions, and performing actions. They are becoming a reality, as many vehicles are already equipped with technologies that enable self-driving automation, such as lane-keeping assistance and automated braking. The feasibility of incorporating new technology-driven functionality into vehicles has played a central role in automotive design; however, issues related to the human capabilities that affect a system’s operation have not always been considered. This presentation elucidates the broad issues involved in the interaction of road users with intelligent vehicle technologies and autonomous vehicles, detailing interaction-design concepts and metrics while focusing on the public’s perception of road safety and trust.
Univ.-Prof. Dr. Cristina Olaverri-Monreal graduated with a Master’s degree in Computational Linguistics, Computer Science, and Phonetics from the Ludwig-Maximilians University (LMU) in Munich and received her PhD in cooperation with BMW. After several years working internationally in industry and academia, she is now full professor and holder of the BMVIT-endowed chair Sustainable Transport Logistics 4.0 at the Johannes Kepler University Linz, Austria, where her research aims at efficient and effective transportation, focusing on minimizing the barrier between users and road systems. To this end, she relies on the automation, wireless communication, and sensing technologies that pertain to the field of Intelligent Transportation Systems (ITS). Dr. Olaverri is Vice-President of Educational Activities in the IEEE ITS Society Executive Committee and chair of the Technical Activities Committee on Human Factors in ITS. In addition, she serves as an associate editor and editorial board member of several journals in the field, including the IEEE Transactions on Intelligent Transportation Systems and the IEEE Intelligent Transportation Systems Magazine. She was recently recognized for her dedicated contribution to continuing education in the field of ITS with the 2017 IEEE Educational Activities Board Meritorious Achievement Award in Continuing Education.
Robot Learning from Sensing to Behavior
Fuchun Sun
Time: 2:00pm - 2:30pm, Nov. 5 (Tue)
Venue: L3-RB, Level 3
Traditional machine learning concentrates on developing algorithms and statistical models for a specific task. Robot learning, on the other hand, emphasizes the connection between perception and behavior, which enables robots to have cognitive abilities similar to those of human beings. Through interaction with humans and the environment, robots can understand and adapt to complex environments and complete complex tasks.
In this talk, we will present a new active-sensing architecture in which a feedback mechanism connects perception and behavior. Under this framework, the robot can also enhance its perception through intentional action. We will present some principled approaches to sensing and cognitive learning developed under this concept of active sensing in robot learning. We will further introduce a novel high-resolution sensor device with five modalities, developed by our research team at Tsinghua for robot dexterous operations; this sensor device has been mounted on complex robotic hardware systems, including a dexterous hand with muscle-like actuation. Finally, some empirical results and demos in robot learning will be presented, along with some promising future research directions.
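As a minimal sketch of what closing the loop from perception to behavior can look like, the snippet below greedily selects the exploratory action whose predicted posterior belief about the object is least uncertain; `predict_belief` is a hypothetical forward model, not the Tsinghua implementation:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete belief over object hypotheses."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def next_action(belief, actions, predict_belief):
    """Greedy active sensing: pick the action expected to leave the
    least uncertainty. predict_belief(belief, a) -> belief is an
    assumed model of how action a would change the belief."""
    scores = [entropy(predict_belief(belief, a)) for a in actions]
    return actions[int(np.argmin(scores))]
```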
Dr. Fuchun Sun is a professor in the Department of Computer Science and Technology and President of the Academic Committee of the Department at Tsinghua University, and deputy director of the State Key Lab of Intelligent Technology & Systems, Beijing, China. His research interests include robotic perception and intelligent control. He won the championship of the Autonomous Grasping Challenge at IROS 2016. He is an IEEE Fellow.
Dr. Sun is the recipient of the Excellent Doctoral Dissertation Prize of China (2000, from the MOE of China) and the Choon-Gang Academic Award of Korea (2003), and was recognized as a Distinguished Young Scholar by the Natural Science Foundation of China in 2006. He has served as Editor-in-Chief of the International Journal on Cognitive Computation and Systems, and as an Associate Editor of the IEEE Transactions on Neural Networks (2006-2010), the IEEE Transactions on Fuzzy Systems (since 2011), the IEEE Transactions on Systems, Man, and Cybernetics: Systems (since 2015), and the IEEE Transactions on Cognitive and Developmental Systems (since 2019).
Robot Manipulation with Deformation
Yun-Hui Liu
Director, CUHK T Stone Robotics Institute, The Chinese University of Hong Kong, China
Time: 2:00pm - 2:30pm, Nov. 5 (Tue)
Venue: L3-RC, Level 3
Many robot manipulation tasks involve soft objects or deformable structures. Typical tasks involving soft objects include handling soft tissues in robotic surgery, cloth handling, and assembly of cables, while manipulation with deformable structures mainly means tasks performed by soft robots or by robots resting on deformable bases. The major technical challenges in automating robot manipulation with deformation lie in two aspects: the difficulty of modelling the kinematics and dynamics of the deformation, owing to unknown physical/material properties and the complicated structures involved, and the difficulty of controlling the deformation stably without any model, or with models subject to large uncertainties. One promising approach is to control manipulation tasks with deformation using visual feedback, because humans can effectively and reliably perform such tasks without knowing any deformation model, simply by monitoring the deformation with their eyes. This talk will demonstrate our latest work on vision-based, model-free robotic manipulation involving deformation, and applications of these approaches in robotic surgery, cable manipulation, cloth grasping, construction, etc.
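A common way to make such vision-based, model-free control concrete is to estimate the deformation Jacobian online from observed robot-motion/feature-motion pairs and servo against the estimate. The following is a minimal sketch with an assumed Broyden-style update and illustrative gains, not the exact published controller:

```python
import numpy as np

class ModelFreeDeformationController:
    """Servo visual deformation features to a desired shape while
    estimating the deformation Jacobian online (no physical model)."""

    def __init__(self, n_features, n_dof, gain=0.5, alpha=0.1):
        self.J = np.eye(n_features, n_dof)  # initial Jacobian guess
        self.gain = gain                    # servoing gain (illustrative)
        self.alpha = alpha                  # Jacobian update rate

    def update_jacobian(self, dy, du):
        # Broyden-style correction so that J @ du better matches the
        # observed feature change dy after robot motion du.
        self.J += self.alpha * np.outer(dy - self.J @ du, du) / (du @ du + 1e-9)

    def command(self, y, y_des):
        # Velocity command driving the feature error toward zero.
        return -self.gain * np.linalg.pinv(self.J) @ (y - y_des)
```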
Yun-Hui Liu received the B.Eng. degree from the Beijing Institute of Technology, the M.Eng. degree from Osaka University, and the Ph.D. degree from the University of Tokyo in 1992. After working at the Electrotechnical Laboratory of Japan as a Research Scientist, he joined The Chinese University of Hong Kong (CUHK) in 1995 and is currently Choh-Ming Li Professor of Mechanical and Automation Engineering, Director of the CUHK T Stone Robotics Institute, and director of the Hong Kong Centre for Logistics Robotics funded by the HKSAR government. He is also an adjunct professor at the State Key Lab of Robotics and Systems, Harbin Institute of Technology, China. He has published more than 400 papers in refereed journals and refereed conference proceedings and was listed among the Highly Cited Authors (Engineering) by Thomson Reuters in 2013. His research interests include visual servoing, medical robotics, multi-fingered grasping, mobile robots, and machine intelligence. Prof. Liu has received numerous research awards from international journals, international conferences in robotics and automation, and government agencies. He was the Editor-in-Chief of Robotics and Biomimetics, served as an Associate Editor of the IEEE Transactions on Robotics and Automation, and was General Chair of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. He is an IEEE Fellow.
Living with robots, how far, how close?
Gentiane Venture
Time: 9:00am - 9:30am, Nov. 6 (Wed)
Venue: L3-RA, Level 3
It is often said that robots are coming to share our working and living spaces. I feel lucky that this is true for me, but I don't think most people can say the same yet. Human-Robot Interaction studies show that the readiness of these systems is far from meeting expectations. To compensate for the robots' limitations, tricks are used: controlled lab experiments, Wizard-of-Oz setups, and minutely scripted interactions. Studies that control a few very specific parameters are like in-vitro experiments, failing to provide a holistic account of the shared human-robot experience. "HRI in the wild", in ecological environments, can provide rich data sets of interactions; such studies are like in-vivo experiments. Because of the unstructured nature of the experience, programming such interactions is extremely time-consuming and requires multiple kinds of expertise, so it is often left aside. I will present our tools for creating HRI in the wild and some applications in kindergartens and private homes. I will introduce our robots' cognitive processes and expressive movement generation, and conclude with some robot and UX design perspectives.
Gentiane Venture is a French roboticist working in academia in Tokyo. She is a distinguished professor at Tokyo University of Agriculture and Technology and a cross-appointed fellow at AIST. She obtained her MSc and PhD from Ecole Centrale/University of Nantes in 2000 and 2003, respectively. She worked at CEA in 2004 and for six years at the University of Tokyo. In 2009 she joined Tokyo University of Agriculture and Technology, where she has established an international research group working on human science and robotics. With her group she conducts theoretical and applied research on motion dynamics, robot control, and non-verbal communication to study the meaning of living with robots. Her work is highly interdisciplinary, in collaboration with therapists, psychologists, neuroscientists, sociologists, philosophers, ergonomists, artists, and designers.
Snake robots moving on land and exploring the oceans
Kristin Y. Pettersen
Time: 9:00am - 9:30am, Nov. 6 (Wed)
Venue: L3-RB, Level 3
Snake robots are motivated by the long, slender, and flexible body of biological snakes, which allows them to move in virtually any environment on land and in water. Since a snake robot is essentially a manipulator arm that can move by itself, it has a number of interesting applications, including firefighting and search-and-rescue operations. In water, the robot is a highly flexible and dexterous manipulator arm that can swim by itself like a sea snake. This highly flexible snake-like mechanism has excellent accessibility properties: it can access virtually any location on a subsea oil & gas installation, move into the confined areas of shipwrecks, go inside ice caves, or be used for observation of biological systems. Not only can the swimming manipulator access narrow openings and confined spaces, it can also carry out highly complex manipulation tasks at that location, since manipulation is an inherent capability of the system.
In this talk, I will present our research on snake robots and the ongoing efforts to bring the results of university research to industrial use.
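For locomotion on land, the standard starting point in the snake-robot literature is lateral undulation generated by the serpenoid gait, in which each joint tracks a phase-shifted sinusoid. Below is a minimal sketch with illustrative parameter values, a generic textbook construction rather than the speaker's code:

```python
import numpy as np

def serpenoid_joint_angles(t, n_joints, amplitude=0.5, omega=2.0,
                           delta=0.8, offset=0.0):
    """Serpenoid gait: joint i follows a sinusoid phase-shifted by
    i * delta; a nonzero offset steers the robot. Values illustrative."""
    i = np.arange(n_joints)
    return amplitude * np.sin(omega * t + i * delta) + offset
```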
Kristin Y. Pettersen is a Professor in the Department of Engineering Cybernetics, NTNU, where she has been a faculty member since 1996. She was Head of Department in 2011-2013 and Director of the NTNU ICT Program of Robotics in 2010-2013. She is Adjunct Professor at the Norwegian Defence Research Establishment (FFI) and Key Scientist at the CoE Centre for Autonomous Marine Operations and Systems (NTNU AMOS). She is a co-founder of the NTNU spin-off company Eelume AS, where she was CEO in 2015-2016.
Kristin Y. Pettersen is IEEE CSS Distinguished Lecturer 2019-2021. She is an IEEE Fellow, member of the Norwegian Academy of Technological Sciences, and member of the Academy of the Royal Norwegian Society of Sciences and Letters.
Engineering Humanoids
Tamim Asfour
Time: 9:00am - 9:30am, Nov. 6 (Wed)
Venue: L3-RC, Level 3
Humanoid robotics plays a central role in robotics research as well as in understanding intelligence. Engineering humanoid robots that are able to learn from humans and from sensorimotor experience, to predict the consequences of actions, and to exploit interaction with the world to extend their cognitive horizon remains a grand research challenge. Currently, we are seeing AI systems with superhuman performance in games and in image and speech processing. However, the generation of robot behaviors with human-like motion intelligence and performance has yet to be achieved. In this talk, I will present recent progress towards engineering 24/7 humanoid robots that link perception and action to generate intelligent behavior. I will show the ARMAR humanoid robots performing complex grasping and manipulation tasks in kitchen and industrial environments, learning actions from human observation and experience, and reasoning about object-action relations.
Tamim Asfour is full Professor of Humanoid Robotics at the Institute for Anthropomatics and Robotics at the Karlsruhe Institute of Technology (KIT). His research focuses on the engineering of high-performance 24/7 humanoid robots and on the mechano-informatics of humanoids as the synergetic integration of informatics, artificial intelligence, and mechatronics into humanoid robot systems that are able to predict, act, and interact in the real world. In his research, he reaches out and connects to neighboring areas through large-scale national and European interdisciplinary projects in the area of robotics in combination with machine learning and computer vision. Tamim is the developer of the ARMAR humanoid robot family. He is scientific spokesperson of the KIT Center “Information · Systems · Technologies (KCIST)”, president of the Executive Board of the German Robotics Society (DGR), Founding Editor-in-Chief of the IEEE-RAS Humanoids Conference Editorial Board, and an Editor of the IEEE Robotics and Automation Letters.
www.humanoids.kit.edu
Learning Human-Robot Interaction for Robot-Assisted Pedestrian Regulation
Yi Guo
Time: 9:30am - 10:00am, Nov. 6 (Wed)
Venue: L3-RA, Level 3
Controlling pedestrian crowd dynamics has attracted increasing attention due to its potential to save lives in emergencies. The “fast-is-slower” effect describes the phenomenon of jamming at an exit or bottleneck caused by people rushing toward it. In this talk, I will present our research on robot-assisted pedestrian regulation, in which pedestrian flows are regulated and optimized through passive human-robot interaction. We design feedback motion control that lets a robot interact efficiently with pedestrians to achieve desirable collective motion. Both adaptive dynamic programming and deep reinforcement learning methods are applied to the formulated problem of robot-assisted pedestrian flow optimization. Simulation results in a robot simulator show that our approach regulates pedestrian flows and achieves optimized outflow by learning from real-time observation of the pedestrian flow. Potential crowd disasters can be avoided, as the proposed approach reduces the critical crowd pressure.
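In reinforcement-learning terms, the formulation can be read as: the robot's motion command is the action, the observed pedestrian flow is the state, and the exit outflow is the reward to be maximized. The loop below illustrates this with hypothetical `env` and `policy` objects standing in for a crowd simulator and a learned (e.g., deep RL) controller; it is a reading of the abstract, not the authors' code:

```python
def rollout(env, policy, horizon=500):
    """One episode of robot-assisted pedestrian regulation.

    env:    hypothetical crowd simulator; env.step(action) is assumed
            to return (state, reward, done), with reward equal to the
            number of pedestrians exiting during the step
    policy: hypothetical learned controller mapping the observed
            pedestrian flow to a robot velocity command
    """
    state = env.reset()            # e.g., local density/velocity field
    total_outflow = 0.0
    for _ in range(horizon):
        action = policy(state)     # robot motion command
        state, reward, done = env.step(action)
        total_outflow += reward
        if done:
            break
    return total_outflow
```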
Yi Guo is a Professor in the Department of Electrical and Computer Engineering at Stevens Institute of Technology, which she joined in 2005 as an Assistant Professor. She obtained her Ph.D. degree in Electrical and Information Engineering from the University of Sydney, Australia, in 1999. She was a postdoctoral research fellow at Oak Ridge National Laboratory from 2000 to 2002 and a Visiting Assistant Professor at the University of Central Florida from 2002 to 2005. Her main research interests include autonomous mobile robotics, distributed sensor networks, and nonlinear control systems. She has published more than 100 peer-reviewed journal and conference papers, authored the book “Distributed Cooperative Control: Emerging Applications” (John Wiley & Sons, 2017), and edited a book on micro/nano-robotics for biomedical applications (Springer, 2013). She currently serves on the editorial boards of the IEEE Robotics and Automation Magazine, IEEE Robotics and Automation Letters, and the IEEE/ASME Transactions on Mechatronics. She served on the Organizing Committees of ICRA (2006, 2008, 2014, 2015).
Robots with Physical Intelligence
Sangbae Kim
Associate Professor, Massachusetts Institute of Technology, USA
Time: 9:30am - 10:00am, Nov. 6 (Wed)
Venue: L3-RB, Level 3
While industrial robots are effective in repetitive, precise kinematic tasks in factories, their design and control are not suited to the physically interactive tasks that humans perform easily. Such tasks require ‘physical intelligence’ through complex dynamic interactions with the environment, whereas conventional robots are designed primarily for position control. To develop a robot with physical intelligence, we first need a new type of machine that allows dynamic interaction. This talk will discuss how a new design paradigm enables dynamic interactive tasks. As embodiments of this paradigm, the latest version of the MIT Cheetah robots and force-feedback teleoperation arms will be presented. These robots are equipped with proprioceptive actuators, a new design paradigm for dynamic robots. This new class of actuators will play a crucial role in developing physical intelligence and in future robot applications such as elderly care, home service, delivery, and services in environments unfavorable to humans.
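The defining property of a proprioceptive actuator is that, with a small gear ratio, joint torque is nearly proportional to motor current, so contact forces can be controlled and sensed without a dedicated force sensor. A minimal impedance-control sketch with illustrative constants (an assumption-laden illustration, not MIT Cheetah firmware):

```python
def joint_current_command(q, q_des, dq, k_p=40.0, k_d=1.0,
                          gear_ratio=6.0, k_t=0.3):
    """Render a virtual spring-damper at the joint through motor
    current alone, assuming tau ~= gear_ratio * k_t * current.
    All gains and motor constants are illustrative."""
    tau_des = k_p * (q_des - q) - k_d * dq  # desired joint torque (N*m)
    return tau_des / (gear_ratio * k_t)     # motor current command (A)
```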
Sangbae Kim is the director of the Biomimetic Robotics Laboratory and an associate professor of Mechanical Engineering at MIT. His research focuses on bio-inspired robot design achieved by extracting principles from animals. Kim’s achievements include creating the world’s first directional adhesive, inspired by gecko lizards, and a climbing robot named Stickybot that uses the directional adhesive to climb smooth surfaces; TIME Magazine named Stickybot one of the best inventions of 2006. One of Kim’s recent achievements is the development of the MIT Cheetah, a robot capable of stable outdoor running at up to 13 mph and autonomous jumping over obstacles at the efficiency of animals. Kim is a recipient of best paper awards from ICRA (2007), the King-Sun Fu Memorial Award (IEEE Transactions on Robotics, 2008), and the IEEE/ASME Transactions on Mechatronics (2016). Additionally, he received a DARPA Young Faculty Award (2013), an NSF CAREER Award (2014), and a Ruth and Joel Spira Award for Distinguished Teaching (2015).
Design and Control of BHR Humanoid Robots
Qiang Huang
Time: 9:30am - 10:00am, Nov. 6 (Wed)
Venue: L3-RC, Level 3
Humanoid robots are promising candidates to work alongside and assist humans in industry and services. In this talk, the development roadmap of six generations of BHR humanoid robots from the Beijing Institute of Technology will be introduced, and motion generation and sensory-reflex control for biped walking will be presented. Because a biped humanoid always risks tipping over in complicated environments, such as disaster-response scenarios, the talk will also discuss bio-inspired mechanical design and control strategies for fall protection. Finally, to improve the dynamic motion performance of humanoid robots, a new design using a small-ratio speed reducer and a high-torque, relatively low-speed motor will be presented.
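As a pointer to what biped walking pattern generation typically builds on, the linear inverted pendulum model (LIPM) gives a closed-form center-of-mass trajectory between footsteps. The sketch below is generic textbook material with illustrative parameters, not BHR-specific code:

```python
import numpy as np

def lipm_com(x0, dx0, t, z_c=0.7, g=9.81):
    """Closed-form CoM motion of the LIPM, x'' = (g / z_c) * x.

    x0, dx0: initial CoM position (m) and velocity (m/s) relative to
             the stance foot; z_c: assumed constant CoM height (m).
    """
    Tc = np.sqrt(z_c / g)  # pendulum time constant (s)
    x = x0 * np.cosh(t / Tc) + Tc * dx0 * np.sinh(t / Tc)
    dx = (x0 / Tc) * np.sinh(t / Tc) + dx0 * np.cosh(t / Tc)
    return x, dx
```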
Qiang Huang is an IEEE Fellow and a Professor at the Beijing Institute of Technology (BIT), China. He received the B.S. and M.S. degrees in China in 1986 and 1989, and the Ph.D. degree from Waseda University, Japan, in 1996. He joined AIST, Japan, as a research fellow in 1996, was a Researcher at the University of Tokyo from 1999 to 2000, and then joined BIT in 2000. At present, he is the Director of the Intelligent Robotics Institute at BIT and the Executive Director of the Beijing Advanced Innovation Center for Intelligent Robots and Systems, China.
His research interests include humanoid robots, space robots, and human-robot fusion systems. He has published over 260 refereed journal and conference papers and holds about 60 patents. He is a recipient of the IFToMM Award of Merit and has received over 10 best paper awards. He was granted the Cheung Kong Scholar Professorship by the MOE of China and the Distinguished Young Scholar award of the NSFC of China. He served as General Chair of the 2018 IEEE Humanoids, 2017 IEEE ROBIO, and 2017 IEEE ICBS conferences, and as Organization Chair of the 2006 IEEE/RSJ IROS, among others.