AI in Medicine: Where Are We Heading?
Artificial Intelligence is coming to Medicine in a big wave. From making diagnoses of various medical conditions, suggesting the most appropriate treatments, and finding the latest advances in the literature, to predicting the prognosis and outcome of disease, AI is offering unprecedented opportunities to improve the care of our patients. Taking digestive tract cancer as an example, AI-assisted image analysis aids the detection of colorectal neoplasia during colonoscopy, provides a so-called optical biopsy of lesions, integrates genomic, epigenetic, and metagenomic data to yield new classifications and sub-classifications of cancers, and offers evidence-based suggestions on the optimal therapy for the condition. Furthermore, AI-assisted surgery, through semi-automated and (in the future) fully automated robotic surgery, will take over at least some part of the surgical treatment of such malignant lesions. This is a defining moment in Medicine. What is the future role of doctors and nurses? How should we train medical students, pharmacists, and physiotherapists in the future? And, importantly, when things go wrong, such as a wrong diagnosis or a mishap in a patient's treatment, who should take responsibility? This lecture will offer a peek into the future.
Joseph Sung is Professor of Medicine and Director of the Institute of Digestive Disease at the Chinese University of Hong Kong. He is also a Past President of the University and Director of the Big Data Decision Analysis Centre of CUHK. He is an academician of the Chinese Academy of Engineering and a Fellow of the Royal Colleges of London, Edinburgh, Glasgow, and Australia.
Embodied Visual Learning
Computer vision has seen major success in learning to recognize objects from massive “disembodied” Web photo collections labeled by human annotators. Yet cognitive science tells us that perception develops in the context of acting in the world, and without intensive supervision. Meanwhile, many realistic vision tasks require not only categorizing a well-composed human-taken photo, but also actively deciding where to look in the first place. In the context of these challenges, we are exploring how machine perception benefits from anticipating the sights and sounds an agent will experience as a function of its own actions. Based on this premise, we introduce methods for learning to look around intelligently in novel environments, learning from video how to interact with objects, and perceiving audio-visual streams for both semantic and spatial context. Together, these are steps towards first-person perception, where interaction with the world is itself a supervisory signal.
Kristen Grauman is a Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist at Facebook AI Research. Her research in computer vision and machine learning focuses on visual recognition and search. Before joining UT Austin in 2007, she received her Ph.D. at MIT. She is an AAAI Fellow, a Sloan Fellow, and a recipient of the NSF CAREER, ONR YIP, PECASE, and PAMI Young Researcher awards, as well as the 2013 IJCAI Computers and Thought Award. She and her collaborators were recognized with best paper awards at CVPR 2008, ICCV 2011, and ACCV 2016, and a 2017 Helmholtz Prize “test of time” award. She served as a Program Chair of the Conference on Computer Vision and Pattern Recognition (CVPR) in 2015 and Neural Information Processing Systems (NeurIPS) in 2018, and she currently serves as Associate Editor-in-Chief for the Transactions on Pattern Analysis and Machine Intelligence (PAMI).
ImPACT Tough Robotics Challenge - A National Project of Japan Cabinet Office on Disaster Robotics
The ImPACT Tough Robotics Challenge is a national project of the Japan Cabinet Office (period: 2014-18; researchers: 62 PIs and 300 researchers). It focused on tough robotic technologies that provide solutions for disaster response, recovery, and preparedness, and achieved the following results.
- Cyber Rescue Canine, a digitally empowered rescue dog wearing a suit that monitors its behavior (motion, map, image, and action) and condition (health and enthusiasm).
- Active Scope Camera, a serpentine robot for search in debris that crawls and levitates through gaps of a few centimeters, with visual, auditory, and haptic sensing for navigation and victim search. Dragon Firefighter, a serpentine robot that flies into buildings propelled by water jets to extinguish fires at their origin.
- Serpentine robots for plant inspection with high mobility in ducts, along pipes, on uneven terrain, up vertical ladders, and over high steps. An omni gripper that can grasp a wide variety of targets, including those with sharp edges, without precise positioning.
- A UAV robust to strong wind (20 m/s), heavy rain (300 mm/h), payload changes, and propeller failure, capable of hearing and localizing voices from the ground during flight.
- A 4-legged robot for plant inspection in risky places, with high mobility on rubble, up vertical ladders, and over high steps. A 30-cm robotic hand that can maintain its grip on 50-kg loads without electricity.
- A construction robot with a double-swing dual-arm mechanism combining high power and precision, featuring bilateral force and touch feedback without any sensor at the end-effector, and teleoperation support via real and virtual bird’s-eye-view images.
Satoshi Tadokoro graduated from the University of Tokyo in 1984. He has been a Professor at Tohoku University since 2005 and the Director of the Tough Cyberphysical AI Research Center since 2019. He has been the President of the International Rescue System Institute since 2002 and was the President of the IEEE Robotics and Automation Society in 2016-17. He served as a program manager of the MEXT DDT Project on rescue robotics in 2002-07 and was the project manager of the Japan Cabinet Office ImPACT Tough Robotics Challenge Project.
Katherine J. Kuchenbecker
Our scientific understanding of haptic interaction is still evolving, both because what you feel greatly depends on how you move, and because engineered sensors, actuators, and algorithms typically struggle to match human capabilities. Consequently, few computer and machine interfaces provide the human operator with high-fidelity touch feedback or carefully analyze the physical signals generated during haptic interactions, limiting their usability. The crucial role of the sense of touch is also deeply appreciated by researchers working to create autonomous robots that can competently manipulate everyday objects and safely interact with humans in unstructured environments. My team works in all of these related areas, aiming to sharpen our understanding of haptic interaction while simultaneously inventing helpful human-computer, human-machine, and human-robot systems that take advantage of the unique capabilities of the sense of touch. This talk will showcase key examples from our ongoing research, including Haptipedia (www.haptipedia.org), three-to-one dimensional reduction of vibration signals, social-physical human-robot interaction for exercise, and large fabric-based tactile sensors that employ electrical resistance tomography. I will close by sharing suggestions on recruiting and leading a diverse team of researchers.
Katherine J. Kuchenbecker directs the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She earned her Ph.D. at Stanford University in 2006, did postdoctoral research at the Johns Hopkins University, and was an engineering professor at the University of Pennsylvania before moving to the Max Planck Society in 2017. She delivered a TEDYouth talk on haptics in 2012 and has been honored with a 2009 NSF CAREER Award, the 2012 IEEE RAS Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, and various best paper and best demonstration awards. She co-chaired the Technical Committee on Haptics from 2015 to 2017 and the IEEE Haptics Symposium in 2016 and 2018.