David R. Thompson

Principal Research Technologist

NASA Jet Propulsion Laboratory/California Institute of Technology, USA

Imaging Spectroscopy for Earth and Planetary Science

Bio: David Thompson is a Principal Research Technologist at the Jet Propulsion Laboratory, California Institute of Technology. He is the Technical Group Lead of the JPL Imaging Spectroscopy Group, Instrument Scientist for NASA’s EMIT mission, and Investigation Scientist for NASA’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) project. His research uses spectroscopic data to characterize Earth and other planetary bodies. His algorithms have guided autonomous robots and sensors fielded in North America, South America, the Atlantic Ocean, airborne campaigns, Low Earth Orbit, and the surface of Mars. He is a recipient of the NASA Early Career Achievement Medal, the Lew Allen Award for Excellence, and the NASA Software of the Year Award.

Abstract: Imaging spectrometers in the visible/shortwave infrared range capture the majority of solar-reflected information, adding a spectral dimension to the spatial dimensions of traditional computer vision data. This enables rigorous spectroscopy for quantitative maps of physical and chemical properties at high spatial resolution. The instruments have a long history of deployments for mapping terrestrial and coastal aquatic ecosystems, geology, and atmospheric properties, and are also critical tools for exploring other planetary bodies. Recognizing this potential, space agencies including NASA and ESA have slated imaging spectrometers for Earth-orbiting missions with global coverage. These high-dimensional spatio-spectral datasets can address key challenges facing our environment, but they also pose a rich challenge for computer scientists and algorithm designers. More broadly, the technology presages new Earth-based applications that measure physical dimensions of the human-scale built environment beyond what our eyes can see. This talk will introduce remote imaging spectroscopy in the visible and shortwave infrared, describing the measurement strategy and data analysis considerations, including atmospheric correction. We will describe historical and current instruments, software, and public datasets.

Pierre Boulanger

Chief Technology Officer

FLIR Systems, Inc.

CNNs with Long Wave Infrared Cameras in a Variety of Applications

Bio: Pierre Boulanger is the Chief Technology Officer of FLIR Systems, where he leads a group of engineers and scientists to develop and integrate revolutionary technologies that bring a competitive advantage to FLIR’s commercial offerings. His team developed the base technologies behind FLIR’s cell-phone-camera-sized Lepton LWIR product, the SWaP-C industry-leading Boson family of products, FLIR’s low-cost radar product, and FLIR’s neural-network-based technologies, development infrastructure, and first offerings. He has been working at FLIR since 2002 and has co-authored over 50 patents for the company.

Abstract: Infrared cameras provide complementary information to sensors of other modalities. This presentation will show how FLIR is coupling CNN technology with infrared cameras to enhance customer value in countless applications. Some dataset engineering techniques will also be presented, including the use of synthetic images and IR-specific augmentations.

Dr. Amir R. Zamir

Postdoctoral Researcher

Stanford & UC Berkeley, USA

Transfer Learning for Multi-Task Perception and Robotics

Bio: Amir Zamir is a postdoctoral researcher at Stanford University and the University of California, Berkeley. His research interests are broadly in computer vision and machine learning, with a focus on transfer/self-/unsupervised learning and perception for robotics. He has been recognized with the CVPR 2018 Best Paper Award, the CVPR 2016 Best Student Paper Award, the NVIDIA Pioneering Research Award (2018), and the Stanford ICME Seed Award (2016), among others. His research has been covered by popular press outlets, e.g., NPR and The New York Times.

Abstract: I will present a number of methods for learning perception tasks robustly and with little training data. I'll discuss Taskonomy, a computational method for extracting transfer learning strategies that enable learning with less labeled data, and Cross-Task Consistency, an approach to robust learning that enforces the constraints perception tasks impose on one another. Finally, I'll show how we use the learned tasks as perceptual priors about the world within robotic frameworks, in order to improve the sample efficiency and generalization of robot learning.