University of Massachusetts, Amherst, USA
Bio: Erik Learned-Miller is an Associate Professor of Computer Science at the University of Massachusetts, Amherst, where he joined the faculty in 2004. His research interests include face recognition, unsupervised learning and learning from small training sets, vision for robotics, and motion understanding. Learned-Miller received a B.A. in Psychology from Yale University in 1988. In 1989, he co-founded CORITechs, Inc., where he co-developed the second FDA-cleared system for image-guided neurosurgery. He worked for Nomos Corporation, Pittsburgh, PA, for two years as the manager of neurosurgical product engineering. He obtained Master of Science (1997) and Ph.D. (2002) degrees from the Massachusetts Institute of Technology, both in Electrical Engineering and Computer Science. In 2006, he received an NSF CAREER award for his work in computer vision and machine learning. He spearheaded the Labeled Faces in the Wild (LFW) and FDDB face databases, which are de facto standards for face verification and face detection. He was a co-Program Chair for CVPR 2015 in Boston.
Abstract: I will review three recent projects in our lab and discuss their relation to Perception Beyond the Visible Spectrum:
1. Multi-view CNNs. In this work, we recognize 3D models of objects. We come to the somewhat surprising conclusion that by first rendering the models as images, and then recognizing the images, we dramatically improve recognition rates, despite the fact that the images are strictly less informative.
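The core mechanism — render a 3D model from several viewpoints, run each rendered view through a shared feature extractor, and pool the per-view features into one shape descriptor — can be sketched as follows. This is a minimal illustration, not the talk's actual architecture: the "CNN" here is a toy linear layer with ReLU, and all shapes and view counts are made up.

```python
import numpy as np

def extract_features(view, weights):
    """Toy stand-in for a shared CNN: one linear map + ReLU per rendered view."""
    return np.maximum(view.flatten() @ weights, 0.0)

def multi_view_descriptor(views, weights):
    """Multi-view pooling: run every rendered view through the SAME
    feature extractor, then element-wise max-pool across the views."""
    per_view = np.stack([extract_features(v, weights) for v in views])
    return per_view.max(axis=0)

rng = np.random.default_rng(0)
weights = rng.normal(size=(16 * 16, 8))            # shared across all views
views = [rng.random((16, 16)) for _ in range(12)]  # 12 hypothetical renders
descriptor = multi_view_descriptor(views, weights)
print(descriptor.shape)  # one fixed-size descriptor for the whole 3D model
```

The max-pool across views is what makes the descriptor independent of how many renders are produced and of their ordering.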
2. Seeing the Invisible: Motion Segmentation of Camouflaged Animals. In this work, we describe a new method for segmenting moving objects in videos captured by moving cameras. Since our method does not rely on the appearance of an object, but only on optical flow, it could be applied to any imaging modality for which optical flow can be obtained, and in which the imaging is described well by a pinhole camera model.
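The flow-only premise can be illustrated with a crude stand-in: fit a single affine motion model to the whole frame's flow (approximating the camera's motion) and flag pixels whose flow deviates from it. The affine model, thresholds, and synthetic data below are illustrative assumptions, not the talk's actual method.

```python
import numpy as np

def segment_moving(flow, coords, thresh=2.0):
    """Flag pixels whose optical flow deviates from one affine
    camera-motion model fit to the whole frame by least squares."""
    A = np.column_stack([coords, np.ones(len(coords))])  # [x, y, 1]
    params, *_ = np.linalg.lstsq(A, flow, rcond=None)    # affine fit
    residual = np.linalg.norm(flow - A @ params, axis=1)
    return residual > thresh

# Synthetic frame: background flow follows an affine (camera) model;
# a small group of pixels moves independently on top of it.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(400, 2))
camera = np.array([[0.02, 0.0], [0.0, 0.01], [1.0, 0.5]])  # affine params
flow = np.column_stack([coords, np.ones(400)]) @ camera
flow += rng.normal(scale=0.05, size=flow.shape)            # flow noise
flow[:20] += np.array([4.0, 0.0])   # 20 independently moving pixels
mask = segment_moving(flow, coords)
```

Note that nothing here looks at pixel intensities — only at the flow field — which is why the approach carries over to any modality where flow can be estimated.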
3. Cross-quality distillation. This is work by my colleague Subhransu Maji that shows how to use images with more information (for example, high resolution) to help train models on low-quality images (e.g., blurry images). The gains from this technique are surprisingly large. I will describe how these techniques could be applied to non-visible-spectrum images, such as infrared.
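The usual distillation machinery behind this idea: a teacher network sees the high-quality image and produces softened predictions, and a student network seeing only the degraded image is trained to match them via cross-entropy. The sketch below shows just the loss; the logits, temperature, and class count are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax (higher T gives softer targets)."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened prediction (from the
    high-quality image) and the student's prediction (from the degraded one)."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student on the low-quality input
    return float(-np.sum(p * np.log(q + 1e-12)))

teacher = np.array([4.0, 1.0, 0.5])                  # logits on sharp image
matched = distillation_loss(teacher, teacher)        # student agrees
mismatched = distillation_loss(np.array([0.5, 1.0, 4.0]), teacher)
print(matched < mismatched)  # agreement with the teacher yields lower loss
```

Minimizing this loss transfers what the teacher learned from the informative images into the student, which at test time only ever sees the low-quality modality.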
Solution Architect, NVIDIA Corporation
Bio: Jon Barker is a Solution Architect with NVIDIA, helping customers and partners develop applications of GPU-accelerated machine learning to solve defense and national security problems. He is particularly focused on applications of the rapidly developing field of deep learning. Prior to joining NVIDIA, Jon spent almost a decade as a government research scientist within the U.K. Ministry of Defence and the U.S. Department of Defense. While in government service, he led R&D projects in sensor data fusion, big data analytics, and machine learning for multi-modal sensor data to support military situational awareness and aid decision making. He has a Ph.D. and B.Sc. in Pure Mathematics from the University of Southampton, U.K.
Abstract: Recent advances in Deep Learning have equipped machines with near-human levels of ability to automatically detect and classify visual content in a range of applications. Whilst imagery in the visible spectrum has dominated these impressive results, we are increasingly seeing similar performance in applications beyond the visible spectrum, such as automotive, medical imaging, and remote sensing. These advances are being driven by the growth in available training data, algorithmic advances, and the dense computational power provided by GPUs. An overview of relevant applications of Deep Learning beyond the visible spectrum will be provided, as well as recommendations on how to leverage the robust ecosystem of available GPU hardware and software to get started developing new applications.
Office of Naval Research, USA
Bio: Behzad Kamgar-Parsi is a program officer with the Mathematical, Computer and Information Sciences Division of the Office of Naval Research (ONR), managing programs in Intelligent and Autonomous Systems and in Image Understanding. Before joining ONR in 2001, he was a research scientist at the Naval Research Laboratory Information Technology Division, the University of Maryland Center for Automation Research, and a post-doctoral fellow at the Rockefeller University Theoretical Physics Lab. He has also served as a consultant to NIST and NASA. He has conducted research in computer vision, image processing, underwater acoustic imaging, computational intelligence, statistics, and physics, and has published many papers in leading journals and conferences in these fields. He received a B.Sc. (summa cum laude) in physics from the University of Tehran and a Ph.D. in theoretical physics from the University of Maryland.