Chief Systems Engineer
Air Force DCGS, USA
Bio: Kevin Priddy is the Chief Systems Engineer for the Air Force Distributed Common Ground System (AF DCGS). The AF DCGS, also referred to as the AN/GSQ-272 SENTINEL weapon system, is the Air Force's primary intelligence, surveillance and reconnaissance (ISR) collection, processing, exploitation, analysis and dissemination (CPAD) system.
He received his Electrical Engineering degree from Brigham Young University in 1982 and entered the USAF as an engineering officer. In 1985 he received his MS in electrical engineering from the Air Force Institute of Technology, focusing on electro-optics and semiconductor theory. He returned to the Air Force Institute of Technology in 1989 to pursue his PhD and graduated with a PhD in Electrical Engineering, focused on machine learning and pattern recognition, in 1992. Shortly after receiving his PhD, he left the Air Force and joined Accurate Automation Corporation, which focused on applying artificial neural networks to a variety of problems of interest to industry and the DoD. While at AAC he was co-inventor of a digital neural network processor that in its day outperformed all other neural network processors on the market. In 1997 he joined the Cognitive Systems Initiative at Battelle Memorial Institute, serving as its director from 1999 to 2002. In 2002 he joined Jacobs Sverdrup in Dayton, Ohio, to assess the performance of automated target recognition systems developed under the DARPA MSTAR program. In 2005 he rejoined the USAF as a civilian in the Air Force Research Laboratory (AFRL) Sensors Directorate and began working on the development of wide area motion imagery (WAMI) systems.
He is a nationally recognized expert in the conceptual formulation, development, and transition of airborne WAMI systems and their associated layered sensing environments for the processing, exploitation, and dissemination of WAMI data. He was the chief engineer for the AngelFire project, which was fielded in Iraq, and for its follow-on, Blue Devil, which was fielded in Afghanistan, as well as the driving force behind the Pursuer three-dimensional multi-INT layered sensing viewer used to exploit ISR data products. His national leadership in these areas has resulted in significant investments from across the DoD to improve the nation's capability to perform ISR missions in support of counterinsurgency (COIN) operations in Iraq and Afghanistan, and has laid the foundation for future advances in ISR capabilities. He has co-authored one book on neural networks and over 50 technical papers, and is a co-inventor on three patents. He is a fellow of SPIE and of AFRL for his contributions in machine learning and real-time wide area motion imagery.
Abstract: Machine learning, and deep learning in particular, has been front and center in a resurgence of data analytics and pattern recognition tools in the past few years. Tremendous advances have been made in one-shot learning, face recognition, pattern recognition, and even scene understanding. This success has spurred a surge of researchers promising solutions to a myriad of new problems.
This presentation will focus on the hype, hope, and promise of deep learning and compare where deep learning stands today against previous renaissance periods in machine learning. A number of lessons have been learned over the years, and those who have not studied the past risk repeating its mistakes.
Two years ago Yann LeCun pointed out a number of weaknesses in deep learning when applied to real-world problems. This presentation will offer a practitioner's point of view on working with learning systems to solve problems of interest to the DoD. Over the years, many have proclaimed that a new algorithm or learning system would solve significant problems in one area, only to have it fail miserably when applied to a different problem in the same domain, or in a different domain altogether.
This presentation will highlight lessons learned over a long career of working with learning systems and will show the audience pitfalls to avoid as they move the state of the art forward. Deep learning is making significant strides, but we must temper our enthusiasm with reality and address its shortcomings to create truly useful learning systems.
University of Delaware
Bio: Jingyi Yu is a Professor at the School of Information Science and Technology at ShanghaiTech University and a Professor in the Department of Computer and Information Sciences at the University of Delaware. He is also the founding director of the Virtual Reality and Visual Computing Center at ShanghaiTech. Yu received his B.S. from Caltech in 2000 and his Ph.D. from MIT in 2005. His research interests span a range of areas in computer vision, computational photography, and computer graphics, and he is a recipient of the NSF CAREER Award and the AFOSR YIP Award. He was a Program Chair of ICCP 2016 and an Area Chair of ICCV '11, '15, and '17, CVPR '17, and NIPS '15 and '17. He also serves as an Associate Editor of IEEE TPAMI, Elsevier CVIU, Springer TVCJ, and Springer MVA. In 2015 he founded Plex VR, a startup company that focuses on light field virtual reality and augmented reality.
Abstract: The complete plenoptic function records the radiance of rays from every location, at every angle, for every wavelength, and at every time. Existing techniques have focused on reconstructing subsets of these dimensions: a static camera for spatial, a video camera for spatial and temporal, a light field camera for spatial and angular, and an imaging spectrometer for spatial and spectral. Simultaneously capturing multiple dimensions of the plenoptic function is challenging and requires special imaging systems and post-processing algorithms. In this talk, I present several recent solutions toward ultimate plenoptic imaging. I first demonstrate a hyperspectral light field (H-LF) camera array, with each camera equipped with a narrow bandpass filter centered at a specific wavelength. To fuse H-LF data, I present a new spectral-invariant feature descriptor and its companion matching metric that maintain robustness and accuracy. Based on the new metric, we further tailor an H-LF stereo matching scheme that employs a spectral-dependent data cost and a spectral-aware defocus cost for high-fidelity depth estimation. The camera array system is effective but bulky. Therefore, I further present a single-camera H-LF solution, called the Snapshot Plenoptic Imager (SPI), that uses spectrally coded catadioptric mirror arrays to simultaneously acquire the spatial, angular, and spectral dimensions. We show that the spectral signal exhibits sparsity, and we apply a learning-based approach to improve the spectral resolution from limited measurements. Finally, I demonstrate using the recovered plenoptic function for a variety of applications, ranging from video surveillance to color image synthesis and hyperspectral refocusing.
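For reference, the complete plenoptic function described in the abstract is conventionally written in the literature as a seven-dimensional function (a standard formulation, not text from the talk itself):

```latex
% Radiance L along a ray, parameterized by:
%   (x, y, z)       -- the 3D position of the observer,
%   (\theta, \phi)  -- the viewing direction (two angles),
%   \lambda         -- the wavelength of light,
%   t               -- time.
L = P(x, y, z, \theta, \phi, \lambda, t)
```

Each device listed in the abstract samples a slice of this function: a static camera captures $(x, y)$ at fixed $t$ and integrated $\lambda$; a video camera adds $t$; a light field camera adds the angular pair $(\theta, \phi)$; and an imaging spectrometer adds $\lambda$.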
Senior Solutions Architect
Bio: Leo Tam serves as a Deep Learning (DL) Senior Solutions Architect in the NALA-based Worldwide Field Organization (WWFO). He brings to NVIDIA 10 years of R&D experience, most recently as a Postdoctoral Research Scientist at Stanford's School of Medicine, where he applied deep learning technologies to automatic diabetic retinopathy diagnosis using high-resolution images of the human retina. The resulting algorithm placed in the top 3% of a worldwide competition sponsored by the California Healthcare Foundation. Leo holds multiple patents for MRI encoding methods from his time as a Research Scientist at Resonance Research, Inc. and the Yale University School of Medicine. While at Yale, he researched, developed, and published a statistical machine learning technique to detect sparse features in MRI images. Leo earned his BSc in Mathematics-Physics from Brown University and his PhD from Yale University.
Abstract: Medical imaging is a rich field from which to draw nonvisible-spectrum imaging data. A data scientist working on such datasets may seek open source tools and quickly face a myriad of options. In this talk, I present three recent projects and extract lessons about hardware and software tools for building and deploying models. The three projects are volumetric cardiac segmentation for ejection fraction prediction, fully unsupervised clustering for high-content microscopy images, and retinal imaging for retinopathy screening. Respectively, they investigate novel deep learning approaches for semantic segmentation, unsupervised learning, and application-specific network architectures. The segmentation application shows the importance of labeled data and of extracting performance from available data sources. The clustering analysis reveals how deep learning can bring end-to-end learning to traditional machine learning domains. Finally, patch-based architectures are used to increase accuracy for image classification. Software tools surveyed and used in our projects include NVIDIA DIGITS, the BVLC Caffe framework, NYU's Torch7 framework, and Google's TensorFlow framework. The lessons gleaned from these case studies help scope dataset requirements, engineering economy, and application feasibility. Emerging from the case studies is a series of recommendations that may constitute best practices for academic and industrial deep learning applied to disparate imaging modalities and applications.