11th IEEE Workshop on Perception Beyond the Visible Spectrum (PBVS)
In conjunction with
CVPR 2015, Boston, MA, USA
Home
News
Call for Papers
Submission
Committees
Program
Keynote Speakers
Best Paper
Database
Sponsors
 Latest News

:: PBVS '15 presentation instructions are available now!

:: PBVS '15 program is available now!

:: Dr. Guna Seetharaman will give a talk at PBVS '15!

:: Dr. John Irvine will give a talk at PBVS '15!

:: Important dates are updated!

:: Prof. Margrit Betke will give a talk at PBVS '15!

:: Paper submission website is open!

:: PBVS '15 website is online!

 Questions?


Email Guoliang Fan

Invited Talks

Analyzing the Flight Behavior of Bats in Thermal Infrared Video

Margrit Betke, PhD
Professor, Department of Computer Science
Boston University


[Read more...]



Towards a Unified Understanding of Image Quality: Quantifying Spatial, Temporal, and Spectral Information for Computer Vision

John M. Irvine, PhD
Chief Scientist
Charles Stark Draper Laboratory, Inc.


[Read more...]



3D Assisted Common Operational Framework

Guna Seetharaman, PhD, FIEEE
Principal Engineer for Computer Vision and Computing Architectures
Information Directorate, Air Force Research Laboratory


[Read more...]

Analyzing the Flight Behavior of Bats in Thermal Infrared Video



Margrit Betke
Professor, Department of Computer Science
Boston University
Website: http://www.cs.bu.edu/~betke/


Abstract: Thermal imaging is a valuable technology for recording the movements of nocturnal animals such as bats. Analyzing bat flight with computer vision algorithms provides a new perspective on how these gregarious animals move through three-dimensional space. The results can be applied to a wide array of tasks, for example, bio-inspired engineering of aircraft, or censusing the population of a bat species to quantify its ecological and economic impact. Colonies of Brazilian free-tailed bats are of particular interest because they represent some of the largest aggregations of mammals known. Censusing these bats accurately is challenging because they emerge from their daytime roosting sites in large numbers at night. We have used infrared thermal cameras to record Brazilian free-tailed bats in California, Massachusetts, New Mexico, and Texas, and developed automated image analysis methods that detect, track, and count emerging bats. We found that six colonies of Brazilian free-tailed bats in the southwestern United States may have plummeted from 54 million members to 4 million since 1957. In this talk, I will describe guidelines for camera setup and calibration in the field, and how to use multi-camera stereography to analyze the three-dimensional flight paths of the bats. Our techniques include detection of individual animals in each camera view, reconstruction of their positions in three-dimensional space, across-time and across-space data association, and multiple-object tracking.

Bio: Margrit Betke is a Professor of Computer Science at Boston University, where she co-leads the Image and Video Computing Research Group. She conducts research in computer vision, in particular the development of methods for detection, segmentation, registration, and tracking of objects in visible-light, infrared, and x-ray image data. She has worked on gesture, vehicle, and animal tracking, video-based human-computer interfaces, statistical object recognition, and medical image analysis. She has published over 100 original research papers. She earned her Ph.D. in Computer Science and Electrical Engineering at the Massachusetts Institute of Technology in 1995. Prof. Betke received the National Science Foundation Faculty Early Career Development Award in 2001 for developing "Video-based Interfaces for People with Severe Disabilities." She co-invented the "Camera Mouse," an assistive technology used worldwide by children and adults with severe motion impairments. While she was a Research Scientist at Massachusetts General Hospital and Harvard Medical School, she co-developed the first patented algorithms for detecting and measuring pulmonary nodule growth in computed tomography. She was one of two academic honorees of the "Top 10 Women to Watch in New England" award by Mass High Tech in 2005. She is a Senior Member of the ACM and the IEEE. She currently leads a five-year research program to develop intelligent tracking systems that reason about the group behavior of people, bats, birds, and cells.


[Back to the top]


Towards a Unified Understanding of Image Quality: Quantifying Spatial, Temporal, and Spectral Information for Computer Vision



John M. Irvine, PhD
Chief Scientist
Charles Stark Draper Laboratory, Inc.


Abstract: A fundamental concept for image analysis is that the underlying quality of the imagery must be sufficient to support the analytic tasks. Historically, the analysis and quantification of image quality has centered on human perception, leading to standards such as the Johnson Criteria and the National Imagery Interpretability Rating Scale (NIIRS). More recent research, however, indicates that "machine perception" of image quality differs from human perception. More precisely, the ability of computer vision algorithms to detect, track, or classify objects is sensitive to different image characteristics than a human analyst attempting similar tasks. For human perception, the NIIRS has been extended to imagery data beyond the visible spectrum: thermal infrared, synthetic aperture radar (SAR), multispectral imagery, and motion imagery. In this presentation, we review these perceptual standards for image quality and contrast them with the performance of computer vision algorithms. In general, the ability to detect, track, and classify objects depends on the saliency of the object. For machine vision methods, algorithm performance is sensitive to the complexity of the clutter and the level of noise - factors that play a much smaller role for human perception. We will present empirical results that illustrate these issues for motion imagery, thermal infrared, SAR, and multispectral data. These experiments motivate an information-theoretic formulation of machine perception of image quality, which we will present.

Bio: John M. Irvine, PhD, is the Chief Scientist for Data Analytics at Draper Laboratory. His interests include signal and image processing, image quality, video analytics, and novel biometrics. He was a Principal Investigator (PI) for IARPA's ACE Program and multiple ONR programs addressing image and video analysis for Intelligence, Surveillance, and Reconnaissance applications. Dr. Irvine was also a PI for DARPA's Human Identification at a Distance (HumanID) Program, Senior Scientist for NGA's STAR Program, and PI for an OSD-sponsored program on Automated Target Recognition (ATR) and information fusion technology. He was a principal scientist for the development of the National Imagery Interpretability Rating Scales (NIIRS) and their extension to Video NIIRS. He serves on planning committees for IEEE and SPIE, and has served on several advisory panels for the Department of Defense and the Department of Energy. Prior to joining Draper, he was the Deputy Division Manager for Systems and Technology and a Technical Fellow at SAIC. He has authored over a hundred journal and conference papers and holds a PhD in Mathematical Statistics from Yale University.


[Back to the top]


3D Assisted Common Operational Framework



Guna Seetharaman, PhD, FIEEE
Principal Engineer for Computer Vision and Computing Architectures
Information Directorate, Air Force Research Laboratory
Rome NY 13441


Abstract: This talk will describe how LIDAR-derived and motion-imagery-derived 3D point clouds can be combined to constitute a robust common operational framework, in which additional information can be injected, fused, and refined to develop a better net perception of the dynamic environment.

Bio: Dr. Guna Seetharaman is a Principal Engineer for Computer Vision and Video Exploitation at the Information Directorate, Air Force Research Laboratory, Rome, NY. He is currently focused on high-performance computing for video exploitation: computer vision, machine learning, content-based image retrieval, persistent surveillance, and computational science and engineering. He served as an associate professor of computer science and engineering at the Air Force Institute of Technology (AFIT, 2003-2008) and the University of Louisiana at Lafayette (1988-2003). He earned his PhD in electrical and computer engineering from the University of Miami in 1988. He co-founded Team CajunBot, a participant in the DARPA Grand Challenge, and led its LIDAR data processing and obstacle detection efforts, demonstrated in the 2005 and 2007 DARPA Grand Challenges. He was a member of the AFIT-based core team that demonstrated and transitioned a wide-area persistent imaging and surveillance system known as Angel Fire. He has published more than 150 peer-reviewed articles in computer vision, low-altitude aerial imagery, parallel computing, VLSI signal processing, 3D displays, nanotechnology, micro-optics, and 3D video analysis. He guest-edited the IEEE Computer special issue devoted to Unmanned Intelligent Autonomous Vehicles (December 2006) and a special issue of the EURASIP Journal on Embedded Computing on intelligent vehicles. He is an associate editor of ACM Computing Surveys. He served as the General Chair of the IEEE Workshop on Computer Architecture for Machine Perception 2003 and co-chaired the technical program committee of IEEE AIPR 2014. He was elevated to IEEE Fellow in January 2015. He served as Chair of the IEEE Mohawk Valley Section for 2013 and 2014. He is a member of Tau Beta Pi and Eta Kappa Nu. He can be contacted at guna (at) ieee dot org and guna.seetharaman (at) acm dot org.

[Back to the top]




Copyright © PBVS, Contact the Webmaster.