Associate Research Scientist
Computer Vision and Human Vision, Perception for Robotics, visual motion analysis, multi-view geometry, shape, texture, action perception.
Cornelia Fermuller is an Associate Research Scientist at the Computer Vision Laboratory of the Institute for Advanced Computer Studies. She holds a Ph.D. from the Technical University of Vienna, Austria (1993) and an M.S. from the University of Technology, Graz, Austria (1989), both in Applied Mathematics.
Cornelia Fermuller’s research is in the areas of Computer Vision and Human Vision. In her work in Computer Vision, she has developed many computational models and implemented software solutions for applications in visual navigation and image processing. Her work on biological vision involves examining computational constraints, building simulation models, and performing psychophysical experiments to understand the computational mechanisms underlying human motion perception and low-level signal perception.
Many of her studies have investigated the computational principles underlying multiple-view geometry and statistics, and she has discovered a number of basic computational principles in the analysis of visual motion and shape recovery. These include view-invariant texture descriptors, constraints on 3D motion estimation, 3D shape recovery and image segmentation, insights on the effects of sensor design on motion estimation, and findings of statistical bias in low-level processing. She has applied these studies in a number of applications, including new imaging sensors for better motion and shape recovery, software for visual motion tasks in navigation and robotics, and various tasks of video computing, such as compression, video manipulation, and image-based rendering.
Her current research interests center on developing cognitive robotic systems that integrate perception with action, reasoning, and language. One ongoing project develops a robot that visually searches for objects in a room. A second project addresses the recognition of human manipulation actions: the robot uses a human-inspired approach involving attention, segmentation, and attribute description to recognize objects and actions, and draws on semantic knowledge acquired from language tools about the relationships among objects, actions, their attributes, and the environments in which they occur.