Being able to understand the environment, which is usually time-varying and unknown a priori, is an essential prerequisite for intelligent and autonomous systems such as mobile robots and self-driving cars. Environmental information can be acquired through various sensors, but raw sensor data is often noisy, imprecise, incomplete, and superficial. Deriving from raw sensory data an accurate internal representation of the environment, i.e. a digital map with accurate positions, headings, and identities of the objects in the environment, is critical but very difficult in the development of autonomous systems. The main challenges are the uncertainty of the environment and the limitations of sensors. There are essentially two categories of techniques for handling uncertainty: adaptive and robust. Adaptive techniques exploit a posteriori uncertainty information that is "learnt" online, whilst robust techniques take advantage of a priori knowledge about the environment and sensors. We are investigating novel computer vision methods, including deep learning, for invariant feature extraction and object recognition, combined with multisensor data fusion methods and multiple-model approaches to modelling and control. ImageNet (http://www.image-net.org/) is a valuable image database providing a huge number of image samples for machine-learning-based computer vision research.
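As a minimal illustration of multisensor data fusion, the sketch below combines two noisy readings of the same quantity by inverse-variance weighting, the basic building block behind Kalman-style fusion. The sensor names and noise values are purely illustrative and not taken from the project.

```python
# Hypothetical sketch: fusing two noisy range readings by inverse-variance
# weighting. The sensor names and variances below are illustrative only.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two estimates of the same quantity.

    Each estimate carries its own variance; the fused estimate weights
    each reading by the inverse of its variance, so the more certain
    sensor dominates. The fused variance is smaller than either input
    variance, reflecting the information gained by combining sensors.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_est, fused_var

# Example: a lidar reading of 10.0 m (variance 0.04) and a sonar reading
# of 10.4 m (variance 0.36) of the same obstacle. The fused estimate
# stays close to the more reliable lidar reading.
est, var = fuse(10.0, 0.04, 10.4, 0.36)
```

The same weighting generalises to the Kalman filter update step, where the "robust" a priori sensor model supplies the variances and "adaptive" schemes re-estimate them online.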
A. Computer Vision Based Scene Understanding and Navigation for the Blind: The Invisible Made Audible
This research aims to develop new methods for feature extraction, object recognition, and scene understanding for computer-vision-based navigation for the blind, with the goal of supporting independent living. Using computer vision methods, a mobile device in an indoor environment can recognise specific text and marks, people, and objects, estimate location, distance, and direction, and give an audio description of the environment together with navigation instructions. The proposed methods should be highly efficient and effective so that they can be implemented on a smartphone, or on eyeglasses with built-in cameras connected to a portable computer.
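To make the distance and direction estimates concrete, here is a minimal sketch based on the pinhole camera model: once an object of known physical size (say, a standard door sign) has been recognised, its distance follows from its apparent size in the image, and its bearing from its horizontal offset. The focal length, field of view, and object sizes are illustrative assumptions, not values from the project.

```python
# Hypothetical sketch: estimating distance and direction to a recognised
# object of known physical width, using the pinhole camera model.
# All numeric values (focal length, sign width, field of view) are
# illustrative assumptions.

def estimate_distance(focal_px, real_width_m, pixel_width):
    """Pinhole model: distance = f * W / w, where
    focal_px     -- camera focal length in pixels (from calibration)
    real_width_m -- known real-world width of the object in metres
    pixel_width  -- measured width of the object in the image, in pixels
    """
    return focal_px * real_width_m / pixel_width

def bearing_deg(cx, image_width, fov_deg):
    """Approximate horizontal direction of the object relative to the
    camera axis, assuming angle varies linearly across the field of view.
    cx is the object's centre column in pixels."""
    offset = (cx - image_width / 2) / image_width  # in [-0.5, 0.5]
    return offset * fov_deg

# A 0.30 m wide sign appearing 120 px wide, with a 600 px focal length,
# is estimated to be 1.5 m away.
d = estimate_distance(600, 0.30, 120)
```

Estimates like these can then be rendered as audio ("sign, one and a half metres, slightly right"), which is the kind of description-to-speech step the project targets.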
B. Deep Learning for Face Recognition and Scene Understanding (with Applications in Security)
Recent research has shown that brain waves contain useful information about a person's intentions or mental state. After some training sessions, distinctive patterns associated with specific intentions can be produced and detected in brain waves and translated into commands to control computers and robots. One interesting application of this idea is prosthesis control for the disabled. We are developing novel methods for brain wave pattern detection and analysis (feature extraction, selection, and classification), as well as online brain-computer interface (BCI) systems for specific applications with various BCI protocols.
A. Deep Learning for Developing Subject-independent Brain-Computer Interface Systems
One of the major issues in brain-computer interfacing (BCI) is that state-of-the-art BCI systems are subject-dependent: a BCI system has to be built for a specific user through extensive training with that user, and then works well only for that user. It is desirable that, like speaker-independent speech recognition systems, BCI systems be user/subject-independent. This study will focus on subject adaptation through a deep learning approach. Unsupervised learning algorithms for deep neural networks will be developed to automatically extract and select subject-invariant features from brain signals (EEG), which would differ substantially from traditional feature extraction methods in BCI systems. The overfitting issue in deep neural networks will be addressed from both theoretical and experimental perspectives. A large collection of EEG signals from multiple subjects will be treated as samples from different distributions. It is expected that the developed deep learning approach will be able to automatically extract/discover common features that are effective for classifying user intentions in BCI systems.
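Subject independence is typically measured by leave-one-subject-out evaluation: train on all subjects except one, then test on the held-out subject, so the test user is never seen during training. The sketch below shows this protocol with a toy nearest-class-mean classifier and synthetic "EEG features"; both are illustrative stand-ins for a deep feature extractor, not the project's actual method.

```python
# Hypothetical sketch: leave-one-subject-out (LOSO) evaluation, the
# standard way to measure whether a BCI model generalises to unseen
# users. The nearest-class-mean classifier and the synthetic feature
# vectors are illustrative stand-ins, not the project's actual model.

def class_means(samples):
    """samples: list of (feature_vector, label). Returns {label: mean}."""
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(means, x):
    """Assign x to the class with the nearest mean (squared distance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(means, key=lambda y: dist2(means[y], x))

def loso_accuracy(data_by_subject):
    """data_by_subject: {subject: [(features, label), ...]}.
    Train on all subjects except one, test on the held-out subject."""
    correct = total = 0
    for held_out in data_by_subject:
        train = [s for subj, ss in data_by_subject.items()
                 if subj != held_out for s in ss]
        means = class_means(train)
        for x, y in data_by_subject[held_out]:
            correct += predict(means, x) == y
            total += 1
    return correct / total

# Synthetic two-class data for three subjects: class 0 near the origin,
# class 1 near (1, 1), with small per-subject offsets.
data = {
    "s1": [([0.1, 0.0], 0), ([0.9, 1.0], 1)],
    "s2": [([0.0, 0.2], 0), ([1.1, 0.9], 1)],
    "s3": [([0.2, 0.1], 0), ([1.0, 1.1], 1)],
}
acc = loso_accuracy(data)
```

A subject-dependent system would train and test on the same user; the LOSO split above is what makes the evaluation measure subject independence.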
B. SSVEP-based Emotion Related Attention Bias Detection and Modification
Mental disorders greatly affect the quality of life of many people around the world. Some mental disorders arise from attention bias; for example, stress and anxiety can result from an attention bias towards negative emotions. Unlike the traditional psychological approach to attention bias detection and modification, this PhD work will address the problem through steady-state visually evoked potential (SSVEP) analysis. Both positive and negative emotional stimuli (images or videos), suitable for inducing SSVEP, will be designed, and extensive SSVEP experiments with subjects exhibiting low and high levels of anxiety will be conducted. Novel methods for effective feature extraction and selection from SSVEP will be developed. From these features, effective biomarkers for attention bias detection will be identified using machine learning and data mining methods. Another important part of this PhD work is to interpret the identified biomarkers from the perspective of neuroscience and to use them to develop online SSVEP-based biofeedback for attention bias modification or intervention.
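A basic SSVEP feature is the signal power at each candidate flicker frequency: the stimulus the subject attends to produces an elevated response at its flicker rate. The sketch below detects the attended frequency with the Goertzel algorithm on a clean synthetic signal; the frequencies and noise-free signal are illustrative, and real EEG would require substantial preprocessing.

```python
# Hypothetical sketch: detecting which flicker frequency an SSVEP
# response is locked to, by comparing signal power at each candidate
# stimulus frequency via the Goertzel algorithm. The candidate
# frequencies and the noise-free synthetic signal are illustrative.
import math

def goertzel_power(signal, fs, freq):
    """Power of `signal` (sampled at `fs` Hz) at a single frequency,
    computed with the Goertzel recurrence (a one-bin DFT)."""
    n = len(signal)
    k = round(n * freq / fs)          # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def detect_ssvep(signal, fs, candidates):
    """Return the candidate frequency with the highest power."""
    return max(candidates, key=lambda f: goertzel_power(signal, fs, f))

# One second of a synthetic 10 Hz response sampled at 250 Hz; the
# detector should pick 10 Hz out of the candidate flicker rates.
fs = 250
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
f = detect_ssvep(sig, fs, [8, 10, 12, 15])
```

In the attention-bias setting, the positive and negative stimuli would flicker at different frequencies, so the relative power at the two frequencies indicates which stimulus the subject is attending to.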
C. Developing Reliable Methods for Onset Detection in Self-paced Brain Computer Interface Systems
Self-paced brain-computer interface (SBCI) systems allow people with motor disabilities to use their brain signals to control devices whenever they wish, which would play an important role in improving the independence and quality of life of disabled people. One of the major challenges in SBCI is to detect the onset of the user's intentional control accurately and reliably. This research aims to develop novel, robust methods for onset detection in motor-imagery-based SBCI systems. New effective features for onset detection will be explored from different perspectives. Probabilistic models and machine learning approaches will be developed for onset detection after feature extraction. The newly developed methods will be tested on both offline EEG data and online SBCI systems with a properly selected application.
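A simple baseline for onset detection, sketched below, applies a threshold plus a dwell-time requirement to a stream of per-step classifier scores: an onset is flagged only when the "intentional control" score stays above threshold for several consecutive steps, suppressing brief noise spikes. The classifier scores and threshold values are illustrative assumptions, not the project's method.

```python
# Hypothetical sketch: onset detection for a self-paced BCI using a
# threshold plus a dwell-time requirement. A classifier is assumed to
# emit one "intentional control" score per time step; brief spikes above
# the threshold are ignored, and only a sustained run of `dwell` steps
# is reported as an onset. All numeric values are illustrative.

def detect_onset(scores, threshold=0.8, dwell=3):
    """Return the index at which sustained control begins, or None."""
    run = 0
    for i, s in enumerate(scores):
        run = run + 1 if s >= threshold else 0
        if run >= dwell:
            return i - dwell + 1   # first index of the sustained run
    return None

# The single noise spike at index 2 is ignored; the sustained rise
# starting at index 5 triggers the onset.
scores = [0.1, 0.2, 0.9, 0.1, 0.3, 0.85, 0.9, 0.95, 0.9]
onset = detect_onset(scores)
```

The dwell-time requirement trades detection latency for a lower false-positive rate, which is the central tension in self-paced operation, since the system must stay quiet during the (typically long) periods when the user issues no command.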
(4) Machine Learning Approach to Document Representation, Clustering/classification, and Ranking
Searching for useful information in 'Big Data', such as the tremendous number of documents on the Web, is a very challenging problem. This research aims to develop novel machine learning methods for effective modelling of document content, including feature extraction and selection, and algorithms for efficient document indexing, classification/clustering, and ranking. The developed methods will be extensively validated on large document databases.
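As a baseline for document representation and ranking, the sketch below builds TF-IDF vectors and ranks documents against a query by cosine similarity, the classic vector-space model that learned representations are usually compared to. The toy corpus is illustrative.

```python
# Hypothetical sketch: ranking documents against a query with TF-IDF
# weighting and cosine similarity, the classic vector-space baseline
# for document representation and ranking. The toy corpus below is
# illustrative only.
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns ({term: weight} per doc, idf)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query_tokens, docs):
    """Return document indices ordered by similarity to the query."""
    vecs, idf = tfidf_vectors(docs)
    q = Counter(query_tokens)
    qvec = {t: q[t] * idf.get(t, 0.0) for t in q}
    scores = [(cosine(qvec, v), i) for i, v in enumerate(vecs)]
    return [i for s, i in sorted(scores, reverse=True)]

docs = [
    "deep learning for image classification".split(),
    "document clustering and ranking on the web".split(),
    "web search and document ranking with machine learning".split(),
]
order = rank("document ranking".split(), docs)
```

Feature extraction and selection in this setting amount to choosing better term weights or lower-dimensional representations than raw TF-IDF, which is where the proposed machine learning methods would come in.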
© Copyright 1999-2017 University of Essex.
This page was last modified by John Gan in Feb 2017
E-mail: jqgan@essex.ac.uk