1) MSc course CE803 Human Machine Interaction (2008 - present)
2) MSc course CC462, Behaviour-based Robotics (1998-2000)
3) MSc course CC465, Programming Embedded Systems (2001-2003)
4) MSc course CE901, MSc Project & Dissertation, (1999-present)
5) MSc course CE903, Group Project, (2008-present)
6) BSc course CC362, Mobile Robots (1998-present)
7) BSc course CE301, Individual final-year Project, (1999-present)
This home page is still under construction. More information will be added gradually.
20 Lectures, 2 Classes, 8 Laboratories
The aim of this module is to provide a general understanding of autonomous mobile robots, which have a wide range of potential applications in the real world. Various approaches are reviewed together with their associated design methodologies. Mobile robots are intelligent machines with many embedded computers, sensors and actuators that interact intelligently with the real world. They are generally characterised by real-time performance, autonomous operation and learning capabilities.
Students who have attended this module should be able to:
Coursework: One assignment (Weight: 40%)
Examination: Two-hour closed examination in the summer term (Weight: 60%)
Some knowledge of Artificial Intelligence, Robotics, Computer Architecture and the C language.
10 lectures, 2 classes, 3 Laboratories
The aim of this module is to explore conceptual and practical aspects of building robotic systems and intelligent machines based on behaviour decomposition. This computing paradigm was inspired in the late 1980s by the perceived limitations of classical AI in contrast to the apparent success of primitive living organisms. It is now one of the dominant control architectures in mobile robots.
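The idea of behaviour decomposition can be sketched as a priority-ordered set of simple behaviours, where a higher-priority behaviour subsumes the output of lower ones. The behaviour names, sensor fields and numeric thresholds below are illustrative only, not part of the module material:

```python
# Minimal sketch of behaviour-based (subsumption-style) arbitration.
# Each behaviour maps sensor readings to an optional motor command;
# higher-priority behaviours subsume (override) lower-priority ones.
# Behaviour names, sensor keys and thresholds are illustrative assumptions.

def avoid_obstacle(sensors):
    # Highest priority: turn away if an obstacle is close.
    if sensors["front_range_m"] < 0.3:
        return ("turn", -0.5)
    return None

def seek_light(sensors):
    # Middle priority: steer toward the brighter side.
    diff = sensors["light_left"] - sensors["light_right"]
    if abs(diff) > 0.1:
        return ("turn", 0.2 if diff > 0 else -0.2)
    return None

def wander(sensors):
    # Lowest priority: default forward motion.
    return ("forward", 0.2)

BEHAVIOURS = [avoid_obstacle, seek_light, wander]  # priority order

def arbitrate(sensors):
    # Return the command of the first (highest-priority) active behaviour.
    for behaviour in BEHAVIOURS:
        command = behaviour(sensors)
        if command is not None:
            return command
```

Because each behaviour is a self-contained sensor-to-action mapping, new behaviours can be added or removed without redesigning the rest of the controller, which is the practical appeal of this architecture.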
Objectives At the end of the course students should be able to:
Assessment Coursework: One assignment (Weight: 25%)
Examination: Two-hour closed examination in the summer term (Weight: 75%)
Highly Recommended Reading
Prerequisite Some knowledge of Artificial Intelligence, Robotics, Computer Architecture and the C language.
This project is to develop a prototype autonomous mobile robot that can interact with the world through speech, a visual display, facial expressions and physical motion. It is intended for use on university open days to interact with prospective students: it can greet them and answer their questions about the courses in our department. The robot consists of an iCat, which has a cat face, can speak and can show facial expressions, mounted on a mobile base that can move around indoors. The main task will be face detection and recognition using image processing techniques, with a particular focus on new algorithmic approaches to tracking people, predicting their behaviour and reacting appropriately. Commercial text-to-speech software will be used in the project.
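One minimal way to sketch the tracking-and-prediction step is a constant-velocity tracker over the face centre reported by a detector (for example OpenCV's pretrained Haar cascades). The class below is an illustrative alpha-filter-style smoother, not the project's actual algorithm; the gain value is an assumption:

```python
# Minimal sketch of constant-velocity tracking for a detected face centre.
# A face detector is assumed to supply (x, y) measurements each frame;
# this class only smooths them and predicts the next position.
# The class name and the gain value are illustrative assumptions.

class FaceTracker:
    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)
        self.vx, self.vy = 0.0, 0.0

    def predict(self):
        # Where we expect the face in the next frame (constant velocity).
        return (self.x + self.vx, self.y + self.vy)

    def update(self, mx, my, gain=0.5):
        # Blend the prediction with the new measurement.
        px, py = self.predict()
        self.vx += gain * (mx - px)
        self.vy += gain * (my - py)
        self.x = px + gain * (mx - px)
        self.y = py + gain * (my - py)
        return (self.x, self.y)
```

The predicted position can be used both to keep the camera oriented toward a visitor and to flag unexpected motion for the robot to react to.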
In order to study fish-like motion control and autonomous navigation algorithms, it is important to build a 3D simulator to evaluate the developed algorithms through many simulated trials before a real robotic fish is constructed. However, conventional simulators such as MOBS for mobile robots have some limitations. They usually model only 2D movement, and their kinematic and dynamic models are simple compared with those of robotic fish, because a common mobile robot can take no account of the interaction between itself and its locomotion medium, air, whereas the interaction between a robotic fish and water is one of the most important aspects of robotic fish motion control. Other researchers, such as Tu and Gates, have focused on the animation of real fish. Their main aim is the fish-like behaviour of self-animating artificial fish, not motion control or autonomous navigation. Up to now, no other researchers have built 3D simulators for the autonomous navigation of robotic fish. It is a new and challenging field for us.
Xiaoyuan Tu. "Artificial Animals for Computer Animation: Biomechanics, Locomotion, Perception and Behavior". New York: Springer, 1999.
William F. Gates. "Animation of Fish Swimming". University of British Columbia, 2001.
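A body-wave model of the kind commonly used to drive carangiform robotic-fish simulations can be sketched as an amplitude envelope multiplied by a travelling sine wave. The coefficient values and the simple joint discretisation below are illustrative assumptions, not parameters of the simulator described above:

```python
import math

# Sketch of a travelling body wave for a carangiform robotic-fish model:
# lateral displacement = amplitude envelope * travelling sine wave.
# Coefficients c1, c2, k and omega are illustrative assumptions.

def body_wave(x, t, c1=0.05, c2=0.09, k=2 * math.pi, omega=2 * math.pi):
    """Lateral displacement of the fish body at axial position x, time t."""
    envelope = c1 * x + c2 * x * x      # amplitude grows toward the tail
    return envelope * math.sin(k * x + omega * t)

def joint_angles(n_joints, t, length=1.0):
    # Discretise the wave into target displacements for an n-link tail.
    xs = [length * (i + 1) / n_joints for i in range(n_joints)]
    return [body_wave(x, t) for x in xs]
```

Sampling this wave at the joint positions of a multi-link tail at each simulation step gives the oscillation pattern whose interaction with the water the 3D simulator must then model.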
Children with severe motor disability or without full control of their upper limbs have difficulty working with the traditional joysticks and keyboards regularly used with PC and console games, such as those on the Xbox. Myoelectric control can provide an alternative human-machine interface (HMI). Surface myoelectric signals (MES) contain rich information that can be used to recognise intended movement commands from muscular contraction. Hence a myoelectric control system (MCS), which uses the MES as its input signal, has the potential to become a competent alternative to traditional HMIs. The objectives of the project are:
• To develop a powerful and flexible SVM-based pattern recognition core for the MCS
• To develop a Java-based interface for the Xbox
• To model muscle fatigue during long-term operation
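Pattern recognition for myoelectric control typically works on short windows of the raw signal, from which time-domain features are computed and fed to the classifier. The sketch below shows two standard features (mean absolute value and zero crossings) that could form the input vector to an SVM core; the function names and the noise threshold are illustrative assumptions:

```python
# Sketch of time-domain feature extraction for surface myoelectric signals.
# Features such as these would form the input vector to an SVM-based
# pattern-recognition core; the threshold value is an illustrative assumption.

def mean_absolute_value(window):
    # Average rectified amplitude: rises with contraction strength.
    return sum(abs(s) for s in window) / len(window)

def zero_crossings(window, threshold=0.01):
    # Count sign changes that exceed a small noise threshold:
    # a rough indicator of the signal's frequency content.
    count = 0
    for a, b in zip(window, window[1:]):
        if a * b < 0 and abs(a - b) > threshold:
            count += 1
    return count

def feature_vector(window):
    return [mean_absolute_value(window), zero_crossings(window)]
```

Per-window feature vectors like these, labelled by the intended movement, would then be used to train and query the SVM classifier.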
The aim of this project is the design and construction of a hybrid controller for a powered wheelchair using surface myoelectric signals and head motions. The myoelectric signals are collected non-invasively by bipolar electrodes placed on the skin over two neck muscles. Head motions are recognised by a simple goniometer placed vertically on the back of the neck. The instruments are formed into a collar that can easily be worn. Sensor fusion techniques are applied to both the bio-signals and the head position data to provide a reliable and robust control scheme for the powered wheelchair. The proposed controller should be able to move the powered wheelchair in four directions (forward, backward, right and left) and adjust its orientation without hand movement.
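A very simple way to picture the fusion of the two signal sources is a rule-based mapping from neck-muscle activity and head pitch to the four wheelchair commands. The thresholds, channel names and co-contraction safety rule below are illustrative assumptions, not the project's actual control scheme:

```python
# Sketch of rule-based fusion of neck-EMG amplitudes and head pitch into
# the four wheelchair commands. Thresholds, channel names and the
# co-contraction safety rule are illustrative assumptions.

def wheelchair_command(emg_left, emg_right, head_pitch_deg):
    if emg_left > 0.5 and emg_right > 0.5:
        return "stop"          # co-contraction treated as a safety stop
    if emg_left > 0.5:
        return "left"          # left neck muscle active
    if emg_right > 0.5:
        return "right"         # right neck muscle active
    if head_pitch_deg > 10:
        return "forward"       # head tilted forward
    if head_pitch_deg < -10:
        return "backward"      # head tilted back
    return "stop"
```

A practical controller would replace these fixed thresholds with the fused, per-user-calibrated estimates described above, but the command set is the same.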
In recent years, unmanned aerial vehicles (UAVs) have been widely used in both commercial and military applications, which offers a great number of research challenges in areas such as aerodynamics, artificial intelligence, sensor data fusion, human-machine interfaces, global positioning systems, control theory and wireless communication. This project will focus on the Draganflyer SAVS, a gyro-stabilised RC helicopter with an onboard stabilised aerial video system. It has four rotors and is also known as an X4-flyer. The task is to investigate the system modelling and stabilisation problems, as well as the development of a suitable motion planning algorithm to control the X4-flyer so that it reaches a specific destination inside the robot arena.
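The stabilisation problem on one axis can be sketched with a PD feedback law on a simplified model: roll angle and roll rate are fed back as a corrective torque (realised on an X4-flyer by differential rotor thrust). The gains, the unit-inertia double-integrator model and the time step below are illustrative assumptions, not the Draganflyer's actual dynamics:

```python
# Sketch of PD attitude stabilisation for a four-rotor (X4) platform on a
# simplified unit-inertia double-integrator roll model. Gains, model and
# time step are illustrative assumptions.

def simulate_roll(roll0, rate0, kp=4.0, kd=2.5, dt=0.01, steps=1000):
    roll, rate = roll0, rate0
    for _ in range(steps):
        torque = -kp * roll - kd * rate   # PD law driving roll to zero
        rate += torque * dt               # integrate angular acceleration
        roll += rate * dt                 # integrate angular rate
    return roll, rate
```

With these gains the closed loop is well damped, so an initial roll disturbance decays to essentially zero over the simulated ten seconds; the full project would extend this to all three axes plus altitude and couple it to the motion planner.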
A crucial capability for an autonomous mobile robot is the ability to estimate its position in the real world. It is well known that a mobile robot cannot rely solely on dead reckoning to determine its position, because dead-reckoning errors are cumulative. Therefore, a mobile robot must be equipped with external sensors to observe useful features in the environment in order to update its position more accurately. Two kinds of environment features have been used for mobile robot localisation. One is an artificial beacon or landmark placed at a known position in a structured environment; the other is a natural beacon or landmark extracted from the real world. Artificial beacons simplify the localisation process, since the robot's motion is measured relative to pre-placed beacons in a map-based approach. Natural beacons enable a mobile robot to operate in unstructured environments, but at the high cost of interpreting sensory data to obtain useful features. Considerable work has been carried out using external sensors to measure either artificial or natural beacons, but little work has been done to combine both in the robot localisation process.
This project is to investigate how to integrate multiple simple and inexpensive sensors into the localisation process, so that a mobile robot can observe both kinds of features in its environment. An extended Kalman filter will be used to fuse the data from the multiple sensors. As the mobile robot traverses its environment, it should be able to observe both artificial and natural beacons to update its position and the environment map continuously.
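The predict/update cycle at the heart of Kalman-filter localisation can be illustrated in one dimension: dead reckoning moves the estimate and inflates its uncertainty, and a beacon observation then pulls the estimate back and shrinks it. The real project would use the full nonlinear vehicle and sensor models of the extended Kalman filter; the noise values here are illustrative assumptions:

```python
# One-dimensional sketch of the predict/update cycle a Kalman filter
# performs when fusing dead reckoning with a beacon observation.
# Process noise Q and measurement noise R are illustrative assumptions.

def kf_step(x, P, u, z, Q=0.04, R=0.01):
    # Predict: dead reckoning moves the robot by u, inflating uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # Update: correct with the beacon-derived position measurement z.
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

Because the beacon measurement here is much less noisy than the odometry (R < Q), the gain K is close to one and the corrected estimate sits near the observation, which is exactly why periodic beacon sightings keep the cumulative dead-reckoning error bounded.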
Cooperation among multiple mobile robots has recently become a challenging research topic in the robotics community. In general, a team of mobile robots forms a multi-agent system that performs a specific task through cooperation. Such a multi-agent system has three distinguishing features. First, the environment the robots face is dynamic, and the robots change their positions all the time. Second, the robots should communicate their positions to one another, or a central control station must detect each robot's position and transmit it to the individual robots. Third, a modular design is needed so that each component of the system can be developed systematically.
A robot soccer game is an interesting domain for studying multi-agent systems. The main objective of each team of robots is to score as many goals as possible in the presence of an opposing team with the same task. The system therefore needs real-time sensing, decision making and reactive behaviours. This project is to investigate how to build such a multi-agent system to play a robot soccer game.