User Manual

1. Introduction

xLabs’ revolutionary software enables continuous, real-time tracking of your head, facial features, eyes, and gaze. Gaze tracking means we can work out what you’re looking at on the computer screen! We do this without restricting your natural range of movement, and without expensive additional hardware. Only a webcam is needed.

Maybe it’s easiest to explain with a short video:

The system can provide this data to web pages, enabling gaze-sensitive web browsing and allowing you to participate in online usability testing. Head tracking enables you to browse the web without touching the computer, play games, and more. Check out some of our demos to see what it can do! All processing is done on your own computer, so we don’t need to send video anywhere. This ensures your images stay private. The system only works when you allow it. You can easily remove the xLabs browser extension at any time. This software was created by xLabs Pty. Ltd. of Melbourne, Australia.

1.1 System Requirements

All you need is a modern computer (laptop or desktop) with a webcam. Most laptops have built-in webcams, and these work fine. We recommend a fast, recent machine with an Intel i5 or i7 processor running at 2 GHz or faster. Windows tablets such as the Surface also work well. You also need Google Chrome version 36 or later. Unfortunately, we can’t support the open-source Chromium browser at this time, because it doesn’t support Google Native Client.

1.2 Privacy & Security

We never record, store or transmit any video or images. Our software analyses images from the camera automatically, and then forgets them. Images are never stored on your computer, or anywhere else. We don’t connect to or use the microphone. If your camera has an activity LED, it will turn on whenever we connect to the camera. You can turn the system off at any time using the popup menu, and you can disable or remove it just like any other Chrome extension. We don’t track your web browsing – we only track use of our software’s features, and we keep a record of which webpages interact with our software. We do not record the users of those webpages, and we do not record the actions of individual users. Because we only use Chrome’s camera interface, our camera access exposes you to no additional security risks.

2. How to get the software

Using the Chrome web browser, visit our page on the Chrome store:

Click “Add to Chrome” to add our software to your browser. The software is installed immediately, but open pages must be refreshed before they can communicate with it. After installation, our Options Page will appear, and you will be prompted to allow our software to access the camera. Please click “Allow”, otherwise our software will not be able to work.

Above: Make sure you select “Allow” when prompted for camera access, otherwise the browser will not be permitted to access the camera.

If you forget or deny camera access, you will need to delete this setting from your browser via the chrome://settings pages.

3. How to use it

After installation, you can use our software on most web pages, including http://, https://, and file:// addresses. Our software does not work on pages that start with chrome://, such as our Options Page.

3.1 The Popup Menu

The easiest way to control our software is via the Popup Menu. You can access the menu by clicking on the ‘X’ icon in the Chrome toolbar. The popup menu shows 5 modes. Each mode will be described in detail below. They are:

  • Off
  • Head
  • Mouse
  • Learning
  • Page-defined
Note that you cannot select Page-defined Mode yourself: instead, compatible web pages will activate this mode. However, you can use the Popup Menu to turn our software Off at any time. The Popup Menu also provides quick access to our Options Page and to Help on our website.

3.1.1 Head Mode

This is a good first test that our software can access your camera. Head Mode tracks the position of your head. In the default configuration, it updates a full six-degree-of-freedom (6-DOF) pose 20 times per second, so web pages can access X, Y, Z position and roll, pitch, yaw angles. By default we display your head pose as a pair of ‘+’ crosses. The green cross shows position; the position of the blue cross relative to the green one depicts the orientation of your head. If the crosses don’t appear, or don’t move, there may be a problem with your camera.

Above: Head pose presented as two crosses. The Blue cross indicates Roll, Pitch and Yaw; the Green cross shows X, Y, and Z position.
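If you write a page that consumes the pose stream, it can help to smooth it – at 20 updates per second the raw values jitter a little. Here is a minimal sketch of an exponential-moving-average smoother; the field names (`x`, `y`, `z`, `roll`, `pitch`, `yaw`) and the helper itself are illustrative, not part of the xLabs API:

```javascript
// Exponential moving average over a 6-DOF head pose stream.
// The pose field names below are illustrative only, not the xLabs API.
function makePoseSmoother(alpha) {
  let state = null;
  return function smoothPose(sample) {
    if (state === null) {
      state = { ...sample };            // first sample passes through unchanged
    } else {
      for (const k of ["x", "y", "z", "roll", "pitch", "yaw"]) {
        state[k] = alpha * sample[k] + (1 - alpha) * state[k];
      }
    }
    return { ...state };                // return a copy, keep state private
  };
}
```

A larger `alpha` tracks fast head movements more closely; a smaller one gives a steadier cross at the cost of lag.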

3.1.2 Mouse Mode

Mouse Mode allows you to surf the web without touching your computer! We use head movements to control a red dot on screen that acts like a mouse pointer.

To use Mouse Mode you need to do a simple calibration. When you enable Mouse Mode, a black shadow fills the screen from the edges to the centre. When the shadow reaches the centre, look straight at the hole in the middle. The shadow then disappears and you will see a red circle that you can control with your head.

If you let the red circle rest on a link for some time, the system will “click” the link and take you to that page. If you push the red circle against the left side of the page, a red rectangle will appear and, after a few seconds, you will be taken “Back” to the previous page. Similarly, if you push the red circle against the top or bottom of the page, a blue rectangle will appear and the page will scroll up or down respectively.

Be sure to keep your head upright. If you tilt (roll, for the pilots out there!) your head to the right, a red triangle will appear and Mouse Mode will be temporarily disabled. You can resume Mouse Mode and reset the calibration by tilting your head to the left: a black triangle appears, followed by the black shadow. Look at the centre of the screen while the shadow covers it.

Above: When mouse mode is enabled or reset, you will see a shadow fill the screen from the edges to the centre. Look at the centre of the screen to calibrate the mouse control.


Above: The red circle is now controlled by your head movements. Let the red circle rest on a link for a short time to click and navigate.


Push the red circle against the top or bottom edge of the window to scroll the page. Scrolling down is shown above.


Tilt your head to the right (note: don’t turn your head to the side, tilt it instead) to suspend mouse mode temporarily. The red triangle indicates that the command has been understood.

Tilt your head to the left to reset or re-enable mouse mode. You can do this at any time, e.g. if you forgot to look at the centre during calibration.
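The dwell-to-click behaviour described above can be sketched as a tiny state machine: a “click” fires once the pointer has stayed near one spot for long enough. This is an illustration only, not xLabs’ actual implementation, and the radius and timing parameters are made up:

```javascript
// Dwell-click sketch: report a "click" when the pointer stays within
// `radius` pixels of its anchor point for at least `dwellMs` milliseconds.
// Illustrative only; not the actual xLabs implementation or parameters.
function makeDwellDetector(radius, dwellMs) {
  let anchor = null;
  let anchorTime = null;
  return function update(x, y, timeMs) {
    if (anchor === null || Math.hypot(x - anchor.x, y - anchor.y) > radius) {
      anchor = { x, y };       // pointer moved away: restart the dwell timer
      anchorTime = timeMs;
      return false;
    }
    return timeMs - anchorTime >= dwellMs;   // true means "click now"
  };
}
```

Feeding the detector each pointer sample (position plus timestamp) returns `false` while the pointer is moving or still settling, and `true` once it has rested long enough; a real implementation would also suppress repeat clicks after firing.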

3.1.3 Learning Mode

Learning Mode is a test and demonstration of our gaze-tracking capability. This mode shows you where we think you are looking on screen, as a red circle. The same data can be used by web pages (it doesn’t have to be displayed on screen). In Learning Mode, the system doesn’t need any explicit calibration. You just browse the internet and click around the page. The software watches while you do this, and learns to map your appearance to the points you’re clicking. As long as you look where you’re clicking, eventually we will be able to predict your gaze without a click. Don’t worry if you forget to look a few times; we can usually ignore these errors, and you can always delete your calibration data. So let’s try Learning Mode. Select it from the Popup Menu, and then load a web page. Try clicking around the page. To save time, for now avoid clicking any links – just click near them instead. When you click, a blue circle will appear. This shows we’ve registered the click.

Above: Learning mode clicks are indicated with a blue circle and a red dot. If you press and hold the left mouse button, you can move the red dot with your head. This symbol is a reminder that we are using these clicks to calibrate the system.

After 20-30 seconds a red circle should appear wherever you look. At first, the circle will be large and may not always be in the right place. But click a few more times (especially where you observe it is not accurate) and the system should heal itself, becoming more accurate. The system will re-train itself every 10 seconds if new clicks are provided, so you may have to wait this long for accuracy to improve. You may notice a thin black circle at times; this indicates calibration is happening.

Learning mode: Gaze indicator. The red circle indicates where we think you are looking. Circle size indicates confidence (smaller circles mean higher confidence).

The system tries to generalize between all the clicks you’ve made, so you don’t need to click everywhere; but it’s a good idea to click in all gross areas of the screen – the top, bottom, sides and corners – with a few clicks somewhere in the middle. For better calibration, you can also click and hold the left mouse button while moving your head a little; this teaches the system very rapidly, because the data has more variation. On some websites we also use an interactive calibration process that reliably achieves a good calibration in a fixed period of time, but Learning Mode works on any website and hence is included with the software. Note that gaze tracking learns the shape of your face, so you need to re-calibrate for other users. You can delete old calibration data via the Options Page.
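To picture how a handful of calibration clicks can generalize to places you haven’t clicked, consider a toy predictor that blends nearby samples by inverse-distance weighting. This is emphatically not xLabs’ actual algorithm – the real system learns from your appearance – and the single 1-D “pose feature” is a made-up simplification:

```javascript
// Toy gaze predictor: given (poseFeature, screenX) calibration samples,
// estimate screenX for a new pose feature by inverse-distance weighting.
// A stand-in illustration; the real learned mapping is far more complex.
function predictGazeX(samples, poseFeature) {
  let num = 0, den = 0;
  for (const s of samples) {
    const d = Math.abs(poseFeature - s.pose);
    if (d === 0) return s.x;      // exact match: reuse that click's position
    const w = 1 / (d * d);        // closer samples get more weight
    num += w * s.x;
    den += w;
  }
  return num / den;
}
```

The intuition matches the advice above: with samples spread over the whole screen, every new pose has nearby neighbours to interpolate from, which is why clicking in the corners and edges helps.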

3.1.4 Page-Defined Mode

You can’t select Page-Defined Mode; after all, it is defined by the, err… page! But you can turn it off. This mode is used by xLabs-compatible web pages to provide you with gaze or head tracking features.

3.2 The Options Page

The Options Page can easily be accessed from the Popup Menu. This page contains information about our software and some controls for using it. The Options Page is organised as a set of Tabs. Feel free to have a look around. Some core features are described below.

3.2.1 Version

The installed version of our software can be found in the “About” tab. This can be useful for support.

3.2.2 Delete calibration data

When you change users, you’ll need to delete any existing calibration data. This can be done from the “Controls” tab in the Options Page.

3.2.3 Change camera resolution

You can change the camera resolution from the “Camera” tab. Changes will take effect next time the camera is started. Note that some widescreen format megapixel cameras need to be used at their higher, native resolutions for a good, undistorted image.

3.2.4 Change camera device

We use Chrome’s default camera device. You can change this using Chrome’s settings at:


Unfortunately you can’t change the camera from within our software.

3.2.5 Environment Setup Tips

The “Help” tab shows some tips on how to get a good picture from your camera. Webcams don’t give a clear view of the face in bad lighting, or where there are shadows on the face. We need a clear view of your face to be able to track it well.

At night or indoors, find a place with uniformly bright lighting rather than small spotlights.


During the day, it is best to face towards windows to illuminate the face.


Don’t use directional spotlights that cast shadows over the face.


Make sure the environment is well lit; cameras work at reduced frame-rate or quality in low light conditions.


Don’t sit with the light behind you! The face will be a black silhouette against a white background!

3.2.6 Configuration

The “Config” tab shows the state of our software. This is useful for developers who write software that interacts with our system.

3.3 Other web pages

Other web pages can activate and interact with our software. You can prevent this by turning our software Off using the Popup Menu, or by disabling our software. After you have explored the built-in functionality of our browser extension, you might want to check out some of the playable demo games on our website.

4. Help & More Info

Much more information is available on this website. Some common questions are answered below. Make no mistake, the eye/gaze tracking system is very complex. It tries to work with everyone, regardless of appearance, using any camera, on any computer, anywhere in the world. It’s not simply running the same program again and again – it’s learning and calibrating itself in every new environment it sees. Not easy! So occasionally, things will go wrong.

4.1 Environment & Lighting

The environment, and particularly lighting & illumination, are critical for good results. Cameras react to varying levels of light by automatically adjusting the brightness of the image. In low light, long exposures will produce blurry or noisy images that we can’t use for precision eye/gaze tracking. The camera doesn’t know which bit of the picture we’re interested in – the bit with the face in it. Instead it balances brightness over the whole image. So, if the background is very bright, it will make the face very dark. Eye/gaze tracking needs good and even illumination over the whole face, and especially around the eyes. Shadows can be a problem. Good, bright, diffuse lighting – overhead or daylight – is best. Desk lamps can be a problem, because the light can be quite focused and cause shadows elsewhere. If you are using daylight, it is best for the user to face the window. This means that the face will be well lit, rather than having a bright sky behind the user’s head and shadowed face.

4.2 Does it work with Glasses?

First, contact lenses are fine. Head and face tracking almost always works perfectly with glasses. However, gaze tracking requires an unobstructed view of your eyes. Many glasses frames distort or break up the appearance of your eyes, preventing accurate gaze tracking. However, we’ve seen it work occasionally! We are working on a solution for glasses in general. Heavy eye makeup, dark eye-shadow and exotic cosmetic effects can also be difficult for our software. Hair that covers the eyebrows and/or eyes (e.g. a long fringe) is also a problem. The software is not trained to recognise the faces of very young people (e.g. under 10 years). Likewise, very bushy beards that obscure most of the face may also cause problems. Essentially, we need to be able to see a face to track it :) This software is designed to track a single face – it might get confused if too many faces are in view of the camera.

4.3 I accidentally denied camera permission!

If you forgot to Allow access to your webcam, we won’t be able to use it. Chrome remembers this setting and it’s a bit tricky to undo it. To re-enable camera access, go to:

  1. Click “Show Advanced Settings”.
  2. Click “Content Settings”.
  3. Scroll down to “Media” and click “Manage Exceptions”.
  4. Click to highlight “chrome-extension://xxxx ” and click the ‘x’ to forget this setting.
  5. Re-visit our Options Page, and when prompted, click “Allow”.


Above: Chrome Settings dialog for managing camera settings. We’ve highlighted the options to revoke and grant permissions to our extension, and to change the camera used.

4.4 My camera doesn’t work!

We access the camera through the Chrome browser. If the camera works in Chrome on other websites, it should work for us – can you use the camera on other websites? Note that your camera should be placed in the centre of the screen, e.g. on the top or bottom bezel. Your face must be clearly visible in the camera image. You need to sit approximately 20-100 cm from the camera in most cases, although you can sit further away if your camera has optical zoom. You can test your camera here.

Above: Typical camera placement, showing a built-in webcam on an Apple laptop and an external camera on a desktop monitor.

4.5 Multiple screens

Currently, our software does not understand multiple screens. We can only work with one display.

4.6 Tracking Suspended, or intermittent function

In various modes you may see the “Tracking Suspended” error. This means the software can’t see the corners of your eyes, which we need to track very precisely. This is usually due to poor lighting and/or shadows around the inner eye area.

4.7 Poor accuracy in one part of the screen

If you observe that gaze-tracking is not accurate in just one part of the screen, try adding more calibration clicks in that area and waiting a few seconds (in Learning mode). If accuracy remains poor, the problem is most likely lighting – there may be shadows hiding the eyes in certain poses.

4.8 It’s too slow!

We’re continually making it faster, but at the moment we need a fast, modern computer (e.g. Intel i5 or i7 processors). Also make sure you don’t have too many browser tabs open, or many other programs running.

4.9 Moving too fast

Very rapid movements make the camera give blurred images. When this happens, the system will briefly lose track of the user’s face, and then find it again a fraction of a second later. For best results, try not to move too quickly, especially during calibration clicks. Blurring is also more common in low light, because cameras will choose longer exposures.

4.10 Changing Users

Since the system remembers gaze calibration data from earlier sessions, if you change users you need to go to our Options Page and delete the old calibration data. If you forget to do this, you may find gaze tracking accuracy is poor.

4.11 Calibrated Pose-Space

The system learns to associate particular poses and positions of the head and face with gaze coordinates on screen. It generalizes beyond the observed training data to try to “fill in the gaps” – to predict gaze for poses it has not seen before. With more training, the system gets better at generalization. Imagine the set of poses the system has seen as a bubble or cloud around your head. The more poses you have calibrated, the larger this cloud becomes. Eventually, all poses you find comfortable will be within the calibrated cloud. Changing the angle of a laptop screen can move your head into an uncalibrated position. This is easily fixed by clicking a few times in Learning Mode or in an interactive calibration interface, which adds the new pose to the calibrated “cloud”. Calibration does not transfer between users, so you need to re-calibrate for every user. When changing users, delete the old calibration data to clear the previous user’s model.
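The calibrated “cloud” can be pictured as a nearest-neighbour test: a pose counts as calibrated if it lies close enough to some pose seen during training. A toy sketch follows; the position-only distance and the threshold are simplifications for illustration, since the real model is a learned mapping rather than a literal point cloud:

```javascript
// Toy "calibrated cloud" membership test: a pose is considered calibrated
// if it lies within `threshold` of the nearest calibrated pose.
// Position-only (x, y, z) distance is a simplification for illustration.
function inCalibratedCloud(calibrated, pose, threshold) {
  return calibrated.some(c =>
    Math.hypot(pose.x - c.x, pose.y - c.y, pose.z - c.z) <= threshold
  );
}
```

In this picture, tilting a laptop screen moves your head to a point outside the cloud, and each new calibration click simply adds another point, growing the region where prediction works.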

4.12 Contact us

Please contact us or explore this website for more assistance. We’d love to hear your feedback about our software.

5 Credits

This software was conceived and created by the xLabs team. They are:

  • Alan Zhang
  • Joe Hanna
  • Dave Rawlinson
  • Steve Roberts

5.1 Open-Source software

This system wouldn’t have been possible without many open-source and free software contributions. We have used many libraries under BSD-like licences that generously allow commercial, closed-source use. Unfortunately, since we have a secret algorithm to protect, we cannot make our software open-source at this time. Our software was developed on computers running Linux operating systems. We used OpenCV for basic image-processing tasks, the STASM library for face fitting, and the Boost C++ libraries to provide all the bells and whistles of a fully-featured software application. Thanks to everyone who contributes to open-source software.