Machine Learning for Context Analysis

Deterministic Analysis vs. Machine Learning for Context Analysis

Machine learning for context analysis and artificial intelligence (AI) are important methods that allow computers to classify information about their environment. Today’s smart devices integrate an array of sensors that constantly measure and save data. At first thought one would imagine that the more data is available, the easier it is to draw conclusions from this information. In fact, however, larger amounts of data become harder to analyze using deterministic methods (e.g. thresholding). While such methods by themselves can work efficiently, it is difficult to decide which analysis parameters to apply to which parts of the data.

Using machine learning techniques, on the other hand, this procedure of finding the right parameters can be greatly simplified. By teaching an algorithm which input corresponds to which outcome using training and verification data, analysis parameters can be determined automatically or at least semi-automatically. A wide range of machine learning algorithms exists, including the currently very popular convolutional neural networks.
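To make the idea concrete, here is a minimal sketch of the training-then-classification principle, using a simple nearest-centroid classifier on hand-made feature values. The features and numbers are invented for illustration; this is not our production pipeline.

```python
# Minimal nearest-centroid classifier sketch: learn one centroid per
# activity class from labeled feature vectors, then classify a new
# sample by the closest centroid. All feature values are hypothetical.
from math import dist

def train(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Toy training data: (mean acceleration magnitude, acceleration variance)
training = [
    ([1.0, 0.01], "rest"), ([1.1, 0.02], "rest"),
    ([1.2, 0.40], "walking"), ([1.3, 0.45], "walking"),
    ([1.6, 1.20], "running"), ([1.7, 1.35], "running"),
]
model = train(training)
print(classify(model, [1.05, 0.015]))  # -> "rest"
```

The key point is that the decision boundaries are derived from the labeled data itself, rather than being chosen by hand.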

Context analysis setup overview

Figure 1 – Overview of the complete analysis system with its various data sources

Context Analysis

Many health care applications rely on the correct classification of a user’s daily activities, as these strongly reflect his or her lifestyle and possible associated health risks. One way of detecting human activity is to monitor body motion using motion sensors such as our LPMS inertial measurement unit series. In the application described here we monitor a person’s mode of transportation, specifically:

  1. Rest
  2. Walking
  3. Running
  4. In car
  5. On train

To compare the deterministic and machine learning approaches to context analysis, we first implemented a state machine based on deterministic analysis parameters. An overview of the components of this system is shown in Figure 1.

Deterministic approach overview

Figure 2 – Deterministic approach

The result (Figure 2) is a relatively complicated state machine that needs to be tuned very carefully. This might have been due to our lack of patience, but in spite of our best efforts we were not able to reach detection accuracies of more than around 60%. Before spending a lot more time on manually tuning this algorithm, we switched to a machine learning approach.
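The following fragment gives a flavor of what such hand-tuned deterministic rules look like. It is a deliberately simplified sketch, not our actual state machine; the thresholds are invented, and the availability of a speed estimate is an assumption made for the example.

```python
# Highly simplified threshold-based classifier sketch (illustrative only).
# Every threshold below is invented; a real state machine of this kind
# needs many such parameters, each tuned by hand.
def classify_window(accel_var, speed_mps):
    """Classify one data window from acceleration variance and a speed estimate."""
    if speed_mps > 20.0:                      # fast, sustained motion
        return "on train" if accel_var < 0.1 else "in car"
    if speed_mps > 2.5:
        return "running"
    if accel_var > 0.2:                       # slow but noticeably shaky
        return "walking"
    return "rest"

print(classify_window(0.05, 0.0))   # -> "rest"
print(classify_window(0.3, 1.2))    # -> "walking"
```

Even this toy version shows the core problem: each boundary between classes is a magic number that has to be found and re-tuned manually.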

Machine learning approach overview

Figure 3 – Machine learning approach

The eventual system structure shown in Figure 3 looks noticeably simpler than the deterministic state machine. Besides standard feature extraction, a central part of the algorithm is the data logging and training module. We collected over 1 million training samples to generate the parameters for our detection network. As a result, even though we used a relatively simple machine learning algorithm, we were able to reach a detection accuracy of more than 90%. A comparison between ground truth data and classification results from raw data is displayed in Figure 4.
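As an illustration of the feature extraction step mentioned above, the sketch below splits an accelerometer stream into fixed-size windows and computes simple statistics per window. The window length and the particular features are illustrative choices, not our exact pipeline.

```python
# Windowed feature extraction sketch: per window, compute mean and
# variance of the acceleration magnitude. Window size is illustrative.
from statistics import mean, variance

def magnitude(sample):
    x, y, z = sample
    return (x * x + y * y + z * z) ** 0.5

def extract_features(samples, window=4):
    """samples: list of (x, y, z) tuples. Returns one feature tuple per window."""
    features = []
    for start in range(0, len(samples) - window + 1, window):
        mags = [magnitude(s) for s in samples[start:start + window]]
        features.append((mean(mags), variance(mags)))
    return features

stream = [(0.0, 0.0, 1.0)] * 4 + [(0.1, 0.0, 1.2), (0.0, 0.2, 0.9),
                                  (0.3, 0.1, 1.1), (0.0, 0.0, 1.0)]
print(extract_features(stream))
```

Feature vectors like these, paired with ground-truth labels, form the training samples fed to the learning algorithm.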

Context analysis algorithm result

Figure 4 – Result graphs comparing ground truth and analysis output for ~1M data points

Conclusion

We strongly believe in the use of machine learning / AI techniques for sensor data classification. In combination with LP-RESEARCH sensor fusion algorithms, these methods add a further layer of insight for our data analysis customers.

If this topic sounds familiar to you and you are looking for a solution to a related problem, contact us for further discussion.

Siemens In The House

From left the LP-RESEARCH team with Siemens: Klaus Petersen (Co-founder & CEO), Helmut Wenisch (Head of Corporate Technology at Siemens K. K.), Alok Kumar Dubey (Siemens K. K.), Tobias Schlueter (Head of Research), and Lin Zhuohua (Co-founder & CFO)

We mentioned before that our new AR development platform is in the making. It has not only generated quite a lot of attention at the recent Slush and Tech in Asia startup fairs; we also often have people interested in what we do visit our office for a demo. Last week, for example, Helmut Wenisch, Head of Corporate Technology at Siemens K.K., and Alok Kumar Dubey of Siemens K.K. visited us to experience our prototype first-hand. Thank you so much for coming by and for the inspiring conversation!

Virtual Tape Measure with Google’s Project Soli

The folks at Google ATAP were kind enough to let us participate in the Project Soli alpha developer program. Please have a look at their website for more information about the project. Project Soli is a chip-sized miniature millimeter-wave radar, supported by a sophisticated DSP pipeline developed by Google. Based on this signal processing, it is possible to analyze and evaluate finger gestures in the vicinity of the sensor. This allows for new ways of human-device interaction.

We have spent some time with the developer kit and built an application called Virtual Tape Measure. The purpose of this demo application is to replace a physical tape measure when, for example, checking the dimensions of a table while shopping for furniture. This is a fairly simple application of the Soli technology; we are currently looking into further, more complex use cases. Please see the diagram below describing the basic functionality of the system.

Sensor Fusion for Virtual Reality Headset Tracking

In order to test the functionality of our sensor fusion algorithm for head-mounted-display pose estimation, we connected one of our IMUs (LPMS-CURS2), a Nexonar infrared (IR) beacon and an LCD display to a Baofeng headset. The high stability of the IR tracking and the orientation information from the IMU as input to the sensor fusion algorithm result in accurate, robust and reactive head tracking. See the figure below for details of the test setup. The video shows the resulting performance of the system.
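To give a rough feel for the principle (this is a toy illustration, not our actual fusion algorithm), an absolute, drift-free optical position can be blended with a drifting inertial estimate, with the weighting chosen to favor the more trustworthy source:

```python
# Toy complementary-style blend (illustrative only): the drift-free
# IR position continually corrects a drifting inertial estimate.
def fuse(ir_positions, imu_positions, k=0.9):
    """k close to 1 trusts the drift-free IR reference more."""
    fused = []
    for ir, imu in zip(ir_positions, imu_positions):
        fused.append(k * ir + (1.0 - k) * imu)
    return fused

ir = [1.00, 1.00, 1.00]    # stable optical reference (meters), lower rate
imu = [1.00, 1.05, 1.12]   # inertial estimate drifting upward
print(fuse(ir, imu))
```

In a real system the inertial side contributes fast, low-latency updates between optical measurements, while the optical side keeps long-term drift in check.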

Position Tracking based on Linear Acceleration Measurements Only

Position tracking based on pure linear acceleration measurements is a difficult problem. To obtain actual position values, linear acceleration (i.e. data from an accelerometer minus gravity) needs to be integrated twice. Even a minimal bias on the data of one of the tracked axes causes the resulting position values to drift off rapidly.
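The following short numerical experiment shows how severe this effect is. A constant acceleration bias of just 0.01 m/s² on a sensor at rest, integrated twice, produces a position error that grows quadratically with time (the sample rate and bias value are chosen for illustration):

```python
# Double integration of a tiny constant acceleration bias (0.01 m/s^2)
# on a sensor that is actually at rest. The position error grows
# quadratically with time.
def integrate_position(accel_samples, dt):
    velocity = position = 0.0
    for a in accel_samples:
        velocity += a * dt          # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position
    return position

bias = 0.01                         # m/s^2 residual after gravity removal
for seconds in (10, 60):
    n = seconds * 100               # 100 Hz sampling
    drift = integrate_position([bias] * n, dt=0.01)
    print(f"{seconds:3d} s -> {drift:.2f} m of position drift")
```

After only a minute, the supposedly stationary sensor has "moved" by many meters, which is why unaided double integration is unusable over longer time spans.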

Although it is well possible to improve such positioning information by sensor fusion with external reference signals (an optical system, barometric pressure etc.), in many cases a direct forward calculation of position from linear acceleration is required.

Lately we have been working on gradually improving the accuracy of our linear acceleration measurements and have tried to tune these measurements with various filters in order to obtain relatively reliable displacement information.
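One common mitigation of this kind, sketched below, is to suppress the slowly varying bias with a first-order high-pass filter before integrating. This is a generic textbook filter shown for illustration, not a description of the specific filters we use; the coefficient is an invented tuning value.

```python
# First-order high-pass filter sketch: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
# A constant bias in the input decays toward zero after an initial transient,
# while fast changes pass through. alpha is an illustrative tuning parameter.
def high_pass(samples, alpha=0.98):
    filtered, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        filtered.append(y)
        prev_x, prev_y = x, y
    return filtered

# Feed a pure constant bias through the filter: the output shrinks
# toward zero instead of accumulating into position drift.
biased = [0.01] * 200
print(high_pass(biased)[-1])
```

The trade-off is that a high-pass stage also attenuates genuine slow motion, so the filter parameters must be matched to the dynamics one wants to track.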

The video below shows an example of displacement tracking on the vertical axis using an LPMS-B device. Apart from the sensor’s gyroscope, accelerometer and magnetometer, no external references have been used.
