About Klaus Petersen

I like to create magical things, especially projects related to new technologies like augmented and virtual reality, mobile robotics and MEMS-based sensor networks. I code in C(++) and Python, trying to keep up with my very talented colleagues :-)

LPVR Middleware: a Full Solution for AR / VR

Introducing LPVR Middleware

Building on the technology we developed for our IMU sensors and large-scale VR tracking systems, we have created a full motion tracking and rendering pipeline for virtual reality (VR) and augmented reality (AR) applications.

The LPVR middleware is a full solution for AR / VR that enables headset manufacturers to easily create a state-of-the-art visualization pipeline customized to their product. Specifically, our solution offers the following features:

    • Flexible zero-latency tracking adaptable to any combination of IMU and optical tracking
    • Rendering pipeline with motion prediction, late latching and asynchronous timewarp functionality (a short prediction sketch follows below)
    • Calibration algorithms for optical parameters (lens distortion, optical see-through calibration)
    • Full integration in commonly used driver frameworks like OpenVR and OpenXR
    • Specific algorithms and tools to enable VR / AR in vehicles (car, plane etc.) or motion simulators
Overview of LPVR Middleware Functionality
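
To give a rough idea of the motion prediction step, here is a minimal sketch in Python (illustrative only, with made-up function and variable names; the actual LPVR pipeline is considerably more sophisticated). It forward-integrates the latest gyroscope reading over the expected render latency to estimate the head orientation at display time:

import numpy as np

def predict_orientation(q, omega, dt):
    # Predict the orientation quaternion q = (w, x, y, z) forward by dt
    # seconds, assuming constant angular velocity omega (rad/s, body frame).
    rate = np.linalg.norm(omega)
    if rate < 1e-9:
        return q
    angle = rate * dt
    axis = omega / rate
    # Incremental rotation expressed as a quaternion
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    # Hamilton product q * dq applies the body-frame rotation
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = dq
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1])

# Example: predict 20 ms ahead to compensate for render latency
q_now = np.array([1.0, 0.0, 0.0, 0.0])   # current head orientation
omega = np.array([0.0, 1.0, 0.0])        # latest gyroscope reading, rad/s
q_pred = predict_orientation(q_now, omega, 0.020)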

Application of LPVR Middleware to In-Car VR / AR

The tracking backend of the LPVR middleware solution for VR and AR is especially advanced in that it allows the flexible combination of multiple optical systems and inertial measurement units (IMUs) for combined position and orientation tracking. Specifically, it enables de-coupling the head motion of a user from the motion of a vehicle the user might be riding in, such as a car or airplane.

As shown in the illustration below, the interior of a vehicle can in this way be displayed as static relative to the user, while the scenery surrounding the vehicle moves with the vehicle's motion.

Illustration of In-car VR Installation
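
The core idea behind this de-coupling can be sketched in a few lines of Python. Assume for simplicity that both the vehicle and the head are tracked as 4x4 homogeneous transforms in a common world frame (the actual LPVR fusion combines IMU and optical data at the filter level, not as finished poses):

import numpy as np

def head_in_cabin(T_world_vehicle, T_world_head):
    # Head pose expressed in the vehicle (cabin) frame:
    # T_vehicle_head = inv(T_world_vehicle) * T_world_head
    return np.linalg.inv(T_world_vehicle) @ T_world_head

# Rendering the cabin with T_vehicle_head keeps the interior static
# relative to the user, while the outside scenery is rendered with the
# full world-frame head pose and therefore moves with vehicle motion.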

For any augmented or virtual reality application in a moving vehicle, this functionality is essential to providing an immersive experience to the user. LP-Research is the industry leader in providing customized sensor fusion solutions for augmented and virtual reality.

If you are interested in this solution, please contact us to start discussing your application case.

LPMS Operator’s Manual Update

It’s been a long time, but we have finally updated our reference manual to cover the latest generation of sensors.

The manual is accessible through our documentation & support page or directly from here.

Below is a list of the most important updates, some of which are fixes that customers have been requesting for quite a while:

  • Removed hardware-specific parts. These are now covered in the quick start manuals.
  • Corrected scaling factors for all non-floating-point data transmission modes.
  • Corrected an error in the description of the reset modes.
  • Moved the to-be-deprecated LpSensor detail description to the appendix.
  • Added a list of APIs for direct sensor programming. OpenZen is set to replace LpSensor.

Machine Learning for Context Analysis

Deterministic Analysis vs. Machine Learning

Machine learning and artificial intelligence (AI) are important methods that allow machines to classify information about their environment. Today’s smart devices integrate an array of sensors that constantly measure and save data. At first thought, one would imagine that the more data is available, the easier it is to draw conclusions from this information. In fact, however, larger amounts of data become harder to analyze using deterministic methods (e.g. thresholding). While such methods can work efficiently by themselves, it is difficult to decide which analysis parameters to apply to which parts of the data.

Machine learning techniques, on the other hand, can greatly simplify this procedure of finding the right parameters. By teaching an algorithm which data corresponds to a certain outcome, using training and verification data, analysis parameters can be determined automatically or at least semi-automatically. A wide range of machine learning algorithms exists, including the currently very popular convolutional neural networks. A small contrast example is sketched below.
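
As a toy illustration of this contrast (assuming scikit-learn in Python; the feature and the threshold value are made up), compare a hand-tuned rule with a classifier that learns its own decision boundary from labeled examples:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Deterministic approach: a hand-picked threshold on a single feature
def classify_threshold(accel_std):
    return "walking" if accel_std > 0.5 else "rest"   # 0.5 tuned by hand

# Machine learning approach: the boundary is learned from labeled examples
X_train = np.array([[0.05], [0.08], [0.90], [1.20]])  # feature: accel std. dev.
y_train = ["rest", "rest", "walking", "walking"]      # labels from training data
clf = DecisionTreeClassifier().fit(X_train, y_train)
print(clf.predict([[0.70]]))                          # -> ['walking']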

Context analysis setup overview

Context Analysis

Many health care applications rely on the correct classification of a user’s daily activities, as these strongly reflect the user’s lifestyle and possible associated health risks. One way of detecting human activity is to monitor body motion using motion sensors such as gyroscopes and accelerometers. In the application described here, we monitor a person’s mode of transportation, specifically:

  1. Rest
  2. Walking
  3. Running
  4. In car
  5. On train

To compare the results of the deterministic analysis and machine learning approaches, we first implemented a state machine based on deterministic analysis parameters.

Deterministic approach overview

The result is a relatively complicated state machine that needs to be tuned very carefully. This might have been due to our lack of patience, but despite our best efforts we were not able to reach detection accuracies of more than around 60%. Before spending a lot more time on manual tuning of this algorithm, we switched to a machine learning approach.

Machine learning approach overview

The eventual system structure looks noticeably simpler than the deterministic state machine. Besides standard feature extraction, a central part of the algorithm is the data logging and training module. We sampled over 1 million training samples to generate the parameters for our detection network. As a result, even though we used a relatively simple machine learning algorithm, we were able to reach a detection accuracy of more than 90%. A comparison between ground truth data and classification results from raw data is displayed below.

Context analysis algorithm result
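
For readers who want a concrete picture, a pipeline of this kind might look roughly like the Python sketch below. The windowed features and the scikit-learn RandomForestClassifier are illustrative assumptions, not the actual features or detection network used in our implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(accel, window=256):
    # Per-window statistics over the accelerometer magnitude signal:
    # mean, standard deviation, min, max and signal energy.
    mag = np.linalg.norm(accel, axis=1)   # |a| per sample, shape (N,)
    feats = []
    for i in range(len(mag) // window):
        w = mag[i * window:(i + 1) * window]
        feats.append([w.mean(), w.std(), w.min(), w.max(), np.sum(w ** 2)])
    return np.array(feats)

# accel_train: (N, 3) raw samples; labels: one transportation mode per window
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(extract_features(accel_train), labels)
# modes = clf.predict(extract_features(accel_live))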

Conclusion

We strongly believe in the use of machine learning / AI techniques for sensor data classification. In combination with LP-RESEARCH sensor fusion algorithms, these methods add a further layer of insight for our data analysis customers.

If this topic sounds familiar to you and you are looking for a solution to a related problem, contact us for further discussion.

IMUcore Sensor Fusion

Introducing IMUcore

IMUcore is the central algorithm running inside all LP-RESEARCH IMUs. It collects orientation data from several sources and combines it into fast and drift-free tilt and direction information. To work with any type of MEMS sensor, various online and offline calibration methods have been implemented to guarantee high-quality data output. The algorithm is versatile and resource-efficient; it can be implemented on embedded MCUs with minimal power consumption.
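
As a simple example of one such calibration step, the gyroscope bias can be estimated while the sensor is known to be at rest. This is only an illustrative Python sketch, not IMUcore’s actual calibration routine:

import numpy as np

def estimate_gyro_bias(stationary_gyro):
    # Average gyroscope readings captured while the sensor is at rest;
    # the mean of the samples estimates the constant bias.
    return np.mean(stationary_gyro, axis=0)

# bias = estimate_gyro_bias(recorded_samples)  # (N, 3) readings at rest
# corrected = live_sample - bias               # apply to each new reading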

IMUcore is now available as a solution from LP-RESEARCH. Please contact us here for more information or a price quotation.

Overview of embedded sensor fusion in LPMS devices

Sensor Fusion Filter Overview

IMUcore uses gyroscope data as the basis for calculating orientation. Errors introduced by measurement noise are corrected using accelerometer and compass data. Optionally, the sensor fusion can be extended with an optical (or other) tracking system to additionally provide position information.
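
The principle can be illustrated with a minimal tilt-only complementary filter in Python: the gyroscope is integrated for short-term accuracy, and the estimate is continuously nudged toward the gravity direction measured by the accelerometer to remove long-term drift. This is a textbook simplification, not the IMUcore algorithm itself:

import numpy as np

def complementary_update(pitch, gyro_rate, accel, dt, alpha=0.98):
    # One update step of a tilt-only complementary filter.
    #   pitch:     current pitch estimate (rad)
    #   gyro_rate: angular rate around the pitch axis (rad/s)
    #   accel:     (ax, ay, az) accelerometer reading
    #   alpha:     weight of the gyro path; (1 - alpha) pulls the estimate
    #              toward the gravity direction seen by the accelerometer
    ax, ay, az = accel
    pitch_gyro = pitch + gyro_rate * dt                # short term: integrate gyro
    pitch_accel = np.arctan2(-ax, np.hypot(ay, az))    # long term: gravity reference
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel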

All aspects of the IMUcore algorithm in one image

If this topic sounds familiar to you and you are looking for a solution to a related problem, contact us for further discussion.

iOS Support for LPMS-B2

Besides Bluetooth Classic, LPMS-B2 also supports Bluetooth 4 / Bluetooth Low Energy. This allows us to connect the sensor to Apple mobile devices such as the iPad, iPhone or the Apple Watch. We have recently created a library that enables development of applications supporting LPMS-B2 on these devices.

The library can be accessed via our open source repository.

The repository contains a skeleton application that shows usage of the most basic parts of the library. The library itself is contained in the following files:

LpmsB2.m
LpmsB2.h
LpmsBData.m
LpmsBData.h

A sensor object is initialized and connected using the following code:

#import "LpmsB2.h"
#import "LpmsBData.h"
// ...
// CoreBluetooth objects, obtained elsewhere (e.g. after scanning for the sensor)
CBPeripheral *peripheral;
CBCentralManager *centralManager;
// ...
// Create the sensor object and connect to the discovered peripheral
LpmsB2 *myLpmsB2 = [[LpmsB2 alloc] init];
[myLpmsB2 connect:centralManager Address:peripheral];

More coming soon..
