OpenZen 1.0 Release

Going Full Circle for Sensor Data Streaming with OpenZen

Since the founding of LP-Research, it has been important to us not only to provide excellent hardware to our customers, but also to offer software components that ease the adoption and use of our products. Over the years, we have provided various libraries to support customers using our sensor hardware on a diverse set of platforms.

As our range of sensor offerings grew, we realized that we needed to consolidate our software library stack while still supporting multiple platforms. We wanted to use this opportunity to create a more modular system for working with sensors that contain various measurement components.

Figure 1 – OpenZen Unity plugin connected to a LPMS-CU2 sensor and live visualization of sensor orientation.

Based on these requirements, we developed OpenZen, our take on a high-performance sensor data streaming and processing library. It combines the experience we gained during more than five years of sensor data processing with modern software techniques. The core of OpenZen is written in modern C++14. We host the source code in an open-source repository, so anyone can freely access, learn from and contribute to the code base.

Core Concept

One basic principle of OpenZen is to abstract the measurement components provided by a sensor from the transport layer of the communication. This way, once users are familiar with the OpenZen API, they can work with a wide range of sensor types over various connection layers. To achieve the lowest latency and the highest sensor data throughput, we designed OpenZen to be fully event-based, without any polling loops that could introduce delays.

Sensor Types and Connectivity

With release 1.0, OpenZen provides a sensor interface for the measurements of inertial measurement units (IMUs) and the output of global navigation satellite systems (GNSS). For example, our new LPMS-IG1P sensor is a combined IMU and GNSS unit; both units can be read out via OpenZen.

A list of supported sensors is here.

OpenZen supports sensor connections via various interfaces such as USB, serial port, CAN bus and Bluetooth. Furthermore, measurement data from sensors can also be streamed over a network and received on a second system by another OpenZen instance.

A list of supported transport layers is here.

Operating Systems and Programming Languages

Currently, OpenZen can be compiled and used on Windows, Linux and macOS systems. We are working on ports to more platforms, for example Android. Thanks to its modular design, the OpenZen API can be accessed from many programming languages. At this time, we support the C, C++ and C# programming languages, and we provide a ready-to-go Unity plugin.


AR HMD for In-Car Applications – LPVIZ (Part 1)

What is In-Vehicle AR

This article describes our first steps in the development of an AR HMD for in-car, aerospace and naval applications.

Over several years we have developed our LPVR middleware. In the first version the purpose of this middleware was to enable location-based VR with a combination of optical and IMU-based headset tracking. Building on this foundation we extended the system to work as a tracking solution for transportation platforms such as cars, ships or airplanes (Figure 1).

In contrast to stationary applications, where an IMU is sufficient to track the rotations of an HMD, the in-vehicle use case requires an additional IMU fixed to the vehicle, and the information from this sensor needs to become part of the sensor fusion. We realized this with our LPVR-DUO tracking system.

Applying this middleware to existing augmented reality headsets on the market turned out to be challenging. Most AR HMDs use their own proprietary tracking technology that is only suitable for stationary use-cases, but doesn’t work in moving vehicles. Accessing such a tracking pipeline in order to extend it with our sensor fusion is usually not possible.

Illustration of In-car VR Installation

Figure 1 – Principle of in-car AR/VR as implemented with LPVR-DUO

Applications

There are a large number of applications for in-car augmented reality ranging from B2B use-cases for design and development to consumer-facing scenarios. A few are listed in the illustration below (Figure 2).

AR applications in a car

Figure 2 – In-car AR use cases range from a simple virtual dashboard to interactive e-commerce applications. The “camera pass-through” enables the driver to virtually look through the car to see objects otherwise occluded by the car chassis.

HMD Specifications

For this reason, we decided to start the development of LPVIZ, an AR HMD dedicated to in-vehicle applications. This AR HMD for in-car, aerospace and naval applications is designed to reflect the requirements of our customers as closely as possible:

  • Strong optical engine with good FOV (LUMUS waveguides), unobstructed lateral vision (safety), low persistence and high refresh rate
  • System satisfies all requirements for immersive AR head tracking (pose prediction, head motion model, late latching, asynchronous timewarp etc.)
  • HMD is tethered to the computing unit in the vehicle by a thin VirtualLink cable
  • Computing unit is compact, but powerful enough to run SteamVR and thus supports a large range of software applications
  • Options to use either outside-in or inside-out optical tracking inside the vehicle, as well as LeapMotion hand tracking

In-Car HMD Hardware Prototype Development

We have recently created the first prototype of LPVIZ. While hardware development is still at a very early stage, the prototype is sufficient to demonstrate our core functionality and use case well.

Thomas wearing LPVIZ

Figure 3 – Tracking of LPVIZ works based on our LPVR-DUO technology making use of ART outside-in tracking and our LPMS-CURS2 IMU module. This image shows Dr. Thomas Hauth performing an optical-see-through (OST) calibration.

Figure 4 – The LPVIZ prototype is powered by a LUMUS optical engine. This waveguide-based technology has excellent optical characteristics, perfectly suitable for our use-case.

Work in Progress

As you can see from the prototype images, our hardware system is still very much at an alpha stage. Nevertheless, we think it shows the capabilities of our technology very well and points in the right direction. The next hardware version, which will already be close to a release model, will reduce the size of the device by applying the points below:

  • Use active marker LEDs instead of large passive marker balls OR inside-out tracking
  • Collect all electronics components on one compact electronics board, with only one VirtualLink connector
  • Create a compact housing, with a glasses-like fixture instead of a VR-style ring mount (Figure 5)

Figure 5 – First draft of a CAD design for the housing of the LPVIZ release version

Collaboration with Varjo

Varjo VR-2

Varjo High-Resolution HMDs

Headsets by the Finnish start-up Varjo have recently had a profound impact on the market for business-to-business virtual reality devices. Varjo’s advanced display technology allows viewing immersive environments at uniquely high resolution, making the company’s HMDs a great choice for professional design and industrial applications.

We have been working with Varjo for a few months in order to adapt our LPVR driver to work with their headsets. As a result we have recently released a first version of the driver and are ready to deploy it to customers.

LPVR Tracking Technology

Valve Lighthouse is the default tracking technology built into Varjo headsets. While very suitable for games and single-user applications, this system is limited in its tracking volume and accuracy (mainly reproducibility). To allow multi-user, large-space applications (location-based VR), an alternative tracking system is needed.

With LPVR-CAD for Varjo, we enable the combination of Varjo headsets with our tracking technology, based on marker-based inside-out tracking, feature-based inside-out tracking or outside-in tracking such as Advanced Realtime Tracking (ART).

Besides static tracking solutions we also offer support for our LPVR-DUO in-car tracking system.

Varjo marker holder top view

Figure 1 – For LPVR outside-in-based tracking, we offer a customized marker holder for Varjo HMDs.

Varjo marker holder detail view

Figure 2 – The marker holder fits all currently available HMDs: VR-1, VR-2 (Pro) and XR-1

Big in Korea

Location-based Virtual Reality for Automotive Design

Figure 1 – Using LPVR-CAD large room-scale tracking, 3D design content is visualized on VIVE Pro HMDs

In cooperation with the Korean automotive design solutions provider AP-Solutions, we created a large location-based virtual reality installation at the Hyundai research and development center close to Seoul, Korea. The system is used to showcase, amend and modify prototype and production-ready automobile designs (Figure 1).

LPVR Large Room Scale Tracking Engine

Figure 2 – Each VIVE Pro HMD is equipped with optical tracking markers and an LPMS-CU2 IMU. The IMUs are covered with black tape to avoid reflections of infrared light.

The system uses optical tracking together with LP-Research’s LPVR solution to track up to 20 users wearing VIVE Pro head-mounted displays (HMDs). Each user carries a VIVE hand controller, for a total of 40 tracked objects in a space of close to 400 sqm.

Responsiveness is achieved by using LPVR (Figure 2) to combine LPMS IMU data with the optical tracking data for optimum performance. The optical system uses 36 infrared cameras to track the 160 markers attached to the HMDs and hand controllers. The position and orientation data of each user’s HMD are combined using LP-Research’s sensor fusion algorithm.

The content of the virtual space is rendered using a CAD software package running on backpack PCs worn by each of the 20 users. The PCs communicate and coordinate via a central server.

Korean News Coverage

Images courtesy of Hyundai Motor Group Newsroom.

AVGVST Guest Post: Refining Human Motion

This is a guest post by the AVGVST creative agency. AVGVST are our good neighbours here in Nishiazabu, Tokyo, so we thought it would be a good idea to ask them to create a few good-looking blog posts for us.

Human Motion Capture

Human motion capture is a term commonly known from the world of movie production: Gollum in The Lord of the Rings lurking and smiling at his shiny ring in a weirdly human-like manner or the beautifully alien creatures of Avatar floating through a fantastic landscape.

Although transferring human body movements to a movie character is an established method, it might surprise some that human motion capture has a range of applications in areas beyond the world of film production.

Motion capture can improve human life by boosting a person’s work efficiency, supporting injury recovery and helping to prevent excessive strain on the human body under rough working conditions. The medical and manufacturing industries are just two of many fields where motion capture helps to optimize human movements.

IMU-Based Technology for Refining Human Motion

One of the main goals of LP-RESEARCH in applying its advanced sensor technology is to provide means for quantitatively refining human motion, making it faster, safer and more efficient.

LP-RESEARCH’s chief scientist Tobias Schlüter is writing software that uses motion sensor data to measure the movements of a person. Small sensors attached to the subject’s limbs track body motion and based on the acquired information, adjustments can be made to the subject’s movements.

This can result in improved speed, safety and efficiency for a specific activity.

Motion capture AVGVST illustration

Worker Safety & Well-Being First in Industrial Production

Using this technology a patient trying to recover from a severe injury might find a faster way back to normal life. A runner working to improve his running style might gather useful information to optimize his training strategy.

A central topic in industrial manufacturing is the improvement of production efficiency: human workers performing repetitive tasks face fatigue and physical conditions such as back pain. Human motion capture and the corresponding analysis methods help to correct sub-optimal movements, so that the worker fatigues less, stays healthy and at the same time becomes more efficient.

With applications in sports, medical treatment, industrial production and more: Sensor technology from Tokyo – welcome to LP-RESEARCH.

To find out more about how this technology works, please contact us.
