LPNAV-VAC – Cost-Efficient Navigation System for AGVs

Introduction

We’re proud to announce a breakthrough result in the development of our LPNAV low-cost navigation system for small-sized automatic guided vehicles (AGVs).

One focus area of LPNAV is vacuum cleaning robots, which require spatial understanding of their environment to calculate an optimal cleaning strategy. As vacuum cleaning robots are mainly consumer devices, solutions for this market need to be cost-efficient while maintaining state-of-the-art performance.

Figure 1 – The LPNAV-VAC development kit contains a robot platform, a dedicated computing unit, an IMU sensor and a camera

Development Platform

LPNAV-VAC combines three different data sources to calculate a robot’s position inside a room: an inertial measurement unit, data from the robot’s wheel encoders and video images from a camera installed on the robot (Figure 1). A central computing unit combines the information from these data sources to simultaneously create a map of the robot’s surroundings and calculate its position inside the room.

It is essential that the sensor fusion algorithm is able to dynamically update the map it is constructing. As new sensor information arrives, the map is continuously adapted to reflect an optimized view of the robot’s environment.

While this principle of simultaneous localization and mapping (SLAM) is an established method for some robot navigation systems, these solutions tend to rely on laser scanners (LIDAR) or vision-only reconstruction. The combination of all available data sources in the robot allows LPNAV-VAC to create high definition maps of the environment while using low-cost, off-the-shelf components.
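To illustrate how the different data sources can complement each other, below is a minimal sketch of the kind of dead-reckoning prediction step a SLAM-style sensor fusion could use as its motion model. The simple unicycle model and all names are illustrative assumptions, not the actual LPNAV-VAC implementation.

```python
import math

# Minimal sketch of a dead-reckoning prediction step combining
# wheel-encoder distance with the IMU yaw rate. Illustrative only.

class PoseEstimate:
    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x = x          # position in meters
        self.y = y          # position in meters
        self.theta = theta  # heading in radians

def predict_pose(pose, wheel_distance, imu_yaw_rate, dt):
    """Propagate the pose using wheel odometry and the IMU gyroscope.

    wheel_distance: distance traveled since the last update (m),
                    derived from the wheel encoders.
    imu_yaw_rate:   gyroscope yaw rate (rad/s) from the IMU.
    dt:             time step (s).
    """
    pose.theta += imu_yaw_rate * dt
    pose.x += wheel_distance * math.cos(pose.theta)
    pose.y += wheel_distance * math.sin(pose.theta)
    return pose
```

In a full SLAM pipeline this prediction would then be corrected by matching camera features against the map that is being built, which is where the low-cost camera contributes most of the accuracy.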

First Demonstration

In the demonstration video above, my colleague and main developer of LPNAV-VAC is steering our AGV platform through the ground floor of our Tokyo office. While one side of the screen shows the view from the robot camera and the detected visual features, the other side shows the path of the robot through the environment. As the robot progresses through the room, a 3D map is created and continuously updated.

Please note that the robot doesn’t lose tracking during turns, while driving over small steps in the room, or under changing ambient lighting. Thomas moving around in front of the camera also doesn’t disturb the LPNAV algorithm.

Using this map and the robot’s position information, a path planning algorithm can find an optimal path for the robot to efficiently clean the room.
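As a rough illustration of what such a planner does, here is a minimal sketch of a grid-based coverage path (a simple back-and-forth sweep). The occupancy grid format and the zig-zag strategy are assumptions for illustration; LPNAV-VAC’s actual planner is not shown here.

```python
# Minimal sketch of a boustrophedon (zig-zag) coverage path on an
# occupancy grid. Illustrative only.

def coverage_path(occupancy_grid):
    """Return free cells visited row by row in alternating direction.

    occupancy_grid: 2D list where 0 marks a free cell and 1 an obstacle.
    """
    path = []
    for row_idx, row in enumerate(occupancy_grid):
        cols = range(len(row)) if row_idx % 2 == 0 else reversed(range(len(row)))
        for col_idx in cols:
            if row[col_idx] == 0:      # only visit free space
                path.append((row_idx, col_idx))
    return path

# Example: a small room with one obstacle cell in the middle
grid = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(coverage_path(grid))
```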

See-through Display First Look – LPVIZ (Part 3)

Virtual Dashboard Demonstration

This is a follow-up to the introduction of our in-vehicle AR head-mounted display LPVIZ in part 1 and part 2.

To test LPVIZ we created a simple demo scenario of an automotive virtual dashboard: a Unity scene with graphic elements commonly found on a vehicle dashboard, animated to make the scene look more realistic.

This setup is meant for static testing at our shop. For further experiments inside a moving vehicle we plan to connect the animated elements directly to live car data (speed etc.) communicated over the CAN bus.
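As a rough idea of how the dashboard animation could be fed with real vehicle data, here is a short sketch using the python-can package with a SocketCAN interface. The arbitration ID, byte layout and scaling factor below are hypothetical placeholders; the real values depend on the vehicle and would typically come from its CAN database.

```python
import can  # python-can package

SPEED_FRAME_ID = 0x123  # hypothetical CAN ID of the vehicle speed frame

def read_vehicle_speed(bus):
    """Block until a speed frame arrives and return the speed in km/h."""
    while True:
        msg = bus.recv()  # wait for the next CAN frame
        if msg is not None and msg.arbitration_id == SPEED_FRAME_ID:
            raw = (msg.data[0] << 8) | msg.data[1]  # assumed 16-bit big-endian value
            return raw * 0.01                        # assumed scaling factor

# Assumes a SocketCAN interface named "can0" is available
bus = can.interface.Bus(channel="can0", interface="socketcan")
speed_kmh = read_vehicle_speed(bus)
print(f"Vehicle speed: {speed_kmh:.1f} km/h")
```

In the actual setup, a value read this way would simply drive the corresponding animated element in the Unity scene instead of being printed.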

The virtual dashboard is only a very simple example to show the basic functionality of LPVIZ. As described in a previous post, far more sophisticated applications can be implemented.

The video above was taken through the right-eye optical waveguide display of LPVIZ. We recorded it with a regular smartphone camera, so the image quality is limited. Nevertheless, it confirms that the display is working and correctly shows the virtual dashboard.

The user is looking at the object straight ahead. If the user rotates their head or changes position, the perspective on the object changes accordingly. An important point to mention is the high luminosity of the display. We recorded this footage with the interior lighting in our shop turned on normally, and without any additional shade in front of the display.

How to Use LPMS IMUs with LabView

Introduction

LabView by National Instruments (NI) is one of the most popular multi-purpose solutions for measurement and data acquisition tasks. A wide range of hardware components can be connected to a central control application running on a PC. This application provides a full graphical programming language that allows the creation of so-called virtual instruments (VIs).

Data can be acquired inside a LabView application via a variety of communication interfaces, such as Bluetooth, serial port, etc. A LabView driver that can communicate with our LPMS units has been a frequently requested feature from our customers for some time, so we decided to create this short example as a general guideline.

A Simple Example

The example shown here specifically works with LPMS-B2, but it is easily customizable to work with other sensors in our product line-up. To communicate with LPMS-B2 we use LabView’s built-in Bluetooth access modules. We then parse the incoming data stream to display the measured values.

The source code repository for this example is here.

Figure 1 – Overview of a minimal virtual instrument (VI) to acquire data from LPMS-B2

Fig. 1 shows an overview of the example design, which acquires the accelerometer X, Y and Z axes of the IMU and displays them on a simple front panel. Figs. 2 & 3 below show the virtual instrument in more detail. After reading the raw data stream from the Bluetooth interface, the stream is converted into a string. The string is then scanned for the start and stop character sequence, and the actual data is finally extracted based on its position in the data packet.
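For readers who prefer text-based code, the following Python sketch mirrors the parsing logic that the VI implements graphically: scan the raw Bluetooth stream for a start/stop marker, then pull the timestamp and accelerometer values out of fixed offsets in the packet. The marker bytes and field offsets below are illustrative assumptions; the authoritative packet layout is documented in the LPMS manual.

```python
import struct

# Assumed packet delimiters and layout -- check the LPMS manual for the
# actual protocol definition of your sensor configuration.
START = b"\x3a"        # assumed packet start byte
STOP = b"\x0d\x0a"     # assumed packet terminator

def parse_packets(stream: bytes):
    """Yield (timestamp, acc_x, acc_y, acc_z) tuples from a raw byte stream."""
    start = stream.find(START)
    while start != -1:
        end = stream.find(STOP, start)
        if end == -1:
            break                      # incomplete packet, wait for more data
        payload = stream[start + 1:end]
        if len(payload) >= 16:
            # assumed: four little-endian 32-bit floats at the start of the payload
            timestamp, ax, ay, az = struct.unpack_from("<4f", payload, 0)
            yield timestamp, ax, ay, az
        start = stream.find(START, end)
```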

Figure 2 – Bluetooth access and initial data parsing

Figure 3 – Extraction of timestamp, accelerometer X, Y, Z values

Notes

Please note that the example requires manually entering the Bluetooth ID of the LPMS-B2 in use. The configuration of the data parsing is static; therefore, the sensor’s output data needs to be configured and saved to the sensor’s flash memory using the LPMS-Control application. For reference, please check the LPMS manual.

An initial version of this virtual instrument was kindly provided to us by Dr. Patrick Esser, head of the Movement Science Group at Oxford Brookes University, UK.

Collaboration with Pimax

We are happy to announce a collaboration with the head-mounted display (HMD) manufacturer Pimax. Pimax HMDs feature very high-resolution displays (up to 8K) and an industry-leading field of view (up to 200°). By default, Pimax HMDs support SteamVR tracking and are therefore limited to relatively small tracking volumes.

We developed a special driver that allows our LPVR middleware LPVR-CAD and LPVR-DUO to work with Pimax headsets. Using LPVR, the headsets can now be used within a large-scale, location-based context, in connection with outside-in optical systems such as ART (Advanced Real-Time Tracking).

As Pimax is planning to implement UltraLeap hand tracking in their HMDs in the future, we are confident that we will also be able to extend our inside-out tracking algorithm to their devices.

The video above shows the basic functionality of tracking a Pimax HMD using LPVR and an optical tracking system. The headset’s motion is reflected in SteamVR. For this demonstration the tracking volume is relatively small, but it can easily be extended by adding more outside-in tracking cameras.

This video was kindly provided to us by evoTec Solutions. evoTec is a new company in Switzerland that focuses on virtual reality (VR) solutions for corporations. Contact them for further information!

Design Prototype and Inside-out Tracking – LPVIZ (Part 2)

LPVIZ Prototype Industrial Design

This post is a follow-up to the introduction of our augmented reality (AR) headset LPVIZ. See our previous post here.

For the past two months the LPVIZ team has been working hard to improve our initial prototype. We have enhanced the device’s appearance and optimized it ergonomically. My colleague Seeon Mitchel has made draft 3D prints of the design he is planning for the initial release of LPVIZ. The results look excellent (Figures 1 & 2).

The ring design for fixing the unit to the user’s head feels comfortable; even during longer usage sessions the unit does not cause neck fatigue. Below are two photos of the current functional prototype with the newly printed shell.

Figure 1, 2 – The fully functional LPVIZ design prototype

Inside-out Tracking and Gesture Recognition

The latest LPVIZ prototype features a built-in stereo camera. We are using the excellent Rigel module by UltraLeap, which allows us to run a SLAM (simultaneous localization and mapping) algorithm and UltraLeap’s hand tracking at the same time.

Using the Rigel’s stereo camera, my colleague Thomas Hauth has developed a state-of-the-art inside-out tracking algorithm that allows the headset to be used inside a vehicle, even if no special cameras are installed. The video below (Figure 3) shows the fundamental functionality of the algorithm.

Figure 3 – The video shows the fundamental functionality of the LPSLAM inside-out tracking algorithm

It is important to note that this will not be a full replacement for ART outside-in tracking inside the vehicle. ART’s tracking engine is more accurate and more robust under difficult lighting conditions. Still, our purpose is to also serve customers who have a smaller budget or no possibility of installing additional equipment inside their vehicle.
