Vrui VR Toolkit

The task of a virtual reality (VR) development toolkit is to shield an application developer from the particular configuration of a VR environment, such that applications can be developed quickly and in a portable and scalable fashion. Three important parts of this overarching goal are encapsulation of the display environment, encapsulation of the distribution environment, and encapsulation of the input device environment. In more detail, these three partial goals are:
Display abstraction
A toolkit should provide OpenGL rendering contexts that are set up in such a fashion that rendering a model in user-specific coordinates will display that model on all rendering surfaces (monitors, screens, head-mounted displays) in correct head-tracked stereographic mode.
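As an illustration of what this abstraction means for application code, the following minimal sketch (using made-up class and method names, not the actual Vrui API) shows an application implementing a single display callback that draws in its own model coordinates; the toolkit is assumed to call that callback once per window, screen, and eye with the correct off-axis projection and viewing transformations already installed for head-tracked stereo.

#include <GL/gl.h>

// Illustrative sketch only; names are hypothetical and do not reflect the
// actual Vrui API. The point is that the application draws exactly once,
// in model coordinates, and never deals with screens, eyes, or windows.
class MyApplication {
public:
    // The (hypothetical) toolkit calls this once per rendering surface and
    // per eye, after setting up projection and viewing matrices for the
    // current screen and the tracked head position:
    void display() const {
        glBegin(GL_TRIANGLES); // draw in model coordinates
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }
};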
Distribution abstraction
As larger VR environments require more than one computer to operate, the details of distribution (number of computers, connection topology, etc.) should be hidden by the toolkit. In principle, there are at least three ways to distribute a rendering environment: "split first," where an application is replicated on all computers and synchronized by distributing input device and ancillary data; "split middle," where a shared data structure, e.g., a scene graph, is used to transmit data from one application computer to all rendering computers; and "split last," where the OpenGL API call stream is broadcast from one application computer to all rendering computers, e.g., using Chromium. A toolkit should also hide the difference between a distributed rendering environment running on a cluster of individual computers and one running on a shared-memory multi-CPU computer with several independent graphics pipes.
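To make the "split first" scheme concrete, here is a minimal sketch (with hypothetical type and function names; it is not Vrui's actual cluster protocol) of how input device state might be broadcast once per frame so that identical application replicas on all nodes stay in lock-step.

#include <vector>

// Pose and button state of one tracked input device:
struct DeviceState {
    float position[3];
    float orientation[4]; // quaternion
    unsigned int buttonMask;
};

// Hypothetical hooks; a real toolkit would implement these on top of its
// device drivers and cluster networking layer:
void readPhysicalDevices(std::vector<DeviceState>& devices);
void broadcastToRenderNodes(const std::vector<DeviceState>& devices);
void receiveFromHeadNode(std::vector<DeviceState>& devices);
void updateApplication(const std::vector<DeviceState>& devices);
void renderLocalViews();

void runFrame(bool isHeadNode, std::vector<DeviceState>& devices) {
    if (isHeadNode) {
        readPhysicalDevices(devices);    // only the head node talks to the hardware
        broadcastToRenderNodes(devices); // send identical input data to all replicas
    } else {
        receiveFromHeadNode(devices);    // block until this frame's input data arrives
    }
    // From here on, every node runs identical code on identical data, so the
    // replicated application states cannot diverge:
    updateApplication(devices);
    renderLocalViews();                  // each node renders only its own screens
}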
Input abstraction
There is a wide variety of vendors, models, and protocols used to connect VR input devices such as space balls, space mice, 6-DOF trackers, wands, data gloves, etc., to the computers comprising a VR environment. A toolkit must hide these differences in hardware and provide a uniform view of the set of connected input devices. Furthermore, a toolkit should hide not only the hardware details of the input device environment, but also the number and configuration of input devices. An application should be written without targeting a particular input environment (such as "CAVE wand and head tracker" or "stylus, two pinch gloves, and head tracker"). Instead, the toolkit should provide a layer that allows an application to specify its input requirements at a higher level, and allows a user to map input devices to these requirements. For example, an application might offer a tool to grab and move an application-specific object, and the user might ask the Vrui toolkit, at run time, to bind that tool to, say, the trigger button on the right-hand VR controller.
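The following sketch (again with made-up names rather than the actual Vrui tool API) illustrates the kind of higher-level interface meant here: the application defines a "grab" tool purely in terms of a 6-DOF pose and a button event, and the toolkit lets the user bind any physical device and button to it at run time.

// 6-DOF position and orientation of whatever physical device the user has
// bound to the tool; the application never sees the device itself:
struct Pose {
    float position[3];
    float orientation[4]; // quaternion
};

// Application-defined tool; names are hypothetical, not the Vrui tool API.
// It expresses the application's input requirement ("one pose and one button
// to grab and drag objects") without referring to any concrete hardware:
class GrabTool {
public:
    void buttonDown(const Pose& devicePose) { /* pick up the object at devicePose */ }
    void motion(const Pose& devicePose)     { /* drag the grabbed object along */ }
    void buttonUp(const Pose& devicePose)   { /* release the object */ }
};

// At run time, the user might bind this tool to the trigger of a VR
// controller, a CAVE wand button, or a mouse button on the desktop; the
// application code above stays unchanged.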
Most existing VR toolkits cover the first aspect well; some cover the second aspect using one of the listed distribution schemes; but no toolkit we found covers the third aspect. Although all toolkits have some kind of built-in input device driver to hide hardware details, none provide a higher-level semantic interface that allows an application to be written once and then run in VR environments with widely differing input environments. As a result, no toolkit provides an adequate way to run VR applications on the desktop using a regular mouse and keyboard; although many of them have simulators, these are merely awkward low-level debugging tools and are not useful for running and actually using a VR application on a desktop.

The Vrui VR toolkit aims to support fully scalable and portable applications that run on a wide range of VR environments, from a laptop with a touchpad, through desktop environments with special input devices such as space balls, to full-blown immersive VR environments: single-screen workbenches, multi-screen tiled display walls, CAVEs, and modern commodity head-mounted displays such as the HTC Vive or Valve Index. Applications using the Vrui VR toolkit are written without a particular input environment in mind, and Vrui-enabled VR environments are configured to map the available input devices to application functions such that the application appears to have been written natively for the environment it runs on. For example, a Vrui-based VR application running in desktop mode should be as usable and intuitive as a 3D application written specifically for the desktop.
Figure 1: The same Vrui application, run in a 4-sided CAVE (left) and on a laptop (right).

Project Goals

The main project goal was to design and implement a VR development toolkit for scalable and portable applications that provides the display, distribution, and input abstractions described above, along with a number of additional features. A more complete list of goals, and the architecture and design features implementing them, can be found in the Vrui Manifesto.

Project Status

At this point, all building blocks of the Vrui VR toolkit are in place and functional, and it has been run and tested on a variety of supported environments.

Application-level streaming, which layers shared graphics data structures in a split-middle architecture over Vrui's internal split-first distribution, is finally working reliably. The first application using split-middle is ProtoShop. Instead of computing the inverse kinematics needed to manipulate proteins on all nodes of a cluster, the computation is done only on the cluster's head node, and the results are broadcast to the render nodes. This ensures cluster-wide synchronization in all cases and improves responsiveness on clusters where the head node has multiple CPUs (or CPU cores) and the render nodes only have a single CPU each.
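The ProtoShop case can be sketched as follows (hypothetical names, not ProtoShop's actual code): the head node alone runs the expensive inverse-kinematics solve and shares only the resulting angles, which every render node then applies to its local copy of the protein model.

#include <vector>

struct ProteinModel; // each node holds its own copy of the protein

// Hypothetical hooks for the IK solver and the cluster's shared-data channel:
std::vector<float> solveInverseKinematics(const ProteinModel& model); // expensive; head node only
void shareAngles(const std::vector<float>& angles);                   // head node -> render nodes
std::vector<float> receiveSharedAngles();                             // render nodes
void applyAngles(ProteinModel& model, const std::vector<float>& angles);

void updateProtein(bool isHeadNode, ProteinModel& model) {
    std::vector<float> angles;
    if (isHeadNode) {
        angles = solveInverseKinematics(model); // heavy computation happens exactly once
        shareAngles(angles);                    // broadcast only the small result
    } else {
        angles = receiveSharedAngles();         // render nodes wait for the result
    }
    applyAngles(model, angles); // all nodes end up with the same model state
}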

News

As of 04/08/2009, the Vrui VR toolkit has been released publicly under the GNU General Public License version 2, and the most recent and several older versions are available for download from the download page.

As of 11/25/2010, the Vrui VR toolkit has moved to version 2.0.

As of 06/07/2011, the Vrui VR toolkit has moved to version 2.1.

As of 12/16/2012, the Vrui VR toolkit has moved to version 2.6.

As of 08/12/2013, the Vrui VR toolkit has moved to version 3.0.

As of 12/14/2013, the Vrui VR toolkit has moved to version 3.1.

As of 10/13/2016, the Vrui VR toolkit has moved to version 4.2, with full native support for HTC Vive head-mounted VR displays.

As of 09/06/2018, the Vrui VR toolkit has moved to version 4.6.

As of 03/15/2023, the Vrui VR toolkit has moved to version 10.1.

Pages In This Section

Vrui HTML Documentation
Online copy of the HTML documentation packaged with the Vrui toolkit's source code release.
X11 Cluster Rendering
Instructions on how to set up X11 for cluster rendering, for example to drive multi-screen VR environments.
GLContextData
Essay on how the GLContextData class hides nasty details of multi-pipe OpenGL rendering from the application developer.
Download
Download page for the current and several older releases of the Vrui VR toolkit.

Translations

This page has been translated into other languages by volunteers: