The Torus Syndicate launch is just around the corner! One thing players will notice when they get their hands on a copy is the importance of gaze in the game. A person’s eyes provide a window into their focus and intent; they generally look towards what they want, where they want to go, and with whom they want to speak. Gaze input takes advantage of how people naturally behave and, when done correctly, makes for a truly intuitive and immersive input mechanism.

Doing it correctly, as it often turns out, is easier said than done, and that starts with figuring out exactly what the player is gazing at. The simplest solution — the one we’ve almost exclusively seen in the wild — is to cast a ray forward from the player’s head; whatever that ray collides with is what the player is looking at. Early versions of The Torus Syndicate used this logic. Early play testers of The Torus Syndicate did not like it.
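For concreteness, here’s a minimal, engine-agnostic sketch of that naive approach in Python, approximating each selectable object with a bounding sphere. The helper names and the sphere approximation are illustrative, not our actual implementation:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if a ray (origin + t*direction, t >= 0) hits a sphere.

    `direction` is assumed to be unit length.
    """
    # Vector from the ray origin to the sphere center.
    oc = [c - o for c, o in zip(center, origin)]
    # Distance along the ray to the point closest to the sphere center.
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        return False  # The sphere is behind the player.
    # Squared distance from the sphere center to that closest point.
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq <= radius * radius

def gaze_target(head_pos, head_forward, targets):
    """Return the first object whose bounding sphere the gaze ray hits."""
    for center, radius, obj in targets:
        if ray_hits_sphere(head_pos, head_forward, center, radius):
            return obj
    return None
```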

Play testers had trouble actually triggering the gaze mechanism. That invisible ray would often just miss the side of what they were trying to select, with small or far-away objects exacerbating the situation. Part of the problem lies in the fact that today’s commercially-available VR headsets don’t do eye tracking. They know the position and orientation of the player’s head with sub-millimeter and sub-degree accuracy, but they know nothing about what the player’s eyes are actually doing. People look with their eyes at least as much as with their heads, so a VR headset can only give us a rough approximation of what the player is actually trying to look at. We could instruct players to keep their eyes dead center in their sockets and move only their heads when gazing, but we wanted people to feel like they had actually become our game’s human protagonist. Instead, we were making them feel more like owls.

An owl moves its head while keeping its eyes stationary. Players shouldn’t have to act like this owl to get the best experience.

Our first attempt at restoring our players’ humanity was to give them bigger targets to hit. We divorced an object’s gaze size from its real size, making objects appear larger solely for the purposes of the gaze mechanism. However, we now had the opposite problem: our gaze system, which once thought that players were gazing at nothing special, now thought that they were gazing at many things special. Sometimes, players would try to talk to someone, but the gaze system would misread them and accidentally teleport them across the map. We quickly realized that we simply weren’t taking into account how perspective actually works. As an object gets farther from the player, it appears smaller and becomes harder to hit. So long as our compensation mechanism didn’t vary with an object’s distance, the gaze system would require an inconsistent (and sometimes impossible) level of precision that would frustrate the player.
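Some back-of-envelope math shows how quickly perspective eats into the player’s margin for error. The half-meter-wide target below is an arbitrary example, not a tuning value from the game:

```python
import math

def subtended_half_angle_deg(radius, distance):
    """Half-angle, in degrees, that a sphere of `radius` subtends at `distance`."""
    return math.degrees(math.atan2(radius, distance))

for distance in (1.0, 5.0, 20.0):
    angle = subtended_half_angle_deg(0.25, distance)  # a 0.5 m wide target
    print(f"at {distance:>4} m the target allows ±{angle:.2f}° of aim error")

# at  1.0 m the target allows ±14.04° of aim error
# at  5.0 m the target allows ±2.86° of aim error
# at 20.0 m the target allows ±0.72° of aim error
```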

One way to make for a consistent and frustration-free experience is to dynamically change the gaze size of all gazeable objects depending on where the player is. That is, if an object is far away, make it appear larger to the gaze system so that it’s as easy to pick out as a close object. This is an awkward solution, the most problematic aspect of which is that it simply won’t work if there are multiple players: each player stands somewhere different, so no single gaze size per object can be right for everyone at once. Fortunately, we found another way by imagining the player’s gaze as a cone extending out from their forehead, with everything inside the cone considered to be in the player’s gaze. Up close, the cone only encompasses a small area. Far away, it grows to encompass a larger area. This is really just taking that awkward change-the-size-based-on-the-player’s-position solution and flipping it inside out. Instead of enlarging far-away objects to make them easier to pick out with a thin line, we keep the objects the same size and pick them out with a cone that gets thicker with distance to compensate for the effects of perspective. The results are the same: an even sensitivity independent of where an object is in relation to the player. With a cone, though, we can keep multiplayer on the table.
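The geometry behind the cone test is pleasantly simple: an object is in the player’s gaze when the angle between the head’s forward vector and the vector from the head to the object falls within the cone’s half-angle. Here’s a sketch of that check; the 5-degree half-angle is a placeholder, not our shipped tuning:

```python
import math

def in_gaze_cone(head_pos, head_forward, target_pos, half_angle_deg=5.0):
    """Return True if `target_pos` lies inside the gaze cone.

    `head_forward` is assumed to be unit length; the target is treated
    as a point for simplicity.
    """
    to_target = [t - h for t, h in zip(target_pos, head_pos)]
    dist = math.sqrt(sum(c * c for c in to_target))
    if dist == 0:
        return False
    # Cosine of the angle between the forward vector and the target direction.
    cos_angle = sum(f * c for f, c in zip(head_forward, to_target)) / dist
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```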

The green figures animate as the player’s gaze passes over them.

Unfortunately, cone casting isn’t as commonly supported by commercially-available 3D game engines as ray and sphere casting are. That’s not surprising. The calculations involved in checking if a collision occurs between an object and a ray or sphere are simpler (and computationally less expensive) than between the same object and a cone. The workaround, though, was surprisingly simple, if only in retrospect. We attached a virtual, invisible cone to the player’s forehead. The cone is set to only interact with the special gaze colliders attached to objects ready for player manipulation, which helps to keep resource consumption and spurious physics events to a minimum. As the player looks around, the volume sweeps across space. When an object collides with it, we can be pretty confident that it’s entered the player’s gaze.
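To illustrate the behavior we get from that trigger volume — enter and exit events, filtered to gaze-only colliders — here’s a hedged, engine-free sketch that reuses `in_gaze_cone` from above. The layer name, class, and callbacks are illustrative stand-ins for what a real engine’s physics system provides:

```python
GAZE_LAYER = "gaze"  # illustrative layer name for gaze-enabled colliders

class GazeConeSensor:
    def __init__(self, half_angle_deg, on_enter, on_exit):
        self.half_angle_deg = half_angle_deg
        self.on_enter = on_enter
        self.on_exit = on_exit
        self.inside = set()  # objects currently within the cone

    def update(self, head_pos, head_forward, colliders):
        # Only consider colliders on the gaze layer, keeping the sweep
        # cheap and avoiding spurious physics events.
        hits = {
            obj
            for layer, pos, obj in colliders
            if layer == GAZE_LAYER
            and in_gaze_cone(head_pos, head_forward, pos, self.half_angle_deg)
        }
        for obj in hits - self.inside:
            self.on_enter(obj)  # object just entered the player's gaze
        for obj in self.inside - hits:
            self.on_exit(obj)   # object just left the player's gaze
        self.inside = hits
```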

Knowing what the player is gazing at has proven indispensable to making our players’ experience as immersive as possible. Each element in the game can sense when the player is looking at it and act accordingly. It’s almost like the world bends to our players’ mere thoughts, and that’s absolutely the kind of world we’ve set out to build.

Hurray! The Torus Syndicate is now on Steam.

Check out our store page and buy our game if you’ve been enjoying these posts!