Hello! We are Elwin and Tino from SassyBot Studio. This is our post-mortem for Ludum Dare 27, a jam with the theme ‘10 seconds’. For this jam we created a game using projection mapping and head tracking, which allows for gameplay around a 3D surface while the illusion of depth remains intact. The resulting game had us thinking both outside and inside the box. Without further delay we present you with Bomb Defuse Simulator 2013!
Bomb Defuse Simulator 2013 is a game where you have 10 seconds to defuse a bomb before it explodes. The game is played with experimental technology that allows the player to walk around the game surface and look into the game scene. The player controls the camera by moving around in real space and controls the in-game scissors with an Xbox controller. Combining these two forms of input, the player has to defuse the bomb before it explodes.
Because the game is tied to its environment, and thus cannot be played from home, we will try to explain how it works. To experience the game from both a spectator’s and a player’s point of view, we recommend watching this short video.
To start the game the player has to stand in a predefined spot in the room and press start on the Xbox controller. This calibrates the system so that the head tracking works properly. The game begins with an opening screen and verbal instructions telling the player to cut the correct wire before the bomb explodes.
When the opening sequence is over the game creates three procedurally generated wires. All the wires stem from the ‘Bomb Logic Box’ at the bottom of the bomb case. One of these wires leads to the bomb whereas the rest lead to the timer. Once the wires are generated the player is prompted with a ‘3, 2, 1, Go!’ after which the timer starts counting down from ten to zero. During this time, the player has to walk around the bomb to the glass side of the case to see which wire leads to the bomb. Once the player knows which wire to cut, they need to walk to the open side of the bomb case to finally cut the wire using the Xbox controller.
If the player cuts the wrong wire, or the timer reaches zero, the bomb explodes and the player gets to defuse another bomb with newly generated wires. If the player cuts the correct wire, they are rewarded with a cheering sound, after which a new bomb is presented with an additional wire to increase the difficulty of the game.
The wires generated in Bomb Defuse Simulator 2013 are created procedurally to provide the player with a random challenge each time the game is played. The solution is a modification of an iTween script example (RandomPathGeneration), which can be obtained by purchasing the examples package of iTween.
With this script, Unity follows the generated path and places slices of a cylinder along it. The result is something that looks like a wire. Sure, this is not a very efficient way to do it, but then again, who looks at optimisation and efficiency in a game jam?
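As an illustration of the idea (a Python sketch of the geometry, not our actual Unity code — the function names and parameters are our own for this example), the technique boils down to two steps: generating a jittered path between two endpoints, and computing where each cylinder slice goes along that path.

```python
import math
import random

def generate_wire_path(start, end, segments=8, jitter=0.3, seed=None):
    """Interpolate from start to end, jittering intermediate points
    to give each wire a unique, tangled look."""
    rng = random.Random(seed)
    path = [start]
    for i in range(1, segments):
        t = i / segments
        point = tuple(s + (e - s) * t + rng.uniform(-jitter, jitter)
                      for s, e in zip(start, end))
        path.append(point)
    path.append(end)
    return path

def slice_transforms(path):
    """For each consecutive pair of path points, return the midpoint
    (where a cylinder slice would be placed) and the slice length."""
    transforms = []
    for a, b in zip(path, path[1:]):
        mid = tuple((p + q) / 2 for p, q in zip(a, b))
        length = math.dist(a, b)
        transforms.append((mid, length))
    return transforms
```

In the actual game each slice would also be rotated to face along its segment, but the placement logic is essentially the above.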
The controls in the game are split up into physical input and Xbox controller input. With physical input the player moves around the bomb to see what is happening. This is literally done by walking around the real environment as you can see in the image below.
The Xbox controller input is used to manoeuvre the scissors to the correct wire before cutting it, because when you have 10 seconds to defuse a bomb, you obviously use scissors. Pictured below is how the controller buttons are mapped to the scissor behaviour.
To describe the act of creating a videogame to be played on a 3D surface with, or without, the addition of head tracking we would like to introduce the term ‘Videogame Mapping’. In this sense, to videogame map onto a cube would be to create a videogame that can be played on a cube.
The rest of this section discusses the hardware and setup, and provides insight into how videogame mapping works. The resources we used to make videogame mapping work consist of Unity 3.5.7f6, an Optoma EX532 projector, a Microsoft Kinect camera, a wireless Xbox controller and two cardboard boxes.
To understand more about projecting onto 3D geometry we kindly refer you to this page: http://vvvv.org/documentation/how-to-project-on-3d-geometry. It covers everything you need to know about the basics of setting up projection mapping.
The primary problem with projecting depth, rather than texture, onto a flat surface is that the illusion of depth only looks correct from the perspective of the projector. A Kinect camera can register the position of the player in real space, and that position can be used to adjust the projection so that the illusion remains intact.
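The usual way to make this adjustment is an off-axis (asymmetric) projection: the virtual camera sits at the player’s head position, and the frustum is skewed so that its cross-section always matches the physical surface. Below is a minimal sketch of that geometry, assuming an axis-aligned screen rectangle in the plane z = 0 — this illustrates the general technique, not our exact Unity implementation.

```python
def off_axis_frustum(head, screen_lo, screen_hi, near):
    """Compute asymmetric frustum bounds for a head-tracked view of an
    axis-aligned screen rectangle lying in the plane z = 0.

    head      -- (x, y, z) head position, z > 0 in front of the screen
    screen_lo -- (x, y) lower-left corner of the physical surface
    screen_hi -- (x, y) upper-right corner
    near      -- near-plane distance of the virtual camera
    Returns (left, right, bottom, top) for a glFrustum-style projection.
    """
    hx, hy, hz = head
    scale = near / hz  # project the screen edges onto the near plane
    left   = (screen_lo[0] - hx) * scale
    right  = (screen_hi[0] - hx) * scale
    bottom = (screen_lo[1] - hy) * scale
    top    = (screen_hi[1] - hy) * scale
    return left, right, bottom, top
```

With the head centred in front of the screen the frustum is symmetric; as the player moves sideways the frustum skews, which is exactly what keeps the depth illusion intact from their point of view.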
To set up a projection mapped construction it is recommended to start by taking measurements of the environment so that the metrics can be used to replicate the scene in 3D. This means measuring the dimensions of the projection surface, which can be boxes or other objects. Furthermore, it is useful to define a point in real space that acts as the world origin or world zero. Make sure to set a metric standard. For example, make one virtual Unity unit represent one centimetre in real space.
With the world origin defined it is easier to measure where the projector is in the scene relative to the world origin. Take relevant projector specifications into account, such as field of view and lens-shift, and translate these values as closely as possible into values for the virtual camera. The setup for Bomb Defuse Simulator 2013 fortunately did not have to bother with lens-shift, but be aware that this can be important.
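One translation worth spelling out: the virtual camera’s vertical field of view can be derived from the projected image height and the throw distance, both of which can simply be measured in the room. A small sketch of that calculation (our own helper, not part of any projector API):

```python
import math

def vertical_fov_degrees(image_height, throw_distance):
    """Vertical field of view (in degrees) that reproduces a projected
    image of the given height at the given throw distance, measured
    from the projector lens to the surface."""
    return math.degrees(2 * math.atan((image_height / 2) / throw_distance))
```

For instance, an image one metre tall thrown from half a metre away corresponds to a 90 degree vertical field of view, which is then set on the virtual camera.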
Head tracking is crucial to making the illusion work. To get Kinect up and running we recommend you visit this page: http://wiki.etc.cmu.edu/unity3d/index.php/Microsoft_Kinect_-_Microsoft_SDK. After following the instructions to the letter our Kinect input was recognised by Unity. For the illusion to work, only the head position of the player is relevant meaning we simply turn everything else off.
The Kinect must be in a static and reasonably horizontal position relative to the player, making sure that the Kinect camera always covers the play space. Before the incoming Kinect values are usable, Unity needs to understand how these real world values translate into virtual space. Additionally, Unity needs to understand where the player starts in real and virtual space when the game is initialized.
To align real world and virtual world distance using the Kinect, record the position of the player at two points in the room that are exactly one metre apart. With these two positions in virtual space, compared against one metre in real space, it is fairly simple to derive a modifier that translates the Kinect’s input into useful virtual movement.
The player needs to stand in a specific spot in real space when starting the game so that the player’s starting position in virtual space is approximately identical each time. With the real and virtual positions of the player aligned, the Kinect should translate player movement reasonably accurately.
Videogame mapping certainly has its advantages and drawbacks. In this section we briefly discuss a number of advantages that we see in games of this nature. If you can think of more, we would love to hear about them.
A number of people have described Bomb Defuse Simulator 2013 as an alternate reality game where reality is enhanced by virtual reality. We would have to agree that without a similar environment and setup this game would be very hard or unappealing to play. With only a copy of the game executable there are still crucial pieces missing in order to play the game, such as hardware, environment, and measurements. Copying a mapped videogame and playing it at home would not provide the same experience that the game in its original setup provides.
In our case we projected onto cardboard boxes to prove the concept. In theory the concept can be applied to larger and more unconventional objects. Doing so challenges the game designer to utilise the real space in order to create a game in virtual space. Without the same object and hardware characteristics of the initial videogame mapping, a replica will not be the same. Even when a videogame mapping is recreated, it remains difficult to duplicate the environment in which the game was originally played. Because the mapping object and environment can vary, videogame mapping will often remain unique per game.
We knew that the best way to showcase videogame mapping was to create a game that makes the depth in the object relevant to gameplay. Admittedly, in retrospect a bomb defuse game is not one of the most original concepts when compared to other Ludum Dare 27 submissions. However, it did make the gameplay fit well with the mapping object. Each and every new object a game is played on presents new gameplay possibilities. Imagine what game could be made when projecting onto a pyramid for example.
Bomb Defuse Simulator 2013 was created using Unity 3.5.7f6 and as many of you know Unity is capable of doing very impressive things, with the right guidance. Similar videogame mapping results can probably be achieved in comparable 3D game engines although we feel that without the ease of use that Unity offers this could not have been done within the timeframe of a game jam. Relying on Unity to create a game like we normally would and project that back onto an object makes game development much easier and more enjoyable.
As with many things in life, advantages sometimes also introduce disadvantages. Here are a number of downsides of videogame mapping that we could think of; if you can think of others, we would love to hear about them too.
It is important to understand that the perspective projected from the position of one person does not look correct for any other person looking at the same object, or the same section of an object. It should technically be possible to project the perspectives of two different people onto different sections of the same object using the same projector and Kinect, although that was beyond the scope of this jam project. Besides, the room was not large enough to accommodate it.
Besides the computer required to execute the game there are also less accessible pieces of hardware required to try this setup. Not everyone has a projector and Kinect lying around; in fact, we had to borrow the Kinect from a fellow game designer in order to build Bomb Defuse Simulator 2013.
Apart from the hardware there is also space to take into consideration. As can be seen in the picture of the setup earlier in this article, the Kinect was placed as far from the mapping object as the environment would allow. We experienced that crouching in certain places would cause the Kinect to lose its tracking. Without enough space to play, the head tracking in videogame mapping can turn out to be rather tricky.
This point was also raised as an advantage of videogame mapping. Games of this nature have value for their exclusivity. The downside is that, unless you use the same object to play on, adjustments have to be made to play the game. These adjustments are very time consuming and may not even work depending on the object. Some game mechanics will work regardless of the object projected onto, such as a memory game or shoot ’em up. However, the challenge is to create games where the mechanics make direct use of the characteristics that videogame mapping offers. This challenge is what makes the concept time consuming and inefficient to apply to various unique objects.
We believe that there is much more to be explored in this field and have only seen the tip of a very appealing iceberg. Avenues that we think are incredibly fascinating include:
– Networked videogame mapping multiplayer.
– Backside surface projection mapping.
– Using monitors as geometric faces instead of a projector to visualise virtual space.
– Supporting multiple players with a single Kinect and projector.
– Optimising the setup and calibration of Kinect and mapping object.
– Prototyping concave videogame mapping.
– Prototyping immersive videogame mapping concepts.
Ludum Dare 27 has been another great experience that we have grown from on many different levels. Hopefully this article has been successful in explaining what kind of game Bomb Defuse Simulator 2013 is and how it is played. On a technical level we have provided insight in how the concept works and what is required to achieve this in a basic form. The nature of the game has its advantages and disadvantages; some of which we have described, although not all have been explored. Finally, we mentioned in which areas more investigation and experimentation is required. Thank you for your time and we hope this has been useful to the game development community.