Creating immersive experiences with diegetic interfaces
I like to think of Interaction Design in its purest form as being about shaping the perception of an environment of any kind. Yes, today the discipline is so closely tied to visual displays and software that it almost seems to revolve around that medium alone, but that’s only because, as of now, that’s pretty much the only part of our environment over which we have complete control.
The one field that has come closest to overcoming this limitation is the video game industry, whose 3D games are the most vivid and complete alternate realities technology has been able to achieve. Game designers have control over more aspects of an environment, albeit a virtual one, than anyone else.
Lately I’ve been thinking a lot about this idea that interfaces can be more closely integrated with the environment in which they operate. I’d like to share some of what I’ve learned from the universe of video games and how it might be applicable to other kinds of designed experiences.
In Designing for the Digital Age, Kim Goodwin criticizes the term “Experience design” as being too presumptuous because we don’t really have the power to determine exactly what kind of experience each person with their own beliefs and perceptions has. Even when we work across an entire event from start (e.g. booking a flight) to finish (arriving at the door), there are still countless factors outside our control that can significantly impact how a person will experience it.
Video game designers on the other hand can orchestrate a precise scenario since almost every detail in their virtual world is for them to determine. They can arrange exactly what kind of person sits next to you on a flight no matter who you are or how many times you take that flight.
That isn’t to say video games are without limitations. Of course, it isn’t completely true that game designers can determine who sits next to you; they can only determine who your avatar sits next to. The most significant weakness of video games is our inability to truly inhabit a designed environment or narrative. As much control as we may have over a virtual world, as long as we are confined to experiencing it through television screens and speakers, it won’t be anywhere near comparable to our real world.
Fortunately, there’s a growing effort to address this lack of immersion.
A key part of the problem lies in how complex information is presented and interacted with diegetically, that is, through interfaces that actually exist within the game world itself.
Before continuing, it helps to be familiar with some basic concepts and terminology around diegesis in computer graphics: the different spaces of representation that exist between the actual player and their avatar. The diagram above illustrates the four main types of information representation in games.
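In the game UI literature, these four types are commonly distinguished along two axes: whether an element exists in the game’s 3D space, and whether it exists in the game’s fiction. Here is a minimal sketch of that classification; the enum and helper are illustrative, not taken from any particular engine:

```typescript
// The four main types of information representation in games,
// distinguished by two questions: does the element exist in the
// game's 3D space, and does it exist in the game's fiction?
enum Representation {
  Diegetic,    // in the 3D space and in the fiction (Dead Space's health tube)
  Spatial,     // in the 3D space, but not in the fiction (Mirror's Edge highlights)
  Meta,        // in the fiction, but drawn on the 2D screen plane (blood on the "lens")
  NonDiegetic, // neither: the classic HUD overlay
}

// Illustrative helper: classify a UI element by those two axes.
function classify(inGameSpace: boolean, inFiction: boolean): Representation {
  if (inGameSpace && inFiction) return Representation.Diegetic;
  if (inGameSpace) return Representation.Spatial;
  if (inFiction) return Representation.Meta;
  return Representation.NonDiegetic;
}
```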
Non-diegetic representations remain the most common type of interface in games. In first person shooters, arguably the most immersive type of game since we usually see the scenery through our avatar’s eyes, the head-up display has remained an expected element ever since Wolfenstein 3D essentially created the genre.
Although we’re mainly talking about the diegetic space here, spatial representations are worth touching on. While not truly diegetic, they nonetheless embed interface elements in a less disruptive way by integrating them more seamlessly into the 3D world. Mirror’s Edge uses this to great effect, visually highlighting objects as a navigation aid for the player as they traverse complex urban settings in creative ways beyond plain running.
If we’re aiming for immersion, though, the interface will usually have to be at least partially diegetic (and Mirror’s Edge actually does make use of such elements as well); that is, the interface is visible to the character in the game and exists “physically,” so to speak, in that world. There has been some discussion recently on the merits of diegetic representations of information versus the more traditional non-diegetic type (typically HUDs). That there is a debate on this issue at all reflects the progress graphics technology has made recently: for most of the history of video games, diegetic interfaces were impractical or plain impossible because of computing and hardware limitations.
Here is a fairly recent example of a diegetic interface in the Need for Speed series that illustrates a common problem with in-game interfaces. Even though I initially liked the realistic cockpit view, I eventually found myself using the HUD mode most of the time because it was simply easier to read than the digitized instrument panels.
In comparison, it’s worth noting the more effective use of the actual rear-view mirrors for their intended purpose. Traditionally, these auxiliary views were featured in a non-diegetic window overlay, presumably because of the difficulty of rendering real-time scenes at non-orthogonal angles and within an amorphous frame.
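The technique that makes in-world mirrors practical today is render-to-texture: the scene is rendered a second time from a rear-facing camera into an offscreen buffer, which is then applied to the mirror geometry like any other texture, so the mirror can sit at any angle and take any shape. A minimal sketch using three.js; the camera placement and buffer size here are assumptions for illustration:

```typescript
import * as THREE from "three";

// Offscreen buffer that the mirror view is rendered into.
const mirrorTarget = new THREE.WebGLRenderTarget(512, 256);

// A second camera looking backward from roughly the driver's position
// (position and field of view are assumed values).
const mirrorCamera = new THREE.PerspectiveCamera(50, 512 / 256, 0.1, 1000);
mirrorCamera.position.set(0, 1.2, 0);
mirrorCamera.rotation.y = Math.PI; // face backward

// The mirror itself is ordinary scene geometry with the buffer as its
// texture, so it is not confined to a rectangular screen-space overlay.
const mirror = new THREE.Mesh(
  new THREE.PlaneGeometry(0.3, 0.15),
  new THREE.MeshBasicMaterial({ map: mirrorTarget.texture })
);

function renderFrame(renderer: THREE.WebGLRenderer,
                     scene: THREE.Scene,
                     mainCamera: THREE.Camera): void {
  mirror.visible = false;                 // don't sample the target we render into
  renderer.setRenderTarget(mirrorTarget); // first pass: the rear view
  renderer.render(scene, mirrorCamera);
  mirror.visible = true;
  renderer.setRenderTarget(null);         // second pass: the player's view
  renderer.render(scene, mainCamera);
}
```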
In the early 1990s, LucasArts’ X-Wing series used an in-game cockpit that was effective enough that most of the instrumentation had no alternative non-diegetic representation as an option. There is a tradeoff, though: unlike some driving games in which the view pans horizontally when the car takes a turn, the instruments here sit on a static background that we never see from any other angle.
One of the most graphically impressive (and hardware-demanding) games is Ubisoft Montreal’s Far Cry 2, in which several types of information are presented diegetically through their actual in-world medium (map, watch, GPS, and so on).
As Marcus Andrews points out in Gamasutra, Far Cry 2 does use a traditional HUD at times to complement the diegetic elements. He views this as a failure on the designers’ part to express all information diegetically, but others have argued that the designers may simply have determined that combining the two was the most effective way to balance immersion with clarity of information.
There are games that have a fully diegetic interface. The most notable recent example is Dead Space, which set an explicit direction from the start of its development to have all interactions take place within the fictional world.
This led to clever but logical ways to represent game mechanics traditionally displayed in HUDs. For example, a tube filled with a luminous liquid on the avatar’s back acts as a health indicator, while hints for the player are delivered as messages from other characters in the game.
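The mechanism behind such an indicator is simple to sketch: rather than drawing a bar on a screen overlay, the game writes the same health value into a parameter on an object that physically exists in the world. A rough, engine-agnostic illustration; the Avatar and TubeMaterial types below are hypothetical, not Dead Space’s actual code:

```typescript
// Hypothetical material handle on the avatar's back-mounted health tube.
interface TubeMaterial {
  setFillLevel(fraction: number): void; // 0 = empty, 1 = full; drives a shader
}

interface Avatar {
  health: number;     // current hit points
  maxHealth: number;
  tube: TubeMaterial; // the in-world indicator on the avatar's back
}

// Called whenever health changes: the "UI" update is just a change
// to an object that physically exists in the game world.
function updateHealthDisplay(avatar: Avatar): void {
  const fraction = Math.max(0, Math.min(1, avatar.health / avatar.maxHealth));
  avatar.tube.setFillLevel(fraction);
}
```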
Interestingly, this has been partly facilitated by not using the standard first person perspective of First Person Shooters (hence the name). Instead, Dead Space uses a third person tracking view in which the player’s character is visible from behind and the game is played mainly by looking over the character’s shoulder. Unlike a straight first person view, where the hologram screen would take up most of our field of view, this perspective lets the player use the interface while still watching the area ahead. (This can be critical, because the monsters in Dead Space won’t wait around while you’re busy saving your game.)
Finally, there’s Aliens: Colonial Marines. As the in-game still shows, there are no non-diegetic elements present in the first person view. Instead, the avatar holds a device that will be immediately familiar to anyone who has seen Aliens. Even in the film, the motion tracker served as an effective diegetic interface, since the audience could clearly understand what the moving dots on its display meant.
While it’s a little odd that it took a quarter century for someone to make a true video game adaptation of a film that has more or less served as a template for the First Person Shooter genre, it’s just as well: even ten years ago, it would have been impossible to render a motion tracker so realistically and effectively in a game.
Technology finally seems to be overcoming the restrictions that have kept diegetic interfaces limited to gimmickry until now. While still in its infancy, the push to duplicate more of our natural interactions with our environment seems to be gaining momentum, as evidenced by new products built on non-traditional interaction models. Most of them, like the popular Nintendo Wii, have yet to deal with immersion in terms of interfaces. Microsoft, on the other hand, whose controller-free Kinect technology is about to enter the market, has stated its intention to eliminate what it calls the “barrier” between the player and the game world.
While researching this article, I was curious to see how other professions are making use of this technology. To my surprise, I struggled to find examples. While 3D has been widely adopted in other ways by fields like architecture and medicine, the value placed on presenting complex information in more immersive ways still seems to be largely limited to entertainment media.
How can this technology be applied outside gaming, and what value would it provide that other alternatives cannot?
In a way, diegetic interfaces have been used for training purposes since the first flight simulators were created to teach pilots to fly by instruments. The instruments may not have been represented on a digital display, but they were complex interfaces in an immersive simulated environment, which fits the definition of a truly diegetic interface at least as much as the in-world devices in Dead Space.
Modern professional flight simulators still use physical interfaces. Unfortunately, while such a simulator may cost less than using a real airplane, its price has remained out of reach for most people. That changed when PC-based flight simulation software brought the experience, complete with diegetic instrumentation, to the mass market.
Why not extend this to other types of complex products as well? At Cooper, we’re currently working on a life-supporting medical device used on critical patients in operating rooms and intensive care units. Not surprisingly, training people to use this device is not a simple task. Let’s say there are three components required to create an effective training scenario for this type of product (a rough sketch in code follows the list):
- The user interface, which in this case is likely to be a touchscreen plus separate hardware buttons.
- Basic hardware interactions, like handling storage compartments, cables, and other parts that require physical manipulation.
- An environment that is frequently unpredictable and chaotic, including other people with different personalities.
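To make the analogy to game interfaces concrete, here is a minimal sketch of how a training simulation might model those three layers. All of the names are hypothetical; none of this describes our actual project:

```typescript
// Hypothetical model of a diegetic training simulation for a medical device.

interface TouchscreenUI {        // layer 1: the device's own interface,
  press(button: string): void;   // reproduced exactly as on the real product
}

interface PhysicalTask {         // layer 2: physical interactions
  description: string;           // e.g. "connect the backup power cable"
  completed: boolean;
}

interface EnvironmentEvent {     // layer 3: the unpredictable environment
  atSecond: number;              // when the event fires during the session
  description: string;           // e.g. "surgeon requests a status update"
}

interface TrainingScenario {
  ui: TouchscreenUI;
  tasks: PhysicalTask[];
  events: EnvironmentEvent[];    // injected on a schedule to simulate chaos
}
```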
I’m interested to hear whether anyone is aware of other applications of diegetic user interfaces. How else might these ideas be applied?