Interactive Game Show

Description

The goal of the project will be to simulate video game interactivity over the broadcast medium. We will not assume a point-to-point back channel (phone line, Internet connection, etc.) although that would only add further potential to our extensible system.

Consider a shooting game, common in video game arcades. These games use a first-person perspective, with targets and other sprites rendered over a background, which is usually also a rendering of 3D objects.

To adapt this experience to the broadcast arena, we need to send all the information via the broadcast, then allow the receiving client to filter and interpret the information, render the scene and the sprites, and resolve the effects of user interaction, such as breaking a window or shooting an enemy.

Furthermore, to accommodate the existing "channel surfing" paradigm, enough information must remain accessible over time to allow someone to enter the game late.

The main concentration of this project will be exploring the limits to which client-side processing and I/O can be used to interact with the broadcast medium.

Video Stitching

The idea here is to provide a 3D backdrop for our game. Ideally, we would like a complete surround video panorama. And on top of that, we would overlay our 3D sprites and objects, that move around in this world.

Since video panoramas are not our area of expertise or focus, ideally we would borrow this technology from another group. In the meantime, we may simply implement fixed one-color background tiles, or a single frame of background (a la Quicktime VR). Later on, we can implement the full video panorama.
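
To illustrate the single-frame fallback, here is a minimal sketch, assuming a still cylindrical panorama covering 360 degrees horizontally; all names are hypothetical, not part of any existing format:

    # Minimal sketch: map the player's view yaw onto a cylindrical panorama.
    # Assumes a single still image covering 360 degrees horizontally; the
    # names here are hypothetical, not part of any existing format.

    def visible_columns(yaw_deg, fov_deg, pano_width):
        """Return (start, end) pixel columns visible at the given yaw for a
        viewport with the given horizontal field of view."""
        left = (yaw_deg - fov_deg / 2.0) % 360.0
        start = int(left / 360.0 * pano_width)
        span = int(fov_deg / 360.0 * pano_width)
        return start, (start + span) % pano_width  # the view may wrap past column 0

    # Example: 3600-pixel panorama, player facing 350 degrees with a 90-degree FOV.
    print(visible_columns(350.0, 90.0, 3600))  # -> (3050, 350), wrapping around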

Transmission

We must transmit global data continuously. As such, it is most efficient if the global data requires few bits to represent, but can change over time, since it needs to be carouselled anyway.

A big challenge will be adapting old formats or inventing new ones to deliver concise representations of objects at the right time (i.e., as close as possible to when they are needed). The format used will have to handle 3D data, animation data, and interactivity data in as little bandwidth as possible (since the video background will occupy quite a bit).
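
As a concrete sketch of the carousel idea (all names hypothetical, not a real format): small global records are re-sent every cycle, while larger one-shot objects are slotted into whatever bandwidth remains in each cycle.

    # Minimal sketch of a broadcast data carousel (all names hypothetical).
    # Small global records are re-sent every cycle; one-shot objects fill
    # whatever per-cycle bandwidth remains.

    def carousel_cycles(global_records, one_shot, bytes_per_cycle):
        """Yield one list of records to broadcast per carousel cycle."""
        pending = list(one_shot)
        while True:
            cycle = list(global_records)              # always re-send global state
            budget = bytes_per_cycle - sum(len(r) for r in cycle)
            while pending and len(pending[0]) <= budget:
                budget -= len(pending[0])             # fit one-shot objects greedily
                cycle.append(pending.pop(0))
            yield cycle
            if not pending:
                break

    globals_ = [b"scene-id:7", b"clock:120"]
    objects = [b"mesh:door" * 3, b"sprite:wario" * 2]
    for i, cycle in enumerate(carousel_cycles(globals_, objects, 64)):
        print(i, [r[:12] for r in cycle])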

Channel Surfing

The basic problem with the broadcast medium is that the user/viewer can decide to join a channel at any time. Therefore, our gaming architecture needs to be robust enough to support random channel surfing.

What if a user joins our channel in the middle of a game? Then, slowly, the 3D scene would build up, and sprites would start appearing. However, any sprite caught in mid-transmission will be lost and discarded; only newly transmitted objects will appear.
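
A minimal sketch of this receiving behavior, assuming objects arrive as (id, sequence, total, payload) fragments (a hypothetical framing, not an existing format):

    # Minimal sketch of a late-joining receiver (names hypothetical).
    # A client that tunes in mid-object never sees sequence number 0,
    # so that object is silently discarded.

    def receive(fragments):
        partial = {}    # object id -> payloads collected so far
        complete = {}
        for obj_id, seq, total, payload in fragments:
            if seq == 0:
                partial[obj_id] = [payload]        # start fresh on a first fragment
            elif obj_id in partial and len(partial[obj_id]) == seq:
                partial[obj_id].append(payload)    # in-order continuation
            else:
                partial.pop(obj_id, None)          # mid-stream join or gap: discard
                continue
            if len(partial[obj_id]) == total:
                complete[obj_id] = b"".join(partial.pop(obj_id))
        return complete

    # Tuning in after fragment 0 of "wario" was sent: only "door" survives.
    stream = [("wario", 1, 2, b"-tail"), ("door", 0, 2, b"door"), ("door", 1, 2, b"-mesh")]
    print(receive(stream))  # -> {'door': b'door-mesh'}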

This also complicates our attempts at interactivity and playing with the time dimension, since channel surfers will miss out on later events if they require earlier actions. For example, if the player needs to shoot Wario fifteen minutes into the program in order to have a chance at saving the princess, a player who tunes in twenty minutes into the program may receive an alternate ending (the infamous "thank you Mario, but our princess is in another castle!").

Rendering

The environment must be rendered in real time. Existing technologies such as Quicktime VR and the DOOM engine provide panoramic and full 3D experiences, respectively, and suit game play extremely well. However, they may require more data than is available (when a channel is changed, for example).

Also, it is possible that the data may be received in different formats (e.g., a panorama for the backdrop, 3D models of objects, and MPEG sprites). In that case, the information must be composited in a visually acceptable way.
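
One simple approach is the painter's algorithm: reduce each decoded source to an image plus a depth, then draw back to front. A minimal sketch, with hypothetical names and the actual pixel blending stubbed out:

    # Minimal sketch of compositing mixed-format layers (names hypothetical).
    # Each decoded source (panorama backdrop, 3D model render, MPEG sprite)
    # is reduced to an (image, depth) pair and painted back to front.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str
        depth: float     # larger = farther from the viewer
        pixels: object   # decoded frame; type depends on the source format

    def composite(layers):
        """Paint layers back to front (painter's algorithm)."""
        frame = []
        for layer in sorted(layers, key=lambda l: -l.depth):
            frame.append(layer.name)   # stand-in for an actual blit/blend
        return frame

    scene = [Layer("wario-sprite", 5.0, None),
             Layer("panorama", 100.0, None),
             Layer("window-model", 20.0, None)]
    print(composite(scene))  # -> ['panorama', 'window-model', 'wario-sprite']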

Interactivity

One of the benefits of starting on a video game is that the interaction is limited and well defined. We have a one-player environment: movement entails rendering the scene from a different point of view. For a shooting game, fixed point rotation may be suitable, which means panoramas suffice for the environment and active objects (e.g., enemies) do not have to react to the user's movement. However, having the client make objects respond to the user's movement in a 3D space would only require the format to specify an object's behavior.

The following list describes behavior that the client must resolve in order to interact with the user:

  1. Triggers (e.g., destroyed planes explode)
  2. Conditionals (boss monster appears only when other creatures are dead)
  3. Motion of objects (static: a fixed path could be part of the global data; dynamic: formulae as a function of user position, or a markup language?)
  4. Game mechanics (e.g., the enemy shoots at you every 2 seconds, and will hit if he has line of sight)
  5. Appearance (MPEG, 3D model, local information?)
  6. Input/output


How do we resolve these effects on the client side, considering there is no feedback to the broadcaster, only local processing?
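
One plausible answer is to broadcast each object's behavior declaratively and let the client evaluate it against purely local game state every frame. A minimal sketch, with all field names hypothetical:

    # Minimal sketch of client-side behavior resolution (all names hypothetical).
    # Each broadcast object carries declarative triggers and conditions; the
    # client evaluates them against purely local game state every frame.

    def step(objects, state):
        """Resolve one frame of triggers/conditionals against local state."""
        for obj in objects:
            if not obj["active"] and all(state.get(c) for c in obj["appears_if"]):
                obj["active"] = True                 # conditional: boss appears
            if obj["active"] and state.get(obj.get("destroyed_by")):
                state[obj["on_destroy"]] = True      # trigger: set the explosion flag
                obj["active"] = False

    boss = {"active": False, "appears_if": ["grunts_dead"],
            "destroyed_by": "hit_boss", "on_destroy": "boss_dead"}
    state = {"grunts_dead": True, "hit_boss": True}
    step([boss], state)
    print(state)  # -> {'grunts_dead': True, 'hit_boss': True, 'boss_dead': True}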

Note that many of these interactivity issues also raise transmission problems: how do we send the information "just in time"? Data that describes an event must be sent before the event can occur, but sending it too early may result in it being missed by someone who tunes in late. Furthermore, the data must be scheduled so as not to exceed the bandwidth available. See also Transmission.
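
A minimal sketch of such scheduling, assuming fixed-size transmission slots and per-item deadlines (both hypothetical simplifications): items are packed earliest deadline first, each sent as late as its deadline allows.

    # Minimal sketch of "just in time" scheduling (all names hypothetical).
    # Each item has a size and a deadline (the slot by which the event needs
    # its data); earliest-deadline-first packing under a per-slot byte cap.

    def schedule(items, slots, bytes_per_slot):
        """Assign items to slots, earliest deadline first, as late as possible."""
        plan = {s: [] for s in range(slots)}
        used = {s: 0 for s in range(slots)}
        for size, deadline, name in sorted(items, key=lambda it: it[1]):
            # walk backward from the deadline to send as late as possible
            for s in range(min(deadline, slots - 1), -1, -1):
                if used[s] + size <= bytes_per_slot:
                    plan[s].append(name)
                    used[s] += size
                    break
            else:
                raise ValueError(name + " cannot meet its deadline")
        return plan

    items = [(40, 2, "wario-sprite"), (30, 2, "castle-mesh"), (50, 0, "intro")]
    print(schedule(items, slots=3, bytes_per_slot=64))
    # -> {0: ['intro'], 1: ['castle-mesh'], 2: ['wario-sprite']}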

Finally, we may wish to allow interaction aside from just game play. For example, users might supply their own textures, compete for scores, or even specify configuration options ahead of time.

Time Dimension

The idea here is to simulate low interactivity for the user/player through smart placement of objects and story-telling. For example, certain creatures/sprites would show up and be displayed only when certain conditions are met. Whether or not the conditions are ever met, the data would be sent, packaged in an elaborate data structure to keep track of state, dependencies, etc.

The issue here is how to design an effective data structure for each sprite, to capture all the possible conditions and store this metadata. Some objects may get broadcast early, to take advantage of open bandwidth. Some objects may never be used/displayed if conditions are not met.
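
As a minimal sketch of such metadata (all names hypothetical), each sprite could carry a time window, the state flags it depends on, and the flag it sets; this is already enough to support the alternate-ending example above.

    # Minimal sketch of per-sprite time/condition metadata (all names
    # hypothetical). Each entry records when it may appear, which state
    # flags it depends on, and which flag it sets when shown.

    SPRITES = {
        "wario":    {"window": (15, 20), "needs": [],             "sets": "wario_shot"},
        "princess": {"window": (35, 40), "needs": ["wario_shot"], "sets": "good_ending"},
        # mutually exclusive endings would also need negative conditions
        "other_castle_msg": {"window": (35, 40), "needs": [],     "sets": "alt_ending"},
    }

    def due_sprites(minute, state, shown):
        """Sprites whose time window is open and whose dependencies are met."""
        return [name for name, meta in SPRITES.items()
                if name not in shown
                and meta["window"][0] <= minute <= meta["window"][1]
                and all(state.get(dep) for dep in meta["needs"])]

    # A viewer who tuned in at minute 20 never set wario_shot...
    print(due_sprites(36, {}, shown=set()))  # -> ['other_castle_msg']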

With an elaborate system, we can simulate a multiple-paths scenario, where the user appears to have a choice in where to go, etc. The tradeoff is greater flexibility in the time dimension (possibly requiring lots of stored data and buffering) versus allowing channel surfers a full experience.

Content

Content creation may prove the true test of our design. While the background and 3D sprites are not difficult, the paths for the sprites and the conditions for interactivity may be problematic to coordinate. Additionally, this area overlaps with channel surfing and data transmission issues, since decisions in any one of the three areas can place restrictions on the others.

Ideally, the content creator would have a tool that makes decisions such as when to send sprite or interaction data transparently, so that the producer need not think about them; building such a tool may become part of the project if content creation proves sufficiently hard.

Last Updated: Feb. 25, 1999, [email protected]