
 

Passing the Bubble

 
 
     
Keywords: Augmented Video, Collaboration, Decision Making, Annotation, Situational Awareness
     
We study augmented video as a mechanism for improving collaboration and decision making. Our special focus is decision making that depends on decision-makers and information analysts sharing their understanding. The interaction between these two groups involves commanders passing their intent to their information analysts, then refining their plans and decisions on the basis of the information their analysts gather, often from UAVs. We examine how different ways of augmenting video differ in the cognitive efficiency with which they create shared understanding. Most people who have seen augmented video assume it is a powerful mechanism for communicating complicated objectives and facts about situations. But little or no work has been done on:

1. determining which of the many ways of augmenting video are most effective, and

2. developing a cognitive theory that explains why these different methods differ in their potency and cognitive efficiency.

Video, if properly annotated, promises to enrich and reshape collaborative exchanges. Our goal is to understand how to maximize the impact such videos have on collaboration.
Project Video



Find out more about how this study is conducted and analyzed.

122.0MB AVI (PC)
93.6MB MOV (MAC)
43.1MB MP4 (MAC)

     
 

Objectives

Our specific research aim is to determine how and when to use a collection of annotation techniques to effectively share situational awareness and understanding among geographically distributed team members. The situational awareness we are trying to pass is strategic and tactical, describing environmental conditions and future plans. The users of this knowledge in real-world contexts are decision makers (commanders), analysts, and task teams coming on duty, particularly when they cannot be face to face with each other. It is widely assumed that as video comes online, whether from helmet-mounted cameras or cameras on UAVs, information analysts and the decision makers they support will find it useful to add annotations. Yet little if anything is known about how to do this well. When are decision-makers best advised to use static annotations, and when should they use dynamic annotations? Which form of annotation is cognitively most effective? Presumably the best representation depends on the job it is to perform. Our goal in these studies is to simulate realistic planning contexts in order to determine how planners and analysts should annotate stills and videos to make better decisions.

 
     
 

Approach

To create a realistic but tractable experimental paradigm for studying the transfer of situational understanding, we designed an experiment in which one decision maker ‘passes the bubble of awareness’ to another decision maker in the context of a virtual-world computer game: StarCraft. This is a complex strategy game with high-quality graphics that requires substantial planning, expertise, and understanding of strategy. The world in which players live is generated by an extremely powerful graphics engine that retains perspective and three-dimensionality, not unlike the kind of data feed a UAV might produce.


The basic experiment proceeds like this: two subjects take over the stations of two players who have just left the game after playing for 15 minutes. One of the new players is the bubble receiver; the other is his opponent. The bubble receiver is shown a 4-minute video of the preceding game while the opponent waits in another room. There are nine conditions for this video: three control conditions in which 4 minutes’ worth of stills or video of the preceding game are voice annotated but not graphically annotated; three conditions in which static annotations are added to the voice-over; and three conditions in which dynamic (i.e., moving) annotations are added to the voice-over. After viewing the video, the bubble receiver answers a set of questions to test his grasp of the goals, intents, and strategies ("measured choices") that should have been communicated in the video and that will be relevant for the next several minutes of play. The two players then take over and compete for 5 minutes, whereupon the game is paused, the score is taken, and a screenshot is recorded. After that brief pause the players resume play until one wins.

At that point, cameras are set up to record two debriefing sessions. In the first, the bubble receiver is shown the 4-minute video again and asked to comment on the accuracy of the information, what helped, and what did not. These interviews help us learn which techniques in the videos are effective and which content players prefer. In the second debriefing session, both players watch the first 5 minutes of their own play and comment on the pieces of information they would have liked to have had; the opponent often says he wishes he had known some facet of what the bubble receiver did know, and together the two talk in a way that provides a subjective estimate of what is and is not useful in the video stimulus.
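
For concreteness, the sketch below enumerates the nine stimulus conditions and the measurements collected in a session. It is an illustrative sketch only: the factor labels, the generic "variant" second factor (whose exact interpretation is not fully specified above), and the field names are our own, not part of any actual experiment software.

```python
"""Illustrative sketch of the 'passing the bubble' session design."""
from dataclasses import dataclass
from itertools import product

# Three annotation levels crossed with three variants gives the nine conditions.
ANNOTATION_LEVELS = ["voice_only", "voice_plus_static", "voice_plus_dynamic"]
VARIANTS = ["A", "B", "C"]  # placeholder for the unspecified second factor

CONDITIONS = list(product(ANNOTATION_LEVELS, VARIANTS))  # 9 conditions in all

@dataclass
class Session:
    condition: tuple             # (annotation level, variant)
    quiz_score: float = 0.0      # grasp of goals/intents after the 4-minute video
    score_at_5_min: int = 0      # game score when play is paused
    receiver_won: bool = False   # outcome once play resumes to completion

def assign_condition(session_index: int) -> tuple:
    """Rotate through the 9 conditions so each is used equally often."""
    return CONDITIONS[session_index % len(CONDITIONS)]

if __name__ == "__main__":
    # Toy run: placeholder records for 18 sessions (2 per condition).
    sessions = [Session(condition=assign_condition(i)) for i in range(18)]
    for s in sessions[:3]:
        print(s.condition)
```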


In an experiment of this complexity it is vital to control for as many factors as possible. The first thing we try to control is the quality of the stimuli. Stimuli are made by having two experts play against each other for about 15 minutes. At a natural break point the game is frozen, and one of the two players makes a storyboard of the key points to encode in a stimulus. The resulting stimulus is then sent to another expert we have hired, who reviews it and compares it to stimuli other expert makers have created. The factors that are quality controlled are general media quality, clarity of speech (if necessary, a new voice-over is added), duration, and a few other formal factors unconnected with the actual choice of content. To control for the player expertise of the subjects, we put together a team of three experts who played the subjects repeatedly and assigned them an expertise measure. Since then we have devised a statistical technique for evaluating subject expertise that conforms to the judgments of our human judges. To control for the natural bias in each game (a natural result of two players competing to win and stopping the game at an arbitrary time rather than at a set score), we had a set of control subjects play the game without the presentation. This shows us the natural bias in the game, independent of the measure revealed by the score alone.
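
The statistical expertise measure mentioned above is not described in detail here; the sketch below shows one generic way such a measure could be calibrated against the three judges' ratings, by fitting in-game features to the judges' scores with ordinary least squares. All feature names and numbers are made-up placeholders, not the project's actual measure.

```python
import numpy as np

# Illustrative in-game features per subject: actions per minute,
# resources gathered, units lost. Values are invented for the sketch.
features = np.array([
    [110, 3200,  8],
    [ 60, 1500, 20],
    [ 95, 2800, 11],
    [ 40, 1100, 25],
], dtype=float)

# Mean expertise rating from the three human judges (1-10 scale, invented).
judge_ratings = np.array([8.5, 4.0, 7.0, 2.5])

# Fit a linear model: expertise ~ features (plus an intercept column).
X = np.hstack([features, np.ones((features.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(X, judge_ratings, rcond=None)

def predict_expertise(feature_row):
    """Predicted expertise for a new subject from the same in-game features."""
    return float(np.append(feature_row, 1.0) @ coeffs)

print(predict_expertise([80, 2000, 15]))
```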


One important factor not controlled for is the content stimulus makers choose to present in their video presentations. Naturally we have made an effort to counterbalance the experiments by using different stimulus makers for each condition. This is not yet perfectly counterbalanced because we have had to throw out some stimuli in the quality control phase. More importantly, we are now developing a list of the informational elements it is useful to know about a game, so that we can review the stimuli and weight them according to the information elements they contain. One fact that makes this additional control even more difficult is that we have observed that intermediate players weight the value of such information elements differently than experts do. This means we will need to run more intermediates to separate out the bias that expertise places on the value of information.
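
The informational-element weighting described here could be recorded very simply; the sketch below scores each stimulus by summing weights over the elements it contains, with separate weight tables for experts and intermediates to reflect the difference noted above. The element names and weights are placeholders, since the project's element list is still being developed.

```python
# Hypothetical informational elements and weights; real values would come from
# the element list under development and from expert/intermediate ratings.
ELEMENT_WEIGHTS = {
    "expert":       {"enemy_position": 3.0, "own_economy": 2.0,
                     "tech_tree": 2.5, "planned_attack": 3.5},
    "intermediate": {"enemy_position": 3.5, "own_economy": 3.0,
                     "tech_tree": 1.0, "planned_attack": 2.5},
}

def stimulus_score(elements_present, audience="expert"):
    """Weighted information content of a stimulus, given the elements it mentions."""
    weights = ELEMENT_WEIGHTS[audience]
    return sum(weights.get(e, 0.0) for e in elements_present)

# Example: the same stimulus scores differently for experts and intermediates.
stimulus = ["enemy_position", "tech_tree"]
print(stimulus_score(stimulus, "expert"))        # 5.5
print(stimulus_score(stimulus, "intermediate"))  # 4.5
```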

A final control we have begun to use, and which is proving extremely informative, is to compare the performance of bubble receivers when the bubble presentation is made live by the person whose game they are taking over. In this case the presentation is interactive, and dialogue between receiver and transmitter is possible, which means presentations are adapted to the needs of each bubble receiver. Nonetheless, we see these live presentations as the ultimate target for our canned presentations. By analyzing the information represented in these live stimuli, we get another measure of what bubble receivers want to know.
Our next steps are to:

• increase the number of experiments run,

• increase the number of control subjects run,

• add a new control condition (speech only),

• increase the number of live experiments run and run them under different conditions (such as when the bubble receiver, rather than the bubble passer, is the dominant partner and asks questions),

• re-analyze and quality control all stimuli to ensure greater uniformity,

• re-analyze all experimental results using the knowledge items present in each stimulus as a factor, after tagging the knowledge items in all stimuli (see the sketch after this list),

• review all stimuli and enumerate the graphical annotation type used for each knowledge item, and

• run many more intermediates to determine whether intermediates have different preferences than experts for stimuli and annotation types.
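
The planned re-analysis with knowledge items as a factor could be set up along the following lines. This is a sketch only: the toy records, column names, and model formula are our own guesses at how such an analysis might look, not the project's actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy records: one row per session, with the annotation condition and the
# number of tagged knowledge items in the stimulus that session used.
data = pd.DataFrame({
    "receiver_score": [12, 18, 15, 22, 9, 20, 14, 25],
    "condition": ["voice", "static", "dynamic", "dynamic",
                  "voice", "static", "voice", "dynamic"],
    "n_knowledge_items": [4, 6, 7, 9, 3, 8, 5, 10],
})

# Linear model: receiver performance as a function of the annotation condition
# (categorical) plus the knowledge-item count of the stimulus.
model = smf.ols("receiver_score ~ C(condition) + n_knowledge_items", data=data).fit()
print(model.summary())
```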

 
     
     
     
     
Project Team
 
 
David Kirsh
(202) 623-3624
Office: CSB173
kirsh@ucsd.edu
 
Thomas Rebotier
(858) 822-2475
Office: CSB151
rebotier@cogsci.ucsd.edu
 
 
Bryan Clemens
(858) 822-2475
Office: CSB 206A
bclemons@cogsci.ucsd.edu
 
Andy Guerrero
(858) 822-2475
Office: CSB 206B
adguerre133@hotmail.com
 
 
Shawn Oksenendler
setme123@hotmail.com
 
 
 
Ryan Shelby
rdshelby@ucsd.edu