Context Aware Office
Personal Metadata

Keywords: Digital Metadata, AI
How can we improve people's work by adding digital metadata to their environment? Metadata, construed broadly, is information about objects and their related events and processes. The most familiar forms of metadata are the subject, title, author, date, and related descriptions of articles, papers, and books. But the concept is comprehensive enough to include annotations found on a document, data about how a document has been used (when, by whom, and how often), and even where it is to be found on a desk or in a library. My objective in this research project is to explore how to digitally augment work environments, using AI techniques to track some of the significant interactions that occur in them.
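The kinds of metadata enumerated above (descriptive fields, usage history, and physical location) can be pictured as a single record per object. The following is a minimal sketch in Python; all class and field names are illustrative assumptions for this project description, not part of any existing system:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UsageEvent:
    """One interaction with an object: who touched it, when, and how."""
    user: str
    timestamp: datetime
    action: str  # e.g. "read", "annotated", "moved"

@dataclass
class ObjectMetadata:
    """Metadata record covering the three kinds of metadata in the text:
    descriptive fields, usage history, and physical location."""
    title: str
    author: str
    subject: str
    annotations: list = field(default_factory=list)  # notes written on it
    usage: list = field(default_factory=list)        # UsageEvent history
    location: str = "unknown"                        # desk, shelf, URL, ...

    def use_count(self, user=None):
        """How often the object has been used, optionally by one person."""
        return sum(1 for e in self.usage if user is None or e.user == user)

# Example: a paper on a desk with two recorded interactions.
paper = ObjectMetadata(
    title="Distributed Cognition in the Office",
    author="A. Researcher",
    subject="metadata",
    location="desk, left pile",
)
paper.usage.append(UsageEvent("alice", datetime(2004, 3, 1), "read"))
paper.usage.append(UsageEvent("alice", datetime(2004, 3, 2), "annotated"))
print(paper.use_count("alice"))  # 2
```

The point of the sketch is only that "who, when, and how often" is usage metadata alongside the familiar bibliographic fields, so both can live in one record and be queried uniformly.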
The Problem

Metadata has long been used to augment physical environments. Libraries, in particular, need a method by which books and other artifacts can be efficiently tracked and identified by multiple attributes. Standards such as the Open Geospatial Interoperability Standard (OGIS) have been developed to solve specific, surface-level problems: facilitating the interoperability of data across sources. The World Wide Web attempted to move beyond this basic level of metadata; Tim Berners-Lee developed it with the vision of metadata linking documents together in an infinitely large semantic web. Yet there has been little attempt at recognizing and understanding the deeper structure of local environments, whether digital or physical.

Figure 1 is an artist's depiction of a future distributed hybrid environment in which team members in offices at UNC and CMU rely on virtual objects to help them collaborate. Such research may be a forerunner of the future, but it puts too little emphasis on the cognitive questions that arise when people use digital tools to augment their physical environments. For example: What kinds of mental representations of the environment and its contents are the collaborators sharing? Is their conceptualization of the same information identical? What factors influence the implicit understanding of these virtual objects? Do they affect all people in the same way? What deeper structures underlie the interaction between the objects and the environment, and how can they be modeled? How can we use cognitive engineering to optimize and record the mental projection of metadata onto these objects? Can we store this metadata in a natural way to facilitate better and easier retrieval later?

People project structure onto their environment, saturating objects with meaning: the document I last touched, the article that X gave me, the paper I put all my annotations on. All these descriptions are associations and attributes that help people manage the resources around them. Often they are work related, task specific, personal, and context relative. The power of distributed cognition comes from people's ability to communicate their projected structures, or metadata, to aid others. This communication may be given explicitly by the individual, or implicitly through the situation, common beliefs, cultural understandings, and so on. How can we harness these properties with digital support to facilitate collaboration as well as understanding of our own environment? Can a system discover an effective description of the environment that allows machine inference of the function of office resources, even when that function is not obvious to a casual observer? This project aims to address these issues in a way that optimizes workflow by solving a small part of a much bigger system.
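The context-relative descriptions people use ("the document I last touched", "the article that X gave me") can be read as queries over projected metadata. A short sketch of that idea, with entirely hypothetical field names chosen for illustration:

```python
from datetime import datetime

# Illustrative records: each object carries the projected associations
# mentioned in the text (last touch, provenance, annotations).
objects = [
    {"title": "Draft report", "last_touched": datetime(2004, 3, 5),
     "given_by": None, "annotated": True},
    {"title": "Review article", "last_touched": datetime(2004, 3, 2),
     "given_by": "X", "annotated": False},
]

def last_touched(objs):
    """'The document I last touched': the most recent interaction wins."""
    return max(objs, key=lambda o: o["last_touched"])

def given_by(objs, person):
    """'The article that X gave me': filter on provenance."""
    return [o for o in objs if o["given_by"] == person]

print(last_touched(objects)["title"])                # Draft report
print([o["title"] for o in given_by(objects, "X")])  # ['Review article']
```

A system that captured such associations implicitly, rather than relying on manual tagging, would be answering exactly the kind of retrieval question the paragraph above raises.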
Project Team