Volume 4 Issue 2
Fall 2008
ISSN 1937-7266

Understanding System Implementation and User Behavior
in a Collaborative Information Seeking Environment

Chirag Shah

School of Information & Library Science (SILS)
University of North Carolina
Chapel Hill, NC, USA
chirag@unc.edu

ABSTRACT

Typically, an information seeking or retrieval process is considered a single-person activity. There are, however, situations in which collaborating with others is beneficial. Unfortunately, the majority of IR systems in practice are designed for individuals. I propose to study the process of multiple users working together toward the same information seeking goal, called Collaborative Information Seeking (CIS). My objective in this research is not only to understand user behavior in a collaborative environment, but also to build and evaluate systems that support such collaboration. I plan to carry out this work through a series of user studies. Details of the first study that I have designed are presented here. In addition, I have identified some key references from the literature and produced a set of questions and hypotheses that I would like to address.

Categories and Subject Descriptors

H.3.3 [Information Search and Retrieval]: Search Process; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces—Collaborative computing, Computer-supported cooperative work

General Terms

Algorithms, Design, Human Factors, Performance

Keywords

Collaborative Information Seeking, User Study, Evaluation

1 Introduction

It is well said that two minds are better than one. When it comes to accessing or processing some information, it seems natural to expect that when people work together in collaboration they (1) can accomplish more, (2) benefit from other people’s experience and expertise on the given topic, and (3) influence each other and develop a more profound understanding of the subject than when they are isolated.

While collaborating with others makes sense in many situations, the value of collaboration is often overlooked. Twidale and Nichols [22] pointed out the problem that “The use of library resources is often stereotyped as a solitary activity, with hardly any mention in the substantial library science and information retrieval literature of the social aspects of information systems.” They argued that introducing support for collaboration into information retrieval systems would help users learn and use those systems more effectively. They further claimed [23] that a truly user-centered system must acknowledge and support collaborative interactions between users, and showed that users often wish to collaborate on search tasks. Based on their extensive study of patent office workers, Hansen and Jarvelin [6] likewise concluded that the assumption that information retrieval performance is purely individual needs to be reconsidered. While collaboration has attracted considerable attention lately, as we will see, much remains to be done to address even the fundamental issues in this field.

I propose to study a model of Collaborative Information Seeking (CIS) in which a set of users collaborate based on a common information need. To validate this model, I propose a set of specific situations that serve as instances of the model. Together these form my overall research problem, presented below.

Situation: Two or more users have the same information need and are working together in the same time-frame, seeking as well as processing and organizing that information. Some examples:

  • A group of students working on a class project. They have the same goal, but may have different expertise.
  • A couple looking to buy a house. They both want the same thing, but the husband is more concerned about the financial aspects and the wife is looking for a good neighborhood.
  • A patron trying to find some information on a topic with the help of a librarian. The patron is the subject expert and the librarian is the search expert.
  • A set of users watching the same video and commenting on or tagging it. They are all consuming the same information, but may have different opinions about it.

Difficulty: There is a lack of specially designed tools to facilitate collaboration among a set of users for information seeking. Most search services are designed with a single-user environment in mind. People have been using general-purpose tools such as email and IM to collaborate online, and there is an absence of integrated environments that not only support search and communication, but also help collaborators discover and learn about information that they would not have found working in isolation.

The notion of collaboration itself is not well-understood. In addition to this, designing tools that seamlessly facilitate collaboration among a set of users by integrating their actions (comments, tags, judgments, etc.) is hard.

Example:
Imagine a team of analysts surveying the market for a product. First off, we do not want them to repeat each others’ efforts, unless it is for verification. Secondly, given that we successfully distribute work among them, how do we combine and redistribute the results for their individual analysis? Can we divide the task, execute the requests, and distribute the results based on different aspects of the product? How do we aggregate all the information found and evaluate for commonality and conflicts? How can we facilitate effective communication among the analysts? What can we say about the efficiency and effectiveness of this collaboration?

There are several situations such as this in our day-to-day information seeking tasks that require us to collaborate with others. Even though there is a lack of tools that support collaborative information seeking, people still collaborate in an ad hoc manner using traditional tools [21]. Given that the need to collaborate persists, that the behavior of users in collaborative environments remains under-studied, and that the tools to facilitate collaborative information seeking leave much to be desired, my proposed research is both timely and important. The objective of my research is three-fold: (1) build systems and interfaces that support collaborative information seeking, (2) study user behavior in a collaborative environment, and (3) develop methodologies and evaluations for studying such environments.

This article presents some of the key research related to my proposal (Section 2), enumerates a set of research questions and hypotheses that I would like to address (Section 3), and outlines the methodology that I plan to employ (Section 4). Some of the issues that I would like to bring up for discussion are presented in Section 5.

2 Background and related work

Work on collaborative information seeking in the literature spans from theoretical ideas about collaboration to practical systems implementing collaborative environments. Here I summarize work from three major sub-areas related to my proposed research.


Figure 1: Coagmento - a prototype for collaborative information seeking and information sharing

2.1 Conceptual aspects of collaboration

Collaboration can be studied as a concept without a specific application or context. Work in this area focuses on understanding how and why people collaborate, and provides some basic terminology. Defining the act of collaboration itself can be challenging; the term ‘collaboration’ has been used very loosely in the IR literature. In a sense, a recommender system such as Amazon.com can be called collaborative, even though the collaboration among its users is passive or implicit. If we accept that, most social networking sites fall under collaborative environments. Malone [13] presented coordination theory, defining coordination as the additional information processing performed when multiple, connected actors pursue goals that a single actor pursuing the same goals would not perform. This raises the interesting question of whether collaboration is a kind of coordination among users. Understanding whom people collaborate with, and why, is also important, as the context or situation is critical to the success of a collaborative system. One common setting for collaboration is an office environment where colleagues working on the same project collaborate to achieve a goal. For instance, Hansen and Jarvelin [6] studied collaborative behavior among co-workers in a patent office. From their studies, it became clear that collaboration in such an environment was inevitable. But will people collaborate with strangers if it could be beneficial?

2.2 Implementation and applications of collaborative environments

A good deal of work on implementing collaborative systems has centered on reformulating a user’s search requests based on other users’ requests with the same or similar search goals. Klink [9] and Hust [7] showed how queries can be reformulated in a collaborative IR environment to achieve better performance. My own past work has shown that combining queries and/or result lists from different users can improve retrieval performance over that achieved by any of the individuals [16].
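The combination method in [16] is based on algorithmic mediation, whose details are beyond this summary. As a rough illustration of the general idea only (this is my sketch, not the method from [16]), ranked result lists from two collaborators can be merged with simple reciprocal-rank fusion:

```python
from collections import defaultdict

def fuse_result_lists(result_lists):
    """Combine ranked result lists from several collaborators.

    Each document receives the sum of its reciprocal ranks across
    all lists, so items ranked highly by several users rise to the top.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / rank
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

# Two collaborators ran different queries on the same topic
user_a = ["d3", "d1", "d7"]
user_b = ["d1", "d5", "d3"]
print(fuse_result_lists([user_a, user_b]))  # ['d1', 'd3', 'd5', 'd7']
```

Here the document that both users ranked highly ("d1") surfaces first in the fused list, which is the intuition behind combining evidence from multiple collaborators.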

There are a large number of applications designed for collaborative information processing. The majority of these works are extensions of traditional information seeking models, while others are based on specific applications and situations. They fall under two main categories: (1) collaborative information seeking and retrieval, and (2) co-browsing or social navigation. Kuhlthau [10] presented the Information Search Process (ISP) model to explain information seeking from the user’s perspective. She later extended this model to bridge the gap between information seeking and retrieval [11]. This model inspired Hyldegard [8] to study the information seeking and retrieval process in a group-based educational setting. Similarly, Smyth et al. [20] developed the I-Spy system, which applies a concept similar to that of Stuff I’ve Seen [4] in a group setting. A stream of research from the CIR group at the University of Washington [5, 17] studied situations where members of a work-team seek, search, and use information collaboratively, and showed how such a process can be realized in a multi-team setting. Romano et al. [15], like many others, took ideas from traditional IR and extended them to the collaborative environment. Very recently, Morris and Horvitz [14] presented the SearchTogether system, which allows a group of users to perform search and retrieval in a collaborative environment. The idea of providing a common browsing environment for a group of people has been very popular, and many works have explored it [12, 20]. Root [18] introduced the concept of social browsing, which Donath and Robertson [3] carried forward several years later as The Sociable Web, which lets a user know that others are currently viewing the same webpage and communicate with those people.

2.3 Evaluation of collaborative environments

Evaluating a collaborative system is hard because both an individual’s performance and a team’s performance must be assessed with respect to the system and the tasks at hand. Baeza-Yates and Pino [2] presented some initial work on a measure that extends the evaluation of a single-user IR system to a collaborative environment. While their proposal was based on retrieval performance, Aneiros and Estivill-Castro [1] proposed evaluating the “goodness” of a collaborative system through usability. The majority of the work reported in the literature that has attempted to evaluate the effectiveness of a collaborative system has looked at the usability of the collaborative interface. For instance, Morris and Horvitz [14] tested their SearchTogether system with a user study to evaluate how users utilize the various tools offered in the interface and how those tools affect the act of collaboration. Smyth et al. [19] measured the effectiveness of a search system while using collaborative filtering. However, there has been very little work on evaluating a collaborative system from a retrieval point of view, or on testing its efficiency and effectiveness in a collaborative information seeking environment.

3 Research questions and hypotheses

From my literature review so far, it has become clear to me that the CIS problem needs to be examined from a fresh perspective rather than by simply extending a traditional IR framework to incorporate multiple users. There is also a lack of well-defined terminology, well-structured methodologies, and suitable evaluation measures for CIS environments. In addition, I recognize the need to study the following concepts.

  1. Ways of collaboration
  2. User behavior in a collaborative environment
  3. Costs and benefits of collaboration in the context of task and situation

The last objective inherently involves evaluation. I propose to conduct a set of user studies and address the following questions, which in turn will help inform the larger objectives given above.

  1. What are the tools that are essential to CIS?
  2. What are the tools that are desired for CIS?
  3. How can we measure the costs and benefits of collaboration?
  4. What are the situations in which collaboration does not pay off?
  5. How can we measure the performance of a collaborative group?
  6. How can we measure the contribution of an individual in a collaborative group?

In addition, I am interested in testing the following hypotheses.

  1. Collaboration will be more effective and useful in a task that requires exploration rather than fact finding.
  2. Getting visual feedback on one’s partner’s actions will help avoid duplication of effort and allow the pair to cover more material in the same amount of time.
  3. Being able to consult one’s partner about documents will help in making better judgments.
  4. Users will be more productive in a collaborative environment.

4 Methodology



Figure 2: Toolbar available while viewing a document

In order to meet my objectives for this research, I shall conduct a series of user studies and iteratively develop a CIS system. This section provides details of the first study, which I have already started designing. For this study, I shall bring subjects into the lab in pairs, give them various information seeking tasks, and provide them with tools that let them work in collaboration. In other words, for this study, a team will consist of two co-located individuals working together at the same time on the same task.

The dependent variables of this study are group dynamics (measured by the kind and volume of interactions between the users), recall and precision of the results saved as relevant, knowledge gained on a given task, user satisfaction, and usability of the interface. The primary independent variable is the kind of task.

4.1 Collaborative system

I have implemented a prototype CIS system called Coagmento. In addition to a typical search engine interface, the system includes a set of tools for collaborating with another user. A snapshot of this interface is shown in Figure 1. The interface includes familiar features of a typical search engine, such as a query box and a results list. On the right side there is a chat panel, which a user can use to interact with their partner. Below this panel is a list of the queries that either user ran during a given session. Clicking on a query in this list brings up the results for that query; the main function of the list, however, is to provide a quick reference to the queries the team has already tried. Below this list are boxes for documents under review and documents found relevant. When viewing a document, a user can save it or move it to the ‘Discussion’ box for their partner to examine. These documents can easily be revisited by clicking on them in the given boxes. While viewing a saved document, or one kept for review, a user can also discard it (Figure 2).

In the results panel on the left, visual indicators mark the documents that have been viewed, kept under review, or saved by either member of the team.

The snippets feature is available for tasks 3 and 4 (described later in this section), where the user is expected to mark passages in the documents. Once again, both users can see the snippets saved by either of them. Clicking on ‘View All’ in the snippets box displays all saved snippets in full.

4.2 Subjects

I plan to have 20 groups of 2 users, with only one group present in the lab at a time. I plan to recruit subjects by sending an email to an undergraduate listserv on the UNC Chapel Hill campus. Subjects will be asked to sign up in pairs, which ensures that both users show up together and that they already know each other. Subjects will be paid $10 each for about an hour of participation in this study.

4.3 Collections and tasks

I plan to use various TREC collections for my study. One reason for using these collections is the readily available relevance judgments and evaluation measures provided by NIST. The subjects will be asked to complete four tasks in each session. The first two tasks will require them to search for and collect as many documents as they can on a given topic. The next two tasks will require them to collect passages that can answer a given question or issue. I plan to use the TREC 2005 HARD dataset for tasks 1 and 2, and the TREC ciQA (Complex Interactive Question Answering) dataset for tasks 3 and 4.

Following is an example of the first kind of task (tasks 1 and 2). This is taken from TREC’s HARD collection topic 428.

“You recently read a newspaper article from 2002 that reported that the US birth rate has continued a 12-year decline, dropping to the lowest level since national data have been available. The article reports a similar trend for China. You wonder if this is a world-wide trend or if it is only happening in the US and China, so you decide to check other newspaper articles from the 1990s. Your goal is to identify as many different countries other than the US and China that have experienced a decline in birth rate and save as many relevant documents as you can.”

Following is an example of the second kind of task (tasks 3 and 4). This is taken from TREC’s ciQA2007 topic 57.

“You are detectives, specializing in antiquities and historical documents thefts. Your current assignment is to find the evidences for transport of stolen antiquities from Egypt to other countries. Since such evidence often appears when such antiquities are returned to Egypt from other countries, you should search and file news about these goods being returned. Find relevant documents and collect the snippets that have the related information on this topic.”

4.4 Procedure

Following is the general procedure for running the experiment. To decide task order, I will use a Latin square design.

  1. Greet the subjects. Having signed up in pairs, they should already know each other.
  2. Get the consent forms signed by the subjects.
  3. Ask the subjects to fill in the demographic information.
  4. Run the tutorial video demonstrating how to use the interface and various tools.
  5. Let the subjects practice with the interface.
  6. Give pre-task form to acquire information about how much the subject knows on a given topic.
  7. Give the task to the subjects and let them take 10 minutes to finish the task.
  8. Ask the subjects to fill in post-task questionnaire.
  9. Repeat steps 6-8 for the remaining tasks.
  10. After all the tasks are completed, ask the subjects to fill in the exit questionnaire. Ask them a couple of questions informally to get their qualitative feedback.
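The Latin square counterbalancing from the procedure above can be sketched as follows. A cyclic square guarantees that each task appears in each session position equally often across groups (though it does not balance immediate carryover effects); the group numbering and four-task assumption here mirror the study design:

```python
def latin_square(n):
    """Cyclic n-by-n Latin square: row i is the sequence of task
    indices for the i-th group, so each task occupies each position
    exactly once across n consecutive groups."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def task_order(group_id, n_tasks=4):
    """Task order (0-based task indices) for a given group number,
    cycling through the rows of the square."""
    return latin_square(n_tasks)[group_id % n_tasks]

# Orders for the first four groups
for g in range(4):
    print(g, task_order(g))
```

With 20 groups and 4 tasks, each row of the square would be used by five groups, keeping task positions balanced across the whole study.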

4.5 Analyses and evaluations

Throughout the search session in this experiment, various data will be collected. I plan to do the following analyses and evaluations with the collected data.

  • One of the first factors I would like to evaluate is the usefulness of the system for the information seeking tasks the subjects were given. The difference between the pre-task and post-task responses and confidence levels will help me evaluate the learning that took place while using the system.
  • Since I will have access to the relevance judgments from NIST, I can compute various performance metrics, such as precision and recall, over the results that a user saved. This will enable me to compare the retrieval performance of my system with that of the corresponding TREC submissions.
  • Various questions on the exit questionnaire about the usability of the system will give me an idea of which features of the interface are useful, which are not, and what improvements can be made to enhance the user experience.
  • To understand user behavior in this environment, it is valuable to analyze various forms of user interactions. Browsing patterns as identified by query logs and click-through data can be used for this analysis.
  • Chat transcripts will need to be analyzed manually to understand what kinds of conversations took place between the two users. Since every action will be time-stamped, I can also analyze the effect of a given kind of conversation on the next action taken, and vice versa.
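For the precision and recall evaluation above, a simplified sketch might compute set-based measures over the documents a team saved, treating the NIST judgments as a flat set of relevant document IDs (real qrels files carry per-topic, sometimes graded, judgments, which this example ignores):

```python
def precision_recall(saved_docs, relevant_docs):
    """Precision and recall of the documents a team saved for a topic,
    judged against the set of relevant documents from the NIST qrels."""
    saved, relevant = set(saved_docs), set(relevant_docs)
    hits = len(saved & relevant)
    # Guard against empty sets to avoid division by zero
    precision = hits / len(saved) if saved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical saved set and judged-relevant set for one topic
saved = ["d1", "d2", "d3", "d4"]
qrels = ["d2", "d4", "d9"]
print(precision_recall(saved, qrels))
```

The same computation could be run per user and per team, which speaks directly to the questions of measuring group performance and individual contribution raised in Section 3.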

5 Issues for Discussion

Several works claim to be collaborative, but during my literature review I realized that the term ‘collaboration’ is used very loosely. One of the challenges I face is defining precisely what collaboration means. For my work so far, I have decided to address collaboration that has three aspects: it is intentional, interactive, and synchronous. I would like to discuss this definition and understanding further in the light of my proposed research.

One of the issues in studying user behavior in a collaborative environment is the artificial motivation given to the user for collaboration. It would be beneficial to discuss this issue and obtain ideas on how to design a user study around a “real” need for collaboration and study user behavior in that setting.

Another limitation of my approach is the nature of the collection and tasks used. I would like to use more “open” collections such as the Web, but then evaluation could be an issue. In general, evaluating the various parameters I am interested in measuring is difficult. I would like suggestions and insights on how to design evaluation metrics for such an environment.

It seems that people will find collaboration more valuable in exploratory tasks. However, running an exploratory task presents a set of issues, including what to evaluate and how. I plan to bring this up for discussion to get an understanding of which assumptions are acceptable for an exploratory task and which are not.

6 Conclusion

Collaboration among like-minded or like-goal-oriented people is natural and common, and information seeking is no exception. However, as Twidale et al. [22] noted, current information seeking systems are designed in ways that lead users to avoid collaborating with one another. In the last few years we have seen a change in this attitude, and several systems have emerged that support user collaboration. The majority of these approaches, however, have simply tried to extend traditional information seeking systems to incorporate multiple users. A fundamental understanding of the process of collaboration in light of information seeking processes is still lacking.

I believe my proposed research will help us understand some of the issues in collaboration for various forms of information seeking and point toward ways of enhancing the user experience in a collaborative environment. The results of the studies I have proposed will augment our understanding of the way collaboration works and of user behavior in a collaborative environment. In the course of conducting my studies I shall also develop systems and interfaces that can serve as testbeds for conducting and evaluating various forms of collaborative processes. In addition, I believe I will contribute a methodology for studying and evaluating collaborative environments.

7 Acknowledgement

The author is indebted to Gary Marchionini and Diane Kelly for their invaluable suggestions on this article, and constant support and guidance on this work.

8 References

[1] M. Aneiros and V. Estivill-Castro, "Usability of Real-time Unconstrained WWW-co-browsing for Educational Settings," in Proceedings of the IEEE/WIC International Conference on Web Intelligence, Compiegne University of Technology, France, September 19-22, 2005, pp.105-111.
 
[2] R. Baeza-Yates and J. A. Pino, "A First Step to Formally Evaluate Collaborative Work," in Proceedings of the international ACM SIGGROUP conference on Supporting group work: the Integration Challenge, New York, NY: ACM, 1997, pp. 56–60.
 
[3] J. S. Donath and N. Robertson, "The Sociable Web," in Proceedings of WWW Conference, 1994. Available: http://smg.media.mit.edu/people/judith/SocialWeb/SociableWeb.html
 
[4] S. T. Dumais, E. Cutrell, J. J. Cadiz, G. Jancke, R. Sarin, and D. C. Robbins, "Stuff I’ve Seen: A System for Personal Information Retrieval and Re-Use," in Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, New York, NY: ACM, 2003, pp. 72-79.
 
[5] R. Fidel, H. Bruce, A.M. Pejtersen, S. Dumais, J. Grudin, and S. Poltrock. "Collaborative Information Retrieval (CIR)," The New Review of Information Behaviour Research, vol. 1, pp. 235–247, 2000.
 
[6] P. Hansen and K. Jarvelin, "Collaborative Information Retrieval in an Information-intensive Domain," Information Processing and Management, vol.41, pp.1101–1119, 2005.
 
[7] A. Hust, "Query Expansion Methods for Collaborative Information Retrieval," Informatik - Forschung und Entwicklung, vol. 19, no. 4, pp. 224–238, 2005.
 
[8] J. Hyldegard, "Collaborative Information Behaviour - Exploring Kuhlthau’s Information Search Process Model in a Group-based Educational Setting," Information Processing and Management, vol. 42, no. 1, pp. 276–298, 2006.
 
[9] S. Klink, "Query Reformulation with Collaborative Concept-based Expansion," in Proceedings of the First International Workshop on Web Document Analysis, pp. 19–22, 2001.
 
[10] C. C. Kuhlthau, "Inside the Search Process: Information Seeking from the User’s Perspective," Journal of the American Society for Information Science and Technology, vol. 42, no. 5, pp. 361–371, 1991.
 
[11] C. C. Kuhlthau, "Towards Collaboration Between Information Seeking and Information Retrieval," Information Research, vol. 10, no. 2, p. 10, 2005.
 
[12] Y. Laurillau, "Synchronous Collaborative Navigation on the WWW," in Proceedings of SIGCHI Conference on Human Factors in Computing Systems, New York, NY: ACM, 1999, pp. 308–309.
 
[13] T. W. Malone, "What is Coordination Theory?" Massachusetts Institute of Technology, Boston, MA, Tech. Rep. SSM WP # 2051-88, February 1988. Available: http://dspace.mit.edu/bitstream/handle/1721.1/2208/SWP-2051-27084940-CISR-182.pdf?sequence=1
 
[14] M. R. Morris and E. Horvitz, "SearchTogether: An Interface for Collaborative Web Search," in Proceedings of the 20th annual ACM symposium on User Interface Software and Technology (UIST), New York NY: ACM, 2007, pp. 3-12.
 
[15] N. C. Romano, J. F. Nunamaker, D. Roussinov, and H. Chen, "Collaborative Information Retrieval Environment: Integration of Information Retrieval with Group Support Systems," in Proceedings of the 32nd Hawaii International Conference on System Sciences, January 5-8 1999, pp. 1–10.
 
[16] J. Pickens, G. Golovchinsky, C. Shah, P. Qvarfordt, and M. Back, "Algorithmic Mediation for Collaborative Exploratory Search," in Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, New York, NY: ACM, 2008, pp. 315-322.
 
[17] S. Poltrock, J. Grudin, S. T. Dumais, R. Fidel, H. Bruce, and A. M. Pejtersen, "Information Seeking and Sharing in Design Teams," in Proceedings of the 2003 international ACM SIGGROUP conference on Supporting group work, New York, NY: ACM, 2003, pp. 239–247.
 
[18] R. W. Root, "Design of a Multi-media Vehicle for Social Browsing," in Proceedings of the 1988 ACM conference on Computer-supported Cooperative Work (CSCW), New York, NY: ACM, 1988, pp. 25–38.
 
[19] B. Smyth, E. Balfe, O. Boydell, K. Bradley, P. Briggs, M. Coyle, and J. Freyne, "A Live-user Evaluation of Collaborative Web Search," in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, Scotland, July 30-August 5 2005.
 
[20] B. Smyth, E. Balfe, P. Briggs, M. Coyle, and J. Freyne, "Collaborative Web Search," in Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, August 9-15, 2003, Morgan Kaufmann, pp. 1417–1419.
 
[21] M. Twidale and D. M. Nichols. "Situated observations of library users," Computing Department, Lancaster University, Technical report, 1995.
 
[22] M. B. Twidale and D. M. Nichols, "Collaborative Browsing and Visualisation of the Search Process," in Proceedings of Aslib, vol. 48, 1996, pp. 177–182.
 
[23] M. B. Twidale, D. M. Nichols, and C. D. Paice, "Browsing is a Collaborative Process," Information Processing and Management, vol. 33, no. 6, pp. 761-783, 1997.
 
