IEEE TCDL Bulletin

Current 2006
Volume 2   Issue 2


Actual Use of Learning Objects and Metadata

An Empirical Analysis

Jehad Najjar and Erik Duval
Computer Science Department, K.U.Leuven
B-3001 Leuven, Belgium
{Jehad.Najjar, Erik.Duval}



Learning Object technology is widely regarded as a natural form of content re-use that can considerably decrease the cost of developing electronic learning (educational) material. The focus of this research is evaluating and tracking the actual use made of learning objects, metadata and their associated toolsets in Learning Object Repositories. This will enable users and tools to learn from the way users work with learning objects and technology in general, and thereby improve the use of learning objects and their associated toolsets.


1. Introduction

In this research, we are carrying out empirical studies to evaluate the actual usage of learning objects and metadata in Learning Object Repositories (LORs). We have statistically analyzed the actual use made of metadata elements when indexing and searching learning objects in the ARIADNE Knowledge Pool System (KPS). Moreover, we conducted usability studies to evaluate tools and functionalities used to index or find learning objects.

In order to learn from the way people actually use new technologies for learning, we have been developing a framework [13] that helps us to track the behavior of users and learners. This framework tracks and publishes attention given to learning objects and notifies the user about objects in which he or she might be interested.

In the coming sections, we give a very brief description of our work, together with some selected results (see the references section of this paper for the full versions). Section 2 discusses the use of metadata in the indexing and search processes. Section 3 studies the usability of learning object indexing and search tools. Section 4 presents a new framework proposed to track, publish and share the behavior of learning object users. Section 5 summarizes the research issues discussed in this paper.

2. Actual Use of Metadata

This section studies the actual use made of metadata elements in Learning Object Repositories (LORs). Those elements form the application profile of LORs and are provided to facilitate the finding of learning objects. This section is based on two publications. Sub-section 2.1 is based on the Actual Use of Metadata in ARIADNE: An Empirical Analysis [9], published at the ARIADNE 2003 conference. Sub-section 2.2 is based on User Behavior in Learning Object Repositories: An Empirical Analysis [10], published at the ED-MEDIA 2004 World Conference on Educational Multimedia, Hypermedia and Telecommunications.

Results and findings of such studies provide us with empirical guidelines for the development and evaluation of application profiles and metadata toolsets.

2.1 Use of Metadata in Learning Object Indexing Process

In this study, we present a statistical analysis of the actual use of ARIADNE metadata elements in indexing learning objects. The results are derived from analyzing the empirical data (usage logs) of 3,700 ARIADNE metadata instances (the number available when we started the analysis).

Table 1 shows the percentage of times each ARIADNE data element was filled in by indexers during the indexing process.

Table 1. Percentage of usage made of data elements by ARIADNE indexers

Image of Table 1

From the data shown in Table 1, we notice that only one data element is almost always used: the Granularity element. Other elements are used in about 50% of the descriptions, and the rest are rarely used in the indexing process.

As for the values of data elements, we can see that indexers often assign just one value. At the same time, indexers differ both in the data elements they choose to describe their learning objects and in the vocabulary values assigned to each element. Moreover, by looking at the metadata filled in by each indexer, we noticed that indexers often reuse mental templates of elements and values every time they index new learning objects.

Predicting relationships between data elements is not an easy job. The relationships between the studied data elements will form the guidelines for successful automatic indexing, or application-profile development in general. In Table 2, the high correlation between Interactivity Level and Semantic Density shows that when an Interactivity Level is chosen, there is a high probability that Semantic Density will be filled in as well, and vice versa. Moreover, if the value of Semantic Density is "high", then Interactivity Level will most probably be "high" too.

Based on the correlations between the elements, we may automatically fill in or suggest values of correlated elements. We may also hide some elements from the user based on a correlation with other elements.
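Such correlation-driven suggestions could be implemented with a simple rule table. The sketch below is a minimal illustration, not the ARIADNE implementation; the element names and the single rule pair are assumptions loosely based on the Interactivity Level / Semantic Density correlation described above, and real rules would be derived from association measures such as those in Table 2.

```python
# Hypothetical rule table: (element, value) -> suggested (element, value).
# In practice these pairs would be mined from indexing logs.
CORRELATION_RULES = {
    ("semantic_density", "high"): ("interactivity_level", "high"),
    ("interactivity_level", "high"): ("semantic_density", "high"),
}

def suggest_values(filled: dict) -> dict:
    """Suggest values for correlated elements the indexer has not filled in."""
    suggestions = {}
    for element, value in filled.items():
        rule = CORRELATION_RULES.get((element, value))
        if rule is not None:
            target, suggested = rule
            if target not in filled:  # never overwrite the indexer's own choice
                suggestions[target] = suggested
    return suggestions

print(suggest_values({"semantic_density": "high"}))
# → {'interactivity_level': 'high'}
```

Because suggestions never overwrite values the indexer has already entered, the approach leaves the final say with the human, in line with the quality concerns raised in Section 3.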

Table 2. Measures of association strength between data elements

Image of Table 2

These kinds of studies will allow us to enhance the use of ARIADNE metadata and provide guidelines for developing and evaluating application profiles and their associated tools such as automatic indexing.

2.2 Use of Metadata in Learning Object Search Process

In this study, we investigate the ways in which users interact with Learning Object Repositories (LORs) when searching for relevant learning objects. We present a statistical analysis of ARIADNE query log files, covering readily available data on 4,723 queries launched by approximately 390 users in six ARIADNE Local Knowledge Pools (Genoa, Galati, Grenoble-UJF, Lausanne-EPFL, Lausanne-UNIL and Leuven-CS) over different time periods.


Bar chart showing the frequency of elements used in searches

Figure 1: Frequency of elements used in searchers' queries



Figure 1 shows how frequently the different ARIADNE elements have been used in searchers' queries. Analysis of these frequencies reveals that searchers mostly accept the provided default data elements: the 20 most used data elements are all default elements. Remarkably, although the query tool allows searchers to change the default settings and display the whole list of elements, few searchers do so.

These results can be interpreted in two ways: either the default elements are the ones most relevant to ARIADNE users, or searchers simply have a tendency to accept default settings. Determining the precise reason requires further investigation to test these two hypotheses.

A comparison between the actual usage of data elements in the indexing (see Section 2.1) and search processes reveals that data elements used by more than 50% of indexers, such as the granularity, didactical context and semantic density elements, are not used by the majority of searchers. In addition, for the values of such data elements, we noticed that indexers and searchers mostly select the same values to index or search learning objects.

Table 3: Frequency of elements used in searchers' queries

No. of elements in query      0      1      2      3      4    >=5    Total
Frequency                   548   2488    701    498    258    230     4723
Percent                    11.6   52.7   14.8   10.5    5.5    4.9    100.0

Table 3 shows that searchers prefer queries containing relatively few metadata elements. The majority of queries (78%) contain one to three data elements, and less than 5% contain five or more. The mean number of elements per query is 1.7 (SD = 1.6). About 12% of queries contained no metadata elements at all. This is related to usability problems with the query tool: some searchers launch queries directly without selecting any data element, or select an appropriate data element but fail to specify the string or mathematical operator (starts with, contains, ends with, =, >, <, etc.) that the system requires to successfully issue a query in the ARIADNE query tool. This problem has been removed in the latest version of the tool.
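The summary statistics above follow directly from the frequency distribution in Table 3. The short sketch below recomputes them; note that the open-ended ">=5" bucket is treated here as exactly 5 queries' worth of elements, so the computed mean and SD are slight underestimates of the reported 1.7 and 1.6.

```python
from math import sqrt

# Query-length distribution from Table 3 (the ">=5" bucket is
# approximated as exactly 5, giving a lower bound on mean and SD).
counts = {0: 548, 1: 2488, 2: 701, 3: 498, 4: 258, 5: 230}
total = sum(counts.values())  # 4723 queries in all

# Percentage row of Table 3, rounded to one decimal.
percent = {k: round(100 * f / total, 1) for k, f in counts.items()}

# Mean and standard deviation of elements per query.
mean = sum(k * f for k, f in counts.items()) / total
var = sum(f * (k - mean) ** 2 for k, f in counts.items()) / total
print(total, percent[1], round(mean, 1), round(sqrt(var), 1))
```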

3. Usability Evaluation of Learning Object Tools

The work presented in this section discusses the usability of learning object indexing and search tools. The goal of such studies is to determine the influence of usability perspectives on the performance of users, to uncover the types of complications that users of such tools may face, and to provide recommendations for improving the usability of those tools. The research presented in this section was published in two papers. Sub-section 3.1 is based on Usability Evaluation of Learning Object Indexing: the ARIADNE Experience [11], published at the European Conference on E-Learning, 2004. Sub-section 3.2 is based on Finding Appropriate Learning Objects: An Empirical Evaluation [14], published at the European Conference on Research and Advanced Technology for Digital Libraries, 2005. Findings and recommendations of these studies are generalized for other related tools.

3.1 Usability of Learning Object Indexing Tools

This study investigates usability problems of indexing tools for learning objects. The complexity of manually indexing learning objects creates a bottleneck for the introduction of such objects.

A usability test was performed on the two ARIADNE indexing clients: SILO [6] and Toledo [7]. We collected detailed data from extended sessions with seven users. Participants were selected to be representative of an intended user community, including university professors, research assistants, and university students.

The goal of participant selection was to cover a wide range of abilities and levels of sophistication rather than to constitute a statistically balanced sample, because the aim of this study was to determine whether such clients help target users achieve their goals, and to discover problems that may influence users' effectiveness, efficiency and satisfaction.

Results of this study show that indexing performance and semantics of metadata (provided through indexing tools) can be influenced by different usability perspectives (see Table 4):

  • Interface of indexing tools.
  • Functionalities provided to facilitate the indexing process, such as automatic indexing.
  • Indexer domain knowledge about the introduced learning objects.

Table 4: Metadata usage and semantics

Image of Table 4

We believe that the main problems behind the complexity of the indexing of learning objects are:

  • The user interfaces of indexing tools in most LORs are adapted to the metadata standards rather than to the indexer. This stems from a misunderstanding about implementing interoperability with metadata standards. For example, in some LORs, the terminology and organization of metadata elements are copied as-is from the metadata standards documents, whereas terminology (labels of metadata elements and vocabulary values) and information organization should be adapted to the local community of the LOR.
  • Functionalities provided to facilitate the extraction of metadata are not as intelligent as they could be. More advanced techniques should be used to improve the indexing process, for example algorithms that generate values automatically based on empirical analysis of actual usage, text or image recognition techniques, or ontologies. In order to preserve the quality of metadata semantics, we may allow users to change automatically provided information whenever they feel it is necessary.
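As a minimal illustration of the second point, automatic extraction can be combined with a user override so that suggested values never silently replace the indexer's own. The sketch below is a toy example, not an ARIADNE component: the frequency-based keyword extraction and the stopword list are assumptions chosen for brevity.

```python
import re
from collections import Counter

# A tiny stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def suggest_keywords(text: str, n: int = 3) -> list:
    """Suggest candidate 'keyword' metadata values from the object's text."""
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in freq.most_common(n)]

def index_object(text: str, user_values: dict) -> dict:
    """Auto-fill metadata, but let the indexer's own values win."""
    record = {"keyword": suggest_keywords(text)}
    record.update(user_values)  # user override preserves semantic quality
    return record
```

Real systems would replace the word-frequency heuristic with the empirically grounded or recognition-based techniques mentioned above, but the override pattern stays the same.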

A full version of the list of findings and recommendations related to this study is provided in [11].

3.2 Usability of Learning Object Search Tools

In this study [14] we investigate usability problems of search tools for learning objects. We present findings and recommendations of an iterative usability study conducted to examine the usability of a search tool used to find learning objects in ARIADNE Knowledge Pool System (KPS). Findings and recommendations of this study are generalized to other similar search tools.

We aim to evaluate and improve the usability of a search tool (SILO) used by ARIADNE users to search for learning objects in the KPS. We want to answer the following questions empirically:

  • How do users use the search tool?
  • How effectively and efficiently does the search tool help users perform their search tasks?
  • What are the factors that may increase the performance of finding learning objects?
  • Are users satisfied with the overall use of the tool?

In order to collect primary data on the usability of SILO, and to double-check and validate our findings, we conducted two iterative usability phases:

  • First phase:
    We collected primary data from extended sessions with 16 participants to determine the initial usability of the tool.
  • Second Phase:
    Here, we evaluated the tool after solving the usability problems and integrating the recommendations that appeared in the first phase. In this second phase, we collected data from extended sessions with 10 new participants who had no prior experience with the tool.

Results of this study show that finding appropriate learning objects is still not an easy task. The usability of the search interface may noticeably decrease the performance of users searching for relevant material. The terminology and information structure of the old SILO were adapted to the metadata standard rather than to user needs. The same practice can be noticed in the search interfaces of other existing repositories, such as MERLOT and SMETE.

Bar chart showing a comparison of SILO evaluations

Figure 2: Comparison between responses of participants who evaluated SILO in the two phases

Figure 2 illustrates a comparison of the overall use of SILO before and after resolving the usability problems. Five participants (group 2) who participated in phase two were asked to use the old interface of SILO. Moreover, we asked another five participants (group 1) who participated in the first phase to evaluate the SILO interface of the second phase (SILO 2). We did this to revalidate the recommendations and usability problems obtained from the first evaluation phase and to draw some comparisons between the two interfaces. Based on participants' feedback, we found that SILO 2 was much easier to use and much less overwhelming than the old SILO interface for both groups.

As shown in Figure 2, in both groups 1 and 2 the overall usability of SILO 2 is higher than SILO. In addition, the level of user satisfaction on all the studied factors (ease of use, use of terminology, information organization, etc.) with SILO 2 is higher in both groups.

4. Actual Use of Learning Objects

In order to create a feedback loop that enables learning from the way people actually use new technologies and tools for learning, it is essential to track the behavior of users and learners [3]. In this section, we propose and discuss a framework (see Figure 3) that we are developing for automatic collection and management of attention metadata. The research presented in this section was published in Attention Metadata Management: Tracking the use of Learning Objects through Attention.XML [13], which appeared in the proceedings of the ED-MEDIA 2005 World Conference on Educational Multimedia, Hypermedia and Telecommunications.

The framework shown in Figure 3 enables keeping track of learning objects that people use, how they use them, the time they spend with them, etc. This framework tracks user interactions with the different tools they use and then publishes that data in a standardized form of attention metadata to enable the use and exchange of these data.

The release of the new attention.xml specification [1] inspired us to examine the possibilities for using that specification to track and publish attention given to learning objects, and moreover, to notify the user about objects in which he or she might be interested.

Image showing the framework for the AMM

Figure 3: The AMM framework that tracks and publishes attention of users

As shown in Figure 3, users may interact with a wide variety of tools that make use of learning objects:

  • Learning Object Repositories (LORs): Users may interact with a LOR (such as MERLOT, EdNa, ARIADNE and SMETE) to introduce or search relevant learning objects.
  • Learning Management systems (LMSs): Teachers may interact with an LMS (such as Blackboard, Moodle, and WebCT) to manage objects of their courses. Students also can use an LMS to access those objects.
  • Internet Browsers: Users may download relevant learning objects from the appropriate LOR and then open and read (learn from) them in another application, such as a web browser.
  • Messaging Systems: Users may also chat about a learning object using a messaging system.
  • Email Clients: Users may send or receive learning objects or information about them by email messages or RSS feeds.
  • Audio and Video Players: Users may use an audio or video player to learn.
  • Other tools, such as MS Word and PowerPoint.

Our framework logs users' activities while interacting with the different tools, then publishes the collected attention metadata (user behavior) related to each tool in a separate attention.XML file or stream. Afterwards, the set of those attention.xml sources are merged into one attention.XML source.

Small version of framework

Figure 4: A small part of the technical representation of our framework


Figure 4 shows the scope of the technical representation of our framework. The description of the whole framework, incorporating all the components, is still under investigation. Therefore, in this paper we demonstrate an example of logging the activities of people using the ARIADNE SILO search tool [6], and their activities when opening those objects (for example, in a Mozilla Firefox browser) to learn from them. We then merge the collected information and publish it in an attention.XML source.

For the SILO search tool, we can log information such as the title, subject, file name or id of learning objects downloaded by the different users. We can use the Slogger logging extension for the Firefox browser to collect objective information about learning objects the user accessed, such as file name, URL, host, title and access date-time. In addition, Slogger enables us to collect information like the description and keywords provided by the learner after reading the learning object. Moreover, we can use the StumbleUpon extension for Firefox to track information about the relevance of a learning object to the user: the user can click "I-Like it" when the object is relevant and "Not-for me" for objects not found to be relevant.

After we track the information, our framework will generate (this is under development) an attention.XML file for user interaction with the different tools and then merge the produced sources in one attention.XML source that includes user attention given to the different objects received, read or listened to.

The merging of different attention sources presents various implementation challenges, because learning objects might be processed by different tools. That requires us to be able to do the following:

  • Manage the attention.XML source so that it contains up-to-date information (collected from the different attention.XML sources of each tool), for example the read-times, last-time-read, time-spent and followed-links attributes of each object.
  • Identify the same learning object and its attention data. We are researching approaches to tackle this issue.
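The merging step could be sketched as follows. This is a simplified illustration, not the framework implementation: the field names (readtimes, duration, lastread) are assumptions loosely modeled on attention.xml-style attributes, and object identity is reduced to an exact URL match, which sidesteps the harder identification problem mentioned in the second bullet.

```python
# Merge per-tool attention records into a single source, keyed by URL.
# Field names are hypothetical, modeled loosely on attention.xml-style
# attributes; real merging must also resolve object identity across tools.
def merge_attention(sources):
    merged = {}
    for records in sources:            # one list of records per tool
        for rec in records:
            key = rec["url"]
            if key not in merged:
                merged[key] = dict(rec)
            else:
                m = merged[key]
                m["readtimes"] += rec["readtimes"]      # accumulate counts
                m["duration"] += rec["duration"]        # accumulate time spent
                m["lastread"] = max(m["lastread"], rec["lastread"])  # newest date
    return list(merged.values())

# Example: the same object seen by the SILO tool and by a browser logger.
silo = [{"url": "http://example.org/lo1", "readtimes": 2,
         "duration": 300, "lastread": "2005-10-01"}]
browser = [{"url": "http://example.org/lo1", "readtimes": 1,
            "duration": 120, "lastread": "2005-10-03"}]
merged = merge_attention([silo, browser])
```

Here ISO-formatted date strings compare correctly with `max`, and counters simply accumulate; the real challenge, as noted above, is recognizing that two records from different tools refer to the same learning object when their identifiers differ.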

There are different uses for the resulting attention.XML data:

  • Based on the subject, publisher, learning duration, etc. of objects with which a user interacts, we can build a recommender system that suggests new objects. In addition, we can track other information that can be used by the recommender system, like the relevance of the object to the user, using features as "I Like it" provided by Firefox StumbleUpon.
  • Attention metadata can also provide feedback to tools about user behavior. For example, we can update a user's metadata profile in a search tool based on the attention metadata of that user.
  • The same attention metadata also enable empirical analysis of the actual use of learning objects and user interaction models [10].
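The recommender use case in the first bullet could start from something as simple as subject overlap. The sketch below is a hypothetical illustration, not a component of the framework: the attention records, catalog, and weighting by read counts are all assumptions chosen to keep the example small.

```python
# Toy recommender driven by attention metadata: objects sharing a subject
# with the user's most-used objects are suggested first. The records and
# catalog below are hypothetical.
def recommend(attention, catalog, n=2):
    # Weight each subject by how often the user accessed objects on it.
    weights = {}
    for rec in attention:
        weights[rec["subject"]] = weights.get(rec["subject"], 0) + rec["readtimes"]
    seen = {rec["id"] for rec in attention}
    candidates = [obj for obj in catalog
                  if obj["id"] not in seen and obj["subject"] in weights]
    candidates.sort(key=lambda o: weights[o["subject"]], reverse=True)
    return [obj["id"] for obj in candidates[:n]]

attention = [{"id": "lo1", "subject": "databases", "readtimes": 5},
             {"id": "lo2", "subject": "hypermedia", "readtimes": 1}]
catalog = [{"id": "lo3", "subject": "databases"},
           {"id": "lo4", "subject": "hypermedia"},
           {"id": "lo5", "subject": "chemistry"}]
print(recommend(attention, catalog))
# → ['lo3', 'lo4']
```

A production system would also fold in the explicit relevance signals mentioned above (such as the "I Like it" feedback) rather than relying on access counts alone.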

For future work, we want to start implementing the different components of the proposed framework and build tools that use the produced attention files to enhance learners' productivity and efficiency.

5 Summary

This paper introduced the research activities that we are carrying out to evaluate the actual use of learning objects and metadata in Learning Object Repositories (LORs). This research is based on analyzing real-life data of users. User logs were analyzed to determine the real use of metadata in both indexing and search activities. Extended usability sessions were conducted to determine to what extent indexing and search tools enable users to reach their goals effectively and efficiently. Moreover, a framework that is intended to track, publish and share behavior of learners was introduced.

The research presented in this paper has been carried out at the Hypermedia and Databases research group at the Computer Science Department of Katholieke Universiteit Leuven, Belgium. More information on the research conducted in this research group is available at <>.


[1] Attention.XML, (2004), Attention.XML specifications. <>.

[2] Blackboard. Available at <>.

[3] Duval, E., Hodgins, W., (2003). A LOM Research Agenda, Proceedings of the 12th int. conf. on World Wide Web (Hencsey, G. and White, B. and Chen, Y. and Kovacs, L. and Lawrence, S., eds.), pp. 1-9, 2003.

[4] MERLOT, Multimedia Educational Resource for Learning and Online Teaching, Available at <>.

[5] Moodle. Available at: <>.

[6] Neven, F., Duval, E., Ternier, S., Cardinaels, K., Vandepitte, P., (2003). An Open and Flexible Indexation and Query tool for ARIADNE. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications, 2003, (Lassner, D. and McNaught, C., eds.), pp. 107-114, 2003.

[7] Toledo, <>.

[8] Jehad Najjar, Erik Duval, Stefaan Ternier and Filip Neven, Towards Interoperable Learning Object Repositories: the ARIADNE Experience, Proceedings of the IADIS International Conference WWW/Internet 2003, pp. 219-226, 2003.

[9] Jehad Najjar, Stefaan Ternier and Erik Duval, The Actual Use of Metadata in ARIADNE: An Empirical Analysis, Proceedings of the 3rd Annual Ariadne Conference, pp. 1-6, 2003.

[10] Jehad Najjar, Stefaan Ternier and Erik Duval, User Behavior in Learning Object Repositories: An Empirical Analysis, Proceedings of ED-MEDIA 2004 World Conference on Educational Multimedia, Hypermedia and Telecommunications, pp. 4373-4378, 2004.

[11] Jehad Najjar, Joris Klerkx, Stefaan Ternier, Katrien Verbert, Michael Meire and Erik Duval, Usability Evaluation of Learning Object Indexation: The ARIADNE Experience, Proceedings of ECEL 2004 European Conference on e-Learning, pp. 281-290, 2004.

[12] Jehad Najjar, Stefaan Ternier and Erik Duval, Interoperability of Learning Object Repositories: Complications and Guidelines, IADIS International Journal of WWW/Internet (ISSN: 1645-7641) 2004.

[13] Jehad Najjar, Michael Meire and Erik Duval, Attention Metadata Management: Tracking the Use of Learning Objects through Attention.XML, Proceedings of ED-MEDIA 2005 World Conference on Educational Multimedia, Hypermedia and Telecommunications (Kommers, P. and Richards, G. eds.), pp. 1157-1161, 2005.

[14] Jehad Najjar, Joris Klerkx, Riina Vuorikari and Erik Duval, Finding Appropriate Learning Objects: An Empirical Evaluation, Proceedings of ECDL 2005 9th European Conference on Research and Advanced Technology for Digital Libraries, (Rauber, A., et al. eds. ), pp. 323-335, 2005.


© Copyright 2006 Jehad Najjar and Erik Duval
