Overview of the project.

1. Background and Literature Survey

This project develops ontological and semantic analysis techniques that revolutionize the current practice of assessing learning outcomes. It proposes a computer-aided assessment methodology that employs a formal ontology model, and develops adaptive testing and automatic course project grading techniques for performing quantitative evaluation of student performance.

This applied research tackles three key problems: automatic ontology construction via text mining, semantic analysis for adaptive testing, and automated outcome assessment. The project results in ALOAS (Adaptive Learning Outcome Assessment System), which provides outcome analysis for both academic programs and individual learners. It helps educators objectively measure the outcomes of an academic program and meet the knowledge coverage requirements for accreditation. It also provides personalized guidance for students: using adaptive testing, ALOAS can identify deficiencies in a student's knowledge structure. Our initial exploration focuses on computing disciplines. In the following, we briefly survey the related research areas.

1.1 Learning Outcome Analysis and Academic Program Accreditation

Learning outcome assessment is now a routine activity performed by higher education institutions for exercising continual improvement of service quality. It is a sophisticated process that documents, collects, and analyzes information for evaluating whether the students in an academic program have acquired the desired knowledge, skills, and abilities. Outcome based education (OBE) [3] promotes quality education and emphasizes student-centered learning.

There are two key questions concerning outcome assessment: (1) What are the goals to assess? and (2) How do we ascertain that the stated objectives are met? For the first question, accreditation organizations such as ABET (Accreditation Board for Engineering and Technology) provide detailed guidelines [1]. There are usually two categories of goals to evaluate: curriculum coverage (the required body of knowledge areas), and desired skills and aptitude (such as the ability to work in teams). ACM and IEEE also provide a curriculum model, CC2005 [2], that covers hundreds of areas of computing. For the second question, an assessment process usually involves a self-study by an academic unit: defining rubrics, collecting sample student course work, performing evaluation, and proposing improvements.

Outcome analysis is usually a time-consuming process. Recently, there have been numerous efforts to ease this burden. The main approach is to electronically store learning goals and to correlate course materials with outcome goals. The following is a collection of typical examples: TrueOutcome [4], Web-based Outcome Assessment System [6], Blackboard course template for ABET-CAC [7], automated course material extraction [5], Blackboard Outcome Assessment System [8], and various efforts of using relational databases for storing outcome evidence [9].

Our contributions: ALOAS goes beyond all the aforementioned efforts by leveraging an ontology model that represents the knowledge about curriculum requirements and outcome goals. The additional domain knowledge allows semantic search and analysis. For example, instructors no longer have to manually provide proof of curriculum coverage (e.g., "recursion is covered in CSC102 by three lecture hours and two assignments"); this information can be automatically inferred by text mining of lecture notes and assignments. As another example, failing a test question in ALOAS triggers further questions on related knowledge areas, to precisely identify deficiencies in a student's skill set. ALOAS can be used not only for exit major exams but also in the daily learning of individual learners and instructors. While there are pioneering projects such as [79] that use ontologies for generating e-learning content and for automatic e-learning navigation, our work concentrates on learning outcome assessment.
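To make the adaptive-testing idea above concrete, the following is a minimal sketch (not ALOAS's actual implementation) of how a failed question on one concept could expand testing to related concepts. The toy ontology, the concept names, and the function `concepts_to_probe` are all hypothetical illustrations: related concepts are stored as an adjacency map and traversed breadth-first to select follow-up question topics.

```python
from collections import deque

# Hypothetical toy concept ontology: each concept maps to related
# (e.g., prerequisite) concepts that should be probed after a failure.
ONTOLOGY = {
    "recursion": ["stack frames", "base cases"],
    "stack frames": ["memory layout"],
    "base cases": [],
    "memory layout": [],
}

def concepts_to_probe(failed_concept, ontology):
    """Return the failed concept plus all transitively related concepts,
    in breadth-first order, as topics for follow-up questions."""
    seen = [failed_concept]
    queue = deque([failed_concept])
    while queue:
        current = queue.popleft()
        for neighbor in ontology.get(current, []):
            if neighbor not in seen:
                seen.append(neighbor)
                queue.append(neighbor)
    return seen

# A student fails a question on recursion; probe the related areas too.
print(concepts_to_probe("recursion", ONTOLOGY))
# ['recursion', 'stack frames', 'base cases', 'memory layout']
```

A real system would of course draw questions from an item bank, weight concepts by semantic distance, and stop once the deficient areas are localized; the breadth-first expansion is only the simplest possible selection policy.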