Best-Use Models for Online Testing

The Learning Hub includes a powerful, feature-rich testing tool that can be used for a range of formal (graded) and informal assessments. Tests, called quizzes, pull questions from a master question databank, so questions can be randomized rather than presented as a set list in a set order. This job aid covers some of the features and best-use models of the Quiz tool.

These models are based on:

  • Program or individual online course assessment – either face-to-face or distance.
  • The Learning Hub quiz tool is used for one or more of:
    • Pre-tests (informal)
    • Self-tests (informal)
    • Practice exams (informal)
    • Release Criteria for some other area of the course (formal or informal)
    • Quizzes (formal)
    • Exams (formal)
  • Ability to create more questions than are necessary for individual tests.
  • Ability to classify questions by category and/or difficulty ranking.
  • Questions may include media stored in folders in the course File Manager.
  • Questions (11 types) are created and stored in the Question Databank of individual courses.
  • Quizzes are built from questions pulled from the Question Databanks.
  • Self-tests and practice exams can pull from the same databank as quizzes and exams.
  • Security and exam integrity are important considerations.

Developing a Question Databank/Library

  • The databank is the master storage repository for questions in a course instance.
  • Questions can be exported to a .TXT file that can be:
    • Meta-tagged and stored in the LOR, and/or
    • Imported into another course
  • Questions can be created in MS Word (for example) and imported into the system, imported or created in Respondus, or created directly within the Quiz tool.
  • Questions can be categorized, providing an organizational structure for the questions in the databank.
  • Questions can exist in the root folder and can also be added to sub-folders, called sections, to provide for multiple categorizations.
  • Sections act as collections of questions – e.g. all questions related to a specific outcome or topic.
  • Questions also have titles, which should be descriptive enough for instructors designing quizzes and exams to recall the appropriate questions, rather than a sequential numbering system (an example follows this list).
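
For illustration, a multiple-choice question drafted in MS Word or a plain-text editor for import might look like the sample below. It follows the general style of the plain-text formats used by tools such as Respondus, where an asterisk typically flags the correct answer and an optional Title line carries the descriptive name. The question content here is hypothetical, and the exact syntax depends on the import tool, so check its documentation before building a full file.

    Title: OHM-LAW-01 (Ohm's law, single resistor, difficulty 2)
    1. A 12 V battery is connected across a 4-ohm resistor. What current flows?
    a. 0.33 A
    *b. 3 A
    c. 8 A
    d. 48 A

A descriptive title like this supports the naming-convention advice above: an instructor assembling a quiz can recognize the question at a glance.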

Difficulty Levels

  • Questions can be ranked 1-5 in difficulty
  • This ranking can be used to help in selecting questions for different levels of evaluation (see the sketch after this list)
  • Difficulty ranking can be set at the question level or at the quiz level – e.g. a level 5 question in course ABC-1000 might be a level 3 question if reused in course ABC-3000
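
As an illustration of how difficulty rankings can drive question selection, here is a minimal Python sketch. The records and the pick() helper are hypothetical, not a Learning Hub API; in practice the Quiz tool performs this filtering when a quiz is configured to pull from the databank.

    import random

    # Hypothetical databank records: (title, section, difficulty 1-5)
    databank = [
        ("OHM-LAW-01", "Circuits", 1),
        ("OHM-LAW-02", "Circuits", 2),
        ("KCL-NODE-01", "Circuits", 4),
        ("KVL-LOOP-01", "Circuits", 5),
        ("RESISTOR-01", "Components", 2),
        ("CAPACITOR-01", "Components", 3),
    ]

    def pick(bank, lo, hi, n):
        """Randomly pull n questions whose difficulty falls within [lo, hi]."""
        pool = [q for q in bank if lo <= q[2] <= hi]
        return random.sample(pool, min(n, len(pool)))

    self_test = pick(databank, 1, 3, 3)    # low-to-moderate for self-tests
    final_exam = pick(databank, 3, 5, 3)   # moderate-to-highest for exams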

Online Evaluation Models

  1. Basic
    • No databank of questions
    • Questions are directly attached to each quiz
    • Questions should be identified with a naming convention that describes the nature of the question by content
    • Informal and formal evaluations are separated – use completely different questions
  2. Intermediate
    • One databank of questions for a single course
    • The databank holds at least twice as many questions as appear across all formal quizzes; questions are then randomized into quizzes to support exam integrity (this pull-and-shuffle step is sketched after this list).
    • Questions are identified with a naming convention that describes the nature of the question based on content.
    • Questions are randomly pulled to create informal and formal quizzes.
    • Answer distractors are randomized as a security feature.
    • Quizzes are timed.
  3. Advanced
    • Questions are stored in primary sections (categories) in the databank.
    • Questions are organized by secondary sections.
    • Secondary section questions are for this specific course only.
    • Secondary section questions are difficulty ranked.
    • Quizzes are set up as follows:
      • Pre-assessment for each module (no grades, one attempt, fixed questions, results are recorded for instructor reference in preparing course material, not included in course mark calculation)
      • Self-tests at the end of each module or at key break points in the module content (topic specific, no grades, multiple attempts, randomized questions, not recorded)
      • Quizzes (graded, one or two attempts, randomized questions, recorded)
      • Practice mid-term (no grades, randomized questions, multiple attempts, timed, not recorded)
      • Mid-term exams (graded, randomized questions, one attempt, timed, recorded)
      • Practice final (no grades, randomized questions, multiple attempts, timed, not recorded)
      • Final exams (graded, randomized questions, one attempt, timed, recorded)
    • All quizzes pull questions from the same databank of questions.
    • All self-test and practice exam questions are randomly pulled.
    • Questions with low to moderate difficulty rankings are used for the pre-assessments and self-tests.
    • Questions with higher difficulty rankings are used for formal assessments.
    • Distance delivery uses the following:
      • Proctoring
      • Timed exams
      • Combination of randomized and hard-coded questions, all presented in random order
      • Difficulty ranking (moderate to highest)
    • Face-to-face delivery uses the following:
      • Safe Exam Browser for exams written in BCIT labs, and/or program proctoring, or delivery in the Test Centre
      • Timed exams
      • Combination of randomized and hard-coded questions, all presented in random order
      • Difficulty ranking (moderate to highest)
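
The randomization practices above (a bank at least twice the size of the formal quizzes, random question pulls, shuffled answer distractors) can be made concrete with a short sketch. This is illustrative Python only; the Learning Hub configures these behaviours through the Quiz tool interface, and the data structure shown is hypothetical.

    import random

    # Hypothetical question bank: each entry holds a stem, the correct
    # answer, and the distractors (incorrect options).
    bank = [
        {"stem": "2 + 2 = ?", "correct": "4", "distractors": ["3", "5", "22"]},
        {"stem": "3 x 3 = ?", "correct": "9", "distractors": ["6", "12", "33"]},
        {"stem": "7 - 4 = ?", "correct": "3", "distractors": ["2", "4", "11"]},
        {"stem": "9 / 3 = ?", "correct": "3", "distractors": ["6", "27", "1"]},
    ]

    quiz_size = 2
    # Intermediate-model rule of thumb: the bank should hold at least
    # twice the questions that formal quizzes will draw.
    assert len(bank) >= 2 * quiz_size

    quiz = random.sample(bank, quiz_size)  # random question pull
    for q in quiz:
        options = [q["correct"]] + q["distractors"]
        random.shuffle(options)  # randomized distractor order
        print(q["stem"], options)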

Delivery Models

  • Face-to-face
    1. Self-tests available at any time to all students
    2. Practice tests delivered face-to-face with follow-up group dialogue afterward
    3. Snap quizzes delivered face-to-face in a lab
    4. Exams delivered either face-to-face in a lab (group) or at the BCIT Test Centre, where students drop in during a prescribed time period to take their exam
  • Distance
    1. Self-tests available at any time to all students
    2. Practice tests delivered at a specific time on a specified date with follow-up group dialogue afterward (desktop conference or discussion tool)
    3. Exams, either:
      • Delivered at a distance following the BCIT proctoring process, wherein a professional signs a legal document agreeing to administer an exam to a student at a specific date and time; a security release code is sent to the proctor's e-mail address, or
      • Administered at the BCIT Test Centre, where students drop in during a prescribed time period.

Exam Questions Across a Program

  • Individual question databanks can be exported as .TXT files and loaded into the Learning Object Repository (LOR), secured against open access, and tagged with descriptions of topic, difficulty level, etc.
  • Other courses in the program can either download the .TXT files or import questions from other courses.
  • Questions from other courses can be used either as refresher, remedial, or advanced optional learning activities, or as core questions within the randomized question databank.

Competency/Performance-based Content

Self-tests, quizzes, and exams can be used to drive students to different content, for example (the branching logic is sketched after this list):

  • Student A scores 50% on a practice exam
  • Student B scores 80% on the same practice exam
  • Student C scores 100% on the same practice exam
  • Student A is directed to some remedial readings and activities and is asked to redo the practice exam to score 75% or better before moving on with the core content
  • Student B continues with the core content
  • Student C is offered additional higher level content as an option
  • The same strategy can be used to direct (customize) student learning based on the results of a pre-test at the beginning of a course or program.
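
A minimal Python sketch of this branching logic follows. The 75% threshold and routing are taken from the example above; in the Learning Hub itself this would be configured through release criteria on content, not written as code.

    def next_step(score: float) -> str:
        """Route a student based on a practice-exam score (percent)."""
        if score < 75:
            # Below the 75% gate: remedial work, then retry the practice exam
            return "remedial readings/activities, then redo the practice exam"
        if score < 100:
            return "continue with the core content"
        return "core content plus optional higher-level material"

    for student, score in [("A", 50), ("B", 80), ("C", 100)]:
        print(f"Student {student}: {next_step(score)}")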

Gamification

  • Within the Learning Hub there is a tool called StudyMate that enables multiple-choice and single-answer questions to be converted into online games.
  • These use the same reflective self-test questions you might create with the Quiz tool, but present them in a range of engaging game interfaces:
    • Fact Cards or Fact Cards +
    • Flash Cards
    • Pick-A-Letter (hangman)
    • Fill in the Blank
    • Matching
    • Crosswords
    • Glossary
    • Challenge