From the last post:
User-based Quality, Product-based Quality, and Manufacturing-based Quality all have characteristics that help me home in on a quality model for learning products.
Let's now impose ADDIE (Analysis, Design, Development, Implementation, Evaluation) onto this trifecta of magical characteristics.
ANALYSIS: In ADDIE, the Analysis phase is a needs assessment. In practice, this is when we ask ourselves the following questions:
Who is the learner audience?
What is the mode of delivery?
What subject coverage is desired?
What resources are available (SMEs, documents, etc.)?
What are the constraints (it's usually time)?
When does the project/module/learning piece need to be done?
Garvin says: The characteristics that connote quality must first be identified through market research (a user-based approach to quality).
I'd say that each of these components of Analysis contributes to high quality, and each should be identified as specifically as possible. In my experience, the vaguer the answers to these questions, the lower the quality of the result. I can back that claim up: off the top of my head, I can think of three distinct cases (not all my work, though one definitely was) where a poorly defined audience resulted in a poorly defined course that was not well received. On the other hand, one course I handled was well defined in terms of audience and coverage, and it was well received.
DESIGN AND DEVELOPMENT: Looking at these two phases of ADDIE, we are basically talking about good design and good work.
Design is the rigorous process of identifying learning objectives for the material and for the audience, and then creating a storyboard or prototype to see how things fit together. Suffice it to say, if you didn't clearly define your audience, the learning objectives are going to be vaguer. If you did not limit your subject matter coverage, you may be dumping rather than designing. Avoid the dump.
Good design checks:
ALIGN YOUR DESIGN:
Define learning objectives and verify them with another party (ideally an SME)
Create a storyboard or outline
Does the storyboard or outline include graphical representation?
Does the storyboard or outline refer continually to the learning objectives, and can you identify when and where?
Development is where the actual work gets done, and there is a lot of room for quality checks here. Development falls into the iterative SAM/ADDIE/Agile feedback cycle, so feedback is definitely an important quality check that can benefit the end learning product.
ITERATIVE DEVELOPMENT:
How many feedback cycles has the work been through? With whom?
Has the work been revised based on feedback? Why or why not?
In summary, the product-based approach to quality rests on the idea that "high-quality characteristics" must then be translated into identifiable product attributes. Perhaps the presence or absence of each attribute is a good enough measure. Or maybe a combination of presence/absence and some "standard" or level for each attribute would help to measure quality.
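To make that idea concrete, here is a minimal sketch of how such a measurement could work. The attribute names, the 0-3 scale, and the equal weighting are my own assumptions for illustration, not a validated rubric.

    # Hypothetical product attributes for a learning module, each rated 0-3
    # (0 = absent, 1 = present but vague, 2 = defined, 3 = specific and verified).
    ATTRIBUTES = [
        "audience_defined",
        "delivery_mode_defined",
        "objectives_verified_with_sme",
        "storyboard_maps_to_objectives",
        "feedback_cycles_completed",
    ]

    def quality_score(ratings):
        """Average the 0-3 ratings; attributes not rated count as absent (0)."""
        return sum(ratings.get(attr, 0) for attr in ATTRIBUTES) / len(ATTRIBUTES)

    example = {
        "audience_defined": 3,
        "delivery_mode_defined": 2,
        "objectives_verified_with_sme": 1,
        "storyboard_maps_to_objectives": 2,
        # "feedback_cycles_completed" is not rated, so it counts as 0
    }
    print(round(quality_score(example), 2))  # 1.6 on a 0-3 scale

Even a crude score like this makes the vagueness visible: a module that skipped feedback cycles or never verified its objectives can't quietly pass as "done."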
IMPLEMENTATION/EVALUATION:
Implementation is what happens when the course is done and it gets delivered. Ideally you'd do a test run rather than dump the entire developed course into the lap of a trainer (or a student) and just hope for the best. However, this is how a lot of information is shared. Heck, I've done it myself. You dump everyone in the pool and see who can swim.
This is not fair. Well-designed material is a representation not only of knowledge, but of your corporate brand/entity. Poorly designed material is, quite frankly, a downer.
How can we make the implementation and evaluation steps critical to quality?
IMPLEMENTATION
Run a pilot (expensive)
Test a component of the learning and evaluate it
Train the trainer (train or communicate details)
Repetitive tasks, such as uploading files, have a quality check or are automated (a small sketch follows this list)
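As a rough illustration of that last item, here is a minimal sketch of an automated pre-upload check. The directory name, expected file types, and specific checks are hypothetical; the point is that a repetitive step gets a consistent, scripted quality gate rather than a manual once-over.

    from pathlib import Path

    # Hypothetical set of file types we expect in a finished course package.
    EXPECTED_SUFFIXES = {".pdf", ".pptx", ".mp4"}

    def pre_upload_check(course_dir):
        """Return a list of problems found before files are uploaded."""
        problems = []
        files = [f for f in Path(course_dir).iterdir() if f.is_file()]
        if not files:
            problems.append("No files found to upload.")
        for f in files:
            if f.suffix.lower() not in EXPECTED_SUFFIXES:
                problems.append(f"Unexpected file type: {f.name}")
            if f.stat().st_size == 0:
                problems.append(f"Empty file: {f.name}")
        return problems

    if __name__ == "__main__":
        # "courses/module_01" is a made-up path for the example.
        for issue in pre_upload_check("courses/module_01"):
            print("CHECK FAILED:", issue)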
EVALUATION
Are students able to perform the tasks set forth in the learning objectives?
Are trainers able to convey the learning objectives?
Did the hands-on work focus on the learning objectives?
Gather actionable feedback; evaluations must be revised continually
The manufacturing-based approach to quality was, and continues to be, rooted in how well a product conforms to standards. An approach that ignores the "specification" will not result in a quality product. In the case of implementation, we'd have to see how well the material stood up to its intended purpose. The best vehicles for this test are 1) the learning objectives and 2) the hands-on material (labs). Another way to make sure implementation goes as planned is to make sure the trainers (if it's not you) are prepared. Finally, a cheaper, faster way to "test" implementation might be not to run a whole pilot but instead to take a chunk of the material and try it out before a major release.
Poor Evaluation. The often ignored, youngest child of ADDIE. Evaluation can bring everything back to the beginning by asking whether our specification (the learning objectives) was met, and why or why not. You can go back to the iteration during design and development: did the design rely on feedback? How much? Is more needed? Was the design examined? Are we asking questions that actually tell us something about the content? Sometimes the comments are far more valuable than the smile sheets or ranking evaluations.