Monday, January 25, 2016

Super Excited Because Agile Has Come to my Corner

UPDATE: 
With a move at the office, our whiteboard got "shut down" (there was no room for it). However, I found a great virtual whiteboard app I'm trying out called Realtime Board. It looks something like this (this is the demo):



PREVIOUS POST:
When I'm in a meeting and I hear words like BUY IN and APPROVE and SIGN OFF, I get really upset. Do you want to know why? Here's why:



The lower arrow is better

In the first arrow, the BUY IN happens once, at the beginning. See how much room there is for failure after that?

The second arrow gives everyone a chance to get comfortable and "BUY IN" (if you MUST use that terminology) every time there's an iteration.

My new colleague and I are trying out Agile now because it just makes things far more transparent in terms of what we're doing and what we have to do. It is a better approach than having pointless meetings with too much management speak and not enough action.

Our board

Monday, January 11, 2016

Quality II: ADDIE application

From the last post:


User-based Quality, Product Quality and Manufacturing Quality all have characteristics that help me home in on a quality model for learning products. 

Let's now impose ADDIE onto this trifecta of magical characteristics. 


ANALYSIS: In ADDIE, the Analysis is a needs assessment. In practice, this is when we ask ourselves the following questions:


Who is the learner audience? 
What is the mode of delivery? 
What subject coverage is desired?
What resources are available (SMEs, documents, etc.)? 
What are the constraints (it's usually time)? 
When does the project/module/learning piece need to be done? 

Garvin says: The characteristics that connote quality must first be identified through market research (a user-based approach to quality).


I'd say that each of these components of Analysis contributes toward high quality. You should be able to identify each of these as specifically as possible. In my experience, the vaguer the answer to each question in this list, the lower the quality. I'd love to back that claim up with real test cases, and I actually can: off the top of my head, I can think of three distinct cases (not all my work, though one definitely was) where a poorly defined audience resulted in a poorly defined course that was not well received. On the other hand, I can think of one course I handled that was well defined in terms of audience and coverage. The course was well received. 


DESIGN AND DEVELOPMENT: In ADDIE, these two phases basically come down to good design and good work. 


Design is the rigorous process of identifying learning objectives for the material and for the audience, and then creating a storyboard or prototype to see how things fit together. Suffice it to say, if you didn't clearly define your audience, your learning objectives are obviously going to be vaguer. If you did not limit your subject matter coverage, you may be dumping rather than designing. Avoid the dump. 


Good design checks:


ALIGN YOUR DESIGN: 

Define learning objectives and verify objectives with another party (SME ideally) 
Create a storyboard or outline
Does the storyboard or outline include graphical representation? 
Does the storyboard or outline refer continually to the learning objectives, and can you identify when and where? 

Development is where actual work gets done. There is a lot of room for quality checks during development, and this falls into the iterative SAM/ADDIE/Agile feedback cycle. Feedback is definitely an important quality check that can benefit the end learning product.


ITERATIVE DEVELOPMENT:

How many feedback cycles has the work been through? With whom? 
Has the work been revised based on feedback? Why or why not? 

In summary, the product-based approach to quality rests on the idea that "high-quality characteristics" must be translated into identifiable product attributes. Perhaps presence or absence of the attribute is good enough. Or maybe a combination of presence/absence and then some "standard" or level of the attribute would help to measure quality.
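To make that idea concrete, here is a minimal sketch (entirely my own invention, not anything from Garvin) of what presence/absence scoring combined with a "level" might look like. The attribute names are hypothetical examples:

```python
# Hypothetical sketch: score a learning product on its quality attributes.
# Each attribute gets a level in [0.0, 1.0], where 0.0 = absent and
# 1.0 = fully present / meets the standard. Binary attributes simply
# use 0.0 or 1.0; others can sit in between.

def quality_score(attributes: dict[str, float]) -> float:
    """Return the mean attribute level as a crude overall quality score."""
    if not attributes:
        return 0.0
    return sum(attributes.values()) / len(attributes)

# Example attributes for one (imaginary) course:
course = {
    "audience_defined": 1.0,       # presence: audience was clearly identified
    "objectives_verified": 1.0,    # presence: objectives checked with an SME
    "storyboard_graphics": 0.5,    # level: only half the storyboard illustrated
    "feedback_incorporated": 0.0,  # absence: no revision after review
}
```

A simple mean is obviously naive; weighting the attributes (audience definition probably matters more than graphics) would be a natural next step.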

IMPLEMENTATION/EVALUATION: 
Implementation is what happens when the course is done and it gets delivered. Ideally you'd do a test run rather than dump the entire developed course into the lap of a trainer (or a student) and just hope for the best. However, this is how a lot of information is shared. Heck, I've done it myself. You dump everyone in the pool and see who can swim. 

This is not fair. Well-designed material is a representation of not only knowledge, but of your corporate brand/entity. Poorly designed material is, quite frankly, a downer.


How can we make the implementation and evaluation steps critical to quality? 


IMPLEMENTATION

Run a pilot (expensive) 
Test a component of the learning and evaluate it 
Train the trainer (train or communicate details)
Repetitive tasks such as uploading files have a quality check or are automated

EVALUATION

Are students able to perform the tasks set forth in the learning objectives? 
Are trainers able to convey the learning objectives? 
Did the hands-on work focus on the learning objectives? 
Gather actionable feedback - evaluations must be revised continually

The manufacturing-based approach to quality was and continues to be rooted in how well a product conforms to standards. An approach that ignores the "specification" will not result in a quality product. In the case of implementation, we'd have to see how well the material stood up to its intended purpose. The best vehicles for this test are 1) the learning objectives and 2) the hands-on material (labs). Another way to make sure implementation goes accordingly is to make sure the trainers (if it's not you) are prepared. Finally, a cheaper, faster way to "test" implementation might be not to run the whole pilot but instead to take a chunk of the material and try it out before a major release. 


Poor Evaluation. The often ignored, youngest child in ADDIE. Evaluation can really bring everything back to the beginning by asking whether or not our specification - the learning objectives - was met. Why or why not? You can go back to iteration during design and development: did the design rely on feedback? How much? Is more needed? Was the design examined? Are we asking questions that actually tell us something about the content? Sometimes the comments are far more valuable than the smile sheets or ranking evaluations. 


Thursday, January 7, 2016

Quality I: Old School Quality Concerns

A few days ago, I had a little idea about quality and instructional design, and I figured I'd really need multiple posts to deal with it. Here's the first of the multiple.

Garvin (1984) dug through his literature and summarized five ideas about quality, all stemming from different fields including marketing, economics, operations management and even philosophy. His goal was (I think) to form a more holistic concept for quality as it pertained to products. 

What he came up with was a list of definitions for quality. There are five of them and he has all the proper citations in his paper, and since this blog is really for my own mind wanderings, I'm going to go ahead and leave the citations out for now. 

Transcendent Quality -  proponents of this view claim that quality cannot be defined precisely; rather, it is a simple, non-analyzable property that we learn to recognize only through experience. 

Product Quality - Product-based definitions are quite different; they view quality as a precise and measurable variable. Quality reflects the presence or absence of measurable product attributes (which are assigned a cost, when analyzed economically).

User-based Quality - individual consumers are assumed to have different wants or needs, and those goods that best satisfy their preferences are those that they regard as having the highest quality. Personalized quality - this is very true today (2016). 

Even perfectly objective characteristics, however, are open to varying interpretations. Today, durability is regarded as an important element of quality (1984).  

Manufacturing Quality - "conformance to requirements." Once a design or a specification has been established, any deviation implies a reduction in quality. Quality is defined in a manner that simplifies engineering and production control. Efforts to simplify and streamline (which are equivalent to reductions in the number of deviations) exist solely to reduce costs. This type of quality is perhaps applicable to software quality. 

Value-based Quality - According to this view, a quality product is one that provides performance at an acceptable price or conformance at an acceptable cost.

Clearly, the "transcendent quality" definition is useless to me (shout out to the philosophers in the crowd!). So let me just ignore that. I am also going to ignore value-based quality because it is really saying that "cheap things have quality value because they are cheap". I just can't see how it fits into the learning product idea in terms of material quality. 

Moving on, product quality makes a lot of sense: presence or absence of a measurable product attribute. Some of the examples include how much butterfat is in a good ice cream, or how many knots per inch make up a good rug. These are measurable attributes. The attributes themselves are arguably subjective, but Garvin assumes that most people would know the attribute as it pertains to quality.

Both the user-based quality definition and the manufacturing definition could work for me in terms of setting up my "learning product" quality idea. The user-based quality definition speaks very much to the current trend of product customization, which has a lot of value. The common person can have customized learning, just as he or she can have customized drinks at Starbucks. 

While manufacturing quality seems like a silly thing to bring up here, it actually isn't, because people talk about technology products in a manufacturing way all the time. I'm not saying I like it, but people do talk about standards and conformance and consistency a lot. There are a lot of boring, repetitive tasks in technology that ideally should be automated, but sometimes are not, or fall into a type of "mechanical" work that is similar to manufacturing: large-scale file changes and uploads. Even automation is subject to a quality check. So it's not ridiculous to consider "conformance to requirements" in at least one of the ADDIE steps...ah yes. I'm trying to transpose some of these ideas into ADDIE....Mwa ha ha ha.
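As a concrete illustration of "even automation is subject to a quality check" (my own sketch, not part of Garvin's paper), here is what a conformance check for a bulk file upload or copy might look like: every source file must arrive at the destination with an identical checksum, and any deviation is flagged.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(source_dir: Path, dest_dir: Path) -> list[str]:
    """Conformance check for a bulk copy/upload: every source file must
    exist at the destination with an identical checksum. Returns a list
    of human-readable deviations (an empty list means the batch
    "conforms to requirements")."""
    deviations = []
    for src in sorted(source_dir.rglob("*")):
        if not src.is_file():
            continue
        dest = dest_dir / src.relative_to(source_dir)
        if not dest.exists():
            deviations.append(f"missing: {dest}")
        elif sha256_of(src) != sha256_of(dest):
            deviations.append(f"corrupted: {dest}")
    return deviations
```

Run after the automated transfer, this is the manufacturing-quality idea in miniature: the source files are the specification, and any missing or altered file is a deviation that reduces quality.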

So in summary: User-based Quality, Product Quality and Manufacturing Quality all have characteristics that help me home in on a quality model for learning products. 

The next post will clarify further...


Wednesday, January 6, 2016

Information Mapping - Overview

A colleague of mine sent me a few interesting links, so I'm putting them here for reference and commenting later. When I have time.

Horn, R. E. Clarifying Two Controversies About Information Mapping's Method. Accessed 01/06/2016.

Horn, R. E. (1974). Information Mapping. Training in Business and Industry, 11(3).

Parsons School of Design Information Mapping Papers

Tuesday, January 5, 2016

ID and Quality: A Prelude

What is quality?

When we buy eggs at the market, we don't want a broken one. When we order food at a restaurant, we want it to taste good. A high quality experience in economy class is when the flight attendant offers you a glass of complimentary wine just "because you look like you need it" (I recently had this experience, and it resulted in my writing a glowing satisfaction report for the flight). 

What is poor quality? 

Every egg carton has broken eggs. The food is too salty in the restaurant. The flight attendant was not warm and friendly. All have different root causes, but each of these can make or break a quality situation.

How do we define quality when it comes to instructional material? How do you know if what you have produced is high quality before you deliver the training? 

In order to answer these questions, I have decided to treat instructional material as a product. It is, in a sense, a product. It is painful to me, as both the daughter of academics and a trained academician, to admit this, but the plain fact of life in America is that everything can be productized. All our experiences are at some point commodified. This includes the very act of acquiring information. When you design instructional material, you are really designing a product. A learning product. 

Even though it annoys me, working on this assumption eases the search for answers because there is a mountain of literature and information about quality (but how good is it?). 

One seminal work that I'm going to start with is David Garvin's (1984) paper, "What Does Product Quality Really Mean?". I am looking at this one closely because it is a review paper that summarizes ideas on quality (up to 1984, obviously). One glaring point here is that in 1984, software quality was in its infancy. Therefore, software training was not even born yet. However, I'm going to argue that the ideas of quality, as a concept, are somewhat ageless, and that there is some hope that we can apply the ideas in Garvin's work to the present questions: What are the key attributes of high quality instructional material, and how can an instructional designer strive to produce high quality material? 

My ultimate goal is to answer these two questions, and also to produce a checklist that helps instructional designers determine whether or not they have met a basic quality standard. This is probably going to take more than one post!
