I was discussing with a learner today the impact of Prensky’s concepts about “Digital Natives & Digital Immigrants” and where generational assumptions about preferences and needs in learning design stand today. The response I gave was along the following lines. And, though I realize this is a bit superficial itself as a discussion of the topic, I am curious to know whether you, dear reader, also see the irony.
The 2001 Prensky article is often cited and is where his hypothesis about digital technology’s influence on generational preferences and needs in learning was disseminated most widely. It is useful to recognize that Prensky’s ideas were formulated and shared widely at a time when the Internet and World Wide Web were relatively new and internet-enabled devices were proliferating, as was their use in homes and schools. His ideas are certainly plausible at first read and are thus not easily disregarded, yet his analysis was at the same time arguably superficial, and its influence, arguably also, has been both harmful and helpful to learners, educators, and instructional designers alike.
It is also helpful, for developing theory, method, and even policy about learning, to have postulations like Prensky’s about the trends and influences of technology voiced and to thoughtfully consider how they may have influenced, or are influencing, learning experience and design. Nevertheless, works like Prensky’s – ones that seem to make good “common” sense on the surface and have catchy names and phrases attached to them for easy recall – are too often taken as indisputable “fact” by individuals and groups inclined to avoid critically thinking them through against experiential or other forms of evidence (we see this a lot with individuals who are uneducated or poorly educated) and by individuals and groups prone to pick and choose ideas that neatly support their agendas (we see this a lot among politicians, policymakers, journalists, and the “media”).
Invariably, though, once the dust settles from the initial impact of such big ideas, like Prensky’s Digital Natives and Digital Immigrants, research and more cautious observers and critical thinkers alike will offer reflections on the idea’s accuracy and impact, which several have done. I refer you as additional reading to this reprint of the article by Helsper and Eynon (2010) from the esteemed British Educational Research Journal. Though written from a British perspective, these authors’ critical argument about the harm generational assumptions can have on learners and educators is significant to consider as an instructional designer.
[Learner Scholarship Tip: You might also search the Library and the open web for items from the article’s References list to see where the authors’ sources come from and to extend your learning. Using an article’s “References” to locate and read original or other sources on a topic is a common research technique effective scholars and academic researchers use. Why? Simply because it is a way for YOU to determine (with your own critical mind!) where authors got their ideas and to confirm whether they’ve interpreted sources correctly and cited them meaningfully (per your original analysis & evaluation of the sources)!]
I share all of this to encourage you to expand your reading and, thus, your thinking to draw your own conclusions about Prensky’s work and its influence on instructional design. For me, reviewing the available evidence, I think Prensky’s ideas were useful as frames of reference and conversation – as conversation starters and lenses for analyzing events of the current era – yet ultimately they did more harm than good in the short term because they led to an ageist bias (that “young” people are adept and can use technology effectively to learn while “old” people are inept and unlikely to learn well with digital technologies) that has unnecessarily furthered the segregation of so-called and perceived groups of “young” and “old” learners and, yes, educators as well.
Some examples that come readily to my mind are:
The child or adolescent learner who comes from an impoverished home environment into a digital tech-intensive school environment and who may not have friends or other means of access for exposure to digital devices. This learner most likely has little to no exposure to smartphone, tablet, or even desktop/laptop computer use outside the school. Nevertheless, the “digital native” assumption means the learner is given a curriculum that presumes a preference for, familiarity with, and even fluency in using digital means for learning. The learner falls behind classmates in performance due to the “digital divide” they’ve experienced in their home environment relative to more affluent classmates and is branded as an under-performer at worst and an under-achiever at best. Still, the learner is disadvantaged all because of a “meme” (or metaphor, concept, label, profile…whatever you want to call the assumption of any young person as a “Digital Native”).
The middle-aged or older adult learner who is a moderately, or even highly, tech-inclined individual but who is forced to complete a low-tech style curriculum design (in college classrooms, workplace training programs, etc.) under the erroneous assumption that they, like their adult peers in the class, are not capable of using high-tech style learning tools, and who becomes quickly bored with the learning process and drops out of the class, or even an entire degree, certificate, or training program.
The skilled and seasoned and caring educator who is subtly forced out of practice because of an assumption by policy or persons in their workplace that they are unable, or will be slow, to learn to effectively use technology to teach a younger and presumably more technologically and digitally savvy learner population. Perhaps the educator could have been offered an assessment of motivation or will, or even coaching for how to use a more peer-learning based model and act as a facilitator (aka that other popular meme: “guide on the side”), rather than being ousted on the assumption that traditional lecture and “teacher-to-student” interaction (aka the other meme: “sage on the stage”) is all the educator would be willing to use as an instructional method. Worse yet, the educator may even come to believe that they are unable to adapt to provide what their students need and become depressed and unhappy in their profession based on widespread assumptions about age-related capabilities in the popular press and policy decisions. That is, a feeling of satisfaction and career success could be undone for the educator all due to a widespread misapplication of a hypothesis about generational styles.
Can you think of any examples that illustrate the possible effects (positive or negative) that applying Prensky’s ideas without critical analysis of the individual learner’s needs could have? Have you personally experienced any positive or negative effects from his ideas in your learning or workplace environments?
I’ll close here by noting that some of the “backlash” I’ve observed as linked to the widely and often misinformed application of Prensky’s ideas has been a move toward individualization, or personalization, of learning designs. Trends such as competency-based learning, adaptive learning technologies, prior learning and skills assessments, and the like are proliferating in 2016, in part, I’d argue, to realize the ideal that each and every learner (regardless of age) can access educational opportunities and experience learning as they prefer and need to for optimal learning (i.e., having an ability to recall, apply, and transfer to new contexts their knowledge and skills).
That is: The disruption to the monolithic “curriculum” that makes broad assumptions about what learners of certain ages (and genders!) need or prefer and are capable of is arguably revolutionary. And, in some sense, Prensky’s folly (or, rather, the folly of those who failed to think critically about and apply his ideas cautiously) has created (arguably) positive change that, perhaps ironically, is facilitated by “digital” technology!
The video embedded below was prepared as a Discovery Session opportunity for the Ashford University Teaching and Learning Conference in November 2015. You are encouraged to post a comment here or contact me individually to discuss infographics and to share your infographics.
Johnson, L. (2015). Developing cognitive skills: Infographics for CAVES! [Video file]. Retrieved from https://youtu.be/5efIblIChmM
Johnson, L. (2015). Developing cognitive skills: Infographics for CAVES! [Weblog post]. Retrieved from https://reflectivelearning.net/2015/11/02/infographics/
1. Distinguish infographics from other graphic formats
2. Recognize the characteristics of an effective infographic
3. Recognize instructional strategies for using infographics
4. Locate existing infographics for use in instructional designs
5. Recall technologies for creating effective infographics
6. Plan use of familiar technologies to create infographics
Criteria for Evaluating Infographics
These criteria are meant to be a starting set of considerations for anyone creating a rubric or other evaluation tool for assessing infographics you create or learners create in coursework.
Has a (main) point
Is data-driven
Includes high impact visuals
Designed with high contrast colors
Utilizes consistent color scheme
Is accessible (i.e., includes, at minimum, text describing visuals)
Remember, when creating infographics, you and your learners are employing and sharpening higher-order cognitive skills – remember these verbs as you write outcomes and objectives for infographics – CAVES:
Below are several of the resources shared in the video. If you know of other resources about infographics you would like to share, please post in a comment to this post!
Books About Infographics
Krum, R. (2013). Cool infographics: Effective communication with data visualization. Wiley. ISBN-13: 978-1118582305.
Meyer, E. K. (1997). Designing infographics. Hayden Books. ISBN-13: 978-1568303390.
Beegel, J. (2014). Infographics for dummies. For Dummies. ISBN-13: 978-1118792384.
*Remember, though, for non-technology-intensive courses, or to avoid issues with requiring third-party / web-based tools as part of your instructional designs, consider using familiar and common technologies such as Microsoft PowerPoint or Word or, possibly, Google Slides and Docs.
I recently compiled a list of open-access journals and thought I would share that list here. Please feel free to add additional journal recommendations and comments in reply to this post or point to other listings of journals related to learning, teaching, and instructional design practice and research!
The list I provided here is not intended to be exhaustive of all possible journals for reading and publishing opportunities in these areas. This was merely my humble attempt one afternoon to compile a list of journals and I hope you find it useful!
(IJITDL) International Journal of Instructional Technology and Distance Learning
This tutorial describes activities at each level of the revised cognitive taxonomy. The focus is on use of innovative tools and processes with digital technology. Projects, essay exams, reports, and other traditional activities are not the focus of this tutorial. Those activities are appropriate in certain learning contexts and learning experience designers will need to weigh an activity’s learning benefit from including a technology-centered approach. Please see the References | Credits section of this tutorial for additional information on its origination, references, and preferred citation when giving attribution to the contents of the tutorial.
Real-world activities demonstrating Remembering include recalling information in meaningful ways, such as a learner being able to recite a policy, quote facts and figures, such as prices, or relate safety or other procedural rules from memory (Clark, 2015).
To design for bookmarking, ask learners to select online content on a specific topic or series of topics and organize the webpages and articles online about the topic(s) on a device, such as a laptop, tablet, or smartphone. You can assess performance on this style of activity with the learners’ submission of an image (e.g., screen capture) of the bookmarks on their device to reveal the organization of the items in a format that demonstrates accurate recall and identification of key themes in a topic or series of topics. While selecting items for bookmarking, learners may also demonstrate cognition representative of the understanding, analysis, and evaluation levels of the taxonomy.
To design for social bookmarking, ask learners to use a web-based social bookmarking technology, such as Diigo, LiveBinders, Scoop.it, Pinterest, or Delicious. The social bookmarking activity might be completed individually or as part of a small or large group collaboration. Organizing and tagging bookmarks with specific keywords demonstrates the ability to identify, recall, and name key aspects of a topic. Assess this style of activity with the learners’ submission of a link to the location of their bookmarks. While selecting items for social bookmarking, learners may also demonstrate cognition representative of the understanding, analysis, and evaluation levels of the taxonomy.
To design for labeling, ask learners to use a blank map or taxonomy you have created with a web-based mind mapping, concept mapping, or taxonomy mapping technology, such as FreeMind or Coggle. The labeling activity might be completed individually or as part of a small or large group collaboration. Ask learners to label an empty concept map, mind map, or taxonomy to add key terms or concepts and demonstrate recall and recognition of order or alignment. Alternatively, learners can be asked to label an image or process diagram, demonstrating the ability to identify, locate, and name through recall of specific items.
To design for quizzing, ask learners to list, match, label, identify, or otherwise recall information to answer a series of questions about a concept. Though quiz or test questions in true/false, multiple-choice, fill-in-the-blank, and matching formats can be written for higher order cognitive skills, these question forms are most often associated with recalling and relating information at the lowest level of cognition in the revised taxonomy, which is remembering. Alternatively, asking learners to create questions and develop answer keys for quizzes that are completed by peers is an effective strategy to engage learner recall of knowledge. Asking learners to write questions may also extend the quizzing activity so that students are demonstrating a higher level of cognitive ability.
In a listing activity, learners demonstrate the same skill as with writing a list, but it is performed in a digital environment using ordered lists (numbered) or unordered lists (bulleted). To design for listing, ask learners to use a word processing program, such as Google Docs or MS Word, or even the text editor and a discussion forum within an online course, to produce lists. An alternative is to ask learners to create lists on slides using presentation technology, such as MS PowerPoint or Google Slides. Creating an ordered list by number or date demonstrates the skill of recalling and sequencing events in accurate order. Creating an unordered list with bullet points demonstrates the skill of recalling and compiling information. Listing is also an effective activity for brainstorming prior knowledge on a topic before a higher order cognitive activity.
Refer to Churches (2009) for example bookmarking rubrics and exemplars.
Authentic activities demonstrating Understanding include a learner being able to translate an equation, explain steps for performing a complex task, or interpret issues and instructions with original phrasing (Clark, 2015).
To design for advanced searching, ask learners to construct Boolean (e.g., AND, NOT, OR, etc.) search strings to demonstrate an understanding of a topic’s key components. The ability to modify a search by phrasing, inferring, and interpreting key components in a search topic demonstrates understanding. As an extension of this activity, providing search strings beyond single words and evaluating the results of the search incorporate higher order cognitive abilities.
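For instance, a prompt for this activity might ask learners to draft and then iteratively refine search strings like the following (these example strings are illustrative only, not drawn from the original tutorial):

```
"digital natives" OR "net generation"
("digital natives" OR "net generation") AND "instructional design"
("digital natives" OR "net generation") AND "instructional design" NOT gaming
```

Asking learners to explain how each added operator (OR broadening the terminology, AND narrowing to the design context, NOT excluding an off-topic thread) changed their results is one way to surface the interpretation and inference the activity is meant to demonstrate.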
To design for journaling, ask learners to use an individually managed or group blog, wiki or other online writing/journaling tool, such as Google Docs, to explain, compare, or summarize concepts. The journal can be part of a larger activity involving collaboration and discussion to scaffold development toward higher order cognitive abilities.
Otherwise known as tagging, to design for categorizing, ask learners to organize and classify a group of items, such as documents, images, or webpages, using folders or social bookmarking technology. A group of tags can be provided, or learners may be asked to create original tags for their categorizations individually or in a group setting. Learners would then apply the tags to the content to organize it and demonstrate understanding of the key themes of the topic. Asking learners to justify the use of a tag or a process of categorization transcends understanding and will demonstrate higher order cognitive abilities, including analyzing and evaluating.
Otherwise known as commenting, to design for annotating, ask learners to use a web-based annotation tool to add notes as comments to PDFs or other document files or images to demonstrate they understand content beyond a recognition or recall level. Giving an image of a process or taxonomy and having learners annotate the content is another example of commenting or annotating for demonstrating the understanding level of cognitive ability in the revised taxonomy.
Churches (2009) noted that, “the act of subscription by itself does not show or develop understanding, but often the process of reading and revisiting the subscribed feeds leads to greater understanding” (p. 11). Nevertheless, selecting a subscription, usually using an RSS-feed technology, and submitting the subscription along with an explanation or summary that interprets the relationship of the subscription (i.e., feed) to a concept, process or topic, is a way for learners to demonstrate understanding abilities. Alternatively, by selecting and justifying a series of subscriptions/feeds on a topic, learners are producing a resource demonstrating thinking at the critical level of creating, which is the highest level of the revised cognitive taxonomy.
Refer to Churches (2009), for examples of searching, journaling, and wiki editing rubrics and exemplars.
Authentic activities demonstrating Applying include using concepts in novel situations and may be demonstrated by following a problem-solving method for a variety of issues or by applying statistics to determine survey validity (Clark, 2015).
To design for operating, ask learners to demonstrate operating or manipulating “hardware and applications to obtain a basic goal or objective” (Churches, 2015, p. 9). Learners may also be asked to accurately use an instrument in a lab experiment while video- or audio-recording themselves completing the steps. Screen capture technology is also useful for asking learners to demonstrate accurate techniques with library database searches and file or other content creation, which demonstrates the application of principles and techniques.
Typically involving the process of uploading as well, to design for sharing, ask learners to share images, video, audio, text content, or mixtures of these within a course or using a web-based service and network community, such as Facebook, Twitter, or LinkedIn. Sharing according to a set of created or prescribed standards will demonstrate application, whereby students are applying a set of principles and methods. Collaborating with peers in the process of selecting, organizing, and delivering uploaded content or content curated from around the web can initiate higher order cognitive abilities, such as analyzing and evaluating.
To design for editing, ask learners to revise existing content of their own creation or of others in a document, repository, or other format, such as a wiki, a Google Doc, or even a Google Slides presentation. The act of editing processes, procedures, and content according to a set of guidelines and principles demonstrates application, but may also involve the analyzing and evaluating, and arguably the creating, levels of the revised cognitive taxonomy as well.
To design for playing, ask learners to present, perform, interview, or otherwise engage in a simulated context. Screen captures, video, and/or audio of play encourage reflection on accuracy in applying principles and methods in specific contexts. Learners might also play in MORPGs (multiplayer online role-playing games) to demonstrate role-playing and the application of appropriate methods or techniques. Successful play and operation of a game demonstrates recall of methods and understanding of processes or tasks when applying skills. Incorporating nonverbal communication (e.g., gestures, attire, posture) using an avatar creates an opportunity to assess learners’ application of skills in the affective domain as well.
Refer to Churches (2009) for example rubrics for these activities and exemplars of collaboration, audio/video conferencing, and interactive whiteboards.
Authentic activities demonstrating Analyzing include distinguishing between facts and inferences and deconstructing concepts or material items so the organizational structure is understandable (Clark, 2015).
Mashing involves aggregating or integrating multiple sources of data within a single product or output, such as a video, report, feed, or collage of items. To design for mashing, ask learners to demonstrate their skills in analyzing by copying, inserting, embedding, or otherwise pasting various types of content into word-processing or presentation files that can be submitted for evaluation. Mashing inherently involves a degree of evaluating and creating, which are higher order cognitive skills in the revised taxonomy.
To design for linking, ask learners to compile related pages of content in a wiki, link to relevant blog postings, or compile links in a document or other posting format, demonstrating analysis of the content for selection in the compilation. By linking relevant items or groups of items, learners demonstrate the ability to deconstruct and differentiate multiple sources of information and break down the sources into coherent related components or categories.
Otherwise known as reverse engineering, cracking occurs in applications-based learning, such as computer programming, software development, webpage designing, and all forms of engineering. To design for cracking, ask learners to reverse engineer an existing item by deconstructing or “cracking” the existing creation. Naturally, designers will want to consider the legalities of reverse engineering an existing item prior to assigning this type of activity.
Refer to Churches (2015) for more examples, including an example data processing rubric and exemplars.
Authentic activities demonstrating Evaluating include appraising information, people, or situations and making judgments, which could involve selecting effective solutions, conducting appropriate hiring of personnel, or explaining and justifying budgetary items (Clark, 2015).
To design for commenting, ask learners to contribute constructive critique or engage in a dialogue aimed toward negotiating meaning using a threaded discussion, comments on a blog, revisions to a wiki or shared document, or annotations of images or other forms of content. Submitting reflective comments using video, audio, images, and documents as the subject matter or means of reflection is an effective way to demonstrate learners’ cognitive ability with evaluating. Furthermore, posting comments and following up on contributions requires learners to evaluate the materials and contexts and to structure interactions in meaningful and coherent ways through discussion.
To design for moderating, ask learners to moderate an asynchronous discussion, act as a primary or co-editor in a wiki or shared document, or to appraise content by peers in another format to demonstrate cognitive skills at the evaluating level of the revised taxonomy. As Churches (2009) explained, part of the process of moderating is evaluating information from a variety of perspectives to assess the worth, value, and appropriateness of content according to a set of standards.
Although collaborating can occur at each level of the revised cognitive taxonomy, it is at the evaluating and creating levels that collaborative learning activities are best situated when designing activities. To design for collaborating, ask learners to participate in a small group project drawing on the collective understanding and analysis of peers for an outcome that is reflective of their individual and collective efforts. Effective collaborative activities involve the learner in evaluating the strengths and abilities of collaborators as well as the quality of contributions.
To design for networking, ask learners to select an appropriate community from a variety of options, such as Facebook and LinkedIn groups, to discern the quality of the network and to evaluate its members for a specific purpose. As an extension, screen captures of engagement in the online community can serve as evidence during assessment of quality participation according to standards set forth by the designer or agreed upon by the learners in advance of the activity.
To design for reviewing, ask learners to conduct an initial (i.e., alpha) or final (i.e., beta) test of an application, process, or procedure as part of an activity whereby they will demonstrate their abilities in evaluating according to a set of standards for the application, process, or procedure. Effective reviewing requires learners to analyze as well as evaluate the application, process, or procedure tested to determine correct functioning and effectiveness.
To design for validating, ask learners to develop a set of criteria or select from an existing criteria set to evaluate online content, peer content, or other forms of content to discern the value, accuracy and appropriateness of that content. As Churches (2009) noted, “With the wealth of information available to students combined with the lack of authentication of data, students of today and tomorrow must be able to validate the veracity of their information sources” (p. 30). Although it may seem similar to reviewing, the validating activity will typically require a higher level of judgment and therefore evaluating of the content.
Refer to Churches (2009) for example rubrics and exemplars for validating information and threaded discussion.
Authentic activities demonstrating Creating, a form of synthesizing, involve building structures or patterns from various concepts or materials and forming a new concept or material, with the emphasis on the creation of innovative meaning and structure (Clark, 2015). By writing a manual for operations or processes, designing a mechanism or process for accomplishing a task, integrating solutions or ideas from various sources to solve problems, or revising processes effectively, Clark (2015) notes, learners are creating and thereby synthesizing, which is the highest level of cognitive ability represented in the revised cognitive taxonomy.
To design for directing, ask learners to plan for, develop, and direct or produce an artifact representing their learning. The outcome demonstrates creating because directing or producing requires learners to envision an outcome for processes and demonstrate the ability to evaluate and analyze alternative paths and outcomes in advance. Furthermore, reflective journaling about the creation process facilitates learning across all cognitive ability levels represented in the revised cognitive taxonomy.
To design for developing, think of multimedia as the primary means of output for demonstrating abilities. Ask learners to create with multimedia to demonstrate their innovative thinking about patterns and structures of content and/or processes. Filming, animating, videocasting, podcasting, and mixing or remixing content, as well as developing an image, audio file, or other media content, all qualify as multimedia creation and demonstrate skills with developing. Developing requires the highest level of cognition, creating, which necessarily involves synthesis of what the learner knows about the topic of their multimedia creation.
To design for publishing, ask learners to publish text, images, sounds, or a combination of these. The necessary oversight for the quality of published material requires attention from the learner to both the process of creation and the published outcome. Collaboration in small groups to publish content facilitates co-creation and negotiation, which are also higher order abilities requiring greater cognition from the learners. Publishing video, audio, images or diagrams, and text are examples of publishing activities possible with common web-based technologies. Mashing text and multimedia creations into innovative patterns and structures also demonstrates cognitive abilities at the level of creating in the revised cognitive taxonomy.
To design for programming, ask learners to create original applications, patterns, procedures, processes, or games. Learners might also demonstrate creating by revising an existing process or devising innovative solutions to existing problems with existing processes.
Refer to Churches (2009) for example rubrics and exemplars of podcasting and digital publishing.
References | Credits
This tutorial is an update by the author of the text of the tutorial she created in 2008:
There are other strategies, methods, and models for evaluating the influence of training and instructional experiences of any form beyond the Kirkpatrick Model, CIPP Model, and Success Case Method. However, an awareness of these three will provide you with a solid foundation for understanding the importance of evaluation of instructional programs, particularly workforce training programs, which are often the focus of an instructional designer’s role in an organization. These three models are discussed in brief in this post.
Kirkpatrick Evaluation Model
Many sources of information about the Kirkpatrick Four Levels Model can be found online, including the website overview about the model from the Kirkpatrick Partners (2015) website where The Kirkpatrick Model is explained.
The Kirkpatrick Model’s Four Levels are Interrelated and Interdependent (image source: KirkpatrickPartners.com)
Although the Kirkpatrick Model, developed by Donald Kirkpatrick in the 1950s, is one of the oldest formal training and instruction evaluation models and widely used even today, it is not the only model in existence. In fact, as you can learn about from the Kirkpatrick Partners (2015) website, there is a revised version of the standard four level model by Kirkpatrick called the New World Kirkpatrick Model, which may be of interest to you to explore.
For a review of alternative models not developed by Donald Kirkpatrick, start by considering the blog post about Alternatives to Kirkpatrick from Bozarth (2009), and consider the brief overview provided below of two of the available options, the CIPP Evaluation Model and the Success Case Method. Keep in mind that you may work for an organization that uses a combination of the Kirkpatrick Model and other models; that is quite typical, as each organization tends to develop a unique culture of evaluation to suit its needs.
CIPP Evaluation Model
The CIPP Evaluation Model was developed by Daniel Stufflebeam in the 1960s. CIPP is an acronym for Context, Inputs, Process, and Product. These four components of the CIPP Evaluation Model are designed to provide a holistic view of the success of instructional initiatives. While the Kirkpatrick Model is a reaction- and review-oriented model, the CIPP Model takes a more decision-oriented approach to evaluation. A summary can be found on Ivan Teh’s (2015) blog, in the post where the following image and a research-based exploration of the CIPP Evaluation Model are provided.
As Bozarth (2009) noted, the CIPP Model is more about evaluating what is being done during design and implementation than about evaluating what has been done (para. 4). Mazur (2013) offers another research-based summary of the CIPP Evaluation Model that is a good review for learning how the model can serve as a useful addition or complete alternative to the Kirkpatrick Model.
Success Case Method
Another alternative to or addition to the Kirkpatrick Model is the Success Case Method. For a comprehensive introduction to the Success Case Method, refer to the Brinkerhoff (2009) excerpt of Chapter 1, which is freely available online as a PDF.
Comparison of Kirkpatrick-Based Models and the Success Case Method (Gram, 2011).
For a comparison of the Kirkpatrick-style models and the Success Case Method, refer to the image here from Gram (2011) – click the image to view it at full size.
As explained in the Brinkerhoff Chapter 1, the Success Case Method provides a sort of rapid-evaluation method that can help to bring focus to a learning initiative while more comprehensive models, such as the CIPP and Kirkpatrick Models, are used for longitudinal analysis of the effectiveness of instruction.
Brinkerhoff (2009) also explains that the “Success Case Method combines the ancient craft of storytelling with the more current evaluation approaches of naturalistic inquiry and case study” (p. 17).
For this reason, the method appeals to many who prefer a deeper focus on qualitative data in an evaluation method (i.e., model) for approaching a determination of the return on investment (ROI) for a performance change initiative.
Remember that instruction is inherently focused on changing performance through increased, reinforced, or revised learning of knowledge and skills.
There are many models for instructional design (ID). It will be up to you as a motivated learner of the art and science of instructional design to seek out information about models. There is a tremendous amount of information available online to consider. This post presents places to begin exploring and poses a question for you to consider about the ADDIE Model and whether components are the essence of all ID models.
One option for learning about ID models is a Google Images search for “Instructional Design Models.” From there, you can click on any model’s image and select its “view page” option to see the webpage where the image appears and, in most instances, learn more about the model, or simply search the model’s name to learn more from a wider variety of websites. You could also begin by exploring some of the more popular models in instructional design detailed on the Instructional Design Models page from Culatta (2013) or the Instructional Design Models page from Ryder (2014).
Regardless of their nuances, all design models involve five essential tasks for the designer, which are reflected in one of the oldest instructional design models, the ADDIE Model (i.e., Analysis, Design, Development, Implementation, and Evaluation).
The image with this post of the ADDIE Model is from the CSUChuco.Edu website; it illustrates how the model is an iterative model wherein each aspect of the model occurs interdependently with the others.
Although the phases of the ADDIE Model proceed in a roughly linear sequence overall, they also occur simultaneously in a non-linear process (i.e., analysis, evaluation, and design naturally occur in some form during development and implementation).
This is one reason I like to call the ADDIE model the “Chi” of all design models.
Would you agree with that assertion? Why/Why not? Please comment on this post with your response!
Learning to write effective learning objectives is a fundamental skill for instructional designers because the alignment of learning objectives with instruction is necessary for ensuring the assessment of learning is focused on what learners are expected to be able to do after the instructional activity has occurred. Thus, learning objectives guide the creation of the activity and inform the assessment strategy for the instruction.
The Cognitive Taxonomy
One of the most popular strategies for writing learning objectives is to use the original or the revised Cognitive Taxonomy, which was initially developed in the 1950s by theorist Benjamin Bloom and several colleagues and later revised in the 1990s by Lorin Anderson and colleagues. Instructional designers frequently refer to this taxonomy as simply “Bloom’s Taxonomy” and, when they do, they are increasingly referring to the revised version. However, many designers still rely on the original taxonomy (see the image from the University of Georgia (n.d.) below for a comparison of the two versions).
Notice that the major difference between the original and revised versions of the cognitive taxonomy is the use of active language in the revised version: for example, level 1 (the beginning/bottom level) in the original refers to “Knowledge,” while level 1 in the revised version refers to “Remembering.” For both versions, there is an assumption that a learner who can perform at the higher, more advanced levels can also perform at the lower, more beginning levels.
In that sense, the taxonomy is a scaffold of levels of thinking performance that learners can achieve through instruction. Theoretically and practically, the goal of instructional design is to build learners’ ability to perform at the highest, most advanced levels of cognition through a series of sequenced instructional experiences that develop skills upward through the taxonomy.
The most important thing to recall about using the cognitive taxonomy for writing learning objectives is that the verb you choose to describe the performance the learner will be able to do after the instruction indicates a corresponding level of cognition. Though this might sound simple, it is in fact one of the most confusing things about the cognitive taxonomy for those learning to use it to write objectives: you do not typically want to use the label for the level of the taxonomy as your verb, especially at levels 1 and 2.
The image below, from ZaidLearn (2009), is based on the original Bloom’s Cognitive Taxonomy. It shows the levels of cognition surrounded by common verbs selected to describe performance at that level and those are surrounded by activity ideas for how learners might demonstrate their performance.
For example, you cannot really measure “Remembering” or “Understanding,” yet you can measure whether a student is able to recite, recall, list, describe, explain, select, identify, and so on. Therefore, always write your learning objective so that it includes a verb designating a performance that can actually be assessed.
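To make the verb-to-level relationship concrete, here is a minimal sketch in Python. The verb lists and the `level_of` function are purely illustrative assumptions of mine for demonstration, not an official or exhaustive set from the taxonomy:

```python
from typing import Optional

# Illustrative mapping of revised-taxonomy levels to sample measurable verbs.
# These verb lists are examples only, not an official set.
BLOOM_VERBS = {
    "remembering": {"recite", "recall", "list", "identify", "select"},
    "understanding": {"describe", "explain", "summarize", "paraphrase"},
    "applying": {"demonstrate", "solve", "use", "implement"},
    "analyzing": {"compare", "differentiate", "organize", "attribute"},
    "evaluating": {"justify", "critique", "defend", "judge"},
    "creating": {"design", "construct", "compose", "plan"},
}

def level_of(verb: str) -> Optional[str]:
    """Return the taxonomy level a verb suggests, or None if unmapped."""
    verb = verb.lower()
    for level, verbs in BLOOM_VERBS.items():
        if verb in verbs:
            return level
    return None

# A measurable verb maps to a level; a level label itself does not.
print(level_of("explain"))     # understanding
print(level_of("understand"))  # None – "Understanding" is a level label, not a measurable verb
```

Notice that looking up “understand” returns nothing: the level labels themselves are not observable performances, which is exactly why they make poor objective verbs.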
For assistance understanding how to write objectives using the cognitive taxonomy, you might wish to explore the Johnson (2008) Bloom’s Taxonomy: An Overview tutorial. The tutorial was created in the Shockwave Flash format so you will need to view it on a device that can render SWF (Flash) files.
For additional inspiration about how the cognitive taxonomy can inform designs and to get ideas for lessons you will create in our course and beyond, explore the Johnson (2008) Bloom’s Taxonomy: Designing Activities tutorial, which is also in the SWF format.
ABCD Objective Formatting and SMART Objectives Principles
And, finally, note that one common method for writing objectives is the ABCD Method. This method is very helpful to use because it forces the designer to think through all of the elements involved in the learner achieving the objective: Audience, Behavior, Condition, and Degree. The ABCD Method for writing objectives assumes that the designer has also considered whether the objectives are SMART (Specific, Measurable, Achievable, Realistic, Timely). You can find many resources about writing SMART objectives and using the ABCD Method online. Or, you can start learning about these now by viewing the Campbell (2014) video, Writing Learning Objectives: The ABCD Method, which is about 9 minutes in length.
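As one way to internalize the ABCD structure, here is a hypothetical sketch; the class and field names are my own illustration of the four elements, not an official schema from the ABCD Method:

```python
from dataclasses import dataclass

# Hypothetical illustration of the ABCD objective structure;
# the class and field names are illustrative, not an official schema.
@dataclass
class ABCDObjective:
    audience: str   # A: who will perform (e.g., "the learner")
    behavior: str   # B: an observable, measurable verb phrase
    condition: str  # C: the circumstances under which performance occurs
    degree: str     # D: the criterion for acceptable performance

    def render(self) -> str:
        """Assemble the four parts into a single objective statement."""
        return f"{self.condition}, {self.audience} will {self.behavior} {self.degree}."

objective = ABCDObjective(
    audience="the learner",
    behavior="identify each level of the revised cognitive taxonomy",
    condition="Given an unlabeled diagram of the taxonomy",
    degree="with 100% accuracy",
)
print(objective.render())
```

Filling in the four fields forces exactly the thinking the ABCD Method intends: if you cannot state a condition or a degree, the objective is probably not yet SMART.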
~ Lisa Johnson, Ph.D.
Campbell, A. (Andrew Campbell). (2014). Writing learning objectives: The ABCD method [Video file]. Retrieved from https://youtu.be/cCvBypIX9Do
Johnson, L. (2008). Bloom’s taxonomy: An overview [Flash tutorial]. Retrieved from http://goo.gl/1J11l
Johnson, L. (2008). Bloom’s taxonomy: Designing activities [Flash tutorial]. Retrieved from http://goo.gl/9qvh8Y
A learning theory is a theory about how learning occurs. A learning theory is different from an instructional design model, but the two are intertwined and often discussed together. In fact, many models for designing learning experiences reflect one or more theories about how humans learn.
A comprehensive listing of learning theories, with brief summary pages for each, is available from Culatta (2013) on the Learning Theories page of the InstructionalDesign.org website.
While no one can reasonably be expected to be an expert in every theory covered by that resource or the others provided here, all instructional designers are encouraged to save these resources, refer to them as needed when designing learning experiences, and maintain a general awareness of the broad theoretical categories.
A useful resource for conceptualizing categories of learning theory is from Millwood (2013): this interactive graphic provides a wealth of information about how various learning theories relate to one another. Although not everyone will agree with the relationships and descriptions Millwood provides, the graphic is without question valuable both for learning about learning theories and as a job aid for recalling them.
Additionally, conducting a Google Images search for “Learning Theories” will provide you with a wealth of visuals, ranging from diagrams to comparisons to infographics, all of which can inform your understanding of learning theories and their many varieties.
Finally, another extensive summary of learning theories can be found from Donald Clark’s blog “Plan B” – the table below provides hyperlinks to the menu of postings on the blog as a convenience. You are encouraged to visit the actual blog and use it as a resource in learning about instructional design and as a model of a well-designed education-related blog too.
Facilitation is as much an art as a science, online or in any learning environment, to be sure. Every educator who acts as a facilitator of learning will bring unique components of style that reflect their personality and experiences. Many times, without even knowing it, a facilitator will subscribe to a specific theory or combination of theories of learning.
To learn about the variety of learning theories that can inform facilitation, review the Culatta (2013) website on Learning Theories.
To expand your thinking about online facilitation, consider the information shared in the National College for Teaching and Leadership (n.d.) Advanced Facilitation course and, particularly, the information in the image below about online learning behaviors, which is shared in Module 6 of the course, for inspiration about online facilitation in discussion-based learning environments.
Models of Online Learning Behaviors Source: National College for Teaching and Leadership (n.d.) Advanced Facilitation Course.
Considering the graphic above and your experiences with discussion facilitation in your online courses, do you find yourself experiencing all of these zones in your classes? How have the instructional design and technology choices in your classes influenced how much you have experienced each zone? Please provide your thoughts in a comment.
Copyright is a very complicated area of the law because, especially with the growth of online learning and increasingly digital instructional materials, the policies are constantly being (re)interpreted. As an increasing amount of intellectual property becomes available via the web, and any of it can be copied in a screencast or screenshot, it is difficult to protect one’s intellectual property.
Screenshots are commonly used to create images for instructional materials. It is always good manners, and smart, to cite sources for images, even for a screenshot. One thing to be aware of is that non-commercial educational “fair use” allows some leeway in educational designs that is not possible in commercial designs. Depending on where you design instruction, the rules differ. For-profit universities currently have different expectations (i.e., restrictions and regulations on re-use of content) than public universities, for example.
As mentioned by Culatta (2013), adherence to copyright is essential as a skill for any instructional designer and anyone else designing information or instruction for public consumption. Culatta’s webpage on Copyright includes an extensive list of links for many other resources to explore about copyright and instructional design.
Tobin’s (2014) infographic on copyright effectively simplifies the copyright process as it applies to most education organizations. If you need a visual representation of the information from the Culatta (2013) source and the extensive ideas presented in the Wright (2014) presentation on copyright basics, the resource from Tobin will be useful.
Note that the Wright (2014) presentation is not the ‘last word’ on must-know knowledge for instructional designers; every corporation/organization will have a legal department that dictates the use of materials. Still, this presentation offers a good overview of some basic copyright considerations all instructional designers should be aware of.
This post is a reminder to students in Dr. Johnson’s online classroom about the reasons that the “Due” dates need not be the “Do” dates for participating in our online class discussions. In essence the message of this post is:
Please do NOT treat the “Due Dates” for participation in our course discussions as the “Do Date”…Set your class study and participation routine so that you can post early and often during the week. Avoid posting only on the close date/due date of our discussions!
The reason for this advice to you as a learner is that experience has shown me that student learning increases in an online class when students are better prepared and more often engaged with classmates and with me. Yet this advice is also a lesson in manners… let’s consider an example to put things in perspective:
Imagine you were taking your course in a place-based modality (i.e., on a physical campus), and let’s say you had classes where discussions took place two or three days a week for an hour per class. Next, imagine that for that campus course you made it a habit of arriving during the last 5 minutes of each class. When you arrived, you contributed a lot of information and questions to consider. Given there are only five minutes of class remaining, you would not expect much interaction from me or your class peers, would you? In fact, your peers might view you as not really caring about them as teammates in the class environment, and you could be seen as rude! …The same is true, in a sense, for an online class discussion that has been going on for an entire week!
Waiting until the final day, and the final hours of that day, to participate in an online discussion that has been going on for a week is similar to walking into a land-based class in its last minutes and demanding, or even just expecting, a quality level of interaction or learning… Your professor (that’s me) and your classmates have already been engaged and are wrapping up the conversation when you arrive in the last day and hours of an online discussion; the quality learning opportunities and the deep instruction and engagement you are paying for in the class experience are just not going to happen!
The final day of an online discussion, like the final minutes of a face-to-face class, is about reflection, debriefing, and digesting what has been said, preparing yourself for what is to come, or asking for clarification on things from the class/discussion that are unclear to you.
Please keep this in mind when participating in our class – we have a courseroom, but it is your interactions and mine throughout the weeks that will make it a “class”! Get what you’re paying for – get engaged early and often and remember:
YOUR “DUE” DATES ARE NOT INTENDED TO BE YOUR “DO” DATES!
Occasionally students in my EDU652 Instructional Design & Delivery course express hesitation about obtaining Twitter and Facebook accounts, as requested in the dilemma-structured tool-practice discussions in the course assessment design. This post is a response for the community of ID professionals reading this blog and for the students of EDU652 to consider.
As a student in the EDU652 course, you are an Instructional Design (ID) professional in training to some degree, and I certainly understand the hesitations expressed. Personally, I am engaged in social media (SoMe) activity on Twitter and Facebook, mostly out of my desire to be competent with these tools so I can best develop instruction for you related to their uses as learning technologies.
It’s basically participatory observation to inform my reflective learning with regard to these tools, to sharpen the authenticity of my practice as your educator-for-hire, and to inform research I may eventually do on the topic of social learning and topics related thereto!
How do I use Facebook and Twitter as an ID Professional?
In summary, know that I may “follow” you on Twitter (it depends on the value for my learning I perceive in the content you (re)tweet) and that I will not “friend” you on Facebook unless we are colleagues or ‘real-life’ friends. I use these accounts differently: Twitter is my most public account, where I know everything I post is “public,” while Facebook is a more intimate tool reserved for real-world friends and colleagues – without the expectation that what I say will be repeated, but with my full knowledge that it might be! I follow people on Twitter whom I have never met in person and may never meet, yet I will not friend someone on Facebook unless we have a real-world connection or an established professional virtual connection, such as through correspondence via email. These are “my” rules for using these SoMe technologies for learning and sharing as an ID professional; I firmly believe everyone is empowered to set their own rules for SoMe participation, and mine reflect my personal philosophy/framework.
That being said, please be reassured as a student that if you DO NOT want to practice with and explore Twitter and Facebook authentically as part of your coursework, you can still participate by exploring these tools academically in the abstract by analyzing and synthesizing research-based and popular literature discussing their implementation, affordances, and effectiveness as learning technologies.
For some context, read the following paraphrased excerpt from an email I once received from an EDU652 student…
Thanks in advance for your time reading this, Professor Johnson; my question pertains to getting a Twitter and Facebook account. I don’t have any social media accounts. This is on purpose, and the reason is actually twofold. One reason is personal and private; the other is purely my personal belief system – signing up would put my own personal integrity at risk. I don’t believe social media is good for society. Therefore, I have a dilemma with adequately completing the assessments for Twitter and Facebook. I feel very strongly that we should not be forced to sign up for social media accounts. I have a very strong belief system against being forced to do this! I feel I can research the pros and cons without having an account. Also, since there are varied communication methods out there to keep in constant communication with others, I don’t feel I will ever be forced to use only Twitter or Facebook in the workplace.
Now, please read my response to the student:
My thoughts are that you are entitled to avoid immersion in the social media world and to avoid signing up for and participating in these third-party services we learn about in our class. Just as with the blogging and screencasting assessments, your actual use of these third-party technologies is your choice. My “alternative” assessment plan for you, and for anyone who does not want to use Facebook or Twitter, is this: simply research the tool using publicly available sources (i.e., scholarly literature, popular writing about the tools on blogs, websites, and discussion forums online, etc.). In assessments related to these tools, I expect you to address the dilemma presented about their use and, as always, support your recommendations and conclusions with critical thinking and with support from scholarly and popular sources.
However, know that regardless of your personal opinion on social media, it is a fact that as an instructional designer – in many real-world contexts – you WILL be expected to participate in SoMe. Therefore, you might try to broaden your perspective about SoMe tools and their value for teaching and learning with technology.
To be sure you understand: I am not trying to indoctrinate you into a belief in technology-enhanced learning with Web 2.0/social media as it relates to the embodiment of social learning theory in EDU652 – I am providing, by design, instructional experiences that expose you to practical, immersive-experiential activities involving common SoMe technologies that many instructional designers are expected to have experience with in the real world of ID work. That said, please focus your replies in our discussions about these tools on unbiased analyses of the available research.
As an ID professional, I do participate in SoMe and try to do so while setting a positive example. I am very aware of my digital presence on these sites – Twitter, Facebook, and even Google+. Additionally, since I teach about these technologies, I feel a responsibility to walk the talk and actually use the tools, so I am engaged in SoMe ultimately for professional purposes! I have learned through my use of SoMe that “success” with these tools varies for professionals and depends fully on one’s attitude about the technology’s usefulness and one’s approach and professionalism when engaging in SoMe.
Example: I connected with an esteemed member of the ID community via Twitter after having participated in the backchannel for a conference we both attended. It is not likely this esteemed, busy academic professional in ID would ever take the time to converse with me by telephone or email in a cold-call situation, yet via our initial connection in social media I have been able to learn WITH and FROM this individual – after following each other on Twitter, we conversed by email and have now become colleagues/“friends” on Facebook… and the learning continues, as this person is now a member of my PLC and I of theirs.
SoMe technology has the affordance of creating connections among professionals that may otherwise not occur or, at least, would not as easily or rapidly occur via traditional methods for professional learning and social-networking.
That being said, remember: SoMe CAN expand and enhance your Professional Learning Community (PLC) and, in turn, your lifelong education… it can, if you use it proactively and, yes, professionally… responsibly… in a smart way (i.e., have a positive digital presence and you will get positive results).
In summary, this post was about the value of PLC expansion and maintenance via SoMe – Yes, reader, you can learn about Twitter and Facebook discussions entirely from a literature / abstract / non-practitioner point of view. Yet, at the same time, remember, there can be value in the services; like ALL things in life, I’d argue, it’s got potential to be a positive in your life… or a negative; it’s what you make of it.
The image in this post has a powerful message, I think… it relates to the point of this post: be positive and you will get positive results in life! The source is my G+ account; I found the image via Facebook one day and saved it; the author and original source are unknown/anonymous. If you need a transcript of the image’s text, contact me and I will be happy to provide one.
[The following are original comments from me initially published in response to an internal social network post at Ashford University by a colleague discussing changes and innovations in higher education as an industry.]
“I agree that things appear dismal from some perspectives about the future of the higher education system as we know it, but just as so many other industries have recalibrated, so too shall we in what Harvard University’s Dr. Elmore called the “Learning Sector” (in the “Leaders of Learning” Harvard MOOC in 2014). The Learning Sector as we know it is evolving due to numerous – well, in scientific terms – selective pressures; it is a process of natural change and selection in which the most adaptive institutions and organizations will survive in the altered ecosystem of learning we’ve created as a society and world…”
“…educated consumers are perhaps the most powerful and empowered “selective pressure” forcing the evolutionary path and adaptations in our industry within the Learning Sector, indeed. Just as automotive consumers (and all commercial goods consumers, really) are now better informed due to widening access to information about goods and services via the internet, which has forced adaptations in the way common goods are bought, sold, and distributed, so too does the educated consumer of the uncommon “good” of a higher education (uncommon in a global sense) force adaptations in our industry….”
Do you agree with my assertions? Disagree? Do you have comments, questions?
It seems to me there is a widespread ineffective practice at many institutions of higher education in the design of courses, programs, and learning experiences generally with regard to the use of the cognitive taxonomy as a guide for measurable outcomes.
This ineffective practice stems from a misconception that the lower levels of the cognitive taxonomy are best suited to introductory courses when, in fact, effective design does not imply that at all.
Introductory students can “Create” – creation can be scaffolded within a program and within a course; the lower levels of Bloom’s (revised or original) cognitive taxonomy are intended as scaffolds, not “ends,” in instructional designs.
Do you agree or disagree with this assertion? Why/Why not? I look forward to hearing your thoughts and examples of how we can most effectively apply the cognitive taxonomy (i.e., Bloom’s) to the design of measurable learning outcomes and associated assessments.
Johnson, L. (2008). Bloom’s taxonomy: An overview [Flash tutorial]. Created for the Colorado Community College System. Retrieved from http://goo.gl/1J11l
Johnson, L. (2008). Bloom’s taxonomy: Designing activities [Flash tutorial]. Created for the Colorado Community College System. Retrieved from http://goo.gl/9qvh8Y
This post includes a video presentation of highlights from my experience at the 2014 International Convention by AECT. Thanks to all who presented; thanks to those who connected with me during the conference. I enjoyed the backchannel participation on Twitter – check it out at #AECT14 and @Kuriousmind — Please feel free, faculty and students, to use this as a learning object for critique or expansion of your instructional experiences. Please cite in APA 6th edition as follows:
[This content was originally written as part of a graduate school course paper in 2010 and was repurposed for use in an internal publication for Ashford University; the Online Discussions Design Guide, in 2014. It has been adapted here for a general audience.]
A cursory review of publications about online learning will provide you with plenty of information about the variety of methods for designing, implementing, and evaluating online discussions and reasons for the use of discussions as an assessment strategy. This is not intended to be an exhaustive review of why universities require online discussions. Instead, it is intended to offer a research-based introduction to primary reasons why the requirement exists by drawing on some key literature from the field of online learning and asynchronous online discussions.
Discussions as a Critical Component of Online Learning
Discussion is known to be a critical component of the process of learning regardless of whether learning is experienced in a formal or informal context or within a place-based, blended, or online course format (Al-Shalchi, 2009; Andresen, 2009). Rourke, Anderson, Garrison, and Archer (1999), authors of the community of inquiry model, suggested that the online discussion is a dominant choice for course interactions in online learning when a course is designed to evoke “higher-order thinking” (p. 50). Furthermore, asynchronous online discussions have been known to be an effective assessment strategy in courses since the early period of online learning because they can be designed to promote “high levels of responsive, intelligent interaction between and among faculty and students” while also, due to their asynchronous format, “providing high levels of freedom of time and place to engage in this interactivity” (Rourke, Anderson, Garrison, & Archer, 1999, p. 50).
Khine, Yeap, and Lok (2003) suggested that design considerations for online asynchronous discussions include the level of interaction and the resulting presences afforded, because interactions leading to higher levels of cognitive, social, and transactional presence are correlated with increased learning and satisfaction in online courses. Havard, Du, and Olinzock (2005) supported those suggestions by noting that “many distance-delivered courses experience high attrition rates that result from factors such as students feeling isolated, unmotivated, overwhelmed, or unchallenged” (p. 134), but this can be mitigated by designs that encourage high levels of interaction and presence through online asynchronous discussions.
The literature is firm about the importance of transactional, social, and cognitive presence in online learning as it relates to the online asynchronous discussion. These types of presence are discussed in brief below to provide an introduction to the concepts as a basis for why Ashford University requires online asynchronous discussions in the online classroom.
Transactional presence is defined as the “degree to which a distance student perceives the availability of and connectedness with people in his/her educational setting” (Shin, 2003, p. 71). The definition of transactional presence offered by Shin (2003) embodies social presence insofar as “availability” represents access to others through responsive “interpersonal relationships” and “connectedness” is indicated by “belief or feeling that a reciprocal relationship exists between two or more parties, involving an individual’s subjective judgment upon the extent of the engagement in relationships with others” (p. 71). Therefore, the online asynchronous discussion provides an instructional opportunity to reduce transactional distance among course participants by increasing transactional presence. Relatedly, increasing levels of social presence can enhance the reciprocal relationships possible in an online course and thereby enhance the learner’s feeling of connectedness in an online course.
Social presence is characterized by interactions that exemplify interpersonal and affective, or emotional, connections among learning participants (Garrison, 2003; Murphy, 2004; Rourke et al., 1999). Specific indicators of social presence include interactions such as "self-introduction, expression of feeling, greeting, closure, jokes, the use of symbolic icons and compliments to others" (Lobry de Bruyn, 2004, p. 77). Interaction among online course participants usually initiates some level of collaboration whereby "participants show awareness of each other’s presence and begin to relate as a group" (Murphy, 2004, p. 422). Furthermore, collaboration is indicative of social presence and the formation of a community of inquiry (Garrison, 2000; Murphy, 2004).
Such collaborative communities "not only share perspectives, but also challenge and refine those perspectives" (Murphy, 2004, p. 423). The significance for designing an effective learning environment around the online asynchronous discussion is that the challenging and refining of perspectives leads to the construction of shared knowledge and meanings (Murphy, 2004; Rourke et al., 1999). Therefore, including online asynchronous discussions is valuable because they provide a space for course participants to develop and enhance social presence in the online classroom. Nevertheless, transactional presence and social presence cannot stand alone; these forms of presence must occur in conjunction with strong levels of cognitive presence.
Cognitive presence is characterized by the "process of both reflection and discourse in the initiation, construction and confirmation of meaningful learning outcomes" (Garrison, 2003, p. 4). That is, the design of online asynchronous discussions must aim to promote reflection and collaboration, since these are the fundamental ways in which cognitive presence is formed in online courses. Recall that collaboration, when present, occurs along a continuum of six stages: (a) "social presence; (b) articulating individual perspectives; (c) accommodating or reflecting the perspectives of others; (d) co-constructing shared perspectives and meanings; (e) building shared goals and purposes; and (f) producing shared artifacts" (Murphy, 2004, p. 423). Earlier processes, such as establishing social presence and articulating individual perspectives, are necessary before reaching the highest levels of the continuum, but their occurrence does not guarantee progression, since "simple interaction is a necessary prerequisite to full collaboration, but simple interaction may occur without ever moving forward to higher levels of collaboration" (Murphy, 2004, p. 423).
In summary, Murphy’s (2004) continuum model of collaboration explained above supports the use of online asynchronous discussions to create a learning experience that goes beyond superficial interaction, increasing social and transactional presence and leading to the community development and cognitive presence known to promote deeper learning of course topics. Though we may not readily think of online asynchronous discussions as a form of collaboration, they are! To achieve the goal of increased transactional, social, and cognitive presence in our online classrooms, use a variety of discussion formats.
Andresen, M. A. (2009). Asynchronous discussion forums: success factors, outcomes, assessments, and limitations. Educational Technology & Society, 12(1), 249–257. Retrieved from http://www.ifets.info/journals/12_1/19.pdf
Garrison, D. R. (2003). Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self-direction and metacognition. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice and direction (Vol. 4). Needham, MA: The Sloan Consortium.
Havard, B., Du, J., & Olinzock, A. (2005). The knowledge, methods, and cognition process in instructor-led online discussion. The Quarterly Review of Distance Education, 6(2), 125-135.
Khine, M. S., Yeap, L. L., & Lok, A. T. C. (2003). The quality of message ideas, thinking and interaction in an asynchronous CMC environment. Educational Media International, 40(1/2), 115-125.
Lobry de Bruyn, L. (2004). Monitoring online communication: Can the development of convergence and social presence indicate an interactive learning environment? Distance Education, 25(1), 67-81.
Murphy, E. (2004). Recognising and promoting collaboration in an online asynchronous discussion. British Journal of Educational Technology, 35(4), 421-431.
Shin, N. (2003). Transactional presence as a critical predictor of success in distance learning. Distance Education, 24(1), 69-86.