Ironic Influences: On the Proliferation of Personalized Learning Initiatives

I was discussing with a learner today the impact of Prensky’s concepts about “Digital Natives & Digital Immigrants” and where the generational assumptions about preferences and needs in learning design stand today. The response I gave was along the following lines. And, though I realize this is itself a bit superficial as a discussion of the topic, I am curious to know whether you, dear reader, also see the irony.

The 2001 Prensky article is often cited and is where his hypothesis about digital technology’s influence on generational preferences and needs in learning was disseminated most widely. It is useful to recognize that Prensky’s ideas were formulated and shared widely at a time when the Internet and World Wide Web were relatively new and internet-enabled devices were proliferating, as was their use in homes and schools. His ideas are certainly plausible at first read and are thus not easily disregarded, yet his analysis was arguably superficial, and its influence, arguably also, has been both harmful and helpful to learners, educators, and instructional designers alike.

It is also useful to recognize that having postulations like Prensky’s about the trends and influences of technology voiced, and thoughtfully considering how they may have influenced or are influencing learning experience and design, is helpful for developing theory, method, and even policy about learning. Nevertheless, works like Prensky’s – ones that seem to make good “common” sense on the surface and have catchy names and phrases attached to them for easy recall – are too often taken as indisputable “fact” by individuals and groups inclined to avoid critically thinking them through against their experiential or other forms of evidence (we see this a lot with individuals who are uneducated or poorly educated) and by individuals and groups prone to pick and choose ideas that neatly support their agendas (we see this a lot among politicians, policymakers, and journalists and the “media”).

Invariably, though, once the dust settles from the initial impact of such big ideas, like Prensky’s Digital Natives and Digital Immigrants, research and more cautious observers and critical thinkers alike will offer reflections on the idea’s accuracy and impact, which several have done. As additional reading, I refer you to this reprint of the article by Helsper and Eynon (2010) from the esteemed British Educational Research Journal. Though written from a British perspective, these authors’ critical argument about the harm generational assumptions can have on learners and educators is significant to consider as an instructional designer.

[Learner Scholarship Tip: You might also search the Library and the open web for items from the article’s References list to see where the authors’ sources come from and to extend your learning. Using an article’s “References” to locate and read original/other sources on a topic is a common research technique effective scholars and academic researchers use. Why? Simply because it is a way for YOU to determine (with your own critical mind!) where authors got their ideas and to confirm whether they’ve interpreted sources correctly and cited them meaningfully (per your original analysis & evaluation of the sources)!]

I share all of this to recommend that you expand your reading and, thus, your thinking to draw your own conclusions about Prensky’s work and its influence on instructional design. For me, reviewing the available evidence, I think Prensky’s ideas were useful as frames of reference and conversation – as conversation starters and lenses for analyses of events of the current era – yet ultimately they did more harm than good in the short term because they led to an ageist bias (that “young” people are adept and can use technology effectively to learn and “old” people are inept and unlikely to learn well with digital technologies), and that bias has unnecessarily furthered the segregation of so-called and perceived groups of “young” and “old” learners and, yes, educators as well.

Some examples that come readily to my mind are:

  • The child or adolescent learner who comes from an impoverished home environment into a digital tech-intensive school environment and who may not have friends or other means of access for exposure to digital devices. This learner most likely has little to no exposure to smartphone, tablet, or even desktop/laptop computer use outside the school. Nevertheless, the “digital native” assumption means the learner is given a curriculum that requires a preference, familiarity, and even fluency with using digital means for learning. The learner falls behind classmates in performance due to the “digital divide” they’ve experienced in their home environment relative to more affluent classmates and is branded as an under-performer at worst and an under-achiever at best. Still, the learner is disadvantaged all because of a “meme” (or metaphor, concept, label, profile…whatever you want to call the assumption of any young person as a “Digital Native”).
  • The middle-aged or older adult learner who is a moderately, advanced, or even highly tech-inclined individual but who is forced to complete a low-tech style curriculum design (in college classrooms, workplace training programs, etc.) under the erroneous assumption that they, like their adult peers in the class, are not capable of using high-tech style learning tools, and who becomes quickly bored with the learning process and drops out of the class, or even an entire degree, certificate, or training program.
  • The skilled, seasoned, and caring educator who is subtly forced out of practice because of an assumption by policy or persons in their workplace that they are unable or will be slow to learn to effectively use technology to teach younger and presumably more technologically and digitally savvy learner populations. Perhaps the educator could have been offered an assessment of motivation or will, or even coaching in how to use a more peer-learning based model and act as a facilitator (aka that other popular meme: “guide on the side”), rather than ousted on the assumption that traditional lecture and “teacher-to-student” interaction (aka the other meme: “sage on the stage”) is all the educator would be willing to use as an instructional method. Worse yet, the educator may even come to believe that they are unable to adapt to provide what their students need and become depressed and unhappy in their profession based on widespread assumptions about age-capabilities in the popular press and policy decisions. That is, a feeling of satisfaction and career success could be undone for the educator all due to a widespread misapplication of a hypothesis about generational styles.

Can you think of any examples that illustrate possible effects (positive or negative) that applying Prensky’s ideas without critical analysis of the individual learner’s needs could have? Have you personally experienced any positive or negative effects from his ideas in your learning or workplace environments?

I’ll close here by noting that some of the “backlash” I’ve observed as linked to the widely and often misinformed application of Prensky’s ideas has been a move toward individualization, or personalization, of learning designs. Trends such as competency-based learning, adaptive learning technologies, prior learning and skills assessments, and the like are proliferating in 2016, in part, I’d argue, to realize the ideal that each and every learner (regardless of age) can access educational opportunities and experience learning as they prefer and need to for optimal learning (i.e., having an ability to recall, apply, and transfer to new contexts their knowledge and skills).
That is: The disruption to the monolithic “curriculum” that makes broad assumptions about what learners of certain ages (and genders!) need or prefer and are capable of is arguably revolutionary. And, in some sense, Prensky’s folly (or, rather, the folly of those who failed to think critically about and apply his ideas cautiously) has created (arguably) positive change that, perhaps ironically, is facilitated by “digital” technology!

References

Helsper, E., & Eynon, R. (2010). Digital natives: Where is the evidence? British Educational Research Journal, 36(3), 503-520. Retrieved from http://cyber.law.harvard.edu/communia2010/sites/communia2010/images/Helsper_Enyon_Digital_Natives.pdf

Prensky, M. (2001). Digital natives, digital immigrants. On The Horizon, 9(5), 1. Retrieved from http://www.marcprensky.com/writing/Prensky – Digital Natives, Digital Immigrants – Part1.pdf

 

Developing Cognitive Skills: Infographics for CAVES!

The video embedded below was prepared as a Discovery Session opportunity for the Ashford University Teaching and Learning Conference in November 2015. You are encouraged to post a comment here or contact me individually to discuss infographics and to share your infographics.

View on YouTube: https://youtu.be/5efIblIChmM


Example citations for this session are:

Johnson, L. (2015). Developing cognitive skills: Infographics for CAVES! [Video file]. Retrieved from https://youtu.be/5efIblIChmM

Johnson, L. (2015). Developing cognitive skills: Infographics for CAVES! [Weblog post]. Retrieved from https://reflectivelearning.net/2015/11/02/infographics/


Session Outcomes

1. Distinguish infographics from other graphic formats

2. Recognize the characteristics of an effective infographic

3. Recognize instructional strategies for using infographics

4. Locate existing infographics for use in instructional designs

5. Recall technologies for creating effective infographics

6. Plan use of familiar technologies to create infographics


 

Criteria for Evaluating Infographics

These criteria are meant to be a starting set of considerations for anyone creating a rubric or other evaluation tool for assessing infographics you create or learners create in coursework.

  • Has a (main) point
  • Is data-driven
  • Includes references
  • Includes high-impact visuals
  • Is designed with high-contrast colors
  • Utilizes a consistent color scheme
  • Is accessible (e.g., includes text describing visuals)

Remember, when creating infographics, you and your learners are employing and sharpening higher-order cognitive skills – remember these verbs as you write outcomes and objectives for infographics – CAVES:

  • Creating
  • Aggregating
  • Visualizing
  • Evaluating
  • Synthesizing

Session Resources

Below are several of the resources shared in the video. If you know of other resources about infographics you would like to share, please post in a comment to this post!

Books About Infographics

Krum, R. (2013). Cool infographics: Effective communication with data visualization. Wiley. ISBN-13: 978-1118582305.

Meyer, E. K. (1997). Designing infographics. Hayden Books. ISBN-13: 978-1568303390.

Beegel, J. (2014). Infographics for dummies. For Dummies. ISBN-13: 978-1118792384.

Websites About Infographics

Ross,  A. (2009, June 7). Infographic designs: Overview, examples, and best practices. Retrieved from http://www.instantshift.com/2009/06/07/infographic-designs-overview-examples-and-best-practices/

Schrock, K. (2010-2014). Infographics as creative assessment. Retrieved from http://www.schrockguide.net/infographics-as-an-assessment.html

Resources for Finding Existing Infographics

Google Images: https://images.google.com/

Infographic-A-Day: http://igad.onlearning.us/

Web-Based Infographic Creation Tools*

http://www.easel.ly/

https://infogr.am/

http://www.visme.co/

*Remember, though, for non-technology-intensive courses, or to avoid issues with requiring 3rd party / web-based tools as part of your instructional designs, consider using familiar and common technologies such as Microsoft PowerPoint or Word, or possibly Google Slides and Docs.


 

Open-Access Journals for Learning, Teaching, and Instructional Design

I recently compiled a list of open-access journals and thought I would share that list here. Please feel free to add additional journal recommendations and comments in reply to this post or point to other listings of journals related to learning, teaching, and instructional design practice and research!

The list I provided here is not intended to be exhaustive of all possible journals for reading and publishing opportunities in these areas. This was merely my humble attempt one afternoon to compile a list of journals and I hope you find it useful!

(IJITDL) International Journal of Instructional Technology and Distance Learning

(IJTLHE) International Journal of Teaching and Learning in Higher Education

(IRRODL) International Review of Research in Open and Distance Learning

(JoAID) Journal of Applied Instructional Design

(JIOL) Journal of Interactive Online Learning

(JOLT) Journal of Online Learning and Teaching

(JoSoTL) Journal of the Scholarship of Teaching and Learning

 

Happy Researching!

~ Lisa Johnson, Ph.D.

Designing Activities Using the Revised Cognitive Taxonomy

Introduction

This tutorial describes activities at each level of the revised cognitive taxonomy. The focus is on use of innovative tools and processes with digital technology. Projects, essay exams, reports, and other traditional activities are not the focus of this tutorial. Those activities are appropriate in certain learning contexts, and learning experience designers will need to weigh the learning benefit of including a technology-centered approach in an activity. Please see the References | Credits section of this tutorial for additional information on its origination, references, and preferred citation when giving attribution to the contents of the tutorial.

Click Here to Access the Original SWF Tutorial Version (2008, Lisa Johnson, PhD)


Remembering

Real-world activities demonstrating Remembering include recalling information in meaningful ways, such as a learner being able to recite a policy, quote facts and figures, such as prices, or relate safety or other procedural rules from memory (Clark, 2015).

Bookmarking

To design for bookmarking, ask learners to select online content on a specific topic or series of topics and organize the webpages and articles online about the topic(s) on a device, such as a laptop, tablet, or smartphone. You can assess performance on this style of activity with the learners’ submission of an image (e.g., screen capture) of the bookmarks on their device to reveal the organization of the items in a format that demonstrates accurate recall and identification of key themes in a topic or series of topics. While selecting items for bookmarking, learners may also demonstrate cognition representative of the understanding, analysis, and evaluation levels of the taxonomy.

Social Bookmarking

To design for social bookmarking, ask learners to use a web-based social bookmarking technology, such as Diigo, LiveBinders, Scoop.it, Pinterest, or Delicious. The social bookmarking activity might be completed individually or as part of a small or large group collaboration. Organizing and tagging bookmarks with specific keywords demonstrates the ability to identify, recall, and name key aspects of a topic. Assess this style of activity with the learners’ submission of a link to the location of their bookmarks. While selecting items for social bookmarking, learners may also demonstrate cognition representative of the understanding, analysis, and evaluation levels of the taxonomy.

Labeling

To design for labeling, ask learners to use a blank map or taxonomy you have created with a web-based mind mapping, concept mapping, or taxonomy mapping technology, such as FreeMind or Coggle. The labeling activity might be completed individually or as part of a small or large group collaboration. Ask learners to label an empty concept map, mind map, or taxonomy to add key terms or concepts and demonstrate recall and recognition of order or alignment. Alternatively, learners can be asked to label an image or process diagram, demonstrating the ability to identify, locate, and name specific items through recall.

Quizzing

To design for quizzing, ask learners to list, match, label, identify, or otherwise recall information to answer a series of questions about a concept. Though quiz or test questions in the true/false, multiple-choice, fill-in-the-blank, and matching formats can be written for higher order cognitive skills, these question forms are most often associated with remembering and relating recalled information at the lowest level of cognition in the revised taxonomy, which is remembering. Alternatively, asking learners to create questions and develop answer keys for quizzes completed by peers is an effective strategy to engage learner recall of knowledge. Asking learners to write questions may also extend the quizzing activity so that students are demonstrating a higher level of cognitive ability.

Listing

In a listing activity, learners demonstrate the same skill as with writing a list, but it is performed in a digital environment using ordered lists, with numbers, or unordered lists, with bullets. To design for listing, ask learners to use a word processing program, such as Google Docs or MS Word, or even the text editor in a discussion forum within an online course, to produce lists. An alternative is to ask learners to create lists on slides using presentation technology, such as MS PowerPoint or Google Slides. Creating an ordered list by number or date demonstrates the skill of recalling and sequencing events in accurate order. Creating an unordered list with bullet points demonstrates the skill of recalling and compiling information. Listing is also an effective activity for brainstorming prior knowledge on a topic before a higher order cognitive activity.

Refer to Churches (2009) for example bookmarking rubrics and exemplars.


 

Understanding

Authentic activities demonstrating Understanding include a learner being able to translate an equation, explain steps for performing a complex task, or interpret issues and instructions with original phrasing (Clark, 2015).

Advanced Searching

To design for advanced searching, ask learners to construct Boolean (e.g., AND, NOT, OR) search strings to demonstrate an understanding of a topic’s key components. The ability to modify a search by phrasing, inferring, and interpreting key components in a search topic demonstrates understanding. As an extension of this activity, providing search strings beyond single words and evaluating the results of the search incorporate higher order cognitive abilities.
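To make the progression concrete, here is a brief sketch of what such a sequence of search strings might look like; the topic and terms are purely illustrative, and exact operator syntax varies by search engine or library database:

```
"digital natives"                                          (single quoted phrase)
"digital natives" AND evidence                             (narrowed with AND)
("digital natives" OR "digital immigrants") AND evidence NOT marketing
```

Learners might submit their strings along with a short note on how each added operator changed the result set, which connects this activity to the evaluating level of the taxonomy.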

Journaling

To design for journaling, ask learners to use an individually managed or group blog, wiki or other online writing/journaling tool, such as Google Docs, to explain, compare, or summarize concepts. The journal can be part of a larger activity involving collaboration and discussion to scaffold development toward higher order cognitive abilities.

Categorizing

Otherwise known as tagging, to design for categorizing, ask learners to organize and classify a group of items, such as documents, images, or webpages, using folders or social bookmarking technology. A group of tags can be provided, or learners may be asked to create original tags for their categorizations individually or in a group setting. Learners would then apply the tags to organize the content and demonstrate understanding of the key themes of the topic. Asking learners to justify use of a tag or process of categorization transcends understanding and will demonstrate higher order cognitive abilities, including analyzing and evaluating.

Annotating

Otherwise known as commenting, to design for annotating, ask learners to use a web-based annotation tool to add notes as comments to PDFs or other document files or images to demonstrate they understand content beyond a recognition or recall level. Giving an image of a process or taxonomy and having learners annotate the content is another example of commenting or annotating for demonstrating the understanding level of cognitive ability in the revised taxonomy.

Subscribing

Churches (2009) noted that, “the act of subscription by itself does not show or develop understanding, but often the process of reading and revisiting the subscribed feeds leads to greater understanding” (p. 11). Nevertheless, selecting a subscription, usually using an RSS-feed technology, and submitting the subscription along with an explanation or summary that interprets the relationship of the subscription (i.e., feed) to a concept, process, or topic is a way for learners to demonstrate understanding abilities. Alternatively, by selecting and justifying a series of subscriptions/feeds on a topic, learners are producing a resource demonstrating thinking at the level of creating, which is the highest level of the revised cognitive taxonomy.

Refer to Churches (2009) for examples of searching, journaling, and wiki editing rubrics and exemplars.


 

Applying

Authentic activities demonstrating Applying include using concepts in novel situations and may be demonstrated by following a problem-solving method for a variety of issues or by applying statistics to determine survey validity (Clark, 2015).

Operating

To design for operating, ask learners to demonstrate operating or manipulating “hardware and applications to obtain a basic goal or objective” (Churches, 2009, p. 9). Furthermore, to design for operating, learners may also be asked to accurately use an instrument in a lab experiment, making a video or audio recording while completing the steps. Use of screen capture technology is also useful for asking learners to demonstrate accurate techniques with library database searches and file or other content creation, which demonstrates application of principles and techniques.

Sharing

Typically involving the process of uploading as well, to design for sharing, ask learners to share images, video, audio, text content, or mixtures of these in a course or using a web-based service and network community, such as Facebook, Twitter, or LinkedIn. Sharing according to a set of created or prescribed standards will demonstrate application, whereby students are applying a set of principles and methods. Collaborating with peers in the process of selecting, organizing, and delivering uploaded content or content curated from around the web can initiate higher order cognitive abilities, such as analyzing and evaluating.

Editing

To design for editing, ask learners to revise existing content of their own creation or of others in a document, repository, or other format, such as a wiki, a Google Doc, or even a Google Slides presentation. The act of editing processes, procedures, and content according to a set of guidelines and principles demonstrates application, but may also involve the analyzing and evaluating, and arguably the creating, levels of the revised cognitive taxonomy as well.

Playing

To design for playing, asking learners to present, perform, interview, and otherwise engage in a simulated context works well. Screen captures, video, and/or audio of play encourage reflection on accuracy in applying principles and methods in specific contexts. Learners might also demonstrate playing in MORPGs (multiplayer online role-playing games) to demonstrate role-playing and application of appropriate methods or techniques. Successful play and operation of a game demonstrates recall of methods and understanding of processes or tasks when applying skills. Incorporating nonverbal communication (e.g., gestures, attire, posture) using an avatar creates an opportunity to assess learners’ application of skills in the affective domain as well.

Refer to Churches (2009) for example rubrics for these activities and exemplars of collaboration, audio/video conferencing, and interactive whiteboards.


 

Analyzing

Authentic activities demonstrating Analyzing include distinguishing between facts and inferences and deconstructing concepts or material items so the organizational structure is understandable (Clark, 2015).

Mashing

Mashing involves aggregating or integrating multiple sources of data within a single product or output, such as a video, report, feed, or collage of items. To design for mashing, ask learners to demonstrate their skills in analyzing by copying, inserting, embedding, or otherwise pasting various types of content into word-processing or presentation files that can be submitted for evaluation. Mashing inherently involves a degree of evaluating and creating, which are higher order cognitive skills in the revised taxonomy.

Linking

To design for linking, ask learners to compile related pages of content in a wiki, link to relevant blog postings, or compile links in a document or other posting format, demonstrating analysis of the content selected for the compilation. By linking relevant items or groups of items, learners demonstrate the ability to deconstruct and differentiate multiple sources of information and break down the sources into coherent related components or categories.

Cracking

Otherwise known as reverse engineering, cracking occurs in applications-based learning, such as in computer programming, software development, webpage designing, and all forms of engineering. To design for cracking, ask learners to reverse engineer an existing item by deconstructing or “cracking” the existing creation. Naturally, designers will want to consider the legalities of reverse engineering an existing item prior to assigning this type of activity.

Refer to Churches (2009) for more examples, including an example data processing rubric and exemplars.


 

Evaluating

Authentic activities demonstrating Evaluating include appraising information, people, or situations and making judgments, which could involve selecting effective solutions, conducting appropriate hiring of personnel, or explaining and justifying budgetary items (Clark, 2015).

Commenting

To design for commenting, ask learners to contribute constructive critique or engage in a dialogue aimed at negotiating meaning using a threaded discussion, comments on a blog, revisions to a wiki or shared document, or annotations of images or other forms of content. Submitting reflective comments using video, audio, images, and documents as the subject matter or means of reflection is an effective way to demonstrate learners’ cognitive ability with evaluating. Furthermore, posting comments and following up on contributions requires learners to evaluate the materials and contexts and structure interactions in meaningful and coherent ways through discussion.

Moderating

To design for moderating, ask learners to moderate an asynchronous discussion, act as a primary or co-editor in a wiki or shared document, or to appraise content by peers in another format to demonstrate cognitive skills at the evaluating level of the revised taxonomy. As Churches (2009) explained, part of the process of moderating is evaluating information from a variety of perspectives to assess the worth, value, and appropriateness of content according to a set of standards.

Collaborating

Although collaborating can occur at each level of the revised cognitive taxonomy, it is at the evaluating and creating levels that collaborative learning activities are best situated when designing activities. To design for collaborating, ask learners to participate in a small group project drawing on the collective understanding and analysis of peers for an outcome that is reflective of their individual and collective efforts. Effective collaborative activities involve the learner in evaluating the strengths and abilities of collaborators as well as the quality of contributions.

Networking

To design for networking, ask learners to select an appropriate community from a variety of options, such as Facebook and LinkedIn groups, to discern the quality of the network and to evaluate its members for a specific purpose. As an extension, screen captures of engagement in the online community can serve as evidence during assessment of quality participation according to standards set forth by the designer or agreed upon by the learners in advance of the activity.

Reviewing

To design for reviewing, ask learners to conduct an initial (i.e., alpha) or final (i.e., beta) test of an application, process, or procedure as part of an activity whereby they will demonstrate their abilities in evaluating according to a set of standards for the application, process, or procedure. Effective reviewing requires learners to analyze as well as evaluate the application, process, or procedure tested to determine correct functions and effectiveness.

Validating

To design for validating, ask learners to develop a set of criteria or select from an existing criteria set to evaluate online content, peer content, or other forms of content to discern the value, accuracy and appropriateness of that content. As Churches (2009) noted, “With the wealth of information available to students combined with the lack of authentication of data, students of today and tomorrow must be able to validate the veracity of their information sources” (p. 30). Although it may seem similar to reviewing, the validating activity will typically require a higher level of judgment and therefore evaluating of the content.

Refer to Churches (2009) for example rubrics and exemplars for validating information and threaded discussion.


 

Creating

Authentic activities demonstrating Creating, a form of synthesizing, involve building structures or patterns from various concepts or materials and forming a new concept or material with the emphasis on the creation of innovative meaning and structure (Clark, 2015). By writing a manual for operations of processes, designing a mechanism or process for accomplishing a task, integrating solutions or ideas from various sources to solve problems, or revising processes effectively, Clark (2015) notes that learners are creating and thereby synthesizing, which is the highest level of cognitive ability represented in the revised cognitive taxonomy.

Directing

To design for directing, ask learners to plan for, develop, and direct or produce an artifact representing their learning. The outcome demonstrates creating because directing or producing requires learners to envision an outcome for processes and demonstrate the ability to evaluate and analyze alternative paths and outcomes in advance. Furthermore, reflective journaling about the creation process facilitates learning across all cognitive ability levels represented in the revised cognitive taxonomy.

Developing

To design for developing, think of multimedia as the primary means of output for demonstrating abilities. Ask learners to create with multimedia to demonstrate their innovative thinking about patterns and structures of content and/or processes. Filming, animating, videocasting, podcasting, and mixing and remixing content, as well as the development of an image, audio file, or other media content, would qualify as multimedia creation and demonstrate skills with developing, which requires the highest level of cognition, creating, and necessarily involves the synthesis of what the learner knows about the topic of their multimedia creation.

Publishing

To design for publishing, ask learners to publish text, images, sounds, or a combination of these. The necessary oversight for quality of published material requires attention from the learner to the process of creation and the published outcome. Collaboration in small groups to publish content facilitates co-creation and negotiation, which are also higher order abilities requiring greater cognition from the learners. Publishing video, audio, images or diagrams, and text are examples of publishing activities possible with common web-based technologies. Mashing text and multimedia creations into innovative patterns and structures also demonstrates cognitive abilities at the level of creating in the revised cognitive taxonomy.

Programming

To design for programming, ask learners to create original applications, patterns, procedures, processes, or games. Learners might also demonstrate creating by revising an existing process or devising innovative solutions to existing problems with existing processes.

Refer to Churches (2009) for example rubrics and exemplars of podcasting and digital publishing.


References | Credits

This tutorial is an update by the author of the text of the tutorial she created in 2008:

Johnson, L. (2008). Bloom’s taxonomy – Designing activities [SWF file]. Retrieved from http://media.ccconline.org/ccco/FacWiki/TeachingResources/Blooms_Taxonomy_Tutorials/BloomsTaxonomy_Activities_Tabs/BloomsTaxonomyActivitiesTabs.swf

The content, examples, and design are original to Lisa Johnson, Ph.D. and were inspired from the following resources, which are referenced as additional resources in the tutorial:

Churches, A. (2009). Bloom’s digital taxonomy [PDF file]. Retrieved from https://edorigami.wikispaces.com/file/view/bloom’s+Digital+taxonomy+v3.01.pdf

Clark, D. (1999-2015). Bloom’s taxonomy of learning domains. Retrieved from http://www.nwlink.com/~donclark/hrd/bloom.html

Please cite this 2015 version of the tutorial as follows:

Johnson, L. (2015). Designing activities using the revised cognitive taxonomy [Weblog]. Retrieved from https://reflectivelearning.net/2015/06/26/designing-activities-using-the-revised-cognitive-taxonomy/

Methods and Models for Evaluating Instructional Experiences

There are other strategies, methods, and models for evaluating the influence of training and instructional experiences of any form beyond the Kirkpatrick Model, CIPP Model, and Success Case Method. However, an awareness of these three will provide you with a solid foundation for understanding the importance of evaluation of instructional programs, particularly workforce training programs, which are often the focus of an instructional designer’s role in an organization. These three models are discussed in brief in this post.

Kirkpatrick Evaluation Model

Many sources of information about the Kirkpatrick Four Levels Model can be found online, including the overview on the Kirkpatrick Partners (2015) website, where the Kirkpatrick Model is explained.

Kirkpatrick Level Chain

The Kirkpatrick Model’s Four Levels are Interrelated and Interdependent (image source: KirkpatrickPartners.com)

I would argue that, just as the ADDIE Model for instructional design has components that are the essence of nearly every other instructional design model, the levels of the Kirkpatrick Model tend to appear in some form in other evaluation models as well. It is likely for this reason that the ADDIE Model and the Kirkpatrick Model continue to be widely utilized.

Although the Kirkpatrick Model, developed by Donald Kirkpatrick in the 1950s, is one of the oldest formal training and instruction evaluation models and is still widely used today, it is not the only model in existence. In fact, as you can learn from the Kirkpatrick Partners (2015) website, there is a revised version of the standard four-level model called the New World Kirkpatrick Model, which may be of interest for you to explore.

For a review of alternative models not developed by Donald Kirkpatrick, start with Bozarth's (2009) blog post on alternatives to Kirkpatrick, and then consider the brief overviews provided below of two available options: the CIPP Evaluation Model and the Success Case Method. Keep in mind that you may work for an organization that uses a combination of the Kirkpatrick Model and other models; that is quite typical, as each organization tends to develop a unique culture of evaluation to suit its needs.

CIPP Evaluation Model

The CIPP Evaluation Model was developed by Daniel Stufflebeam in the 1960s. CIPP is an acronym for Context, Inputs, Process, and Product. These four components of the CIPP Evaluation Model are designed to provide a holistic view of the success of instructional initiatives. While the Kirkpatrick Model is a reaction- and review-oriented model, the CIPP Model seeks to provide a more decision-based approach to evaluation. A summary can be read on Ivan Teh's (2015) blog in the post where the following image and a research-based exploration of the CIPP Evaluation Model are provided.

CIPP Model

As Bozarth (2009) noted, the CIPP Model is more about evaluating what is being done during design and implementation than evaluating what has been done (para. 4). Mazur (2013) offers another research-based summary of the CIPP Evaluation Model that is a good review for learning how the model can serve as a useful addition or complete alternative to the Kirkpatrick Model.

Success Case Method

Another alternative or addition to the Kirkpatrick Model is the Success Case Method. For a comprehensive introduction to the Success Case Method, refer to the Brinkerhoff (2009) excerpt of Chapter 1, which is freely available online as a PDF.

SCMKirkpatrickComparison

Comparison of Kirkpatrick-Based Models and the Success Case Method (Gram, 2011).

For a comparison of Kirkpatrick-style models and the Success Case Method, refer to the image here from Gram (2011); click the image to view it at full size.

As explained in Chapter 1 of Brinkerhoff (2009), the Success Case Method provides a rapid-evaluation method that can help bring focus to a learning initiative, while more comprehensive models, such as the CIPP and Kirkpatrick Models, are used for longitudinal analysis of the effectiveness of instruction.

Brinkerhoff (2009) also explains that the “Success Case Method combines the ancient craft of storytelling with the more current evaluation approaches of naturalistic inquiry and case study” (p. 17).

For this reason, the method appeals to many who prefer a deeper focus on qualitative data in an evaluation method (i.e., model) for approaching a determination of the return on investment (ROI) for a performance change initiative.

Remember that instruction is inherently focused on changing performance through increased, reinforced, or revised learning of knowledge and skills.

~ Lisa Johnson, Ph.D.

References

Bozarth, J. (2009, January 17). Alternatives to Kirkpatrick [Weblog post]. Retrieved from http://bozarthzone.blogspot.com/2009/01/alternatives-to-kirkpatrick.html

Brinkerhoff, R. O. (2009). The success case method: Find out quickly what’s working and what’s not (2nd ed.). San Francisco, CA: Berrett-Koehler Publishers. PDF Chapter 1 excerpt available from: http://www.bkconnection.com/static/The_Success_Case_Method_EXCERPT.pdf

Gram, T. (2011). Comparison of Kirkpatrick based models and the success case method [Image file]. Retrieved from https://performancexdesign.files.wordpress.com/2011/02/2011-02-23_2243.png

Kirkpatrick, J. (2007, August). The hidden power of Kirkpatrick’s four levels. Training and Development, 61(8), 34-37.

Kirkpatrick Partners. (2015). The Kirkpatrick model. Retrieved from http://www.kirkpatrickpartners.com/OurPhilosophy/TheKirkpatrickModel/tabid/302/Default.aspx

Mazur, A. D. (2013, June 10). The CIPP evaluation model: A summary [Weblog post]. Retrieved from https://ambermazur.wordpress.com/2013/06/10/the-cipp-evaluation-model-a-summary/

Teh, I. (2015, March 12). CIPP evaluation model [Weblog post]. Retrieved from http://ivanteh-runningman.blogspot.com/2015/03/cipp-evaluation-model.html

ID Model Resources – An Overview

There are many models for instructional design (ID). It will be up to you as a motivated learner of the art and science of instructional design to seek out information about models. There is a tremendous amount of information available online to consider. This post presents places to begin exploring and poses a question for you to consider about the ADDIE Model and whether components are the essence of all ID models.

One option for learning about ID models is a Google Images search for “Instructional Design Models” – from there, you can click any model’s image and select its “view page” option to see the webpage where the image appears and, in most instances, learn more about the model, or simply search the model’s name to learn more from a wider variety of websites. You could also begin by exploring some of the more popular models in instructional design detailed on the Instructional Design Models page from Culatta (2013) or the Instructional Design Models page from Ryder (2014).

ADDIE Model, CSUChico.Edu

Regardless of their nuances, all design models involve five essential tasks for the designer, which are reflected in one of the oldest instructional design models, the ADDIE Model (i.e., Analysis, Design, Development, Implementation, and Evaluation).

The image with this post of the ADDIE Model is from the CSUChico.Edu website; it illustrates how the model is iterative, wherein each aspect of the model occurs interdependently with the others.

Although the aspects of the ADDIE Model occur in a broadly linear sequence, they also occur simultaneously in a non-linear process (i.e., analysis, evaluation, and design naturally occur in some form during development and implementation).

This is one reason I like to call the ADDIE model the “Chi” of all design models.

Would you agree with that assertion? Why/Why not? Please comment on this post with your response!

~ Lisa Johnson, Ph.D.

References

Culatta, R. (2013). Instructional design models. Retrieved from http://www.instructionaldesign.org/models/

Ryder, M. (2014). Instructional design models. Retrieved from http://carbon.ucdenver.edu/~mryder/itc/idmodels.html

Writing Effective Learning Objectives

Learning to write effective learning objectives is a fundamental skill for instructional designers because the alignment of learning objectives with instruction is necessary for ensuring the assessment of learning is focused on what learners are expected to be able to do after the instructional activity has occurred. Thus, learning objectives guide the creation of the activity and inform the assessment strategy for the instruction.

The Cognitive Taxonomy

One of the most popular strategies for writing learning objectives is to use the original or the revised Cognitive Taxonomy, which was initially developed in the 1950s by theorist Benjamin Bloom and several colleagues and later revised by Lorin Anderson and colleagues, with the revision published in 2001. Instructional designers frequently refer to this taxonomy as simply “Bloom’s Taxonomy” and, when they do, they are increasingly referring to the revised version. However, many designers still rely on the original taxonomy (see the image from the University of Georgia (n.d.) below for a comparison of the two versions).

Comparison of Bloom's CogTax Versions

Notice that the major difference between the original and the revised versions of the cognitive taxonomy is the use of active language in the revision: for example, level 1 (the beginning/bottom level) in the original refers to “Knowledge,” while level 1 in the revision refers to “Remembering.” In both versions, there is an assumption that if the learner can perform at the higher, more advanced levels, they can also perform at the lower, more beginning levels.

In that sense, the taxonomy is a scaffold of levels of thinking performance that learners can achieve throughout instruction. Theoretically and practically, the goal for instructional design is to build learners’ ability to perform at the highest, most advanced levels of cognition through a series of sequenced instructional experiences that develop skills upward through the taxonomy.

The most important thing to remember about using the cognitive taxonomy for writing learning objectives is that the verb you choose to describe the performance the learner will be able to do after the instruction indicates a corresponding level of cognition. Though this might sound simple, it is in fact one of the most confusing things for those learning to use the taxonomy to write objectives: you do not typically want to use the label for the level of the taxonomy as your verb, especially at levels 1 and 2.

The image below, from ZaidLearn (2009), is based on the original Bloom’s Cognitive Taxonomy. It shows the levels of cognition surrounded by common verbs selected to describe performance at that level and those are surrounded by activity ideas for how learners might demonstrate their performance.

Cognitive Taxonomy Wheel

For example, it is impossible to directly measure “Remembering” or “Understanding,” yet you can measure whether a student is able to recite, recall, list, describe, explain, select, identify, and so on. Therefore, always write your learning objective so that it includes a verb designating a performance that can actually be assessed.

For assistance understanding how to write objectives using the cognitive taxonomy, you might wish to explore the Johnson (2008) Bloom’s Taxonomy: An Overview tutorial. The tutorial was created in the Shockwave Flash format so you will need to view it on a device that can render SWF (Flash) files.

For additional inspiration about how the cognitive taxonomy can inform designs and to get ideas for lessons you will create in our course and beyond, explore the Johnson (2008) Bloom’s Taxonomy: Designing Activities tutorial, which is also in the SWF format.

ABCD Objective Formatting and SMART Objectives Principles

And, finally, note that one common method for writing objectives is the ABCD Method. This method is very helpful to use because it forces the designer to think through all of the elements involved in the learner achieving the objective: Audience, Behavior, Condition, and Degree. The ABCD Method for writing objectives assumes that the designer has also considered whether the objectives are SMART (Specific, Measurable, Achievable, Realistic, Timely). You can find many resources about writing SMART objectives and using the ABCD Method online. Or, you can start learning about these now by viewing the Campbell (2014) video, Writing Learning Objectives: The ABCD Method, which is about 9 minutes in length.
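The advice above about choosing measurable verbs can be sketched in code. The following is a minimal, illustrative Python example; the verb lists and the `suggested_level` helper are my own hypothetical simplification for demonstration, not an authoritative or complete mapping of the revised taxonomy.

```python
# Illustrative sketch: map revised-taxonomy levels to a few example
# measurable verbs, then look up which level an objective's verb suggests.
# The verb lists are examples only, not an official or exhaustive mapping.

TAXONOMY_VERBS = {
    "Remembering": ["recite", "recall", "list", "identify"],
    "Understanding": ["describe", "explain", "summarize", "select"],
    "Applying": ["demonstrate", "use", "solve"],
    "Analyzing": ["compare", "differentiate", "organize"],
    "Evaluating": ["justify", "critique", "judge"],
    "Creating": ["design", "construct", "produce", "revise"],
}

def suggested_level(verb):
    """Return the taxonomy level whose example verb list contains `verb`,
    or None if the verb is not a measurable verb in this mapping."""
    verb = verb.lower()
    for level, verbs in TAXONOMY_VERBS.items():
        if verb in verbs:
            return level
    return None

# "explain" is a measurable verb; "understand" is not in any list,
# reflecting the advice to avoid level labels as objective verbs.
print(suggested_level("explain"))     # Understanding
print(suggested_level("understand"))  # None
```

Notice that a level label such as “understand” returns no match, which mirrors the point above: write objectives with verbs that designate an assessable performance, not with the names of the taxonomy levels themselves.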

~ Lisa Johnson, Ph.D.

References

Campbell, A. (Andrew Campbell). (2014). Writing learning objectives: The ABCD method [Video file]. Retrieved from https://youtu.be/cCvBypIX9Do

Johnson, L. (2008). Bloom’s taxonomy: An overview [Flash tutorial]. Retrieved from http://goo.gl/1J11l

Johnson, L. (2008). Bloom’s taxonomy: Designing activities [Flash tutorial]. Retrieved from http://goo.gl/9qvh8Y

Learning Theory Resources

Learning Theory Wordle

A learning theory is a theory about how learning occurs. A learning theory is different from an instructional design model, but the two are intertwined and often discussed together. In fact, many models for designing learning experiences reflect one or more theories about how humans learn.

A comprehensive listing of learning theories, with brief summary pages, is available on Culatta's (2013) Learning Theories page at the InstructionalDesign.Org website.

While no one can reasonably be expected to be an expert in each of the theories covered in that resource or the others provided here, all instructional designers are encouraged to save these resources, refer to them as needed in their practice of designing learning experiences, and maintain a general awareness of the broad theoretical categories.

A useful resource for conceptualizing categories of learning theory is from Millwood (2013): this interactive graphic provides a wealth of information about how various learning theories can be conceptualized as relating to one another. Although not everyone may agree with the relationships and descriptions Millwood provides, it is without question a valuable resource when learning about learning theories and a handy job aid for recalling them.

Additionally, conducting a Google Images search for Learning Theories will provide you with a wealth of visuals, ranging from diagrams to comparisons to infographics, all of which can inform your understanding of learning theories and their many varieties.

Finally, another extensive summary of learning theories can be found on Donald Clark’s blog “Plan B” – the links below reproduce the menu of postings on the blog as a convenience. You are encouraged to visit the actual blog and use it as a resource for learning about instructional design and as a model of a well-designed education-related blog.

Greeks
Religious Leaders
Religious Zealots
Enlightenment
Pragmatists
Marxists
Constructivists
Behaviorists
Psychoanalysts
School-Based
Schooling-Based
Memory-Based
Instructionists
Holists
Assessment-Based

~ Lisa Johnson, Ph.D.

References

Culatta, R. (2013). Learning theories. Retrieved from http://www.instructionaldesign.org/theories/

Millwood, R. (2013). HoTEL: Holistic approach to technology enhanced learning: Innovators opinions, perspectives [Interactive image file]. Retrieved from http://hotel-project.eu/sites/default/files/Learning_Theory_v6_web/Learning%20Theory.html

Facilitating Online Learning

Facilitation is an art as much as a science, online or in any learning environment. Every educator who acts as a facilitator of learning will have unique components to their style that represent their personality and experiences. Many times, without even knowing it, a facilitator will subscribe to a specific theory or combination of theories of learning.

To learn about the variety of learning theories that can inform facilitation, review the Culatta (2013) website on Learning Theories.

To expand your thinking about online facilitation, consider the information shared in the National College for Teaching and Leadership (n.d.) Advanced Facilitation course and, particularly, the information in the image below about online learning behaviors that is shared in the Module 6 of the course for inspiration about online facilitation in discussion-based learning environments.

Zones of Online Facilitation

Models of Online Learning Behaviors. Source: National College for Teaching and Leadership (n.d.), Advanced Facilitation course.

Considering the graphic above and your experiences with discussion facilitation in your online courses, do you find yourself experiencing all of these zones in your classes? How have the instructional design and technology choices in your classes influenced how much you have experienced each zone? Please provide your thoughts in a comment.

~ Lisa Johnson, Ph.D.

References

Culatta, R. (2013). Learning theories. Retrieved from http://www.instructionaldesign.org/theories/

National College for Teaching and Leadership. (n.d.). Advanced facilitation [Online course]. Retrieved from https://www.nationalcollege.org.uk/transfer/open/advanced-facilitation/advfac-s01/advfac-s01-t1.html

Copyright Considerations and Resources for Instructional Designers

Copyright is a very complicated area of the law because, especially with the growth of online learning and increasingly digital instructional materials, the policies are constantly being (re)interpreted. As an increasing amount of intellectual property becomes available via the web, and any of it can be copied in a screencast or screenshot, it is difficult to protect one’s intellectual property.

Screenshots are commonly used to create images for instructional materials. It is always good manners, and smart, to cite sources for images, even for a screenshot. One thing to be aware of is that non-commercial educational “fair use” allows some leeway in educational designs that is not possible in commercial designs. Depending on where you design instruction, the rules differ: for-profit universities currently have different expectations (i.e., restrictions and regulations on re-use of content) than public universities, for example.

Copyright (origin unknown)

As mentioned by Culatta (2013), adherence to copyright is an essential skill for any instructional designer and anyone else designing information or instruction for public consumption. Culatta’s webpage on copyright includes an extensive list of links to many other resources to explore about copyright and instructional design.

Tobin’s (2014) infographic on copyright effectively simplifies the copyright process as it applies to most education organizations. If you need a visual representation of the information from the Culatta (2013) source and the extensive ideas presented in the Wright (2014) presentation on copyright basics, the resource from Tobin will be useful.

Note that the Wright (2014) presentation is not the ‘last word’ on must-know knowledge for instructional designers; every corporation or organization will have a legal department that dictates the use of materials. Still, the presentation offers a good overview of basic copyright considerations all instructional designers should be aware of.

~ Lisa Johnson, Ph.D.

References

Culatta, R. (2013). Copyright information. Retrieved from http://www.instructionaldesign.org/copyright.html

Tobin, T. J. (2014). The one-page copyright flow chart [Image file]. Retrieved from https://www.dropbox.com/s/9i5kavtshstosfs/Copyright_the_Easy_Way_Handout_Tobin_20141106.pdf?dl=0

Wright, B. (2014, April 22). Copyright basics for corporate instructional designers [Presentation/Video file]. Retrieved from https://prezi.com/23_ugvmgm97w/copyright-basics-for-corporate-instructional-designers/