The SAMR model is a great way for educators to check themselves when incorporating more tech into the classroom. There is so much pressure in education right now to be really tech savvy and to bring technology into your teaching as much as possible. If we are only reaching that first level, substitution, then all the energy going into changing modes is largely wasted (and potentially a waste of money). For example, if you typically write out your notes for your students on the board and switch to lecturing from a page of notes on an overhead, is that really different? Yes, you have technically included “tech” in your classroom, but you have not increased your multimodality in an effective way. Even changing that overhead to a PowerPoint is quite close to substitution if all you do is lecture off the slides.

At least with PowerPoint you have many more options to get multimodal: including video and audio, adding links for further learning, and so on. I would classify PowerPoint as augmentation, since there is real functional improvement and there are pathways towards redefinition.

Initially I had difficulty thinking of ways to redefine common science concepts, but the additional reading by Hamilton et al. had some great suggestions. They describe how you could shift teaching students about light from a static diagram to an interactive computer simulation with variables the students can change. That made me think of a project I am doing for EDCI 767, where we analyze a science education app. My class partner and I are evaluating LifeMap, which is a really cool way of visualizing the “Tree of Life”. Phylogenetics was a pretty boring topic for me in high school, and this app lets you interactively explore the relationships between organisms and their evolutionary history. To me this app is an example of modification, as it represents significant task redesign.
I also read through Multimedia Learning Theory. The dual-channel assumption is a clear way to explain how humans take in information: Mayer (2009) states that we have a visual-pictorial channel (processing images taken in through the eyes, including printed words) and an auditory-verbal channel (processing spoken words). The redundancy principle hit home for me as something educators have practised for a long time, perhaps without the explanation of why it works so well. This principle states that messages are most effective when they consist of spoken words and graphics alone. Often teachers put both text and graphics on screen, and then speak while presenting. I remember this lesson from years ago, when I was taught “less is more” for text in PowerPoint presentations. Teachers can overwhelm learners’ visual channels with words and pictures, slowing their ability to process and understand the information being presented. From a learner’s point of view, it is much easier to focus on the speaker’s voice for additional information than to take in paragraphs of text alongside their speech. Give your learners one source of information for their visual-pictorial channel, graphics (and a few keywords), and save speech for their auditory-verbal channel.
The discussion we had on modality this week fits these topics well. More than ever, we are having to find ways to adjust how we teach as distance and online learning become more and more prevalent. How can we best accommodate all learners in different circumstances, without relying on really expensive tech or overloading them with information? Personally, I prefer face-to-face, perhaps because 90% of my education has been delivered in this format. I find I engage best with the prof, my classmates, and the class material when I have to go to a certain room and focus on that topic for an hour or so. A blended or hybrid mix could also be integrated easily by making some of the class hours online and synchronous, especially in a tech class, since that lets you follow along on your own device in real time while learning new software or programs. A big variable for me is the time period: no matter how engaging a teacher tries to make a class, I start to check out after about an hour and a half. The flexibility of online asynchronous classes helps a lot, since you can choose how long you spend on a subject.

Multi-access courses would be the most inclusive for the most learners, especially as teachers build up their online resources and tech skills so they can deliver those lessons with ease. As teachers evolve in their learning and teaching styles, multi-access will allow many different styles of learners, in whatever location, to study. At Claremont right now, there is a student who cannot come to school for health reasons. She zooms in each day on a Chromebook, but there are definite issues with audio during the lecture portion of the class. It would be awesome for her to be able to use the “cyber proxy” iPad or the telepresence robot on wheels that we were discussing in our 336 class.
She is clearly very engaged with her learning and is striving to find a solution that lets her participate in “normal” classes while staying safe. Fortunately, the teacher of the course I observed is working to include her in any way he can.
A general motto I am going to hold myself to through my teaching program and subsequent career is to keep learning and adapting. It seems like an obvious idea, but as life gets busy it will become easier to tell myself that what I have been doing is good enough. The teachers I see at Claremont with great classes are the ones trying to learn new ways to incorporate tech, be more inclusive, and stay adaptable in their teaching styles.