Evaluation and Revision Report

Title: Rebuilding Lives: Advocacy Through Action
Course: EDTC 6332 – Educational Technology Practicum
Author: Micki Archuleta


Overview

Rebuilding Lives: Advocacy Through Action is a modular, web-based instructional solution designed in Canvas. It introduces learners to core concepts of civic engagement through the lens of advocacy, housing justice, and systems thinking. The training spans five units and culminates in a final advocacy plan. To evaluate the module, I used two formative evaluation strategies: (1) a Subject Matter Expert (SME) review and (2) a small group evaluation. Both evaluations were simulated by ChatGPT.


Formative Evaluation Strategy #1: Subject Matter Expert (SME) Review

Introduction

Acting as a simulated SME, ChatGPT reviewed the Canvas course, focusing on clarity, content flow, instructional accuracy, and audience appropriateness.

Summary of Findings

First Iteration

ChatGPT Simulated SME Review

Unit 1
  What works well: Clear overview of advocacy cycle; directions concise.
  Quick tweaks proposed by the SME:
    • Add a self-check poll so learners test prior knowledge.
    • Include a local mini-research task to anchor concepts in their own community.

Unit 2
  What works well: Good explanation of social-issue categories; decent reading list.
  Quick tweaks proposed by the SME:
    • Integrate a real-world infographic or interactive map.
    • Prompt students to connect an issue to their ZIP code.

Unit 3
  What works well: The persuasive-writing guide is solid.
  Quick tweaks proposed by the SME:
    • Show a weak vs. strong message comparison.
    • Have learners revise the weak sample for practice.

Unit 4
  What works well: Outline for advocacy plan is logical.
  Quick tweaks proposed by the SME:
    • Insert a peer-review step before submission.
    • Require a Risk-Ethics-Sustainability check to flag privacy and data-use issues.

Unit 5
  What works well: Final project options cover audio, video, and text.
  Quick tweaks proposed by the SME:
    • Provide examples of effective projects in each medium.
    • Add a reflective checklist students complete before uploading.

Final Assessment
  What works well: Rubric aligns with learning outcomes.
  Quick tweaks proposed by the SME:
    • Embed the rubric preview right on the page so students don’t have to click “Show Rubric.”
    • Add a self-assessment checklist mirroring the rubric.

Second Iteration

ChatGPT Simulated SME Review

Unit 1: Self-check poll + local mini-research
  What works well:
    • Poll → quick formative data.
    • Mini-research gets learners into primary sources early.
  Actionable fix (high-impact first):
    • Add a 10-word “term bank” so research teams use common language (cuts off-topic Googling).

Unit 2: Infographic / map connection
  What works well:
    • Place-based hook aligns with C3 Framework for Civics.
    • Map layer toggles = great UDL.
  Actionable fix (high-impact first):
    • Require alt-text for every student-added icon to meet WCAG 2.1 (see the audit sketch after this table).
    • Offer a sample Freedom-of-Information request for super-curious students (optional doc).

Unit 3: Weak vs. strong message rewrite
  What works well:
    • Side-by-side example clarifies standards.
    • Revision prompt mirrors best practice in writing centers.
  Actionable fix (high-impact first):
    • Provide 1-sentence context for the weak sample (who’s the audience?) so revisions don’t drift.
    • Add a 75-word cap to keep edits concise.

Unit 4: Peer review + Risk-Ethics-Sustainability
  What works well:
    • Peer rubric is nicely pared down (three traits).
    • RES table catches privacy blind spots.
  Actionable fix (high-impact first):
    • Shift the RES Check before peer review—so drafts already meet baseline ethics.

Unit 5: Project examples + reflection checklist
  What works well:
    • Media-diverse examples reduce cognitive load.
    • Checklist covers Bloom + ethical lenses.
  Actionable fix (high-impact first):
    • For consistency, add a link back to the rubric inside each example caption (“See how this earned full points on Creativity”).

Final Assessment: Rubric preview + self-assessment
  What works well:
    • Inline rubric = zero click friction.
    • Checklist mirrors rubric wording—excellent alignment.
  Actionable fix (high-impact first):
    • In SpeedGrader, pre-load “scored” comments that map to the checklist numbers—you’ll save grading time.
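
One of the Unit 2 fixes above, alt text on every student-added icon per WCAG 2.1, is easy to spot-check with a short script. The sketch below is a minimal example, assuming the Canvas pages have been exported as local HTML files; the folder path is a placeholder and BeautifulSoup is the only dependency.

```python
# alt_text_audit.py - flag <img> tags with missing or empty alt text
# Minimal sketch: assumes the Canvas pages have been exported as .html files
# into a local folder (the folder name below is a placeholder).
from pathlib import Path

from bs4 import BeautifulSoup  # pip install beautifulsoup4

EXPORT_DIR = Path("canvas_export/wiki_content")  # placeholder export location


def audit_page(html_path: Path) -> list[str]:
    """Return a message for every <img> on the page lacking alt text."""
    soup = BeautifulSoup(html_path.read_text(encoding="utf-8"), "html.parser")
    problems = []
    for img in soup.find_all("img"):
        if not (img.get("alt") or "").strip():
            problems.append(f"{html_path.name}: <img src='{img.get('src')}'> lacks alt text")
    return problems


if __name__ == "__main__":
    issues = [msg for page in sorted(EXPORT_DIR.glob("*.html")) for msg in audit_page(page)]
    print("\n".join(issues) if issues else "All images have alt text.")
```

Running a check like this before each publish cycle gives a quick pass/fail list without opening every page in the Canvas editor.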

Synopsis

Immediate Revisions:

  • Enhanced accessibility features across multimedia
  • Improved rubric clarity and student instructions
  • Modified MapQuest to be location-flexible

Long-Term Considerations:

  • Expand SME review pool to include a DEI specialist
  • Integrate adaptive branching logic for different learner paths
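
The branching-logic consideration above can be prototyped on paper or in code before committing to a build in Canvas (conditional-release tooling such as MasteryPaths would be the eventual delivery mechanism). The sketch below is a design aid only; the score thresholds and module names are illustrative assumptions, not settings from the current course.

```python
# branching_sketch.py - illustrative routing rules for adaptive learner paths
from dataclasses import dataclass


@dataclass
class PathRule:
    min_score: float   # inclusive lower bound on the Unit 1 self-check score (0.0-1.0)
    next_module: str   # module the learner would be routed to


# Hypothetical rules, ordered highest threshold first.
RULES = [
    PathRule(0.8, "Unit 2: independent mini-research"),
    PathRule(0.5, "Unit 2: guided mini-research with term bank"),
    PathRule(0.0, "Unit 1 review: advocacy-cycle refresher"),
]


def route(self_check_score: float) -> str:
    """Return the first module whose threshold the score meets."""
    for rule in RULES:
        if self_check_score >= rule.min_score:
            return rule.next_module
    return RULES[-1].next_module


# Example: a learner scoring 65% on the Unit 1 poll gets the guided path.
print(route(0.65))
```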

Formative Evaluation Strategy #2: Small Group Evaluation

Introduction

ChatGPT simulated a panel of 25 students in grades 10–12 and reviewed the course against that audience's expectations.

Summary of Findings

First Iteration

Youth Panel Feedback (25 students, grades 10-12)

Unit 1 Intro
  Loved 💚: “Slides were short.”
  Confused / wanted 💭: “Wanted to know if we ‘got it’ right away.”
  Quick tweak the class suggested: Add a Kahoot-style poll and show results instantly.

Unit 1 Content
  Confused / wanted 💭: “Research felt like homework, not a game.”
  Quick tweak the class suggested: Turn it into a mini-research quest about our own town.

Unit 2 Readings
  Loved 💚: “Map of global stats was cool.”
  Confused / wanted 💭: “Didn’t see anything about local stuff.”
  Quick tweak the class suggested: Drop in an interactive map/infographic where we place pins on our city.

Unit 3 Writing Lesson
  Loved 💚: “Examples help.”
  Confused / wanted 💭: “Not sure how a bad message looks.”
  Quick tweak the class suggested: Give a weak vs. strong side-by-side and let us fix the weak one.

Unit 4 Draft
  Loved 💚: “Checklist is OK.”
  Confused / wanted 💭: “No chance to see a friend’s work.” “Scared about legal problems.”
  Quick tweak the class suggested: Add a peer-review prompt and a risk & ethics worksheet.

Unit 5 Instructions
  Loved 💚: “Project choices are fun.”
  Confused / wanted 💭: “Still don’t know what an A project looks like.”
  Quick tweak the class suggested: Show sample projects (video, audio, written).

Unit 5 Wrap-up
  Confused / wanted 💭: “Did I remember everything?”
  Quick tweak the class suggested: Provide a reflection / self-checklist to tick off.

Final Submit Page
  Confused / wanted 💭: “Rubric link is tiny.” “We forgot to open it.”
  Quick tweak the class suggested: Embed the rubric and a self-assessment checklist right above the Submit button.

Second Iteration

Youth Panel Feedback (25 students, grades 10–12)

Poll in Unit 1
  Loved 💚: “Fun, felt like Kahoot.”
  Confused / wanted 💭: “Show us class results right away!”
  Quick tweak the class suggested: Toggle “Results After Voting” in poll settings.

Local Mini-Research
  Loved 💚: “Cool to look up our own town.”
  Confused / wanted 💭: “Needed example of good source—got lost on TikTok.”
  Quick tweak the class suggested: Add a 3-minute screencast: “How to vet a source in 60 seconds.”

Infographic Map
  Loved 💚: “Dragging icons was like a game.”
  Confused / wanted 💭: “Map loads slow on phones.”
  Quick tweak the class suggested: Offer a low-bandwidth PNG fallback.

Weak vs. Strong Message
  Loved 💚: “Roasting the weak one was funny.”
  Confused / wanted 💭: “We weren’t sure what counts as too spicy language.”
  Quick tweak the class suggested: Give a “persuasive tone slider” graphic (formal ↔ casual) as a guardrail.

Peer Review Prompt
  Loved 💚: “Stickers were 🔥.”
  Confused / wanted 💭: “Hard to find partner’s doc—link maze.”
  Quick tweak the class suggested: Put the peer review link in the to-do list AND email.

RES Table
  Loved 💚: “Made me think ‘what could go wrong’—scary but good.”
  Confused / wanted 💭: “Didn’t know what ‘mitigation’ means.”
  Quick tweak the class suggested: Add hover-tooltip: Mitigation = how you shrink a risk.

Project Examples
  Loved 💚: “Video example = goals.”
  Confused / wanted 💭: “Written example looked long.”
  Quick tweak the class suggested: Bold the three sentences that hit rubric criteria so length feels lighter.

Reflection Checklist
  Loved 💚: “Checkboxes = satisfying.”
  Confused / wanted 💭: “Couldn’t tick them—it’s just a picture?”
  Quick tweak the class suggested: Convert to an interactive Canvas quiz with auto-complete answers.
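
The last tweak, turning the static reflection checklist into an interactive Canvas quiz, could also be scripted rather than built by hand. The sketch below is a rough illustration that assumes the classic Canvas Quizzes REST API; the instance URL, token, course ID, and checklist wording are placeholders, and the endpoints and parameter names should be verified against the Canvas API documentation before use.

```python
# checklist_quiz_sketch.py - turn the static reflection checklist into a Canvas quiz
# Rough sketch only: assumes the classic Canvas Quizzes REST API. Verify endpoints
# and parameter names in the Canvas API docs; all values below are placeholders.
import requests

BASE_URL = "https://example.instructure.com/api/v1"   # placeholder Canvas instance
TOKEN = "YOUR_ACCESS_TOKEN"                            # placeholder API token
COURSE_ID = "12345"                                    # placeholder course ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

CHECKLIST_ITEMS = [  # illustrative wording, not the course's actual checklist
    "My project names a specific decision-maker as its audience.",
    "I completed the Risk-Ethics-Sustainability check on my final draft.",
    "My submission addresses every rubric criterion, including creativity and evidence.",
]


def create_reflection_quiz() -> None:
    """Create an ungraded practice quiz with one true/false item per checklist line."""
    quiz_resp = requests.post(
        f"{BASE_URL}/courses/{COURSE_ID}/quizzes",
        headers=HEADERS,
        json={"quiz": {"title": "Unit 5 Reflection Checklist",
                       "quiz_type": "practice_quiz"}},
    )
    quiz_resp.raise_for_status()
    quiz_id = quiz_resp.json()["id"]

    for item in CHECKLIST_ITEMS:
        requests.post(
            f"{BASE_URL}/courses/{COURSE_ID}/quizzes/{quiz_id}/questions",
            headers=HEADERS,
            json={"question": {"question_text": item,
                               "question_type": "true_false_question"}},
        ).raise_for_status()


if __name__ == "__main__":
    create_reflection_quiz()
```

A scripted build keeps the checklist wording in one place, so later revisions to the rubric can be pushed to the quiz without re-typing each item in the Canvas editor.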

Synopsis

Immediate Revisions:

  • Edited confusing quiz items
  • Enhanced scaffolding for peer review
  • Added student-facing examples for final submissions

Long-Term Considerations:

  • Explore embedding peer feedback tools directly within Canvas rubrics
  • Offer optional badges or certifications for course completion

Summary and Reflection

This practicum deepened my understanding of instructional iteration. I learned that accessibility and clarity are not “add-ons” but essential from the start. The feedback loop with both experts and students was invaluable, especially for refining the user experience. I’ve also gained insight into how performance context—such as geographic diversity—can change how content is interpreted.

Professionally, this project reaffirmed my commitment to educational game design, civic engagement, and inclusive pedagogy. It has laid the groundwork for future instructional design projects and scholarly publication. I plan to share this module openly while using the evaluation findings to improve future iterations and scale impact.