Student Project Pack: Analyze a News Ecosystem — From Deepfakes to Fundraising to Policy

A cross-disciplinary student project pack to study how deepfakes, crowdfunding misuse, and workplace policies interconnect in modern news ecosystems.

Hook: Turn noise into a researchable story — fast

Students and teachers face a familiar problem: the news cycle floods classrooms with half-verified claims, platform policy shifts, and ethical puzzles, yet there are few structured, trustworthy ways to study how these events connect. This project pack gives instructors a cross-disciplinary, step-by-step path for guiding student research that ties together deepfakes, crowdfunding controversies, and workplace policy rulings to reveal how modern information ecosystems form, respond, and change.

Why this pack matters in 2026

Late 2025 and early 2026 saw several converging trends: high-profile AI-driven image abuse on major social platforms, rapid user migration to alternative networks, sudden crowdfunding disputes involving celebrity identities, and court rulings that exposed gaps in workplace policy and dignity protections. These events are not isolated. They form an interlocking set of causal chains: platform features and moderation choices influence the spread of harmful content; misinformation and identity misuse change public trust and donation behavior; workplace policies and legal rulings shape how institutions respond to community harms.

Students who can analyze these connections will build practical media literacy, legal reasoning, data analysis, and ethical research skills that are directly applicable to careers in journalism, policy, tech, and education.

Learning objectives

  • Understand how platform design and policy decisions shape information flows and harms.
  • Analyze a deepfake or generative-AI incident using technical and media methods.
  • Investigate a crowdfunding controversy for provenance, author intent, and platform policy enforcement.
  • Evaluate workplace and institutional policy rulings to identify legal and ethical implications.
  • Produce a cross-disciplinary synthesis: a research brief, dataset, and public-facing explainer with policy recommendations.

How the pack is structured (6 modules, 6–8 weeks)

Each module includes a short lecture, hands-on activity, assessment rubric, and reproducible template (notebook + checklist). The modules scale from introductory media-mapping to an integrative capstone.

Module 1 — Map the ecosystem (week 1)

Goal: Build a visual map that connects actors, platforms, and actions.

  1. Collect primary anchors: Identify 3 recent events (for example: an AI image abuse controversy on a major platform, a celebrity crowdfunding dispute, and an employment tribunal about changing-room policy) and log timelines.
  2. Create a stakeholder map: users, platforms, regulators, journalists, funders, victims, and advocacy groups.
  3. Deliverable: A 1-page visual map and a 300–500 word rationale.

Module 2 — Deepfake-driven platform moves (weeks 2–3)

Goal: For one AI-related incident, analyze technical mechanics, policy responses, and audience effects.

  • Case study example (2026): the wave of non-consensual sexualized imagery generated through an integrated chatbot on a major platform, and alternative platforms’ download spikes following that controversy.
  • Methods: Use reverse image search, metadata inspection, and available deepfake-detection tools (e.g., open models like XceptionNet forks, Sensity APIs, or forensic suites such as InVID). Where possible, use pre-tagged datasets rather than scraping private content. A minimal metadata-and-similarity sketch follows this module's checklist.
  • Activity: Reconstruct the timeline of the platform’s public statements, product changes, and third-party reactions (e.g., surge in Bluesky installs reported in early January 2026).
  • Deliverable: A technical appendix (notebook) demonstrating at least one detection method and a short media-analysis memo.
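
For instructors who want a concrete starting point, the sketch below shows two of the triage steps named above in Python: EXIF metadata inspection with Pillow and perceptual-hash comparison with the imagehash library. The file names are placeholders, and this is a first-pass screen, not a deepfake detector; absent metadata or a small hash distance is a prompt for further forensics, not a verdict.

```python
# Minimal Module 2 triage sketch: EXIF inspection + perceptual hashing.
# Assumes Pillow and imagehash are installed (pip install Pillow imagehash);
# "suspect.jpg" and "reference.jpg" are placeholder file names.
from PIL import Image
from PIL.ExifTags import TAGS
import imagehash

def dump_exif(path: str) -> dict:
    """Return whatever EXIF metadata survives in the file.
    Generated or re-encoded images often carry little or none."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def perceptual_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between perceptual hashes; small values suggest
    one image is a near-duplicate or light edit of the other."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

if __name__ == "__main__":
    print(dump_exif("suspect.jpg"))  # an empty dict is itself a data point
    print(perceptual_distance("suspect.jpg", "reference.jpg"))
```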

Module 3 — Crowdfunding and identity misuse (week 3)

Goal: Investigate a crowdfunding controversy for provenance, author intent, and platform policy enforcement.

  • Case study example (January 2026): a GoFundMe created under a celebrity’s name that the celebrity disavowed, leaving collected funds in limbo and the public confused.
  • Methods: Archive the fundraiser (web capture), trace linked accounts, contact platform support channels to record response times, and cross-check with public statements from the purported beneficiary. An archiving sketch follows this module's checklist.
  • Ethics: Never interact with or encourage donations to disputed fundraisers; document rather than amplify.
  • Deliverable: A provenance report and a recommended set of platform safeguards for donation pages.
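
One way to implement the "web capture" step is through the Internet Archive's public endpoints, as in the sketch below: the availability API checks for an existing snapshot, and Save Page Now requests a fresh capture. The fundraiser URL is a placeholder; students should review the archive's usage guidelines and avoid bulk requests, since the anonymous save endpoint is rate-limited.

```python
# Minimal Module 3 archiving sketch using Internet Archive endpoints.
# FUNDRAISER_URL is a placeholder, not a real fundraiser.
import requests

FUNDRAISER_URL = "https://example.com/fundraiser/12345"

def existing_snapshot(url: str) -> str | None:
    """Ask the Wayback availability API whether a capture already exists."""
    r = requests.get("https://archive.org/wayback/available",
                     params={"url": url}, timeout=30)
    snap = r.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

def request_capture(url: str) -> int:
    """Ask Save Page Now to capture the page; returns the HTTP status code."""
    r = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    return r.status_code

if __name__ == "__main__":
    print("existing snapshot:", existing_snapshot(FUNDRAISER_URL))
    print("capture request status:", request_capture(FUNDRAISER_URL))
```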

Module 4 — Workplace policy rulings and institutional response (week 4)

Goal: Read and interpret a workplace tribunal ruling and evaluate its implications for policy and dignity protections.

  • Case study example (2026): a tribunal finding that a hospital’s changing-room policy created a hostile environment for complainants, highlighting how procedural policy and equality law intersect.
  • Methods: Obtain the tribunal decision or reliable summaries, extract legal findings, and map how the ruling interacts with national guidance and institutional practices.
  • Deliverable: A policy brief (700–1,000 words) advising a university or healthcare institution on how to update changing-room and complaint-handling policies.

Module 5 — Cross-analysis: connecting threads (week 5)

Goal: Synthesize findings across modules to trace common mechanics and leverage points.

  • Research questions to explore: How do platform affordances (APIs, bots, live streaming) enable the rapid replication of harmful content? How do fundraising algorithms and verification rules affect donor behavior in the wake of identity misuse? How do workplace policies reflect or fail to reflect digital harms?
  • Activity: Produce a causal diagram that links platform features, moderation gaps, public perception, and institutional policy failures (a starter sketch follows this list).
  • Deliverable: A 1,200–1,800 word cross-disciplinary synthesis and a slide deck for stakeholders.
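
A starter sketch for the causal diagram follows, using networkx and matplotlib. The node labels are illustrative assumptions drawn from this pack's case studies; teams should replace them with nodes grounded in their own evidence.

```python
# Minimal Module 5 causal-diagram sketch (pip install networkx matplotlib).
# The edge list encodes hypothesized causal links; labels are illustrative only.
import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("integrated AI image tool", "non-consensual imagery"),
    ("moderation gaps", "non-consensual imagery"),
    ("non-consensual imagery", "press coverage"),
    ("press coverage", "installs of alternative networks"),
    ("press coverage", "regulatory attention"),
    ("regulatory attention", "platform policy change"),
    ("press coverage", "disputed fundraisers"),
    ("disputed fundraisers", "donor distrust"),
]

G = nx.DiGraph(edges)
pos = nx.spring_layout(G, seed=42)  # fixed seed keeps the layout reproducible
nx.draw_networkx(G, pos, node_color="lightsteelblue", font_size=8, arrows=True)
plt.axis("off")
plt.savefig("causal_diagram.png", dpi=200, bbox_inches="tight")
```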

Module 6 — Public-facing deliverables and policy recommendations (week 6)

Goal: Translate research into actionable recommendations for platforms, fundraisers, and institutions.

  • Deliverables: a) Public explainer article; b) Reproducible dataset and Jupyter notebook; c) Short policy memo aimed at a platform or regulator.
  • Evaluation: Peer review and public instructor feedback session.

Tools, data sources, and ethics checklist

Students should use a combination of qualitative and quantitative tools. Below are prioritized, accessible options for 2026 classroom work.

Technical and analytic tools

  • Forensic & image tools: InVID, FotoForensics, TinEye, Google Reverse Image Search, Amnesty’s YouTube DataViewer, Sensity (for enterprise-class detection), and open-source deepfake detectors (research forks of Xception, EfficientNet).
  • Data collection: Web archives (Wayback Machine), the Meta Content Library (CrowdTangle’s successor, where research access is granted), platform APIs (be mindful of rate limits and ToS), and Appfigures or Sensor Tower for install/download signals.
  • Collaboration & reproducibility: Jupyter notebooks, Observable for interactive visualizations, GitHub for version control, and Zenodo for dataset DOI archiving.
  • Qualitative methods: discourse analysis frameworks, stakeholder interviews (with IRB/consent), and legal-text interpretation templates.

Ethics checklist

  1. Obtain institutional review where human subjects (interviews, user data) are involved.
  2. Avoid amplifying harmful content — document via screenshots or archives, not by reposting raw images or fundraisers for donation.
  3. Respect platform Terms of Service and copyright; prefer public, archived data. If a platform forbids scraping, use public archives or negotiated data access.
  4. Redact identifying details of victims when publishing demos or datasets; a pseudonymization sketch follows this checklist.
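
For item 4, the sketch below shows one pseudonymization approach: replacing an identifier column with salted, truncated SHA-256 digests. The column and file names are placeholders, and the salt must stay out of the published repository.

```python
# Minimal pseudonymization sketch for the ethics checklist (item 4).
# "raw_observations.csv" and the "account" column are placeholders.
import hashlib
import os
import pandas as pd

SALT = os.environ.get("PROJECT_SALT", "change-me")  # never commit the real salt

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, truncated SHA-256 digest."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]

df = pd.read_csv("raw_observations.csv")
df["account"] = df["account"].astype(str).map(pseudonymize)
df.to_csv("redacted_observations.csv", index=False)
```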

Assessment rubric (scoring matrix)

Use this rubric to grade student teams. Each item is scored 1–5 (1 = incomplete, 5 = exemplary).

  • Research rigor: clarity of methods, use of primary sources, reproducibility.
  • Cross-disciplinary synthesis: ability to connect technical, legal, and social dimensions.
  • Ethical conduct: demonstrated safeguards, consent, and anonymization where needed.
  • Communication: clarity of public explainer and policy memo for non-expert audiences.
  • Technical deliverable: working notebook, dataset, and visualizations.

Sample research questions & hypotheses

  • RQ: Did reports of a platform’s AI tool producing non-consensual sexual images drive increased installs of alternative networks? H: Publicized AI abuse incidents increase short-term installs of platforms that market themselves as safer alternatives. (A minimal event-window sketch follows this list.)
  • RQ: How often are crowdfunds created under a public figure’s name without verification, and what safeguards limit donor exposure? H: Lack of identity verification correlates with higher rates of disputed fundraisers during high-profile negative press.
  • RQ: Do workplace tribunal rulings about dignity reflect gaps in digital policy language (e.g., provisions on non-consensual deepfake imagery)? H: Existing workplace policies frequently omit explicit digital harms, causing inconsistent institutional responses.
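
To make the first hypothesis testable, students can compare install volumes in windows before and after the triggering coverage, as in the sketch below. The numbers are synthetic placeholders; real daily-install estimates would come from a provider such as Appfigures or Sensor Tower.

```python
# Minimal event-window sketch for RQ1; all figures are synthetic.
import pandas as pd

installs = pd.Series(
    [12_000, 11_500, 12_300, 11_800, 30_500, 42_000, 38_900, 27_400],
    index=pd.date_range("2026-01-03", periods=8, freq="D"),
)
event_date = pd.Timestamp("2026-01-07")  # placeholder: first major press coverage

pre = installs[installs.index < event_date]
post = installs[installs.index >= event_date]
lift = post.mean() / pre.mean() - 1
print(f"pre mean: {pre.mean():,.0f}, post mean: {post.mean():,.0f}, lift: {lift:.0%}")
# A lift alone is not causal evidence: compare against control apps and
# rule out concurrent marketing pushes, as the pitfalls section warns.
```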

Case study insights (applied examples from 2025–2026)

These examples illustrate how real events can anchor student inquiry.

  • Platform AI abuse and competitor growth: A late-2025 controversy over an integrated chatbot generating sexualized images of real people catalyzed regulatory attention and a measurable spike in installs for alternative networks. Students can quantify download spikes using app intelligence providers and correlate them with news timelines.
  • Crowdfunding misuse and celebrity disavowal: Public figures have repeatedly disavowed fundraisers created in their name. Investigations reveal remaining funds, slow refund processes, and inconsistent platform transparency — ideal for a provenance study and a policy recommendation to improve verification and refund workflows.
  • Workplace policy and tribunal outcomes: Employment tribunals can highlight how institutional policies — even when claiming compliance with guidance — might still create hostile environments. Legal reasoning combined with stakeholder interviews can produce actionable policy language for institutions.

Practical, actionable advice for instructors

  1. Start small: select one case per module and limit the data collection window to reduce scope creep.
  2. Emphasize reproducibility: require students to submit a short README explaining data sources and any redactions.
  3. Use role-play: assign students to be platform policy advisors, fundraisers, or regulators to explore multiple perspectives.
  4. Integrate legal literacy: invite a guest (law professor or employment lawyer) to walk through tribunal decisions.
  5. Prioritize privacy: teach students to anonymize victims and avoid amplifying harmful material — reward ethical choices in grading.

Resources and further reading (2024–2026 context)

  • Technical: InVID, Sensity, and open deepfake-detection model hubs (2024–2026 research forks).
  • Policy: recent regulator actions in 2025–2026 on AI and non-consensual content in the U.S., California AG inquiries into AI chatbots, and evolving EU AI Act guidance.
  • Journalism: contemporaneous reporting on platform controversies and crowdfunding disputes (January 2026 examples are especially relevant).

Advanced strategies and future predictions (2026–2028)

Use these as extension prompts for senior students or capstones.

  • Prediction: platform diversification — expect continued user migration cycles to niche networks following high-profile moderation failures. Students should track cross-platform conversation flows rather than treating each app in isolation.
  • Prediction: stronger provenance requirements — regulators and payment platforms will push for more identity verification on high-value fundraisers and algorithmic transparency around donation recommendations.
  • Prediction: hybrid legal remedies — workplace tribunals and civil suits will increasingly cite digital harms (e.g., non-consensual AI imagery) and demand explicit policy language; institutions will need integrated digital dignity policies.
  • Advanced technical angle: students can experiment with provenance chains using cryptographic timestamps or simple content-hash registries to demonstrate tamper evidence on critical media assets (a minimal registry sketch follows this list).
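
For that advanced angle, here is a minimal hash-chained registry sketch: each record commits to the previous record's digest, so altering any earlier entry breaks verification. It is a classroom demonstration of tamper evidence, not a production timestamping service.

```python
# Minimal content-hash registry sketch: an append-only, hash-chained log.
import hashlib
import json
import pathlib
import time

REGISTRY = pathlib.Path("registry.jsonl")

def sha256_file(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def register(path: str) -> dict:
    """Append a timestamped record that chains the previous entry's digest."""
    prev = "0" * 64  # genesis value for the first entry
    if REGISTRY.exists() and REGISTRY.read_text().strip():
        prev = json.loads(REGISTRY.read_text().splitlines()[-1])["entry_hash"]
    record = {
        "file": path,
        "sha256": sha256_file(path),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prev": prev,
    }
    # Hash the record itself so later edits to any field are detectable.
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    with REGISTRY.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```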

Common pitfalls and how to avoid them

  • Avoid scope creep: pick tight research questions and a clear timeline.
  • Don’t amplify abuse: document via archives and redactions, never repost harmful media for classroom debate.
  • Respect terms of service: if a platform forbids scraping, use public archives or negotiated data access.
  • Beware confirmation bias: build counterfactuals and test rival explanations (e.g., install spikes may be driven by marketing rather than news alone).

Deliverables checklist (what students hand in)

  • Stakeholder & timeline map (Module 1)
  • Technical appendix & detection notebook (Module 2)
  • Provenance report on crowdfunding (Module 3)
  • Policy brief and annotated tribunal summary (Module 4)
  • Cross-analysis synthesis and slide deck (Module 5)
  • Public explainer + data & code repository (Module 6)

Teacher’s quick-start: a 4-hour sprint version

  1. Hour 1: Present one combined timeline (platform abuse → fundraising dispute → tribunal ruling).
  2. Hour 2: Break students into teams to map stakeholders and formulate one hypothesis each.
  3. Hour 3: Each team completes a short evidence-gathering task (reverse image search, archive a fundraiser page, read a tribunal summary).
  4. Hour 4: Teams present 5-minute findings and 1 policy recommendation. Instructor grades for clarity and ethical posture.

Actionable takeaways

  • Focus on relationships: platform features, user behavior, and institutional policy are parts of a causal chain — map them.
  • Respect data ethics: archive and redact rather than amplify.
  • Build reproducibility: a notebook and dataset are part of the grade, not optional extras.
  • Translate to policy: research should end with concrete recommendations for platforms, fundraisers, or employers.

“Students who learn to trace the links between technology, money, and law don’t just critique the news — they build fixes.”

Call to action

Ready to run this in your class? Download the full pack (templates, rubric, notebooks) and a sample dataset version for classroom use. Try the 4-hour sprint in your next session and share student outputs with the knowable community for peer feedback. If you want a tailored syllabus or a guest lecture on legal interpretation of tribunal rulings or technical deepfake detection, reach out — let’s turn noisy current events into robust learning pathways.
