
The Adrenaline Economy: Launching a Horror Drama Marketplace
The Anatomy of a Niche: Why Horror and Why Now?
The Creator Partnership: Building a Sustainable Talent Pipeline
UX for the Uncanny: Designing for Dread
The Art of Curation: Quality Control in the Shadows
The Monetization Matrix: Beyond Traditional Ad Revenue
Marketing to the Macabre: Viral Growth Hacking
The Legal Labyrinth: Rights, Royalties, and IP
The Tech Stack: High-Fidelity in a Bite-Sized Format
Building the Coven: Community and Fandom Engines
Data-Driven Dread: Using Analytics to Guide Content
The Global Scream: Scaling Across Borders
The Dark Side of Branding: Sponsorships and Integration
Safety in the Shadows: Moderation and Compliance
The Future of Fear: VR, AR, and Interactive Narratives
The Zero Hour: Launching and the Roadmap to MVP
SPEAKER_1: Alright, so last lecture we established that brand partnerships have to be genre-native — the wrong sponsor poisons the entire emotional contract. That framing really clarified something for me. But it got me thinking about a related problem: what happens when the content itself crosses a line? Because horror is supposed to be dark, but there's clearly a point where dark becomes dangerous.

SPEAKER_2: That tension is exactly where moderation gets complicated for a horror platform. Content moderation, at its core, is about structuring participation: facilitating cooperation and preventing abuse. It's governance, not sanitization, and that matters especially in horror, where context does so much of the work.

SPEAKER_1: So for someone building this from scratch, what are the actual red lines? The things that are non-negotiable regardless of artistic intent?

SPEAKER_2: Three hard stops. First, content that sexualizes minors — zero tolerance, immediate removal, mandatory reporting. Second, real-world harm instructions embedded in fictional framing — a 'horror story' that's actually a synthesis guide for dangerous substances. Third, content that targets real, identifiable individuals with credible threats. Those three are the red lines where artistic expression has no standing.

SPEAKER_1: And everything outside those three is a judgment call?

SPEAKER_2: A structured judgment call. Successful moderation has to be principled, consistent, and contextual, with proactive, transparent practices tailored to horror as a genre. A scene's cultural and narrative context can change the moderation decision entirely, and that's truer in horror than almost anywhere else.

SPEAKER_1: How does the platform operationalize that contextual judgment at scale? Because the curation team from lecture four is four to six people — they can't review everything.

SPEAKER_2: AI handles the initial pass through keyword clustering, image recognition, and audio analysis, flagging content for human review when the signals warrant it. The real question is what percentage gets escalated. A defensible threshold is around fifteen to twenty percent of submissions routed to human review, rising as the share of borderline cases grows.

SPEAKER_1: So the automation catches the obvious violations, and humans own the gray zone. But here's what I keep wondering — why do trigger warnings matter more in horror than in, say, a drama platform?

SPEAKER_2: Because horror audiences opt into fear, but they don't always opt into specific trauma triggers. Someone who loves psychological thrillers may have a clinical PTSD response to realistic depictions of drowning. Trigger warnings let viewers manage their own exposure: the platform respects the genre's boundaries while acknowledging individual sensitivities.

SPEAKER_1: That's a meaningful distinction. Now, there's a compliance dimension here too — app store policies. How many moderators does a platform actually need to stay on the right side of Apple and Google?

SPEAKER_2: At year-one scale — a library of one hundred to four hundred pieces — a dedicated moderation team of three to five people is defensible, working alongside the curatorial team rather than separately. The critical thing is that data protection and regulatory compliance have to be explicit mandates in the security protocols, not assumptions. Non-compliance with app store policies risks app removal, not just content takedown.

SPEAKER_1: That's a real existential risk. And I imagine the policies aren't static — they shift, and the platform has to keep up.
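To make the triage discussed above concrete, here is a minimal sketch of that routing logic: hard removal for the three red lines, trigger-warning tags for permissible but sensitive themes, and a confidence threshold that sends roughly fifteen to twenty percent of submissions to the human queue. The category names, score fields, and threshold value are illustrative assumptions, not the platform's actual taxonomy or pipeline.

```python
from dataclasses import dataclass, field

# Illustrative category names only; a real taxonomy would come from the
# platform's moderation rubric and its app store policy mapping.
RED_LINES = {"sexualization_of_minors", "real_world_harm_instructions", "credible_threats"}
WARNING_TAGS = {"drowning", "self_harm_depiction", "body_horror", "sexual_violence"}

# Assumed escalation threshold, tuned so roughly 15-20% of submissions land in
# the human review queue, per the target discussed above.
HUMAN_REVIEW_THRESHOLD = 0.4

@dataclass
class Submission:
    content_id: str
    flags: dict[str, float] = field(default_factory=dict)  # category -> classifier confidence

def route(sub: Submission) -> tuple[str, set[str]]:
    """Return a (decision, trigger_warnings) pair for one submission."""
    # Hard stops: any red-line hit is removed and reported, with no discretion involved.
    if any(cat in RED_LINES for cat in sub.flags):
        return "remove_and_report", set()

    # Permissible-but-sensitive themes become viewer-facing trigger warnings.
    warnings = {cat for cat in sub.flags if cat in WARNING_TAGS}

    # Borderline or high-confidence scores go to moderators for contextual judgment.
    if any(score >= HUMAN_REVIEW_THRESHOLD for score in sub.flags.values()):
        return "human_review", warnings

    return "publish", warnings
```

In practice the threshold would be tuned against the review queue's actual volume rather than fixed up front, so the escalation rate stays near the target as the library grows.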
SPEAKER_2: Right, and this is where research on organizational security behavior becomes directly relevant. Analysis of 118 interviews across organizations found that when official policies are too rigid or out of step with real work conditions, employees don't just ignore them — they create shadow security: alternative compliance measures that actually work. The same dynamic happens in content moderation teams.

SPEAKER_1: Shadow security — that's a striking term. What does that look like inside a moderation team specifically?

SPEAKER_2: A moderator finds that the official flagging rubric keeps misclassifying folk horror as gore because the automated system can't read cultural context. So they start maintaining a personal reference document of regional horror conventions that the official system doesn't account for. That's shadow security — well-intentioned, but it creates inconsistency and institutional blind spots.

SPEAKER_1: So the shadow workaround is actually a signal that the official policy has a gap.

SPEAKER_2: Exactly. Organizations rarely evaluate whether security policies are fit for purpose in real work environments — that's a documented failure mode. The fix is a formal feedback loop: give moderators a channel to report policy gaps, and act on those reports before shadow practices take root. Employees willingly report problems; organizations that ignore those reports are the ones that end up with fragmented, inconsistent enforcement.

SPEAKER_1: What about the enforcement side — how does the platform handle creators who push against the moderation framework?

SPEAKER_2: Research is clear that pure discipline — warnings, sanctions — is ineffective on its own, because monitoring costs are high and widespread non-compliance makes enforcement feel arbitrary. The more durable approach is persuasion: training, transparency about why a line exists, and making compliance low-friction. If a creator understands that a specific scene triggers app store removal and gets a clear recut guide, most will comply. If they just get a rejection notice, they'll push back or leave.

SPEAKER_1: That connects directly to the creator partnership model — the platform's relationship with creators has to hold even when moderation creates friction.

SPEAKER_2: And that's the balance. Artistic expression and safety guidelines aren't opposites — they're in tension, and the resolution is transparency. Creators need to know exactly what the red lines are, why they exist, and what the path to compliance looks like. Opaque moderation destroys the Curator-as-Partner relationship faster than almost anything else.

SPEAKER_1: So for Yolanda, and really for anyone building this — what's the single thing they should carry out of this lecture?

SPEAKER_2: That moderation is not a filter bolted onto the content pipeline — it's load-bearing infrastructure, the same way the legal framework from lecture seven is. Robust moderation tools keep the content within genre, protect the platform from app store removal, and preserve the creator trust the entire ecosystem depends on. Three red lines, a fifteen-to-twenty percent human review threshold, transparent rubrics, and feedback loops that catch policy gaps before shadow workarounds do. Build that before the library scales, not after the first violation.
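One way to picture the feedback loop described above is a lightweight policy-gap report that moderators file whenever the official rubric misfires, as in the folk-horror-flagged-as-gore example. This is a sketch under assumptions: the field names, the report flow, and the three-report revision trigger are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class PolicyGapReport:
    """Filed by a moderator when the official rubric misfires on real content."""
    report_id: str
    content_id: str         # the submission that exposed the gap
    rubric_rule: str        # which rule produced the wrong outcome
    observed_problem: str   # e.g. "folk horror ritual scene auto-flagged as gore"
    suggested_change: str   # the moderator's proposed fix
    filed_on: date

def rules_due_for_revision(reports: list[PolicyGapReport], min_reports: int = 3) -> list[str]:
    """Return rubric rules that several moderators have independently flagged.

    Once the same rule keeps appearing in gap reports, it goes on the agenda
    for a formal rubric revision, which turns what would otherwise become a
    shadow workaround (a private reference doc) into official policy.
    """
    counts = Counter(r.rubric_rule for r in reports)
    return [rule for rule, n in counts.items() if n >= min_reports]
```

The design choice here mirrors the lecture's point: the loop only works if the reports are reviewed on a regular cadence, so the channel is visibly acted on rather than becoming another ignored inbox.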