Week 6 · Production Process

Making the Invisible Visible

On the production process of building a SaaS research platform, and the challenge of showing work that resists being shown.


I. Introduction: My Work Doesn't Photograph Well

Over the past six weeks, I have been building an AI-mediated communication research platform — a system that allows marketing practitioners at appointment-based businesses to design, execute, and measure multi-channel outreach experiments. The platform supports visual DAG workflow authoring, automated SMS and email delivery with hybrid fallback, real-time per-node metrics, and multi-tenant data isolation.

None of this photographs well.

When I tell people I built a "research platform," they picture a website. What I actually built is a backend system: 14 service modules, a DAG execution engine, circuit breakers, a permission system, message delivery pipelines, and a multi-tenant PostgreSQL architecture with schema-based isolation. The frontend — a React Flow canvas and some dashboards — is roughly 10% of the work. The other 90% is invisible.

This essay is about the production process of making that invisible work visible: how the project evolved from wireframes to a working system, and how I plan to show it on Demo Day.

II. From Wireframe to Website

The transition from wireframe to production was not the smooth pipeline that the phrase implies. It was a series of breaks, pivots, and rebuilds.

In Week 3, I built demo.html — a 51-kilobyte interactive prototype that simulated the entire platform. It had a strategy builder, an audience targeting panel, an experiment dashboard, and analytics charts. It looked complete. It was entirely fake. Every number was hardcoded, every interaction was a CSS animation, and no data ever touched a database.

This prototype served a critical function: it let me test the concept before committing to the implementation. The audience could click through it and give feedback on the UX without me spending weeks on backend architecture that might need to change.

The wireframe was not a plan for the website. It was a hypothesis about what the website should be.

Between Week 3 and Week 5, that hypothesis was repeatedly challenged. A research pivot in Week 3 rejected the original assumption that AI communication is universally better than human outreach. This reframed the platform from "AI outreach tool" to "A/B research platform" — a fundamentally different product that required a completely different backend architecture.

The actual transition from wireframe to working system involved replacing every fake element with a real one: simulated campaign data became PostgreSQL queries across tenant-isolated schemas; CSS-animated progress bars became real-time metrics polled from a DAG execution engine; static analytics became computed conversion funnels. The UI barely changed. The entire foundation underneath it was rebuilt.
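The gap between the fake and the real version is easy to see in miniature. A hardcoded analytics panel is a dict of constants; the computed version aggregates delivery events into a funnel. The sketch below is illustrative only: the event names, stages, and row shape are hypothetical, not the platform's actual schema, and the real system would read these rows from tenant-isolated PostgreSQL tables rather than an in-memory list.

```python
# Illustrative sketch: computing a conversion funnel from raw delivery
# events instead of hardcoding the numbers. Stage names are hypothetical.
FUNNEL_STAGES = ["sent", "delivered", "responded", "converted"]

def conversion_funnel(events):
    """Count distinct contacts reaching each stage, in funnel order."""
    reached = {stage: set() for stage in FUNNEL_STAGES}
    for contact_id, stage in events:
        if stage in reached:
            reached[stage].add(contact_id)
    return {stage: len(ids) for stage, ids in reached.items()}

events = [
    (1, "sent"), (2, "sent"), (3, "sent"),
    (1, "delivered"), (2, "delivered"),
    (1, "responded"),
    (1, "converted"),
]
print(conversion_funnel(events))
# → {'sent': 3, 'delivered': 2, 'responded': 1, 'converted': 1}
```

The prototype's version of this was the output dict typed in by hand; the production version is the aggregation, plus the database underneath it.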

What I learned: "wireframe to website" is a misleading metaphor. It suggests continuity — that you draw the wireframe, then fill it in with code. In practice, the wireframe is disposable. The real product emerges from the constraints discovered during implementation, not from the vision encoded in the mockup.

III. From Tech Demo to Installation Space

Showing a SaaS platform in a presentation room is an act of translation.

A tech demo assumes a technical audience. You open a terminal, show some logs, maybe step through a debugger. An "installation space" — even metaphorically — assumes a mixed audience. Some viewers will understand what a DAG executor is; others will only understand what they can see on screen.

The core tension: If I only show the UI, the audience sees a drag-and-drop editor — something that looks simple and doesn't communicate the engineering depth. If I only explain the architecture, I lose anyone who doesn't know what a circuit breaker is.

My solution is a three-layer presentation strategy:

Layer 1 — Live UI Walkthrough: Open the real system. Create a campaign, drag nodes onto the canvas, configure an audience. Let the audience experience the tool as a practitioner would. This layer is for empathy: it shows who uses this and what it feels like.

Layer 2 — Architecture Reveal: After clicking "Execute," pause. Show what happens behind the scenes. The DAG executor traverses the graph, evaluating condition nodes, sending messages through the delivery pipeline, triggering the hybrid fallback when SMS goes unanswered. This layer is for understanding: it shows what the system does that the UI cannot reveal.

Layer 3 — Data Story: Show a completed experiment's results. Conversion rates between groups, delivery success rates, the moment when the fallback channel outperformed the primary one. This layer is for evidence: it shows what the platform proves.

The key insight is that no single layer is sufficient. The UI walkthrough without the architecture reveal looks trivial. The architecture reveal without the data story looks theoretical. The data story without the UI walkthrough lacks context. All three layers together create the "installation space" — a structured experience that translates invisible backend innovation into something an audience can understand.
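The Layer 2 traversal is the part that resists being shown, but its core idea fits in a toy sketch: walk the graph from a start node, let condition nodes pick a branch, and let action nodes do work. Everything here (node kinds, edge labels, the workflow itself) is illustrative, not the platform's actual executor, which also has to handle scheduling, delivery status, and failures.

```python
def execute_dag(nodes, edges, start, context):
    """Walk a workflow graph from `start`, following the branch chosen
    by condition nodes and logging the action nodes along the way.

    nodes: {node_id: (kind, payload)} where kind is "action" or "condition"
    edges: {(node_id, label): next_node_id}; action edges use label "next"
    """
    log = []
    current = start
    while current is not None:
        kind, payload = nodes[current]
        if kind == "condition":
            label = "true" if payload(context) else "false"
        else:  # action node, e.g. send an SMS or a fallback email
            log.append(payload)
            label = "next"
        current = edges.get((current, label))  # no edge ⇒ workflow ends
    return log

# Toy workflow: send an SMS, then branch on whether the contact responded.
nodes = {
    "sms": ("action", "send_sms"),
    "responded?": ("condition", lambda ctx: ctx["responded"]),
    "email": ("action", "send_fallback_email"),
}
edges = {
    ("sms", "next"): "responded?",
    ("responded?", "false"): "email",
    # ("responded?", "true") has no outgoing edge: the workflow ends
}
print(execute_dag(nodes, edges, "sms", {"responded": False}))
# → ['send_sms', 'send_fallback_email']
```

In the presentation, this is roughly what the architecture diagram in Layer 2 animates: the path the executor actually took through the graph the audience just built in Layer 1.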

IV. From A/B Testing to Polished Mockup

A/B testing has an inherent temporal problem: experiments need time to run.

A real campaign on this platform sends SMS messages to a treatment group and emails to a control group, waits for responses over 24 to 72 hours, triggers fallback channels for non-respondents, and then computes statistical significance across conversion metrics. This is the core value of the platform — and it is completely undemonstrable in real time.
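The final step of that pipeline, the significance computation, can at least be illustrated even if the 72-hour wait cannot. A standard choice for comparing two conversion rates is a two-proportion z-test; whether the platform uses exactly this test is an assumption here, and the group sizes below are made up.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: SMS group 40/200 converted, email group 22/200.
z = two_proportion_z(40, 200, 22, 200)
print(round(z, 2))  # → 2.49; |z| > 1.96 ⇒ significant at the 5% level
```

The point of showing the arithmetic is that the interesting output is one number, and that number only exists after days of message delivery and waiting.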

The hybrid fallback strategy, which automatically switches from SMS to email when a message goes unanswered for 24 hours, is one of the platform's most technically interesting features. It requires exactly the kind of time that a presentation does not have.
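The decision itself is simple to state, which is part of what makes the feature hard to demo: all of the interesting behavior lives in the 24-hour wait. A minimal sketch of the trigger check, with hypothetical field names:

```python
from datetime import datetime, timedelta

FALLBACK_AFTER = timedelta(hours=24)  # illustrative threshold

def needs_email_fallback(message, now):
    """True when an SMS is still unanswered past the fallback window.
    The dict keys here are assumptions, not the platform's real schema."""
    return (
        message["channel"] == "sms"
        and message["responded_at"] is None
        and now - message["sent_at"] >= FALLBACK_AFTER
    )

sent = datetime(2025, 3, 1, 9, 0)
msg = {"channel": "sms", "sent_at": sent, "responded_at": None}
print(needs_email_fallback(msg, sent + timedelta(hours=25)))  # → True
print(needs_email_fallback(msg, sent + timedelta(hours=3)))   # → False
```

Ten lines of logic, 24 hours of elapsed time: the code compresses into a slide, the waiting does not.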

The "polished mockup" of an A/B experiment is not a screenshot of results. It is a compressed narrative: here is what we tested, here is what happened over 72 hours, and here is what the data tells us.

My approach to this problem is time compression through pre-run experiments. Before Demo Day, I will run a real campaign through the full pipeline: audience selection, message delivery, fallback triggering, conversion tracking. Then, during the presentation, I will walk through this completed experiment as a narrative — showing the real data, the actual delivery logs, the genuine conversion metrics — while explaining what happened at each stage.

This is not a mockup. It is the actual output of the system, presented as a story rather than a live process. The "polish" comes not from making the data look better, but from structuring the narrative so that the audience understands the time dimension that they cannot experience directly.

V. Challenges

The production process surfaced several challenges that I did not anticipate at the wireframe stage:

Multi-tenancy is invisible but expensive. Roughly 20% of the backend code exists to ensure that Tenant A cannot see Tenant B's data. This is a critical feature — without it, the platform cannot serve multiple businesses — but it produces zero visible output. There is no UI element that says "your data is isolated." It simply is.
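With schema-based isolation, the mechanism typically amounts to pinning each connection's `search_path` to the tenant's schema before any query runs. A minimal sketch of that one step, under the assumption of a `tenant_<slug>` naming convention (which may not match the platform's actual convention):

```python
import re

def tenant_search_path(tenant_slug: str) -> str:
    """Build a SET search_path statement for one tenant's schema.

    Validating the slug keeps tenant identifiers from injecting SQL,
    since SET statements generally cannot take bound parameters.
    """
    if not re.fullmatch(r"[a-z][a-z0-9_]{0,40}", tenant_slug):
        raise ValueError(f"invalid tenant slug: {tenant_slug!r}")
    return f'SET search_path TO "tenant_{tenant_slug}", public'

print(tenant_search_path("acme_salon"))
# → SET search_path TO "tenant_acme_salon", public
```

This is the "zero visible output" problem in one function: when it works, every query silently lands in the right schema, and there is nothing to screenshot.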

Error handling is the majority of the code. The "happy path" for sending an SMS is about 30 lines. The error handling — circuit breakers, retry logic, fallback channels, delivery status tracking, rate limiting — is roughly 500 lines. This is the invisible bulk of production software, and it is entirely unshowable in a demo.
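A circuit breaker is a good example of why the ratio skews this way: the pattern itself is small, but it exists purely to contain failure. The sketch below shows the idea, with illustrative thresholds and an injectable clock for testing; it is not the platform's implementation.

```python
import time

class CircuitBreaker:
    """Stop calling a failing provider; probe again after a cooldown."""

    def __init__(self, max_failures=3, cooldown=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self):
        if self.opened_at is None:
            return True
        # half-open: let one probe through after the cooldown elapses
        return self.clock() - self.opened_at >= self.cooldown

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = self.clock()

# With a fake clock: three failures open the circuit, time heals it.
t = [0.0]
cb = CircuitBreaker(max_failures=3, cooldown=30.0, clock=lambda: t[0])
for _ in range(3):
    cb.record_failure()
print(cb.allow())   # → False (open: stop hammering the SMS provider)
t[0] = 31.0
print(cb.allow())   # → True (half-open: one probe allowed)
```

Multiply this by retries, rate limits, and delivery-status tracking, and the 30-line happy path becomes a 500-line subsystem.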

The research pivot was necessary but costly. The Week 3 decision to reframe the project from "AI outreach tool" to "A/B research platform" was intellectually honest — we should not assume AI communication is better without evidence. But it required restructuring the entire backend: new database tables, new service modules, new API contracts. The wireframe survived; everything underneath it was replaced.

Testing is invisible labor. The platform has 13 test suites covering unit, integration, and end-to-end scenarios. Writing these tests took significant effort and produced no visible artifact. But they are what make the system reliable enough to demo without fear of embarrassing failures.

VI. How I Plan to Show My Work

For Demo Day, I will use the three-layer strategy described above, structured as a 10-minute presentation:

Minutes 0–2 — Context. Why do appointment-based businesses need A/B testing? A brief story about a nail salon manager who doesn't know whether SMS or email works better for rebooking lapsed customers.

Minutes 2–5 — Layer 1. Live in the real system: create a campaign, build a workflow on the DAG canvas, configure the audience. Show the platform as a practitioner experiences it.

Minutes 5–7 — Layer 2. Switch to an architecture diagram. Explain what happened after "Execute": the DAG executor, the message pipeline, the fallback logic. Make the invisible infrastructure briefly visible.

Minutes 7–9 — Layer 3. Show the results of a pre-run experiment. Real data, real delivery rates, real conversion comparison. Let the data story close the loop.

Minutes 9–10 — Reflection and Q&A.

The strategy is not to make the backend "visible" in the literal sense — you cannot see a circuit breaker. The strategy is to create a structured experience that communicates the depth of the system through multiple complementary perspectives: the user's experience, the system's behavior, and the data's evidence.

The hardest part of building a SaaS platform is not making it work. It is making the work visible.

