
Building a Real Meeting Export: From Raw Transcript to a Usable Report

by Jay · 3 min read · VORA Blog

Our meeting export feature went through three major versions.

Version 1 was fast to ship but not useful.
Version 2 looked polished but still missed the point.
Version 3 is the first version teams can actually send without rewriting.

This is the full path from "export exists" to "export is useful."

Version 1: Plain Transcript Dump

The first release exported raw transcript text with timestamps.

It was accurate as a record, but poor as a working document. Most people do not want every utterance. They want:

  • what was decided
  • what needs to be done
  • who owns each task
  • what matters next

So yes, version 1 technically exported data. But it did not solve the real user problem.

Version 2: Styled HTML Template

Next, we added a nicer report layout with sections and better visual structure.

The design improved, but the content extraction logic was still naive. Important details were mixed with low-value lines because we were using simple heuristics.

Result: better-looking output, same core usability gap.

Version 3: AI-Structured Report

[Screenshot: VORA App meeting workspace]

The current version generates a full report from transcript context using an explicit output structure.

We moved from "extract snippets" to "generate a document with required sections."

Target format:

  1. meeting overview
  2. key discussion points
  3. decisions table
  4. action items table
  5. keyword summary

That structure made reports far more usable for handoff and follow-up.
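
The required sections map naturally onto a typed shape. A minimal sketch of what that structure could look like in TypeScript; the interface and field names here are illustrative, not the actual production schema:

```typescript
// Hypothetical shapes for the structured report (names are illustrative).
interface Decision {
  topic: string;
  decision: string;
}

interface ActionItem {
  task: string;
  owner: string;
  due?: string; // owner/due may be missing when attribution is ambiguous
}

interface MeetingReport {
  overview: string;           // 1. meeting overview
  discussionPoints: string[]; // 2. key discussion points
  decisions: Decision[];      // 3. decisions table
  actionItems: ActionItem[];  // 4. action items table
  keywords: string[];         // 5. keyword summary
}
```

Making each section a required field means a report with a missing section fails validation instead of silently shipping incomplete.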

Why Prompt Specificity Matters

The biggest improvement was not model choice. It was prompt precision.

Vague prompts produced generic output.
Strict prompts produced predictable reports.

We explicitly constrained:

  • section order
  • table requirements
  • response tone
  • formatting boundaries

Once the prompt became concrete, output quality became stable.
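
To make the idea concrete, here is a sketch of what an explicitly constrained prompt could look like. The wording and section labels are illustrative, not our exact production prompt:

```typescript
// Illustrative prompt with hard constraints on section order,
// table columns, tone, and formatting boundaries.
const REPORT_PROMPT = [
  "You are generating a meeting report from the transcript below.",
  "Output markdown with EXACTLY these sections, in this order:",
  "1. Meeting Overview (2-3 sentences)",
  "2. Key Discussion Points (bulleted list)",
  "3. Decisions (markdown table with columns: Topic | Decision)",
  "4. Action Items (markdown table with columns: Task | Owner | Due)",
  "5. Keywords (one comma-separated line)",
  "Use a neutral, professional tone.",
  "No preamble, no closing remarks, no sections beyond the five above.",
].join("\n");

function buildPrompt(transcript: string): string {
  return `${REPORT_PROMPT}\n\nTranscript:\n${transcript}`;
}
```

The key property is that every constraint is checkable: if the output is missing a table or adds a sixth section, the response can be rejected and retried rather than shipped.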

Markdown Table to HTML Rendering

The AI output includes markdown tables, but our export target is HTML.
Instead of adding a full markdown dependency chain, we implemented a focused renderer for the patterns we actually use.

That kept the runtime light while producing clean downloadable reports.
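
A focused renderer of that kind can be small. The sketch below handles only the pipe-table pattern (`| a | b |` rows with a `| --- | --- |` separator), not full CommonMark; function names are illustrative:

```typescript
// Escape the characters that matter when embedding cell text in HTML.
function escapeHtml(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Split one markdown table row into trimmed cell strings.
function splitRow(line: string): string[] {
  return line
    .trim()
    .replace(/^\|/, "")
    .replace(/\|$/, "")
    .split("|")
    .map((cell) => cell.trim());
}

// Render a markdown pipe table to an HTML table.
// Assumes row 0 is the header and row 1 is the |---|---| separator.
function renderMarkdownTable(md: string): string {
  const lines = md.trim().split("\n").filter((l) => l.trim().startsWith("|"));
  if (lines.length < 2) return "";
  const header = splitRow(lines[0]);
  const body = lines.slice(2).map(splitRow);
  const th = header.map((h) => `<th>${escapeHtml(h)}</th>`).join("");
  const rows = body
    .map((cells) => `<tr>${cells.map((c) => `<td>${escapeHtml(c)}</td>`).join("")}</tr>`)
    .join("");
  return `<table><thead><tr>${th}</tr></thead><tbody>${rows}</tbody></table>`;
}
```

Because the AI output is constrained to this one table shape, supporting the general markdown grammar (nested formatting, escaped pipes, alignment syntax) is not needed.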

UX Simplification: Fewer Choices, Better Outcome

The early UI offered multiple export modes, which created friction.
We replaced that with one primary action:

  • click export
  • generate refined report
  • download ready-to-share HTML

This reduced decision overhead and aligned with what users wanted most.
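
The final step of that single action, turning the rendered report into a standalone shareable file, can be sketched as a pure function. The styling and document title here are illustrative, not our actual template:

```typescript
// Wrap a generated report body into a self-contained HTML document
// so the downloaded file renders correctly with no external assets.
function wrapReportHtml(title: string, bodyHtml: string): string {
  return [
    "<!DOCTYPE html>",
    `<html><head><meta charset="utf-8"><title>${title}</title>`,
    "<style>",
    "body{font-family:sans-serif;max-width:720px;margin:2rem auto}",
    "table{border-collapse:collapse}",
    "td,th{border:1px solid #ccc;padding:4px 8px;text-align:left}",
    "</style></head><body>",
    bodyHtml,
    "</body></html>",
  ].join("\n");
}
```

In the browser, the resulting string can then be handed to a download link (for example via a `Blob` and `URL.createObjectURL`), so the user sees one button and one file.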

What Still Needs Work

Task ownership remains the weakest point when speaker attribution is ambiguous.

Without reliable diarization in every context, assigning owners can still require manual review. We are testing low-friction ways to improve this without making export flow slower.

Takeaway

A report feature succeeds when it reduces follow-up effort, not when it simply preserves raw data.

For us, the turning point was treating export as a communication artifact, not a transcript file operation.