AI Writing, Analyst, Prompt Engineer
I’m a writer who adapted for the times. Alongside film/TV work, I trained into AI writing and evaluation—earning an Applied Business Science certificate with a concentration in AI prompting from UNC Charlotte—then put it to work at TELUS International, where I rated model outputs at scale and learned how prompt systems, rubrics, and safety policies actually perform under pressure.
I brought that rigor to client work at Inbound Producers Marketing, serving as the content writer who helped build AI-accelerated systems that turn financial advisors into client magnets: mapping topics, drafting and refining long-form SEO content, standing up lead magnets and nurtures, and closing the loop with compliant publishing and reporting.
Human voice first, AI at speed—prompt systems and refusal rules, rubric + golden-set QA, and a repeatable build-publish-measure loop that’s compliant and revenue-focused.
Meta (Project-Based) | Data Labeling Analyst (Contract)
Audited labeled data and model outputs to measure quality, consistency, and guideline adherence; summarized trends and risks for stakeholders.
Tracked operational and quality metrics in Excel/Sheets; built QA checklists and sampling plans to reduce rework and improve accuracy.
Documented tooling issues, edge cases, and policy conflicts with clear reproduction steps; escalated bugs with supporting evidence.
Supported guideline and workflow updates by validating changes in-queue, updating knowledge repositories, and answering labeling questions.
Partnered cross-functionally (PM/product/engineering) to support launches and confirm post-change quality performance.
Analytical and Reporting Skills
Trend analysis (finding patterns in errors across batches)
QA sampling strategies (spot checks, targeted checks, rework loops)
Simple metrics tracking (accuracy, agreement rate, defect rate, escalation rate)
Clear write-ups (what happened, why it matters, recommended fix)
Tooling and Process Skills
Workflow documentation (SOPs, checklists, handoff notes)
Bug/issue reporting (repro steps, examples, severity, suggested resolution)
Knowledge base upkeep (updating guidance as rules change)
Fast ramping (learning new tools + policies quickly and staying consistent)
Trust, Safety, and Compliance (NDA-safe)
PII awareness (spotting sensitive info and handling it correctly)
Policy-aligned judgments (flagging unsafe content and risky outputs)
Confidential work habits (working under NDA with clean documentation)
Soft Skills
High-precision writing (clear, specific, not vague)
Decision justification (explaining “why” in a way others can audit)
Bias-aware evaluation (checking for stereotypes or unfair assumptions)
Cross-functional communication (writing notes that PM/engineering can use)
AI Evaluation and Quality Skills
LLM response grading (helpfulness, accuracy, safety, tone, policy fit)
Rubric-based QA (scoring frameworks, pass/fail rules, consistency checks)
Error typing (hallucination, omission, bad reasoning, policy risk, ambiguity)
Comparative ranking (choosing best output between options with clear reasoning)
Calibration (aligning judgments to a standard; reducing reviewer drift)
Data Labeling and Taxonomy Skills
Annotation workflows (multi-step labeling, edge-case handling)
Taxonomy building (label definitions, decision trees, examples/counterexamples)
Guideline interpretation (applying rules consistently, even in gray areas)
Ambiguity resolution (how you decide when labels conflict or overlap)
Prompt and Instruction Crafting
Industry-specific prompt writing (creative + entertainment contexts)
Prompt stress testing (finding failure points and improving instructions)
Rewrite and constraint handling (length limits, style rules, structured outputs)
Evaluation prompt design (creating prompts that reveal model weaknesses)
Remote
November 2025 - March 2026
AI Writing & Evaluation at Telus International
Core Skills
Prompt architecture: system prompts, role/goal framing, constraint design, few-shot exemplars
Policy → prompts: convert guidelines into instruction trees, refusal criteria, and escalation paths
Rubric design & evaluation: factuality, severity/safety, bias/fairness, style/brand fit
Golden sets & error taxonomies: coverage mapping, edge-case mining, regression testing
Content QA ops: sampling plans, inter-rater reliability (IRR), feedback pipelines to product/eng
Change management: versioning, release notes, evaluator onboarding, training scripts
Experience Highlights
TELUS International — LLM Rater / Writer Analyst
Rated model outputs at scale using rubric-based reviews for factuality, helpfulness, and safety. Authored concise evaluator notes with reproducible steps, flagged edge cases, and contributed to IRR improvements through clearer decision rationales and example libraries.Internal Guidelines Chatbot (cross-functional prototype)
Partnered with SMEs to translate policy into instruction hierarchies and refusal rules; built few-shot exemplars, golden sets, and an error taxonomy. Outcome: tighter policy adherence, fewer unsafe edge cases, faster dev feedback, and clearer training ramps for new evaluators.
What I Did (End-to-End)
Scope & risks: define personas, tasks, guardrails, and failure modes
Draft prompts: system + developer + user layers; tone and domain constraints
Seed exemplars: positive/negative pairs, boundary tests, and counter-prompts
Validate: run golden sets; measure pass/fail against rubrics; iterate prompts/policy
Operationalize: packaging for teams (prompt sheets, do/don’t examples, refusal/redirect scripts)
QA & reporting: sampling cadence, IRR checks, dashboards, and change logs
Remote
May 2024 - June 2025
Methods & QA Philosophy
Evidence over vibes: every guideline gets an example; every example maps to a rubric cell
Safety is a system: refusals must be consistent, clarifying, and steer to safer alternatives
Human voice preserved: AI accelerates research and structure; final tone remains distinctly human
Measure what matters: accuracy, harmfulness avoidance, and user-perceived helpfulness
Tooling (typical)
LLMs (GPT family + vendor variants), evaluator consoles, Sheets/Docs for golden sets, lightweight dashboards for pass/fail & IRR, issue trackers for feedback loops, style guides/brand bibles.
Training & Credentials
BFA, Creative Writing — Full Sail University
Applied Business Science (UNC Charlotte) — concentration in AI prompting
The Art & Science of Prompt Engineering (Udemy)
PIXL Certification (LXD 101, Self-Paced Course Creation, ILT Development)
Representative Deliverables (NDA-safe descriptions)
Prompt packs (system/developer/user) with voice, scope, and safety notes
Rubrics with level descriptors (Pass/Borderline/Fail) and quick-reference checklists
Golden-set workbook: cases, expected outputs, rationales, and acceptance thresholds
Error taxonomy & playbook: common failure patterns and fix paths
Evaluator guide: onboarding script, calibration tasks, and sample annotations
Release notes: what changed, why it changed, how to test it