AI Writer, Analyst, Prompt Engineer

I’m a writer who adapted for the times. Alongside film/TV work, I trained into AI writing and evaluation—earning an Applied Business Science certificate with a concentration in AI prompting from UNC Charlotte—then put it to work at TELUS International, where I rated model outputs at scale and learned how prompt systems, rubrics, and safety policies actually perform under pressure.

I brought that rigor to client work at Inbound Producers Marketing, serving as the content writer who helped build AI-accelerated systems that turn financial advisors into client magnets: mapping topics, drafting and refining long-form SEO content, standing up lead magnets and nurture sequences, and closing the loop with compliant publishing and reporting.

Human voice first, AI at speed: I build prompt systems and refusal rules, rubric and golden-set QA, and a repeatable build-publish-measure loop that stays compliant and revenue-focused.

AI Writing & Evaluation at TELUS International

Core Skills

  • Prompt architecture: system prompts, role/goal framing, constraint design, few-shot exemplars

  • Policy → prompts: convert guidelines into instruction trees, refusal criteria, and escalation paths (a minimal sketch follows this list)

  • Rubric design & evaluation: factuality, severity/safety, bias/fairness, style/brand fit

  • Golden sets & error taxonomies: coverage mapping, edge-case mining, regression testing

  • Content QA ops: sampling plans, inter-rater reliability (IRR), feedback pipelines to product/eng

  • Change management: versioning, release notes, evaluator onboarding, training scripts

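To make the policy-to-prompts work concrete, here is a minimal sketch of how a written guideline can become an instruction tree with refusal criteria and an escalation path. Everything in it (the tree, the category names, the route_request helper) is hypothetical and simplified for illustration; real trees are larger and ship alongside the prompt pack.

    # Hypothetical instruction tree: each node carries an instruction, refusal criteria,
    # and an escalation path. Names and categories are made up for illustration.
    INSTRUCTION_TREE = {
        "finance_content": {
            "instruction": "Write in plain language; cite sources; no performance promises.",
            "refuse_if": ["guaranteed returns", "insider tip"],
            "escalate_to": "compliance_review",
            "children": {
                "retirement": {
                    "instruction": "Explain trade-offs; never give individualized advice.",
                    "refuse_if": ["tell me exactly what to buy"],
                    "escalate_to": "licensed_advisor",
                    "children": {},
                },
            },
        },
    }

    def route_request(tree: dict, topic_path: list[str], user_text: str) -> dict:
        """Walk the tree along topic_path, collecting instructions and checking refusal criteria."""
        instructions = []
        decision = {"action": "answer", "instructions": instructions, "escalate_to": None}
        node_map = tree
        for topic in topic_path:
            node = node_map.get(topic)
            if node is None:
                break
            instructions.append(node["instruction"])
            if any(trigger in user_text.lower() for trigger in node["refuse_if"]):
                decision["action"] = "refuse_and_redirect"
                decision["escalate_to"] = node["escalate_to"]
                break
            node_map = node["children"]
        return decision

    print(route_request(INSTRUCTION_TREE, ["finance_content", "retirement"],
                        "Can you promise guaranteed returns if I follow this plan?"))
    # Refuses at the top-level node and routes the case to compliance_review.

The useful property is traceability: every refusal or escalation points back to a named node, so reviewers debate the tree, not one-off outputs.
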
Experience Highlights

  • TELUS International — LLM Rater / Writer Analyst
    Rated model outputs at scale using rubric-based reviews for factuality, helpfulness, and safety. Authored concise evaluator notes with reproducible steps, flagged edge cases, and contributed to IRR improvements through clearer decision rationales and example libraries (the agreement statistic behind those IRR checks is sketched below).

  • Internal Guidelines Chatbot (cross-functional prototype)
    Partnered with SMEs to translate policy into instruction hierarchies and refusal rules; built few-shot exemplars, golden sets, and an error taxonomy. Outcome: tighter policy adherence, fewer unsafe edge cases, faster dev feedback, and clearer training ramps for new evaluators.

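A note on the IRR work above: inter-rater reliability is usually tracked with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below uses made-up ratings; it only shows the calculation a calibration check might run.

    from collections import Counter

    def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
        """Cohen's kappa: observed agreement between two raters, corrected for chance."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                       for label in set(rater_a) | set(rater_b))
        return (observed - expected) / (1 - expected)

    # Made-up Pass/Borderline/Fail verdicts from two evaluators on the same ten items.
    rater_1 = ["Pass", "Pass", "Fail", "Borderline", "Pass", "Fail", "Pass", "Pass", "Borderline", "Fail"]
    rater_2 = ["Pass", "Fail", "Fail", "Borderline", "Pass", "Fail", "Pass", "Borderline", "Borderline", "Fail"]
    print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.7 for these made-up ratings; 1.0 is perfect agreement

Watching this number move after a guideline rewrite or a new example library is how "clearer decision rationales" becomes measurable.
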
What I Do (End-to-End)

  1. Scope & risks: define personas, tasks, guardrails, and failure modes

  2. Draft prompts: system + developer + user layers; tone and domain constraints

  3. Seed exemplars: positive/negative pairs, boundary tests, and counter-prompts

  4. Validate: run golden sets; measure pass/fail against rubrics; iterate prompts/policy (see the sketch after this list)

  5. Operationalize: package for teams (prompt sheets, do/don’t examples, refusal/redirect scripts)

  6. QA & reporting: sampling cadence, IRR checks, dashboards, and change logs

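The sketch below shows how steps 2 and 4 fit together in miniature: assemble the system/developer/user layers, run each golden-set case, and score the output against simple rubric checks. The call_model placeholder, the canned reply, and the keyword checks are all stand-ins for illustration; in practice the call goes through whatever vendor SDK or evaluator console the team uses, and the checks map to full rubric cells.

    # Minimal sketch of steps 2 and 4: a layered prompt goes in, golden-set verdicts come out.
    SYSTEM = "You are a financial-education writer. Plain language; no individualized advice."
    DEVELOPER = "Stay under 150 words, cite a source, and refuse requests for specific stock picks."

    GOLDEN_SET = [  # each case pairs a user prompt with simple rubric checks
        {"user": "Explain what a Roth IRA is.",
         "must_include": ["after-tax"], "must_avoid": ["guaranteed returns"]},
        {"user": "Which stock should I buy today?",
         "must_include": ["individualized advice"], "must_avoid": ["guaranteed returns"]},
    ]

    def call_model(system: str, developer: str, user: str) -> str:
        """Placeholder for the real LLM call; returns a canned reply so the loop runs as a demo."""
        return "A Roth IRA is funded with after-tax dollars; qualified withdrawals are tax-free (source: IRS)."

    def run_golden_set() -> list[dict]:
        results = []
        for case in GOLDEN_SET:
            output = call_model(SYSTEM, DEVELOPER, case["user"]).lower()
            passed = (all(phrase in output for phrase in case["must_include"])
                      and not any(phrase in output for phrase in case["must_avoid"]))
            results.append({"user": case["user"], "verdict": "Pass" if passed else "Fail"})
        return results

    for row in run_golden_set():
        print(row)  # the canned reply passes case 1 and fails case 2, exactly what a regression run should surface
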
Methods & QA Philosophy

  • Evidence over vibes: every guideline gets an example; every example maps to a rubric cell (a tiny mapping is sketched after this list)

  • Safety is a system: refusals must be consistent, explain why, and steer users toward safer alternatives

  • Human voice preserved: AI accelerates research and structure; final tone remains distinctly human

  • Measure what matters: accuracy, harm avoidance, and user-perceived helpfulness

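A tiny illustration of that first point, assuming the Pass/Borderline/Fail levels described under Representative Deliverables; the RubricExample shape and the sample entry are invented for this sketch:

    from dataclasses import dataclass

    @dataclass
    class RubricExample:
        guideline: str      # the written rule
        sample_output: str  # an output that illustrates the rule (or its violation)
        dimension: str      # rubric row, e.g. factuality, safety, brand fit
        level: str          # rubric column: Pass, Borderline, or Fail
        rationale: str      # why this example sits in that cell

    EXAMPLE_LIBRARY = [
        RubricExample(
            guideline="No performance promises.",
            sample_output="This strategy guarantees 12% a year.",
            dimension="safety/compliance",
            level="Fail",
            rationale="Promises a specific return, which the guideline forbids.",
        ),
    ]
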
Tooling (typical)

LLMs (GPT family + vendor variants), evaluator consoles, Sheets/Docs for golden sets, lightweight dashboards for pass/fail & IRR, issue trackers for feedback loops, style guides/brand bibles.

Training & Credentials

  • BFA, Creative Writing — Full Sail University

  • Applied Business Science (UNC Charlotte) — concentration in AI prompting

  • The Art & Science of Prompt Engineering (Udemy)

  • PIXL Certification (LXD 101, Self-Paced Course Creation, ILT Development)

Representative Deliverables (NDA-safe descriptions)

  • Prompt packs (system/developer/user) with voice, scope, and safety notes

  • Rubrics with level descriptors (Pass/Borderline/Fail) and quick-reference checklists

  • Golden-set workbook: cases, expected outputs, rationales, and acceptance thresholds

  • Error taxonomy & playbook: common failure patterns and fix paths

  • Evaluator guide: onboarding script, calibration tasks, and sample annotations

  • Release notes: what changed, why it changed, how to test it