AND AI-NATIVE WORKFLOW DESIGN
I did not set out to build a career at the intersection of marketing, operations, and AI systems. What started as growth work kept pulling me toward the harder layer underneath: the workflow, the operating model, the approval path, the structure that decides whether something scales or breaks under pressure.
That pattern followed me everywhere. In enterprise banking, I rebuilt compliant content and advisor enablement systems that had to earn trust from Legal and Compliance before they could move faster. In startup and independent work, I learned how to build distribution engines, content systems, and growth loops without the safety net of large teams or paid spend.
More recently, that same instinct has pulled me deeper into AI-native systems: agent orchestration, memory, observability, workflow design, and the boundary between probabilistic generation and deterministic control. I am less interested in clever prompts than in durable systems teams can actually operate.
What I want next is simple: harder problems, more ownership, and the chance to build things that last. The through-line in my work has never been volume for its own sake. It has been taking ambiguity, risk, and complexity, then turning them into structure people can trust.
Built and ran a compliant social selling system at enterprise scale.
Rebuilt approval workflows until Legal and Compliance trusted the system.
Created repeatable growth systems for creators and digital-first brands.
Focused on orchestration, memory, observability, and workflow architecture.
The goal is not chaos disguised as velocity. I build the scaffolding that lets teams move faster without losing trust, clarity, or control.
Whether the constraint is compliance, scale, or cross-functional friction, trust has to be designed into the workflow. It does not appear after the fact.
I care about the architecture around the model: memory, guardrails, instrumentation, interfaces, and the decisions that make AI usable in the real world.
I treat AI as infrastructure. That means defining roles, constraints, memory, policy boundaries, and runtime visibility around the model so the system is usable after the demo.
The skills I've developed over the years across growth, systems, operations, AI, and execution.
How the work tends to be described.
Not vanity quotes. Just the consistent themes that have come up in reviews, leadership feedback, and cross-functional working relationships.
My work was recognized for addressing structural issues like content quality, turnaround times, and platform operations instead of optimizing for short-term optics.
I was specifically praised for seeing the bigger picture and using process to protect clients, the brand, and risk posture rather than treating process like overhead.
The strongest recurring feedback was around trust: clear ownership, strong partnership across Marketing, Legal, and Compliance, and the ability to make complicated systems easier to work with.
The programs became easier for people to believe in over time, to the point that internal stakeholders began citing them in recruiting conversations as proof of what the organization could support.
Looking for someone who thinks in systems, drives growth, and builds with AI. Always open to conversations about operations, automation, and the future of work.