MAKING COMPLEXITY CLEAR
What I’ve learned is simple: good policy requires good communication. Good data isn’t enough; people need to understand what it means. Technical insights aren’t enough; policymakers need them translated into action. Ethical principles aren’t enough; the public needs to see why they matter. In every case, communication is what turns insight into impact.
That’s why I write. Through The Caffeinated Chronicle Substack, I translate complex issues, from AI bias to government efficiency, into accessible narratives so citizens, not just experts, can engage with the systems shaping their lives.
BEYOND THE DATA
My first language has always been storytelling. For nearly two decades I worked in marketing, helping Fortune 500 companies and startups connect with audiences through award-winning campaigns. Photography, my creative constant, trained me to notice the details. That same eye guides how I analyze data, policies, and claims, and how I shape narratives that resonate with both markets and institutions.
The thread across it all is clear: making complexity understandable, persuasive, and human.
LOOKING FORWARD
AI is reshaping how we work, learn, and govern. Whether this strengthens society or undermines it depends on choices we make now. Across photography, marketing, psychology, and AI governance, the constant is communication. Today I bridge technical possibility and public understanding, policy theory and everyday impact, innovation and the values that should guide it.
Strategic communications is where these threads come together. My work spans public explainers that inform citizens, solution briefs and campaigns that drive adoption, and policy analysis that shapes oversight.
THE LENS I BRING
I started in marketing communications, turning complex ideas into clear narratives. Graduate training in psychology deepened my understanding of human behavior and decision-making. Together, those foundations shape how I approach policy and governance today. That includes analyzing institutional accountability and building practical tools like CTRL+Think to preserve critical thinking in AI-integrated classrooms.
WHEN AI BECAME PERSONAL
Everything shifted when I joined a healthcare AI startup. I supervised a 12-person annotation team training conversational models, partnered with clinical leadership, and navigated FDA-related requirements. That work revealed how policy frameworks shape technical choices, and how small data decisions cascade into big impacts on patient care and safety.
It also solidified a conviction: the future of AI isn’t defined only by what’s technically possible. It depends on the ethics, values, and oversight we embed today.
WHY THIS WORK MATTERS TO ME
Working in healthcare AI made clear how much communication and policy matter when technology directly affects people’s lives. Data choices, regulatory frameworks, and ethical principles are never abstract; they shape outcomes for patient care and safety in tangible ways.
As a school board member, I see AI’s impact on learning up close. Students are already turning to systems that can write homework or solve problems, sometimes at the expense of their own reasoning. That experience sparked CTRL+Think, a toolkit designed to help educators and students preserve critical thinking in AI-integrated classrooms.
As a parent, the stakes feel even higher. When I push for accountability in AI or challenge claims like the Department of Government Efficiency’s $53.1B “savings” discrepancy, I am thinking about the digital future today’s children will inherit. For me, this work is not only professional, it is personal.