THE JOURNEY FROM PIXELS TO POLICY

My path to AI ethics wasn't linear. It started with a camera, wound through Fortune 500 companies and startups, and landed in the space where technology meets human values.

For nearly two decades, I built my career in marketing, helping brands tell their stories and connect with audiences. I learned how to translate complex ideas into compelling narratives, how to understand what motivates people, and how powerful the right message can be.

But policy work was always calling to me. Even while climbing the corporate ladder, I found myself drawn to civic engagement. I served on Nassau County's Millennial Think-Tank to address "brain drain" on Long Island, later joined the New Jersey Psychological Association's Committee on Legislative Action to advocate for student concerns in policy development, and now sit on my local school board working on AI guidelines and educational policy. These weren't side projects. They were where I felt most energized.

When I decided to pursue psychology, diving deep into developmental research, it wasn't just about a career change. It was about understanding human behavior from the inside out, building on what marketing had taught me about motivation and decision-making. That psychological foundation directly informs everything I do today. My understanding of developmental processes and cognitive development shapes my CTRL+Think research on how AI tools affect student learning. My knowledge of human decision-making and behavioral patterns helps me analyze institutional accountability issues. Psychology isn't my past; it's the lens through which I approach every policy challenge.

WHEN AI BECAME PERSONAL

Everything changed when I joined an AI healthcare startup. Suddenly, I was working inside AI development itself, not just marketing the end products. I supervised a team of 12 data annotators as we trained conversational AI for empathic responses, and, working closely with the head of clinical AI, I had a direct role in the daily decisions that shaped how these systems would analyze human interactions in healthcare settings, helping organizations enhance patient safety and clinical accuracy.

Working within FDA regulatory requirements, I learned how policy frameworks directly shape technology development. I watched as our small decisions about data, algorithms, and guidelines determined how AI would interpret human interactions in clinical settings. That's when I realized: the future of AI isn't just about what's technically possible. It's about the ethics, values, and regulatory frameworks we embed today.

But my research into the broader AI landscape revealed concerning gaps. Despite good intentions across the industry, biases were being built into systems. Privacy concerns were emerging in various applications. The space between "what we could build" and "what we should build" became starkly apparent through my exploration of AI development practices. I found myself asking: who's making sure AI serves everyone fairly? Who's checking the math when governments claim AI-driven savings? Who's thinking about how these tools affect the way children learn to think?

WHY THIS WORK MATTERS TO ME

As a school board member, I see these questions playing out in real time. I watch kids navigate a world where AI can do their homework, answer their questions, and solve their problems. But I also worry about what happens when they forget how to think through challenges themselves. That concern sparked my CTRL+Think project, exploring how we can preserve critical thinking skills in an AI-saturated world.

Being a parent amplifies everything. When I advocate for responsible AI policies or analyze government claims for transparency, I'm thinking about the world I'm helping create for my children and their generation.

FINDING THE TRUTH IN DATA

My work has taught me that good policy starts with good data. When I analyzed the Department of Government Efficiency's claimed savings and found a ~$53 billion discrepancy, it wasn't just about the numbers. It was about accountability, transparency, and ensuring that AI-assisted government analysis doesn't go unchecked.

Through my Substack, I try to make these complex issues accessible. Democracy works best when people understand what's happening, whether that's how AI systems make decisions, how government efficiency claims are calculated, or why certain policies matter for kids in classrooms.

BEYOND THE DATA

When I'm not diving into policy documents or developing behavioral interventions, I'm behind a camera. Photography has been my creative constant for over a decade. It keeps me grounded and reminds me that behind every data point and policy proposal are real people with real stories.

The same eye that helps me frame architectural spaces and capture the details that make interiors compelling helps me spot inconsistencies in data. The patience required for the perfect shot translates to the persistence needed for thorough research. And the precision that comes from architectural photography - getting the lines right, finding the best angles, seeing spaces clearly - informs how I approach AI ethics. I'm always asking who's being served, who might be left out, and how we can do better.

LOOKING FORWARD

AI is reshaping everything: how we work, learn, make decisions, and relate to each other. But it's not inevitable that this transformation will be positive. It depends on the choices we make now, the policies we put in place, and whether we prioritize enhancing human capabilities alongside technological advancement.

I'm committed to being part of that conversation: bridging the gap between technical possibility and ethical implementation, between policy theory and practical reality, between innovation and protection for those who need it most.

The future of AI depends on the ethics we embed today. I'm here to help make sure we get it right.