
Uncovering AI Chatbots: When “Private” Conversations Aren’t Really Private

Why AI’s Shift Toward Profit, and Away From Privacy, Should Matter to Us

Nov 16, 2025 · 4 min read

Hello everyone! Welcome to another Articles by Victoria, the place where I randomly write things I’m curious about.

I’ve been working in tech as a solutions engineer long enough to see the cycle: hype around new AI tools → huge excitement → then come the difficult questions about ethics, governance, data...

As I support my clients and build a community as Director of WomenDevsSG, I’ve observed that one concern comes up again and again:

Will this tool help me? And what happens to my privacy and data?

The recent string of stories around OpenAI going for-profit and the leak of private ChatGPT conversations into places like Google Search Console hit a nerve with me, and most likely with other users too. It’s one thing to talk about AI ethics in theory; it’s another to realise how big names in the industry actually act.

And I find myself thinking:

If this is happening in a huge organisation like OpenAI, what does it mean for the rest of us? For the smaller players?

My Thoughts on OpenAI’s Shift to For-Profit

When I talk to executive stakeholders in the companies I lead or advise, the first reaction to “OpenAI is becoming more commercial” is excitement. After all, more funding means faster models, more products, more features we can embed in our stack.

Plus, AI research and infrastructure are expensive, and the nonprofit-controlled “capped-profit” model was never going to keep up with OpenAI’s global ambitions. Turning for-profit makes sense on paper: it gives OpenAI more flexibility, access to capital, and room to grow.

I hate to admit it, but I too got swept up in the anticipation of more possible integrations with my company’s products.

But once we started thinking about what this means for OpenAI users, the perception changed. We’ve already seen hints of that in how AI services are evolving. Features that used to be free are now locked behind subscriptions. APIs that were once open are restricted by licensing. And how data is used becomes less transparent, because “trade secrets” and “competitive advantage” take priority.

Hence, the question my clients ask is no longer just “how good is this model?” but also “how much of myself am I giving away when I use it?”

Recently I read that some private ChatGPT conversations were surfacing in Google Search Console. Personal chat histories, including financial details and other personal information, were exposed online. And this is not the first time something like this has happened.

Once again, this is a reminder that these tools are built on layers of APIs, cloud routing, and logging systems. Each layer is a potential point of exposure. So even when a company has good intentions, technical complexity can work against privacy.
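To make that concrete, here’s a minimal, hypothetical sketch of what “one layer working against privacy” can look like. None of this reflects OpenAI’s or any real provider’s code; the gateway and all its names are invented purely to illustrate the point:

```python
# Hypothetical chat gateway: illustrative only, not any real provider's code.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chat-gateway")

def call_model(prompt: str) -> str:
    # Stand-in for the downstream model API; the details don't matter here.
    return "(model reply)"

def handle_chat(user_id: str, prompt: str) -> str:
    # The oversight: one debug-style line writes the full prompt, which may
    # contain financial or personal details, verbatim into application logs
    # that dashboards, search indexes, and third-party tools can pick up.
    log.info("request from %s: %s", user_id, prompt)  # <- the leak point
    # Safer habit: log metadata only, never content, e.g.
    # log.info("request from %s: %d chars", user_id, len(prompt))
    return call_model(prompt)

print(handle_chat("user-42", "My card number is 4111 1111 1111 1111"))
```

One innocuous-looking line, and the data has already left the “private” conversation. Multiply that by every API hop, cache, and analytics layer, and you can see why complexity works against privacy.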

How far can we trust AI chatbots and tools?

As someone who works with clients on AI adoption, I’ve always emphasized to them that privacy is not a guarantee. Even when companies have the best intentions, systems are complex. Data moves between servers, APIs, and integrations. All it takes is one setting or one oversight for private information to end up exposed.
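If you’re building on top of these tools, one small counterbalance is to redact obvious identifiers before anything leaves your systems. Here’s a deliberately naive sketch; real PII detection needs far more than a few regexes, so treat this as an illustration of the habit, not a production solution:

```python
# Naive client-side redaction pass, run before a prompt is sent anywhere.
# Illustrative only: real PII detection needs far more than regexes.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # run before PHONE
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def redact(text: str) -> str:
    # Replace each match with its label so context survives but data doesn't.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email me at jane@example.com about card 4111 1111 1111 1111"
print(redact(prompt))
# -> Email me at [EMAIL] about card [CARD]
```

It won’t catch everything, and that’s the point: if even this much care is missing from the layers between you and the model, your data is only as private as the sloppiest one.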

So, when I think about OpenAI’s shift toward profit and the recent leak, I can’t help but connect the two. The more commercial these systems become, the more incentives tilt toward growth and away from restraint. Privacy becomes a feature, not a foundation.

Can we trust companies that build and profit from our conversations to truly protect them? Maybe. But not without questions, and not without our vigilance.

Trust isn’t built through branding. It’s built through accountability, transparency, and how a company behaves when nobody’s watching. The recent leak made that painfully clear. It wasn’t just a bug; it was a symptom of how data stewardship often takes a back seat when growth and monetization lead the conversation.

The Verdict: Proceed, but Don’t Blindly Trust

Let me be clear: I’m not anti-AI. I believe in its potential, and I’m a user myself. I’ve seen firsthand how it helps women in my community prototype faster, express ideas better, and learn new skills.

However, I’m also pragmatic. The more profit drives AI’s evolution, the more we need to build counterbalances: governance frameworks, transparent audits, informed users, and strong communities that push for accountability.

As developers, engineers, and leaders, we can’t just focus on what AI can do. We need to question how it does it, who it serves, and what happens when our private thoughts become someone else’s data point.

Thanks for reading till the end! This article is part of a new series called “AI, but make it make sense”. The aim of this series is to demystify anything AI, for non-techies and techies alike! So far in the series, we’ve covered topics such as:

https://lo-victoria.com/understanding-ai-agents-an-overview

https://lo-victoria.com/the-truth-about-vibe-coding-feat-github-copilot-agent-mode

If these sound interesting to you, do check out the series here for more! I’ll also be slowly putting out articles with my personal thoughts on certain AI-related topics, just like this one, as part of the series!

Thanks for reading! Cheers!
