Like many people lately, I’ve been experimenting more with AI tools. Some of them are genuinely useful: they help speed up writing, summarise long articles, or offer creative inspiration. I don’t believe AI is the problem.
But the way it’s being built and deployed by some of the biggest companies? That’s where I think we need to be a lot more careful.
I’m not here to reject AI. I’m here to say: let’s do this with awareness, ethics, and control.
The Quiet Shift: From Assistant to Surveillance Layer
What worries me most isn’t AI itself, but how silently it’s creeping into the apps and platforms we already use, often without meaningful consent.
Meta has integrated “Meta AI” into WhatsApp, Messenger, and Instagram. It promises smarter conversations, instant image generation, and quick answers. But under the hood, it’s built on user data: your messages, photos, and interactions are used for training, unless you happen to live in a country where the law requires an opt-out option, and even then the opt-out window comes with deadlines most people miss.
Google’s Gemini AI falls into the same category. As technically impressive as it is, it’s still part of a company whose core business model is based on advertising and profiling. That raises obvious concerns when the same infrastructure powers an AI assistant that now sits in Gmail, Docs, Android, and more. It was recently reported (TechRadar) that Gemini can access and analyse WhatsApp messages on Android devices by default, without clear user awareness or opt-in, raising fresh alarms about the boundaries of data privacy and informed consent.
And then there’s xAI (Elon Musk’s AI venture), powering Grok inside X (formerly Twitter) and Telegram. Grok is trained directly on public posts from X (and potentially Telegram), and it recently came under fire for echoing offensive prompts, at one point calling itself “MechaHitler” and praising Adolf Hitler. Musk claimed it was manipulated by users and fixed after the fact, but the damage was done. It wasn’t just a bug. It was a clear sign of how quickly AI can go wrong when released without strong boundaries or ethical frameworks.
These tools don’t just help. They watch, learn, and grow from the content we feed them—often without us fully realizing what we’re giving away.
Which AI Tools Should We Actually Use?
These are the AI tools I think are worth using right now, not just because they’re powerful, but because they (for now) align more closely with user control, transparency, and ethical use of data.
- ChatGPT (OpenAI): Once a nonprofit, now operating under a capped-profit hybrid model. It’s still the most advanced general-purpose AI for reasoning, language, and versatility, but it’s U.S.-based and funded by major investors, which could shift its values over time.
- Claude (Anthropic): Markets itself as focused on safety and alignment. It’s a solid option with more cautious design decisions. Still, it’s backed by Amazon and Google, and thus remains part of the same broader ecosystem.
- Le Chat (Mistral): EU-based, open-source, and refreshingly independent. It runs on European infrastructure and doesn’t rely on the usual Silicon Valley pipeline. Lean, fast, and transparent. It’s my preferred alternative for lightweight or general tasks.
- Lumo (Proton): A recent discovery, built by the privacy-first team behind Proton Mail. Lumo is aligned with European data protection values and runs on infrastructure designed with security and independence in mind. So far, I’m impressed and will be adding it to my daily toolkit.
Choosing the Right AI for the Job
At this point, the question isn’t just what AI can do—it’s who is building it and why.
Tools reflect the priorities of their creators. If those priorities are ad revenue, surveillance, or growth at all costs, then even the best tech will eventually become extractive. But if they’re built with transparency, community, or privacy in mind, we get a different result.
That’s why I care less about which AI is smartest and more about which one I can trust. Who controls it? Where does the data go? Does it give me any say?
For now:
- I still rely on ChatGPT for heavy lifting and complex queries.
- I use Mistral’s Le Chat for day-to-day tasks that don’t need deep context or cloud lock-in.
- I’ve recently started exploring Lumo from Proton as a privacy-first, European alternative, and I’m hopeful about where it’s headed.
Ultimately, I still dream of running a fully local AI on my own server, one I control entirely. That’s not quite within reach today, but it’s where I want to go: toward autonomy, ownership, and trust.
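To make that aspiration a little more concrete, here’s a minimal sketch of what it could look like: a few lines of Python talking to a self-hosted model through Ollama’s local HTTP API. The model name (“llama3”), the default port, and the helper function are my own illustrative choices for this sketch, not a recommendation of any particular setup.

```python
# A minimal sketch of "fully local AI" in practice, assuming Ollama
# (https://ollama.com) is running on your own machine with its default
# API port (11434) and a model such as "llama3" already pulled.
# The model name, port, and prompt are illustrative, not prescriptive.
import requests


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted Ollama instance and return its reply."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    # With stream=False, Ollama returns a single JSON object whose
    # "response" field holds the generated text.
    return response.json()["response"]


if __name__ == "__main__":
    # Everything stays on hardware you control; no third-party cloud involved.
    print(ask_local_model("Summarise the case for self-hosting AI in two sentences."))
```

Nothing in that snippet leaves your own hardware, which is exactly the point: the convenience of an assistant without handing the conversation to someone else’s data pipeline.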
Final Thoughts
AI has huge potential, and I’m genuinely excited by what’s possible. But we need to stay conscious of the systems behind the tools. Convenience is great, but not if it comes at the cost of our autonomy.
We can’t afford to sleepwalk into an AI future shaped entirely by companies that treat us as data points. Whether it’s Meta, X, or even OpenAI, the question isn’t just “what does it do?” It’s “who is it doing it for?”
We’ve already seen the long-term consequences of centralized, closed social media: the loss of privacy, the erosion of trust, and the control of public discourse by a handful of corporations. With AI, we have the chance to do better. But only if we pay attention and make active, informed choices.
If we want a digital world that works in our favour, we need to start choosing differently. This time, let’s not give away the future so easily.