I have spent the last two years neck-deep in the world of Artificial Intelligence. I haven’t just been “dabbling” or asking chatbots to write limericks about cheese; I’ve been formally studying the architecture, experimenting with the logic, and trying to find a path through what has become a chaotic digital circus. After twenty-four months of watching a thousand different startups promise to “revolutionize” my morning coffee, I’ve reached a conclusion that will likely upset the tech-evangelists.
I’ve decided to stop playing the field. I have formally narrowed my AI skills and my entire workflow to Google-developed tools.
Now, some might call that a lack of imagination. I call it a refusal to spend my life managing a bag of mismatched spanners. While the rest of the world is busy chasing every “model of the week” like a puppy chasing a van, I’ve chosen to settle down with the one player that actually has the bank balance and the infrastructure to still be standing when the dust settles.
The Mirage of the Startup Supermodel
The primary reasons for this monogamy are simple: sustainability and capacity.
In the AI world, we are currently surrounded by “supermodel” startups. They look fantastic on a landing page, and they promise the world, but they are fueled entirely by venture capital and the desperate hope that they’ll find a business model before the money runs out. Relying on them for your professional life is like hiring a brilliant architect who lives in a tent.

They might have a moment of genius today, but by next Tuesday, they’ve run out of beans and vanished into the night.
Google, however, is a different animal. It is a corporate leviathan with a long-term strategy that is actually sustainable. They aren’t just trying to survive the next quarter; they are building the digital plumbing for the next century. When I invest my time in mastering their tools, I’m not just learning a clever trick; I’m anchoring myself to a system that will be there when I wake up in five years.
The Data Reservoir
Then we have the matter of the “petrol.” AI, as we all know, is powered by data. Most AI companies have to go out and “scrape” the internet like a scavenger looking for scraps in a gutter. Google doesn’t have that problem because they own the gutter, the street, and the houses on either side.
They natively control an ocean of data through Search, YouTube, and the entire Workspace ecosystem. This isn’t just about quantity, but context. If you want an AI that actually understands the world, it helps if that AI has spent the last two decades watching how humanity searches for information and communicates.

Attempting to replicate that level of ingrained knowledge by jumping between five different third-party apps is, quite frankly, a waste of everyone’s time (and money).
The Hierarchy of Competence
So, how do I actually use this stuff? I’ve organized my approach into a systematic way of using Google’s specific tools to ensure I’m not just playing with a toy, but operating a machine.
The first level is establishing a Truth Floor. This is where I use NotebookLM – a lot.
The biggest problem with most AI is that it’s a very confident liar. It will tell you a blatant falsehood with the conviction of a politician. To fix this, I use the “closed-loop” approach. I feed NotebookLM my specific documents and references, my own preliminary research, my meeting transcripts, my data, and I tell it to stay in that box.

It doesn’t go off into the woods to find answers; it draws only from the places I provide.
If it makes a claim, it shows me the exact paragraph in my own files where it found it. It’s an anchor in a world of digital hallucinations. If any biases or hallucinations remain, they are most likely my own, something I can own if I’m called out.
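NotebookLM does the source-walling for you, but the underlying idea is simple enough to sketch in plain Python. Below is a rough, illustrative approximation of the “closed-loop” approach: build a prompt that restricts the model to the documents you hand it and demands a citation for every claim. The function name, source labels, and instruction wording are my own inventions, not anything from Google’s products.

```python
# Minimal sketch of "closed-loop" grounding: wall the model inside
# your own sources and require citations. All names here are
# illustrative, not part of NotebookLM or any Google API.

def build_grounded_prompt(question: str, sources: dict[str, str]) -> str:
    """Assemble a prompt that restricts answers to the given sources."""
    blocks = [f"[{name}]\n{text}" for name, text in sources.items()]
    return (
        "Answer ONLY from the sources below. For every claim, cite the "
        "source name in brackets. If the sources don't cover the "
        "question, say 'Not found in the provided sources.'\n\n"
        + "\n\n".join(blocks)
        + f"\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What did Q3 revenue look like?",
    {"meeting-notes.txt": "Q3 revenue came in at 1.2M, up 8% QoQ."},
)
print(prompt)
```

Whatever model you send this to, the point is the same: the answer space is fenced off to material you can verify yourself.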
The next tier is The Cognitive Heavy-Lifter. This is where Gemini comes in. The key here isn’t just “intelligence”; it’s the “cognitive span.” I honestly don’t think AI models are “intelligent” in the first place; they’re just efficient.
Most AI models have the memory of a goldfish, much like my own memory patterns: they forget what I’ve said five minutes ago. Gemini, however, has a context window that can hold millions of tokens at once.
You can feed it a decade of company history, thousands of customer transcripts, and a library’s worth of technical manuals, and it can attempt to understand and collate things at a pace I can only dream of as an older person. It understands the entire arc of a project, not just the last prompt I typed. It’s like having a librarian who has read every book you’ve ever owned and can cross-reference them in seconds. Just less grumpy, and it lets you blast some music in the background.
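To make the “cognitive span” point concrete, here is a back-of-the-envelope sketch of deciding whether an entire corpus fits into one request. The 4-characters-per-token estimate and the one-million-token budget are ballpark assumptions of mine, not official figures for any specific model.

```python
# Rough sketch of why a large context window matters: pack a whole
# project history into one request instead of feeding fragments.
# The chars-per-token ratio and token budget are rough assumptions.

def fits_in_context(documents: list[str], token_budget: int = 1_000_000) -> bool:
    """Crude check: roughly 4 characters per token for English text."""
    est_tokens = sum(len(doc) for doc in documents) // 4
    return est_tokens <= token_budget

docs = [
    "ten years of meeting minutes... " * 1000,
    "customer transcripts... " * 5000,
]
print(fits_in_context(docs))  # fits in one prompt, or needs splitting
```

With a goldfish-sized window, this check fails almost immediately and you’re back to feeding the model fragments; with a span in the millions, the whole arc of a project goes in at once.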
Sure, there are still a number of times where I get annoyed and just gaslight the LLM and tell it “you’re useless sometimes, you know?”. But at least I’ve narrowed things down enough to know I’ve hit a wall before things go haywire.
The third level is Agents. These are “Gems” in Gemini. This isn’t unique to Google, I know; it just makes integration with other tools and resources easier, especially NotebookLM. I can build custom experts that act as codifications of behavior.

I have a technical editor Gem, a strategic consultant Gem, and a “devil’s advocate” Gem that I specifically tell to call out anything it’s not sure of, give scorecards, and quantify the scores. These agents remember my tone, my goals, my guardrails, and my preferences (and insecurities?) across the entire timeline of my work.
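Stripped to its essence, a Gem is a reusable standing instruction bundled with every prompt. The sketch below shows that idea in plain Python; the persona texts and the `make_request` helper are illustrative stand-ins of mine, not Google’s API or my actual Gem instructions.

```python
# What a "Gem" boils down to in spirit: a named, reusable system
# instruction that codifies a persona's behavior and guardrails.
# Persona wording and the request shape are illustrative only.

PERSONAS = {
    "technical_editor": (
        "Fix errors, tighten prose, preserve the author's voice. "
        "Flag any claim you cannot verify."
    ),
    "devils_advocate": (
        "Challenge every assumption. Score each argument 1-10 and "
        "explain the score. Explicitly say 'unsure' when you are."
    ),
}

def make_request(persona: str, user_prompt: str) -> dict:
    """Bundle a persona's standing instructions with a one-off prompt."""
    return {
        "system_instruction": PERSONAS[persona],
        "contents": user_prompt,
    }

req = make_request("devils_advocate", "Critique my plan to commit to one vendor.")
print(req["system_instruction"])
```

The value isn’t the code; it’s that the behavior is written down once and applied consistently, instead of being re-typed (and re-forgotten) in every chat.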
Finally, we have The Operational Front, where integration with Google Workspace automates as many mundane tasks as possible. This is where the AI escapes the chat window and actually starts doing the heavy lifting in my Docs, my Gmail, and my Sheets. To be frank, I’m not that in tune with this front because I don’t subscribe to the paid tier of Google Workspace. I’m still on the fence about letting Google have direct influence over my calendar, my emails, and my files. Though they’ve already had all of those for the past few decades.
The Corporate Muscle
Google has significant revenue from its other businesses. As much as we may despise the idea, they make money from ads, cloud infrastructure, hardware, and SaaS, which funds the continued development of these tools’ usability and scalability at an enterprise level. They aren’t going to suddenly pivot to a new “crypto-AI trend” because they need to please a new board of investors or chase slop-fuelled antics.
I’m not being a Google shill (though holding their stock says otherwise), but I’ve stopped chasing the “model of the month” because I don’t have the time to be a full-time beta tester for every startup with a flashy logo. I don’t have the funds either.

I just want to be more than decent at the one system that has the data, the funding, and the integration to actually be useful in a professional context I’m already neck-deep in… It’s just more efficient, and lazy, I guess.
But as I said, that’s just my approach. I prefer a system that functions like a unified, high-performance machine rather than a collection of mismatched parts from various sheds, because that’s the best my brain can handle.
What about you? What is your preferred AI learning strategy, or are you still spending your mornings juggling fifteen different subscriptions?