2025 was not a quiet year in the story of humankind.
2026 is shaping up to be even harder.

The COVID-19 pandemic of 2020 helped usher in a new era of AI computing; how directly, history will judge (and who writes that history matters). My children were very young then. Luckily, our tech jobs stayed relatively stable. In hindsight, it was still a good time to be based in Austin, Texas.

An aging world population, too little imagination and long-term thinking, the real toll of prolonged isolation, and a K-shaped recovery combined into a “new normal”: governments that seem less accountable to ordinary people, forty years of policy that shifted wealth toward the very old and the top 0.1%, and short-term politics that too often ignore the next generations. To me, that amounts to a concentration of power among the old and the very rich.

In that context, AI tools usable by the general public took off around 2022. Since then, the pace has only increased.

Many sites will chronicle this period (Wikipedia is still catching up). One strong timeline is AI Timeline.

In late March 2026, Google released Gemma 4. That matters because it lets people build AI-centric software with fewer external dependencies.

The model is fast on modest hardware (CPU, GPU, and memory). Despite being open-weight, it supports many of the capabilities you expect from frontier models:

  • Context window of 256K tokens

  • Tool calling to invoke local processes natively

  • Multimodal inputs: image, video, audio, text, and more

  • Coding: generates useful code on a tight resource budget

More detail in Google’s Gemma 4 model card.
