✉ Envelope #49: Crumbs on the floor, AI that remembers, and another that found my choir stage
(Estimated read time = 5 minutes)
The other day, I found myself staring at some bread crumbs by the coffee table in our living room.
Just crumbs — probably from the PB sandwich my 2-year-old dropped while watching Bluey.
But for a second there, it made me tear up…
Because for the past 12+ years, every time something hit the ground, my dog Mochi would be on it in under two seconds.
We would stop her most of the time, of course, but she was fast with those little crumbs… she’d lick the floor and leave little saliva marks. A little disgusting, to be honest.
…But Mochi passed away this past weekend. And I would give anything to watch her lick the floors again 😢.
It’s weird how grief works. At first it crashes over you like a tidal wave, but then it shows up in softer, quieter ways… like the sight of those crumbs, and the silence that follows.
Here are some pics of her throughout the years:

Razan (wifey) wrote a beautiful note on Facebook that described her better than I could:

And just for a bit of fun, I ended up turning Mochi into a toy.
Not physically, but digitally, using the recent ChatGPT image tools. I then used Google’s video model, Veo 2, to create a short animation of her coming out of the toy box.

Silly? Absolutely.
Healing? A little bit…
It’s another reminder of how short life is.
On the flip side, it demonstrates how powerful these new AI tools have become.
Not just for productivity, but for emotion. For storytelling, and for remembering.
Speaking of AI advancements…
AI That Forgets Less (And Builds Better)
If you (or someone you know) have ever tried using AI for coding/programming, you probably ran into this:
You are working on something and you have given the model a bunch of context. The model has the entire codebase (e.g., through Cursor, Windsurf, or GitHub Copilot).
Then, a few more prompts later, the AI somehow forgets what it was building and breaks its own code.
Kind of like us zooming in on one specific structural connection detail but somehow forgetting the building’s main structural systems.
This is what made the new GPT-4.1 such a big deal when it quietly rolled out recently for developers.
The new model supports a much larger context window (1 million tokens!), meaning you can feed it more information and it will remember more of it consistently, which reduces the chances of it bugging out.
For developers (and structural engineers dabbling in code or vibe coding), this is pretty significant, and people are reporting good results.
We can now build more tools or scripts without constantly re-reminding AI what it’s doing.
This also means we’ll potentially start to see more useful apps and software built and deployed faster.
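To make that a bit more concrete, here’s a minimal sketch of what leaning on that big context window can look like from the API side, using the OpenAI Python SDK. The project folder and prompt are made up; the point is simply that a whole (small) codebase can now ride along in a single request:

```python
# Minimal sketch (hypothetical folder and prompt) of using GPT-4.1's
# 1M-token context window via the OpenAI Python SDK.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stuff the entire (small) codebase into one prompt, file by file.
codebase = "\n\n".join(
    f"# FILE: {path}\n{path.read_text()}"
    for path in Path("my_beam_tools").rglob("*.py")
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {
            "role": "user",
            "content": codebase
            + "\n\nRefactor the load-combination module without breaking "
            "anything that imports it.",
        },
    ],
)
print(response.choices[0].message.content)
```

Nothing fancy going on there; the win is just that the model sees everything at once instead of a trimmed-down slice, so it has less to forget.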
AI That Looks, Thinks, and Acts
On the same front, OpenAI also rolled out new versions of their reasoning models: o3 and o4-mini.
Here’s the twist compared to the previous reasoning models (o1 and o3-mini): they can now do things while they think.
For example, these models now reason by actively zooming in on images, running Python scripts, or searching the web as part of their thinking process.
Here’s another perspective.
Before these models, AI could read a structural PDF set and maybe give a vague summary, because it could OCR the words and build some understanding from that.
Now, it can potentially zoom in on specific details, review them like an engineer would, and check for missing information or things to watch out for.
It’s nowhere near perfect at this stage for our daily workflows, but I believe it’s a clear step toward the future where AI can genuinely help us review drawings or catch inconsistencies and oversights.
I haven’t had the chance to fully test it yet, but it’s on my to-do list: start with one detail, then a full sheet, then maybe a small PDF set, to see how it performs and where its capabilities and limitations are.
I’ll let you know how it goes when I get to it (unless you beat me to it first… hit reply if you’ve tried something! Note that you need the paid version to get access to o3 and o4-mini-high though).
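And if you’d rather poke at it through the API than the ChatGPT app, here’s a rough sketch of sending a single detail image to one of the new reasoning models. The file name and prompt are hypothetical; I’m using o4-mini here since it accepts image inputs:

```python
# Rough sketch (hypothetical file and prompt) of asking a reasoning
# model to review one structural detail image via the OpenAI API.
import base64

from openai import OpenAI

client = OpenAI()

# Encode a local screenshot/scan of the detail as a base64 data URL.
with open("detail_5_S501.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="o4-mini",  # reasoning model that can take image inputs
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Review this structural detail like a plan "
                    "checker. List missing callouts, dimensions, or "
                    "anything inconsistent.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```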
I Asked AI to Find My Past. It Did.
Speaking of o3 and o4-mini being able to zoom in on images and conduct searches…
Here is something I tried, but first, some quick background.
Back in 3rd grade, I went on a choir tour across Central Europe. I still have all these old photos of performances, churches, me in a bright red vest… but I have no idea where most of them were taken.
So I gave one to the new model:

You can actually watch it “think”.
It zooms in on details, searches the web to compare, retries new ideas, etc. It’s pretty cool to see it in action.
Here is a short snippet:

It went on like this for a little over five minutes.
… and got it wrong (it told me it was Marienkirche in Germany; I Googled it, and that wasn’t the place).
But then, I gave it a little nudge, or gentle guidance if you will, and it figured it out within 3 minutes (I confirmed it with Google Search).

From Wikipedia:
It gave me goosebumps.
Not just because it was “impressive technology,” but because I now feel like I have a way to revisit the places I went as a kid with no clue where I was.
It was almost magical…
We’re Not Wired for This
If all this AI stuff feels like too much, too fast… you are not alone.
We (humans) are wired to think in linear progression. We expect tomorrow to look more or less like today.
But AI advancement is not moving in a straight line. It’s moving exponentially.
What seemed impossible last month is now a fun experiment on my todo list.
What felt like a pipe dream (e.g., an AI that reads drawings and understands them completely) now feels like something we could have within a few years, or even months.
And that’s part of the reason I continue to learn, experiment, and share as much as I am able, so that we (structural engineers) are not left behind.

Alrighty, that’s it for today.
If you’ve been experimenting with new tools (either building, exploring, or just playing), I’d love to hear what you’ve been discovering.
Thanks for reading! Until next time.
And I’ll go vacuum up those crumbs now…
PS.
If you are working on a steel building and you haven’t heard of Durafuse, you might want to check it out.
It is a proprietary steel moment frame connection that offers high ductility and fast recovery after an earthquake.
And they’ve got your back with full design team support related to the connections: calcs, plan check comments, RFI responses, and shop drawing reviews.
Click here to learn more: https://go.sehq.co/durafuse
PPS.
Have you heard the joke about the opposite of Artificial Super Intelligence?
.
.
.
It’s real stupid.

By the way, if you came here through LinkedIn and would like to continue the conversation there, this will take you back: