4 Comments
Teo F

This article nails an important shift: as models become commoditized, engineering rigor becomes the real differentiator. Mastering system design, observability, and end-to-end reliability will separate the teams that succeed in production from those that stay stuck in pilots. It's refreshing to see the emphasis placed on practical delivery over chasing every new model.

Alex Razvant

Speaking of mental models, that's the right one to have ;)

Valentin Jimenez

Hi Alex, great research and another insightful post!

I noticed that you didn't directly mention mastering a local stack for reasons like privacy, speed, and cost. That's something I've seen discussed in several places; for example, becoming proficient in training small language models locally, to speed up workflows and keep the focus on your own data, seems like it could offer a lot of value.

Since many people still don't fully trust external AI systems, adoption can lag. In that context, investing in complex cloud infrastructure isn't always the best approach, especially when the cost isn't justified by the number of users. It feels like local servers and local workflows could still be a strong direction this year.

What are your thoughts about it? Maybe a hybrid approach for local and cloud development?

Pietro Montaldo

Really interesting framing. The smartest bets aren't just on models; they're on infrastructure, workflows, and where value actually accumulates.