AI Second

Many companies these days are announcing "AI first" initiatives. Beyond being good for stock prices, these announcements signal a belief that AI is good enough to do human work. I don't disagree. I use AI tools for work and for personal reasons, and I've found them to be very helpful. So then, do I think AI is "good" or "bad"? I believe that, like everything, it depends.
It depends
The best use cases I've found for AI are the things I'm bad at: drawing art for video games, writing physics simulations, using new programming languages, preparing taxes, etc. Because these things would take me so long to figure out on my own, the results from AI are a huge time saver. Not only that, the results are better than what I could do myself. I don't have time to practice drawing, so I'm happy that AI can draw me a tree, even if it's soulless and devoid of anything that could be perceived as art. It lets me focus on the part I care about: writing code. Obviously hiring a human artist would be better, but also more expensive. Maybe I can do that once I have a game that makes money.
There's another category of use cases where the stakes are very low and not much skill is involved: writing unit tests, taking meeting notes, preparing outlines, building static websites, etc. AI tools handle these tasks perfectly well. The results won't blow you away, but they'll save you a lot of time.
That said, I won't use AI for anything I'm good at, where quality is important to me. I'm almost never happy with the API design choices that AI agents produce. They get the job done, but aren't designed well for long-term maintenance and growth. I've also never found them good at large code refactors, in part because they start to hallucinate new functionality. And while I do a lot of writing, I don't trust AI to effectively convey what I want in text, though it's fine if it wants to check my grammar or whatever. Usually, if I have the experience to visualize the end state I want, and how to get there, AI won't meet my standards. It might be capable of bits and pieces, but it won't one-shot an entire implementation of something I'm skilled enough to make myself.
One of the reasons I think AI can't do what I'm skilled at is context. In our brains there is a lot of context we take for granted. Perhaps the AI agent could do what I want, but only if I wrote out enough context for it. LLMs are statistical models of the collective sum of all written human knowledge. They're specifically trained to produce the expected sequence of tokens. Meaning: they're not creative. Any creativity produced by an LLM must be derived from the (human-generated) prompt. But writing a highly detailed prompt might be just as much work as doing the task myself.
Mediocrity
In short, AI lifts up the bottom. It gives us all tools to get better at the things we're bad at, but it doesn't make us better at what we're already good at. If you've never written code in your life, you now have access to a dozen (and counting) tools to build no-code or low-code apps. You'll never replace a real engineer with these, but now you can build something. If you've never been good at art, you can produce almost anything you want on demand. It might not be "art", but it can bring life to an empty page. If you've never been good at languages, you have an on-demand translator for hundreds of languages. It might not capture the nuance or intention of your Booker Prize-winning novel or poem, but it can grow the audience of your video game or app.
We're at a point where the baseline skill level for a lot of things is good enough, thanks to AI. This means that, individually, we are capable of a lot more. But I don't think AI is yet an expert at anything. If you want the best software, you hire an expert. If you want the best art, you hire an expert. If you want the best writing, you hire an expert. There is a corollary: in this future where AI makes us all mediocre artists, how do you become a master? Companies are hiring fewer software developers (in part) because of AI. How will we train the next generation of senior software developers if the new crop of candidates relies on AI to do mediocre work?
I'm not convinced the current models will ever be good enough to replace experts. Iterative improvements in benchmarks might make the LLMs "smarter", but they don't make them better decision makers. They don't have anything at stake, so they can't make long-term/short-term trade-offs like a human. They have no accountability, so they don't care if they mess up. They don't experience the human condition, so they can't make artistic choices. They only "know" what you tell them in the prompt, and no more.
One person can get a lot more done with AI than they could alone, but the results just won't be that good. AI can do the job of mediocre humans. No company wants to hire mediocre humans to begin with, but that's because mediocre humans are expensive! By comparison, AI is cheap. When a company says something like "AI first," I don't assume they mean they will replace humans with AI. I assume it means they want more mediocrity. Sometimes that's fine: plenty of businesses want quantity over quality. But for anyone who wants excellence, reach for humans first, and AI second.