AI-Assisted Development: 5 Questions Every Engineering Manager Should Ask
I recently sent a survey about AI tools to our engineering team. The response rate was higher than I'd hoped, and the enthusiasm was genuine. But underneath that enthusiasm I noticed something else: anxiety. Developers were interested in these tools but worried about doing the wrong thing. They wanted guidance. They wanted to know what 'good' looked like. Without that clarity and support, initial enthusiasm turns into hesitation.
That's where a lot of engineering managers go wrong. They see the enthusiasm, start pushing adoption, and don't realise they've left their engineers unsupported: aware they should use these tools, but unsure how to do so safely and well.
If you're in the early stages of AI adoption, or refining an approach already underway, these five questions will help you move from reactive to strategic. They're not AI-specific—they're questions disciplined engineering leaders ask before committing to any significant change. But right now, they're particularly important for AI.
Question 1: What Problem Are We Solving?
The real question: "What specific problem are we solving with this tool? Can we describe it without mentioning AI?"
This is the foundational question, and it's where most adoption stumbles.
If you can't describe the problem without mentioning AI, you don't have a clear problem—you have hype. AI tools are solutions to specific problems. If the problem is fuzzy, the solution will be too.
Here's what clarity looks like:
Clear: "We're adopting GitHub Copilot to reduce boilerplate code writing in React components, freeing engineers to focus on architectural decisions and business logic. Currently, we spend roughly 15% of sprint cycles writing repetitive scaffolding code."
Vague: "We're using AI to improve developer productivity."
One is specific, measurable, and actionable. The other is a hope.
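For context, the kind of repetitive scaffolding the clear version refers to might look something like this. It's a hypothetical React component in TypeScript, not code from our codebase, but it's the shape of work a tool like Copilot generates quickly:

```typescript
// Hypothetical example of repetitive scaffolding: props typing, state
// wiring, and fetch boilerplate that varies little between components.
import { useEffect, useState } from "react";

interface OrderListProps {
  customerId: string; // illustrative prop, not a real domain model
}

export function OrderList({ customerId }: OrderListProps) {
  const [orders, setOrders] = useState<string[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    // Placeholder endpoint; the point is the repetitive shape, not the URL.
    fetch(`/api/customers/${customerId}/orders`)
      .then((res) => res.json())
      .then((data: string[]) => setOrders(data))
      .catch((err: Error) => setError(err.message))
      .finally(() => setLoading(false));
  }, [customerId]);

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong: {error}</p>;

  return (
    <ul>
      {orders.map((order) => (
        <li key={order}>{order}</li>
      ))}
    </ul>
  );
}
```

Multiply that across dozens of components and a figure like 15% of sprint cycles stops sounding surprising.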
The discipline of answering this question forces you to be honest. Sometimes the answer is "we don't actually have a clear problem, so we're not adopting this." That's fine. That's actually the mature answer.
Question 2: What's Our Learning Strategy?
The real question: "How are we ensuring our team builds competency with this tool, not just uses it?"
Tools are only as good as the people using them. Without a learning strategy, you get surface-level adoption: people using GitHub Copilot but not understanding the output, accepting suggestions without critical thought, or worse—feeling threatened by the tool.
This question matters because it acknowledges something important: adoption requires capability building, not just tooling.
What does a learning strategy look like?
- Dedicated time: Pairing sessions where engineers explore the tool together. Lunch-and-learns where someone shares patterns they've discovered. Time blocked for experimentation, not squeezed into sprint cycles.
- Documentation: Teams sharing patterns and practices they've discovered. "This is how Copilot helps with React hooks" or "Here's where it struggles with domain logic."
- Psychological safety: Creating space for people to say "I don't understand this output" or "I don't trust this suggestion." The tool should amplify human judgement, not replace it.
- Recognition: Acknowledging that some engineers will adopt faster than others. Some will love it immediately; others will need more time and support.
Without this, you'll see resistance—and it won't be unreasonable resistance. It'll be people protecting their craft, which is exactly what you want. Channel that energy into learning, not into "I'm not using this."
That distinction matters. It's the difference between imposition and partnership.
Question 3: How Do We Maintain Quality Standards?
The real question: "How does AI-assisted code change our quality practices—code review, testing, release confidence?"
This is where many teams fail. They adopt AI tools, see velocity increase, and relax their quality gates. Big mistake.
Quality engineering doesn't go away because you're moving faster. If anything, it becomes more important.
Here's the reality: AI-generated code must still be reviewed, understood, and tested. Sometimes more rigorously than hand-written code, because the developer needs to verify the tool actually understood the problem.
What does maintained quality look like?
- Code reviews remain thorough. AI output is often clearer to review (it's more consistent), but that doesn't mean less scrutiny. It means smarter scrutiny. "Does this match our patterns?" "Is it idiomatic for our codebase?" "Does the developer understand what this code does?"
- Testing requirements don't drop. If anything, you test AI-generated code more carefully initially. You're building trust.
- Clarity on standards: Teams define what "good" looks like for AI-assisted code. That might be different from hand-written code in subtle ways. Make it explicit.
- Emphasis on understanding, not trusting: The code arrives faster, but the developer still needs to understand it. That's non-negotiable.
Question 4: Are We Equipped for Security & Compliance?
The real question: "What are the security, compliance, and intellectual property risks of this tool, and how are we managing them?"
This question separates serious engineering leaders from cowboys.
AI tools trained on public data can have licensing implications. Generated code might inadvertently match patterns from the tool's training data—which raises questions about IP. And compliance rules vary wildly: healthcare, finance, and retail all have different requirements.
You need to know:
- How is this tool trained? What data went into it? Are there licensing or IP concerns?
- Where can we use it safely? Some tools are fine for internal scaffolding but risky for customer-facing code. Some are fine in some industries but not others.
- What's our compliance story? If you're handling healthcare data, you need a clear stance.
- What code is off-limits? You probably have some proprietary patterns or sensitive logic where AI assistance adds risk. Define that boundary.
This isn't about being paranoid. It's about being professional. Your legal and security teams should have a voice here. If they don't, that's a red flag.
The answer might be "we've assessed this and we're comfortable using Copilot on routine scaffolding, but not on authentication logic or payment processing." That's a mature answer. So is "we're not ready for this yet."
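Whatever boundary you settle on, it helps to make it visible in your pipeline rather than leaving it as tribal knowledge. Here's a minimal sketch of one way to do that: a CI step (not a Copilot feature; the paths and input file are placeholders) that flags pull requests touching restricted areas so reviewers know to apply heightened scrutiny:

```typescript
// Minimal sketch: flag changed files in "off-limits" areas so reviewers
// know extra scrutiny applies. Paths and input format are hypothetical;
// adapt to your own pipeline.
import { readFileSync } from "fs";

// Areas where AI-assisted changes need heightened review.
const OFF_LIMITS: RegExp[] = [
  /^src\/auth\//,
  /^src\/payments\//,
];

// Expects a newline-separated list of changed files, e.g. produced by
// `git diff --name-only origin/main...HEAD > changed-files.txt`.
const changedFiles = readFileSync("changed-files.txt", "utf8")
  .split("\n")
  .map((line) => line.trim())
  .filter(Boolean);

const flagged = changedFiles.filter((file) =>
  OFF_LIMITS.some((pattern) => pattern.test(file))
);

if (flagged.length > 0) {
  console.log("Changes touch restricted areas; apply the heightened-review checklist:");
  for (const file of flagged) console.log(`  - ${file}`);
  // Exit non-zero to make this a hard gate rather than a warning.
  process.exit(1);
}
```

Whether that's a hard gate or just a warning is a team decision; the point is that the boundary is written down and enforced, not remembered.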
Question 5: What's Our Success Metric?
The real question: "How will we know this tool is actually helping? What will we measure?"
Not everything that feels faster is actually faster. Velocity can be an illusion if quality drops or team morale tanks.
Success metrics force you to be honest. They help you course-correct if adoption isn't working. And they give your team confidence that you're making decisions based on evidence, not faith.
What does this look like?
- Before/after metrics: Cycle time, code review time, deployment frequency. How much faster are we actually moving?
- Quality metrics: Defect rate per sprint. Did quality improve, stay the same, or drop?
- Developer satisfaction: "Do you feel more productive?" "Does this tool help you or distract you?" These are real metrics.
- Learning metrics: "How confident are you with Copilot output?" Confidence matters—it's a leading indicator of sustainable adoption.
- Honest assessment: "This helped with X but not Y." Some tools are narrowly useful, and that's fine. Name it.
The hard part? You need a baseline. Before you adopt, measure the current state. Then measure the same thing three months in. "It feels faster" isn't data. "Code review time decreased from 2 hours to 1.5 hours" is.
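If you need a starting point for that baseline, here's a rough sketch that pulls recently merged pull requests from the GitHub REST API and computes the median open-to-merge time. The repository name and token handling are placeholders, and cycle-time tooling you already have may do this better:

```typescript
// Rough sketch: median open-to-merge time for recently merged PRs,
// via the GitHub REST API. Repo and token are placeholders; treat this
// as a baseline starting point, not a finished dashboard.
const REPO = "your-org/your-repo"; // placeholder
const TOKEN = process.env.GITHUB_TOKEN ?? "";

interface PullRequest {
  number: number;
  created_at: string;
  merged_at: string | null;
}

async function medianMergeHours(): Promise<number> {
  const res = await fetch(
    `https://api.github.com/repos/${REPO}/pulls?state=closed&per_page=100`,
    {
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${TOKEN}`,
      },
    }
  );
  const pulls = (await res.json()) as PullRequest[];

  const hours = pulls
    .filter((pr) => pr.merged_at !== null)
    .map(
      (pr) =>
        (new Date(pr.merged_at as string).getTime() -
          new Date(pr.created_at).getTime()) / 36e5
    )
    .sort((a, b) => a - b);

  if (hours.length === 0) return NaN; // no merged PRs in this page
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}

medianMergeHours().then((h) =>
  console.log(`Median open-to-merge time: ${h.toFixed(1)} hours`)
);
```

Run it before adoption and again a few months in; the trend matters more than the absolute number.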
The Bigger Picture: AI Adoption as Leadership Discipline
These five questions aren't unique to AI. They're the questions disciplined engineering leaders ask before adopting any significant change: "What problem are we solving? How do we build capability? How do we maintain standards? What are the risks? How will we know it's working?"
The difference is that AI adoption reveals how mature your engineering culture actually is.
A team with psychological safety will embrace these tools because they feel heard and involved. A team without psychological safety will resist them because they feel threatened. A disciplined team will ask hard questions before adopting. An immature team will just follow the hype.
If your culture is healthy—if people feel safe speaking up, if you invest in their growth, if you believe quality and velocity go together—then AI adoption strengthens your team. These tools amplify human judgement; they don't replace it. In a healthy team, that's powerful. In an unhealthy team, it's dangerous.
My career mission is simple: build teams where people thrive, delivery improves, and quality is non-negotiable. AI adoption that serves that mission is worth doing. AI adoption that undermines it isn't.
So before you commit to any tool, ask these five questions. Ask them with your team. Listen to what they say. You might find that the answers reshape your adoption strategy entirely. That's the point. These questions aren't gatekeeping—they're clarity.
What About Your Team?
Whether you're just starting with AI adoption or refining an approach already underway, I'd genuinely like to hear your experience. What questions do you ask? What's worked? What surprised you? What would you add to this list?
AI adoption in engineering is still new territory. The best thinking comes from people in the trenches making real choices.
Reach out on LinkedIn. I'm always interested in how other leaders are navigating this.
Mark Lambert is an Engineering Manager focused on strategic AI adoption, modern engineering practices, and building generative cultures where developers thrive. He's currently leading engineering teams at Avayler (Halfords Group), championing AI-assisted development practices across the organisation. You can find him on LinkedIn or read more on mlambert.uk.