For 24 years, Microsoft's Amanda Silver has been working to help developers, and for the past few years that's meant building tools for AI. After a long run working on GitHub Copilot, Silver is now a corporate vice president in Microsoft's CoreAI division, where she works on tools for deploying apps and agentic systems within businesses.
Her work centers on Foundry, a system within Azure designed as a unified AI portal for businesses. That gives her a close look at how companies actually use these systems and where deployments fall short.
I spoke with Silver about the current capabilities of enterprise agents, and why she believes this is the biggest opportunity for startups since the public cloud.
This interview has been edited for length and clarity.
So, your work is focused on Microsoft products for outside developers, mostly startups that aren't AI-focused. How do you see AI's impact on those companies?
I see this as a watershed moment for startups, as profound as the move to the public cloud. If you think about it, the cloud had a big impact on startups because it meant they didn't need real estate to host their racks, and they didn't need a big infusion of capital to buy hardware for their labs and things like that. Everything became cheaper. Now AI agents will bring down the overall cost of operating a software business again, because many of the jobs involved in setting up a new business, whether it's customer support or legal research, can be done faster and cheaper with AI agents. I think that will lead to more businesses and more startup launches. And then we'll see higher-value startups with fewer people at the helm. And I think that's an exciting world.
What does this look like in practice?
We certainly see multistep agents being widely used across all different kinds of coding tasks. As an example, one thing developers need to do to maintain a codebase is stay up to date with the latest versions of the libraries it relies on. You may be relying on an older version of the .NET runtime or the Java SDK. We can give these agents reasoning over your entire codebase to make upgrading it easier, which can mean a 70% or 80% reduction in the time it takes. And that really requires a deployed multistep agent.
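To make that concrete, here's a minimal sketch of what the outer loop of such an upgrade agent could look like in Python. Everything here is illustrative: the `propose_upgrade_plan` function is a stub standing in for the agent's reasoning step, and the pip/pytest commands assume a Python project rather than the .NET or Java cases Silver mentions.

```python
import subprocess
from dataclasses import dataclass

@dataclass
class UpgradeStep:
    package: str
    from_version: str
    to_version: str

def propose_upgrade_plan(manifest: str) -> list[UpgradeStep]:
    """Stub for the agent's reasoning step: a real system would ask a
    model to read the dependency manifest and propose upgrades."""
    return []

def tests_pass() -> bool:
    """The agent's success signal: run the test suite after each change."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

def run_upgrade_agent(manifest_path: str, max_attempts: int = 3) -> None:
    with open(manifest_path) as f:
        manifest = f.read()
    for step in propose_upgrade_plan(manifest):
        for _ in range(max_attempts):
            subprocess.run(
                ["pip", "install", f"{step.package}=={step.to_version}"]
            )
            if tests_pass():
                break  # upgrade landed cleanly; move to the next dependency
            # A real agent would feed the failing test output back to the
            # model here and ask for a code fix before retrying.
        else:
            # Couldn't make the upgrade stick: roll back and flag a human.
            subprocess.run(
                ["pip", "install", f"{step.package}=={step.from_version}"]
            )
```

The key point of the pattern is the feedback loop: each upgrade attempt is validated against the test suite, and only failures that survive repeated retries get escalated to a person.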
Live-site operations are another. If you're thinking about maintaining a website or a service and something goes wrong, a pager goes off in the night and someone has to wake up to respond to the incident. We still have people on call 24/7 in case we lose service. But it used to be a hated job, because you would always be woken up for these little incidents. We've now built an agentic system that successfully diagnoses, and in many cases completely mitigates, the issues that come up in these live-site operations, so that people don't have to wake up in the middle of the night and stumble to their terminals to try to diagnose what's going on. And that also helps us reduce the average time it takes to resolve an incident.
One of the other puzzles at the moment is that agent deployments haven't happened as fast as we expected even six months ago. I wonder why you think that is.
When you think about the people who build agents, what keeps them from being successful, in many cases, comes down to not really knowing what the agent's purpose is. There's a cultural change that needs to happen in how people create these systems. What is the business use case they're trying to solve? What are they trying to achieve? You need to be very clear about what success means for this agent. And you have to think: what data do I give the agent so that it can reason about how to accomplish this task?
We see those as the bigger obstacles, beyond the general wariness about letting agents be deployed at all. Anyone who goes and looks at these systems will see a return on investment.
You mentioned that general wariness, which I think feels like a big blocker from the outside. Why do you see it as less of a problem in practice?
First of all, I think it will be very common for agent systems to have human-in-the-loop scenarios. Think of something like a package return. It used to be that you had a workflow for processing returns that was 90% automated and 10% human intervention, where someone had to look at the package and make a judgment call about how damaged it was before deciding to accept the return.
That's a perfect example of where, actually, computer vision models today are getting so good that in many cases there's no need for a lot of human supervision to inspect the package and make that determination. There are still some borderline cases where maybe computer vision isn't good enough to make the call, and those get escalated to a person. It's like, how often do you have to call over the manager?
There are some things that will always need some kind of human oversight, because they are critical operations. Consider incurring a contractual legal obligation, or deploying code into a production codebase that could affect the reliability of your systems. But even then, there's the question of how far we can go in automating the rest of the process.
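As a rough illustration of that "call the manager" threshold, here's a minimal human-in-the-loop sketch for the package-return example. The `classify_damage` call and the 0.9 cutoff are assumptions for illustration, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class DamageAssessment:
    accept_return: bool
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def classify_damage(image_bytes: bytes) -> DamageAssessment:
    """Stub for a vision-model call that judges package damage."""
    return DamageAssessment(accept_return=True, confidence=0.97)

# Illustrative cutoff: below this, the case goes to a person. Raising or
# lowering it is, in effect, deciding how often to call the manager.
ESCALATION_THRESHOLD = 0.9

def process_return(image_bytes: bytes) -> str:
    assessment = classify_damage(image_bytes)
    if assessment.confidence < ESCALATION_THRESHOLD:
        return "escalate_to_human"  # the shrinking manual slice of the flow
    return "accept" if assessment.accept_return else "reject"
```

As vision models improve, the escalation branch fires less often, which is exactly the shift Silver describes from 10% human intervention toward the rare borderline case.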