Feb 11

Microsoft VP: AI's New Equation for Startups

Originally reported by TechCrunch

For over two decades, Amanda Silver has been a pivotal figure in assisting developers at Microsoft, with her focus shifting significantly in recent years towards building tools for artificial intelligence. Following a substantial period contributing to GitHub Copilot, Silver now serves as a corporate vice president within Microsoft’s CoreAI division. In this role, she is dedicated to developing systems for deploying applications and agentic AI within enterprises. Her work is specifically centered on the Foundry system within Azure, which functions as a centralized AI portal for businesses. This position offers her unique insights into how companies are practically implementing these AI systems and where current deployments tend to fall short.

I recently engaged in a discussion with Silver, exploring the current capabilities of enterprise AI agents and her conviction that this technology represents the most significant opportunity for startups since the advent of the public cloud.

This interview has been edited for both length and clarity.

The conversation began by addressing Silver’s work, which primarily targets external developers using Microsoft products, often startups not inherently focused on AI. I asked how she envisions AI impacting these companies.

Silver articulated her view, stating, "I see this as being a watershed moment for startups as profound as the move to the public cloud." She elaborated on the cloud's transformative effect, noting how it liberated startups from the need for physical real estate to host servers and reduced substantial capital expenditure on hardware. "Everything became cheaper," she observed. She then drew a parallel, explaining that "agentic AI is going to kind of continue to reduce the overall cost of software operations again." This reduction stems from the ability of AI agents to perform many tasks involved in launching a new venture—such as support functions or legal investigations—"faster and cheaper." Silver believes this will foster "more ventures and more startups launching," ultimately leading to "higher-valuation startups with fewer people at the helm," a prospect she finds "an exciting world."

Probing further, I inquired about the practical implications of this vision.

Silver provided concrete examples, highlighting the widespread adoption of multi-step agents across various coding tasks. "Just as an example," she offered, "one thing developers have to do to maintain a codebase is stay current with the latest versions of the libraries that it has a dependency on." She cited dependencies like older .NET runtimes or Java SDKs. She explained that "we can have these agentic systems reason over your entire codebase and bring it up to date much more easily, with maybe a 70 or 80% reduction of the time it takes. And it really has to be a deployed multi-step agent to do that."
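The multi-step loop Silver describes — reason over the codebase, apply an upgrade, verify, repeat — can be sketched in miniature. This is a hypothetical illustration only: the function names (`plan_upgrades`, `apply_upgrade`, `run_tests`) and the hard-coded version table are assumptions for the sketch, not Microsoft's actual system, which would consult a package registry and rewrite call sites for breaking API changes.

```python
# Hypothetical sketch of a multi-step dependency-upgrade agent loop.
# All names and the version table are illustrative, not a real API.

def plan_upgrades(manifest):
    """Compare pinned versions against a known-latest table (stand-in
    for a registry lookup) and list the upgrades to attempt."""
    latest = {"requests": "2.32.0", "numpy": "2.1.0"}
    return [(pkg, cur, latest[pkg]) for pkg, cur in manifest.items()
            if pkg in latest and cur != latest[pkg]]

def apply_upgrade(manifest, pkg, version):
    """Pin the new version; a real agent would also edit source code
    to accommodate any API changes the new release introduces."""
    manifest[pkg] = version

def run_tests(manifest):
    """Stand-in for the agent's verification step (build + test suite)."""
    return all(manifest.values())

def upgrade_agent(manifest):
    """Plan, apply one change at a time, verify, roll back on failure."""
    for pkg, cur, new in plan_upgrades(manifest):
        apply_upgrade(manifest, pkg, new)
        if not run_tests(manifest):
            apply_upgrade(manifest, pkg, cur)  # revert the failing upgrade
    return manifest
```

The one-change-at-a-time loop with verification after each step is what makes this a deployed multi-step agent rather than a single bulk rewrite: a failed upgrade can be isolated and rolled back without discarding the rest.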

Another critical area benefiting from AI agents is live-site operations. Silver described the scenario of a website or service encountering an issue overnight, traditionally requiring an on-call person to be woken up to respond. While 24/7 human coverage remains for critical outages, she noted, "it used to be a really loathed job because you’d get woken up fairly often for these minor incidents." Microsoft has developed "an agentic system to successfully diagnose and in many, many cases fully mitigate issues that come up in these live site operations so that humans don’t have to be woken up in the middle of the night and groggily go to their terminals and try to diagnose what’s going on." This innovation, she added, "also helps us dramatically reduce the average time it takes for an incident to be resolved."
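The escalation logic Silver describes — diagnose, mitigate automatically when a known remedy exists, page a human only otherwise — can be sketched as a simple routing step. This is a hypothetical illustration under assumed names; real live-site agents reason over logs, metrics, and traces rather than a lookup table.

```python
# Hypothetical sketch of an incident-handling agent's escalation logic.
# The playbook table and function names are illustrative assumptions.

def diagnose(alert):
    """Map a symptom to a known mitigation, if one exists. A real
    system would infer the cause from telemetry, not a fixed table."""
    playbook = {
        "high_memory": "restart_service",
        "disk_full": "rotate_logs",
        "cert_expired": None,  # no safe automatic fix: needs a human
    }
    return playbook.get(alert["symptom"])

def handle_incident(alert):
    """Auto-mitigate when a playbook action exists; otherwise page
    the on-call engineer, as with any unrecognized symptom."""
    action = diagnose(alert)
    if action:
        return f"auto-mitigated: {action}"
    return "escalate: page on-call engineer"
```

Keeping the human page as the fall-through branch, rather than the default, is what lets routine overnight incidents resolve without anyone being woken up, while genuinely novel failures still reach a person.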

I then raised an observation: "One of the other puzzles of this present moment is that agentic deployments haven’t happened quite as fast as we expected even six months ago. I’m curious why you think that is."

Silver attributed this slower pace to a fundamental challenge: "If you think about the people who are building agents, what is preventing them from being successful, in many cases, it comes down to not really knowing what the purpose of the agent should be." She emphasized the need for a "culture change" in how these systems are developed. Organizations must clearly define "What is the business use case that they are trying to solve for? What are they trying to achieve? You need to be very clear-eyed about what the definition of success is for this agent. And you need to think, what is the data that I’m giving to the agent so that it can reason over how to go accomplish this particular task?"

She concluded that "We see those things as the bigger stumbling blocks, more than the general uncertainty of letting agents get deployed. Anybody who goes and looks at these systems sees the return on investment."

Building on her mention of "general uncertainty," which often appears as a significant barrier from an external perspective, I asked, "Why do you see it as less of a problem in practice?"

Silver explained that "it’s going to be very common that agentic systems have human-in-the-loop scenarios." She provided the example of package returns, where a workflow might historically be "90% automated and 10% human intervention," with someone physically inspecting a package to assess damage before accepting a return.

She elaborated that this is "a perfect example where actually now the computer vision models are getting so good that in many cases, we don’t need to have as much human oversight over inspecting the package and making that determination." While some "borderline" cases might still arise where computer vision isn't sufficient, necessitating escalation, she likened it to the question of "how often do you need to call in the manager?"
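The "call in the manager" pattern Silver describes amounts to a confidence threshold: the model acts on its own when it is confident in either direction and escalates only the borderline middle to a human. A minimal sketch, where the threshold value and function name are illustrative assumptions rather than any real returns system:

```python
# Hypothetical sketch of confidence-based human-in-the-loop routing
# for package returns. Threshold and names are illustrative.

def route_return(damage_confidence, threshold=0.85):
    """Decide a return automatically when the vision model is confident
    either way; escalate borderline cases to a human inspector."""
    if damage_confidence >= threshold:
        return "reject: visible damage"
    if damage_confidence <= 1 - threshold:
        return "accept: no damage detected"
    return "escalate: human inspection"
```

As the vision models improve, the escalation band narrows and the historical 90/10 automation split shifts further toward full automation, which is the dynamic Silver points to.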

Finally, Silver acknowledged that "There are some things that will always need some kind of human oversight, because they’re such critical operations." She cited examples such as "incurring a contractual legal obligation, or deploying code into a production codebase that could potentially affect the reliability of your systems." Yet, even in these critical domains, she noted, "there’s the question of how far we could get in automating the rest of the process."

Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
