Neeraja Yadwadkar: Architecting the Intelligent Future of Cloud Computing and Machine Learning

The story of Neeraja Yadwadkar begins in India, where her passion for computing took root. She completed her bachelor’s degree at the Government College of Engineering, Pune, a foundation that nurtured her technical curiosity and discipline. She then pursued a master’s degree in Computer Science at the prestigious Indian Institute of Science, one of India’s foremost institutions for advanced research.

Her intellectual journey reached a defining milestone at the University of California, Berkeley, where she earned her PhD in Computer Science from the renowned RISE Lab. Under the mentorship of distinguished scholars Randy Katz and Joseph Gonzalez, Neeraja Yadwadkar developed her groundbreaking dissertation on Automatic Resource Management in the Datacenter and the Cloud.

Her doctoral work tackled a fundamental challenge: how to manage complex, large-scale systems that span physical servers, virtual machines, and hybrid clouds—without relying on fragile heuristics. Instead, she introduced data-driven, machine learning-based approaches capable of extracting actionable insights from the massive performance data these systems generate.

This was not incremental progress. It was a shift in philosophy.

Neeraja Yadwadkar: Bridging Systems and Machine Learning

What sets Neeraja Yadwadkar apart is her conviction that systems and machine learning are not separate domains—they are complementary forces.

Advances in hardware architectures, ML algorithms, and cloud infrastructure are converging. Neeraja Yadwadkar recognized early that:

  • Machine Learning can solve complex resource management problems in systems.

  • Systems research must evolve to accommodate the properties of emerging ML algorithms.

  • The cloud can become a unified computational entity rather than a collection of isolated resources.

Her research focuses on two transformative goals:

  1. Using ML techniques for systems – Leveraging predictive models to manage resources efficiently, reduce costs, and ensure performance reliability.

  2. Building systems for ML – Designing cloud architectures that support large-scale machine learning workloads seamlessly.

This dual focus makes Neeraja Yadwadkar’s work uniquely positioned at the heart of modern computing innovation.

Stanford, VMware, and Expanding Horizons

Following her PhD, Neeraja Yadwadkar served as a postdoctoral researcher in the Computer Science Department at Stanford University, collaborating with Christos Kozyrakis. There, she deepened her exploration of distributed systems, cloud computing, and ML-driven automation.

She also spent a year with the VMware Research Group, gaining industry perspective on how academic breakthroughs translate into real-world infrastructure.

These experiences strengthened her overarching vision: a fully automated, management-less cloud that delivers fine-grained, consumption-based access aligned with users’ cost and performance goals.

Neeraja Yadwadkar: Leading UT-SysML at UT Austin

Today, Neeraja Yadwadkar leads UT-SysML at The University of Texas at Austin, a research group dedicated to exploring the interplay between computer systems and machine learning.

Her research spans:

  • Cloud Computing Systems

  • Serverless Computing

  • Machine Learning for Systems

  • Systems for Machine Learning

  • Distributed Systems and Resource Management

In Spring 2023, she introduced a forward-looking graduate course titled SysML: Computer Systems and Machine Learning Interplay. The course reflects her belief that the next generation of engineers must think across boundaries, not within silos.

Under her mentorship, doctoral and master’s students tackle some of the most challenging questions in modern computing—how to make systems intelligent, adaptive, and self-optimizing.

Her mentorship style blends rigor with vision. Students are encouraged not only to solve problems, but to redefine them.

Rethinking Resource Management in the Cloud

Traditional resource management relies heavily on fixed heuristics. But modern cloud systems are far too dynamic and complex for such simplistic strategies.

Neeraja Yadwadkar’s dissertation advanced a bold argument: machine learning models can manage and optimize systems more effectively by learning from performance and utilization data.
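The idea can be made concrete with a toy sketch. The snippet below is not her actual system; the VM types, prices, and historical runtimes are entirely hypothetical. It illustrates the core move: replace a fixed heuristic ("always use the big machine") with a predictor learned from past performance data, then choose the cheapest resource that still meets the user's deadline.

```python
# Hypothetical sketch of data-driven resource selection. All data below is
# made up for illustration; a real system would learn a richer model from
# large-scale traces rather than a per-GB average.

# Historical observations: (vm_type, input_size_gb, runtime_seconds).
HISTORY = [
    ("small", 10, 120), ("small", 20, 240),
    ("large", 10, 40),  ("large", 20, 80),
]

# Hypothetical per-second prices for each VM type.
PRICE_PER_SECOND = {"small": 0.0001, "large": 0.0004}

def predict_runtime(vm_type, input_size_gb):
    """Predict runtime by linear scaling from observed seconds-per-GB rates
    (a stand-in for a trained performance model)."""
    rates = [runtime / size for t, size, runtime in HISTORY if t == vm_type]
    return (sum(rates) / len(rates)) * input_size_gb

def choose_vm(input_size_gb, deadline_s):
    """Return the cheapest VM type predicted to finish within the deadline,
    or None if no option is feasible."""
    feasible = []
    for vm, price in PRICE_PER_SECOND.items():
        runtime = predict_runtime(vm, input_size_gb)
        if runtime <= deadline_s:
            feasible.append((runtime * price, vm))
    return min(feasible)[1] if feasible else None
```

With a loose deadline the predictor picks the slower but cheaper machine; as the deadline tightens, it automatically shifts to the faster one — no hand-written rule required.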

Yet she did not overlook the challenges. Her work directly addressed:

  • Uncertainty in ML predictions

  • High training costs

  • Generalization from benchmarks to real-world data

By confronting these limitations head-on, Neeraja Yadwadkar demonstrated that ML-based resource management could be both practical and robust.

Her long-term vision? An easy-to-use, cost-efficient cloud where:

  • Users do not need to micromanage resources.

  • Performance goals are automatically satisfied.

  • Costs align precisely with consumption.

  • ML workloads are served through model-less interfaces with intelligent autoscaling.

This is more than optimization—it is automation at scale.

Building the Cloud of Tomorrow

One of the most exciting aspects of Neeraja Yadwadkar’s work lies in serverless computing. She envisions systems that hide the complexity of resource allocation while ensuring predictable cost-performance trade-offs.

In the context of ML inference, she has contributed to developing model-less serving systems. Instead of forcing users to understand and manage model-specific deployment details, these systems allow high-level specifications such as:

  • Desired accuracy

  • Latency constraints

  • Service-level objectives

Behind the scenes, intelligent mechanisms handle model selection and autoscaling.
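A minimal sketch can show the shape of such a mechanism. The model names, accuracy figures, latencies, and the Little's-law-style replica estimate below are all illustrative assumptions, not details of any published system: the user states only an accuracy floor and a latency objective, and the serving layer resolves those into a concrete model variant and a replica count.

```python
# Hypothetical sketch of model-less serving: users specify goals, not models.
# Variant names, accuracies, and latencies are illustrative placeholders.
import math

MODEL_VARIANTS = [
    {"name": "tiny",  "accuracy": 0.85, "latency_ms": 5},
    {"name": "base",  "accuracy": 0.91, "latency_ms": 20},
    {"name": "large", "accuracy": 0.95, "latency_ms": 80},
]

def select_model(min_accuracy, latency_slo_ms):
    """Return the most accurate variant meeting both constraints, or None."""
    feasible = [m for m in MODEL_VARIANTS
                if m["accuracy"] >= min_accuracy
                and m["latency_ms"] <= latency_slo_ms]
    return max(feasible, key=lambda m: m["accuracy"])["name"] if feasible else None

def replicas_needed(request_rate_per_s, per_request_latency_ms, target_util=0.7):
    """Rough autoscaling estimate: offered work (requests in flight) divided
    by a target utilization, rounded up to whole replicas."""
    work = request_rate_per_s * (per_request_latency_ms / 1000.0)
    return math.ceil(work / target_util)
```

Given "at least 0.90 accuracy within 50 ms," this sketch would pick the mid-size variant and size the deployment from the observed request rate — the user never names a model or a machine.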

This approach moves us closer to a fully automated cloud infrastructure capable of supporting both traditional applications and emerging AI workloads.
