Interactive Serverless Computing Research Platform
A comprehensive research platform for studying serverless computing architectures through interactive visualization,
real-time simulation, and detailed performance analysis. Developed by Siddharth Agarwal,
PhD Researcher in Cloud Computing at the University of Melbourne, this tool enables researchers, educators,
and practitioners to explore and understand complex serverless system behaviors.
Take a guided tour to learn the fundamentals and understand how to use Serv-Drishti effectively.
CPU (mCPU):
Mem (MB):
Place in first available node with sufficient resources
0
Active Functions: 0
Active Nodes: 0
Queued Requests: 0
Cold Start | Executing | Available
API Gateway
Incoming Request
Request Dispatcher
Queue empty
Dispatch Request
Compute Nodes will appear here...
Cold Start Times
Routing Strategy Battleground
Compare two routing strategies side-by-side with the same simulation settings.
0
Strategy A: Warm Priority | First Fit
Funcs: 0
Nodes: 0
Queue: 0
Gateway A
Dispatcher A
Queue empty
Nodes will appear...
Strategy B: Round Robin | Best Fit
Funcs: 0
Nodes: 0
Queue: 0
Gateway B
Dispatcher B
Queue empty
Nodes will appear...
Battleground Session Performance Comparison:
Average Latency Over Time:
Resource Utilization Metrics:
Active Functions Over Time:
Infrastructure Growth (Nodes) Over Time:
Cost Analysis (Execution Time à Memory):
Placement Algorithm Analysis
Current Placement Algorithm:
First Fit: Place in first available node with sufficient resources
Placement Algorithm Performance Comparison:
Function Distribution Across Nodes:
Placement Performance Over Time:
Placement Algorithm Testing Guide:
1
Select Algorithm
Choose a placement algorithm from the dropdown in the Visualizer Controls panel
2
Send Requests
Send multiple requests to generate placement data and see how functions are distributed
3
Analyze Results
View performance metrics and distribution charts to understand algorithm behavior
4
Compare Algorithms
Switch algorithms and compare performance using the Battleground for side-by-side analysis
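The First Fit and Best Fit strategies referenced in this panel can be sketched as follows (the node shape `{ freeCpu, freeMem }` and request shape `{ cpu, mem }` are assumptions for illustration, not the simulator's actual data model):

```javascript
// First Fit: take the first node with enough spare CPU and memory.
function firstFit(nodes, req) {
  return nodes.find(n => n.freeCpu >= req.cpu && n.freeMem >= req.mem) ?? null;
}

// Best Fit: among fitting nodes, take the one left with the least spare
// memory after placement (the "tightest" fit), packing nodes more densely.
function bestFit(nodes, req) {
  const fits = nodes.filter(n => n.freeCpu >= req.cpu && n.freeMem >= req.mem);
  if (fits.length === 0) return null;
  return fits.reduce((best, n) =>
    (n.freeMem - req.mem) < (best.freeMem - req.mem) ? n : best);
}
```

First Fit is cheaper per decision; Best Fit tends to reduce fragmentation, which the distribution charts above make visible.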
Select Request Routing Logic
Warm Priority Routing Details:
Priority to Warm Functions: Incoming requests are first directed to any currently warm (green) function instance that has available concurrency slots. This minimizes latency by utilizing existing resources.
Queueing: If no warm functions are immediately available, requests are added to a FIFO (First-In, First-Out) queue. This provides a buffer for traffic spikes and prevents request loss.
Scale-Out Trigger: If the queue is not empty and all existing functions are either busy or cold-starting, the system attempts to spin up a new function instance (and, if needed and permitted by the Max Nodes limit, a new compute node). This keeps capacity in step with demand.
Queue Draining: As functions complete tasks or new functions become warm after a cold start, they immediately pull requests from the front of the queue, ensuring rapid processing of backlog.
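The warm-priority flow above can be sketched as a single dispatch step (the instance shape `{ warm, active, maxConcurrency }` is an assumption for illustration):

```javascript
// Warm Priority sketch: prefer any warm instance with a free concurrency
// slot; otherwise buffer the request in a FIFO queue to be drained later.
function dispatch(functions, queue, request) {
  const warm = functions.find(f => f.warm && f.active < f.maxConcurrency);
  if (warm) {
    warm.active += 1;
    return { routedTo: warm, queued: false };
  }
  queue.push(request); // drained as instances finish work or warm up
  return { routedTo: null, queued: true };
}
```

A scale-out check (spinning up new instances when the queue is non-empty) would sit alongside this, as described in the Scale-Out Trigger point.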
Round Robin Logic Details:
Sequential Distribution: Requests are distributed in a simple, sequential manner to all available (warm and not at max concurrency) function instances.
Fair Distribution: This method aims to achieve a relatively even distribution of load across all available functions, as it cycles through them one by one.
Simple to Implement: It's a straightforward strategy that doesn't require complex load metrics.
Limitations: It doesn't account for individual function load or performance variations, which might lead to an imbalance if functions have different processing speeds or existing workloads.
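Round Robin needs only a single piece of dispatcher state: a cursor that cycles over the currently available instances. A minimal sketch (names are illustrative):

```javascript
// Round Robin sketch: the cursor lives in the dispatcher, not the functions.
// Each call picks the next available instance in sequence, wrapping around.
function makeRoundRobin() {
  let cursor = 0;
  return function pick(available) {
    if (available.length === 0) return null;
    const chosen = available[cursor % available.length];
    cursor += 1;
    return chosen;
  };
}
```

Note the cursor advances regardless of instance load, which is exactly the limitation noted above.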
Least Connections Logic Details:
Dynamic Load Balancing: Requests are routed to the available function instance that currently has the fewest active (busy) requests.
Optimal Resource Utilization: This method is generally more effective than Round Robin at distributing load evenly in scenarios where functions might have varying processing times or where new functions are spinning up.
Reduced Latency: By sending requests to the least busy function, it aims to minimize waiting times and overall request latency.
Requires State Tracking: This logic requires the dispatcher to keep track of the current number of active requests for each function instance to make an informed routing decision.
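The selection step for Least Connections reduces to a minimum over per-instance active-request counts, which is the state the dispatcher must track (the instance shape `{ active }` is assumed for illustration):

```javascript
// Least Connections sketch: route to the instance with the fewest active
// requests. Ties resolve to the earliest instance in the list.
function leastConnections(instances) {
  if (instances.length === 0) return null;
  return instances.reduce((least, i) => (i.active < least.active ? i : least));
}
```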
Requests Performance in Current Session:
Simulation Results Graph (Cumulative):
Resource Utilization & Queue Metrics Over Time:
Download Simulation Data:
Note: This data is temporary and will be lost if you refresh the page or close your browser tab.
About Serv-Drishti
Research Project
Serv-Drishti is a research-driven interactive serverless computing simulator developed as part of PhD research in Cloud Computing at the University of Melbourne. The name "Drishti" means "vision" or "insight" in Sanskrit, reflecting our goal to provide clear insights into serverless computing patterns and behaviors. This tool bridges the gap between theoretical serverless concepts and practical implementation understanding.
Creator & Researcher
Siddharth Agarwal is a PhD Researcher in Cloud Computing at the University of Melbourne, specializing in serverless computing architectures, performance optimization, and distributed systems. This simulator represents years of research into serverless computing patterns, cold start optimization, and resource management strategies.
Research Applications
Academic Research: Used in PhD research to study serverless computing patterns, cold start optimization, and resource allocation strategies.
Performance Analysis: Comprehensive metrics collection for analyzing serverless function performance, latency patterns, and resource utilization.
Algorithm Comparison: Side-by-side comparison of different routing strategies, placement algorithms, and load balancing techniques.
Educational Tool: Designed for computer science education, helping students understand complex distributed systems concepts through visualization.
Industry Applications: Practical tool for architects and developers to model and test serverless architectures before deployment.
Data-Driven Insights: Export capabilities for further analysis, research papers, and performance benchmarking studies.
Modular Architecture
Clean, maintainable codebase with clear separation of concerns and extensible design patterns:
Core Layer: Chart management, simulation engine, and placement algorithms
UI Layer: Notification system, configuration management, and event handling
Feature Modules: Battleground testing and data export capabilities
Utilities: Shared utilities and application coordination
Professional Applications
Academic Education: Perfect for computer science courses, workshops, and training sessions on serverless computing.
Enterprise Training: Onboard new team members and demonstrate serverless architecture concepts to stakeholders.
Presentations & Demos: Engaging visual aid for conference talks, client demonstrations, and technical reviews.
Architecture Planning: Test different configurations before implementing in production environments.
Performance Analysis: Understand the impact of various parameters on system behavior and optimization.
Research & Development: Experiment with new placement algorithms and optimization strategies.
Technical Features
Built with modern web technologies and designed for extensibility and maintainability:
Core Technologies: Modern web browsers (Chrome 90+, Firefox 88+), Chart.js for powerful visualizations, Docker for easy deployment
Modular JavaScript Architecture: Clean, maintainable codebase with separation of concerns
Responsive Design: Optimized for all devices and screen sizes
Data Export Capabilities: CSV, JSON, and PNG formats for external analysis
User Experience
Interactive Tutorial: Step-by-step guided tour of all features for new users.
Contextual Help: Click the ℹ️ icons for detailed explanations of any control or parameter.
Demo Scenarios: Pre-configured scenarios for quick exploration and learning.
Visual Architecture: Clear representation of API Gateway, Dispatcher, Compute Nodes, and Functions.
Real-time Metrics: Live performance data and resource utilization tracking.
Professional Design: Modern, responsive interface optimized for learning and demonstration.
Educational Value
Serverless Concepts: Learn about auto-scaling, cold starts, and resource management in serverless architectures.
Performance Optimization: Understand how different configurations affect system performance and latency.
Architecture Patterns: Explore various placement and routing strategies used in production systems.
Real-world Scenarios: Simulate actual serverless workloads and traffic patterns.
Decision Making: Use data-driven insights to optimize serverless deployments and configurations.
Getting Started
New to Serv-Drishti? Start with the Interactive Tutorial button in the welcome section, or try the Demo Scenarios to see the system in action. Use the ℹ️ help icons throughout the interface for detailed explanations of any feature. For more information, visit the project website.
About the Creator
Serv-Drishti is developed by Siddharth Agarwal, a PhD Researcher in Cloud Computing at the University of Melbourne. This project represents cutting-edge research in serverless computing visualization and education, combining academic rigor with practical application for the developer community.