Bootstrapping evolvability for inter-domain routing with D-BGP

Technical reports
Raja R. Sambasivan, David Tran-Lam, Aditya Akella, Peter Steenkiste
Carnegie Mellon University Department of Computer Science Technical Report CMU-CS-16-117
Publication year: 2016

Note: This technical report has been superseded by our SIGCOMM’17 paper of the same name.

It is extremely difficult to deploy new routing protocols in today’s Internet. As a result, the Internet’s baseline inter-domain protocol for connectivity (BGP) has remained largely unchanged, despite significant known flaws. The difficulty of deploying new protocols has also limited opportunities for (currently commoditized) transit providers to offer value-added routing services. To help, this paper proposes Darwin’s BGP (D-BGP), a modified version of BGP that can support evolvability to new protocols. D-BGP modifies BGP’s advertisements and advertisement processing based on requirements imposed by key evolvability scenarios, which we identified via analyses of recently proposed routing protocols.
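
A minimal sketch may help make the mechanism concrete. Assuming a hypothetical advertisement format (the class and field names below are ours, not D-BGP’s actual wire format), the key idea is that an advertisement carries opaque, per-protocol control information alongside BGP’s usual path vector, and a router forwards payloads for protocols it does not speak rather than stripping them:

```python
from dataclasses import dataclass, field

# Hypothetical sketch, not D-BGP's wire format: an advertisement that
# carries BGP's usual path vector plus opaque control information for
# other protocols, keyed by protocol name.
@dataclass
class Advertisement:
    prefix: str                                        # e.g., "10.0.0.0/8"
    as_path: list = field(default_factory=list)        # baseline BGP path
    protocol_info: dict = field(default_factory=dict)  # protocol -> opaque bytes

def process(ad: Advertisement, my_asn: int) -> Advertisement:
    """Append our ASN to the path, and pass through control information
    for protocols this router does not speak instead of stripping it;
    this pass-through is the kind of evolvability hook the abstract
    alludes to."""
    return Advertisement(ad.prefix, ad.as_path + [my_asn], dict(ad.protocol_info))

ad = Advertisement("10.0.0.0/8", [64512], {"new-proto": b"\x01\x02"})
print(process(ad, 64513))  # new-proto's payload survives the hop
```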

So, you want to trace your distributed system? Key design insights from years of practical experience

Technical reports
Raja R. Sambasivan, Rodrigo Fonseca, Ilari Shafer, Gregory R. Ganger
Carnegie Mellon Parallel Data Lab Technical Report CMU-PDL-14-102
Publication year: 2014

Note: This technical report has been superseded by our SoCC’16 paper, “Principled workflow-centric tracing of distributed systems.”

End-to-end tracing captures the workflow of causally-related activity (e.g., work done to process a request) within and among the components of a distributed system. As distributed systems grow in scale and complexity, such tracing is becoming a critical tool for management tasks like diagnosis and resource accounting. Drawing upon our experiences building and using end-to-end tracing infrastructures, this paper distills the key design axes that dictate trace utility for important use cases. Developing tracing infrastructures without explicitly understanding these axes and choices for them will likely result in infrastructures that are not useful for their intended purposes. In addition to identifying the design axes, this paper identifies good design choices for various tracing use cases, contrasts them to choices made by previous tracing implementations, and shows where prior implementations fall short. It also identifies remaining challenges on the path to making tracing an integral part of distributed system design.
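
To make “causally-related activity” concrete, here is a minimal sketch of the kind of context propagation such infrastructures rely on (the API is invented for illustration and is not the paper’s design): a trace ID and a parent ID travel with each request, so records emitted by different components can later be stitched into one workflow.

```python
import time
import uuid

TRACE_LOG = []  # a real infrastructure would ship records to a collector

def record(trace_id, span_id, parent_id, component, event):
    """Emit one trace record; causal structure is recoverable via parent_id."""
    TRACE_LOG.append({"trace_id": trace_id, "span_id": span_id,
                      "parent_id": parent_id, "component": component,
                      "event": event, "ts": time.time()})

def handle_frontend(request):
    trace_id = uuid.uuid4().hex          # one ID per end-to-end request
    span = uuid.uuid4().hex
    record(trace_id, span, None, "frontend", "request received")
    handle_backend(request, trace_id, parent_id=span)  # context propagated
    record(trace_id, span, None, "frontend", "reply sent")

def handle_backend(request, trace_id, parent_id):
    span = uuid.uuid4().hex
    record(trace_id, span, parent_id, "backend", "read from storage")

handle_frontend("GET /object/42")
for rec in TRACE_LOG:
    print(rec["component"], rec["event"], rec["parent_id"])
```

Decisions such as what to record, how to sample, and how to stitch and present workflows are the kinds of design axes the report examines.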

Visualizing request-flow comparison to aid performance diagnosis in distributed systems

Technical reports
Raja R. Sambasivan, Ilari Shafer, Michelle Mazurek, Gregory R. Ganger
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-13-104
Publication year: 2013

Note: This technical report has been superseded by our InfoVis’13 paper of the same name.

Distributed systems are complex to develop and administer, and performance problem diagnosis is particularly challenging. When performance degrades, the problem might be in any of the system’s many components or could be a result of poor interactions among them. Recent research efforts have created tools that automatically localize the problem to a small number of potential culprits, but effective visualizations are needed to help developers understand and explore their results. This paper compares side-by-side, diff, and animation-based approaches for visualizing the results of one proven automated localization technique called request-flow comparison. Via a 26-person user study, which included real distributed systems developers, we identify the unique benefits that each approach provides for different usage modes and problem types.

Visualizing request-flow comparison to aid performance diagnosis in distributed systems

Technical reports
Raja R. Sambasivan, Ilari Shafer, Michelle Mazurek, Gregory R. Ganger
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-12-102
Publication year: 2013

Note: This technical report has been superseded by our InfoVis’13 paper of the same name and CMU-PDL-13-104.

Distributed systems are complex to develop and administer, and performance problem diagnosis is particularly challenging. When performance decreases, the problem might be in any of the system’s many components or could be a result of poor interactions among them. Recent research has provided the ability to automatically identify a small set of most likely problem locations, leaving the diagnoser with the task of exploring just that set. This paper describes and evaluates three approaches for visualizing the results of a proven technique called “request-flow comparison” for identifying likely causes of performance decreases in a distributed system. Our user study provides a number of insights useful in guiding visualization tool design for distributed system diagnosis. For example, we find that both an overlay-based approach (e.g., diff) and a side-by-side approach are effective, with tradeoffs for different users (e.g., expert vs. not) and different problem types. We also find that an animation-based approach is confusing and difficult to use.
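
As an illustration of the overlay idea (this sketch is ours, not the study’s tool), a request flow can be reduced to edges annotated with latencies, and a “diff” view then marks edges that appeared, disappeared, or changed cost between the two flows:

```python
# Two request flows as {edge: latency_us}; the values are made up.
before = {("frontend", "cache"): 50, ("cache", "disk"): 400}
after  = {("frontend", "cache"): 50, ("cache", "disk"): 4000,
          ("cache", "lock-mgr"): 300}

def diff_flows(before, after):
    """Print a textual diff of two request flows."""
    for edge in sorted(set(before) | set(after)):
        b, a = before.get(edge), after.get(edge)
        if b is None:
            print(f"+ {edge}: {a}us (only in 'after')")
        elif a is None:
            print(f"- {edge}: {b}us (only in 'before')")
        elif a != b:
            print(f"~ {edge}: {b}us -> {a}us")

diff_flows(before, after)
```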

Automation without predictability is a recipe for failure

Technical reports
Raja R. Sambasivan, Gregory R. Ganger
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-11-101
Publication year: 2011

Note: This technical report has been superseded by our HotCloud’12 paper, “Automated diagnosis without predictability is a recipe for failure.”

Automated management seems a must, as distributed systems and datacenters continue to grow in scale and complexity. But, automation of performance problem diagnosis and tuning relies upon predictability, which in turn relies upon low variance—most automation tools aren’t effective when variance is regularly high. This paper argues that, for automation to become a reality, system builders must treat variance as an important metric and make conscious decisions about where to reduce it. To help with this task, we describe a framework for understanding sources of variance and describe an example tool for helping identify them.
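
As a sketch of the kind of signal such a tool might surface (the metric choice here is our assumption, not the paper’s exact algorithm), components can be ranked by the squared coefficient of variation of their response times, which normalizes variance by the mean:

```python
import statistics

# Made-up response times: one component with occasional large outliers,
# one with consistently tight latencies.
latencies_ms = {
    "metadata-server": [5, 5, 6, 5, 300, 5, 6],
    "storage-node":    [20, 21, 19, 20, 22, 20],
}

def c_squared(xs):
    """Squared coefficient of variation: variance / mean^2."""
    mean = statistics.fmean(xs)
    return statistics.pvariance(xs) / (mean * mean)

# Rank components from least to most predictable.
for name in sorted(latencies_ms, key=lambda n: -c_squared(latencies_ms[n])):
    print(f"{name}: C^2 = {c_squared(latencies_ms[name]):.2f}")
```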

Diagnosing performance problems by visualizing and comparing system behaviours

Technical reports
Raja R. Sambasivan, Alice X. Zheng, Elie Krevat, Spencer Whitman, Michael Stroucken, William Wang, Lianghong Xu, Gregory R. Ganger
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-10-103
Publication year: 2010

Note: This technical report has been superseded by our NSDI’11 paper, “Diagnosing performance changes by comparing request flows,” and CMU-PDL-10-107.

Spectroscope is a new toolset aimed at assisting developers with the long-standing challenge of performance debugging in distributed systems. To do so, it mines end-to-end traces of request processing within and across components. Using Spectroscope, developers can visualize and compare system behaviours between two periods or system versions, identifying and ranking various changes in the flow or timing of request processing. Examples of how Spectroscope has been used to diagnose real performance problems seen in a distributed storage system are presented, and Spectroscope’s primary assumptions and algorithms are evaluated.

Diagnosing performance changes by comparing system behaviors

Technical reports
Raja R. Sambasivan, Alice X. Zheng, Elie Krevat, Spencer Whitman, Michael Stroucken, William Wang, Lianghong Xu, Gregory R. Ganger
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-10-107
Publication year: 2010

Note: This technical report has been superseded by our NSDI’11 paper, “Diagnosing performance changes by comparing request flows.”

The causes of performance changes in a distributed system often elude even its developers. This paper develops a new technique for gaining insight into such changes: comparing system behaviors from two executions (e.g., of two system versions or time periods). Building on end-to-end request flow tracing within and across components, algorithms are described for identifying and ranking changes in the flow and/or timing of request processing. The implementation of these algorithms in a tool called Spectroscope is described and evaluated. Five case studies are presented of using Spectroscope to diagnose performance changes in a distributed storage system caused by code changes and configuration modifications, demonstrating the value and efficacy of comparing system behaviors.
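
The shape of the comparison can be sketched in a few lines (this is illustrative only; Spectroscope’s real algorithms use richer statistics): group requests from the two executions by flow structure, then rank the groups by how much their aggregate cost changed.

```python
from collections import defaultdict

def group(requests):
    """Bucket (flow_structure, latency_ms) pairs by flow structure."""
    groups = defaultdict(list)
    for path, latency in requests:
        groups[path].append(latency)
    return groups

# Made-up executions: the slow disk-bound flow becomes far more common.
before = [("read->cache", 1.0)] * 90 + [("read->cache->disk", 10.0)] * 10
after  = [("read->cache", 1.0)] * 50 + [("read->cache->disk", 10.0)] * 50

gb, ga = group(before), group(after)
for path in sorted(set(gb) | set(ga),
                   key=lambda p: abs(sum(ga.get(p, [])) - sum(gb.get(p, []))),
                   reverse=True):
    print(f"{path}: {sum(gb.get(path, [])):.0f} -> {sum(ga.get(path, [])):.0f} total ms")
```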

Improving small file performance in object-based storage

Technical reports
James Hendricks, Raja R. Sambasivan, Shafeeq Sinnamohideen, Gregory R. Ganger

Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-06-104
Publication year: 2006

This paper proposes architectural refinements, server-driven metadata prefetching and namespace flattening, for improving the efficiency of small file workloads in object-based storage systems. Server-driven metadata prefetching consists of having the metadata server provide information and capabilities for multiple objects, rather than just one, in response to each lookup. Doing so allows clients to access the contents of many small files for each metadata server interaction, reducing access latency and metadata server load. Namespace flattening encodes the directory hierarchy into object IDs such that namespace locality translates to object ID similarity. Doing so exposes namespace relationships among objects (e.g., as hints to storage devices), improves locality in metadata indices, and enables the use of object ID ranges to exploit those relationships. Trace-driven simulations and experiments with a prototype implementation show significant performance benefits for small file workloads.
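
The namespace-flattening idea lends itself to a short sketch (field widths and the hash are our assumptions, not the paper’s encoding): hashing each directory component into a fixed-width field of the object ID makes siblings share an ID prefix, so a whole directory maps to a contiguous ID range.

```python
import hashlib

FIELD_BITS, DEPTH = 16, 3  # 3 directory levels, 16 bits per field (illustrative)

def field_hash(name):
    """Hash one path component into a FIELD_BITS-wide field."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:2], "big")

def object_id(path):
    parts = path.strip("/").split("/")
    dirs, leaf = parts[:-1], parts[-1]
    oid = 0
    for d in (dirs + [""] * DEPTH)[:DEPTH]:  # pad shallow paths
        oid = (oid << FIELD_BITS) | field_hash(d)
    return (oid << FIELD_BITS) | field_hash(leaf)

a = object_id("/home/raja/a.txt")
b = object_id("/home/raja/b.txt")
print(hex(a), hex(b))
print("same directory range:", a >> FIELD_BITS == b >> FIELD_BITS)
```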

Eliminating cross-server operations in scalable file systems

Technical reports
James Hendricks, Shafeeq Sinnamohideen, Raja R. Sambasivan, Gregory R. Ganger
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-06-105
Publication year: 2006

Note: This technical report has been superseded by our USENIX’10 paper, “A transparently-scalable metadata service for the Ursa Minor storage system.”

Distributed file systems that scale by partitioning files and directories among a collection of servers inevitably encounter cross-server operations. A common example is a RENAME that moves a file from a directory managed by one server to a directory managed by another. Systems that provide the same semantics for cross-server operations as for single-server operations traditionally implement dedicated protocols for these rare operations. This paper suggests an alternate approach that exploits the existence of dynamic redistribution functionality (e.g., for load balancing, incorporation of new servers, and so on). When a client request would involve files on multiple servers, the system can redistribute those files onto one server and have it service the request. Although such redistribution is more expensive than a dedicated cross-server protocol, the rareness of such operations makes the overall performance impact minimal. Analysis of NFS traces indicates that cross-server operations make up fewer than 0.001% of client requests, and experiments with a prototype implementation show that the performance impact is negligible when such operations make up as much as 0.01% of operations. Thus, when dynamic redistribution functionality exists in the system, cross-server operations can be handled with little additional implementation complexity.
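
A toy sketch of the approach (server names and data structures are hypothetical): when a RENAME spans two metadata servers, the file is first migrated to the destination’s server using the same machinery that load balancing would use, after which the rename is an ordinary single-server operation.

```python
class MetadataServer:
    def __init__(self, name):
        self.name, self.files = name, {}

def server_for(path, servers):
    """Toy partitioning: assign each directory to a server by hash."""
    return servers[hash(path.rsplit("/", 1)[0]) % len(servers)]

def rename(src, dst, servers):
    s_srv, d_srv = server_for(src, servers), server_for(dst, servers)
    if s_srv is not d_srv:
        # Reuse dynamic-redistribution machinery: migrate the file so the
        # operation becomes single-server. Such operations are rare, so
        # the extra data movement has negligible overall cost.
        d_srv.files[src] = s_srv.files.pop(src)
        s_srv = d_srv
    s_srv.files[dst] = s_srv.files.pop(src)  # now an ordinary local rename

servers = [MetadataServer("ms0"), MetadataServer("ms1")]
server_for("/a/f", servers).files["/a/f"] = "inode-for-f"
rename("/a/f", "/b/g", servers)
print([(s.name, s.files) for s in servers])
```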

Ursa Minor: versatile cluster-based storage

Technical reports
Michael Abd-El-Malek, William V. Courtright II, Chuck Cranor, Gregory R. Ganger, James Hendricks, Andrew J. Klosterman, Michael Mesnier, Manish Prasad, Brandon Salmon, Raja R. Sambasivan, Shafeeq Sinnamohideen, John D. Strunk, Eno Thereska, Matthew Wachs, Jay J. Wylie
Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-05-104
Publication year: 2005

No single encoding scheme or fault model is optimal for all data. A versatile storage system allows them to be matched to access patterns, reliability requirements, and cost goals on a per-data item basis. Ursa Minor is a cluster-based storage system that allows data-specific selection of, and on-line changes to, encoding schemes and fault models. Thus, different data types can share a scalable storage infrastructure and still enjoy specialized choices, rather than suffering from “one size fits all.” Experiments with Ursa Minor show performance benefits of 2–3× when using specialized choices as opposed to a single, more general, configuration. Experiments also show that a single cluster supporting multiple workloads simultaneously is much more efficient when the choices are specialized for each distribution rather than forced to use a “one size fits all” configuration. When using the specialized distributions, aggregate cluster throughput nearly doubled.
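
Versatility, in this sense, can be pictured as each data item carrying its own distribution, an encoding scheme plus fault model, rather than inheriting one cluster-wide choice. The sketch below is illustrative; the parameter names are ours, not Ursa Minor’s interface.

```python
from dataclasses import dataclass

@dataclass
class Distribution:
    encoding: str              # e.g., "replication" or "erasure"
    n: int                     # fragments written
    m: int                     # fragments needed to reconstruct
    crash_faults_tolerated: int

# Per-data-item choices: trade storage overhead against fault tolerance.
DISTRIBUTIONS = {
    "oltp-db":     Distribution("replication", n=3,  m=1, crash_faults_tolerated=2),
    "scratch-dir": Distribution("replication", n=1,  m=1, crash_faults_tolerated=0),
    "archive":     Distribution("erasure",     n=10, m=7, crash_faults_tolerated=3),
}

def storage_overhead(d: Distribution) -> float:
    """Raw bytes stored per logical byte."""
    return d.n / d.m

for item, d in DISTRIBUTIONS.items():
    print(f"{item}: {d.encoding} {d.m}-of-{d.n}, "
          f"overhead {storage_overhead(d):.2f}x, "
          f"tolerates {d.crash_faults_tolerated} crash fault(s)")
```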