Guide
DevOps Development: Keeping the Lights On
Overview: The DevOps methodology embodies two core philosophies: decreasing the lead time of software deployment and automating delivery and testing. DevOps emerged from the agile development movement as a practical response to traditional, phase-based or “waterfall” development, which is inefficient and labor-intensive. Traditional methods should be phased out, and companies...
Guide
Dashboards Don't Work (Unless You Have a Metrics Management Strategy)
Technology has had a tremendous impact on how today’s businesses pursue continued growth and improvement. Whatever business they are in, executives everywhere are investing in technology that improves their business processes, gets them ahead of the competition, and widens their margins. Ultimately, the return on that investment is determined by how well technology supports a business’s ability to...
Guide
Health and Risk: A New Paradigm for Capacity Management
Capacity management, considered by top analyst firms to be an essential process in any large IT organization, is often so complex that it cannot be implemented effectively in today’s accelerated business world. Changing priorities, increasing complexity, and scalable cloud infrastructure have made traditional capacity management models less relevant. A new paradigm for capacity management is...
Guide
How to Manage IT Resource Consumption
At an application level with Vityl Capacity Management
In this guide, John Miecielica of Metavante provides a step-by-step example showing how he uses Vityl Capacity Management to analyze IT resource consumption at the application level. This key capability is especially important in today’s environments, where multiple applications may run on one server or multiple servers might be required to...
Guide
Commercial Clusters and Scalability
In this paper we present an introductory analysis of throughput scalability for update-intensive workloads (such as those measured by the TPC-C or TPC-W benchmarks) and of how that scaling is limited by serialization effects in the software-hardware combination that comprises any platform.
Guide
UNIX Load Average Part 1: How It Works
In this online article, Dr. Gunther digs down into the UNIX kernel to find out how load averages (the “LA Triplets”) are calculated and how appropriate they are as capacity planning metrics.
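The mechanism the article uncovers is an exponentially damped moving average of the run-queue length, sampled every few seconds. A minimal Python sketch of that idea (the 5-second sampling interval follows the classic kernel behavior; the function names are illustrative, not the kernel's):

```python
import math

SAMPLE_INTERVAL = 5  # seconds between kernel samples of the run queue

def damping_factor(window_seconds):
    """Exponential decay factor for a given averaging window."""
    return math.exp(-SAMPLE_INTERVAL / window_seconds)

def update_load(load, nrun, window_seconds):
    """One sampling step: damp the old average toward the
    current run-queue length nrun."""
    e = damping_factor(window_seconds)
    return load * e + nrun * (1.0 - e)

# Simulate one runnable task held for a full minute, tracking the
# 1-minute load average from an idle start.
load1 = 0.0
for _ in range(60 // SAMPLE_INTERVAL):
    load1 = update_load(load1, nrun=1, window_seconds=60)

# load1 ≈ 0.632 (exactly 1 - e^-1): even after a fully busy minute,
# the 1-minute average has only risen about 63% of the way to 1.0.
```

That 63% figure illustrates why the triplets lag behind instantaneous demand, which is central to judging their value as capacity planning metrics.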
Guide
UNIX Load Average Part 2: Not Your Average Average
This is the second in a two-part series exploring the use of UNIX load averages in performance analysis and capacity planning.
Guide
UNIX Load Average: Reweighed
This is an unexpected Part 3 to the discussion of the UNIX load average metric, answering the question of where the weighting factor comes from.
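As a taste of the answer, the weighting factor for an averaging window is exp(-interval/window), and the Linux kernel precomputes these weights as fixed-point integer constants. A short sketch, assuming the standard 5-second sampling interval (names here are illustrative):

```python
import math

FSHIFT = 11              # bits of fixed-point precision
FIXED_1 = 1 << FSHIFT    # 1.0 in fixed-point form (2048)

def fixed_point_weight(window_seconds, interval=5):
    """Damping weight exp(-interval/window), scaled to fixed point."""
    return round(FIXED_1 * math.exp(-interval / window_seconds))

# The familiar 1-, 5-, and 15-minute windows yield the constants
# found in the Linux kernel source (EXP_1, EXP_5, EXP_15).
weights = [fixed_point_weight(w) for w in (60, 300, 900)]
# weights → [1884, 2014, 2037]
```

The longer the window, the closer its weight sits to FIXED_1, which is why the 15-minute average responds so sluggishly.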
Guide
How to Get Unbelievable Load Test Results
This article is about delusions that arise from incorrect interpretation of load test measurements.
Guide
Capacity Calculations: Handle with Care
This paper discusses how to avoid calculation results that are more precise than the precision of the corresponding measurement input data justifies.
Guide
Of Buses and Bunching: Strangeness in the Queue
This article explains how correlated or bunched requests can impact capacity planning results.
Guide
How to Measure an Elephant
Guide
Evaluating Scalability Parameters: A Fitting End
This is the final online article in the series on application scalability. Here, you will learn how to determine the values of the parameters that control scalability.
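Gunther's scalability model expresses relative capacity as C(N) = N / (1 + α(N−1) + βN(N−1)), and fitting the parameters reduces to a quadratic regression on transformed data (x = N−1, y = N/C(N) − 1). The article develops the full procedure; this is only a minimal sketch on synthetic, noise-free data with invented parameter values:

```python
def usl_capacity(n, alpha, beta):
    """Universal Scalability Law: relative capacity at concurrency n."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

def fit_usl(points):
    """Least-squares fit of y = c1*x + c2*x^2 through the origin,
    where x = n - 1 and y = n/C(n) - 1; then alpha = c1 - c2, beta = c2."""
    sx2 = sx3 = sx4 = sxy = sx2y = 0.0
    for n, c in points:
        x = n - 1.0
        y = n / c - 1.0
        sx2 += x * x
        sx3 += x ** 3
        sx4 += x ** 4
        sxy += x * y
        sx2y += x * x * y
    # Solve the 2x2 normal equations for (c1, c2) by Cramer's rule.
    det = sx2 * sx4 - sx3 * sx3
    c1 = (sxy * sx4 - sx2y * sx3) / det
    c2 = (sx2 * sx2y - sx3 * sxy) / det
    return c1 - c2, c2

# Synthetic measurements generated from assumed parameters
# alpha = 0.05 (contention) and beta = 0.001 (coherency delay).
alpha, beta = fit_usl([(n, usl_capacity(n, 0.05, 0.001))
                       for n in (1, 2, 4, 8, 16, 32)])
# On this noise-free data the fit recovers alpha ≈ 0.05, beta ≈ 0.001.
```

With real benchmark data the regression would not recover the parameters exactly, which is precisely the fitting subtlety the article addresses.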
Guide
The “LA Triplets” Quiz
This is a little quiz to test your understanding of the triplet of numbers that appear in the UNIX® load average (LA) performance metric.
Guide
How to Write Application Performance Agents in TeamQuest Performance Software 7.2 or 8
TeamQuest (now Vityl Capacity Management) provides unintrusive mechanisms for instrumenting applications and analyzing application performance. Neil Gunther describes how to use those mechanisms.