There are many ways for IT shops to tackle the problem of mainframe monitoring — we prefer a one stone, many birds approach.
There’s nothing small about big iron environments, which is to say that they generate data — and problems — of equal proportion. To maintain a tight command over processing stacks of such size, IT shops have to take a highly structured approach to monitoring their mountains of performance data. But as it is with any mountain, different approaches have their own peaks and valleys, and it’s up to IT professionals to determine which tools are right for their situation.
Yet while most mainframe environments offer only a handful of commonly employed monitoring tool options, each strategy leaves something to be desired.
Three Routes Up the Mainframe
As TechTarget notes, IT shops generally employ one of three varieties of monitoring tools: real-time, near-time, and post-processor monitors. Each, as is implied, refers to the timescale on which the performance monitor is effective, and each has its advantages and drawbacks.
Real-time monitors: Real-time monitors have the distinct advantage of immediacy; if a problem occurs or a threshold is exceeded, you can begin diagnosing and addressing the issue right away, potentially before services are significantly impacted. Of course, their usefulness also tends to be limited to the immediate present, since it's difficult to extrapolate long-term performance trends from possibly ephemeral problems.
Their real catch is that, in order to measure real-time performance, probes must be inserted into mission-critical system paths, inviting the possibility of interference with those very processes. Such interference isn't altogether common, but it's why the processes selected for monitoring (I/O, memory, etc.) should be chosen judiciously.
Near-time monitors: These tools are the great compromisers of mainframe monitors, sitting between immediate reporting and long-term historical trending. They collect chunks of performance data retroactively, usually in intervals of a minute or more, which enables quick resolution of recent problems while removing the need for invasive monitoring probes. At the same time, near-time monitors may allow fleeting problems (those shorter than 60 seconds, or whatever the preset interval is) to fall through the cracks.
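To see why short-lived problems can slip past a near-time monitor, consider a minimal sketch (hypothetical values, not any particular product's behavior) in which per-second samples are rolled up into 60-second intervals. A brief CPU spike inside an interval gets averaged away, while a real-time view of the raw samples would have caught it:

```python
INTERVAL = 60  # near-time rollup period, in seconds (an assumed preset value)

def rollup(samples, interval=INTERVAL):
    """Average per-second samples into fixed near-time intervals."""
    return [
        sum(samples[i:i + interval]) / len(samples[i:i + interval])
        for i in range(0, len(samples), interval)
    ]

# Simulated per-second CPU utilization: a steady 20%, with a
# 10-second spike to 100% inside the first interval.
cpu = [20.0] * 120
for t in range(25, 35):
    cpu[t] = 100.0

# The near-time view dilutes the 100% spike down to roughly 33%
# for the first interval, so no threshold alarm would fire.
print(rollup(cpu))

# A real-time monitor looking at the raw per-second samples
# would have seen the full 100% spike.
print(max(cpu))
```

The same arithmetic applies to any sub-interval event: the longer the rollup period, the more a transient problem is smoothed into the background.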
Post-processor monitors: Post-processors help IT shops make sense of large amounts of long-term performance data, enabling them to spot trends and accurately plan future capacity. Of course, since such data is collected after the fact, post-processors aren't useful for solving any sort of immediate issue. Additionally, their significant CPU requirements can be burdensome. Still, this kind of monitoring is necessary for any effective mainframe capacity management plan.
Ideally, IT departments should support some degree of each capability; choosing one strategy over another necessarily leaves the department wanting in some category. And the strategies aren't mutually exclusive: performing real-time mainframe monitoring doesn't preclude near-time or long-term monitoring.
For these reasons, Fortra has wrapped these three tools into one: Vityl Capacity Management, which collects and displays real-time performance data while also storing and analyzing historical data in order to predict future trends. Vityl also provides deep retroactive insight, tracking process-level details at granularities from one second up to every few minutes, depending on your needs. And unlike the heavily involved, DIY model that many monitoring tools employ, it comes ready to track performance out of the box.
Big iron environments, almost as a rule, present IT professionals with a litany of operational choices — selecting which performance monitoring strategy to use shouldn’t be one of them. As IT tools become more advanced and comprehensive, it becomes easier for IT to cover all their bases at once.