The term ‘run book automation’ has a long history in the IT industry. Find out what the Fortra experts have to say about the history—and the future—of this topic.
Pat Cameron, Director of Automation Technologies, Automate
Automating the run book is a huge productivity gain for the operations team in any data center. Automation frees operations to spend more time analyzing job schedules, troubleshooting errors that occur during processing, and augmenting the Help Desk staff when needed.
In 2015, data centers are more automated than ever. Run books may still be used for periodic checks to make sure that tasks and jobs are running on schedule, but these checklists no longer need to be updated manually. Automating a task also documents when it runs and keeps a history of each run after it completes. The benefit of this documentation is that operations, management, and auditors can view job schedules and reports online or pull them from the database on demand. And because the scheduler can notify staff of scheduling exceptions, including errors and delays, the need for manual checklists is eliminated.
Now that job processes running on individual servers or virtual machines are almost completely automated, the next step is to automate the entire enterprise so that it can be managed from a single server or a single display. After that comes automating the availability of computing resources through cloud technology and the virtualization of those resources. Finally, the next level of ‘run book’ automation will be triggering the movement of resources as demand changes throughout the day.
Robin Tatam, Director of Security Technologies, Powertech
Many people don’t consider run book automation to be a component of their security plan; however, it is an important consideration for ensuring consistency and accountability. Security and compliance concern not only authorizations and user permissions but also, by extension, knowing the current state, completion status, and timing of critical processes and jobs executing on a server. Nefarious acts are arguably less likely to occur than simple human error, such as running a process out of order or failing to notice that an error has occurred, but both risks are reduced by relying less on people and manual methods. Large organizations may run thousands of transactions each business day; without any form of automation, managing them is logistically challenging and error-prone.
Modern data centers are rarely homogeneous and typically play host to a wide variety of server technologies. Managing jobs across a combination of Windows, AIX, and Linux servers, as well as enterprise operating systems such as IBM i, increases the need for skilled staff and adds operational complexity. The full value of automating tasks across all of these disparate platforms may not be recognized immediately, but I have never spoken with an organization that would consider reverting to manual methods. In fact, most of them proudly quote the staggering number of jobs now managed by their automation solution, which frees their operations and support teams to focus on more business-centric projects.
Governance, risk, and compliance (GRC) aims to keep the server available for legitimate business purposes. Obviously, any time there is a data breach, there is a high likelihood that the data will be used by unauthorized entities, and the breach may cause an immediate business interruption. But it is not only about breaches. Most regulatory and best-practice mandates also require that servers be kept at a current patch level and on a supported version of the operating system. This is partly due to the elevated vulnerability risk associated with out-of-date server and application code, but there is also the business risk that arises when the code “breaks” and the enterprise is unable to obtain timely support. All of these factors can contribute to unplanned downtime. Organizations will feel increasing pressure to standardize, because staying abreast of code updates and automating run book tasks can effectively reduce risk.
Kevin Jackson, Technical Solutions Consultant, Intermapper
The idea of run book automation can be both an advantageous and a challenging practice for any environment. Its primary benefit is reducing some of the operational costs associated with day-to-day tasks and business processes. With automation, we can offload some of these daily tasks, reduce the risk associated with human intervention, and improve the quality of service we provide.
Run book automation has become a key cog in the technology management wheel. Businesses asking questions like “how can we get the most out of our infrastructure?” and “how can we save on operations costs?” will look to automation technology first. The reason is actually quite simple: automation can step in when there are not enough IT professionals to meet the service needs and SLA commitments required by customers.
I believe that, as automation tools continue to improve, you will start to see a steady flow of regular people using these tools in their daily lives, in addition to an increasing number of IT professionals relying on them. Although the promise of automation solutions is to “set it and forget it,” technology can still play awful jokes on us, so we can’t take our eyes off the processes just yet.
Mike Stegeman, Senior Data Access Consultant, SEQUEL
At businesses, governments, and universities, manual run books were a way to ensure that each step in a computing process was completed. Automated run books gave operators a way to execute multiple ‘normal’ steps with one process. This was a huge advance: daily batch jobs could be stacked up one after the other and placed into a new, single-step process, leaving the operator only the monitoring. When everything went well, the new process was great.
But soon, operators realized that finding which step failed or, more importantly, where to restart the process, was difficult. What steps did complete? What step failed? Where should the process be restarted to ensure accurate processing?
New automation technologies have addressed many of these concerns. When an automated process is set up, you can require that one step complete before the next begins. You can run processes concurrently. If a step fails, you can specify whether the process may continue or must stop entirely. And when restarting the full process, you can tell it where to begin, as the sketch below illustrates.
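To make these concepts concrete, here is a minimal, product-agnostic sketch in Python of a run book runner with per-step failure policies and a restart point. Every name in it (the step names, the placeholder commands, and the run_book function) is an assumption made for illustration rather than any particular scheduler’s API; real workload automation tools provide this logic, along with concurrency and auditing, out of the box.

import subprocess

# Each step: (name, shell command, okay to continue if this step fails).
# The step names and commands are placeholders for real batch jobs.
RUN_BOOK = [
    ("extract", "python extract.py", False),  # must succeed before the next step
    ("report",  "python report.py",  True),   # a failure here is tolerable
    ("archive", "python archive.py", False),
]

def run_book(steps, restart_from=None):
    """Run steps in order; optionally resume from a named restart point."""
    resumed = restart_from is None
    for name, cmd, continue_on_error in steps:
        if not resumed:
            if name == restart_from:
                resumed = True  # pick up processing again at this step
            else:
                print(f"skipping {name} (completed on a previous run)")
                continue
        if subprocess.run(cmd, shell=True).returncode != 0:
            if continue_on_error:
                print(f"{name} failed; policy allows the process to continue")
            else:
                print(f"{name} failed; stopping. Resume with restart_from={name!r}")
                return False
    return True

# The first run executes every step; after fixing a failure, resume in place:
#   run_book(RUN_BOOK)
#   run_book(RUN_BOOK, restart_from="report")

The continue_on_error flag captures the “okay to continue” decision for each step, and restart_from captures the restart point, so a failed nightly run can resume exactly where it stopped instead of being rerun from the top.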
As more organizations add different platforms to their environments, the need to schedule jobs across platforms becomes more important. Running one job that includes processes occurring on separate platforms is becoming the norm. Operations will need to monitor job schedules that run not only across various platforms but across other business applications as well.
Tom Huntington, Vice President of Technical Services, Fortra
When it comes to automation, we still recommend asking the administrators and operations teams to document the processes they run manually. What we typically find are items like checking for file arrivals or verifying that a process with a history of failures is still running.
This type of documentation is a great starting point for any automation project. It identifies tasks that were previously too difficult to automate, perhaps because they required complex scripting, and so had to be run manually by operations teams.
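As an illustration of how simple such a task can be once automated, here is a hypothetical file-arrival check in Python. The file path, polling interval, waiting window, and notify() stub are all assumptions made for the sketch; a business process automation tool would provide these as configurable, audited building blocks instead of a custom script.

import time
from pathlib import Path

EXPECTED_FILE = Path("/data/inbound/daily_feed.csv")  # hypothetical path
POLL_SECONDS = 60                 # how often to look for the file
MAX_WAIT_SECONDS = 30 * 60        # alert if the file is 30 minutes late

def notify(message):
    # Stand-in for a real alert: email, paging, or a ticketing system.
    print(f"ALERT: {message}")

def wait_for_file(path, timeout):
    """Poll until the file arrives or the time window closes."""
    waited = 0
    while waited < timeout:
        if path.exists():
            return True
        time.sleep(POLL_SECONDS)
        waited += POLL_SECONDS
    return False

if __name__ == "__main__":
    if wait_for_file(EXPECTED_FILE, MAX_WAIT_SECONDS):
        print(f"{EXPECTED_FILE} arrived; downstream jobs can start")
    else:
        notify(f"{EXPECTED_FILE} did not arrive in the expected window")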
Business process automation software can automate these types of tasks and document the workflows in useful diagrams, giving you greater visibility into your environment. Since manual processes should already be documented, automation managers should ask to see that documentation and identify which items are candidates for workload automation.