Nearly every business is moving at least some of its applications to the cloud, whether it’s launching a new application, migrating part of a datacenter, or transforming to DevOps.
If your company is moving to public or private cloud, you’ll need to monitor your cloud infrastructure to make sure that you are managing cost and optimizing resource consumption.
Our newest release of Vityl Capacity Management increases its support for Azure virtual machines. In this webinar, you’ll see:
- Explanation of what Azure Monitor is
- Critical metrics needed for monitoring Azure virtual machines
- Vityl’s unique capabilities in monitoring Azure and other cloud environments
Scott Adams: 00:03 Thank you for attending today's webinar. My name is Scott Adams. We're going to get started here in a few moments. (silence)
Scott Adams: 00:25 Okay. I think we'll go ahead and get started. So today we're going to talk a little bit about what's new in Vityl Capacity Management. Our particular focus today is going to be on Azure monitoring.
Scott Adams: 00:40 As I mentioned, my name is Scott Adams. I'm the chief product owner for Vityl Capacity Management at HelpSystems, and I've been working with analysts and IT technicians and planners for a number of years to develop solutions that help with monitoring, analytics, and planning.
Scott Adams: 01:03 Our agenda for today is, I'm going to start off by just doing a quick overview of what our solution is. We'll talk a little bit about what's new in version 2.4, and then we'll spend some time discussing hybrid cloud monitoring and analytics, and then we'll follow up with a summary and questions. If you do happen to have questions, go ahead and add those to the questions area on the control panel for the GoToMeeting, and we'll answer those at the end.
Scott Adams: 01:38 Okay. So what is our objective with Vityl Capacity Management? What we want to be doing for you is ensuring that the applications that you have running in your hybrid environments are operating in both a reliable and a cost-effective manner. And how we achieve that is by providing a series of workflows that allow you to ensure that that's happening.
Scott Adams: 02:05 We start off by providing awareness of what's going on in your environment. And we do that by way of something we call Key Performance Indicators. We've got three Key Performance Indicators: health, risk, and efficiency. Health is intended to let you know whether you have any current problems that you should be addressing. Risk is related to whether there are any impending problems that you need to get in front of and potentially mitigate before they occur. And then efficiency is something new to 2.4 as well. That's all about understanding, am I using my systems in the most cost-effective manner? Do I have infrastructure in place that I'm not using or underutilizing that I can reclaim to get back some of that cost?
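The three KPIs described above can be sketched in a few lines. This is a minimal illustration, assuming simple threshold rules over CPU utilization samples; the thresholds, field names, and scoring logic are hypothetical, not Vityl's actual analytics:

```python
from statistics import mean

# Illustrative thresholds -- not Vityl's actual values.
HEALTH_LIMIT = 90.0      # current CPU % above this flags a health problem
RISK_GROWTH = 5.0        # % growth per sample above this flags future risk
EFFICIENCY_FLOOR = 10.0  # average CPU % below this flags an underused system

def score_kpis(cpu_history):
    """Derive health/risk/efficiency flags from a list of CPU % samples."""
    current = cpu_history[-1]
    growth = (cpu_history[-1] - cpu_history[0]) / max(len(cpu_history) - 1, 1)
    return {
        "health": "problem" if current > HEALTH_LIMIT else "ok",
        "risk": "at-risk" if growth > RISK_GROWTH else "ok",
        "efficiency": "underused" if mean(cpu_history) < EFFICIENCY_FLOOR else "ok",
    }

# Low, flat usage trips only the efficiency flag.
flags = score_kpis([5.0, 6.0, 4.0, 5.0])
```

A real implementation would of course look at more than CPU, but the shape is the same: one rule per KPI, evaluated continuously in the background.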
Scott Adams: 02:54 With regards to health, when we do identify that there's a performance issue or a problem that's occurring now, we can transition you over into a troubleshooting mode by way of Performance Monitor. Performance Monitor is there to help you investigate, look for a root cause, and understand what's going on right now, what may be the cause of a particular application or service problem. It also lets you go back and see recent and longer-term history to identify, is this a situation that's occurred in the past? Do I have any trends? Is there any cyclical behavior here? Again, it's all oriented around understanding what may have been, or is, the cause of a particular problem and resolving that problem.
Scott Adams: 03:42 On the flip side, if we're identifying for you that there's a potential problem that's forthcoming or coming in the future, what can we do to mitigate that problem? And that's where we start to talk about things like what if scenarios and planning to move workload, to change infrastructure settings, to add infrastructure, to remove infrastructure. So again, getting ahead of any impending performance problem is what we're after there.
Scott Adams: 04:15 And underpinning all of this, we've got something called Automated Analytics. Automated Analytics, what that does for us is it allows us to run analysis unattended and then notify you via KPIs whether or not you're having any performance issues or if there's any performance issues coming up. It also serves as an opportunity to create custom reports and analytics that may be unique to your environment, and schedule those to be run and provided to whomever it is that you want to see those reports.
Scott Adams: 04:50 So that's a high level overview of the various different capabilities that we have in the Vityl Capacity Management solution.
Scott Adams: 05:00 Today we're going to talk about what's new in version 2.4, specifically the Azure Monitor integration. I'll wait on that for a moment, but just briefly wanted to touch on some of the other capabilities that we've made available in 2.4. This is our fourth release of Vityl Capacity Management. We did add high resolution forensic analysis. What this is, is our ability to collect, manage, store, and provide analysis down to a one-second granularity. So you may have unique situations with applications or services where you need to have that really detailed level of understanding, and we can provide that for you with our high resolution forensic analysis.
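One common way to handle one-second data like this is to roll it up into coarser buckets for long-term storage while keeping the peaks that forensic analysis depends on. This is a generic sketch of that idea, not Vityl's actual storage scheme:

```python
def rollup(samples, width=60):
    """Roll one-second samples (timestamp, value) into fixed-width buckets,
    keeping both the average and the peak so spikes aren't averaged away."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts // width, []).append(value)
    return [
        {"start": b * width,
         "avg": sum(vals) / len(vals),
         "peak": max(vals)}
        for b, vals in sorted(buckets.items())
    ]

# A one-second spike at t=30 survives in the first bucket's peak
# even though it barely moves the bucket's average.
points = [(t, 100.0 if t == 30 else 5.0) for t in range(120)]
summary = rollup(points)
```

The design point is that an average-only rollup would hide the spike entirely, which is exactly the kind of event forensic analysis needs to see.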
Scott Adams: 05:47 Automated workloads. What we're doing here is we're providing workload characterization so you can group together work that comprises a service or application. And we do that for you automatically by combining those things related to a particular user or a particular service or a particular application, so you don't have to go in and define those things yourself. We've got process reduction, which has affinity with the workload characterization as well, where we want to reduce the volume of data, but at the same time preserve all the details necessary to do root cause analysis, planning activity, and other types of analysis.
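The automatic workload characterization described above amounts to grouping per-process measurements by a shared key such as user, service, or application. A minimal sketch with made-up process samples (the record layout is illustrative, not Vityl's data model):

```python
from collections import defaultdict

# Hypothetical process samples: (process_name, user, cpu_seconds).
samples = [
    ("httpd",   "webapp", 12.0),
    ("httpd",   "webapp",  8.0),
    ("mysqld",  "db",     20.0),
    ("python3", "batch",   3.5),
]

def characterize(records, key_index=1):
    """Group per-process data into workloads keyed by user (index 1)
    or process name (index 0), summing CPU per workload."""
    workloads = defaultdict(float)
    for record in records:
        workloads[record[key_index]] += record[2]
    return dict(workloads)

by_user = characterize(samples)  # one workload per user
```

Grouping this way also illustrates the process-reduction point: the per-workload totals are far fewer rows than the raw process samples, yet the breakdown needed for root cause analysis is still recoverable from the grouping key.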
Scott Adams: 06:27 The efficiency KPI I mentioned is also new in 2.4. There was a webinar that was presented several weeks ago. I'd encourage you to go back and look at that, to get more information about that capability to identify underutilized and unused systems and reclaim those resources so that you're not wasting time and money on systems that aren't being used.
Scott Adams: 06:54 And then the last thing we have listed here is a demand calendar. This is an opportunity for you to capture information from the business about what their intentions are for service delivery and other activities and keep that alongside the monitoring and planning data within the Vityl Capacity Management solution itself. If you have more questions about any of these additional features, we'd be happy to talk to you about that. Just contact us. But today we're going to be focusing on Azure Monitor integration and going into some more detail about that.
Scott Adams: 07:33 Let's set the stage for how Azure monitoring fits into this larger initiative to help you manage hybrid cloud, and to be able to monitor and analyze the various different types of infrastructure that you are managing in service of the applications and services that your business is providing.
Scott Adams: 08:00 First, monitoring and analytics. Really, this is about the collection of performance and capacity metrics, or having access to that data, as well as the intelligence to analyze it and to identify and prevent incidents. This wraps back to the overview of the solution, where we want to help you understand, number one, what's happening (observability); we need to have an understanding of that before we can make any plans to do anything differently. Then we want to understand, when things did occur, why did they occur and what caused them, so that we can remove those causes and not experience those problems again in the future. And then we also want to be predictive and understand what potentially may happen, based on trends in our business, trends in the way our infrastructure is used, and things that we anticipate for use of our infrastructure as we go forward.
Scott Adams: 08:57 This is all central to what we want to be able to provide so that we can properly manage and help you provide reliable and dependable applications and services.
Scott Adams: 09:12 We want to do all of that across all the various different infrastructure that's now involved in providing those applications and services. And so what we can do with Vityl Capacity Management is we can provide a single pane of glass, if you will, or a common approach to managing all the various different infrastructure that you have, knowing that infrastructure is probably connected in some way in your application and service delivery.
Scott Adams: 09:40 Many of you, I'm sure, still have some traditional IT on premises: physical systems, maybe Unix systems, Linux, Windows, running databases, servicing multi-tiered applications. You're more than likely using some type of virtualization, be it VMware or Microsoft Hyper-V or the Xen hypervisor, and need to continue to manage the services and applications that run in those environments. Along with that, a private cloud: if you're using OpenStack, OpenShift, some of the technologies there allow you to start to provide these self-service portals or cloud portals within your own organization, either on premises or co-located.
Scott Adams: 10:29 And then you're probably already dabbling with, and have some of your infrastructure or some of your applications and services out in, public cloud, be it AWS or Azure, two of the primary providers there. And you may be using a little bit of both in sort of a multi-cloud environment.
Scott Adams: 10:50 Again, what we want to provide for you is the ability to address, in a common way, the performance issues that you have, by way of monitoring and observation and troubleshooting and root cause analysis and planning across all of this. That way you don't have to use multiple tools, you don't have to be trained on multiple tools, and you don't have to have different workflows for the different types of infrastructure that you're using.
Scott Adams: 11:18 We want to do all of this across all the parts of the technology stack in each one of these environments. So it's very important for us to have an understanding of what's going on within the context of the OS, within the context of containers, if you're using those, what the hypervisor is doing, and what the physical resources are doing. When we're out in public cloud, what is the public cloud monitoring telling us about the infrastructure that underpins the virtual machines or instances that we're using? So again, folding this into the whole of hybrid monitoring as well.
Scott Adams: 11:57 So how do we do that? Well, specifically with Azure Monitor: as you engage with Azure and you create a subscription and you start to add your virtual machines or web applications, what the Azure platform does for you is it automatically starts collecting basic resource data, and that data is centered around CPU, disk I/O, and network I/O. And this is at no cost to you in addition to what you're already paying for your Azure services.
Scott Adams: 12:33 Now, the metrics that are there are the types of metrics that people generally look at first. There are only a handful of metrics available in each one of these categories, but it is a good place to get started. We believe that it's extremely important, based on those previous two slides, to be able to understand what's going on in all parts of the stack. And we're not getting that from Azure Monitor by itself, although that's, again, a good place to start. We want to augment and enrich that data with additional collection around memory, around disk space, what containers are running, what processes are running, and what they're doing. You want to be able to do workload characterization as well. And this is extremely important across all the platforms, but in this particular case as we talk about Azure.
Scott Adams: 13:30 So what we provide then is a couple of different methodologies to get access to that data and use it for the purposes of those workflows that we've been talking about thus far. I'm going to start in the lower left-hand corner of this particular slide and talk about an integration we have directly with Azure Monitor by way of a module. Again, Azure is collecting, storing, and managing that data, so there is no need for us to store that data again for you. We can take advantage of it in the place where it currently resides. We can request whatever real-time or historical data you want access to directly from the Azure API.
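For reference, the public Azure Monitor REST API exposes platform metrics at a `Microsoft.Insights/metrics` endpoint under each resource's ID. Here is a sketch of building such a request URL; the subscription, resource group, and VM names are placeholders, and a real call would also need an Azure AD bearer token in the request headers:

```python
from urllib.parse import urlencode

def metrics_url(resource_id, metric_names, timespan):
    """Build a request URL for the Azure Monitor metrics REST API.
    (Shape per the public API docs; authentication is handled separately.)"""
    base = f"https://management.azure.com{resource_id}/providers/Microsoft.Insights/metrics"
    query = urlencode({
        "api-version": "2018-01-01",
        "metricnames": ",".join(metric_names),
        "timespan": timespan,
    })
    return f"{base}?{query}"

# Placeholder resource ID for an Azure VM.
url = metrics_url(
    "/subscriptions/SUB_ID/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm01",
    ["Percentage CPU", "Network In"],
    "2024-01-01T00:00:00Z/2024-01-02T00:00:00Z",
)
```

The `timespan` parameter is what lets a client pull either recent or historical data on demand, which matches the "no need to store it twice" point above: the data stays in Azure and is fetched when an analysis needs it.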
Scott Adams: 14:16 And then we also want to augment that, as I mentioned, with additional details. We've got what we call a lightweight collector, which is a single binary that runs on the operating system on the virtual machines that you have in Azure. It collects these additional sets of metrics and streams them over to a central place where we store and manage that data.
Scott Adams: 14:43 With both those sets of data available to us, we can abstract that away, if you will, and our analytics can run on top of both sets of data without regard to where the data is coming from or how it was collected. And this is true across the Azure platform, and it's also true as you add in AWS, which we support, VMware vSphere, and data from Unix operating systems. Again, our analytics are designed to work with all those data sources regardless of how the data is collected and managed.
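Abstracting the data sources away typically means mapping each collector's records onto one common schema before analysis runs. A minimal sketch of that normalization step; the input field names are illustrative, not actual Azure Monitor or Vityl payloads:

```python
def normalize(record, source):
    """Map source-specific metric records onto one common schema so the
    same analytics can run regardless of which collector produced them."""
    if source == "azure_monitor":
        return {"host": record["resourceName"],
                "metric": record["name"],
                "value": record["average"],
                "origin": "azure"}
    if source == "os_collector":
        return {"host": record["hostname"],
                "metric": record["metric"],
                "value": record["value"],
                "origin": "os"}
    raise ValueError(f"unknown source: {source}")

common = [
    normalize({"resourceName": "vm01", "name": "Percentage CPU", "average": 42.0},
              "azure_monitor"),
    normalize({"hostname": "vm01", "metric": "mem_used_pct", "value": 63.0},
              "os_collector"),
]
```

Adding another provider (CloudWatch, vSphere, and so on) then means writing one more mapping branch, while everything downstream of the common schema is untouched.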
Scott Adams: 15:17 Once we have that common understanding of the data, we can make it available through an API that our Vityl Capacity Management workflows take advantage of. And then also, if you're engaged in DevOps, you can do a direct integration yourself and tie it to other tools that you may have in support of your DevOps processes.
Scott Adams: 15:45 Now I wanted to turn our attention to more of the workflow that we have available for the Azure platform, along with all the other platforms. I've mentioned multiple times now that at the top for us are Key Performance Indicators, because, again, we're running these analytics behind the scenes in real time for you and providing you notification of where you have problems. You don't have to go investigate to find out where the problems are; we're bubbling those up for you. The first part of Key Performance Indicators I want to focus on is health and risk. Those two are very tightly coupled: health being, what is my current performance, and do I have bad performance or good performance right now? Risk being, am I at risk for having performance problems going out into the future?
Scott Adams: 16:37 As you can see here, our notification is organized, in this particular example, by groups or services. And you can see that we've got Microsoft Azure included as a service or a group alongside some of the other services, and those reflect some of the other data sources where we have data available. As you can see, we've got Amazon Web Services listed here, we've got our VMware infrastructure, and our Unix infrastructure as well.
Scott Adams: 17:07 Again, going back to that single pane of glass, if you will, and common approach: here you can see that, across all of the hybrid cloud, we can position Azure right alongside the other infrastructure platforms that you may be using.
Scott Adams: 17:27 If we drill down and show more details about what's going on within the platform, we can get finer-grained information about, in this particular case, the Azure VMs that are running. And we see we've got one with some health history that's been a problem for the last 24 hours. And then we've got other virtual machines in Azure that are running just fine with no problems indicated at all.
Scott Adams: 17:52 One of the things that we do within KPIs is identify for you, when you have a performance problem, where that performance problem resides. And in this particular example we're showing that we have a CPU resource issue on this Azure VM.
Scott Adams: 18:12 Then we also have the Key Performance Indicator for efficiency. Again, we've added support for the Azure platform along with the other platforms, so you can identify where you have unused or underutilized Azure VMs that may be able to be reclaimed. Again, I would encourage you to watch the webinar that was given a couple of weeks ago around efficiency and how to make the best use of the efficiency reporting and analysis as well. I won't go into much more detail other than just encouraging you to watch that webinar.
Scott Adams: 18:52 As we look at our public cloud environments, in this particular case Azure, one of the things that's important is to understand where the data's coming from, because that informs us as to the meaning of the metrics that we're looking at. So what we've done, and now we're transitioning into Performance Monitor, which we want to use for further investigation, troubleshooting, or root cause analysis, is provide two different perspectives from which to see the Azure VM. One is from the operating system's perspective. Again, we talked about the lightweight collector that allows us to get that perspective and get additional metrics that aren't available through Azure Monitor.
Scott Adams: 19:40 And then we also have the ability to see it from Azure Monitor's perspective. In the case of AWS, it would be from a CloudWatch perspective. In the case of a virtualized environment, let's say VMware, we would be able to see it from the hypervisor's perspective. Again, it's important to note, when you're doing your analysis and investigation, what perspective these metrics are coming from and what their meaning is to you.
Scott Adams: 20:09 In addition to seeing each perspective independent of the other, we also have an ability, which I'll show you in a moment, to see those together, so you can see how the two relate to each other. So you can get, again, that single view of that VM, but know that you're getting several different perspectives, and that may be important to the way that you're doing your analysis.
Scott Adams: 20:34 Our Performance Monitor has the overview capabilities, and this workflow is designed to get you, as the name implies, an overview, a quick snapshot of what's going on on the particular system of interest. In this particular example, I'm showing you the data from Azure Monitor. So if you look on the left-hand side here, you'll see the three small graphs, one representing CPU, one representing disk I/O, and one representing network. And again, these are the three categories of data that we can get from the Azure Monitor platform.
Scott Adams: 21:12 Again, from this particular workflow, you have access. If you look toward the right, you can see the last 24 hours indicated in the slider for live mode or historical. So you can reference historical data very quickly from here and/or look at live mode. And when you're doing this, keep in mind that on the back end, we're going directly to the Azure Monitor API to get this information, either the real-time information or the historical information that's stored by Azure for the platform.
Scott Adams: 21:43 Then if we transition over to the details workflow that we provide with Performance Monitor, we have access to a lot more data, and we have a lot more opportunity to manipulate the data in different ways to make sense for the type of analysis that we're doing. The first thing that I wanted to draw to your attention here is the dual perspective.
Scott Adams: 22:11 Here you can see in details we have included both the view from Azure Monitor and the view from the OS collector that we have running as well. So again, we can put those two systems together here. When we do that, we can identify each one of them by looking at the associated attributes. In this particular case, we're looking at the attributes associated with the Azure Monitor data, and we can see that this is in fact running a Linux OS, the instance type is a Standard B1s, it is Azure, the VM is currently running, and it's located in Central US. And we could flip over and see some operating system properties if we selected the other system here.
Scott Adams: 23:03 The other thing I wanted to point out is the metrics that are available for us to chart then are the combined set of metrics across both systems when we have both systems included in this particular type of analysis. So again, now we have the full complement of metrics that are available from Azure Monitor and those coming from the OS Collector as well. And we can see those side by side and together in the same graphs.
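Seeing the two perspectives side by side in one graph comes down to aligning the two metric series on a shared time axis. A simple sketch of that join, with made-up CPU samples; Azure Monitor's series is shown coarser than the OS collector's, which is often the case in practice:

```python
def align(os_series, azure_series):
    """Join two metric series (timestamp -> value) on timestamp so the
    OS-collector and Azure Monitor perspectives can be charted together.
    Timestamps present in only one series yield None on the other side."""
    timestamps = sorted(set(os_series) | set(azure_series))
    return [(ts, os_series.get(ts), azure_series.get(ts)) for ts in timestamps]

os_cpu = {0: 41.0, 60: 55.0, 120: 47.0}  # from the OS collector
azure_cpu = {0: 39.5, 60: 54.0}          # from Azure Monitor
combined = align(os_cpu, azure_cpu)
```

Small gaps between the two values at the same timestamp are expected, since each perspective measures from a different vantage point (inside the guest versus the platform).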
Scott Adams: 23:29 The last thing I'll draw your attention to here is, again, the ability to quickly pick a time range of interest and/or look at the data in real time if you so choose.
Scott Adams: 23:44 Then that leads us to some further investigative capabilities within this workflow that are available now for Azure, as well as the other platforms that we support. Here we're looking at the more detailed view of what we can do in Performance Monitor.
Scott Adams: 24:02 One thing we can do is get more granular control over the time period that we're looking at by using the calendar selection in addition to the time period. We can selectively pick dates and times that make sense for our analysis, so we have a lot more control there. We've got drill-down analysis. We can look at a particular point in time, and when we select that, we can see the process detail associated with what was going on at that particular time as well. So again, that more detailed drill-down to understand what is causing a particular problem. Again, just an additional set of workflows that's now available for the Azure platform, as well as the other platforms that we support.
Scott Adams: 24:50 That brings me to the end. So in summary, what I wanted to follow up with is: with the Azure integration, you now have yet another platform that you can include as part of your common approach to doing monitoring and analytics. You can take advantage of the various different workflows that we provide, from Performance Monitor to Key Performance Indicators to capacity plans and automated analytics.
Scott Adams: 25:21 Again, what we're doing is gathering data from a whole host of different hybrid cloud infrastructure and making it available in a common look and feel, a single pane of glass. That provides you the ability to group things together around services and applications, whether they're running in Azure or across the various different platforms, the ability to do predictive analytics to forecast what's going to happen, and the what-if scenarios to determine how to mitigate. And then, again, we recently introduced the demand calendar, which allows you to keep the business inputs available as well.
Scott Adams: 26:02 That does bring me to the end of the webinar, the prepared materials at least. I will take a look to see if any questions have come in. Again, if you do have questions, I would encourage you to add those into the question window.
Scott Adams: 26:21 Now, there's one question here about other cloud provider support. Yes, as I had mentioned, we have the AWS integration for public cloud, which arrived in a previous release. We also have the ability for you to connect to Stackdriver from Google. That's done by way of a services engagement at the moment, but we can work with that. And then our data integrations are open-ended, and there is the potential to integrate with any cloud provider that you may have, as long as they've got a way for us to access the data. And then from an internal cloud perspective, when we look across the technology stack that makes up an OpenShift or OpenStack environment, support for the operating system, the hypervisor, and the cluster management software is there as well.
Scott Adams: 27:25 I don't see any other questions at the moment. If anyone has a last-minute question, feel free to ask. Otherwise, I encourage you again to watch some of our other webinars. There will be future webinars as well about additional capabilities that we have as part of 2.4. Again, thank you for joining today, and we look forward to talking to you again. With that, I'm going to close the webinar.