Industry pundits have long proclaimed that capacity planning is dead. After all, we can now add capacity on demand by bursting into the cloud when demand rises. Are you worried about the impact of Cyber Monday on your eCommerce site? AWS, Azure, and others make the issue more about budget than available resources. And it’s often a better financial option than investing in full-time capacity planners. Today, organizations place greater emphasis on real-time capacity analytics.
What’s the difference between capacity planning and capacity analytics?
Traditional capacity planning models the long-term needs of the business. Will I have enough physical space in my data center next year? How will mass adoption of our new application impact annual power and cooling requirements? Capacity planning tends to focus on data center consolidations/relocations or major technology uplifts. Capacity analytics, on the other hand, focuses on two critical items:
- Avoiding disruption to current applications and services – Do I have enough server, network, and storage resources to meet the demands of my customers today?
- Making better use of existing capacity – Can I allocate existing bandwidth or idle VMs to higher priority needs? How can I avoid wasteful spend associated with over-provisioning?
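To make the second bullet concrete, here is a minimal sketch of the kind of heuristic an analytics tool might apply to spot reclaimable capacity: flag VMs whose average CPU stays below a floor over an observation window. The function name, sample format, and 5% floor are all illustrative assumptions, not any vendor's actual implementation.

```python
def find_idle_vms(cpu_by_vm, floor_pct=5.0):
    """Return the names of VMs whose average CPU utilization over the
    observation window falls below floor_pct (a hypothetical heuristic).

    cpu_by_vm maps a VM name to a list of CPU % samples.
    """
    return sorted(
        name
        for name, samples in cpu_by_vm.items()
        if samples and sum(samples) / len(samples) < floor_pct
    )

# Example fleet: one busy web server, one likely-reclaimable batch VM.
fleet = {
    "web-01": [40, 55, 60, 48],
    "batch-legacy": [1, 2, 1, 3],
}
idle = find_idle_vms(fleet)
```

In practice the window, the floor, and whether to look at memory and I/O as well as CPU are policy decisions; the point is simply that "idle" can be defined as a measurable threshold rather than a gut feeling.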
Let’s think of this in terms of personal financial management: capacity planning is akin to saving for college tuition, planning to buy a house, or knowing whether you can retire at 60 or 70. Capacity analytics equates to managing a monthly household budget. Can you afford to pay the rent and other household bills on time? Do you have enough left over to splurge on that new 4K television? Organizations today concern themselves more with understanding real-time demand than with managing supply limitations. Capacity analytics takes a modern approach to tackling this challenge by putting predictive analytics at the fingertips of operational staff.
Consolidate Capacity Tools
There are hundreds of capacity management tools on the market today, yet IT teams often use different tools for monitoring server, network, and storage resources. This forces them to export that data into a centralized data warehouse, deploy yet another set of agents to collect data for capacity planning purposes, or both. Once the data is extracted and normalized, capacity teams can analyze the (by then out-of-date) infrastructure data in spreadsheets or more sophisticated capacity planning tools, and only then make decisions about future capacity needs. This antiquated approach to capacity management is cumbersome and time consuming. More importantly, it’s not as relevant to the immediate needs of the business.
ScienceLogic addresses the need for centralized, real-time capacity analytics. A single platform provides multiple regression methods (exponential, linear, logarithmic, seasonal, etc.) for understanding real-time capacity demand. Planners can even define their own algorithm within the system and run it against any data set, so there’s no need to go outside the ScienceLogic platform for capacity analytics. Operations teams don’t need to be capacity experts. Built-in reports show which resources are most likely to run out of capacity in the next 90 days. They also reveal unused physical, virtualized, converged, and cloud capacity across various platforms. For example, you can see which VMs are sitting idle because they’re no longer serving active IT projects. And because ScienceLogic monitors hybrid IT environments, you don’t lose visibility of infrastructure performance when you burst to the cloud for additional capacity.
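As an illustration of the simplest of these regression methods, the sketch below fits a linear trend to daily utilization samples and estimates how many days remain before a resource crosses a capacity threshold — the kind of calculation behind a 90-day exhaustion report. This is a hypothetical example using ordinary least squares in plain Python, not ScienceLogic's actual algorithm; the function name and sample format are assumptions.

```python
def days_until_full(usage_pct, threshold=100.0):
    """Fit usage = slope * day + intercept by least squares and return
    the estimated number of days until usage crosses the threshold,
    or None if usage is flat or declining (hypothetical helper)."""
    n = len(usage_pct)
    days = range(n)
    mean_x = sum(days) / n
    mean_y = sum(usage_pct) / n
    var_x = sum((x - mean_x) ** 2 for x in days)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, usage_pct))
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no growth trend: capacity not at risk
    crossing_day = (threshold - intercept) / slope  # day index at threshold
    return max(crossing_day - (n - 1), 0.0)

# Example: a volume growing 0.5% per day, currently 30 days into its history.
samples = [60 + 0.5 * d for d in range(30)]
eta = days_until_full(samples)  # would land on a "fills within 90 days" report
```

A linear fit is only a first approximation; seasonal or exponential models (also listed above) matter when growth is bursty or accelerating, which is why a platform offering several regression methods is useful.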