Are you chicken farming? Or is your data center the new business platform?

Antonio Piraino

Say goodbye to hosting as we know it. This was one of the messages that sprung from my panel at the recent Hosting & Cloud Transformation Summit (HCTS) with The 451 Group in London. My dramatic point was intended to illustrate the mammoth changes afoot in the managed services space in response to a rapidly changing and consolidating cloud landscape.

This was hammered home by the unsurprising uptick in rollups of managed hosting providers by telcos and ISPs, looking for a leg up in the impending cloud wars. As Peter Hopper from DH Capital calculated, 2012 saw 56 M&A transactions and 47 private capital placements in the hoster and service provider space, amounting to $6.4bn changing hands.

No wonder then that so many enterprises, bankers, technologists, and service providers attended the annual event, all looking to ensure they didn’t miss the next big investment or cloud trend among Service Providers.

Are you Chicken Farming?

As Joe Baguley from VMware boldly explained it: IT departments tend to look after their kittens – giving them names, taking them to the vet when they're sick, and looking after their little workloads.

AWS, on the other hand, is cultivating a culture of chicken farming – if one chicken is sick, you cull it and get on with the rest. The issue is that whoever denounces the cloud as broken is either not really trying it, or has put their kitten in the chicken farm.

The driver is that people are deploying kittens – but want chicken-farm pricing. As enterprises become serious about the cloud, they seek the resiliency and fault tolerance that are critical to modern workloads – which is why VMware feels AWS will go the way of the IBM AS/400, which launched with just as much of a bang in the marketplace.

However, the race is on, and demand will invariably drive AWS to attend to those same market needs. In the meantime, modern-day brokers are helping people choose the execution venues that best suit different workloads, based on internal policies. Other companies, such as CloudSoft, enable users to move apps around at the software/application layer when, as its CTO put it, the wind blows the wrong way and the chicken farm smells.
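To make the brokering idea concrete, here is a minimal sketch of policy-driven venue selection: filter venues against an internal policy, then pick the cheapest survivor. All names, fields, and numbers are illustrative assumptions, not any broker product's actual API.

```python
# Hypothetical policy-based venue selection; not tied to any vendor's API.
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    region: str
    cost_per_hour: float   # blended compute cost, USD (made-up figures)
    certified: bool        # e.g. meets an internal compliance bar

def choose_venue(venues, policy):
    """Return the cheapest venue that satisfies the workload's policy."""
    eligible = [
        v for v in venues
        if v.region in policy["allowed_regions"]
        and (v.certified or not policy["requires_certification"])
    ]
    if not eligible:
        raise ValueError("no execution venue satisfies the policy")
    return min(eligible, key=lambda v: v.cost_per_hour)

venues = [
    Venue("public-cloud-a", "us-east", 0.12, certified=False),
    Venue("public-cloud-b", "eu-west", 0.15, certified=True),
    Venue("own-datacenter", "eu-west", 0.22, certified=True),
]
policy = {"allowed_regions": {"eu-west"}, "requires_certification": True}
print(choose_venue(venues, policy).name)  # cheapest compliant venue
```

The point of the sketch is that the policy, not the infrastructure owner, decides where the workload runs – the MSP's own data center is just one more venue in the list.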

But it's the decision making prior to the portability and mobility of those workloads that is evidently becoming key to any execution platform. It's also why we at ScienceLogic are aggressively building a platform for these hybrid cloud executions; many of our MSP customers already leverage alternate venues on occasion, rather than forcing workloads onto their own infrastructure when it isn't necessary.

The decision to execute workloads in one cloud may be a long-term one, but where inside that cloud (i.e. which geographic region, and which best-performing or most cost-efficient zone) is just as imperative. As the Director of ICT for the UK Parliament pointed out, the innumerable metrics now available from any cloud platform that are needed to validate the performance of a workload require a modern kind of monitoring and management tool in order to create any assurances.
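That intra-cloud placement decision can be reduced to a scoring problem: fold each zone's observed performance and cost metrics into a single comparable number. The zones, weights, and figures below are made up purely for illustration.

```python
# Illustrative zone scoring: weight observed p95 latency against a cost
# index and pick the lowest combined score. All numbers are hypothetical.
zones = {
    # zone: (p95 latency in ms, cost index where 1.0 = baseline price)
    "region-1a": (18.0, 1.00),
    "region-1b": (25.0, 0.85),
    "region-2a": (12.0, 1.20),
}

def score(latency_ms, cost_index, w_latency=0.6, w_cost=0.4):
    # Normalize latency against an assumed 50 ms budget so both terms
    # land on a comparable ~0..1 scale; lower score is better.
    return w_latency * (latency_ms / 50.0) + w_cost * cost_index

best = min(zones, key=lambda z: score(*zones[z]))
print(best)
```

Shifting the weights shifts the answer – a latency-dominated weighting would favor the fast-but-pricey zone – which is exactly why continuous monitoring of those metrics matters: the "best" zone is a moving target.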

Drivers of change in our cloud environment?

So what are the workloads driving all of the cloud discussion today? The tremendous growth of collaborative apps and of software-defined storage are the pre-eminent drivers of cloud usage today. From a storage perspective, a number of deficiencies in existing systems were pointed out.

For example, most legacy storage systems were not designed for multi-tenancy, nor were they developed for virtual environments. Similarly, the sector has lacked QoS, since performance is so difficult to predict or guarantee. Finally, storage is hard to scale: once you fill a system up, you either end up running multiple systems as you grow, or upgrade and go through a painful migration.

While physical storage costs are declining, the real cost comes in the form of managing this process, as complexity continues to climb. So how have MSPs dealt with these issues to date? Often ring-fencing is used – an advance reservation of compute and storage, alongside application profiling, to ensure the availability of resources. This can be effective, but it is costly due to the high overhead involved.

What has instead emerged in the last 3-6 months is a software-defined approach to storage. Software-defined storage in itself is not new – RAID and cache management have long been software-driven, and vendors in this space have been making large margins off storage management software. However, a set of new disruptive technologies is entering the market – flash-optimized storage, object-based scale-out technologies, and SSD providers like our partners SolidFire and Intel – all driven by software-defined approaches. Some of the attributes of a software-defined approach, per The 451 Group:

    1. The software/storage runs on commodity (x86) hardware
    2. A software approach allows easier scale-out of storage – such as the ability of CloudFounders to detach storage from the physical hardware in order to move data around for scalability and DR purposes
    3. A unified storage layer – legacy storage silos speaking different protocols (NAS, SAN, etc.) are starting to be replaced with a single multi-protocol layer
    4. Leveraging open source – until now, open source has had little impact on storage compared with the services space
    5. API-based provisioning/management and integration – in an age of hybrid environments and toe-dipping in the cloud, the ability to point some data sets to public sources and others to internal IT storage
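Attribute 5 – routing some data sets to public storage and others to internal IT – can be sketched as a thin, API-driven layer in front of two backends. The class and method names here are hypothetical, invented for illustration rather than drawn from any real product.

```python
# Hypothetical unified storage layer that routes writes by classification:
# sensitive data stays on internal IT storage, the rest may go public.
class Backend:
    def __init__(self, name):
        self.name = name
        self.objects = {}      # in-memory stand-in for a real store

    def put(self, key, data):
        self.objects[key] = data

class StorageRouter:
    def __init__(self):
        self.backends = {
            "public": Backend("public"),
            "internal": Backend("internal"),
        }

    def put(self, key, data, classification="internal"):
        # The routing policy is the whole point: one API call, two venues.
        target = "internal" if classification == "sensitive" else "public"
        self.backends[target].put(key, data)
        return target

router = StorageRouter()
print(router.put("logs/2013-01.tar", b"...", classification="shareable"))
print(router.put("hr/payroll.db", b"...", classification="sensitive"))
```

Because callers only ever see the router's API, the backends behind it can be swapped – or a new public venue added – without touching application code, which is the essence of the software-defined pitch.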

Ultimately, things like de-duplication, tiered arrays, and flash drives are fast becoming the norm. The real challenges, however, center on scalability, common standards for the portability of highly fluid applications, and the QoS currently missing from cloud storage plays – where IOPS is fast becoming the true measure of a cloud platform's effectiveness. To these points, there are increasing examples of very chatty, rapidly up- and down-scaling apps leveraging AWS S3 to such a degree that it is fast becoming the default standard, with the market emulating its API in lieu of a formal interoperability standard. All the more need, then, for a higher-level control plane from which to manage all of these technologies.

Check back tomorrow for part two of my experiences at this year's HCTS.
