A few weeks back, a reporter from the National Business Review phoned me for an interview on a piece he was writing, the topic broadly being the “Cloud”. There seemed to be a lot of confusion, largely because of the noise incumbents have been making about what their “cloud” offerings are. Since that interview, I have realized that organizations simply need education to understand what cloud computing really is and how it can benefit their business. I’m going to release a series of blog posts covering common cloud topics, which I hope will help bring the New Zealand industry up to speed with this game-changing technology.
Cloud computing is a broad term. Our definition, which aligns with Amazon Web Services’, describes a paradigm shift in computing where compute and other commodity IT resources are sold as a utility on an entirely subscription basis. Cloud demands a new mindset around the scale and elasticity of IT resources, one that traditional Infrastructure-as-a-Service (IaaS) as we know it simply cannot deliver. Yes, in the old model an organization can invest in, procure and provision a large pool of IT resources and have those resources available on demand, ready to run workloads when needed. The problem with this model is that the “pool of resources” is incredibly expensive to procure and generally has a lifespan of only 3 to 5 years, meaning a hardware refresh, and more investment, at the end of that timeframe. On top of the depreciating nature of these assets, the resources are utilized for only a small percentage of their running life. This chronic under-utilization of IT resources costs businesses large amounts of money: first the capital expenditure of procurement, then the operating expense of managing and maintaining the hardware, an opex that grows linearly as new hardware is introduced to address scale and future-proofing.
An example of how public cloud removes this problem: consider a financial services company running an IaaS platform that experiences a seasonal influx of users in April, tax-filing season. The company needs to make sure the platform has enough IT resources to serve that influx. For the other 11 months of the year usage is very low, with only about 5% of the platform utilized at any one time. The upfront capital expenditure of building a platform sized for the April peak is enormous, yet for the other 11 months that capex sits largely idle. The cost of running the under-utilized, ‘dormant’ infrastructure remains constant throughout the year, because managed services and maintenance are still required. This model is not efficient.
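To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (server counts, purchase prices, hourly rates) is an illustrative assumption, not a number from this article; the point is only the shape of the comparison between provisioning for peak and paying by the hour.

```python
# Illustrative comparison of peak-provisioned on-premises capacity vs.
# pay-per-hour public cloud for a seasonal workload. All figures below
# are hypothetical assumptions chosen only to show the shape of the math.

HOURS_PER_MONTH = 730

# On-premises: buy enough servers for the April peak, amortize over 4 years.
peak_servers = 100
cost_per_server = 8_000          # purchase price (assumed)
hardware_lifespan_years = 4      # typical 3-5 year refresh cycle
annual_opex_per_server = 1_200   # power, space, managed services (assumed)

onprem_annual = (peak_servers * cost_per_server / hardware_lifespan_years
                 + peak_servers * annual_opex_per_server)

# Public cloud: pay hourly, only for what is actually running.
hourly_rate = 0.10                           # assumed on-demand price per instance-hour
peak_instances, offpeak_instances = 100, 5   # ~5% utilization off-season

cloud_annual = hourly_rate * HOURS_PER_MONTH * (
    1 * peak_instances + 11 * offpeak_instances)

print(f"On-premises (provisioned for peak): ${onprem_annual:,.0f}/year")
print(f"Public cloud (pay per hour):        ${cloud_annual:,.0f}/year")
```

With these assumed numbers the cloud bill is a small fraction of the peak-provisioned cost, precisely because the 11 quiet months are billed at 5% of peak rather than 100%.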
Many incumbent IT providers have simply re-labeled the above scenario ‘private cloud’, without addressing any of the fundamental issues of traditional IaaS.
With public cloud, the cloud vendor has made the significant capital investment in hardware and transparently manages and maintains it within their data centers. For the customer this eliminates the opex of managing hardware and physical equipment (not to mention the capex of procuring it), which is especially valuable given that, as discussed above, this hardware would otherwise sit under-utilized and dormant most of the time.
Public cloud also removes the traditional thinking of “pools of compute”. There is no need to designate x number of physical hosts and capacity-plan the number of virtual machines those hosts can run. The public cloud vendor effectively opens up their entire platform of physical hosts and lets a customer consume those resources on demand, at any time, paying only for what they use and when they use it, billed by the hour.
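As an illustration of what “on demand” looks like in practice, the sketch below uses the AWS SDK for Python (boto3) to launch an instance and release it when it is no longer needed. The AMI ID, instance type and region are placeholder assumptions, not values from this article.

```python
# Sketch: launch capacity on demand and release it when finished,
# paying only for the hours it actually runs. The AMI ID, instance
# type and region below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Launch a single on-demand instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...run the seasonal workload...

# Terminate when demand drops; billing stops with the instance.
ec2.terminate_instances(InstanceIds=[instance_id])
```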
Cloud vendors are constantly updating their underlying hardware and staying at the forefront of new technologies. For users of a public cloud like Amazon Web Services, this means that as soon as a new instance type built on the latest hardware is released, a workload running on an older physical host can be moved onto the new instance type without disruption (provided the application is architected correctly). Your fleet is continually refreshed and running on the latest hardware, with no need to invest in and procure new hardware every 3 to 5 years when the old kit is depreciated and out of date.
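A minimal sketch of moving an existing instance onto a newer instance type with boto3 is shown below. The instance ID and target type are placeholder assumptions; note that this simple stop/modify/start path briefly interrupts the single instance, so a correctly architected fleet behind a load balancer would instead be replaced node by node to stay disruption-free.

```python
# Sketch: move an existing EC2 instance onto a newer instance type.
# The instance ID and target type are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")
instance_id = "i-0123456789abcdef0"   # placeholder
new_type = "m5.large"                 # assumed newer-generation type

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": new_type},
)

ec2.start_instances(InstanceIds=[instance_id])
print(f"{instance_id} is now running as {new_type}")
```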
In a nutshell, the public cloud allows you to focus on your core business rather than running and maintaining data centers. There is no capital expense and no hardware to manage. Amazon Web Services lets you pay for exactly the resources you use each hour, and concentrate on deploying applications and running your business.
Public cloud providers like AWS are enabling companies to innovate and lead their industries like never before. The agility and cost reduction of cloud let any organization, no matter its size, leverage a multi-billion-dollar data center fabric, putting anyone using AWS on the same playing field as NASA, Netflix, Spotify and the Commonwealth Bank of Australia (to name a few).
This Public vs. Private Cloud blog was contributed by our partner friends at Cloud House. It was written by Jordan Greig, Founder and CTO. Cloud House is New Zealand’s leading AWS Advanced Consulting Partner, helping high-growth companies develop and deliver business innovation through the Cloud.
For more information on Sumo Logic’s cloud-native AWS solutions please visit AWS Integrations for Rapid Time-to-Value.