My father once told me, “Always use the right tool for the job, son.”
Anyone who has ever tried to use a Phillips “cross-head” screwdriver in lieu of a flathead will tell you that anything less than the right tool will take you more time, leave you frustrated, and sometimes leave a bigger mess than you started with. The same is true for cloud computing. While my father’s advice was sound on that day, for that job, he forgot to tell me there were different sizes of Phillips and flathead screwdrivers. I also didn’t realize there were different handle lengths, angles, et cetera, which led me to understand his advice was a foundational framework.
More and more organizations are finding themselves in need of multiple clouds. There are many reasons customers are either currently using or planning to use a multi-cloud architecture: cost, requirements, available services, existing investments, business continuity, and the list goes on. This can include private clouds, privately hosted clouds (colocation environments), resources in commercial clouds, and specialized workloads and data that need to reside in certified/accredited government clouds. It often also includes turnkey solutions from various Software as a Service (SaaS) providers. All of this is quickly leading to “cloud sprawl” and has organizations seeking the right tools for the right job.
A colleague and I were recently discussing how interesting it is that information technology moves in full circles, keeping pace with Moore’s Law at every step of innovation. Hardware and software keep getting better, and cheaper, as competition drives innovation in the market. In the 1970s and 1980s, it was all about mainframe computing and dumb terminals (centralized computing). In the 1990s, we started to see the proliferation of desktop computers and data centers filled with servers and storage as the internet came to life across the globe. In the 2000s, mobile computing took off with laptops, tablets, and cell phones. Also during the 2000s, data centers shrank 8U servers into the 1U “pizza boxes,” as we called them. We had “server sprawl” and “endpoint sprawl” and had to get a handle on how to effectively manage those servers as well as the endpoint computing devices. Blades became more and more popular, as did the ability to virtualize our data centers. This in turn led us to virtualization sprawl, with various types of hypervisors running hundreds, sometimes thousands, of virtual servers, plus virtual desktops for our remote or kiosk workers.
In the 2010s, the push to “cloud” became popular for many reasons; it wasn’t just a fad or a buzzword. The drivers ranged from downsizing the data center footprint to moving complex workloads into cloud-based Software as a Service (SaaS) turnkey solutions. The list of reasons to move to the cloud is truly endless, and it takes us back to the days of centralized computing: low-cost endpoints and virtual desktop infrastructure running in the cloud, managed by a managed services provider, so we can focus on our missions instead of on everything lurking within the data center. There are many other benefits as well: no more waiting 6-12 months for a hardware acquisition (because time is money), racking and stacking hardware, running a checklist for weeks installing operating systems and applications, or providing ongoing maintenance after hours, missing important life events to care for and feed our “data center.” With IT budgets shrinking, reducing both capital and operational expenditures was front and center for most organizations.
Cloud also brought IT professionals a deeper understanding that standing up something new in a data center was more than just buying hardware and software and taking training on how to deploy it. The underlying costs, often quickly dismissed by IT, were brought to the forefront of the decision to move to the cloud. Physical security, electricity, cooling, backups, and brick-and-mortar space all started to get calculated into the overall cost of on-premises IT. I once worked with a customer, circa 2005, whose massive data center had reached 90% of its electrical capacity. Their actual business process required that before anyone could order new hardware, they had to submit a detailed business plan specifying which piece(s) of hardware would be removed from the data center to support the new solution. Cloud computing in the CSP model removes those types of concerns: power, networking, and storage are abundant and, through a solid design, can be limitlessly available.
Enter 2020: our applications are running in the cloud with true elasticity, high availability, and fault tolerance. When designed, deployed, and managed correctly, we launch the application on our devices and it just works. We no longer spend countless hours of downtime focused on “email is down,” which historically led to internal disagreements between our many on-premises IT teams: “Is it the application? Is it the virtual server? Is it the virtual host? Is it storage? Is it networking? Is it X, Y, Z?”
As more and more workloads move to the cloud, some live in the commercial cloud, some in the government cloud, and some in a hyperconverged infrastructure from the CSP. It all depends upon the organization’s requirements, security, budget, and too many other factors to list in a blog. Multi-cloud brokerage and management is continuously evolving to combat “cloud sprawl,” and many OEMs have offerings on the market to help, but there is no true one-size-fits-all solution. The answer for the right multi-cloud solution is the typical engineering response of “it depends.” As you might have guessed, cloud sprawl can lead to unplanned rising costs and security vulnerabilities that threaten both data availability and data integrity.
By taking a multi-cloud approach at the start of, or during, a cloud transformation, customers can avoid being locked into a specific infrastructure, pricing model, or single operating model, and can establish continuous governance, security, and compliance. Organizations can easily adapt to meet different technical or business requirements and are in better control of both risks and costs. But there’s the trouble, too: to take advantage of a multi-cloud strategy, you must manage it. In regaining control, you now must exert control over IT end-to-end. This drives visibility across IT, including governance, continuous security, continuous compliance, and maximum uptime, and removes shadow IT, thereby reducing costs and time to market for solutions while providing the true ability to support the missions of the organization. The goal is often a single pane of glass for your multi-cloud environment, but be careful: it can quickly turn into a single glass of pain if not planned and executed optimally.
So, what’s the solution? More than one cloud provider is good, but too many can lead to trouble – so, what’s the magic number?
Unfortunately, the answer is still “it depends.” If you had one screw that needed tightening, it would be easy to tell you what kind of screwdriver you need. Your organization’s goals, processes, and requirements are likely a bit more complicated and call for more than a flathead or Phillips-head screwdriver.
We tell our customers, as ITIL principles outline, to start where you are. Envision where you want to be (the desired state/outcome) and build a plan to get there with a capable cloud service delivery company. Cloud moves faster than the traditional days of IT, when we saw a new piece of hardware every year, lifecycle-managed every 3-4 years, or a new operating system every 3 years with service packs every 6-12 months. In the time it took to read this far, there’s a good chance a new service, feature, or update has been released by one of the three leading CSPs.
You also may not be the best judge of what is broken. It’s smart to get a third-party assessment of your current infrastructure, storage, and security. After an assessment comes a cloud strategy that leverages best-of-breed solutions for what you need, with the flexibility to see clearly where you are and to support where you want to go.
It could very well be that you need a multi-cloud environment, but how those workloads are balanced and managed needs to be a thoughtful plan, not an ad hoc smorgasbord. There’s no need to boil the ocean in the approach or execution of the plan. It is better to have an 80% plan you are acting on today than to wait continuously for that 100% perfect solution. If you hold out for the perfect multi-cloud solution, it’s highly likely cloud computing will pass you by altogether and leave you back at square one.
This may all sound overwhelming. That is much of the point: cloud can be overwhelming, with its many CSPs, features, functions, operating systems, development languages, development tools, virtual appliances, et cetera. Multi-cloud, while a smart, cost-saving approach, is even more overwhelming. Leverage expertise outside your organization: a partner that is well-versed in the industry-leading cloud service providers and will give vendor-agnostic recommendations on the best value-based outcomes aligned to your organization’s needs.
Thanks to my father’s sage advice, I’ve never had to drill out more than a handful of screws in my life, and I always do the research to seek out the right “tool” for the “job.”
There’s a reason Red River offers end-to-end cloud services, with more than 600 technical accreditations and certifications from the top three cloud service providers in the industry: we need to be prepared to use the tool that’s right for the job in support of our customers’ missions and business objectives.