By Greg Pierce
On February 28, Amazon’s Simple Storage Service (S3) went down after an employee accidentally shut down more servers than intended during a debugging exercise, as Fortune magazine reported. Among those affected were big names such as Buzzfeed, Netflix, and Spotify.
Two weeks later, on March 16, issues with Microsoft Azure storage frustrated customers throughout the Eastern U.S., leaving their data inaccessible for hours. Later that month, Azure cloud services went down again, this time in Japan, when a Microsoft data center experienced a cooling system outage, as reported by Data Center Knowledge.
While some were surprised by these outages, most experts know that even the largest cloud providers are vulnerable, and, it follows, so are their customers. And this is no trivial issue. AWS alone accounts for more than 45 percent of public Infrastructure as a Service market share, according to Talkin’ Cloud, and customers likely see this scale as reason to trust that their mission-critical computing resources are subject to minimal downtime risk. But is this trust well-placed given how quickly technology is changing and demand is growing?
Why Hyper-Hyper Scale Requires Hyper-Hyper Vigilance
Amazon Web Services (AWS), Microsoft, Google Cloud, and others continue to push the envelope with cloud storage and services; the market has officially moved beyond hyper scale public cloud growth to “hyper-hyper scale” growth. And vendors are learning to navigate the challenges of serving this accelerated growth as they go, a fact that should be sobering to many.
Cloud services providers are traveling into uncharted waters, and they’re bringing their customers with them. Consider an outage that occurs at unprecedented load levels or because providers are still learning to manage the system—thousands of companies could experience costly disruption as new hard lessons are learned. Businesses must go into the cloud with their eyes wide open. After all, the outages highlighted above may be harbingers of more to come.
Growing pains are no reason to jump ship, however. The benefits of cloud computing are too numerous. But with hyper-hyper growth must also come hyper-hyper vigilance, and that requires businesses to probe more deeply into what is protecting them from downtime and data loss.
Hybrid Cloud is the Future
Businesses are increasingly taking a closer look at hybrid cloud strategies, which feature holistic, customized approaches to application deployment and infrastructure architecture by combining public, private, and on-premise solutions into a seamless hybrid cloud platform. This is probably why MarketsandMarkets estimates that hybrid cloud spending will grow 22 percent annually through 2021 to reach almost $92 billion.
While AWS and Azure provide the effective, low-cost data storage and public cloud services that enterprises desire, their one-size-fits-all models are not always suited to managing every aspect of a company’s data management and allocation. The private cloud approach offers a more customized platform that gives an organization greater control over data management and security via a private environment, while also serving as an intermediary between the enterprise public cloud and the on-premise data center. With a virtual private cloud, organizations can benefit from the services and applications of the cloud without the upfront investment of time and money.
There are benefits to public cloud, private cloud, and on-premise data centers. With a hybrid cloud approach, companies can benefit from a model that leverages the strengths of each of these technologies, allowing them to maximize collaboration, efficiency and cost-savings. Working with a managed cloud services provider helps an enterprise make this implementation as smooth as possible.
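The failover idea behind such a hybrid model can be sketched in a few lines: try the primary (public cloud) tier first, and fall back to a private or on-premise replica when it is unavailable. This is a minimal illustration only; the backend names and fetch functions below are hypothetical stand-ins, not a real provider API.

```python
class BackendUnavailable(Exception):
    """Raised when a storage backend cannot serve a request."""


def fetch_with_failover(key, backends):
    """Return the value for `key` from the first backend that responds.

    `backends` is an ordered list of (name, fetch_fn) pairs, e.g.
    the public cloud tier first, a private replica second.
    """
    errors = []
    for name, fetch in backends:
        try:
            return fetch(key)
        except BackendUnavailable as exc:
            errors.append((name, exc))  # record the failure, try the next tier
    raise RuntimeError("all backends unavailable: %r" % errors)


# Hypothetical usage: a public store suffering an outage, a private one that works.
def public_fetch(key):
    raise BackendUnavailable("simulated public cloud outage")


def private_fetch(key):
    return {"customer-42": "order history"}[key]


value = fetch_with_failover(
    "customer-42",
    [("public", public_fetch), ("private", private_fetch)],
)
```

In practice a managed cloud services provider would handle this routing, replication, and consistency for you; the point of the sketch is only that a second tier turns a total outage into a degraded-but-available event.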
Already, the shift to the hybrid cloud has begun among the major cloud service providers in the industry. In October 2016, AWS announced a partnership with VMware to provide customers with a public-private hybrid cloud offering. Andy Jassy, CEO of AWS, recently told CRN that the hybrid cloud enables enterprises to avoid a binary decision between public and private cloud, giving them flexibility in how they allocate their data, along with the ability to avoid cloud vendor lock-in.
It is easy to forget that the cloud is susceptible to the same challenges and pitfalls that have historically accompanied IT growth. Just like with personal computers or wireless Internet, a cloud platform’s performance can be affected by an unsuccessful upgrade, a patch that doesn’t install correctly, or a myriad of other possible issues. The cloud needs to be viewed as a constantly evolving technology in progress and not a finished product. Enterprises should consider a hybrid cloud model that provides the reliability that their customers expect.
Fear of outages shouldn’t—and won’t—slow down cloud adoption, which is now touching smaller enterprises as well as the largest global companies, but it will force businesses to be more discerning about technology. Living through this period of hyper-hyper scale growth is exciting, but we must not ignore the risks, many of which are avoidable. What we must do is mitigate those risks with investments in strategies such as hybrid cloud, and by constantly evaluating the new technologies that sit between our business and our customers. SW
Greg Pierce is the chief cloud officer at Concerto Cloud Services.
June 2017, Software Magazine