Industry Flyover: Q2 2022
It’s the end of a quarter, summer is here, and we’re taking a look back at some of the blogs, articles, and stories we found interesting in Q2.
Multi-Cloud Megadeals for Multi-Cloud Tooling
Boeing made waves in April when they announced a “megadeal” with all three major public cloud providers — AWS, Azure, and GCP. The deal is emblematic of a larger shift away from the traditional wisdom of going “all in” on a single cloud provider, heralding change in how the largest of enterprises weigh and measure not only cloud services but their relationships with cloud providers as a whole.
While we are squarely in the multi-cloud camp, there are undeniable benefits to using a single cloud at enterprise scale: bulk deals, ease of service integration, white-glove onboarding, and priority access to resources. Anecdotally, it’s not uncommon for providers to lend engineers to their largest clients for internal platform development.
Organizations like Boeing have all the weight in the world to throw around in negotiations, securing deals and premium services above and beyond what your average midmarket customer can command. If anything, an industry Goliath like Boeing has more reason than almost any other organization to put all its eggs in the monocloud basket. Which raises the question: why give up a good thing?
There are some very obvious reasons a company would go multi-cloud: avoiding vendor lock-in, improving coverage and access to specialty hardware, a merger or acquisition, staying available during an outage, customer requirements, and so on. These are the usual, intuitive reasons cited in discussions of multi-cloud adoption. Boeing, however, gave none of them; their answer was much more interesting:
“Boeing Co is hiring the three biggest U.S. cloud-computing companies — Amazon.com Inc., Microsoft Corp. and Alphabet Inc.’s Google — to help with a digital makeover aimed at giving its airplane designers and software developers more tools.”
Tooling was at the heart of the deal. Being provider agnostic allows companies to pick the best tool for any given job, and there’s a lot of power in having a screwdriver when you find a screw and a hammer when you find a nail.
Boeing is not unique in their reasoning. Wells Fargo famously made a deal with both Google and Amazon, despite AWS being their main provider, because Google provides “business-critical services” that AWS does not. Walmart devised a unique hybrid cloud strategy that allows them to switch seamlessly between cloud providers and their own private servers to “draw the best that the public cloud providers can offer and to be able to combine that with something that is really purpose-built for us.” Goldman Sachs also recently disclosed their own multi-cloud strategy, one centered around tooling:
"It's about picking best-of-breed offerings from different public-cloud vendors”
This echoes the findings of other reports and articles that go beyond the traditional pillars of a multi-cloud strategy (namely improved availability and avoiding vendor lock-in) to discuss leveraging the unique strengths of each provider. Through this lens, multi-cloud evolves beyond a simple reactive measure to an outage or a bill hike (both valid reasons to make the switch) and becomes a far more proactive decision to arm developers with what they need, regardless of who is offering it.
Often the conversation around multi-cloud is one centered on any given provider’s failings or bad billing behavior. Fair criticisms, but multi-cloud strategies go above and beyond outage protection and cost cutting. They can be critical for arming developers with the best possible tools for the job, and, at large enterprises, there are many different jobs to be done.
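To make “best tool for the job” a little more concrete, here’s a minimal, purely hypothetical sketch of how a platform team might encode best-of-breed tool selection. The workload-to-service pairings below are our own illustrative picks, not anything Boeing, Wells Fargo, or Goldman Sachs has disclosed:

```python
# A purely illustrative sketch: a workload-to-provider map a platform team
# might maintain. The pairings are hypothetical "best-of-breed" picks,
# not any company's actual architecture.

BEST_OF_BREED = {
    # workload type       (provider, managed service)
    "data-warehouse":     ("gcp", "BigQuery"),
    "ml-training":        ("aws", "SageMaker"),
    "enterprise-auth":    ("azure", "Azure Active Directory"),
    "object-storage":     ("aws", "S3"),
}

def pick_provider(workload: str) -> tuple[str, str]:
    """Route a workload to whichever cloud offers the strongest tool for it,
    rather than whichever cloud the company happens to be "all in" on."""
    if workload not in BEST_OF_BREED:
        raise ValueError(f"no provider mapping for workload {workload!r}")
    return BEST_OF_BREED[workload]

provider, service = pick_provider("data-warehouse")
print(f"running on {provider} via {service}")  # -> running on gcp via BigQuery
```

The interesting part isn’t the code, it’s the posture it represents: the provider becomes a per-workload decision rather than a company-wide one.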
Data Regulation and Staying Compliant
One article caught our eye this quarter, and while it’s worth a read in full, the gist is in the title: The Era of Borderless Data is Dead. It’s old news for the software engineers in the trenches (just look at The Register’s GDPR tag), but contending with a handful of data privacy policies is very different from dealing with an entire globe’s worth. A single workaround is simple; a constantly shifting legislative landscape, however, demands something more permanent and flexible than an ever-growing pile of them.
A solution like…internal developer platforms. (You thought we were going to say Seaplane, didn’t you?)
We’ve written extensively about platform engineering, including in last quarter’s Flyover, as part of a much larger industry conversation surrounding what it is and what makes it different from DevOps and SRE. While definitions differ, an important piece of the puzzle lies in platform differentiation and standardization. Platform engineer Ivan Velichko captures it well in his blog:
“The problem of orchestration is already solved nowadays by Kubernetes or ECS. However, it's solved in a quite generic way. [Platform Engineering] makes it tailored for the company's needs.”
A defining characteristic of platform engineering is the ability to tailor readily available solutions (from Kubernetes to managed services) to a company’s unique needs, and coping with data legislation is rapidly becoming one of those needs. Internal developer platforms, then, not only enable developers to do their jobs, they enable developers to do their jobs in standardized, compliant ways.
With internal platforms, companies can apply cross-cutting policies automatically, shielding their app-level developers from having to worry about which laws are being passed where. Platforms also allow for much faster, more comprehensive, and more controlled responses in the wake of large-scale policy shifts. It’s far easier to make an internal platform compliant than it is to ensure every individual developer remains compliant.
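To sketch what that might look like in practice, here’s a minimal, entirely hypothetical example of a cross-cutting data-residency policy living in the platform layer. The POLICY table and helper functions are our inventions for illustration; real platforms typically enforce this through admission controllers, IaC modules, or network policy rather than application code:

```python
# A minimal, hypothetical sketch of a platform-level data-residency policy.
# Application developers call place_object(); the platform decides where
# data may physically live.

# Data-subject jurisdiction -> cloud regions the platform allows.
# Updating this one table is how the platform team responds to new
# legislation; app developers never touch it.
POLICY = {
    "EU": {"eu-west-1", "europe-west4"},   # e.g., keep EU data in the EU
    "US": {"us-east-1", "us-central1"},
    "DEFAULT": {"us-east-1"},
}

def compliant_region(jurisdiction: str, preferred: str) -> str:
    """Honor the developer's preferred region if policy allows it,
    otherwise fall back to a region the policy does allow."""
    allowed = POLICY.get(jurisdiction, POLICY["DEFAULT"])
    return preferred if preferred in allowed else sorted(allowed)[0]

def place_object(key: str, data: bytes, jurisdiction: str,
                 preferred: str = "us-east-1") -> None:
    """What an app developer calls; residency is decided for them."""
    region = compliant_region(jurisdiction, preferred)
    print(f"storing {key!r} in {region} (subject jurisdiction: {jurisdiction})")
    # ...hand off to the platform's storage client for that region...

place_object("invoice-123", b"...", jurisdiction="EU")  # lands in an EU region
```

When a new law lands, the platform team updates one table and every service inherits the change: exactly the faster, more comprehensive, and more controlled response described above.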
The Need for Better Developer Experiences
All of the trends we noticed this quarter gesture vaguely toward the same elephant in the room: how do we build better developer experiences in an increasingly complex developer environment?
Altruism is certainly part of it. No one, not even a software engineer, wants to work through an untenable level of complexity that turns interesting problems into frustrating roadblocks. Any company that cares about its people strives, as a rule, to make their work easier and more enjoyable.
However, we posit “better developer experiences” isn’t just a matter of comfort, or even developer quality of life. Maybe it was at one point, but now it’s a matter of necessity. As part of our ongoing cloud research, we pulled a lot of quotes about developer aches and pains that are beginning to percolate through the industry, but one sticks out:
“We are already at a point with just our distributed services where an individual engineer can't reason about what is happening. It’s just impossible to do. A single human being can't pull this together.”
If you haunt DevOps-focused Discord servers and Twitter Communities, you’ll find a million permutations of “the fact that my employer expects me to understand and use every possible tool, provider, and framework is untenable,” usually with a meme for good measure. Individual developers have long understood what the industry at large is starting to reflect: we’re reaching a critical mass of complexity that your average developer cannot navigate.
To quote Ambassador Labs' "Is Platform Engineering the New DevOps or SRE?":
"...Software engineering organizations are all grappling with and aiming to reduce complexity and lessen cognitive load for their developers."
Platform engineering, multi-cloud for the sake of tooling: these are all attempts to find an exit from the labyrinth of infrastructure that awaits any business with big dreams and bigger user bases. It’s telling that enterprises are some of the first to go multi-cloud for strategic (non-acquisition) reasons and some of the first to build bespoke internal platforms we’d now recognize as developer control planes.
Ironically, it’s the elephant, not the canary, dying in this coal mine.
This isn’t to say every developer is drowning, or that in three years’ time everyone will drown. But we are saying that so many developers are currently drowning that they could really use a lifeboat. This quarter we saw a lot of big players investing in lumber; it’ll be interesting to see if next quarter they start whittling it into oars.
Your Inflight Entertainment
It’s the most wonderful time of the year for us happy, hardware-loving few: TOP500 has released its semi-annual supercomputer rankings.
For the uninitiated, this list comes with two caveats. One, only public machines that submit test runs are considered. Two, these are tightly coupled machines built for capability, not just capacity. Since June 2020, the number one machine has been the all-CPU, Arm-based Fugaku in Japan, with the rest of the top five split between IBM POWER9 + NVIDIA GPU and AMD x86 + NVIDIA GPU systems.
Frontier, a U.S. machine using AMD x86 CPUs and AMD GPUs, became the very first exascale submission in the list’s history, dethroning Fugaku and claiming victory! While Fugaku didn’t take the top prize, it did retain (for the fifth consecutive list) the number one spot in the HPCG and Graph500 benchmarks, notably the hardest of the bunch. HPCG is also the most indicative of the actual workloads most of these supercomputers run, so needless to say, the competition was fierce.
Here’s a big, well-earned “congratulations!” to the Frontier team and Oak Ridge National Laboratory!
If you love hardware (but don’t have a billion dollars to build your own supercomputer), your dreams can still come true with Google’s new Open Silicon program. Submit open source integrated circuit designs and get them manufactured at no cost.
Open source hardware, what a time to be alive.
That does it for this Industry Flyover. Thank you for flying with Seaplane, and we’ll see you next quarter!