The past 20 years have seen a significant migration toward cloud-based services and applications. Many companies sought to outsource most of their computing to enhance applications and take advantage of the cloud's greater scalability and lower costs. More recently, however, the opposite trend has taken hold.
Technology companies in particular are transitioning their computing and server infrastructure away from third-party public cloud services and toward on-premises deployments.
This trend doesn’t appear to be letting up any time soon. In fact, IDC estimates that half of the spending on server and storage infrastructure in 2021 was driven by on-premises deployments and that 71% of enterprises are repatriating cloud workloads to improve cost and control. The same report projects that on-prem spending will grow at an annual rate of 2.9% over the next five years, reaching an estimated $77.5 billion by 2026.
Most public cloud services simply cannot offer the compute performance needed for more advanced AI- and HPC-intensive workloads at the cost and scale larger organizations require. Moreover, with greater flexibility in technology choices and improved server energy efficiency, demand for “general purpose” systems is dropping as businesses look for solutions optimized for their specific workloads.
What will this mean for the future of on-prem workload capabilities and data center design? How will server architecture preferences play a role in the transition? Evaluating the current hardware infrastructure is the first step toward improving both the business and the end-user experience.
Cost Efficiency and Privacy
Companies that need to handle computing demands during their highest-usage periods without interruption or surprise costs are finding that custom-built servers are both more efficient and, in total, less expensive than the all-in-one style hardware offered in most public clouds. Though many still use those clouds for cold storage, computing itself may be better done on-prem. Stronger security and data privacy are further benefits: keeping computing closer to home means safer and more reliable information management.
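To illustrate the cost argument, consider a simple break-even comparison between renting equivalent capacity in a public cloud and amortizing an on-prem purchase. The figures below are placeholders, not quotes from any provider or vendor; a minimal sketch in Python:

# Hypothetical figures for illustration only; substitute real quotes and invoices.
CLOUD_COST_PER_VM_MONTH = 350.0      # on-demand compute instance, USD per month
VM_COUNT = 40                        # steady-state fleet sized for peak demand

SERVER_CAPEX = 120_000.0             # custom-built servers covering the same peak load
ONPREM_OPEX_PER_MONTH = 4_500.0      # power, cooling, co-location space, support staff

def breakeven_months():
    """Months until cumulative cloud spend exceeds on-prem capex plus opex."""
    cloud_monthly = CLOUD_COST_PER_VM_MONTH * VM_COUNT
    months = 0
    cloud_total = 0.0
    onprem_total = SERVER_CAPEX
    while cloud_total <= onprem_total:
        months += 1
        cloud_total += cloud_monthly
        onprem_total += ONPREM_OPEX_PER_MONTH
    return months

print(f"Break-even after roughly {breakeven_months()} months")

With these assumed numbers the on-prem purchase pays for itself in just over a year; the real crossover point depends entirely on an organization's own workload profile and contracts.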
For organizations handling sensitive information, a high level of control is key. Twitter, for example, operates three data centers in the U.S., with hundreds of thousands of servers and multiple petabytes of storage, an infrastructure replicated by many companies. Even when such servers are hosted in co-location or public cloud facilities, an organization still needs to design and manage its own infrastructure, with an on-prem operations team identifying and addressing capacity issues. Public cloud resources are then used only for certain workloads and data storage, acting as a counterpart to the on-prem IT systems.
To those arguing against this kind of investment: many organizations find that the long-term cost efficiency far outweighs the challenges of fine-tuning server deployments for their own services and of owning infrastructure, challenges that remain barriers regardless of the infrastructure chosen.
Better Performance, Data Privacy, Energy Efficiency, and Workload Optimization
Energy efficiency is another key reason for a data center overhaul. Each generation of CPU and GPU does more work per watt than the previous one, which means existing workloads can be completed with less electricity, or additional workloads can be run at the same power draw. The improvement can be quantified precisely by benchmarking and power-testing the application. For example, an outdated dual-socket system may be replaced with a newer, more efficient single-socket system. Energy use drops significantly, and licensing fees may as well, all while the data center maintains its SLAs to users.
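A back-of-the-envelope calculation shows how such a replacement works out. The throughput and wattage figures below are illustrative assumptions, not benchmark results for any specific system:

# Illustrative numbers only; real values come from benchmarking the actual application.
OLD_DUAL_SOCKET = {"throughput": 1.0, "watts": 700}    # normalized work units per second
NEW_SINGLE_SOCKET = {"throughput": 1.4, "watts": 450}

def perf_per_watt(system):
    return system["throughput"] / system["watts"]

old_eff = perf_per_watt(OLD_DUAL_SOCKET)
new_eff = perf_per_watt(NEW_SINGLE_SOCKET)

# Energy needed for a fixed amount of work scales with watts / throughput.
energy_saving = 1 - (NEW_SINGLE_SOCKET["watts"] / NEW_SINGLE_SOCKET["throughput"]) / \
                    (OLD_DUAL_SOCKET["watts"] / OLD_DUAL_SOCKET["throughput"])

print(f"Perf/watt improvement: {new_eff / old_eff:.2f}x")
print(f"Energy saved for the same workload: {energy_saving:.0%}")

Under these assumed figures, performance per watt roughly doubles and the same workload uses about half the energy; actual gains depend on the application and the hardware generations being compared.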
Companies have drastically reduced their total data center counts by cycling out inefficient facilities and bringing in modern deployments with high-density nodes, greater energy efficiency, and hyperscale designs. Intel, for example, has reduced its total data center count from 152 to 56. By carving out these inefficiencies, companies can more than double their workload capacity while lowering their energy bills. The new data centers and their clusters are also designed for cost-effective maintenance, with disaggregated designs that allow more frequent, cost-efficient CPU upgrades with fewer changes to memory and other components. This reduces e-waste and keeps components useful longer.
For small and medium-size businesses, overall workloads are of course much different from those of Intel or Twitter. Going solely on-prem may not be an all-in-one solution for organizations of this size, but it can be a worthwhile augmentation to their current public cloud setups. Hybrid deployments allow SMBs to take advantage of the security and cost efficiency of private clouds while bolstering their software stack by running other tools in the public cloud alongside on-prem systems. This can not only save them money but also provide an additional level of personalization and flexibility.
With the right automation software, managing servers and storage systems also takes less staff time. Employees can spend more time innovating and delegating workloads, which streamlines and scales the data and compute management process. For SMBs, this reallocation of time is especially important, as most do not have a fully outfitted IT team.
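As a small example of what that automation can look like, the sketch below polls a server's baseboard management controller over the standard Redfish API and flags unusually high power draw. The BMC address, credentials, chassis ID, and threshold are placeholders, and exact Redfish resource paths can vary by platform:

# Minimal sketch: poll a BMC's Redfish power reading and flag high draw.
# Host, credentials, chassis ID, and threshold below are placeholders.
import requests

BMC = "https://bmc.example.internal"
AUTH = ("operator", "changeme")
CHASSIS_ID = "1"                 # often "1" or "Self"; depends on the platform
POWER_ALERT_WATTS = 600

def current_power_watts():
    url = f"{BMC}/redfish/v1/Chassis/{CHASSIS_ID}/Power"
    # Many BMCs ship self-signed certificates, hence verify=False in this sketch.
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()["PowerControl"][0]["PowerConsumedWatts"]

watts = current_power_watts()
if watts > POWER_ALERT_WATTS:
    print(f"Alert: server drawing {watts} W, above the {POWER_ALERT_WATTS} W threshold")
else:
    print(f"Power draw OK: {watts} W")

Scheduled across a fleet, checks like this replace manual spot checks and feed capacity planning without adding headcount.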
On-Prem for the Future
The shift to public cloud-based applications and technology is slowing, and managed services providers should help their SMB customers determine whether investing in the public cloud is the right solution for their organizations’ requirements and make sure they fully understand current and future billing scenarios. Every company has different workload requirements, and the overall system should be tuned to what is actually needed.
Finding the right balance between cloud and on-prem technologies should be a top priority for companies of all sizes. Moving from a public cloud to an on-prem data center can not only save organizations money but also provide flexibility and room for growth in a rapidly changing environment.
MICHAEL MCNERNEY is vice president of marketing and network security at Supermicro. He has over two decades of experience in the enterprise hardware industry, with a proven track record of leading product strategy and software design. Prior to Supermicro, he held leadership roles at Sun Microsystems and Hewlett-Packard.