Article Featured in Inside_Networks Magazine Page 14-15, Letter to Editor
By: Andy Connor, Director – EMEA Channel
Data centres designed to meet the needs of standard or enterprise business applications are plentiful. Yet flexible, user-defined data centres for edge applications, which rely on dynamic, real-time data delivery, provisioning, processing and storage, are in short supply.
That’s partly because of the uncertainty over which applications demand such infrastructure, and over what sort of timeframe. However, there’s also the question of flexibility. Many of today’s micro data centre solutions are built to a predefined concept of the edge, rather than to the demands of localised, low-latency applications that also require high levels of agility and scalability. This is due to their predetermined, often vendor-led approach to design and infrastructure components.
To date, the market has been served by small-scale edge applications deployed in pre-populated, containerised solutions. A customer is often required to conform to a standard shape or size, and there’s no flexibility in terms of modularity, components or make-up.
One might argue this stems from the subjective nature of edge computing, which is often shaped to support a vendor-defined technology. Standardisation has certainly been beneficial for our industry, offering several key advantages, including the ability to replicate systems across multiple locations. But when it comes to the edge, some standardised systems aren’t built for the customer – they’re a product of vendor collaboration, one that’s also accompanied by high costs and long lead times.
On the one hand, having a piece of pre-integrated infrastructure with everything in it can undoubtedly solve some pain points, especially where deployment is concerned. But what happens if the customer has their own alliances, their own definition of the edge, or may not need all of the components? What happens if they run out of capacity in one site or need a modular system that scales?
Then those original promises of scalability and flexibility disappear, leaving the customer with just one option – to buy another container. One might consider that rigidity, when it comes to standardisation, can often be detrimental to the customer. The point here is that when it comes to micro data centres, a one-size-fits-all approach does not work. End users need the ability to choose their infrastructure based on their business demands – whether they are in industrial manufacturing, automotive, telco or colocation environments. But how can users achieve this?
Vendor agnostic and flexible micro data centres are the future for the industry – an approach that builds containment systems around customers’ needs, without forcing their infrastructure to fit into boxes. Users should have the flexibility to utilise their choice of best in class data centre components including the IT stack, uninterruptible power supplies (UPS), cooling architecture, racks, cabling or fire suppression systems.
By taking an infrastructure agnostic approach it’s possible to give customers the ability to define their edge, and use standardised and scalable infrastructure in a way that’s truly beneficial to their businesses.
Andy Connor Subzero Engineering
Growing data demands are forcing engineers to think creatively about the ways they design and develop data centres. Andy’s point about the rigidity of some micro data centre solutions is pertinent and one that needs to be addressed in order to fully meet the potential of the edge.
Article Featured in Data Centre & Network News
By Gordon Johnson, Senior CFD Engineer at Subzero Engineering
Today, edge data centers need to provide a highly efficient, resilient, dynamic, scalable and sustainable environment for critical IT applications. Subzero Engineering believes that containment has a vital role to play in addressing these requirements.
In recent years, edge computing has become one of the most prevalent topics of discussion within our industry. In many respects, the main purpose of edge data centers is to reduce latency and delays in transmitting data and to store critical IT applications securely. In other words, edge data centers store and process data and services as close to the end user as possible.
Edge is a term that’s also become synonymous with some of the world’s most cutting-edge technologies. Autonomous vehicles have often been discussed as one of the truest examples of the edge in action, where anything less than near real-time data processing and ultra-low latency could have fatal consequences for the user. There are also many mission-critical scenarios, including within retail, logistics and healthcare, where a typically high-density computing environment, with a high kW/rack load packed into a relatively small footprint, is housed within an edge environment.
Drivers at the edge
According to Gartner, the number of internet-capable devices worldwide passed 20 billion by 2020, and is expected to double by 2025. It is also estimated that approximately 463 exabytes of data will be generated each day by people as of 2025 – and with 1 exabyte equivalent to 1 billion gigabytes, a single exabyte alone equates to some 212,765,957 DVDs!
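A quick arithmetic check (assuming a standard 4.7 GB single-layer DVD) shows where the 212,765,957 figure comes from – it is the number of DVDs needed to hold one exabyte:

```python
# One exabyte expressed in gigabytes, per the definition quoted above
GB_PER_EXABYTE = 1e9
DVD_CAPACITY_GB = 4.7  # assumed capacity of a single-layer DVD

dvds_per_exabyte = GB_PER_EXABYTE / DVD_CAPACITY_GB
print(f"{dvds_per_exabyte:,.0f} DVDs per exabyte")  # 212,765,957 DVDs per exabyte
```

At 463 exabytes per day, the daily total would therefore run to tens of billions of DVDs.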
While the Internet of Things (IoT) was the initial driver of edge computing, especially for smart devices, these examples have been joined by content delivery networks, video streaming and remote monitoring services, with augmented and virtual reality software expected to be another key use case. What’s more, transformational 5G connectivity has yet to have its predicted, major impact on the edge.
Clearly, there are significant benefits in decentralizing computing power away from a traditional data center and moving it closer to the point where data is generated and/or consumed. Right now, edge computing is still evolving, but one thing we can say with certainty is that the demand for local, near real-time computing represents a major shift in what types of services edge data centers will need to provide.
Efficiency and optimization remain key
An optimized edge data center environment is required to meet a long list of criteria, the first being reliability, as edge facilities are often remote and have no on-site maintenance capabilities. Secondly, they require modularity and scalability – the ability to grow with demand. Thirdly, there’s the lack of a ‘true’ definition. Customers still need to define the edge in the context of their business requirements, deploying infrastructure in line with business demands, which can of course affect the design of their environment. And finally, speed of installation. For many end users, time to market is critical, so an edge data center often needs to be built and delivered on-site in a matter of weeks.
There is, however, one more important factor to consider. An edge data center should offer true flexibility, allowing the user to quickly adapt or capitalize on new business opportunities while offering sustainable and energy efficient performance.
Edge data centers are, in many respects, no different from traditional facilities when it comes to the twin imperatives of efficiency and sustainability. PUE (Power Usage Effectiveness) as a measure of energy efficiency applies to the edge as much as to large, centralized facilities.
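PUE is simply the ratio of total facility energy to the energy consumed by the IT equipment alone, so a minimal sketch is enough to illustrate it (the kW figures below are hypothetical examples, not measurements from any particular site):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    A value of 1.0 would mean every watt goes to IT equipment;
    real facilities sit above 1.0 due to cooling, lighting, etc.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical edge site: 100 kW IT load, 150 kW total facility draw
print(pue(150, 100))  # 1.5
```

Reducing cooling and fan overhead shrinks the numerator without touching the IT load, which is exactly how containment lowers PUE.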
And sustainability, especially the drive towards net zero, is a major focus for the sector in its entirety. However, what will change over time is the ratio of edge data centers to centralized ones. By 2040, it’s predicted that 80% of total data center energy consumption will come from edge data centers, which raises an obvious question: what will make the edge energy efficient, environmentally responsible, reliable and sustainable all at the same time?
The role of containment
Containment is almost certainly the easiest way to increase efficiency in the data center. It also makes a data center environmentally conscious because, instead of consuming energy, containment saves it. This is especially true at the edge.
Containment helps users get the most out of an edge deployment because it prevents cold supply air from mixing with hot exhaust air. This allows supply temperatures at the server inlets to be increased.
Since today’s servers are recommended to operate at temperatures as high as 80.6 degrees Fahrenheit (27 degrees Celsius), containment allows for higher supply temperatures, less overall cooling, lower fan speeds, increased use of free cooling and reduced water consumption – all important factors when it comes to improving efficiency and reducing carbon footprint at the edge.
Further, a contained solution consumes less power than an equivalent deployment without containment, which makes for an environmentally friendly, cost-effective environment. It also improves reliability, delivering a longer Mean Time Between Failures (MTBF) for the IT equipment, as well as a lower PUE.
Uncertainty demands flexibility
Subzero believes that an edge data center needs to be flexible and both quick and easy to install. It needs to be right-sized for the here and now, but capable of incremental, scalable growth. Further, it should allow the customer to specify the key components, such as the IT, storage, power and cooling solutions, without constraining them by size or vendor selection.
Thankfully, there are edge data center providers who now offer an enclosure built on-site in a matter of days, with ground-supported or ceiling-hung infrastructure to support ladder racks, cable trays, racks and cooling equipment.
These architectures mean the customer can choose their own power and cooling systems and, once the IT stack is on-site and the power is connected, the data center can be up and running in a matter of days.
Back in 2018, Gartner predicted that, by 2023, three-quarters of all enterprise-generated data would be created and processed outside a traditional, centralized data center. As more and more applications move from large, centralized data centers to small edge environments, Subzero anticipates that only a flexible, containerized architecture will offer end-users the perfect balance of efficiency, sustainability and performance.
Article Featured in Data Centre Review
By: Andy Connor, Director – EMEA Channel at Subzero Engineering
For many years, the data centre industry has been engaged in a deep discussion on the concept of edge computing. Yet the definition varies from vendor to vendor and from customer to customer, creating not only mass confusion, but a fixed mindset in terms of solutions design.
One might argue that, through this lack of a true definition, the subjective nature of the edge has led the industry down an often singular path, where edge technologies have been designed hypothetically to meet the customer’s needs, but without the actual application in mind.
IDC defines the edge as the multiform space between physical endpoints such as sensors and the ‘core’, or the physical infrastructure – the servers, storage and compute – within cloud locations and data centres. Yet within more traditional or conservative sectors, some customers are yet to truly understand how the edge relates to them, meaning the discussion needs to change, and fast.
Defining the edge
When the trend of edge computing began to gain traction, the Infrastructure Masons were one of the first to try and define it. But even they recognised its largely subjective nature was beginning to cause market confusion, and stated that a widely accepted definition would become more essential as the industry began to confront the challenges that will arise at the edge.
What’s clear is that the business case for edge technologies is becoming more prevalent, and according to Gartner, “by 2022, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud.” All this data invariably needs a home and depending on the type of data that is stored, whether it’s business or mission-critical, the design and location of the infrastructure will undoubtedly need to vary.
One size fits all?
Today in our industry, there’s a very real danger that, when it comes to the edge, many end users will be sold infrastructure defined by the manufacturer and not based on the customer’s needs. And that’s because edge solutions are often available in just one size, type or form factor. This creates a market whereby potential customers are persuaded that ‘one size fits all’, and that’s a far cry from the modular and agile approach that the industry has turned towards in recent years.
The reality is that the edge has almost as many definitions as there are organisations trying to define it. And, while a range of well-defined and well-understood edge applications are already in use, such as micro data centres in retail locations and localised infrastructure providing low-latency content delivery to avid viewers, there are many edge applications yet to be fully understood, defined or implemented.
Many existing edge applications remain unpredictable in terms of their data centre and IT resources. And often local infrastructure is required to support the continued roll-out of a service looking to scale.
In summary, most, if not all, organisations are faced with making frequent decisions about the best place to build, or access, edge infrastructure resources. And in today’s dynamic, digital world such decisions need to focus on the customer’s business requirements, providing them with a flexible, agile and optimised architecture that’s truly fit-for-purpose.
Finding flexible solutions
A standard-size container or micro data centre might be far too big for the business’ needs – but the assumption is that maybe the user will grow into it. And then there’s the question of customisation. What if the solution needs to be liquid-immersion cooling enabled for GPU-intensive computing at the edge? Not every micro data centre architecture can be built for that technology, and certainly not if the customer needs to scale quickly.
Then there’s the question of cost. Micro data centres in standard form factors, or pre-integrated systems, often contain CAPEX-intensive server and storage technologies from manufacturers chosen by the vendor. This, again, is a far cry from a solution defined to meet the business’s needs.
In our industry, relationships are everything, and one must acknowledge that customers will want to specify power, cooling and IT infrastructure from their own choice of suppliers, and at a cost that meets their budgetary requirements.
At Subzero Engineering, we believe customers need a solution that supports their business criteria, and one that helps them capitalise on the emerging opportunities of the edge. What’s more, we believe that containerised edge data centres, which are optimised for the application, built ready to scale and vendor-neutral for any type of infrastructure, are those that can truly meet the needs of the end user.
What’s clear is that with the advent of edge computing, the customer needs to define their edge. And as design and build consultants, our goal must be to support their needs with flexible, mission-critical solutions.
By Sam Prudhomme, Vice President of Sales & Marketing, Subzero Engineering
For many years the industry has been in a deep discussion about the concept of edge computing. Yet the definition varies from vendor to vendor, creating confusion in the market, especially where end-users are concerned. In fact, within more traditional or conservative sectors, some customers are yet to truly understand how the edge relates to them, meaning the discussion needs to change, and fast.
According to Gartner, “the edge is the physical location where things and people connect with the networked, digital world, and by 2022, more than 50% of enterprise-generated data will be created and processed outside the data center or cloud.” All of this data invariably needs a home, and depending on the type of data that is secured, whether it’s business or mission-critical, the design and location of its home will vary.
Autonomous vehicles are but one example of an automated, low-latency and data-dependent application. The real-time control data required to operate the vehicle is created, processed and stored via two-way communications at a number of local and roadside levels. On a city-wide basis, the data produced by each autonomous vehicle will be processed, analyzed, stored and transmitted in real-time, in order to safely direct the vehicle and manage the traffic. Yet on a national level, the data produced by millions of AVs could be used to shape transport infrastructure policy and redefine the automotive landscape globally.
Each of these processing, analysis, and storage locations requires a different type of facility to support its demand. Right now, data centers designed to meet the needs of standard or enterprise business applications are plentiful. However, data centers designed for dynamic, real-time data delivery, provisioning, processing, and storage are in short supply.
That’s partly because of the uncertainty over which applications will demand such infrastructure and, importantly, over what sort of timeframe. However, there’s also the question of flexibility. Many of the existing micro data center solutions are unable to meet the demands of edge or, more accurately, localized, low-latency applications, which also require high levels of agility and scalability. This is due to their pre-determined or specified approach to design and infrastructure components.
Traditionally, the market has been served by small-scale edge applications deployed in pre-populated, containerized solutions. A customer is often required to conform to a standard shape or size, and there’s no flexibility in terms of modularity, components, or make-up. So how do we change the thinking?
A Flexible Edge
Standardization has, in many respects, been crucial to our industry. It offers a number of key benefits, including the ability to replicate systems predictably across multiple locations. But when it comes to the edge, some standardized systems aren’t built for the customer – they’re a product of vendor collaboration, one that’s also accompanied by high costs and long lead times.
On the one hand, having a box with everything in it can undoubtedly solve some pain points, especially where integration is concerned. But what happens if the customer has its own alliances, or may not need all of the components? What happens if they run out of capacity in one site? Those original promises of scalability or flexibility disappear, leaving the customer with just one option – to buy another container. One might consider that rigidity, when it comes to ‘standardization’, can often be detrimental to the customer.
There is, however, the possibility that such modular, customizable, and scalable micro data center architectures can meet the end user’s requirements perfectly, allowing end-users to truly define and embrace their edge.
Is There a Simpler Way?
Today, forecasting growth is a key challenge for customers. With demands increasing to support a rapidly developing digital landscape, many will have a reasonable idea of what capacity is required today. But predicting how it will grow over time is far more difficult, and this is where modularity is key.
For example, pre-pandemic, a content delivery network with capacity located near large user groups may have found itself swamped with demand during lockdown. Today, it may be considering how to scale up local data center capacity quickly and incrementally to meet customer expectations, without deploying additional infrastructure across more sites.
There is also the potential of 5G-enabled applications, so how does one define what’s truly needed to optimize and protect the infrastructure in a manufacturing environment? Should an end user purchase a containerized micro data center because that’s what’s positioned as the ideal solution? Or should they customize and engineer a solution that can grow incrementally with demand? Or would it be more beneficial to deploy a single room that offers a secure, high-strength, walk-able roof that can host production equipment?
The point here is that when it comes to micro data centers, a one-size-fits-all approach does not work. End-users need the ability to choose their infrastructure based on their business demands – whether they be in industrial manufacturing, automotive, telco, or colocation environments. But how can users achieve this?
Infrastructure Agnostic Architectures
At Subzero Engineering, we believe that vendor-agnostic, flexible micro data centers are the future for the industry. For years we’ve been adding value to customers, and building containment systems around their needs, without forcing their infrastructure to fit into boxes.
We believe users should have the flexibility to utilize their choice of best-in-class data center components, including the IT stack, the uninterruptible power supply (UPS), cooling architecture, racks, cabling, or fire suppression system. So by taking an infrastructure-agnostic approach, we give customers the ability to define their edge, and use resilient, standardized, and scalable infrastructure in a way that’s truly beneficial to their business.
By taking this approach, we’re also able to meet demands for speed to market, delivering a fully customized solution to site within six weeks. Furthermore, by adopting a modular architecture that includes a stick-built enclosure, the option of a cleanroom, and a walk-able mezzanine roof, users can scale as demands require, without the need to deploy additional containerized systems.
This approach alone offers significant benefits, including a 20-30% cost saving compared with conventional ‘pre-integrated’ micro data center designs.
For too long now, our industry has been shaped by vendors that have forced customers to base decisions on systems which are constrained by the solutions they offer. We believe now is the time to disrupt the market, eliminate this misalignment, and enable customers to define their edge as they go.
By providing customers with the physical data center infrastructure they need, no matter their requirements, we can help them plan for tomorrow. As I said, standardization can offer many benefits, but not when it’s detrimental to the customer.
- Turnkey solution is building- and infrastructure-agnostic, providing 20%-30% cost savings compared to other solutions in the market.
- Standardized solution meets demanding timescales, shipping within as little as 36 hours, while fully customized micro data centers can be delivered, installed and operational in 4-6 weeks.
- Provides flexible micro data center system for colocation, 5G, retail, enterprise and industrial applications.
October 13, 2021 – Subzero Engineering, a leading provider of data center containment solutions, has today introduced its Essential Micro Data Center, the world’s first vendor-agnostic and truly flexible modular micro data center architecture. Available to order in the United States, United Kingdom and Europe, the Essential Micro Data Center meets customer demands for a standardized, premium-quality, cost-competitive and quick-to-install edge infrastructure system that provides a reduced total cost of ownership of between 20% and 30%.
Based on its Essential Series and AisleFrame product lines, the Essential Micro Data Center is a small-footprint, on-premises data center, engineered for distributed and remote infrastructure environments. Its modular architecture includes white-glove installation and support, power, cooling, infrastructure conveyance and containment. All of which are housed within a pre-fabricated, factory-assembled, modular room, and shipped flat-packed to site.
With increased requirements for real-time data processing, low latency, greater security and automation, the Essential Micro Data Center ensures predictability and performance for distributed applications. Furthermore, its customizable, modular design offers a fast, flexible and easy-to-build micro data center system, perfectly suited for colocation, 5G, retail, enterprise and industrial environments.
Strength, security, customization
The Essential Micro Data Center comprises two parts: a physically secure, modular room containing critical power and cooling infrastructure, and Subzero’s high-strength AisleFrame. Using this approach, the Essential Micro Data Center can support a variety of load requirements and includes built-in, customizable containment, integrated with self-supporting ceiling modules and insert panels available in ABS, acrylic, polycarbonate, aluminum or glass.
The pre-fabricated system can accommodate all ladder racking, busway, fiber trays and infrastructure necessary for micro data center applications, and offers support for hot or cold aisle applications, regardless of cooling methodology. For example, the high-strength ceiling can support a range of cooling systems, including overhead Computer Room Air Conditioning (CRAC) units. This feature offers complete customization for users who can deploy their infrastructure in aisle, row or rack configurations.
Further, its flexible, vendor-agnostic design provides users with the ability to custom-specify their own choice of power and cooling infrastructure. This approach helps overcome the challenge of having to use inflexible, pre-specified power and cooling systems in a containerized system, while retaining the ability to standardize, repeat and scale quickly, as business requirements change.
“The Essential Micro Data Center’s flexible design makes it a perfect fit for customers searching for an alternative to the inflexible and expensive pre-integrated solutions currently available,” said Sam Prudhomme, Vice President of Sales and Marketing, Subzero Engineering. “Our vendor-agnostic approach to component specification, combined with rapid installation and lower TCO, ensures customers can truly define and scale the edge on their own terms.”
The Subzero Engineering Essential Micro Data Center joins its recently launched Essentials Series, demonstrating the company’s commitment to delivering customer-focused, efficient and precision-engineered digital infrastructure solutions.
Each day, technology touches nearly every aspect of our lives in one way or another. For example, how many times a day do each of us access one or more apps on our smartphone? This trend of needing, creating, transferring, and accessing data in fractions of a second isn’t going away either. According to Gartner Research, the number of internet-capable devices worldwide passed 20 billion by 2020, and this number is expected to double by 2025. It is also estimated that approximately 463 exabytes of data will be generated each day by people as of 2025 – and with 1 exabyte equivalent to 1 billion gigabytes, a single exabyte alone is the equivalent of 212,765,957 DVDs! Along with this increase comes the need to have this data as fast as possible, with minimum delay or latency – something most of today’s data centers are not capable of delivering.
The increase in data and the need for high-speed data transfers has inspired the recent trend known as edge computing. What exactly is the edge? What is an edge data center? How are edge data centers evolving and how can facility and data center managers be ready without being left behind? What about the challenge of making a resilient, modular, and scalable edge data center while maintaining high efficiency and reliability? This paper will answer these and many more questions about the edge in the following topics:
- What is an Edge Data Center
- The Evolution of Edge Computing
- How Organizations are Responding to Edge Data Centers
- Solving the Challenge of Modular and Scalable Edge Infrastructures
- Reliability and Efficiency Needed at the Edge
- Containment’s Critical Role in Edge Deployments
- Bridging the Gap to the Edge, Now and Future