Data Center Infrastructure

What is Data Center Infrastructure?

Data Center Infrastructure refers to the core physical or hardware-based resources and components – including all IT infrastructure devices, equipment, and technologies – that comprise a data center. It is modeled and identified in a design plan that includes a complete listing of necessary infrastructure components used to create a data center. A data center infrastructure may include:

  • Servers
  • Computers
  • Networking equipment, such as routers or switches
  • Security, such as a firewall or biometric security system
  • Storage, such as storage area network (SAN) or backup/tape storage
  • Data center management software/applications

It can also include non-computing resources, such as:

  • Power and cooling devices, such as air conditioners or generators
  • Physical server racks/chassis
  • Cables
  • Internet backbone[1]


Evaluating Data Center Infrastructure[2]

  • Data Center Redundancy: While most data centers are quick to claim their systems are fully redundant, the terminology has become so muddled in recent years that their actual backup capabilities may not be clear. For MSPs and other companies looking to deliver a variety of bundled services to customers, it’s worth taking a closer look at the approaches a facility takes to data center redundancy. The first thing to look for is the quality of the data center’s uninterruptible power supply (UPS) systems. A reliable facility will have thorough auditing policies in place to ensure that backup batteries are ready to spring into action at any moment. The key differentiator is often whether data center redundancy incorporates fault tolerance or high availability strategies. Fault tolerance is what most people think of when they hear the word “redundancy”: two identical systems running in tandem on completely separate circuits, so that when one system goes down, the backup takes over without sacrificing uptime. Because this solution can be very expensive and complex to implement, many facilities use high-availability systems instead. Rather than mirroring systems entirely, this approach uses clusters of servers with failover capabilities that restart applications the moment a primary server crashes. High-availability clusters are cheaper to implement and less vulnerable to software problems, but they do incur a brief lag, and therefore some downtime, during failover (see the failover sketch after this list).
  • Power Density: Many data center cabinets were designed to accommodate lower power densities than most of today’s servers provide. Over the last decade, vast improvements in servers have even changed the way facilities measure their power capacity: wattage per square foot used to be the standard measurement, but today’s data centers measure power density at the server rack level. Ten years ago, 4-5 kW per rack was considered average; that figure is now closer to 15-20 kW per rack in high-performing facilities. Unfortunately, as power density increases, servers generate more heat and require more efficient cooling equipment (see the cooling-load calculation after this list). When looking at a data center, customers should consider whether the facility can make efficient use of its available power. Just because it claims to provide high-density server deployments doesn’t mean it can get the most out of them. Substandard or outdated cooling systems, for instance, could prevent those servers from running at peak potential and could lead to equipment and software failures from overheating, which in turn increase incidents of downtime. Cutting-edge cooling infrastructure controlled by AI and machine learning algorithms is helping data centers manage power consumption and heat more effectively than ever before.
  • Uptime SLA Requirements: Every assessment of data center infrastructure should begin with a thorough examination of its service level agreement (SLA). This document details the services a facility promises to deliver and stipulates penalties for the data center if it fails to comply. As legally binding documents, uptime SLAs are critical for customers looking to protect their data and assets. Expressed as a percentage, the uptime guarantee indicates how often the facility’s servers will be up and running. Modern, enterprise-level data centers should guarantee at least 99.99% uptime, with every additional “9” delivering a higher level of reliability (the sketch after this list translates these percentages into allowable downtime). The uptime SLA will also lay out the data center’s responsibilities with regard to technical support, transparency, and remuneration.
  • Remote Hands: Providing services through a data center can sometimes be a challenging task. Implementing systems and building up networks within the data center environment takes planning and expertise that even experienced IT personnel may not possess. A facility that offers qualified technicians who can make migration and integration efforts work together smoothly helps customers focus more of their valuable resources on delivering services that benefit their business. When problems do develop, having remote hands personnel on call 24x7x365 to address issues quickly reduces the negative impact of downtime. These technicians are already familiar with the particulars of the data center environment and can address maintenance issues and other emergencies more effectively than external IT teams. With a good remote hands team in place, service-based companies like MSPs can devote more of their IT resources to developing new offerings for their customers rather than troubleshooting.
  • Visibility: Understanding what’s happening in a data center environment is crucial for any company that delivers services through that infrastructure. They need to know how power and network performance are affected by traffic in order to plan effectively and determine how best to deploy their assets. Data center infrastructure management (DCIM) software can help provide this information (a hypothetical example of polling a DCIM API follows this list). Security is also a major concern when it comes to visibility. Sophisticated DCIM platforms make it easier to track assets continuously and ensure that every piece of equipment is where it’s supposed to be. Any company hoping to use a data center environment to build or bundle services needs to know what safeguards a facility has in place to protect against cyberattacks and data breaches. A robust business intelligence platform (like vXchnge’s award-winning in\site software) can provide a comprehensive picture of every relevant detail about data center infrastructure. If a facility makes it difficult to review its operations or is less than transparent about its policies, service providers and MSPs will have a hard time reassuring their own customers that their sensitive data and valuable assets are in safe hands. Partnering with a data center is an important decision for any organization. By reviewing these key aspects of data center infrastructure management, companies can predict whether a facility will be able to meet their needs and allow them to expand business opportunities in the future by providing a reliable IT environment for a range of services.
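
The difference between fault tolerance and high availability noted above comes down to how a standby takes over. The following is a minimal, illustrative Python sketch of a high-availability health-check loop, not any particular facility’s implementation; the node addresses, ports, and failover action are hypothetical placeholders.

```python
import socket
import time

# Hypothetical node addresses; in a real deployment these would come
# from the facility's configuration or cluster management system.
PRIMARY = ("10.0.0.10", 8080)
STANDBY = ("10.0.0.11", 8080)
CHECK_INTERVAL_S = 5   # seconds between health checks
TIMEOUT_S = 2          # how long to wait before declaring a check failed


def is_reachable(host, port, timeout=TIMEOUT_S):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def start_application_on(node):
    """Placeholder for the cluster's real failover action (e.g. a call to
    the cluster manager). Here it only logs the decision."""
    print(f"Failing over: starting application on {node[0]}:{node[1]}")


def monitor():
    """Poll the active node; switch to the other node when it goes dark."""
    active = PRIMARY
    while True:
        if not is_reachable(*active):
            # The window between detection and restart is the downtime
            # lag that distinguishes high availability from true
            # fault tolerance.
            active = STANDBY if active == PRIMARY else PRIMARY
            start_application_on(active)
        time.sleep(CHECK_INTERVAL_S)


if __name__ == "__main__":
    monitor()
```

In a real cluster the failover action is handled by the cluster manager rather than a script, but the detect-then-restart pattern is the same, and the gap between detection and restart is the downtime lag described above.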
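
To see why rack-level power density matters for cooling, here is a rough back-of-the-envelope calculation in Python. The rack counts and densities are hypothetical examples drawn from the averages mentioned above; the conversion of 1 watt of IT load to roughly 3.412 BTU/hr of heat is a standard physical factor.

```python
# Rough illustration of rack-level power density and the cooling load it
# implies. Rack counts and densities below are hypothetical; the factor
# of ~3.412 BTU/hr per watt of IT load is standard.

BTU_PER_WATT = 3.412


def cooling_load_btu_hr(racks, kw_per_rack):
    """Heat the cooling plant must remove, in BTU/hr."""
    total_watts = racks * kw_per_rack * 1_000
    return total_watts * BTU_PER_WATT


# A row of 20 racks at the older average density vs. a high-density row.
legacy = cooling_load_btu_hr(racks=20, kw_per_rack=5)    # ~341,200 BTU/hr
modern = cooling_load_btu_hr(racks=20, kw_per_rack=20)   # ~1,364,800 BTU/hr

print(f"Legacy row: {legacy:,.0f} BTU/hr")
print(f"Modern row: {modern:,.0f} BTU/hr")
print(f"Cooling load grows by a factor of {modern / legacy:.1f}x")
```

The same number of racks at 20 kW instead of 5 kW quadruples the heat that must be removed, which is why outdated cooling systems can keep high-density deployments from running at peak potential.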
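
Uptime percentages are easier to compare once they are translated into the downtime they actually allow. The short sketch below does that conversion; the percentages shown are illustrative, and a real SLA will also define how downtime is measured and what remedies apply.

```python
# Translate an uptime SLA percentage into the maximum downtime it permits.

HOURS_PER_YEAR = 365.25 * 24


def max_downtime_hours_per_year(uptime_pct):
    """Maximum allowed downtime per year, in hours, for a given uptime %."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)


for pct in (99.9, 99.99, 99.999):
    minutes = max_downtime_hours_per_year(pct) * 60
    print(f"{pct}% uptime -> at most {minutes:.1f} minutes of downtime per year")
```

At 99.99%, for example, the agreement permits only about 53 minutes of downtime per year, which is why each additional “9” matters.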
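
As an illustration of the kind of visibility DCIM tooling provides, the sketch below polls a generic, entirely hypothetical DCIM REST API for rack power readings and flags racks that exceed a planned density limit. The endpoint, field names, and token handling are assumptions for illustration only and do not reflect the API of in\site or any other specific product.

```python
import requests

# Entirely hypothetical DCIM endpoint and schema -- real platforms expose
# their own APIs and field names.
DCIM_BASE_URL = "https://dcim.example.com/api/v1"
API_TOKEN = "replace-with-a-real-token"


def fetch_rack(rack_id):
    """Pull the current record for one rack from the hypothetical DCIM API."""
    resp = requests.get(
        f"{DCIM_BASE_URL}/racks/{rack_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


def flag_overloaded_racks(rack_ids, limit_kw=15.0):
    """Print racks whose reported power draw exceeds the planned density."""
    for rack_id in rack_ids:
        rack = fetch_rack(rack_id)
        draw_kw = rack.get("power_draw_kw", 0.0)
        if draw_kw > limit_kw:
            print(f"Rack {rack_id}: {draw_kw:.1f} kW exceeds {limit_kw} kW plan")


if __name__ == "__main__":
    flag_overloaded_racks(["A01", "A02", "A03"])
```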


Standards for Data Center Infrastructure[3]

The most widely adopted standard for data center design and data center infrastructure is ANSI/TIA-942. It includes standards for ANSI/TIA-942-ready certification, which ensures compliance with one of four categories of data center tiers rated for levels of redundancy and fault tolerance.

  • Tier 1: Basic site infrastructure. A Tier 1 data center offers limited protection against physical events. It has single-capacity components and a single, nonredundant distribution path.
  • Tier 2: Redundant-capacity component site infrastructure. This data center offers improved protection against physical events. It has redundant-capacity components and a single, nonredundant distribution path.
  • Tier 3: Concurrently maintainable site infrastructure. This data center protects against virtually all physical events, providing redundant-capacity components and multiple independent distribution paths. Each component can be removed or replaced without disrupting services to end users.
  • Tier 4: Fault-tolerant site infrastructure. This data center provides the highest levels of fault tolerance and redundancy. Redundant-capacity components and multiple independent distribution paths enable concurrent maintainability, and the installation can sustain a single fault anywhere without causing downtime.
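
For evaluation purposes, the tier characteristics listed above can be captured as a simple lookup structure. The sketch below encodes them in Python so a checklist script can compare a facility’s claimed rating against its reported attributes; the helper function and the example facility are illustrative only.

```python
# The four rating levels from the list above, encoded as data so an
# evaluation script can compare a facility's claims against the expected
# characteristics. distribution_paths = 2 means "at least two (multiple)".

TIERS = {
    1: {"redundant_components": False, "distribution_paths": 1,
        "concurrently_maintainable": False, "fault_tolerant": False},
    2: {"redundant_components": True, "distribution_paths": 1,
        "concurrently_maintainable": False, "fault_tolerant": False},
    3: {"redundant_components": True, "distribution_paths": 2,
        "concurrently_maintainable": True, "fault_tolerant": False},
    4: {"redundant_components": True, "distribution_paths": 2,
        "concurrently_maintainable": True, "fault_tolerant": True},
}


def meets_tier(claimed_tier, facility):
    """Check whether a facility's reported attributes satisfy a claimed tier."""
    required = TIERS[claimed_tier]
    return all(facility.get(key, 0) >= value for key, value in required.items())


# Example: redundant components but only one distribution path cannot
# justify a Tier 3 claim.
site = {"redundant_components": True, "distribution_paths": 1,
        "concurrently_maintainable": True, "fault_tolerant": False}
print(meets_tier(3, site))  # False
```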


See Also


References