Data Center

Revision as of 13:54, 19 July 2023

What is a Data Center?[1]

Data centers are simply centralized locations where computing and networking equipment is concentrated to collect, store, process, distribute, or provide access to large amounts of data. They have existed in one form or another since the advent of computers.

A traditional data center consists of three main sections:[2]

  • Compute - Equipment Distribution Area (EDA)
  • Network - Main Distribution Area (MDA)
  • Storage - Storage Area Network (SAN)

The Importance of Data Centers[3]

In the world of enterprise IT, data centers are designed to support business applications and activities that include:

  • Email and file sharing
  • Productivity applications
  • Customer relationship management (CRM)
  • Enterprise resource planning (ERP) and databases
  • Big data, artificial intelligence, and machine learning
  • Virtual desktops, communications, and collaboration services

History of Data Centers[4]

Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations grew in complexity, organizations became aware of the need to control IT resources. The advent of Unix in the early 1970s led to the proliferation of freely available Unix-compatible PC operating systems, such as Linux, during the 1990s. The machines running them were called "servers", as timesharing operating systems like Unix rely heavily on the client-server model to facilitate sharing of unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a dedicated room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition at this time.

The boom in data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies, so many providers started building very large facilities, called Internet data centers (IDCs), which provided enhanced capabilities such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of the outage."

The term cloud data centers (CDCs) has also been used. Data centers typically cost a great deal to build and maintain. Increasingly, the distinction between these terms has all but disappeared, and they are being absorbed into the single term "data center".

The Evolution to Modern Data Centers[5]

Data centers first emerged in the early 1940s, when computer hardware was complex to operate and maintain. Early computer systems required many large components that operators had to connect with many cables. They also consumed a large amount of power and required cooling to prevent overheating. To manage these computers, called mainframes, companies typically placed all the hardware in a single room, called a data center. Every company invested in and maintained its own data center facility. Over time, innovations in hardware technology reduced the size and power requirements of computers. However, at the same time, IT systems became more complex, such as in the following ways:

  • The amount of data generated and stored by companies increased exponentially.
  • Virtualization technology separated software from the underlying hardware.
  • Innovations in networking made it possible to run applications on remote hardware.

Modern data centers
Modern data center design evolved to better manage IT complexity. Companies used data centers to store physical infrastructure in a central location that they could access from anywhere. With the emergence of cloud computing, third-party companies manage and maintain data centers and offer infrastructure as a service to other organizations. As the world’s leading cloud services provider, AWS has created innovative cloud data centers around the globe.

Types of Data Centers[6]

  • Enterprise data centers: These are constructed, owned, and utilized by companies for their own internal computing needs. Enterprise data centers are custom-built to meet the requirements of the organizations that own them and are housed on-premises.
  • Managed services data centers: Managed data centers are deployed, managed, and monitored by third-party service providers. Companies opt for a leasing model and can access data center features and functions using a managed service platform. This eliminates the need to purchase equipment and infrastructure.
  • Colocation data centers: Colocation data centers allow businesses to rent space within an off-premises physical facility that hosts the infrastructure, including power supplies, cooling, and security. The business provides and manages its own components, such as computing hardware and servers.
  • Cloud data centers: This is an off-premises variation of a data center. Cloud-based data centers offer businesses leased, hosted infrastructure, which is managed by a third-party service provider, allowing customers to access resources via the internet.

Data Center Architecture[7]

Most modern data centers—even in-house on-premises data centers—have evolved from traditional IT architecture, where every application or workload ran on its own dedicated hardware, to cloud architecture, in which physical hardware resources—CPUs, storage, networking—are virtualized. Virtualization enables these resources to be abstracted from their physical limits and pooled into capacity that can be allocated across multiple applications and workloads in whatever quantities they require.
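The pooling idea described above can be illustrated with a minimal sketch: physical capacity is abstracted into a shared pool and carved up per workload on demand. All class and method names here are illustrative, not a real hypervisor or cloud API.

```python
# Minimal sketch of a virtualized resource pool: physical capacity is
# abstracted into shared CPU/memory totals and allocated to workloads
# in whatever quantities they require. Names are hypothetical.

class ResourcePool:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.allocations = {}

    def allocate(self, workload, cpus, memory_gb):
        """Carve a slice of pooled capacity out for one workload."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"insufficient capacity for {workload}")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.allocations[workload] = (cpus, memory_gb)

    def release(self, workload):
        """Return a workload's slice to the shared pool."""
        cpus, memory_gb = self.allocations.pop(workload)
        self.free_cpus += cpus
        self.free_memory_gb += memory_gb

pool = ResourcePool(cpus=64, memory_gb=512)
pool.allocate("crm-app", cpus=8, memory_gb=64)
pool.allocate("analytics", cpus=32, memory_gb=256)
pool.release("crm-app")  # freed capacity is immediately reusable
```

The point of the sketch is the contrast with dedicated hardware: once "crm-app" is released, its CPUs and memory return to the pool rather than sitting idle on a machine bought for that one application.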

Virtualization also enables software-defined infrastructure (SDI)—infrastructure that can be provisioned, configured, run, maintained, and ‘spun down’ programmatically, without human intervention.
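In practice, "software-defined" means the infrastructure is described as data and acted on by code. A hedged sketch of that idea, with an in-memory stand-in playing the role of a real orchestration API (the spec format and function names are made up for illustration):

```python
# Sketch of software-defined infrastructure (SDI): a declarative spec
# is provisioned and later spun down programmatically, with no human
# intervention. The "instances" here are plain dicts standing in for
# what a real IaaS SDK would create.

spec = {
    "web": {"count": 3, "cpus": 2},
    "db":  {"count": 1, "cpus": 8},
}

def provision(spec):
    """Turn a declarative spec into a fleet of (virtual) instances."""
    instances = []
    for role, cfg in spec.items():
        for i in range(cfg["count"]):
            instances.append(
                {"id": f"{role}-{i}", "cpus": cfg["cpus"], "state": "running"}
            )
    return instances

def spin_down(instances):
    """Release every instance programmatically."""
    for inst in instances:
        inst["state"] = "terminated"
    return instances

fleet = provision(spec)  # four instances: web-0..web-2 and db-0
```

Changing the desired state is then a matter of editing `spec` and re-provisioning, which is what makes self-service portals and automation pipelines possible.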

The combination of cloud architecture and SDI offers many advantages to data centers and their users, including the following:

  • Optimal utilization of computing, storage, and networking resources. Virtualization enables companies or clouds to serve the most users using the least hardware, and with the least unused or idle capacity.
  • Rapid deployment of applications and services. SDI automation makes provisioning new infrastructure as easy as making a request via a self-service portal.
  • Scalability. Virtualized IT infrastructure is far easier to scale than traditional IT infrastructure. Even companies using on-premises data centers can add capacity on demand by bursting workloads to the cloud when necessary.
  • Variety of services and data center solutions. Companies and clouds can offer users a range of ways to consume and deliver IT, all from the same infrastructure. Choices are made based on workload demands and include infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). These services can be offered in a private data center, or as cloud solutions in either a private cloud, public cloud, hybrid cloud, or multi-cloud environment.
  • Cloud-native development. Containerization and serverless computing, along with a robust open-source ecosystem, enable and accelerate DevOps cycles and application modernization as well as enable develop-once-deploy-anywhere apps.
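The "bursting" pattern mentioned in the scalability point above reduces to a simple placement decision: run workloads on-premises while local capacity lasts, and overflow the rest to a cloud pool. A minimal sketch (the workload names and CPU figures are invented for illustration):

```python
# Sketch of cloud bursting: workloads fill on-premises capacity first;
# anything that no longer fits is placed in a (hypothetical) cloud pool.

def place_workloads(demands, on_prem_cpus):
    """Split (name, cpus) demands into on-prem and burst-to-cloud lists."""
    on_prem, cloud = [], []
    free = on_prem_cpus
    for name, cpus in demands:
        if cpus <= free:
            free -= cpus
            on_prem.append(name)
        else:
            cloud.append(name)  # burst: the local pool is full
    return on_prem, cloud

on_prem, cloud = place_workloads(
    [("erp", 16), ("batch-report", 24), ("ml-training", 32)],
    on_prem_cpus=48,
)
```

With 48 local CPUs, the first two workloads fit on-premises and the 32-CPU training job bursts to the cloud; real schedulers add constraints (data locality, cost, latency), but the capacity-overflow logic is the core of the pattern.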

Data Centers: Location and Management[8]

Technically, a data center can be located anywhere, but most are sited where the following are readily available:

  • Uninterrupted electrical supply; better still if electricity is cheap or can be generated on-site.
  • Low risk of natural disasters, outside 100-year flood zones where possible.
  • Proximity to business centers and fiber backbone routes.
  • Access to cooling for equipment, such as cool outside air, power for air conditioning, and/or water for heat-transfer infrastructure; some data centers are even located underwater or underground for this reason.

Every data center is managed slightly differently, depending on who built it and for what purpose. If a single organization owns a data center and uses it only for its own purposes, then that organization manages it, with its own staff on hand to keep tabs on everything.

One common type of data center is known as a colocation facility. Under such an arrangement, a business rents out a set amount of space within a larger location. The renter is responsible for installing servers, racks, etc., and for maintaining said equipment. They would also have to pay for power and cooling, although the core power and cooling infrastructure, along with the raised floor itself, is usually provided by the facility owner. With colocation, the facility owner is also responsible for maintaining the building at large and for security.

Another common arrangement is for a company to rent everything from a central provider. The facility owner owns and maintains everything inside the data center and allows organizations to purchase rack space or the use of its servers.

Cloud data centers[9]

Cloud data centers (also called cloud computing data centers) house IT infrastructure resources for shared use by multiple customers—from scores to millions of customers—via an Internet connection.

Many of the largest cloud data centers—called hyperscale data centers—are run by major cloud services providers like Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud, Microsoft Azure, and Oracle Cloud Infrastructure. In fact, most leading cloud providers run several hyperscale data centers around the world. Typically, cloud service providers maintain smaller, edge data centers located closer to cloud customers (and cloud customers’ customers). For real-time, data-intensive workloads such as big data analytics, artificial intelligence (AI), and content delivery applications, edge data centers can help minimize latency, improving overall application performance and customer experience.

See Also