Glossary of Important Concepts

GPU Hosting

The use of graphics processing units (GPUs) in a hosted environment to run large-scale AI computations and high-performance computing tasks.

Green Data Center

A data center designed to minimize environmental impact through the use of renewable energy sources, efficient cooling technologies, and sustainable infrastructure.

AI Data Center

A data center specifically optimized to support artificial intelligence workloads, including training and inference. These centers house GPUs or other AI-specific hardware for intensive computing tasks.

Training

Training in AI involves feeding data to a model and adjusting its parameters to learn patterns, enabling it to make accurate predictions or decisions on new data.
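
To make the idea concrete, here is a toy sketch in plain Python (the data, model, and learning rate are all invented for illustration): a one-parameter model fitted by gradient descent, the simplest form of "adjusting parameters to learn patterns".

```python
# Toy sketch of training: fit y = w * x by gradient descent.
# Data, learning rate, and epoch count are made up for illustration.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs
w = 0.0    # model parameter, initially untrained
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x               # model prediction
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # adjust parameter to reduce error

print(f"learned w = {w:.2f}")  # close to 2, the pattern in the data
```

Real AI training follows the same loop, just with billions of parameters and specialized hardware.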

Tier 3

A Tier 3 data center offers 99.982% expected uptime, with redundant power and cooling systems that allow maintenance to be performed without shutting down operations, ensuring high availability.
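
To put that figure in perspective, a worked conversion to yearly downtime:

```latex
(1 - 0.99982) \times 8760\ \text{hours/year} \approx 1.6\ \text{hours of downtime per year}
```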

LLM

An LLM (Large Language Model) is an AI model trained on vast amounts of text data, capable of understanding, generating, and predicting human-like text across many languages and contexts.

East-West Data Traffic

East-West traffic refers to data that moves laterally within a data center or corporate network. This includes server-to-server communications, data replication, backups, and inter-process communications.

DLC

DLC (Direct Liquid-to-Chip Cooling) circulates liquid coolant through cold plates mounted directly on processors, removing heat efficiently and improving cooling performance in high-density data centers.

PUE

PUE (Power Usage Effectiveness) measures a data center’s energy efficiency by comparing total energy use to energy used by IT equipment. Lower PUE means higher efficiency.
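
The underlying formula, with an invented worked example (the megawatt figures are illustrative, not measured):

```latex
\mathrm{PUE} = \frac{\text{Total facility energy}}{\text{IT equipment energy}}
\qquad \text{e.g.} \quad \frac{1.32\ \text{MW}}{1.10\ \text{MW}} = 1.2
```

A PUE of 1.0 would mean every watt entering the facility reaches the IT equipment; the gap above 1.0 is overhead such as cooling and power distribution.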

HPC

HPC (High-Performance Computing) uses powerful servers and GPUs in data centers to process complex tasks like simulations, AI training, and big data analysis at high speed.

Inference

Inference in AI is the process where trained models make predictions or decisions on new data, applying learned patterns from the training phase to real-world tasks.
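
Continuing the toy model from the Training entry above (the learned value w = 2.0 is assumed here, not computed), inference simply applies the frozen parameter to inputs the model has never seen:

```python
# Toy sketch of inference: apply a trained parameter to new inputs.
w = 2.0  # stands in for the value learned during training

def predict(x):
    return w * x  # no parameter updates happen at inference time

for x in [4.0, 5.5]:  # new data the model never saw in training
    print(x, "->", predict(x))
```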

Free Cooling

Free cooling in data centers uses natural cold air or water to reduce energy consumption for cooling, lowering PUE and improving overall energy efficiency.

Checkpointing

Checkpointing in AI training saves model states at intervals, allowing progress recovery after interruptions, reducing time lost from system failures or crashes.
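
A minimal sketch of the pattern in plain Python (the file name, save interval, and training step are placeholders; real training frameworks ship their own checkpoint utilities):

```python
import os
import pickle

CKPT = "model.ckpt"  # hypothetical checkpoint file

# Resume from the last checkpoint if one exists, else start fresh.
if os.path.exists(CKPT):
    with open(CKPT, "rb") as f:
        state = pickle.load(f)
else:
    state = {"epoch": 0, "w": 0.0}

for epoch in range(state["epoch"], 100):
    state["w"] += 0.01       # stand-in for one epoch of training
    state["epoch"] = epoch + 1
    if state["epoch"] % 10 == 0:  # save every 10 epochs
        with open(CKPT, "wb") as f:
            pickle.dump(state, f)  # progress survives a crash
```

If the process dies at epoch 57, a restart resumes from epoch 50 instead of epoch 0, which matters when a single training run spans days or weeks.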

GPU Cloud

GPU Cloud provides remote access to powerful GPUs for AI, ML, and HPC tasks, enabling scalable, high-performance computing without on-premises hardware.

RDx

RDx (Rear Door Cooling) uses a heat exchanger on the back of server racks to cool equipment, reducing the need for traditional air conditioning and improving efficiency.

AI Factory

An AI Factory is a data center optimized for AI workloads, using high-performance GPUs, efficient cooling, and scalable infrastructure for AI training and inference tasks.

Multinode AI Workload

A multinode AI workload distributes AI computational tasks across multiple computing nodes or servers to improve performance, scalability, and efficiency.
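
As a loose analogy in plain Python, with local processes standing in for nodes (real multinode systems coordinate over a network using frameworks such as MPI or distributed training libraries):

```python
from multiprocessing import Pool

def node_work(shard):
    # Stand-in for the computation one node performs on its shard of data.
    return sum(x * x for x in shard)

if __name__ == "__main__":
    data = list(range(1_000_000))
    shards = [data[i::4] for i in range(4)]  # split work across 4 "nodes"
    with Pool(4) as pool:
        partials = pool.map(node_work, shards)  # run shards in parallel
    print(sum(partials))  # combine partial results
```

The split-compute-combine pattern is the same at data-center scale, which is why low-latency, high-bandwidth links between nodes matter so much.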

North-South Data Traffic

North-South traffic refers to data that moves between an internal network and external networks. This includes traffic that flows from clients (such as users or external applications) to servers in a data center and vice versa.


GPU Hosting



GPU Hosting refers to the service of providing access to powerful Graphics Processing Units (GPUs) in a remote data center for high-performance computing (HPC), artificial intelligence (AI), machine learning (ML), and other compute-intensive tasks. These hosted GPUs deliver the computational power needed to process large datasets, train AI models, and run complex simulations, without the need for businesses to invest in or maintain their own hardware infrastructure.


The main difference from traditional, CPU-based computing is that accelerated computing on GPUs requires significantly denser layouts. In practice, as the rough calculation below illustrates, this means:
Higher electricity and cooling requirements (racks drawing 10 to 40 kW)
Low latency between racks (higher bandwidth requirements between racks)
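
A rough illustration of why rack power climbs so quickly (the figure of about 10 kW per multi-GPU server is an assumption for illustration, roughly in line with modern 8-GPU systems):

```latex
4\ \text{servers} \times 10\ \text{kW/server} = 40\ \text{kW per rack}
```

A conventional CPU rack, by contrast, typically draws only a few kilowatts, which is why GPU hosting demands purpose-built power and cooling infrastructure.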
