National Aeronautics and Space Administration
High-End Computing Program


COMPUTING SYSTEMS OVERVIEW

This overview lists the systems and related resources at the NASA Advanced Supercomputing (NAS) Facility and the NASA Center for Climate Simulation (NCCS), grouped by facility within each category.


Information about HEC Systems and Related Resources

Systems

NAS:

Aitken

SGI/HPE modular system

4 E-Cells (1,152 nodes), 16 Apollo 9000 racks (2,048 nodes)
Total theoretical peak: 13.12 petaflops
Total LINPACK rating: 9.07 petaflops (#58 on June 2022 TOP500 list)
Total HPCG rating: 172.38 teraflops (#44 on June 2022 HPCG list)
Total cores: 308,224
Total memory: 1.27 petabytes

Intel Xeon Cascade Lake processors (2.5 GHz)

AMD Rome processors (2.25 GHz)

Electra

SGI modular system

24 racks (3,456 nodes)

124,416 cores
8.32 petaflops peak
5.44 petaflops LINPACK rating (#53 on November 2020 TOP500 list)
106.54 teraflops HPCG rating (#349 on November 2020 HPCG list)

589 terabytes of memory

Intel Xeon Gold 6148 Skylake processors (2.4 GHz) and Intel Xeon E5-2680v4 Broadwell processors (2.4 GHz)

Pleiades

SGI ICE cluster

158 racks (11,207 nodes)

241,324 cores
7.09 petaflops peak
5.95 petaflops LINPACK rating (#90 on June 2022 TOP500 list)
175 teraflops HPCG rating (#43 on June 2022 HPCG list)
927 terabytes of memory
Intel Xeon Sandy Bridge E5-2670 processors (2.6 GHz); Intel Xeon Ivy Bridge E5-2680v2 processors (2.8 GHz); Intel Xeon Haswell E5-2680v3 (2.5 GHz) processors; and Intel Xeon Broadwell E5-2680v4 processors (2.4 GHz)


GPU nodes: 3 racks (83 nodes total) enhanced with NVIDIA graphics processing units (GPUs)

1,024 Intel Xeon Sandy Bridge cores and 684 Intel Xeon Skylake cores
614,400 GPU cores
646 teraflops, peak

 

Endeavour

2-node HPE Superdome Flex system

1,792 cores
154.8 teraflops, peak

12 terabytes of memory

Intel Xeon Platinum 8280 Cascade Lake processors (2.7 GHz)
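
As a rough illustration of how these theoretical peak figures are derived, the short Python sketch below reproduces Endeavour's 154.8-teraflop figure. It assumes 32 double-precision floating-point operations per cycle per core (two AVX-512 FMA units on Cascade Lake) at the listed 2.7 GHz base clock; it is a back-of-the-envelope check, not an official formula.

# Back-of-the-envelope check of Endeavour's theoretical peak (illustrative sketch).
# Assumption: Cascade Lake cores with two AVX-512 FMA units deliver 32 double-precision
# FLOPs per cycle per core at the 2.7 GHz base clock listed above.
cores = 1792                 # 2-node HPE Superdome Flex total
clock_hz = 2.7e9             # Intel Xeon Platinum 8280 base clock
flops_per_cycle = 32         # assumed: 2 AVX-512 FMA units x 8 doubles x 2 ops (multiply-add)

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.1f} teraflops")   # prints ~154.8 teraflops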

Cabeus Supercomputer

22 racks

187 nodes

10,956 CPU cores + 2,428,928 double-precision GPU cores

Theoretical double-precision peak performance: 7.56 petaflops (0.57 petaflops from CPUs + 6.99 petaflops from GPUs)

Total memory: 75 terabytes (26 terabytes from CPU host memory + 49 terabytes from GPU memory)

NVIDIA A100 GPU nodes
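
The CPU/GPU split quoted for Cabeus can be sanity-checked with simple addition; the minimal Python tally below just sums the two contributions listed above (illustrative only).

# Tally of Cabeus's quoted peak-performance and memory contributions (from the figures above).
cpu_peak_pflops, gpu_peak_pflops = 0.57, 6.99
cpu_mem_tb, gpu_mem_tb = 26, 49

print(f"Peak:   {cpu_peak_pflops + gpu_peak_pflops:.2f} petaflops")   # 7.56 petaflops
print(f"Memory: {cpu_mem_tb + gpu_mem_tb} terabytes")                 # 75 terabytes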

NCCS:

Discover
Aggregate System:

2,552 nodes

169,536 cores (see the core tally after the Scalable Compute Unit list below)

8.96 petaflops peak

Scalable Compute Unit 14 = Supermicro FatTwin Rack Scale System
20,800 cores
Intel Xeon Skylake (2.4 GHz)

Scalable Compute Unit 15 = Aspen Systems and Supermicro TwinPro Rack Scale System
25,600 cores
Intel Xeon Skylake (2.4 GHz)

Scalable Compute Unit 16 CPU-Only Nodes = Aspen Systems and Supermicro TwinPro nodes
32,448 cores
Intel Xeon Cascade Lake Refresh processor cores (2.4 GHz)
Scalable Compute Unit 16 – CPU & GPU Nodes
12 Supermicro GPU nodes, each with AMD EPYC Rome and 4 NVIDIA A100 GPUs

576 total AMD EPYC Rome processor cores (2.8 GHz)

6,912 CUDA cores per A100 GPU

Scalable Compute Unit 17 CPU-Only Nodes = Aspen Systems and Supermicro TwinPro nodes
90,112 cores
AMD Milan EPYC processor cores (2.0 GHz)
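
As a cross-check on Discover's aggregate figures, the core counts listed for the individual Scalable Compute Units sum to the quoted system total; a minimal Python tally (numbers taken directly from the list above):

# Sum of Discover's per-Scalable-Compute-Unit CPU core counts.
scu_cores = {
    "SCU14": 20_800,
    "SCU15": 25_600,
    "SCU16 CPU-only": 32_448,
    "SCU16 CPU+GPU": 576,
    "SCU17": 90_112,
}
print(sum(scu_cores.values()))   # 169536 -- matches the 169,536-core aggregate quoted above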

Storage

NAS:

Online:
90 petabytes of RAID disk capacity (combined total for all systems)

Archive Capacity:
1,040 petabytes (1 exabyte)

NCCS:

Online:
82 petabytes of RAID

Archive Capacity:
120 petabytes, going read-only in 2023

Centralized Storage System (CSS):
24 Intel Xeon nodes
4 Ethernet connections
2 InfiniBand connections
3 GPFS Quorum nodes
1 GPFS GUI/management node
72.15 petabytes of disk

Networking

NAS:
SGI NUMAlink
Voltaire InfiniBand
10-Gigabit Ethernet
1-Gigabit Ethernet

NCCS:
Mellanox Technologies InfiniBand
Intel Omni-Path
40-Gigabit Ethernet
10-Gigabit Ethernet
1-Gigabit Ethernet

Visualization and Analysis

NAS:

Hyperwall-2
128-screen tiled LCD wall arranged in 8x16 configuration
Measures 23 ft. wide by 10 ft. high
128 graphics processing units (NVIDIA GeForce GTX 780 Ti)
646 teraflops, peak processing power
2,560 Intel Xeon E5-2680v2 (Ivy Bridge) cores (10-core)
57 teraflops, peak processing power
393 gigabytes of GDDR5 graphics memory
1.5 petabytes of storage
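
A quick tally of the hyperwall-2 figures above, assuming one render node per display with one GPU each (the per-node core count is inferred from the quoted totals, not stated explicitly):

# Illustrative breakdown of the hyperwall-2 figures above.
# Assumption: one render node per display, one GeForce GTX 780 Ti per node.
screens = 8 * 16          # 128 displays in the 8x16 tile layout
gpus = 128                # quoted GPU total -- one per display under this assumption
xeon_cores = 2_560        # total Ivy Bridge cores (quoted)

print(screens == gpus)          # True: one GPU per screen
print(xeon_cores // screens)    # 20 cores per node, i.e. dual 10-core E5-2680v2 (inferred)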

NCCS:

Data Visualization Theater
Hyperwall

15 Samsung UD55C 55-inch displays in 5x3 configuration
Measures 20 ft. wide by 6 ft. 10 in. high
DVI connection
1920 x 1080 (1080p) resolution per display

Hyperwall Cluster
16 Dell Precision WorkStation R5400s
2 dual-core Intel Xeon Harpertown processors per node
4 GB of memory per node
NVIDIA Quadro FX 1700 graphics
1 Gigabit Ethernet network connectivity
Control Station
One Dell FX100 Thin Client
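
For reference, the aggregate pixel count of the theater's 5x3 display wall follows directly from the per-panel resolution quoted above (bezels ignored); a small Python sketch:

# Aggregate resolution of the 5x3 wall of 1920x1080 panels (bezels ignored).
cols, rows = 5, 3
panel_w, panel_h = 1920, 1080

wall_w, wall_h = cols * panel_w, rows * panel_h
print(wall_w, wall_h)              # 9600 3240
print(wall_w * wall_h / 1e6)       # ~31.1 megapixels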

Explore/ADAPT Science Cloud (NCCS)

Managed Virtual Machine Environment

550+ Hypervisors – Intel Xeon Westmere, Ivy Bridge, Sandy Bridge, and Broadwell processor cores and AMD Rome and Milan processor cores

High-speed InfiniBand and 10 Gigabit Ethernet networks

Linux and Windows Virtual Machines

7 petabytes of Panasas storage

Explore/ADAPT: Prism GPU Cluster
22 Supermicro Compute Nodes:
4x NVIDIA V100 GPUs with 32 GB of VRAM and NVLink
Dual Intel Xeon Cascade Lake Gold 6248 CPUs; 20 cores each (2.50 GHz)
768 gigabytes of RAM
Dual 25-Gb Ethernet network interfaces
Dual 100-Gb HDR100 InfiniBand high-speed network interfaces
3.8-terabyte RAID-protected NVMe drives, mounted as /lscratch
One NVIDIA DGX Node:
8x NVIDIA A100 GPUs with 40 gigabytes of VRAM and NVLink
Dual AMD EPYC Rome 7742 CPUs; 64 cores each (2.25 GHz)
1 terabyte of RAM
Dual 25-Gb Ethernet network interfaces
Dual 100-Gb HDR100 InfiniBand high-speed network interfaces
14 terabytes of RAID-protected NVMe drives, mounted as /lscratch
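
Putting the Prism node counts together, the cluster's GPU and GPU-memory totals follow from the per-node figures above; a minimal Python tally (derived values, not quoted figures):

# Tally of Prism GPU resources from the per-node figures above.
v100_nodes, v100_per_node, v100_vram_gb = 22, 4, 32   # Supermicro compute nodes
a100_nodes, a100_per_node, a100_vram_gb = 1, 8, 40    # NVIDIA DGX node

total_gpus = v100_nodes * v100_per_node + a100_nodes * a100_per_node
total_vram_gb = (v100_nodes * v100_per_node * v100_vram_gb
                 + a100_nodes * a100_per_node * a100_vram_gb)
print(total_gpus)       # 96 GPUs (88 V100 + 8 A100)
print(total_vram_gb)    # 3136 GB of GPU memory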

DataPortal (NCCS)

HP ProLiant DL380p Gen8

Dual-socket, 10-core Intel Xeon Ivy Bridge processors (2.5 GHz)

128 gigabytes of RAM

Mellanox ConnectX-3 MT27500 Interconnect

2 x 500GB SAS drives and 3 x 4TB SAS drives

JupyterHub (NCCS)

Available on ADAPT/Explore and Prism, coming soon to Discover

 
