High-End Computing Program


COMPUTING SYSTEMS OVERVIEW

This table shows the systems and related resources at the NASA Advanced Supercomputing (NAS) Facility and the NASA Center for Climate Simulation (NCCS).


Information about HEC Systems and Related Resources
Systems

NAS:

Aitken

SGI/HPE modular system

4 E-Cells (1,152 nodes)

2019 Deployment:

  • 3.69 petaflops theoretical peak
  • 2.38 petaflops LINPACK rating (#169 on November 2020 TOP500 list)
  • 45.47 teraflops HPCG rating (#58 on November 2020 HPCG list)

2020 Deployment (Available March 2021):

  • 4.72 petaflops theoretical peak
  • 4.01 petaflops LINPACK rating (#71 on November 2020 TOP500 list)
  • 70.6 teraflops HPCG rating (#42 on November 2020 HPCG list)

Total theoretical peak: 8.41 petaflops
Total LINPACK rating: 6.39 petaflops
Total HPCG rating: 116.07 teraflops
Total cores: 177,152
Total memory: 740 terabytes

Intel Xeon Cascade Lake processors (2.5 GHz)

AMD Rome processors (2.25 GHz)
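The Aitken totals above are simply the sums of the 2019 and 2020 deployment figures; a quick check (field names are illustrative only):

```python
# Sum the per-deployment Aitken benchmark figures quoted above.
deployments = [
    {"peak_pflops": 3.69, "linpack_pflops": 2.38, "hpcg_tflops": 45.47},  # 2019
    {"peak_pflops": 4.72, "linpack_pflops": 4.01, "hpcg_tflops": 70.6},   # 2020
]
totals = {key: round(sum(d[key] for d in deployments), 2) for key in deployments[0]}
print(totals)  # {'peak_pflops': 8.41, 'linpack_pflops': 6.39, 'hpcg_tflops': 116.07}
```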

Electra

SGI modular system

24 racks (3,456 nodes)

124,416 cores
8.32 petaflops peak
5.44 petaflops LINPACK rating (#53 on November 2020 TOP500 list)
106.54 teraflops HPCG rating (#349 on November 2020 HPCG list)

589 terabytes of memory

Intel Xeon Gold 6148 Skylake processors (2.4 GHz) and Intel Xeon E5-2680v4 Broadwell processors (2.4 GHz)

Pleiades

SGI ICE cluster

158 racks (11,207 nodes)

241,324 cores
7.09 petaflops peak
5.95 petaflops LINPACK rating (#46 on November 2020 TOP500 list)
175 teraflops HPCG rating (#25 on November 2020 HPCG list)
927 terabytes of memory
Intel Xeon Sandy Bridge E5-2670 processors (2.6 GHz); Intel Xeon Ivy Bridge E5-2680v2 processors (2.8 GHz); Intel Xeon Haswell E5-2680v3 (2.5 GHz) processors; and Intel Xeon Broadwell E5-2680v4 processors (2.4 GHz)


GPU nodes: 3 racks (83 nodes total) enhanced with NVIDIA graphics processing units (GPUs)

1,024 Intel Xeon Sandy Bridge cores and 684 Intel Xeon Skylake cores
614,400 GPU cores
646 teraflops, peak

 

Endeavour

2-node HPE Superdome Flex system

1,792 cores
154.8 teraflops, peak

12 terabytes of memory

Intel Xeon Platinum 8280 Cascade Lake processors (2.7 GHz)

 

Merope

56 racks (half-population; 1,792 nodes)

21,504 cores
252 teraflops, peak

86 terabytes of memory

Intel Xeon X5670 Westmere processors (2.93 GHz)
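The peak figures for the single-architecture systems follow from cores x clock x double-precision FLOPs per core per cycle. The FLOPs-per-cycle values below are assumptions based on each microarchitecture (AVX-512 with dual FMA units for Cascade Lake; SSE-era, no FMA for Westmere), not figures from this page:

```python
def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = cores x clock (GHz) x DP FLOPs per core per cycle."""
    return cores * clock_ghz * flops_per_cycle / 1000.0  # gigaflops -> teraflops

# Endeavour: Cascade Lake, AVX-512 with two FMA units -> 32 DP FLOPs/cycle (assumed).
print(round(peak_tflops(1_792, 2.7, 32), 1))   # ~154.8, matching the quoted peak
# Merope: Westmere -> 4 DP FLOPs/cycle (assumed).
print(round(peak_tflops(21_504, 2.93, 4), 1))  # ~252.0, matching the quoted peak
```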

NCCS:

Discover
Aggregate System:

103 racks
129,056 cores

6.798 petaflops peak
600.576 terabytes of memory

Scalable Compute Units 10, 11, 12, and 13 = SGI Rackable System
81,954 cores
Intel Xeon Haswell (2.6 GHz)

Scalable Compute Unit 14 = Supermicro FatTwin Rack Scale System
20,800 cores
Intel Xeon Skylake (2.4 GHz)

Scalable Compute Unit 15 = Aspen Systems and Supermicro TwinPro Rack Scale System
25,600 cores
Intel Xeon Skylake (2.4 GHz)

Storage

NAS:

Online:
29 petabytes of RAID disk capacity (combined total for all systems)

Archive Capacity:
1,040 petabytes (1 exabyte)

NCCS:

Online:
75 petabytes of RAID
12 petabytes in Centralized Storage System

Archive Capacity:
150 petabytes
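The "(1 exabyte)" gloss on the NAS archive capacity uses decimal SI units; a one-liner makes the conversion explicit:

```python
# 1 PB = 10**15 bytes, 1 EB = 10**18 bytes (decimal SI units).
archive_eb = 1_040 * 10**15 / 10**18
print(f"{archive_eb:.2f} EB")  # 1.04 EB, i.e. roughly 1 exabyte
```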

Networking

NAS:
SGI NUMAlink
Voltaire InfiniBand
10-Gigabit Ethernet
1-Gigabit Ethernet

NCCS:
Mellanox Technologies InfiniBand
Intel Omni-Path
40-Gigabit Ethernet
10-Gigabit Ethernet
1-Gigabit Ethernet

Visualization and Analysis

NAS:

Hyperwall-2
128-screen tiled LCD wall arranged in 8x16 configuration
Measures 23 ft. wide by 10 ft. high
128 graphics processing units (Nvidia GeForce GTX 780 Ti)
646 teraflops, peak processing power
2,560 Intel Xeon E5-2680v2 Ivy Bridge cores (10-core processors)
57 teraflops, peak processing power
393 gigabytes of GDDR5 graphics memory
1.5 petabytes of storage

NCCS:

Data Visualization Theater
Hyperwall

15 Samsung UD55C 55-inch displays in 5x3 configuration
Measures 20 ft. wide by 6 ft. 10 in. high
DVI connection
1920 x 1080 screen resolution (1080p)

Hyperwall Cluster
16 Dell Precision WorkStation R5400s
2 dual-core Intel Xeon Harpertown processors per node
4 GB of memory per node
NVIDIA Quadro FX 1700 graphics
1 Gigabit Ethernet network connectivity
Control Station
One Dell FX100 Thin Client

ADAPT (Advanced Data Analytics Platform)

Managed Virtual Machine Environment

550+ hypervisors with Intel Xeon Westmere, Sandy Bridge, Ivy Bridge, and Broadwell processor cores

High-speed InfiniBand and 10 Gigabit Ethernet networks

Linux and Windows Virtual Machines

10+ petabytes of raw storage under Gluster file system management

ADAPT GPU Cluster (Aspen Systems)
880 Intel Xeon Gold 6248 Cascade Lake cores (2.5 GHz)
88 NVIDIA V100 GPUs with 32 gigabytes of VRAM each
16.896 terabytes of RAM
83.6 terabytes of local NVMe storage
Dual 100-gigabit (Gb) HDR100 InfiniBand
Dual 25-Gb Ethernet, bonded for high availability

ADAPT: Prism GPU Cluster
22 Supermicro Compute Nodes:
4x NVIDIA V100 GPUs with 32 GB of VRAM and NVLink
Dual Intel Xeon Cascade Lake Gold 6248 CPUs; 20 cores each (2.50 GHz)
768 gigabytes of RAM
Dual 25-Gb Ethernet network interfaces
Dual 100-Gb HDR100 InfiniBand high-speed network interfaces
3.8-terabyte RAID protected NVMe drives, mounted as /lscratch
One NVIDIA DGX Node—initially for pilot users:
8x NVIDIA A100 GPUs with 40 gigabytes of VRAM and NVLink
Dual AMD EPYC Rome 7742 CPUs; 64 cores each (2.25 GHz)
1 terabyte of RAM
Dual 25-Gb Ethernet network interfaces
Dual 100-Gb HDR100 InfiniBand high-speed network interfaces
14 terabytes of RAID protected NVMe drives, mounted as /lscratch
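The aggregate ADAPT GPU cluster figures listed earlier are consistent with 22 Supermicro nodes of the per-node spec above (assuming all 22 nodes are identical):

```python
nodes = 22  # Supermicro compute nodes in the Prism cluster
per_node = {
    "cpu_cores": 2 * 20,  # dual 20-core Xeon Gold 6248
    "gpus": 4,            # 4x V100 per node
    "ram_gb": 768,
    "nvme_tb": 3.8,       # local NVMe scratch
}
aggregate = {k: nodes * v for k, v in per_node.items()}
print(aggregate["cpu_cores"], aggregate["gpus"], aggregate["ram_gb"])  # 880 88 16896
print(round(aggregate["nvme_tb"], 1))  # 83.6
```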

DataPortal

HP ProLiant DL380p Gen8

Dual-socket, 10-core Intel Xeon 2.5 GHz Ivy Bridge processors

128 gigabytes of RAM

Mellanox ConnectX-3 MT27500 Interconnect

2 x 500GB SAS drives and 3 x 4TB SAS drives

Remote Visualization

HP DL380 G8

20 E5-2670 v2/2.50GHz cores

128 gigabytes of RAM

NVIDIA K5000 GPU card

 
