The following compares the systems and related resources at the NASA Advanced Supercomputing (NAS) Facility and the NASA Center for Climate Simulation (NCCS).
**Systems**

*NAS:*

- Aitken – SGI/HPE modular system
  - 4 E-Cells (1,152 nodes) plus 16 Apollo 9000 racks (2,048 nodes)
  - Intel Xeon Cascade Lake processors (2.5 GHz) and AMD Rome processors (2.25 GHz)
- Electra – SGI modular system
  - 24 racks (3,456 nodes), 124,416 cores, 589 terabytes of memory
  - Intel Xeon Gold 6148 Skylake processors (2.4 GHz) and Intel Xeon E5-2680v4 Broadwell processors (2.4 GHz)
- Pleiades – SGI ICE cluster
  - 158 racks (11,207 nodes), 241,324 cores, 935 terabytes of memory
  - GPU nodes: 3 racks (83 nodes total) enhanced with NVIDIA graphics processing units (GPUs); 1,024 Intel Xeon Sandy Bridge cores and 684 Intel Xeon Skylake cores
- Endeavour – 2-node HPE Superdome Flex system
  - 1,792 cores, 12 terabytes of memory
  - Intel Xeon Platinum 8280 Cascade Lake processors (2.7 GHz)
- Cabeus – supercomputer with NVIDIA A100 GPU nodes
  - 22 racks, 187 nodes
  - 10,956 CPU cores + 2,428,928 double-precision GPU cores
  - Theoretical double-precision peak performance: 7.56 petaflops (0.57 petaflops from CPUs + 6.99 petaflops from GPUs)
  - Total memory: 75 terabytes (26 terabytes from CPU host memory + 49 terabytes from GPU memory)

*NCCS:*

- Discover
  - 2,552 nodes, 169,536 cores, 8.96 petaflops peak
  - Scalable Compute Unit 14: Supermicro FatTwin Rack Scale System
  - Scalable Compute Unit 15: Aspen Systems and Supermicro TwinPro Rack Scale System
  - Scalable Compute Unit 16 CPU-only nodes: Aspen Systems and Supermicro TwinPro nodes; 576 total AMD EPYC Rome processor cores (2.8 GHz); 6,912 CUDA cores
  - Scalable Compute Unit 17 CPU-only nodes: Aspen Systems and Supermicro TwinPro nodes

**Storage**

*NAS:*

- Online:
- Archive capacity:

*NCCS:*

- Online:
- Archive capacity:
- Centralized Storage System (CSS):

**Networking**

*NAS:*

- SGI NUMAlink
- Voltaire InfiniBand
- 10-Gigabit Ethernet
- 1-Gigabit Ethernet

*NCCS:*

- Mellanox Technologies InfiniBand
- Intel Omni-Path
- 40-Gigabit Ethernet
- 10-Gigabit Ethernet
- 1-Gigabit Ethernet

**Visualization and Analysis**

*NAS:*

- Hyperwall-2

*NCCS:*

- Data Visualization Theater
  - 15 Samsung UD55C 55-inch displays in a 5x3 configuration
  - 16 Dell Precision WorkStation R5400s: 2 dual-core Intel Xeon Harpertown processors per node, 4 GB of memory per node, NVIDIA Quadro FX 1700 graphics, 1-Gigabit Ethernet network connectivity
  - Control station: one Dell FX100 Thin Client
- Explore/ADAPT Science Cloud – managed virtual machine environment
  - 550+ hypervisors: Intel Xeon Westmere, Ivy Bridge, Sandy Bridge, and Broadwell processor cores, and AMD Rome and Milan processor cores
  - High-speed InfiniBand and 10-Gigabit Ethernet networks
  - Linux and Windows virtual machines
  - 7 petabytes of Panasas storage
- Explore/ADAPT: Prism GPU Cluster
- DataPortal
  - HP ProLiant DL380p Gen8: dual-socket, 10-core Intel Xeon 2.5 GHz Ivy Bridge processors, 128 gigabytes of RAM, Mellanox ConnectX-3 MT27500 interconnect, 2 x 500 GB SAS drives and 3 x 4 TB SAS drives
- JupyterHub – available on ADAPT/Explore and Prism, coming soon to Discover
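Peak figures like those quoted above are theoretical maxima: cores x clock rate x floating-point operations issued per cycle, with CPU and GPU contributions summed per system. A minimal sketch of that arithmetic, where the `flops_per_cycle` value is an illustrative assumption (not a specification of any system listed here):

```python
def peak_pflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak performance in petaflops:
    cores x clock (Hz) x FLOPs issued per core per cycle."""
    return cores * clock_ghz * 1e9 * flops_per_cycle / 1e15

# The CPU/GPU split quoted for Cabeus sums to the stated system peak:
cpu_pf, gpu_pf = 0.57, 6.99
assert round(cpu_pf + gpu_pf, 2) == 7.56

# Illustrative only: flops_per_cycle=32 assumes AVX-512 with two FMA units,
# which is a hypothetical choice, not a claim about the systems above.
print(round(peak_pflops(10_956, 2.25, 32), 2))
```

The same formula, applied per partition and summed, is how vendors typically arrive at headline numbers such as Discover's 8.96 petaflops; sustained application performance is always lower.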