Facilities

ACME

ACME is a dedicated local cluster for R&D purposes, composed of

  • 14 PowerEdge C6420 nodes, each composed of:
    • 2 Intel Xeon Gold 6138 processors (20C/40T) @ 2.0 GHz
    • 192 GB RDIMM @ 2667 MT/s
    • IB FDR port
    • 2 TB SATA disk @ 7.2 krpm and a 240 GB SATA SSD
  • 2 bullx R424-E4 chassis with 4 computing nodes each, where each node has:
    • 2 Intel Xeon E5-2640 v3 processors (8C) @ 2.6 GHz
    • 64 GB DDR4 memory @ 2133 MHz
    • IB FDR port
    • 1 TB SAS2 disk @ 7.2 krpm and a 240 GB SSD
  • 2 bullx R421-E4 chassis, each with:
    • 2 Intel Xeon E5-2640 v3 processors (8C) @ 2.6 GHz
    • 64 GB DDR4 memory @ 2133 MHz
    • 1 TB SATA III disk @ 7.2 krpm and a 240 GB SSD
    • 2 Tesla P100 GPUs
  • 1 storage server composed of 48 disks (168 TB raw storage)
  • Rpeak = 59.40 TFlops
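
For reference, these Rpeak figures follow the standard theoretical-peak formula, nodes x sockets x cores per socket x clock x FLOPs per cycle, plus the GPU peak where applicable. Below is a minimal sketch in Python, assuming 32 double-precision FLOPs/cycle for the AVX-512-capable Gold 6138 (our assumption, not stated above):

    # Standard theoretical peak: Rpeak = nodes * sockets * cores * GHz * FLOPs/cycle.
    # 32 DP FLOPs/cycle assumes AVX-512 with two FMA units (an assumption here).
    def cpu_rpeak_tflops(nodes, sockets, cores, ghz, flops_per_cycle):
        return nodes * sockets * cores * ghz * flops_per_cycle / 1000.0

    # ACME's PowerEdge C6420 partition: 14 nodes, 2 x Gold 6138 (20C @ 2.0 GHz).
    print(cpu_rpeak_tflops(14, 2, 20, 2.0, 32))  # ~35.8 TFlops

The bullx CPU nodes (AVX2, 16 FLOPs/cycle) and the Tesla P100 GPUs approximately account for the remainder of the quoted 59.40 TFlops.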

"New"

The "new" HPC in-production cluster at CIEMAT is composed of

  • 168 Intel Xeon Gold 6148 processors (3,360 cores)
  • 51,200 NVIDIA CUDA cores
  • 3,200 Tensor cores
  • ~17.1 TB of DDR4 RAM
  • 320 GB of GPU RAM
  • Interconnected by InfiniBand
  • Fully devoted to the execution of jobs
  • Lustre File System (1.68 PB)
  • Rpeak ~ 361.2 TFlops

GPGPU cluster

The GPGPU cluster located at CETA-CIEMAT (Trujillo) is composed of
  • Nearly 100,000 GPU cores:
    • 22 nodes with TESLA S1070 (2 C1060 per node)
    • 16 nodes with TESLA S2050 (2 C2050 per node)
    • 16 nodes with TESLA S2070 (2 M2075 per node)
    • 8 nodes with TESLA K80 (1 K80 per node)
    • 1 node with TESLA K20 (1 K20)
  • Processors
    • R422E2/R423E2: 2 x Quad-Core Intel® Xeon® E5520 @ 2.27 GHz
    • R424E2: 2 x 6-Core Intel® Xeon® E5649 @ 2.53 GHz
    • R425E2: 2 x Quad-Core Intel® Xeon® X5570 @ 2.93 GHz
    • R421E3: 2 x 12-Core Intel® Xeon® E5-2680 v3 @ 2.5 GHz
  • Memory:
    • R422E2/R423E2/R424E2: 24 GBytes DDR3
    • R425E2: 96 GBytes DDR3
    • R421E3: 64 GBytes DDR4
  • Internal 500 GB SATA storage and shared Lustre storage
  • Rpeak greater than 200 TFlops (32-bit) and 100 TFlops (64-bit)

EULER

The former Euler in-production cluster at CIEMAT was composed of:

  • 144 blade nodes with dual quad-core Xeon processors @ 3.0 GHz (2 GB per core)
  • 96 blade nodes with dual quad-core Xeon processors @ 2.96 GHz (2 GB per core)
  • 3.8 TB of RAM across the 1,920 cores
  • Interconnected by InfiniBand
  • Fully devoted to the execution of jobs
  • Lustre File System (120 TB)
  • Rpeak = 23 TFlops; Rmax = 19.55 TFlops
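
As a sanity check, the same theoretical-peak formula reproduces the quoted figure, assuming 4 double-precision FLOPs/cycle, typical of SSE-generation Xeons (our assumption):

    # Rpeak in GFLOPS: nodes * sockets * cores * GHz * FLOPs/cycle.
    def rpeak_gflops(nodes, sockets, cores, ghz, flops_per_cycle=4):
        return nodes * sockets * cores * ghz * flops_per_cycle

    total = rpeak_gflops(144, 2, 4, 3.00) + rpeak_gflops(96, 2, 4, 2.96)
    print(total / 1000.0)  # ~22.9 TFlops vs. the quoted Rpeak = 23 TFlops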


Freely accessible database of execution logs

At this link you can find the execution logs of CIEMAT's former supercomputer (Euler) in the Parallel Workloads Archive format. The trace covers 9 years of operation (2008-2018).
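
The traces use the archive's Standard Workload Format (SWF): header comment lines prefixed with ';', followed by one job per line with 18 whitespace-separated fields. A minimal reader sketch in Python (the file name below is a placeholder, not the actual trace name):

    # Minimal reader for Parallel Workloads Archive traces in the Standard
    # Workload Format (SWF): ';' header comments, then 18 fields per job line.
    SWF_FIELDS = (
        "job_id", "submit_time", "wait_time", "run_time", "alloc_procs",
        "avg_cpu_time", "used_memory", "req_procs", "req_time", "req_memory",
        "status", "user_id", "group_id", "executable", "queue",
        "partition", "preceding_job", "think_time",
    )

    def read_swf(path):
        """Yield one dict per job record, skipping comments and blank lines."""
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith(";"):
                    continue
                yield dict(zip(SWF_FIELDS, (float(tok) for tok in line.split())))

    # Example: mean job wait time in seconds (file name is hypothetical).
    jobs = list(read_swf("CIEMAT-Euler.swf"))
    print(sum(j["wait_time"] for j in jobs) / len(jobs))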

Please cite the following article whenever you download and use it:

  • <TO_ADD_REFERENCE>

Heterogeneous Grid and Cloud clusters of ~400 cores each

Sci-Track has two heterogeneous clusters, in Madrid and Trujillo, with around 800 CPU cores in total dedicated to High Throughput Computing, mainly cloud-based.