hpc:hpc_clusters, last modified 2026/02/20 14:30 by Yann Sagon (previous revision 2026/02/10 17:41 by Adrien Albert).
  
  * **Shared Integration**: The compute node is added to the corresponding shared partition. Other users may utilize it when the owning group is not using it. For details, refer to the [[hpc/slurm#partitions|partitions]] section.
  * **Usage Limit**: Each research group may consume up to **60% of the theoretical usage credit associated with the compute node**. This policy ensures fair access to shared cluster resources. See the [[hpc:hpc_clusters#usage_limits|Usage limits]] policy for more details.
  * **Cost**: In addition to the base cost of the compute node, a **15% surcharge** is applied to cover operational expenses such as cables, racks, switches, and storage (not yet in effect).
  * **Ownership Period**: The compute node remains the property of the research group for **5 years**. After this period, the node may remain in production but will only be accessible via public and shared partitions.
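The surcharge and usage-cap rules above reduce to simple arithmetic. A minimal sketch, where the node price and core count are illustrative assumptions, not real figures:

```shell
#!/usr/bin/env bash
# Illustrative node price (assumption, not a real quote) plus the 15% surcharge.
node_cost=20000    # CHF, assumed for the example
total_cost=$(awk -v c="$node_cost" 'BEGIN { printf "%.0f", c * 1.15 }')
echo "total with 15% surcharge: ${total_cost} CHF"

# Theoretical yearly credit of an assumed 128-core node, of which the owning
# group may consume at most 60% per the usage-limit rule above.
credit=$((128 * 365 * 24))    # core-hours per year
cap=$(awk -v cr="$credit" 'BEGIN { printf "%.0f", cr * 0.60 }')
echo "owning-group cap (60%): ${cap} core-hours/year"
```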
  
^ Model ^ Generation ^ Architecture ^ Cores per Socket ^ Freq ^
| [[https://www.intel.fr/content/www/fr/fr/products/sku/81706/intel-xeon-processor-e52660-v3-25m-cache-2-60-ghz/specifications.html | E5-2660V0]] | V3 | Sandy Bridge EP | 8 |  |
| [[https://www.intel.com/content/www/us/en/products/sku/81900/intel-xeon-processor-e52643-v3-20m-cache-3-40-ghz/specifications.html | E5-2643V3]] | V5 | Haswell-EP | 6 | 3.4GHz |
| [[https://www.intel.fr/content/www/fr/fr/products/sku/92981/intel-xeon-processor-e52630-v4-25m-cache-2-20-ghz/specifications.html | E5-2630V4]] | V6 | Broadwell-EP | 10 | 2.2GHz |
| [[https://www.intel.com/content/www/us/en/products/sku/92983/intel-xeon-processor-e52637-v4-15m-cache-3-50-ghz/specifications.html | E5-2637V4]] | V6 | Broadwell-EP | 4 | 3.5GHz |
| [[https://www.intel.com/content/www/us/en/products/sku/92989/intel-xeon-processor-e52643-v4-20m-cache-3-40-ghz/specifications.html | E5-2643V4]] | V6 | Broadwell-EP | 6 | 3.4GHz |
| [[https://www.intel.com/content/www/us/en/products/sku/91754/intel-xeon-processor-e52680-v4-35m-cache-2-40-ghz/specifications.html | E5-2680V4]] | V6 | Broadwell-EP | 14 | 2.4GHz |
| [[https://www.amd.com/en/support/downloads/drivers.html/processors/epyc/epyc-7001-series/amd-epyc-7601.html#amd_support_product_spec | EPYC-7601]] | V7 | Naples | 32 | 2.2GHz |
| [[https://www.amd.com/en/products/processors/server/epyc/7002-series.html | EPYC-7302P]] | V8 | Rome | 16 | 3.0GHz |
| EPYC-7742 | V8 | Rome | 64 | 2.25GHz |
| [[https://ark.intel.com/content/www/us/en/ark/products/193954/intel-xeon-gold-6234-processor-24-75m-cache-3-30-ghz.html | GOLD-6234]] | V9 | Cascade Lake | 8 | 3.30GHz |
| [[https://ark.intel.com/content/www/fr/fr/ark/products/192443/intel-xeon-gold-6240-processor-24-75m-cache-2-60-ghz.html | GOLD-6240]] | V9 | Cascade Lake | 18 | 2.60GHz |
| [[https://ark.intel.com/content/www/fr/fr/ark/products/193390/intel-xeon-silver-4208-processor-11m-cache-2-10-ghz.html | SILVER-4208]] | V9 | Cascade Lake | 8 | 2.10GHz |
| [[https://www.intel.com/content/www/us/en/products/sku/197098/intel-xeon-silver-4210r-processor-13-75m-cache-2-40-ghz/specifications.html | SILVER-4210R]] | V9 | Cascade Lake | 10 | 2.40GHz |
| [[https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-72f3.html | EPYC-72F3]] | V10 | Milan | 8 | 3.7GHz |
| [[https://www.amd.com/fr/products/processors/server/epyc/7003-series/amd-epyc-7763.html | EPYC-7763]] | V10 | Milan | 64 | 2.45GHz |
| [[https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9554.html | EPYC-9554]] | V11 | Genoa | 64 | 3.10GHz |
| [[https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9654.html | EPYC-9654]] | V12 | Genoa | 96 | 3.70GHz |
| [[https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9754.html | EPYC-9754]] | V13 | Genoa | 128 | 3.70GHz |
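To cross-check a row of this table against a node you are logged into (for example inside an interactive job), standard Linux tools are enough; this sketch only reads local CPU information:

```shell
#!/usr/bin/env bash
# Show the CPU model, socket and core counts of the current node, to compare
# with the Model and "Cores per Socket" columns above.
if command -v lscpu >/dev/null 2>&1; then
    lscpu | grep -E '^(Model name|Socket\(s\)|Core\(s\) per socket|CPU\(s\))'
else
    grep -m1 'model name' /proc/cpuinfo    # fallback when lscpu is unavailable
fi
nproc    # total usable cores on this node
```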
  
  
Several GPU models are available across the three clusters. The table below summarizes the available resources.
  
^ Model ^ Memory ^ GRES ^ Constraint gpu arch ^ Compute Capability ^ CUDA min → max ^ Feature ^ Billing Weight ^
| [[https://www.nvidia.com/fr-be/titan/titan-rtx/ | Titan RTX]] | 24GB | nvidia_titan_rtx | COMPUTE_TYPE_TURING | COMPUTE_CAPABILITY_7_5 | 10.0 → 13.0 | COMPUTE_MODEL_NVIDIA_TITAN_RTX | 1 |
| Titan X | 12GB | nvidia_titan_x | COMPUTE_TYPE_PASCAL | COMPUTE_CAPABILITY_6_1 | 8.0 → 12.9 | COMPUTE_MODEL_NVIDIA_TITAN_X | 1 |
| [[https://www.nvidia.com/en-in/data-center/tesla-p100/ | P100]] | 12GB | tesla_p100-pcie-12gb | COMPUTE_TYPE_PASCAL | COMPUTE_CAPABILITY_6_0 | 8.0 → 12.9 | COMPUTE_MODEL_TESLA_P100_PCIE_12GB | 1 |
| [[https://www.nvidia.com/en-us/geforce/20-series/ | RTX 2080 Ti]] | 11GB | nvidia_geforce_rtx_2080_ti | COMPUTE_TYPE_TURING | COMPUTE_CAPABILITY_7_5 | 10.0 → 13.0 | COMPUTE_MODEL_NVIDIA_GEFORCE_RTX_2080_TI | 2 |
| [[https://www.nvidia.com/fr-fr/geforce/graphics-cards/30-series/rtx-3080-3080ti/ | RTX 3080]] | 10GB | nvidia_geforce_rtx_3080 | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_7_0 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_GEFORCE_RTX_3080 | 3 |
| [[https://images.nvidia.com/content/technologies/volta/pdf/volta-v100-datasheet-update-us-1165301-r5.pdf | V100]] | 32GB | tesla_v100-pcie-32gb | COMPUTE_TYPE_VOLTA | COMPUTE_CAPABILITY_7_0 | 9.0 → 12.9 | COMPUTE_MODEL_TESLA_V100_PCIE_32GB | 3 |
| [[https://www.nvidia.com/en-us/data-center/a100/ | A100 40GB]] | 40GB | nvidia_a100-pcie-40gb | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_8_0 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_A100_PCIE_40GB | 5 |
| [[https://www.nvidia.com/fr-fr/geforce/graphics-cards/30-series/rtx-3090-3090ti/ | RTX 3090]] | 24GB | nvidia_geforce_rtx_3090 | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_8_6 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_GEFORCE_RTX_3090 | 5 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-a5000/ | RTX A5000]] | 25GB | nvidia_rtx_a5000 | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_8_6 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_RTX_A5000 | 5 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-a5500/ | RTX A5500]] | 24GB | nvidia_rtx_a5500 | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_8_6 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_RTX_A5500 | 5 |
| [[https://www.nvidia.com/en-us/data-center/a100/ | A100 80GB]] | 80GB | nvidia_a100_80gb_pcie | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_8_0 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_A100_80GB_PCIE | 8 |
| [[https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4090/ | RTX 4090]] | 24GB | nvidia_geforce_rtx_4090 | COMPUTE_TYPE_ADA | COMPUTE_CAPABILITY_8_9 | 11.8 → 13.0 | COMPUTE_MODEL_NVIDIA_GEFORCE_RTX_4090 | 8 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-a6000/ | RTX A6000]] | 48GB | nvidia_rtx_a6000 | COMPUTE_TYPE_AMPERE | COMPUTE_CAPABILITY_8_6 | 11.0 → 13.0 | COMPUTE_MODEL_NVIDIA_RTX_A6000 | 8 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-5000/ | RTX 5000]] | 32GB | nvidia_rtx_5000 | COMPUTE_TYPE_ADA | COMPUTE_CAPABILITY_8_9 | 11.8 → 13.0 | COMPUTE_MODEL_NVIDIA_RTX_5000 | 9 |
| [[https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/ | RTX 5090]] | 32GB | nvidia_geforce_rtx_5090 | COMPUTE_TYPE_BLACKWELL | COMPUTE_CAPABILITY_12_0 | 12.8 → 13.0 | COMPUTE_MODEL_NVIDIA_GEFORCE_RTX_5090 | 10 |
| [[https://www.nvidia.com/en-us/data-center/h100/ | H100]] | 94GB | nvidia_h100_nvl | COMPUTE_TYPE_HOPPER | COMPUTE_CAPABILITY_9_0 | 11.8 → 13.0 | COMPUTE_MODEL_NVIDIA_H100_NVL | 14 |
| [[https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/ | RTX Pro 6000]] | 96GB | nvidia_rtx_pro_6000_blackwell | COMPUTE_TYPE_BLACKWELL | COMPUTE_CAPABILITY_9_0 | 12.8 → 13.0 | COMPUTE_MODEL_NVIDIA_RTX_PRO_6000_BLACKWELL | 16 |
| [[https://www.nvidia.com/en-us/data-center/h200/ | H200]] | 141GB | nvidia_h200_nvl | COMPUTE_TYPE_HOPPER | COMPUTE_CAPABILITY_9_0 | 11.8 → 13.0 | COMPUTE_MODEL_NVIDIA_H200_NVL | 17 |
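The GRES and "Constraint gpu arch" columns are the values you pass to Slurm when requesting a specific model. A minimal sketch that only builds and prints the command; the partition name ''shared-gpu'' is an assumption (check ''sinfo'' for the real partition names):

```shell
#!/usr/bin/env bash
# Build a GPU request from the table above: the GRES column names the card,
# the constraint pins the architecture. "shared-gpu" is an assumed partition
# name, not taken from this page.
gres="gpu:nvidia_geforce_rtx_3090:1"     # GRES column
arch="COMPUTE_TYPE_AMPERE"               # "Constraint gpu arch" column
cmd=(srun --partition=shared-gpu --gres="$gres" --constraint="$arch" nvidia-smi)

# Print the command here; on a login node you would execute "${cmd[@]}" instead.
printf '%s ' "${cmd[@]}"
echo
```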
  
  
  
^ Model ^ Generation ^ Architecture ^ Freq ^ Nb core ^ Memory ^ Nodeset ^
| EPYC-7742 | V8 | Rome | 2.25GHz | 128 | 251GB | cpu[049-052] |
| EPYC-7742 | V8 | Rome | 2.25GHz | 128 | 512GB | cpu[001-043] |
| [[https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-72f3.html | EPYC-72F3]] | V10 | Milan | 3.7GHz | 128 | 1024GB | cpu[044-045] |
| [[https://www.amd.com/fr/products/processors/server/epyc/7003-series/amd-epyc-7763.html | EPYC-7763]] | V10 | Milan | 2.45GHz | 128 | 512GB | cpu[046-048] |
  
=== GPUs on Bamboo ===
  
^ Model ^ Memory per GPU ^ Nodeset ^
| [[https://www.nvidia.com/en-us/data-center/a100/ | A100 80GB]] | 80GB | gpu003 |
| [[https://www.nvidia.com/fr-fr/geforce/graphics-cards/30-series/rtx-3090-3090ti/ | RTX 3090]] | 24GB | gpu[001-002] |
| [[https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/ | RTX 5090]] | 32GB | gpu[009-010] |
| [[https://www.nvidia.com/en-us/data-center/h100/ | H100]] | 94GB | gpu004 |
| [[https://www.nvidia.com/en-us/data-center/h200/ | H200]] | 141GB | gpu[005-006] |
| [[https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/ | RTX Pro 6000]] | 96GB | gpu[007-008,011] |
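Nodeset values such as ''gpu[005-006]'' are Slurm bracket ranges. On the cluster, ''scontrol show hostnames 'gpu[005-006]' '' expands them; as an offline illustration, here is a minimal bash expansion covering only the single-range and plain-name cases (comma lists like ''gpu[007-008,011]'' are not handled):

```shell
#!/usr/bin/env bash
# Expand a single-range Slurm nodeset such as "gpu[005-006]" into host names.
# Minimal sketch for this page's simple cases; on a real cluster, prefer:
#   scontrol show hostnames 'gpu[005-006]'
expand_nodeset() {
    local spec=$1
    if [[ $spec != *\[*-*\]* ]]; then   # plain name such as "gpu003"
        echo "$spec"
        return
    fi
    local prefix=${spec%%\[*}
    local range=${spec#*\[}
    range=${range%\]}
    local start=${range%-*} end=${range#*-} i
    for ((i = 10#$start; i <= 10#$end; i++)); do
        printf '%s%0*d\n' "$prefix" "${#start}" "$i"   # keep zero padding
    done
}

expand_nodeset 'gpu[005-006]'   # prints gpu005 and gpu006, one per line
expand_nodeset 'gpu004'
```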
  
==== Baobab ====
Since our clusters are regularly expanded, the nodes are not all from the same generation. You can see the details in the following table.
  
=== CPUs on Baobab ===
^ Model ^ Generation ^ Architecture ^ Freq ^ Nb core ^ Memory ^ Nodeset ^
| [[https://www.intel.fr/content/www/fr/fr/products/sku/81706/intel-xeon-processor-e52660-v3-25m-cache-2-60-ghz/specifications.html | E5-2660V0]] | V3 | Sandy Bridge EP |  | 16 | 62GB | cpu001 |
| [[https://www.intel.fr/content/www/fr/fr/products/sku/92981/intel-xeon-processor-e52630-v4-25m-cache-2-20-ghz/specifications.html | E5-2630V4]] | V6 | Broadwell-EP | 2.2GHz | 20 | 86GB | cpu199 |
| [[https://www.intel.fr/content/www/fr/fr/products/sku/92981/intel-xeon-processor-e52630-v4-25m-cache-2-20-ghz/specifications.html | E5-2630V4]] | V6 | Broadwell-EP | 2.2GHz | 20 | 94GB | cpu[173-185,187-198,200-201,205-213,220-229,237-244,247-264] |
| [[https://www.intel.com/content/www/us/en/products/sku/92983/intel-xeon-processor-e52637-v4-15m-cache-3-50-ghz/specifications.html | E5-2637V4]] | V6 | Broadwell-EP | 3.5GHz | 8 | 503GB | cpu[218-219] |
| [[https://www.intel.com/content/www/us/en/products/sku/92989/intel-xeon-processor-e52643-v4-20m-cache-3-40-ghz/specifications.html | E5-2643V4]] | V6 | Broadwell-EP | 3.4GHz | 12 | 62GB | cpu[202,216-217] |
| [[https://www.intel.com/content/www/us/en/products/sku/91754/intel-xeon-processor-e52680-v4-35m-cache-2-40-ghz/specifications.html | E5-2680V4]] | V6 | Broadwell-EP | 2.4GHz | 28 | 503GB | cpu203 |
| EPYC-7742 | V8 | Rome | 2.25GHz | 128 | 503GB | cpu[273-277,285-307,314-335] |
| EPYC-7742 | V8 | Rome | 2.25GHz | 128 | 1007GB | cpu[312-313] |
| [[https://ark.intel.com/content/www/fr/fr/ark/products/192443/intel-xeon-gold-6240-processor-24-75m-cache-2-60-ghz.html | GOLD-6240]] | V9 | Cascade Lake | 2.60GHz | 36 | 187GB | cpu[084-090,265-272,278-284,308-311,336-349] |
| [[https://ark.intel.com/content/www/us/en/ark/products/192442/intel-xeon-gold-6244-processor-24-75m-cache-3-60-ghz.html | GOLD-6244]] | V9 | Cascade Lake | 3.60GHz | 16 | 754GB | cpu351 |
| [[https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9654.html | EPYC-9654]] | V12 | Genoa | 3.70GHz | 192 | 768GB | cpu[350,352] |
  
  
In the following table you can see which type of GPU is available on Baobab.
  
^ Model ^ Memory per GPU ^ Nodeset ^
| [[https://www.nvidia.com/en-us/data-center/a100/ | A100 40GB]] | 40GB | gpu[020,022,027-028,030-031] |
| [[https://www.nvidia.com/en-us/data-center/a100/ | A100 80GB]] | 80GB | gpu[027,029,032-033,045] |
| [[https://www.nvidia.com/en-us/geforce/20-series/ | RTX 2080 Ti]] | 11GB | gpu[011,013-016,018-019] |
| [[https://www.nvidia.com/fr-fr/geforce/graphics-cards/30-series/rtx-3080-3080ti/ | RTX 3080]] | 10GB | gpu[023-024,036-043] |
| [[https://www.nvidia.com/fr-fr/geforce/graphics-cards/30-series/rtx-3090-3090ti/ | RTX 3090]] | 24GB | gpu[017,021,025-026,034-035] |
| [[https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4090/ | RTX 4090]] | 24GB | gpu049 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-5000/ | RTX 5000]] | 32GB | gpu050 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-a5000/ | RTX A5000]] | 25GB | gpu[044,047] |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-a5500/ | RTX A5500]] | 24GB | gpu046 |
| [[https://www.nvidia.com/en-us/products/workstations/rtx-a6000/ | RTX A6000]] | 48GB | gpu048 |
| Titan X | 12GB | gpu[002,008-010] |
| [[https://www.nvidia.com/en-in/data-center/tesla-p100/ | P100]] | 12GB | gpu[004-007] |
  
    
==== Yggdrasil ====
  
=== CPUs on Yggdrasil ===
  
Since our clusters are regularly expanded, the nodes are not all from the same generation. You can see the details in the following table.
  
^ Model ^ Generation ^ Architecture ^ Freq ^ Nb core ^ Memory ^ Nodeset ^
| EPYC-7742 | V8 | Rome | 2.25GHz | 128 | 503GB | cpu[123-124,135-150] |
| EPYC-7742 | V8 | Rome | 2.25GHz | 128 | 1007GB | cpu[125-134] |
| [[https://ark.intel.com/content/www/fr/fr/ark/products/192443/intel-xeon-gold-6240-processor-24-75m-cache-2-60-ghz.html | GOLD-6240]] | V9 | Cascade Lake | 2.60GHz | 36 | 184GB | cpu001 |
| [[https://ark.intel.com/content/www/fr/fr/ark/products/192443/intel-xeon-gold-6240-processor-24-75m-cache-2-60-ghz.html | GOLD-6240]] | V9 | Cascade Lake | 2.60GHz | 36 | 187GB | cpu[002-057,059-082,091-097] |
| [[https://ark.intel.com/content/www/fr/fr/ark/products/192443/intel-xeon-gold-6240-processor-24-75m-cache-2-60-ghz.html | GOLD-6240]] | V9 | Cascade Lake | 2.60GHz | 36 | 1510GB | cpu[120-122] |
| [[https://ark.intel.com/content/www/us/en/ark/products/192442/intel-xeon-gold-6244-processor-24-75m-cache-3-60-ghz.html | GOLD-6244]] | V9 | Cascade Lake | 3.60GHz | 16 | 754GB | cpu[113-115] |
| [[https://www.amd.com/fr/products/processors/server/epyc/7003-series/amd-epyc-7763.html | EPYC-7763]] | V10 | Milan | 2.45GHz | 128 | 503GB | cpu[151-158] |
| [[https://www.amd.com/en/products/processors/server/epyc/4th-generation-9004-and-8004-series/amd-epyc-9654.html | EPYC-9654]] | V12 | Genoa | 3.70GHz | 192 | 773GB | cpu[159-164] |
  
  
In the following table you can see which type of GPU is available on Yggdrasil.
  
^ Model ^ Memory per GPU ^ Nodeset ^
| [[https://www.nvidia.com/fr-be/titan/titan-rtx/ | Titan RTX]] | 24GB | gpu[001,003-007],gpustack |
| [[https://images.nvidia.com/content/technologies/volta/pdf/volta-v100-datasheet-update-us-1165301-r5.pdf | V100]] | 32GB | gpu008 |
  
See https://developer.nvidia.com/cuda-gpus#compute for the compute capability of each GPU model.