
Constraint Suggestions

Slurm jobs submitted to guest partitions, using #SBATCH --account=owner-guest and #SBATCH --partition=cluster-guest (substituting the proper cluster name), are eligible for preemption by jobs submitted by the group that owns the nodes. To help minimize the chance of preemption and avoid wasting time and other resources, nodes owned by groups with historically low utilization can be targeted directly in batch scripts and interactive submissions. Note that any constraint suggestions are based solely on historical usage information and are not indicative of the future behavior of any group.
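As a minimal sketch, a guest batch script for the notchpeak cluster might begin as follows (the constraint group1 is a placeholder; substitute a group suggested below, and adjust the resource requests to fit your job):

    #!/bin/bash
    #SBATCH --account=owner-guest
    #SBATCH --partition=notchpeak-guest
    #SBATCH --constraint=group1
    #SBATCH --nodes=1
    #SBATCH --ntasks=16
    #SBATCH --time=01:00:00

    # application commands follow; my_program is a hypothetical executable
    srun ./my_program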

Suggestions by cluster

How to use constraint suggestions

Information about owner utilization is presented as a heatmap generated from Slurm logs. Lighter colors mean fewer nodes with the given node feature were in use by owner groups (which is beneficial for guest jobs), while darker colors mean more of the nodes were being used by the owners. The size of the node pool must also be considered when selecting constraints; if an owner group has many nodes and utilizes only some of them, the remainder stay available for guest jobs. Selecting constraints based on both pool size and owner utilization can therefore further reduce the likelihood of preemption.

Multiple constraints can be specified at once with logical operators in Slurm directives. This allows submission to nodes owned by any of several owner groups at a time (which can help reduce queue times and increase the number of nodes available) as well as the specification of exact core counts and available memory. To select from multiple owner groups' nodes, use the "or" operator; a directive like #SBATCH -C "group1|group2|group3" will select nodes matching any of the constraints listed. By contrast, the "and" operator can be used for further specificity. To request nodes owned by a group and with a specific amount of memory, for example, a directive like #SBATCH -C "group1&m256" could be used. (This only works where multiple node features are associated with the nodes and the combination is valid. To view the available node features, the sinfo aliases si and si2, documented on the Slurm page, are helpful.)
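As an illustration of combining these pieces (notchpeak is used as the example cluster here, and the group names are the placeholders from above), an interactive guest session on a node from either of two groups could be requested with:

    salloc --account=owner-guest --partition=notchpeak-guest \
           -C "group1|group2" --ntasks=4 --time=02:00:00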

When submitting through Open OnDemand, enter only the constraint string into the Constraints text field, e.g. group1|group2|group3.

If the images are not updating, it may be because your browser is caching older versions. Try an uncached reload of the page.

CPU microarchitecture constraints

Due to the variety of CPU microarchitectures on some CHPC clusters, each node is tagged with a three-letter feature that identifies its CPU microarchitecture. Use these constraints to restrict runs to certain CPU types. The most common restriction is to use only Intel or only AMD nodes, since some codes do not work when CPUs from both manufacturers appear in a single job. For example, to use only AMD nodes, use #SBATCH -C "rom|mil|gen". (An Intel-only example for notchpeak is sketched after the feature list below.)

Notchpeak

  • skl Intel Skylake microarchitecture (Xeon 51xx or 61xx)
  • csl Intel Cascade Lake microarchitecture (Xeon 52xx or 62xx)
  • icl Intel Ice Lake microarchitecture (Xeon 53xx or 63xx)
  • srp Intel Sapphire Rapids microarchitecture (Xeon 54xx or 64xx)
  • npl AMD Naples microarchitecture (Zen 1, EPYC 7xx1)
  • rom AMD Rome microarchitecture (Zen 2, EPYC 7xx2)
  • mil AMD Milan microarchitecture (Zen 3, EPYC 7xx3)
  • gen AMD Genoa microarchitecture (Zen 4, EPYC 9xx4)
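
Mirroring the AMD example above, a sketch of an Intel-only selection for a notchpeak guest job (account and partition as in the earlier examples) would be:

    #SBATCH --account=owner-guest
    #SBATCH --partition=notchpeak-guest
    #SBATCH -C "skl|csl|icl|srp"

To check which feature tags the nodes in a partition carry, a standard sinfo format string such as sinfo -p notchpeak-guest -o "%30N %20f" lists node names alongside their features.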
Last Updated: 3/5/24