Update the configuration of an existing GPU cluster.

Example request:

from together import Together

client = Together()

# Update an existing cluster's scheduler type and GPU count.
cluster = client.beta.clusters.update("cluster_id", cluster_type="KUBERNETES", num_gpus=24)
print(cluster)

Example response:

{
  "cluster_id": "<string>",
  "cluster_type": "KUBERNETES",
  "region": "<string>",
  "gpu_type": "H100_SXM",
  "cluster_name": "<string>",
  "duration_hours": 123,
  "driver_version": "CUDA_12_5_555",
  "volumes": [
    {
      "volume_id": "<string>",
      "volume_name": "<string>",
      "size_tib": 123,
      "status": "<string>"
    }
  ],
  "status": "WaitingForControlPlaneNodes",
  "control_plane_nodes": [
    {
      "node_id": "<string>",
      "node_name": "<string>",
      "status": "<string>",
      "host_name": "<string>",
      "num_cpu_cores": 123,
      "memory_gib": 123,
      "network": "<string>"
    }
  ],
  "gpu_worker_nodes": [
    {
      "node_id": "<string>",
      "node_name": "<string>",
      "status": "<string>",
      "host_name": "<string>",
      "num_cpu_cores": 123,
      "num_gpus": 123,
      "memory_gib": 123,
      "networks": [
        "<string>"
      ]
    }
  ],
  "kube_config": "<string>",
  "num_gpus": 123
}
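As a quick illustration of the response shape above, the helper below totals the GPUs reported across gpu_worker_nodes and summarizes the top-level status. It is a hedged sketch: summarize_cluster is an illustrative name, and the payload is treated as a plain Python dict, which may differ from the object type the SDK actually returns.

def summarize_cluster(cluster: dict) -> str:
    # Assumes a dict with the shape of the example response above.
    total_gpus = sum(node.get("num_gpus", 0) for node in cluster.get("gpu_worker_nodes", []))
    lines = [
        f"cluster {cluster.get('cluster_name')} ({cluster.get('cluster_id')})",
        f"  status: {cluster.get('status')}",
        f"  gpu_type: {cluster.get('gpu_type')}, worker GPUs: {total_gpus}",
        f"  control plane nodes: {len(cluster.get('control_plane_nodes', []))}",
    ]
    return "\n".join(lines)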
Authorization: Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
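With the Python SDK, that token is typically supplied via the TOGETHER_API_KEY environment variable or an explicit api_key argument; treat both names as assumptions based on common SDK conventions rather than something this section documents. A minimal sketch:

import os

from together import Together

# Assumption: the SDK accepts api_key directly (and/or reads TOGETHER_API_KEY
# from the environment); the token is sent as "Authorization: Bearer <token>".
client = Together(api_key=os.environ["TOGETHER_API_KEY"])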
A successful request returns 200 OK with the updated cluster object. The enum-valued fields accept the following values:

cluster_type: KUBERNETES, SLURM
gpu_type: H100_SXM, H200_SXM, RTX_6000_PCI, L40_PCIE, B200_SXM, H100_SXM_INF
driver_version: CUDA_12_5_555, CUDA_12_6_560, CUDA_12_6_565, CUDA_12_8_570
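For client-side validation before calling the endpoint, those lists can be captured as plain Python sets, as in the hedged sketch below. The constant names and the check_enum helper are illustrative only, not part of the SDK.

CLUSTER_TYPES = {"KUBERNETES", "SLURM"}
GPU_TYPES = {"H100_SXM", "H200_SXM", "RTX_6000_PCI", "L40_PCIE", "B200_SXM", "H100_SXM_INF"}
DRIVER_VERSIONS = {"CUDA_12_5_555", "CUDA_12_6_560", "CUDA_12_6_565", "CUDA_12_8_570"}

def check_enum(field: str, value: str, allowed: set) -> str:
    # Fail early with a readable message instead of a server-side validation error.
    if value not in allowed:
        raise ValueError(f"unsupported {field} {value!r}; expected one of {sorted(allowed)}")
    return value

For example, check_enum("cluster_type", "SLURM", CLUSTER_TYPES) returns the value unchanged, while an unknown GPU type raises before any API call is made.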
status: Current status of the GPU cluster. One of WaitingForControlPlaneNodes, WaitingForDataPlaneNodes, WaitingForSubnet, WaitingForSharedVolume, InstallingDrivers, RunningAcceptanceTests, Paused, OnDemandComputePaused, Ready, Degraded, Deleting.
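To make that lifecycle concrete, here is a minimal polling sketch built only on the status values listed above. Because this section does not document a call for fetching a single cluster, the sketch takes a fetch_cluster callable; wire it to whatever SDK or REST request returns the cluster payload shown earlier. wait_until_ready and PROVISIONING_STATUSES are illustrative names.

import time

PROVISIONING_STATUSES = {
    "WaitingForControlPlaneNodes", "WaitingForDataPlaneNodes", "WaitingForSubnet",
    "WaitingForSharedVolume", "InstallingDrivers", "RunningAcceptanceTests",
}

def wait_until_ready(fetch_cluster, poll_seconds=30, timeout_seconds=3600) -> dict:
    # fetch_cluster: any callable returning the cluster payload as a dict.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        cluster = fetch_cluster()
        status = cluster.get("status")
        if status == "Ready":
            return cluster
        if status not in PROVISIONING_STATUSES:
            # Paused, Degraded, Deleting, etc. need operator attention rather than more waiting.
            raise RuntimeError(f"cluster entered unexpected status {status!r}")
        time.sleep(poll_seconds)
    raise TimeoutError("cluster did not become Ready in time")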