[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"blog-kubernetes-fuer-ki-anwendungen-skalierung-richtig-umsetzen":3},{"id":4,"title":5,"author":6,"body":7,"date":1054,"description":1055,"extension":1056,"image":1057,"meta":1058,"navigation":865,"path":1059,"readingTime":433,"seo":1060,"stem":1061,"tags":1062,"__hash__":1069},"content/blog/kubernetes-fuer-ki-anwendungen-skalierung-richtig-umsetzen.md","Kubernetes für KI-Anwendungen: Skalierung richtig umsetzen","KIyara",{"type":8,"value":9,"toc":1028},"minimark",[10,14,17,20,25,44,48,51,62,65,69,119,125,129,243,247,287,292,296,299,316,319,760,764,778,783,787,801,805,819,823,850,854,922,926,946,950,955,958,962,965,969,972,976,979,983,986,990,993,997,1000,1004,1007,1011,1014,1018,1021,1024],[11,12,13],"p",{},"Ihre KI-Services sollen im Tagesgeschäft stabil antworten, während Trainingsjobs GPUs effizient auslasten – und das alles zu planbaren Kosten. Kubernetes ist dafür das stärkste Fundament, wenn Sie Skalierung sauber umsetzen.",[11,15,16],{},"In diesem Leitfaden zeigen wir, wie Sie KI-Workloads auf Kubernetes so designen, dass Durchsatz, Latenz und Budget im Gleichgewicht bleiben. Mit klaren Architektur-Bausteinen, erprobten Mustern und konkreten YAML-Beispielen.",[11,18,19],{},"Ergebnis: Weniger Firefighting, mehr reproduzierbare Performance – von der ersten GPU bis zum produktionsreifen KI-Cluster.",[21,22,24],"h2",{"id":23},"tldr","TL;DR",[26,27,28,32,35,38,41],"ul",{},[29,30,31],"li",{},"Trennen Sie Workload-Pfade: Online-Inferenz (Deployment/Service) vs. Batch/Training (Jobs/Operatoren).",[29,33,34],{},"Für GPUs: Device Plugin/Operator, dedizierte Nodepools, Taints/Tolerations, Affinity und ggf. MIG/Zeitslicing.",[29,36,37],{},"Autoscaling: HPA (custom/external Metrics) für Inferenz, KEDA für Event-/Queue-Last, Cluster Autoscaler für Nodes.",[29,39,40],{},"Datenpfade optimieren: Read-Only Modelvolumes, Cache/Warmup, objektbasiertes Storage für große Artefakte.",[29,42,43],{},"Observability & Kosten: GPU-, Latenz- und Durchsatz-Metriken, SLOs definieren, Quotas/Budgets als Guardrails.",[21,45,47],{"id":46},"was-bedeutet-skalierung-von-ki-workloads-auf-kubernetes-definition","Was bedeutet Skalierung von KI-Workloads auf Kubernetes? (Definition)",[11,49,50],{},"Skalierung in Kubernetes für KI umfasst drei Dimensionen:",[26,52,53,56,59],{},[29,54,55],{},"Horizontal: Mehr/ weniger Pods und bei Bedarf mehr/ weniger Nodes (Cluster Autoscaler).",[29,57,58],{},"Vertikal: Passende CPU/GPU/Memory pro Pod (VPA im Recommend-Modus, feste Requests/Limits).",[29,60,61],{},"Orchestrierung: Richtige Zuweisung seltener Ressourcen (GPUs), Datenpfade mit ausreichendem Durchsatz und SLO-gerechte Latenzen.",[11,63,64],{},"Ziel ist nicht “maximale Auslastung um jeden Preis”, sondern ein belastbares Gleichgewicht aus Verfügbarkeit, Performance und Kosten.",[21,66,68],{"id":67},"architektur-bausteine-für-ki-auf-k8s","Architektur-Bausteine für KI auf K8s",[26,70,71,79,87,95,103,111],{},[29,72,73,74],{},"Dedizierte GPU-Nodepools\n",[26,75,76],{},[29,77,78],{},"Labels (z. B. 
## Using autoscaling correctly: HPA, VPA, KEDA, and Cluster Autoscaler

- Horizontal Pod Autoscaler (HPA v2)
  - Use custom/external metrics (e.g. P95 latency, QPS, GPU utilization). CPU alone is often misleading for inference.
  - The Prometheus Adapter makes these metrics consumable by the HPA.
- KEDA for event-driven scaling
  - Scales Deployments/Jobs based on queue length (Kafka, RabbitMQ, SQS, etc.) or streams.
- Vertical Pod Autoscaler (VPA)
  - Use it as "recommendation only" for sizing guidance; do not run it actively in parallel with the HPA on the same resources.
- Cluster Autoscaler
  - Handles node scaling; combine it with dedicated GPU node groups.
  - Guardrails: min/max sizes, pod PriorityClasses, budget limits.

> Practical tip: define SLOs (e.g. "P95 < 150 ms at 200 RPS"). Derive your HPA targets from them and test load spikes with synthetic traffic.
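As a sketch of what such an SLO-derived target can look like: the following HPA assumes the Prometheus Adapter exposes a per-pod metric named `http_requests_per_second`; the metric name, replica bounds, and target value are illustrative assumptions rather than part of the original article.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-gpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference-gpu
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # assumed to be published via the Prometheus Adapter
        target:
          type: AverageValue
          averageValue: "100"              # illustrative per-pod target derived from the SLO
```

For queue-driven batch work, a KEDA ScaledObject or ScaledJob plays the analogous role, triggering on queue length instead of request rate.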
oder Streams.",[29,269,270,271],{},"Vertical Pod Autoscaler (VPA)\n",[26,272,273],{},[29,274,275],{},"Als “recommendation only” für Richtwerte, nicht parallel zu HPA auf denselben Ressourcen aktiv nutzen.",[29,277,278,279],{},"Cluster Autoscaler\n",[26,280,281,284],{},[29,282,283],{},"Sorgt für Node-Skalierung; kombinieren Sie mit dedizierten GPU-Nodegroups.",[29,285,286],{},"Guardrails: Max/Min Größen, Pod PriorityClasses, Budgetlimits.",[120,288,289],{},[11,290,291],{},"Praxis-Tipp: Definieren Sie SLOs (z. B. “P95 \u003C 150 ms bei 200 RPS”). Leiten Sie daraus HPA-Ziele ab und testen Sie Lastsprünge mit synthetischem Traffic.",[21,293,295],{"id":294},"gpu-scheduling-und-ressourcenmanagement","GPU-Scheduling und Ressourcenmanagement",[11,297,298],{},"GPU-Kapazitäten sind knapp – Planbarkeit schlägt Bauchgefühl. Kernelemente:",[26,300,301,304,307,310,313],{},[29,302,303],{},"Device Plugin & Operator: Stellt nvidia.com/gpu bereit, automatisiert Treiber.",[29,305,306],{},"Node Affinity: Pods nur auf GPU-Nodes planen.",[29,308,309],{},"Taints/Tolerations: Abschirmen, damit nur GPU-Workloads diese Nodes nutzen.",[29,311,312],{},"Sharing: MIG (Multi-Instance GPU) oder Zeitslicing, falls Workloads es erlauben.",[29,314,315],{},"QoS: PriorityClasses und PodDisruptionBudgets für planbare Wartungsfenster.",[11,317,318],{},"Beispiel-Deployment für Inferenz mit 1 GPU und klaren Scheduling-Regeln:",[320,321,326],"pre",{"className":322,"code":323,"language":324,"meta":325,"style":325},"language-yaml shiki shiki-themes github-light github-dark","apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: inference-gpu\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: inference-gpu\n  template:\n    metadata:\n      labels:\n        app: inference-gpu\n    spec:\n      nodeSelector:\n        accelerator: nvidia\n      tolerations:\n        - key: \"gpu\"\n          operator: \"Equal\"\n          value: \"true\"\n          effect: \"NoSchedule\"\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n              - matchExpressions:\n                  - key: accelerator\n                    operator: In\n                    values: [\"nvidia\"]\n      containers:\n        - name: server\n          image: yourrepo/inference:stable\n          resources:\n            requests:\n              nvidia.com/gpu: \"1\"\n              cpu: \"1\"\n              memory: \"4Gi\"\n            limits:\n              nvidia.com/gpu: \"1\"\n              cpu: \"2\"\n              memory: \"8Gi\"\n          ports:\n            - containerPort: 8080\n","yaml","",[327,328,329,346,357,366,377,385,397,405,413,423,431,439,447,457,465,473,484,492,506,517,528,539,547,555,563,571,582,595,606,621,629,642,653,661,669,680,690,701,709,718,728,738,746],"code",{"__ignoreMap":325},[330,331,334,338,342],"span",{"class":332,"line":333},"line",1,[330,335,337],{"class":336},"s9eBZ","apiVersion",[330,339,341],{"class":340},"sVt8B",": ",[330,343,345],{"class":344},"sZZnC","apps/v1\n",[330,347,349,352,354],{"class":332,"line":348},2,[330,350,351],{"class":336},"kind",[330,353,341],{"class":340},[330,355,356],{"class":344},"Deployment\n",[330,358,360,363],{"class":332,"line":359},3,[330,361,362],{"class":336},"metadata",[330,364,365],{"class":340},":\n",[330,367,369,372,374],{"class":332,"line":368},4,[330,370,371],{"class":336},"  
name",[330,373,341],{"class":340},[330,375,376],{"class":344},"inference-gpu\n",[330,378,380,383],{"class":332,"line":379},5,[330,381,382],{"class":336},"spec",[330,384,365],{"class":340},[330,386,388,391,393],{"class":332,"line":387},6,[330,389,390],{"class":336},"  replicas",[330,392,341],{"class":340},[330,394,396],{"class":395},"sj4cs","2\n",[330,398,400,403],{"class":332,"line":399},7,[330,401,402],{"class":336},"  selector",[330,404,365],{"class":340},[330,406,408,411],{"class":332,"line":407},8,[330,409,410],{"class":336},"    matchLabels",[330,412,365],{"class":340},[330,414,416,419,421],{"class":332,"line":415},9,[330,417,418],{"class":336},"      app",[330,420,341],{"class":340},[330,422,376],{"class":344},[330,424,426,429],{"class":332,"line":425},10,[330,427,428],{"class":336},"  template",[330,430,365],{"class":340},[330,432,434,437],{"class":332,"line":433},11,[330,435,436],{"class":336},"    metadata",[330,438,365],{"class":340},[330,440,442,445],{"class":332,"line":441},12,[330,443,444],{"class":336},"      labels",[330,446,365],{"class":340},[330,448,450,453,455],{"class":332,"line":449},13,[330,451,452],{"class":336},"        app",[330,454,341],{"class":340},[330,456,376],{"class":344},[330,458,460,463],{"class":332,"line":459},14,[330,461,462],{"class":336},"    spec",[330,464,365],{"class":340},[330,466,468,471],{"class":332,"line":467},15,[330,469,470],{"class":336},"      nodeSelector",[330,472,365],{"class":340},[330,474,476,479,481],{"class":332,"line":475},16,[330,477,478],{"class":336},"        accelerator",[330,480,341],{"class":340},[330,482,483],{"class":344},"nvidia\n",[330,485,487,490],{"class":332,"line":486},17,[330,488,489],{"class":336},"      tolerations",[330,491,365],{"class":340},[330,493,495,498,501,503],{"class":332,"line":494},18,[330,496,497],{"class":340},"        - ",[330,499,500],{"class":336},"key",[330,502,341],{"class":340},[330,504,505],{"class":344},"\"gpu\"\n",[330,507,509,512,514],{"class":332,"line":508},19,[330,510,511],{"class":336},"          operator",[330,513,341],{"class":340},[330,515,516],{"class":344},"\"Equal\"\n",[330,518,520,523,525],{"class":332,"line":519},20,[330,521,522],{"class":336},"          value",[330,524,341],{"class":340},[330,526,527],{"class":344},"\"true\"\n",[330,529,531,534,536],{"class":332,"line":530},21,[330,532,533],{"class":336},"          effect",[330,535,341],{"class":340},[330,537,538],{"class":344},"\"NoSchedule\"\n",[330,540,542,545],{"class":332,"line":541},22,[330,543,544],{"class":336},"      affinity",[330,546,365],{"class":340},[330,548,550,553],{"class":332,"line":549},23,[330,551,552],{"class":336},"        nodeAffinity",[330,554,365],{"class":340},[330,556,558,561],{"class":332,"line":557},24,[330,559,560],{"class":336},"          requiredDuringSchedulingIgnoredDuringExecution",[330,562,365],{"class":340},[330,564,566,569],{"class":332,"line":565},25,[330,567,568],{"class":336},"            nodeSelectorTerms",[330,570,365],{"class":340},[330,572,574,577,580],{"class":332,"line":573},26,[330,575,576],{"class":340},"              - ",[330,578,579],{"class":336},"matchExpressions",[330,581,365],{"class":340},[330,583,585,588,590,592],{"class":332,"line":584},27,[330,586,587],{"class":340},"                  - ",[330,589,500],{"class":336},[330,591,341],{"class":340},[330,593,594],{"class":344},"accelerator\n",[330,596,598,601,603],{"class":332,"line":597},28,[330,599,600],{"class":336},"                    
operator",[330,602,341],{"class":340},[330,604,605],{"class":344},"In\n",[330,607,609,612,615,618],{"class":332,"line":608},29,[330,610,611],{"class":336},"                    values",[330,613,614],{"class":340},": [",[330,616,617],{"class":344},"\"nvidia\"",[330,619,620],{"class":340},"]\n",[330,622,624,627],{"class":332,"line":623},30,[330,625,626],{"class":336},"      containers",[330,628,365],{"class":340},[330,630,632,634,637,639],{"class":332,"line":631},31,[330,633,497],{"class":340},[330,635,636],{"class":336},"name",[330,638,341],{"class":340},[330,640,641],{"class":344},"server\n",[330,643,645,648,650],{"class":332,"line":644},32,[330,646,647],{"class":336},"          image",[330,649,341],{"class":340},[330,651,652],{"class":344},"yourrepo/inference:stable\n",[330,654,656,659],{"class":332,"line":655},33,[330,657,658],{"class":336},"          resources",[330,660,365],{"class":340},[330,662,664,667],{"class":332,"line":663},34,[330,665,666],{"class":336},"            requests",[330,668,365],{"class":340},[330,670,672,675,677],{"class":332,"line":671},35,[330,673,674],{"class":336},"              nvidia.com/gpu",[330,676,341],{"class":340},[330,678,679],{"class":344},"\"1\"\n",[330,681,683,686,688],{"class":332,"line":682},36,[330,684,685],{"class":336},"              cpu",[330,687,341],{"class":340},[330,689,679],{"class":344},[330,691,693,696,698],{"class":332,"line":692},37,[330,694,695],{"class":336},"              memory",[330,697,341],{"class":340},[330,699,700],{"class":344},"\"4Gi\"\n",[330,702,704,707],{"class":332,"line":703},38,[330,705,706],{"class":336},"            limits",[330,708,365],{"class":340},[330,710,712,714,716],{"class":332,"line":711},39,[330,713,674],{"class":336},[330,715,341],{"class":340},[330,717,679],{"class":344},[330,719,721,723,725],{"class":332,"line":720},40,[330,722,685],{"class":336},[330,724,341],{"class":340},[330,726,727],{"class":344},"\"2\"\n",[330,729,731,733,735],{"class":332,"line":730},41,[330,732,695],{"class":336},[330,734,341],{"class":340},[330,736,737],{"class":344},"\"8Gi\"\n",[330,739,741,744],{"class":332,"line":740},42,[330,742,743],{"class":336},"          ports",[330,745,365],{"class":340},[330,747,749,752,755,757],{"class":332,"line":748},43,[330,750,751],{"class":340},"            - ",[330,753,754],{"class":336},"containerPort",[330,756,341],{"class":340},[330,758,759],{"class":395},"8080\n",[21,761,763],{"id":762},"daten-storage-und-durchsatz","Daten, Storage und Durchsatz",[26,765,766,769,772,775],{},[29,767,768],{},"Modelle: Read-Only Volumes (ReadOnlyMany) oder On-Demand Download aus Objekt-Storage; Versionierung beibehalten.",[29,770,771],{},"Caching/Warmup: Init-Container zum Vorladen heißer Modelle; Sidecar-Cache oder lokaler NVMe-Speicher für niedrige Latenz.",[29,773,774],{},"Trainingsdaten: Große Datasets per Objekt-Storage streamen; Checkpoints regelmäßig wegspeichern.",[29,776,777],{},"Durchsatzpfade: Für verteiltes Training auf Netzwerk-Latenz/Throughput achten; ggf. getrennte StorageClasses (fast/standard).",[120,779,780],{},[11,781,782],{},"Praxis-Tipp: Messen Sie First-Byte-Latenz nach Pod-Start. 
## Observability, cost, and SLOs

- Metrics: GPU utilization, GPU memory, inference latencies (P50/P95), QPS, queue lengths; exported via DCGM/Prometheus.
- Logs/tracing: correlate request IDs across ingress, service, and model handler.
- Cost transparency: labels/annotations for cost centers, GPU hours per team/service.
- SLO guardrails: alert on target violations, set autoscaling limits deliberately, avoid preemption.

## Security and governance

- Isolation: separate namespaces/node groups per team; NetworkPolicies for minimal traffic (see the sketch below).
- Images and SBOM: reproducible builds, scans in CI.
- Secrets/keys: access to models/object storage via secret stores (e.g. CSI Secrets Store).
- Compliance: audit logging, traceable pipelines, reproducible artifacts.
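As an illustration of the isolation bullet, a minimal sketch of a default-deny NetworkPolicy plus an allow rule for the inference service; the namespace and label names are assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ml-inference              # assumed namespace
spec:
  podSelector: {}                      # applies to all pods in the namespace
  policyTypes: ["Ingress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-inference
  namespace: ml-inference
spec:
  podSelector:
    matchLabels:
      app: inference-gpu
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed ingress controller namespace
      ports:
        - protocol: TCP
          port: 8080
```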
nvidia.com/gpu je Pod",[29,875,877,879],{"className":876},[861],[863,878],{"disabled":865,"type":866}," HPA/KEDA-Ziele an SLOs gekoppelt, Prometheus Adapter aktiv",[29,881,883,885],{"className":882},[861],[863,884],{"disabled":865,"type":866}," Modelle als RO-Volume oder über Objekt-Storage versioniert eingebunden",[29,887,889,891],{"className":888},[861],[863,890],{"disabled":865,"type":866}," Warmup/Cache für kalte Starts implementiert",[29,893,895,897],{"className":894},[861],[863,896],{"disabled":865,"type":866}," DCGM-/App-Metriken, Logs und Tracing verknüpft",[29,899,901,903],{"className":900},[861],[863,902],{"disabled":865,"type":866}," PodDisruptionBudgets und PriorityClasses definiert",[29,905,907,909],{"className":906},[861],[863,908],{"disabled":865,"type":866}," Cluster Autoscaler mit Guardrails konfiguriert",[29,911,913,915],{"className":912},[861],[863,914],{"disabled":865,"type":866}," Sicherheitsregeln: NetworkPolicies, Secrets-Management, Image-Scanning",[29,917,919,921],{"className":918},[861],[863,920],{"disabled":865,"type":866}," Last-/Resilienztests regelmäßig automatisiert",[21,923,925],{"id":924},"typische-fehler-und-wie-sie-sie-vermeiden","Typische Fehler – und wie Sie sie vermeiden",[26,927,928,931,934,937,940,943],{},[29,929,930],{},"Nur auf CPU skalieren: Für Inferenz irrelevant. Nutzen Sie Latenz/QPS/GPU-Metriken.",[29,932,933],{},"Keine Requests/Limits: Scheduler plant unzuverlässig; definieren Sie Ressourcen sauber.",[29,935,936],{},"Gemischte Nodes ohne Taints: CPU-Workloads belegen GPU-Knoten – trennen und schützen.",[29,938,939],{},"Kalte Modelle: Start-Latenz explodiert. Warmup/Cache frühzeitig einplanen.",[29,941,942],{},"HPA und VPA gleichzeitig “aktiv”: VPA nur als Empfehlung, sonst Flattern.",[29,944,945],{},"Datenpfade ignoriert: Training/Inferenz drosseln, wenn Storage/Netz nicht mitzieht.",[21,947,949],{"id":948},"häufige-fragen-faq","Häufige Fragen (FAQ)",[951,952,954],"h3",{"id":953},"brauche-ich-zwingend-den-nvidia-gpu-operator","Brauche ich zwingend den NVIDIA GPU Operator?",[11,956,957],{},"Nicht zwingend, aber er vereinfacht Treiber-, Toolkit- und Monitoring-Setup erheblich. In produktiven Umgebungen reduziert er Drift und verkürzt die Mean Time to Recovery nach Node-Replacements.",[951,959,961],{"id":960},"wie-messe-ich-richtige-metriken-für-hpa-bei-inferenz","Wie messe ich “richtige” Metriken für HPA bei Inferenz?",[11,963,964],{},"Nutzen Sie SLO-nahe Signale wie P95-Latenz oder Anfragen in Bearbeitung. GPU-Utilization kann ergänzen, ersetzt aber keine User-zentrierten Metriken. Der Prometheus Adapter macht diese Werte HPA-fähig.",[951,966,968],{"id":967},"kann-ich-gpus-zwischen-pods-teilen","Kann ich GPUs zwischen Pods teilen?",[11,970,971],{},"Ja, mit NVIDIA MIG oder Zeitslicing – sofern Ihr Workload das unterstützt. Achten Sie auf Scheduling- und Isolationsimplikationen und testen Sie die Latenzstabilität unter Last.",[951,973,975],{"id":974},"was-ist-besser-für-batch-inferenz-jobs-oder-deployments","Was ist besser für Batch-Inferenz: Jobs oder Deployments?",[11,977,978],{},"Jobs. Sie sind für einmalige/finite Ausführungen gedacht und integrieren gut mit KEDA/Queues. Deployments eignen sich für dauerhafte Dienste mit klaren SLOs.",[951,980,982],{"id":981},"wie-gehe-ich-mit-großen-modellen-um","Wie gehe ich mit großen Modellen um?",[11,984,985],{},"Nutzen Sie Read-Only Volumes oder Objekt-Storage mit lokalem Cache. 
## Typical mistakes and how to avoid them

- Scaling on CPU only: largely meaningless for inference. Use latency/QPS/GPU metrics.
- No requests/limits: the scheduler places pods unreliably; define resources cleanly.
- Mixed nodes without taints: CPU workloads occupy GPU nodes; separate and protect them.
- Cold models: start-up latency explodes. Plan warmup/caching early.
- HPA and VPA both "active" at the same time: use VPA as a recommender only, otherwise you get flapping.
- Ignored data paths: training and inference throttle when storage or network cannot keep up.

## Frequently asked questions (FAQ)

### Do I strictly need the NVIDIA GPU Operator?

Not strictly, but it simplifies driver, toolkit, and monitoring setup considerably. In production environments it reduces drift and shortens mean time to recovery after node replacements.

### How do I measure the "right" metrics for HPA with inference?

Use SLO-adjacent signals such as P95 latency or requests in flight. GPU utilization can complement these but does not replace user-centric metrics. The Prometheus Adapter makes these values consumable by the HPA.

### Can I share GPUs between pods?

Yes, with NVIDIA MIG or time-slicing, provided your workload supports it. Pay attention to scheduling and isolation implications, and test latency stability under load.

### What is better for batch inference: Jobs or Deployments?

Jobs. They are designed for one-off/finite runs and integrate well with KEDA and queues. Deployments suit long-running services with clear SLOs.

### How do I handle large models?

Use read-only volumes or object storage with a local cache. Load and warm up models at startup to avoid cold latencies, and version artifacts consistently.

### Can I use HPA and VPA together?

Yes, but carefully: run VPA in "recommendation" mode and let the HPA handle horizontal scaling. Avoid competing targets on the same resources.
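A minimal sketch of the recommendation-only setup described in the previous answer, assuming the VPA CRDs from the Kubernetes autoscaler project are installed; the target Deployment matches the earlier example.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: inference-gpu-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference-gpu
  updatePolicy:
    updateMode: "Off"        # recommendations only; no automatic pod evictions
```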
### How do I plan mixed workloads (training plus inference) in the same cluster?

Separate them via namespaces, node groups, taints/tolerations, and PriorityClasses. That keeps inference SLO-stable while training jobs soak up spare capacity.

### What role does a service mesh play?

For inference, a mesh can standardize mTLS, retries, timeouts, and canary releases. It does not, however, replace a sound HPA design or optimized data paths.

### How do I limit costs effectively?

Set ResourceQuotas/budgets, define upper bounds for the autoscalers, and export GPU hours per team. Review SLOs against costs regularly.

## Conclusion

Kubernetes provides the building blocks to scale AI workloads in a controlled, reproducible way, provided GPU scheduling, autoscaling, and data paths are designed cleanly. Separating inference and batch clearly yields stable latencies and predictable costs.

Use the checklist and the YAML example as a starting point for your cluster blueprint. For advanced scenarios such as MIG/time-slicing, distributed training, and SLO fine-tuning, follow our blog for further deep dives. Subscribe to the technical newsletter to stay up to date on Kubernetes and AI.