
#Deployments

#Understanding Deployments

Refer to the official Kubernetes documentation: Deployments

A Deployment is a higher-level Kubernetes workload resource used to declaratively manage and update the Pod replicas of your application. It provides a robust and flexible way to define how your application should run, including how many replicas to maintain and how to perform rolling updates safely.

A Deployment is an object in the Kubernetes API that manages Pods and ReplicaSets. When you create a Deployment, Kubernetes automatically creates a ReplicaSet, which is then responsible for maintaining the specified number of Pod replicas.

By using Deployments, you can:

  • Declarative Management: Define the desired state of your application, and Kubernetes automatically ensures the cluster's actual state matches the desired state.
  • Version Control and Rollback: Track each revision of a Deployment and easily roll back to a previous stable version if issues arise.
  • Zero-Downtime Updates: Gradually update your application using a rolling update strategy without service interruption.
  • Self-Healing: Deployments automatically replace Pod instances if they crash, are terminated, or are removed from a node, ensuring the specified number of Pods are always available.

How it works:

  1. You define the desired state of your application through a Deployment (e.g., which image to use, how many replicas to run).
  2. The Deployment creates a ReplicaSet to ensure the specified number of Pods are running.
  3. The ReplicaSet creates and manages the actual Pod instances.
  4. When you update a Deployment (e.g., change the image version), the Deployment creates a new ReplicaSet and, following the configured rolling update strategy, gradually replaces the old Pods with new ones. Once all new Pods are running, the old ReplicaSet is scaled down to zero (it is retained to allow rollbacks).

#Creating Deployments

#Creating a Deployment by using CLI

#Prerequisites

  • Ensure you have kubectl configured and connected to your cluster.

#YAML file example

# example-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment # Name of the Deployment
  labels:
    app: nginx # Labels for identification and selection
spec:
  replicas: 3 # Desired number of Pod replicas
  selector:
    matchLabels:
      app: nginx # Selector to match Pods managed by this Deployment
  template:
    metadata:
      labels:
        app: nginx # Pod's labels, must match selector.matchLabels
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2 # Container image
          ports:
            - containerPort: 80 # Container exposed port
          resources: # Resource limits and requests
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi

#Creating a Deployment via YAML

# Step 1: Create Deployment via yaml
kubectl apply -f example-deployment.yaml

# Step 2: Check the Deployment status
kubectl get deployment nginx-deployment # View Deployment
kubectl get pod -l app=nginx # View Pods created by this Deployment

#Creating a Deployment by using web console

#Prerequisites

Obtain the image address. Images can come from an image repository integrated by the platform administrator through the toolchain, or from a third-party platform's image repository.

  • For the former, the Administrator typically assigns the image repository to your project, and you can use the images within it. If the required image repository is not listed, contact the Administrator to have it allocated.

  • For a third-party image repository, ensure that images can be pulled from it directly in the current cluster.

  • If the image registry requires authentication, configure the corresponding image pull secret first. For more information, see Add ImagePullSecrets to ServiceAccount.

#Procedure - Configure Basic Info

  1. In Container Platform, navigate to Workloads > Deployments in the left sidebar.

  2. Click on Create Deployment.

  3. Select or Input an image, and click Confirm.

INFO

When using images from an image repository integrated into the web console, you can filter images by Already Integrated. The integration project name, for example images (docker-registry-projectname), reflects both the project name projectname in this web console and the project name containers in the image repository.

  4. In the Basic Info section, configure declarative parameters for Deployment workloads:

     Replicas: Defines the desired number of Pod replicas in the Deployment (default: 1). Adjust based on workload requirements.

     More > Update Strategy: Configures the rollingUpdate strategy for zero-downtime deployments (see the example snippet after this list).
     Max surge (maxSurge):
     • Maximum number of Pods that can exceed the desired replica count during an update.
     • Accepts absolute values (e.g., 2) or percentages (e.g., 20%).
     • Percentage calculation: ceil(current_replicas × percentage).
     • Example: 4.1 → 5 when calculated from 10 replicas.
     Max unavailable (maxUnavailable):
     • Maximum number of Pods that can be temporarily unavailable during an update.
     • Percentage values cannot exceed 100%.
     • Percentage calculation: floor(current_replicas × percentage).
     • Example: 4.9 → 4 when calculated from 10 replicas.
     Notes:
     1. Default values: maxSurge=1, maxUnavailable=1 if not explicitly set.
     2. Non-running Pods (e.g., in Pending/CrashLoopBackOff states) are considered unavailable.
     3. Simultaneous constraints:
     • maxSurge and maxUnavailable cannot both be 0 or 0%.
     • If percentage values resolve to 0 for both parameters, Kubernetes forces maxUnavailable=1 to ensure update progress.
     Example:
     For a Deployment with 10 replicas:
     • maxSurge=2 → Total Pods during update: 10 + 2 = 12.
     • maxUnavailable=3 → Minimum available Pods: 10 - 3 = 7.
     • This ensures availability while allowing a controlled rollout.
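
For reference, a minimal sketch of how the update strategy above appears in a Deployment manifest (values borrowed from the 10-replica example; the field layout follows the standard apps/v1 API):

# Sketch: rollingUpdate strategy for the example above
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # up to 12 Pods may exist during the update
      maxUnavailable: 3  # at least 7 Pods remain available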

#Procedure - Configure Pod

Note: In mixed-architecture clusters deploying single-architecture images, ensure proper Node Affinity Rules are configured for Pod scheduling.

  1. In the Pod section, configure container runtime parameters and lifecycle management:

     Volumes: Mount persistent volumes to containers. Supported volume types include PVC, ConfigMap, Secret, emptyDir, hostPath, and so on. For implementation details, see Volume Mounting Guide.

     Pull Secret: Required only when pulling images from third-party registries (via manual image URL input).
     Note: Secret used for authentication when pulling images from a secured registry.

     Close Grace Period: Duration (default: 30s) allowed for a Pod to complete graceful shutdown after receiving the termination signal.
     - During this period, the Pod completes in-flight requests and releases resources.
     - Setting 0 forces immediate deletion (SIGKILL), which may cause request interruptions.
  2. Node Affinity Rules

     More > Node Selector: Constrain Pods to nodes with specific labels (e.g., kubernetes.io/os: linux).

     Node OS Selector

     More > Affinity: Define fine-grained scheduling rules based on existing Pods (see the example sketch after this procedure).
     Affinity Types:
     • Pod Affinity: Schedule new Pods to nodes hosting specific Pods (same topology domain).
     • Pod Anti-affinity: Prevent co-location of new Pods with specific Pods.
     Enforcement Modes:
     • requiredDuringSchedulingIgnoredDuringExecution: Pods are scheduled only if rules are satisfied.
     • preferredDuringSchedulingIgnoredDuringExecution: Prioritize nodes meeting rules, but allow exceptions.
     Configuration Fields:
     • topologyKey: Node label defining topology domains (default: kubernetes.io/hostname).
     • labelSelector: Filters target Pods using label queries.
  3. Network Configuration

     • Kube-OVN

       Bandwidth Limits: Enforce QoS for Pod network traffic:
       • Egress rate limit: Maximum outbound traffic rate (e.g., 10 Mbps).
       • Ingress rate limit: Maximum inbound traffic rate.

       Subnet: Assign IPs from a predefined subnet pool. If unspecified, the namespace's default subnet is used.

       Static IP Address: Bind persistent IP addresses to Pods:
       • Multiple Pods across Deployments can claim the same IP, but only one Pod can use it at a time.
       • Critical: The number of static IPs must be ≥ the Pod replica count.

     • Calico

       Static IP Address: Assign fixed IPs with strict uniqueness:
       • Each IP can be bound to only one Pod in the cluster.
       • Critical: The static IP count must be ≥ the Pod replica count.
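
As referenced above, a minimal sketch of how a node selector and a Pod anti-affinity rule might look in the Pod template of the example nginx-deployment (label values are illustrative assumptions):

# Sketch: Pod template scheduling and shutdown settings (illustrative values)
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30        # "Close Grace Period"
      nodeSelector:
        kubernetes.io/os: linux                # Node Selector
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname   # spread Pods across nodes
              labelSelector:
                matchLabels:
                  app: nginx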

#Procedure - Configure Containers

  1. In the Container section, refer to the following instructions to configure the relevant information (a field-mapping sketch follows this procedure).

     Resource Requests & Limits:
     • Requests: Minimum CPU/memory required for the container to run.
     • Limits: Maximum CPU/memory allowed during container execution. For unit definitions, see Resource Units.
     Namespace overcommit ratio:
     • Without overcommit ratio:
       If namespace resource quotas exist: container requests/limits inherit the namespace defaults (modifiable).
       No namespace quotas: no defaults; set requests manually.
     • With overcommit ratio:
       Requests are auto-calculated as Limits / Overcommit ratio (immutable).
     Constraints:
     • Request ≤ Limit ≤ Namespace quota maximum.
     • Overcommit ratio changes require Pod recreation to take effect.
     • Overcommit ratio disables manual request configuration.
     • No namespace quotas → no container resource constraints.

     Extended Resources: Configure cluster-available extended resources (e.g., vGPU, pGPU).

     Volume Mounts: Persistent storage configuration. See Storage Volume Mounting Instructions.
     Operations:
     • Existing pod volumes: Click Add.
     • No pod volumes: Click Add & Mount.
     Parameters:
     • mountPath: Container filesystem path (e.g., /data).
     • subPath: Relative file/directory path within the volume.
       For ConfigMap/Secret: select a specific key.
     • readOnly: Mount as read-only (default: read-write).
     See Kubernetes Volumes.

     Ports: Expose container ports.
     Example: Expose TCP port 6379 with the name redis.
     Fields:
     • protocol: TCP/UDP.
     • Port: Exposed port (e.g., 6379).
     • name: DNS-compliant identifier (e.g., redis).

     Startup Commands & Arguments: Override the default ENTRYPOINT/CMD:
     Example 1: Execute top -b
     - Command: ["top", "-b"]
     - OR Command: ["top"], Args: ["-b"]
     Example 2: Output $MESSAGE:
     /bin/sh -c "while true; do echo $(MESSAGE); sleep 10; done"
     See Defining Commands.

     More > Environment Variables:
     • Static values: direct key-value pairs.
     • Dynamic values: reference ConfigMap/Secret keys, Pod fields (fieldRef), resource metrics (resourceFieldRef).
     Note: Environment variables override image/configuration file settings.

     More > Referenced ConfigMaps: Inject an entire ConfigMap/Secret as environment variables. Supported Secret types: Opaque, kubernetes.io/basic-auth.

     More > Health Checks:
     • Liveness Probe: Detect container health (restart the container if it fails).
     • Readiness Probe: Detect service availability (remove the Pod from endpoints if it fails).
     See Health Check Parameters.

     More > Log Files: Configure log paths:
     - Default: collect stdout.
     - File patterns: e.g., /var/log/*.log.
     Requirements:
     • Storage driver overlay2: supported by default.
     • devicemapper: manually mount an EmptyDir to the log directory.
     • Windows nodes: ensure the parent directory is mounted (e.g., c:/a for c:/a/b/c/*.log).

     More > Exclude Log Files: Exclude specific logs from collection (e.g., /var/log/aaa.log).

     More > Execute before Stopping: Execute commands before container termination.
     Example: echo "stop"
     Note: Command execution time must be shorter than the Pod's terminationGracePeriodSeconds.
  2. Click Add Container (upper right) OR Add Init Container.

    See Init Containers. Init containers:

    1. Start before app containers (sequential execution).
    2. Release their resources after completion.
    3. Can be deleted only when:
      • The Pod has more than one app container AND at least one init container.
      • Deletion is not allowed for single-app-container Pods.
  3. Click Create.
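
For orientation, a hedged sketch of how several of the container options above map to Pod template fields (the ConfigMap name app-config and the MESSAGE variable are illustrative assumptions, not platform defaults):

# Sketch: container fields corresponding to the console options (illustrative values)
containers:
  - name: nginx
    image: nginx:1.14.2
    command: ["/bin/sh", "-c"]                                 # Startup Commands
    args: ["while true; do echo $(MESSAGE); sleep 10; done"]  # Arguments
    env:
      - name: MESSAGE                          # static environment variable
        value: "hello"
    envFrom:
      - configMapRef:
          name: app-config                     # Referenced ConfigMaps (assumed name)
    ports:
      - name: http
        containerPort: 80
        protocol: TCP
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
    lifecycle:
      preStop:                                 # Execute before Stopping
        exec:
          command: ["sh", "-c", "echo stop"]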

#Reference Information

#Storage Volume Mounting Instructions

Persistent Volume Claim: Binds an existing PVC to request persistent storage.
Note: Only bound PVCs (with an associated PV) are selectable. Unbound PVCs will cause Pod creation failures.

ConfigMap: Mounts full/partial ConfigMap data as files:
  • Full ConfigMap: creates files named after keys under the mount path.
  • Subpath selection: mount a specific key (e.g., my.cnf).

Secret: Mounts full/partial Secret data as files:
  • Full Secret: creates files named after keys under the mount path.
  • Subpath selection: mount a specific key (e.g., tls.crt).

Ephemeral Volumes: Cluster-provisioned temporary volume with the following features:
  • Dynamic provisioning
  • Lifecycle tied to the Pod
  • Supports declarative configuration
Use Case: Temporary data storage. See Ephemeral Volumes.

Empty Directory: Ephemeral storage shared between containers in the same Pod:
  • Created on the node when the Pod starts
  • Deleted when the Pod is removed
Use Case: Inter-container file sharing, temporary data storage. See EmptyDir.

Host Path: Mounts a host machine directory (must start with /, e.g., /volumepath).
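
A brief sketch of how a PVC, a single-key ConfigMap mount (subPath), and an emptyDir might be declared together (the PVC name nginx-data and ConfigMap name nginx-config are illustrative assumptions):

# Sketch: volume declarations and mounts (illustrative names)
spec:
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - name: data
              mountPath: /data                 # PVC-backed persistent storage
            - name: config
              mountPath: /etc/config/my.cnf
              subPath: my.cnf                  # mount a single ConfigMap key
              readOnly: true
            - name: cache
              mountPath: /cache                # shared scratch space between containers
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-data              # must be an existing, bound PVC
        - name: config
          configMap:
            name: nginx-config
        - name: cache
          emptyDir: {}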

#Health Checks

  • Health checks YAML file example
  • Health checks configuration parameters in web console
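
For example, a minimal sketch of liveness and readiness probes for the nginx container (the probe path / and the timing values are assumptions about the application, not platform defaults):

# Sketch: health check probes (illustrative endpoint and timings)
containers:
  - name: nginx
    image: nginx:1.14.2
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10    # wait before the first check
      periodSeconds: 10          # check every 10 seconds
      failureThreshold: 3        # restart after 3 consecutive failures
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5           # remove from Service endpoints while failing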

#Managing Deployments

#Managing a Deployment by using CLI

#Viewing a Deployment

  • Check that the Deployment was created.

    kubectl get deployments
  • Get details of your Deployment.

    kubectl describe deployments

#Updating a Deployment

Follow the steps given below to update your Deployment:

  1. Let's update the nginx Pods to use the nginx:1.16.1 image.

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

    or use the following command:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

    Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1:

    kubectl edit deployment/nginx-deployment
  2. To see the rollout status, run:

    kubectl rollout status deployment/nginx-deployment

    Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

    kubectl get rs

    Running get pods should now show only the new Pods:

    kubectl get pods

#Scaling a Deployment

You can scale a Deployment by using the following command:

kubectl scale deployment/nginx-deployment --replicas=10

#Rolling Back a Deployment

  • Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of nginx:1.16.1:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.161
  • The rollout gets stuck. You can verify it by checking the rollout status:

    kubectl rollout status deployment/nginx-deployment
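
To actually roll back, inspect the revision history and undo the faulty rollout; both are standard kubectl commands:

# Inspect the rollout history of the Deployment
kubectl rollout history deployment/nginx-deployment

# Roll back to the previous revision (or use --to-revision=<n> for a specific one)
kubectl rollout undo deployment/nginx-deployment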

#Deleting a Deployment

Deleting a Deployment will also delete its managed ReplicaSet and all associated Pods.

kubectl delete deployment <deployment-name>

#Managing a Deployment by using web console

#Viewing a Deployment

You can view a Deployment to get information about your application.

  1. In Container Platform, navigate to Workloads > Deployments.
  2. Locate the Deployment you wish to view.
  3. Click the Deployment name to see its Details, Topology, Logs, Events, Monitoring, and so on.

#Updating a Deployment

  1. In Container Platform, navigate to Workloads > Deployments.
  2. Locate the Deployment you wish to update.
  3. In the Actions drop-down menu, select Update to open the Edit Deployment page.

#Deleting a Deployment

  1. In Container Platform, navigate to Workloads > Deployments.
  2. Locate the Deployment you wish to delete.
  3. In the Actions drop-down menu, select Delete and confirm.

#Troubleshooting by using CLI

When a Deployment encounters issues, here are some common troubleshooting methods.

#Check Deployment status

kubectl get deployment nginx-deployment
kubectl describe deployment nginx-deployment # View detailed events and status

#Check ReplicaSet status

kubectl get rs -l app=nginx
kubectl describe rs <replicaset-name>

#Check Pod status

kubectl get pods -l app=nginx
kubectl describe pod <pod-name>

#View Logs

kubectl logs <pod-name> -c <container-name> # View logs for a specific container
kubectl logs <pod-name> --previous         # View logs for the previously terminated container

#Enter Pod for debugging

kubectl exec -it <pod-name> -- /bin/bash # Enter the container shell

#Check Health configuration

Ensure livenessProbe and readinessProbe are correctly configured and that your application's health check endpoints respond properly. See Troubleshooting probe failures.
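
One way to inspect the probes that are actually configured, without opening the full manifest (standard kubectl; field paths follow the apps/v1 API):

kubectl get deployment nginx-deployment -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'
kubectl get deployment nginx-deployment -o jsonpath='{.spec.template.spec.containers[0].readinessProbe}'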

#Check Resource Limits

Ensure container resource requests and limits are reasonable and that containers are not being killed due to insufficient resources.
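
A quick way to check whether a container was killed for exceeding its memory limit (standard kubectl; <pod-name> is a placeholder, and kubectl top requires a metrics API such as metrics-server):

kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}' # prints OOMKilled if the memory limit was hit
kubectl top pod <pod-name> # current CPU/memory usage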