mirror of https://github.com/kubevela/kubevela.git
# KubeVela helm chart

Make shipping applications more enjoyable.

KubeVela is a modern application platform that makes it easier and faster to deliver and manage applications across hybrid, multi-cloud environments. At the same time, it is highly extensible and programmable, and can adapt to your needs as they grow.
## TL;DR

```shell
helm repo add kubevela https://kubevela.github.io/charts
helm repo update
helm install --create-namespace -n vela-system kubevela kubevela/vela-core --wait
```
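Once the release is installed, it is worth confirming the controller is actually running before going further. A quick check (the label selector below is an assumption about the chart's generated labels; adjust it if your release name differs):

```shell
# Confirm the KubeVela controller pod is Running and Ready
kubectl get pods -n vela-system -l app.kubernetes.io/name=vela-core

# Optionally confirm the KubeVela CRDs were registered
kubectl get crds | grep core.oam.dev
```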
## Prerequisites

- Kubernetes >= v1.19 && < v1.22
## Parameters

### KubeVela core parameters
| Name | Description | Value |
|---|---|---|
| `systemDefinitionNamespace` | System definition namespace; if unspecified, the built-in variable `.Release.Namespace` is used | `nil` |
| `applicationRevisionLimit` | Application revision limit | `2` |
| `definitionRevisionLimit` | Definition revision limit | `2` |
| `concurrentReconciles` | The number of concurrent reconciles of the controller | `4` |
| `controllerArgs.reSyncPeriod` | The period for resyncing applications | `5m` |
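Any of these parameters can be overridden at install or upgrade time with `--set`. For example, to raise controller concurrency and lengthen the resync period (the values below are illustrative, not recommendations):

```shell
helm upgrade -n vela-system kubevela kubevela/vela-core \
  --set concurrentReconciles=8 \
  --set controllerArgs.reSyncPeriod=10m
```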
### KubeVela workflow parameters
| Name | Description | Value |
|---|---|---|
| `workflow.enableSuspendOnFailure` | Enable suspend on workflow failure | `false` |
| `workflow.enableExternalPackageForDefaultCompiler` | Enable external package for the default cuex compiler | `true` |
| `workflow.enableExternalPackageWatchForDefaultCompiler` | Enable external package watch for the default cuex compiler | `false` |
| `workflow.backoff.maxTime.waitState` | The max backoff time of a workflow in a wait condition | `60` |
| `workflow.backoff.maxTime.failedState` | The max backoff time of a workflow in a failed condition | `300` |
| `workflow.step.errorRetryTimes` | The max retry times of a failed workflow step | `10` |
### KubeVela controller parameters
| Name | Description | Value |
|---|---|---|
| `replicaCount` | KubeVela controller replica count | `1` |
| `imageRegistry` | Image registry | `""` |
| `image.repository` | Image repository | `oamdev/vela-core` |
| `image.tag` | Image tag | `latest` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `resources.limits.cpu` | KubeVela controller's CPU limit | `500m` |
| `resources.limits.memory` | KubeVela controller's memory limit | `1Gi` |
| `resources.requests.cpu` | KubeVela controller's CPU request | `50m` |
| `resources.requests.memory` | KubeVela controller's memory request | `20Mi` |
| `webhookService.type` | KubeVela webhook service type | `ClusterIP` |
| `webhookService.port` | KubeVela webhook service port | `9443` |
| `healthCheck.port` | KubeVela health check port | `9440` |
### KubeVela controller optimization parameters
| Name | Description | Value |
|---|---|---|
| `optimize.cachedGvks` | Optimize by caching the listed resource types (GVKs) | `""` |
| `optimize.markWithProb` | Optimize ResourceTracker GC by running mark only with the given probability. Side effect: outdated ResourceTrackers might not be removed immediately | `0.1` |
| `optimize.disableComponentRevision` | Optimize componentRevision by disabling its creation and GC | `true` |
| `optimize.disableApplicationRevision` | Optimize ApplicationRevision by disabling its creation and GC | `false` |
| `optimize.enableInMemoryWorkflowContext` | Optimize workflow by using an in-memory context | `false` |
| `optimize.disableResourceApplyDoubleCheck` | Optimize workflow by skipping the resource double check after apply | `false` |
| `optimize.enableResourceTrackerDeleteOnlyTrigger` | Optimize ResourceTracker by only triggering reconcile when a ResourceTracker is deleted | `true` |
| `featureGates.gzipResourceTracker` | Compress ResourceTrackers using gzip (good ratio) before they are stored. This reduces network throughput when dealing with huge ResourceTrackers | `false` |
| `featureGates.zstdResourceTracker` | Compress ResourceTrackers using zstd (fast with a good ratio) before they are stored. This reduces network throughput when dealing with huge ResourceTrackers. Note that zstd is prioritized if other compression options are also enabled | `true` |
| `featureGates.applyOnce` | If enabled, the apply-once feature is applied to all applications: no state-keep and no resource data storage in the ResourceTracker | `false` |
| `featureGates.multiStageComponentApply` | If enabled, the multiStageComponentApply feature is combined with the `stage` field in TraitDefinition to complete a multi-stage apply | `true` |
| `featureGates.gzipApplicationRevision` | Compress ApplicationRevisions using gzip (good ratio) before they are stored. This reduces network throughput when dealing with huge ApplicationRevisions | `false` |
| `featureGates.zstdApplicationRevision` | Compress ApplicationRevisions using zstd (fast with a good ratio) before they are stored. This reduces network throughput when dealing with huge ApplicationRevisions. Note that zstd is prioritized if other compression options are also enabled | `true` |
| `featureGates.preDispatchDryRun` | Enable dry-run before dispatching resources. Enabling this flag helps prevent unsuccessfully dispatched resources from entering the ResourceTracker and improves the GC user experience, at the cost of more network requests | `true` |
| `featureGates.validateComponentWhenSharding` | Enable component validation in the webhook when sharding mode is enabled | `false` |
| `featureGates.disableWebhookAutoSchedule` | Disable auto schedule for the application mutating webhook when sharding is enabled | `false` |
| `featureGates.disableBootstrapClusterInfo` | Disable the cluster info bootstrap at controller startup | `false` |
| `featureGates.informerCacheFilterUnnecessaryFields` | Filter unnecessary fields for the informer cache | `true` |
| `featureGates.sharedDefinitionStorageForApplicationRevision` | Use the definition cache to reduce duplicated definition storage for application revisions; must be used with `informerCacheFilterUnnecessaryFields` | `true` |
| `featureGates.disableWorkflowContextConfigMapCache` | Disable the workflow context's ConfigMap informer cache | `true` |
| `featureGates.enableCueValidation` | Enable strict CUE validation for required CUE parameter fields | `false` |
| `featureGates.enableApplicationStatusMetrics` | Enable application status metrics and structured logging | `false` |
| `featureGates.validateResourcesExist` | Enable webhook validation to check that resource types referenced in definition templates exist in the cluster | `false` |
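Feature gates are plain boolean values, so they can be flipped like any other parameter. Since zstd takes priority over gzip when both are enabled, switching ApplicationRevision compression to gzip means disabling zstd explicitly; a sketch (not a recommendation):

```shell
helm upgrade -n vela-system kubevela kubevela/vela-core \
  --set featureGates.zstdApplicationRevision=false \
  --set featureGates.gzipApplicationRevision=true
```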
### MultiCluster parameters
| Name | Description | Value |
|---|---|---|
| `multicluster.enabled` | Whether to enable multi-cluster | `true` |
| `multicluster.metrics.enabled` | Whether to enable multi-cluster metrics collection | `false` |
| `multicluster.clusterGateway.direct` | The controller connects to ClusterGateway directly instead of going through the Kubernetes APIServer | `true` |
| `multicluster.clusterGateway.replicaCount` | ClusterGateway replica count | `1` |
| `multicluster.clusterGateway.port` | ClusterGateway port | `9443` |
| `multicluster.clusterGateway.image.repository` | ClusterGateway image repository | `oamdev/cluster-gateway` |
| `multicluster.clusterGateway.image.tag` | ClusterGateway image tag | `v1.9.0-alpha.2` |
| `multicluster.clusterGateway.image.pullPolicy` | ClusterGateway image pull policy | `IfNotPresent` |
| `multicluster.clusterGateway.resources.requests.cpu` | ClusterGateway CPU request | `50m` |
| `multicluster.clusterGateway.resources.requests.memory` | ClusterGateway memory request | `20Mi` |
| `multicluster.clusterGateway.resources.limits.cpu` | ClusterGateway CPU limit | `500m` |
| `multicluster.clusterGateway.resources.limits.memory` | ClusterGateway memory limit | `200Mi` |
| `multicluster.clusterGateway.secureTLS.enabled` | Whether to enable secure TLS | `true` |
| `multicluster.clusterGateway.secureTLS.certPath` | Path to the certificate file | `/etc/k8s-cluster-gateway-certs` |
| `multicluster.clusterGateway.secureTLS.certManager.enabled` | Whether to enable cert-manager | `false` |
| `multicluster.clusterGateway.serviceMonitor.enabled` | Whether to enable the service monitor | `false` |
| `multicluster.clusterGateway.serviceMonitor.additionalLabels` | Additional labels for the service monitor | `{}` |
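With `multicluster.enabled` at its default of `true`, managed clusters are typically registered through the vela CLI. A sketch of joining a cluster (the kubeconfig path and cluster name below are placeholders, and the exact `vela cluster` flags may vary between versions):

```shell
# Register a managed cluster from its kubeconfig (placeholder path and name)
vela cluster join ./prod-cluster.kubeconfig --name prod-cluster

# List the clusters known to the control plane
vela cluster list
```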
### Test parameters
| Name | Description | Value |
|---|---|---|
| `test.app.repository` | Test app repository | `oamdev/hello-world` |
| `test.app.tag` | Test app tag | `v1` |
| `test.k8s.repository` | Test k8s repository | `oamdev/alpine-k8s` |
| `test.k8s.tag` | Test k8s tag | `1.18.2` |
### Common parameters
| Name | Description | Value |
|---|---|---|
| `imagePullSecrets` | Image pull secrets | `[]` |
| `nameOverride` | Override the name | `""` |
| `fullnameOverride` | Override the fullname | `""` |
| `serviceAccount.create` | Specifies whether a service account should be created | `true` |
| `serviceAccount.annotations` | Annotations to add to the service account | `{}` |
| `serviceAccount.name` | The name of the service account to use. If not set and `create` is true, a name is generated using the fullname template | `nil` |
| `nodeSelector` | Node selector | `{}` |
| `tolerations` | Tolerations | `[]` |
| `affinity` | Affinity | `{}` |
| `rbac.create` | Specifies whether an RBAC role should be created | `true` |
| `logDebug` | Enable debug logs for development purposes | `false` |
| `devLogs` | Enable formatted logging support for development purposes | `false` |
| `logFilePath` | If non-empty, write log files to this path | `""` |
| `logFileMaxSize` | The maximum size, in megabytes, a log file can grow to. If 0, the maximum file size is unlimited | `1024` |
| `admissionWebhookTimeout` | Timeout in seconds for admission webhooks | `10` |
| `kubeClient.qps` | The QPS for reconcile clients | `400` |
| `kubeClient.burst` | The burst for reconcile clients | `600` |
| `authentication.enabled` | Enable the authentication framework for applications | `false` |
| `authentication.withUser` | Application authentication impersonates the request User (must be `true` for security) | `true` |
| `authentication.defaultUser` | The User that application authentication impersonates if no user is provided or `withUser` is false | `kubevela:vela-core` |
| `authentication.groupPattern` | Application authentication impersonates the request Groups that match this pattern | `kubevela:*` |
| `authorization.definitionValidationEnabled` | Enable definition permission validation for RBAC checks on definitions | `false` |
| `sharding.enabled` | When sharding is enabled, the controller runs in master mode. Refer to https://github.com/kubevela/kubevela/blob/master/design/vela-core/sharding.md for details | `false` |
| `sharding.schedulableShards` | The shards available for scheduling. If empty, dynamic discovery is used | `""` |
| `core.metrics.enabled` | Enable metrics for vela-core | `false` |
| `core.metrics.serviceMonitor.enabled` | Enable the service monitor for metrics | `false` |
| `core.metrics.serviceMonitor.additionalLabels` | Additional labels for the service monitor | `{}` |
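When several of these values need changing at once, a values file is easier to review than a long chain of `--set` flags. A sketch, with illustrative overrides only:

```shell
# Collect overrides in a file and pass it to helm with -f
cat > my-values.yaml <<'EOF'
replicaCount: 2
logDebug: true
kubeClient:
  qps: 800
  burst: 1000
EOF

helm upgrade -n vela-system kubevela kubevela/vela-core -f my-values.yaml
```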
## Uninstallation

### Vela CLI

To uninstall KubeVela, you can simply run the following command with the vela CLI:

```shell
vela uninstall --force
```

### Helm CLI

Notice: you must disable all the addons before uninstallation. Here is a script for convenience:
```shell
#!/bin/sh
addons=$(vela addon list | grep enabled | awk '{print $1}')
fluxcd=false
for addon in $addons; do
    if [ "$addon" = "fluxcd" ]; then
        # Disable fluxcd last, since other addons may depend on it
        fluxcd=true
        continue
    fi
    vela addon disable "$addon"
done
if [ "$fluxcd" = "true" ]; then
    vela addon disable fluxcd
fi
```
Make sure all existing KubeVela resources are deleted before uninstallation:
```shell
kubectl delete applicationrevisions.core.oam.dev --all
kubectl delete applications.core.oam.dev --all
kubectl delete componentdefinitions.core.oam.dev --all
kubectl delete definitionrevisions.core.oam.dev --all
kubectl delete policies.core.oam.dev --all
kubectl delete policydefinitions.core.oam.dev --all
kubectl delete resourcetrackers.core.oam.dev --all
kubectl delete traitdefinitions.core.oam.dev --all
kubectl delete workflows.core.oam.dev --all
kubectl delete workflowstepdefinitions.core.oam.dev --all
kubectl delete workloaddefinitions.core.oam.dev --all
```
To uninstall the KubeVela helm release:
```shell
helm uninstall -n vela-system kubevela
```

This command removes all the Kubernetes resources associated with KubeVela and deletes the chart release.