Kubernetes deployments on new clusters will now have
a separate namespace per project environment, instead
of sharing a single namespace for the project.
Behaviour of existing clusters is unchanged.
All new functionality is controlled by the
:kubernetes_namespace_per_environment feature flag,
which is safe to enable/disable at any time.
Also creates specs
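For reference, the flag can be toggled from a Rails console; `Feature.enable`/`Feature.disable` are GitLab's standard feature-flag helpers, and the flag name comes from this change:

```ruby
# Rails console on the GitLab instance.
# Enabling makes *new* clusters use one namespace per project environment;
# disabling restores the previous single shared namespace for new clusters.
Feature.enable(:kubernetes_namespace_per_environment)
Feature.disable(:kubernetes_namespace_per_environment)
```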
Only allow Helm to be uninstalled if it's the only app
- Remove Tiller leftovers after the reset command
- Fixes specs and offenses
Adds changelog file
Fix reset_command specs
This MR adds a new application setting, `allow_local_requests_from_system_hooks`,
to the network section. Prior to this change, system hooks were allowed to make
local network requests by default; this adds the ability for admins to control that.
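As a rough illustration of how the setting would be consulted before a system hook performs an outbound request (the predicate method below is an assumption derived from the column name, not necessarily what GitLab generates):

```ruby
# Hypothetical guard in the system-hook execution path: only allow requests to
# local network addresses when the new admin setting is enabled.
def local_requests_allowed_for_system_hooks?
  Gitlab::CurrentSettings.allow_local_requests_from_system_hooks?
end
```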
- After uninstalling the Knative Helm chart it's necessary to also
remove some leftover resources so the cluster is left clean
and Knative can be reinstalled.
- Adds knative uninstall disclaimer
- Uninstall ksvc before uninstalling knative
Make the list of Knative and Ingress resources explicit
- To avoid deleting unwanted resources we list exactly which
resources will be deleted, rather than simply deleting any
resource whose name contains "istio" or "knative".
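Illustrative sketch of the explicit-list approach; the resource names below are examples, not the exact list shipped in this change:

```ruby
# kubectl commands run during Knative uninstall cleanup. Each resource is
# named explicitly, so nothing is deleted just because its name happens to
# contain "istio" or "knative".
KNATIVE_LEFTOVER_CLEANUP_COMMANDS = [
  'kubectl delete namespace knative-serving --ignore-not-found',
  'kubectl delete namespace istio-system --ignore-not-found',
  'kubectl delete crd services.serving.knative.dev --ignore-not-found'
].freeze

def knative_cleanup_script
  KNATIVE_LEFTOVER_CLEANUP_COMMANDS.join("\n")
end
```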
The TLS opts were missing from the helm version command, which meant it
was perpetually failing and hence wasting 30s waiting for a
command to succeed that was never going to. This
never actually caused any errors, because the loop will happily
fail 30 times without breaking the overall script, but it was a
waste of installation time, so installing apps should now be ~30s faster.
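A sketch of the wait loop with the TLS flags included; the cert paths and exact loop shape are assumptions modelled on standard `helm version --tls` options:

```ruby
# Shell snippet emitted into the install script: wait for Tiller to answer.
# Without the --tls* flags this check could never succeed against a
# TLS-enabled Tiller, so the loop silently burned all 30 attempts.
def wait_for_tiller_command
  helm_check = 'helm version --tls ' \
               '--tls-ca-cert /data/helm/config/ca.pem ' \
               '--tls-cert /data/helm/config/cert.pem ' \
               '--tls-key /data/helm/config/key.pem'

  # Retry up to 30 times, one second apart, before giving up.
  "for i in $(seq 1 30); do #{helm_check} && break; sleep 1s; echo \"Retrying ($i)...\"; done"
end
```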
GitLab uses a Kubernetes service account to perform deployments. For
serverless deployments to work as expected with externally created
clusters that have their own Knative installations (e.g. via Cloud Run), this
account requires additional permissions in the serving.knative.dev API
group.
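One way to grant that, sketched with the kubeclient gem against the RBAC API group; the role name, verbs and endpoint are assumptions for illustration:

```ruby
require 'kubeclient'

# Hypothetical cluster role giving GitLab's deploy service account access to
# Knative Service objects in the serving.knative.dev API group.
knative_serving_role = Kubeclient::Resource.new(
  metadata: { name: 'gitlab-knative-serving-role' },
  rules: [{
    apiGroups: ['serving.knative.dev'],
    resources: ['services'],
    verbs: %w[get list watch create update patch delete]
  }]
)

rbac_client = Kubeclient::Client.new(
  'https://kubernetes.example.com/apis/rbac.authorization.k8s.io',
  'v1',
  auth_options: { bearer_token: ENV['KUBE_TOKEN'] }
)

rbac_client.create_cluster_role(knative_serving_role)
```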
Both the `install-<app>` and `uninstall-<app>` pods load the
`values-content-configuration-<app>` configmap into the pod
(see `#volume_specification`). This configmap contains the cert
necessary to connect to Tiller. The cert, though, is only valid for 30
minutes.
So this fixes the bug by ensuring the configmap is also
updated when uninstalling.
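Sketch of the fix, assuming kubeclient's core API and the configmap naming above; how GitLab actually builds the resource may differ:

```ruby
# Before creating the uninstall pod, regenerate the values/cert configmap so
# the Tiller client cert it carries is fresh (it is only valid for 30 minutes).
def refresh_config_map_for_uninstall(kubeclient, app_name, namespace, files)
  resource = Kubeclient::Resource.new(
    metadata: { name: "values-content-configuration-#{app_name}", namespace: namespace },
    data: files # includes a newly issued client cert/key pair
  )

  kubeclient.update_config_map(resource)
end
```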
- Creates new route
- Creates new controller action
- Creates call stack:
Clusters::ApplicationsController calls -->
Clusters::Applications::UpdateService calls -->
Clusters::Applications::ScheduleUpdateService calls -->
ClusterUpdateAppWorker calls -->
Clusters::Applications::PatchService calls -->
ClusterWaitForAppInstallationWorker
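A hypothetical simplification of the top of that chain; `find_or_initialize_application`, `application_params` and the service constructor are placeholder names, not GitLab's exact API:

```ruby
# Sketch of the new controller action: look up the app on the cluster and hand
# it to UpdateService, which drives the rest of the chain asynchronously
# (ScheduleUpdateService -> ClusterUpdateAppWorker -> PatchService).
class Clusters::ApplicationsController < ApplicationController
  def update
    application = @cluster.find_or_initialize_application(params[:application])

    Clusters::Applications::UpdateService
      .new(application, current_user, application_params)
      .execute

    head :no_content
  end
end
```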
DRY req params
Adds gcp_cluster:cluster_update_app queue
ScheduleUpdateService is unneeded
Extract common logic to a parent class (UpdateService will need it)
Introduce new UpdateService
Fix rescue class namespace
Fix RuboCop offenses
Adds BaseService for create and update services
Remove request_handler code duplication
Fixes update command
Move update_command to ApplicationCore so all apps can use it
Adds tests for Knative update_command
Adds specs for PatchService
Raise an error if update receives an uninstalled app
Adds update_service spec
Fix RuboCop offense
Use subject in favor of go
Adds update endpoint specs for project namespace
Adds update endpoint specs for group namespace
Use the existing `public_url` validation to block various local URLs. Note
that this validation will allow local URLs if the "Allow requests to the
local network from hooks and services" admin setting is enabled.
Block KubeClient from using local addresses
It will also respect `allow_local_requests_from_hooks_and_services`, so
if that is enabled KubeClient will allow local addresses.
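Roughly, the guard looks like this; this sketch reuses `Gitlab::UrlBlocker` and the existing hooks-and-services setting, and the wrapping method is illustrative:

```ruby
# Validate the Kubernetes API URL before KubeClient connects to it.
# Local and loopback addresses are rejected unless the "Allow requests to the
# local network from hooks and services" admin setting is enabled.
def validate_kubernetes_api_url!(api_url)
  allow_local = Gitlab::CurrentSettings.allow_local_requests_from_hooks_and_services?

  Gitlab::UrlBlocker.validate!(
    api_url,
    allow_localhost: allow_local,
    allow_local_network: allow_local
  )
end
```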
Bump the helm and kubectl versions used in our Kubernetes integration, e.g.
to install apps.
Note I have only bumped kubectl to the latest patch of the v1.11 series,
as GKE clusters are still on 1.10/1.11.
`http_max_redirects` was introduced in kubeclient 4.2.2, so upgrade kubeclient.
The monkey-patch was global so we will have to check that all instances
of Kubeclient::Client are handled.
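With the upgrade, the per-client option can replace the global monkey-patch, along these lines (a sketch; the endpoint and auth are placeholders):

```ruby
require 'kubeclient'

# kubeclient >= 4.2.2 accepts http_max_redirects, so each client can refuse to
# follow redirects instead of relying on a global RestClient monkey-patch.
client = Kubeclient::Client.new(
  'https://kubernetes.example.com/api',
  'v1',
  auth_options: { bearer_token: ENV['KUBE_TOKEN'] },
  http_max_redirects: 0 # never follow redirects from the Kubernetes API
)
```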
Spec all methods of KubeClient
This should provide better confidence that we are indeed disallowing
redirection in all cases
If the service fails midway, we should be able to re-run it. So, detect
the presence of any previously created Kubernetes resource and update or
create accordingly.
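The update-or-create pattern, sketched with kubeclient; the service account resource is just an example of one of the resources the service creates:

```ruby
# Make resource creation idempotent: if the resource already exists from a
# previous, partially failed run, update it in place; otherwise create it.
def create_or_update_service_account(kubeclient, resource)
  kubeclient.get_service_account(resource.metadata.name, resource.metadata.namespace)
  kubeclient.update_service_account(resource)
rescue Kubeclient::ResourceNotFoundError
  kubeclient.create_service_account(resource)
end
```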
Fix specs accordingly. In the case of finalize_creation_service_spec.rb,
I decided to stub out the async worker rather than maintaining
individual stubs for various kubeclient calls for that worker.
Also add test cases for group clusters
We want to keep failed install pods around so that it is easier to debug
why a failure occurred. With this change we also need to ensure that we
remove a previous pod with the same name before installing, so that
re-install does not fail.
Another change here is that we no longer need to catch errors from
delete_pod! in CheckInstallationProgressService as we now catch the
ResourceNotFoundError in Helm::Api. The catch statement in
CheckInstallationProgressService was also probably too broad before and
should have been narrowed down simply to ResourceNotFoundError.
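Sketch of the narrowed handling inside a thin Helm::Api-style wrapper; the method and attribute names are illustrative:

```ruby
# Before creating the install pod, remove any pod left over from a previous
# failed attempt with the same name; a missing pod is the normal first-install
# case and is not an error.
def delete_pod!(kubeclient, pod_name, namespace)
  kubeclient.delete_pod(pod_name, namespace)
rescue Kubeclient::ResourceNotFoundError
  # Nothing to clean up.
end

def install(kubeclient, command)
  delete_pod!(kubeclient, command.pod_name, command.namespace)
  kubeclient.create_pod(command.pod_resource)
end
```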
When an application install fails and the user retries the install, the
configmap for the application will already exist. If so, we simply
update it instead of creating it.
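One way to express that fallback with kubeclient (a sketch; GitLab's code may detect the existing configmap differently):

```ruby
# On retry after a failed install the configmap already exists, so fall back
# to an update when the create is rejected with 409 Conflict.
def create_or_update_config_map(kubeclient, resource)
  kubeclient.create_config_map(resource)
rescue Kubeclient::HttpError => e
  raise unless e.error_code == 409

  kubeclient.update_config_map(resource)
end
```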
This will reduce dependencies and failure points during installation. It
will also reduce the security risk of untrusted dependencies being able
to affect all our users.
This removes the ability to pass in a different version. We can instead
create a new entry in the SUPPORTED_API_GROUPS hash for a different
version if need be.
We should have access to #core_client, #rbac_client,
and #extensions_client without having to pass in an awkward array.
Also change api_version to default_api_version, which allows us to use a
different version for an individual client. There is a special case for
apis/extensions, which only goes up to v1beta1.
Makes #hashed_client private
Removes the #clients and #discover! methods, which are unused
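The shape of that wrapper, sketched below; the class name, group list and option handling are illustrative rather than GitLab's exact implementation:

```ruby
require 'kubeclient'

# One kubeclient per supported API group, each pinned to a default version.
# apis/extensions is the special case that only goes up to v1beta1.
class KubeClientWrapper
  SUPPORTED_API_GROUPS = {
    core: { group: 'api', version: 'v1' },
    rbac: { group: 'apis/rbac.authorization.k8s.io', version: 'v1' },
    extensions: { group: 'apis/extensions', version: 'v1beta1' }
  }.freeze

  def initialize(api_prefix, **kubeclient_options)
    @api_prefix = api_prefix
    @kubeclient_options = kubeclient_options
  end

  # Defines #core_client, #rbac_client and #extensions_client.
  SUPPORTED_API_GROUPS.each do |name, config|
    define_method("#{name}_client") do
      hashed_client[name] ||= Kubeclient::Client.new(
        "#{@api_prefix}/#{config[:group]}",
        config[:version],
        **@kubeclient_options
      )
    end
  end

  private

  # Memoized hash of instantiated clients, keyed by API group.
  def hashed_client
    @hashed_client ||= {}
  end
end
```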
This also addresses the asynchronous nature of the automatic creation of
default service tokens for service accounts, and makes explicit which
service account token we always use.
Create the cluster role binding only if the provider has legacy_abac
disabled.
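Sketched with kubeclient; the token secret, binding names, and `provider.legacy_abac` attribute access are illustrative:

```ruby
# Create the service account token explicitly instead of waiting for the
# asynchronously generated default token, and only create the cluster role
# binding when the GKE provider has legacy ABAC disabled (i.e. RBAC is on).
def configure_service_account(core_client, rbac_client, provider)
  token_secret = Kubeclient::Resource.new(
    metadata: {
      name: 'gitlab-token',
      namespace: 'default',
      annotations: { 'kubernetes.io/service-account.name' => 'gitlab' }
    },
    type: 'kubernetes.io/service-account-token'
  )
  core_client.create_secret(token_secret)

  return if provider.legacy_abac

  role_binding = Kubeclient::Resource.new(
    metadata: { name: 'gitlab-admin' },
    roleRef: { apiGroup: 'rbac.authorization.k8s.io', kind: 'ClusterRole', name: 'cluster-admin' },
    subjects: [{ kind: 'ServiceAccount', name: 'gitlab', namespace: 'default' }]
  )
  rbac_client.create_cluster_role_binding(role_binding)
end
```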