- Show Knative install button on group/instance cluster pages
- Allow Knative to be installed on group/instance clusters
- Add feature specs for installing applications on group/instance
clusters
- Add changelog entry
- Update docs to reflect that Knative can now be installed on
group-level and instance-level clusters
Kubernetes deployments on new clusters will now have
a separate namespace per project environment, instead
of sharing a single namespace for the project.
Behaviour of existing clusters is unchanged.
All new functionality is controlled by the
:kubernetes_namespace_per_environment feature flag,
which is safe to enable/disable at any time.
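As a rough sketch of the flag-guarded behaviour (the method and the
naming scheme here are illustrative, not the exact implementation):

    # Sketch only: pick a namespace per environment when the flag is on.
    def namespace_for(project, environment)
      if Feature.enabled?(:kubernetes_namespace_per_environment)
        "#{project.path}-#{project.id}-#{environment.slug}"
      else
        "#{project.path}-#{project.id}"
      end
    end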
Whenever we select a namespace to use for a deployment or to
query a cluster, we want to exclude Kubernetes namespace records
that don't have a token set, as they will not have the required
permissions. However, when configuring clusters, we want to
use the original namespace record even if it has no
token, as a namespace has to be unique on a cluster.
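For illustration, the two code paths might be separated with a scope
along these lines (a sketch; the scope name is an assumption):

    # Sketch: exclude namespace records that have no token set.
    scope :has_service_account_token, -> { where.not(encrypted_service_account_token: nil) }

    # Deployments and cluster queries:
    #   cluster.kubernetes_namespaces.has_service_account_token
    # Cluster configuration (uniqueness matters, token does not):
    #   cluster.kubernetes_namespaces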
Remove Kn services cache from Clusters::Application::Knative
Knative functions can exist even if the user did not install Knative via
GitLab managed apps.
-> Move responsibility of finding services into the Cluster
-> Responsibility now lives in Clusters::Cluster::KnativeServiceFinder
-> Projects::Serverless::FunctionsFinder now depends solely on a
cluster to find the Kn services.
-> Detect Knative by resource presence instead of service presence
-> Mock knative_installed response temporarily for frontend to develop
Display loader while `installed === 'checking'`
Added frontend work to determine if Knative is installed
Memoize with_reactive_cache(*args, &block) to avoid race conditions
When calling with_reactive_cache more than once, it's possible that the
second call will already have the value populated. Therefore, in cases
where we need the sequential calls to have consistent results, we would
otherwise hit a race condition.
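A minimal sketch of the memoization, assuming a wrapper around the
ReactiveCaching concern:

    # Sketch: remember the first result per argument list so repeated
    # calls in the same request return a consistent value, even if the
    # underlying cache gets populated between calls.
    def with_reactive_cache_memoized(*args, &block)
      @memoized_results ||= {}
      return @memoized_results[args] if @memoized_results.key?(args)

      @memoized_results[args] = with_reactive_cache(*args, &block)
    end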
Check knative installation via Knative resource presence
Only load pods if Knative is discovered
Always return a response in FunctionsController#index
- Always indicate if Knative is installed, not installed or checking
- Always indicate the partial response for functions. The final response
is guaranteed once knative_installed is either true or false (see the
sketch below).
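A sketch of the resulting response shape (the field and helper names
here are illustrative):

    # Sketch: the index action always renders, even while checking.
    def index
      render json: {
        knative_installed: knative_installed, # true | false | 'checking'
        functions: functions                  # partial while 'checking'
      }
    end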
Adds specs for Clusters::Cluster#knative_services_finder
Fix method name when called in specs
Add an explicit check for functions
Added an explicit check to see if there are any functions available
Fix Serverless feature spec
- we no longer detect a Knative installation via the database,
but via the Knative resource
Display error message for request timeouts
Display an error message if the request times out
Adds feature specs for when functions exist
Remove a hardcoded flag that was only used for testing
Add ability to partially load functions
Added the ability to partially load functions on the frontend
Add frontend unit tests
Added tests for the new frontend additions
Generate new translations
Generated new frontend translations
Address review comments
Cleaned up the frontend unit test.
Added computed prop for `isInstalled`.
Move string to constant
Simplify nil to array conversion
Put knative_installed states in a frozen hash for better readability
Pluralize list of Knative states
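The frozen hash of knative_installed states might look something like
this (a sketch; the exact keys and values are assumptions):

    # Sketch: the set of knative_installed states the frontend can see.
    KNATIVE_INSTALLED_STATES = {
      checking: 'checking',
      installed: true,
      not_installed: false
    }.freeze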
Query services and pods, filtering by name
This way we don't need to filter the results in memory.
Also, the data we get from the network is much smaller.
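With kubeclient, the name filter can be pushed to the API server,
roughly like this (a sketch; the surrounding code is assumed):

    # Sketch: let Kubernetes filter by namespace and name, instead of
    # fetching everything and filtering in memory.
    client.get_services(
      namespace: namespace,
      field_selector: "metadata.name=#{name}"
    )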
Simplify cache_key and fix bug
- Simplifies the cache_key by removing the duplicated namespace
- Fixes a bug with reactive_cache memoization
When Kubernetes clusters were originally built they could only
exist at the project level, and so there was logic included
that assumed there would only ever be a single Kubernetes
namespace per cluster. We now support clusters at the group
and instance level, which allows multiple namespaces.
This change consolidates various project-specific fallbacks to
generate namespaces, and hands all responsibility to the
Clusters::KubernetesNamespace model. There is now no concept of
a single namespace for a Clusters::Platforms::Kubernetes; to
retrieve a namespace a project must now be supplied in all cases.
This simplifies upcoming work to use a separate Kubernetes
namespace per project environment (instead of a namespace
per project).
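After this change, retrieving a namespace looks conceptually like the
following (a sketch; the helper names are assumptions):

    # Sketch: a project must always be supplied; the persisted
    # Clusters::KubernetesNamespace record wins over any fallback.
    def kubernetes_namespace_for(project)
      persisted = cluster.kubernetes_namespaces.find_by(project: project)
      persisted&.namespace || build_default_namespace(project)
    end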
There are two cluster hierarchies: one for the deployment platform and
one for controllers. The main difference is that deployment platforms do
not check user permissions and only return the first match.
When this option is enabled, GitLab will create namespaces and service
accounts as usual. When disabled, GitLab won't create any
project-specific Kubernetes resources.
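Conceptually the new option is just a guard in the resource-creating
services (illustrative):

    # Sketch: skip namespace/service-account creation for unmanaged clusters.
    return unless cluster.managed?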
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/56557
- DRY up duplicated code in create_service.rb and update_service.rb
- Remove the known-list-of-applications responsibility from services
- Refactor the complex builders->builder call from base_service.rb
- Fix multiple typos in the Auto DevOps script
- Add an alias to Clusters::Cluster#domain as base_domain, so it's more
descriptive
- Remove unnecessary memoization in QA specs
- Change the migration to a post-deployment migration to deal better
with traffic on big instances (like gitlab.com)
Moves the domain field onto the cluster show page, removing it from
the Auto DevOps settings. Also injects the new environment variable
KUBE_INGRESS_BASE_DOMAIN into kubernetes#predefined_variables.
Adds a migration to move the information from ProjectAutoDevops#domain
to Clusters::Cluster#domain, as well as the necessary modifications to
QA selectors.
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/52363
This also means we need to apply the `current_scope`, otherwise this
method will return all clusters associated with the groups, regardless
of any scopes applied to this method.
- Rename ordered_group_clusters_for_project ->
ancestor_clusters_for_clusterable
- Improve the name of the order option. It makes much more sense to have
`hierarchy_order: :asc` and `hierarchy_order: :desc`
- Allow ancestor_clusters_for_clusterable for groups
- Re-use code already present in Project
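Put together, the method might look close to this (a sketch based on
the description above; the ancestors helper is an assumption):

    # Sketch: walk ancestor groups in hierarchy order, keeping whatever
    # scope the caller applied (e.g. .enabled) via current_scope.
    def self.ancestor_clusters_for_clusterable(clusterable, hierarchy_order: :asc)
      groups = clusterable.ancestors_upto(hierarchy_order: hierarchy_order)
                          .eager_load(:clusters)
      groups = groups.merge(current_scope) if current_scope

      groups.flat_map(&:clusters)
    end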
AFAIK the only relevant place is Projects::CreateService; this gets
called when a user creates a new project, forks a project, or does
those things via the API.
Also create a k8s namespace for the new group hierarchy
when transferring a project between groups.
Uses the new Refresh service to create k8s namespaces
- Ensure we use Cluster#cluster_project
If a project has multiple clusters (EE), using Project#cluster_project
is not guaranteed to return the cluster_project for this cluster, so
switch to using Cluster#cluster_project - at this stage a cluster can
only have one cluster_project.
Also, remove the rescue so that Sidekiq can retry
Look for matching clusters starting from the closest ancestor, then go
up the ancestor tree.
Then use Ruby to get the clusters for each group in order. Not that
efficient, considering we will be doing up to
`NUMBER_OF_ANCESTORS_ALLOWED` queries, but it's a finite number
Explicitly order query by depth
This allows us to control ordering explicitly and also to reverse the
order which is useful to allow us to be consistent with
Clusters::Cluster.on_environment (EE) which does reverse ordering.
Puts querying of group clusters behind a feature flag, so that if we
have issues with performance we can easily disable it
Otherwise adding to `groups` will cause validation (in EE) based on
`group` to fail.
This is consistent with how Clusters::Cluster#first_project works. The
alternative is to switch the validation to use `groups` as well.
Even though we currently only should have one group for a cluster, we
allow the flexibility to associate to other groups in the future.
This also matches the runner <=> groups association.
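The association might be declared along these lines (a sketch):

    # Sketch: many-to-many, even though only one group is expected today.
    has_many :cluster_groups, class_name: 'Clusters::Group'
    has_many :groups, through: :cluster_groups, class_name: '::Group'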
- Adds Cluster#first_group, aliased to Cluster#group. For the
conceivable future, a cluster will have at most one group.
- Prevent mixing of group and project clusters. If a cluster is project
type, it should only have projects assigned; similarly with groups.
- Default cluster_type to :project_type. As it's a very small table, we
can set the default and null: false in one release.
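The enum and default could look roughly like this (a sketch; the
integer values are assumptions):

    # Sketch: cluster_type enum with project_type as the default.
    enum cluster_type: {
      instance_type: 1,
      group_type: 2,
      project_type: 3
    }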
This reverts the addition of the "goldiloader" Gem and all use of it.
While this Gem is very promising, it's causing a variety of problems on
GitLab.com due to it eager-loading too much data in places where we
don't expect it or can't handle it. At least for the time being this
means we have to go back to manually fixing N+1 query problems, but at
least those should not cause a negative impact on availability.
Goldiloader (https://github.com/salsify/goldiloader) can eager load
associations automatically. This removes the need for adding "includes"
calls in a variety of different places. This also comes with the added
benefit of not having to eager load data if it's not used.