This also means we need to apply `current_scope`, otherwise this
method will return all clusters associated with the groups regardless of
any scopes chained onto the call; a sketch follows the list below.
- Rename ordered_group_clusters_for_project ->
ancestor_clusters_for_clusterable
- Improve the name of the order option; `hierarchy_order: :asc` and
  `hierarchy_order: :desc` make much more sense
- Allow ancestor_clusters_for_clusterable to accept a group as the clusterable
- Re-use code already present in Project
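A minimal sketch of what the renamed method could look like, assuming a `Group#clusters` association and an `ancestors_upto`-style helper that returns the ancestor groups ordered by depth (both assumptions here, not necessarily the exact API):

```ruby
module Clusters
  class Cluster < ApplicationRecord
    # Sketch only. Merging `current_scope` keeps any scopes chained onto the
    # call (e.g. `.enabled`) applied, instead of returning every cluster
    # associated with the ancestor groups.
    def self.ancestor_clusters_for_clusterable(clusterable, hierarchy_order: :asc)
      hierarchy_groups = clusterable.ancestors_upto(hierarchy_order: hierarchy_order)
                                    .eager_load(:clusters)
      hierarchy_groups = hierarchy_groups.merge(current_scope) if current_scope

      hierarchy_groups.flat_map(&:clusters)
    end
  end
end
```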
AFAIK the only relevant place is Projects::CreateService; this gets
called when a user creates a new project, forks a project, or does
either of those via the API.
Also create a k8s namespace for the new group hierarchy
when transferring a project between groups.
Uses the new Refresh service to create the k8s namespaces.
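Roughly, the call from the create/transfer paths would look like this; the service and method names below are based on the Refresh service mentioned above but should be read as illustrative rather than the exact API:

```ruby
# Illustrative: after the project has been created, forked, or moved under a
# new group, ask the refresh service to create any missing Kubernetes
# namespaces for the group-level clusters in the project's ancestor chain.
Clusters::RefreshService.create_or_update_namespaces_for_project(project)
```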
- Ensure we use Cluster#cluster_project
If a project has multiple clusters (EE), Project#cluster_project is not
guaranteed to return the cluster_project for this cluster, so switch to
using Cluster#cluster_project; at this stage a cluster can only have one
cluster_project.
Also, remove the rescue so that Sidekiq can retry.
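A hedged sketch of the worker shape this implies; the worker name and the namespace-creation call are placeholders, not the actual code:

```ruby
class ClusterConfigureWorker # illustrative name
  include ApplicationWorker

  def perform(cluster_id)
    cluster = Clusters::Cluster.find_by_id(cluster_id)
    return unless cluster

    # Cluster#cluster_project rather than Project#cluster_project: with
    # multiple clusters (EE) the latter may belong to a different cluster,
    # and a cluster currently has at most one cluster_project.
    cluster_project = cluster.cluster_project
    return unless cluster_project

    # Placeholder for the namespace-creation service call. Note there is no
    # rescue around it: if it raises, the job fails and Sidekiq retries it.
    create_kubernetes_namespace_for(cluster, cluster_project)
  end
end
```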
Look for matching clusters starting from the closest ancestor, then go
up the ancestor tree.
Then use Ruby to get the clusters for each group in order. Not that
efficient, considering we will be doing up to `NUMBER_OF_ANCESTORS_ALLOWED`
queries, but it's a finite number.
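In rough Ruby terms the lookup might look like the sketch below; the `ancestors` ordering, the `enabled` scope and the constant's value are assumptions:

```ruby
NUMBER_OF_ANCESTORS_ALLOWED = 100 # illustrative value

# One query per ancestor group, bounded by NUMBER_OF_ANCESTORS_ALLOWED.
# Groups are visited closest-first, so the nearest matching cluster wins.
def first_group_cluster_for(project)
  return unless project.group

  groups = [project.group, *project.group.ancestors]

  groups.first(NUMBER_OF_ANCESTORS_ALLOWED).each do |group|
    cluster = group.clusters.enabled.first
    return cluster if cluster
  end

  nil
end
```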
Explicitly order the query by depth.
This allows us to control the ordering explicitly and also to reverse
it, which is useful for staying consistent with
Clusters::Cluster.on_environment (EE), which uses reverse ordering.
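For illustration, assuming the method sketched earlier, the two orderings would be requested like this:

```ruby
# Closest ancestor group first:
Clusters::Cluster.ancestor_clusters_for_clusterable(project, hierarchy_order: :asc)

# Root group first, i.e. the reverse order used by Clusters::Cluster.on_environment (EE):
Clusters::Cluster.ancestor_clusters_for_clusterable(project, hierarchy_order: :desc)
```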
Put querying of group clusters behind a feature flag, so that if we run
into performance issues we can easily disable it.
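Something along these lines, where the flag name is an assumption:

```ruby
# Skip the ancestor-group lookup entirely when the flag is disabled, so only
# project-level clusters are considered.
def clusters_for(project)
  return project.clusters.to_a unless Feature.enabled?(:group_clusters, project)

  project.clusters.to_a + Clusters::Cluster.ancestor_clusters_for_clusterable(project)
end
```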
We do not want group-level clusters to fall back to the old behaviour
for project-level clusters. So instead, in the group-level cluster case,
we will not return any KUBE_TOKEN if we cannot find a suitable
kubernetes_namespace for the project.
Add test cases to assert the above.
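Sketched out, the token lookup ends up shaped roughly like this; the association and attribute names are assumptions:

```ruby
# For a group-level cluster, only hand out KUBE_TOKEN when a matching
# kubernetes_namespace (and hence its service account token) exists for the
# project; returning nil means no KUBE_TOKEN variable is added at all.
def kube_token_for(cluster, project)
  namespace = cluster.kubernetes_namespaces.find_by(project: project)

  namespace&.service_account_token
end
```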
Having an invalid KUBECONFIG without a token in it is not helpful. This
only became possible recently, now that we are creating a separate
namespace and service account (and hence token) to send to the runners.
It led to somewhat surprising results when troubleshooting
https://gitlab.com/gitlab-org/gitlab-ce/issues/53879, as I found that
KUBECONFIG was still being passed but KUBE_TOKEN was not. These two
really should be linked.
Furthermore, now that we are also using the [presence of KUBECONFIG to
decide whether or not to run build steps in Auto
DevOps](294d15be3e/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml (L164)),
I think it makes even more sense to ensure that KUBECONFIG is a complete
config if it is passed to a job.
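In sketch form, KUBECONFIG would only be exposed when the complete config (including the token) can be built, so the two variables stay linked; `kube_token_for` and `build_kubeconfig` are placeholders:

```ruby
# Either both KUBE_TOKEN and a complete KUBECONFIG are passed to the job, or
# neither is: without the namespace-scoped token there is nothing useful to
# put in the config.
def kubernetes_variables(cluster, project)
  token = kube_token_for(cluster, project)
  return [] unless token

  [
    { key: 'KUBE_TOKEN', value: token, public: false },
    { key: 'KUBECONFIG', value: build_kubeconfig(cluster, token), public: false }
  ]
end
```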