Also, play manual jobs once their dependency jobs are done, instead of
polling for the dependency jobs to finish.
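For reference, playing a manual job through the API is a single call (a
sketch; the token and IDs are placeholders):

    curl --request POST \
         --header "PRIVATE-TOKEN: $API_TOKEN" \
         "https://gitlab.com/api/v4/projects/$PROJECT_ID/jobs/$JOB_ID/play"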
Signed-off-by: Rémy Coutable <remy@rymai.me>
It seems the deploy function causes the job to fail if the deploy doesn't
succeed. That wasn't the intent, since we want to curl the Review App after
the deploy finishes (even if it failed), because sometimes the Review App
just takes a while to be ready.
This change wraps the Review App deployment with "set +e"/"set -e" to
ensure that the job doesn't fail right away if the deploy fails.
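A minimal sketch of the wrapping, assuming a `deploy` function defined
earlier in the script:

    set +e           # don't abort the job if the next command fails
    deploy
    deploy_exit_code=$?
    set -e           # restore fail-fast for the rest of the script

    # the Review App can now be curled even when $deploy_exit_code is non-zero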
Signed-off-by: Rémy Coutable <remy@rymai.me>
Instead of inserting a row after each example to an external database,
we save the CI profiling reports into the `rspec_profiling` directory
and insert the data in the `update-tests-metadata` CI stage. This should
make each spec run faster and also reduce the number of PostgreSQL
connections needed by concurrent CI builds.
`scripts/insert-rspec-profiling-data` also inserts one file at a time
via the PostgreSQL COPY command for faster inserts. The one side effect
is that the `created_at` and `updated_at` timestamps aren't available
since they aren't generated in the CSV.
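As an illustration, inserting one report could look roughly like this (the
connection URL, table, and file names are placeholders):

    psql "$DATABASE_URL" \
         -c "\copy profiles FROM 'rspec_profiling/report.csv' WITH CSV"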
Closes https://gitlab.com/gitlab-org/gitlab-ee/issues/10154
This brings back some of the changes in
https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/20339.
For users running Gitaly on top of NFS, accessing the Git data directly
via Rugged is more performant than going through Gitaly. This merge request
introduces the feature flag `rugged_find_commit` to activate Rugged paths.
There are also Rake tasks `gitlab:features:enable_rugged` and
`gitlab:features:disable_rugged` to enable/disable these feature
flags altogether.
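For example, to flip all of them at once:

    bundle exec rake gitlab:features:enable_rugged
    bundle exec rake gitlab:features:disable_rugged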
Part of four Rugged changes identified in
https://gitlab.com/gitlab-org/gitlab-ce/issues/57317.
* This will upload the SHA of the Docker image containing assets to
assist with building specific SHAs in the future (see the sketch below).
* Addresses: gitlab-org/release/framework#51
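A minimal sketch of the idea, with hypothetical image names (the real job
may differ):

    # hypothetical: push the assets image tagged with the commit SHA so a
    # future build of that SHA can fetch the matching assets image
    docker tag  "$ASSETS_IMAGE" "$ASSETS_IMAGE:$CI_COMMIT_SHA"
    docker push "$ASSETS_IMAGE:$CI_COMMIT_SHA"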
The function that retrieves the dependable job is pretty basic: it
retrieves the first job found with the matching name. However, that job
can have failed and then been retried successfully. In that case, we would
exit the depending job even though the dependable job actually succeeded
(on the second attempt). Let's simplify things, be optimistic, and continue
with the depending job even if the dependable job fails.
That reverts to the original behavior.
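For context, the lookup is roughly equivalent to taking the first match
from the pipeline's jobs API (a sketch; the token and IDs are
placeholders):

    curl --header "PRIVATE-TOKEN: $API_TOKEN" \
         "https://gitlab.com/api/v4/projects/$PROJECT_ID/pipelines/$PIPELINE_ID/jobs" |
      jq -r --arg name "$JOB_NAME" '[.[] | select(.name == $name)][0].status'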
Signed-off-by: Rémy Coutable <remy@rymai.me>
If a script is waiting for a job to be done and that job fails,
exit with an error status so that the script doesn't continue
with a prerequisite in an invalid state.
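A minimal sketch of the change, assuming a hypothetical `get_job_status`
helper that returns the awaited job's status:

    job_status="$(get_job_status)"  # hypothetical helper

    if [ "$job_status" = "failed" ]; then
      echo "The awaited job failed; aborting." >&2
      exit 1
    fi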
This sets up GitLab CI to automatically push CE master changes into EE
master, or revert them if the changes cause merge conflicts. The CI
configuration contains a single job to do this: `merge:master`. This job
is executed for every push to master, and periodically using a CI
schedule.
The periodic job is necessary because incremental jobs may not be able
to revert commits if newly added commits depend on them. By re-running
the job periodically (covering all changes within a large enough time
frame), we can ensure that such commits are also reverted (if they still
conflict at that time).
The job runs in its own "merge" stage, _after_ the build and prepare
stages, but _before_ running the tests. This ensures that randomly
failing tests won't prevent code from being merged into EE. Running the
stage after the "prepare" stage reduces the chances of the job reverting
CE changes just because it ran before a corresponding EE MR was merged
into EE master.
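A rough sketch of the merge-or-revert logic (the remote names and the
conflict handling are assumptions, not the actual script):

    git fetch ce master
    if ! git merge --no-edit ce/master; then
      git merge --abort
      # hypothetical: revert the conflicting CE commits, then retry the merge
    fi
    git push ee master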
The `gitlab:assets:compile` job isn't run for the QA branches, thus
there's no Docker image corresponding to these branches in the registry.
By overriding `CI_COMMIT_REF_SLUG` to `master` for QA branches, the
`fetch-assets` job in the `omnibus-gitlab` pipeline will pull the
`master` assets Docker image.
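A minimal sketch of the override (the branch pattern is an assumption):

    # hypothetical: QA branches get the master assets image instead
    if [[ "$CI_COMMIT_REF_NAME" =~ -qa$ ]]; then
      assets_ref_slug="master"
    else
      assets_ref_slug="$CI_COMMIT_REF_SLUG"
    fi
    # pass "variables[CI_COMMIT_REF_SLUG]=$assets_ref_slug" to the trigger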
Signed-off-by: Rémy Coutable <remy@rymai.me>
CI jobs will be triggered with both Rails 4 and Rails 5 to make sure we
keep backward compatibility in case it turns out we have to switch back
to Rails 4.
Rails 4 jobs are not allowed to fail for now; these jobs will be removed
in a follow-up MR next cycle.
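A sketch of how a Rails 4 job could run the suite (the `BUNDLE_GEMFILE`
value is an assumption, not necessarily what the CI config uses):

    # hypothetical: point Bundler at a Rails 4 Gemfile for these jobs
    export BUNDLE_GEMFILE=Gemfile.rails4
    bundle install --jobs "$(nproc)"
    bundle exec rspec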
In `deploy`, if the previous deployment failed, we delete/clean up all
the objects related to the release, including secrets. The problem is
that if we create the root password before that, it will then be
recreated during the deploy with a random value!
By creating the secret just before actually deploying a new release, we
ensure that it won't be overridden.
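A minimal outline of the fixed ordering (the helper names are
hypothetical):

    # inside deploy():
    if previous_deployment_failed; then
      delete_release_objects       # also deletes the release's secrets
    fi
    create_root_password_secret   # created after cleanup, so it survives
    helm upgrade --install "$RELEASE_NAME" "$CHART_PATH"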
Signed-off-by: Rémy Coutable <remy@rymai.me>
* Uses the same supporting code as used in EE
* Includes automated cleanup
* Installs the external-dns Helm chart to the review apps cluster if it
isn't already installed (see the sketch below)
* Adds variables REVIEW_APPS_AWS_SECRET_KEY and
REVIEW_APPS_AWS_ACCESS_KEY
* review-apps-ce uses a different cipher
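A minimal sketch of the idempotent external-dns install, assuming Helm
2-style commands (the chart values are assumptions):

    if ! helm status external-dns >/dev/null 2>&1; then
      helm install stable/external-dns --name external-dns \
        --set provider="aws" \
        --set aws.accessKey="$REVIEW_APPS_AWS_ACCESS_KEY" \
        --set aws.secretKey="$REVIEW_APPS_AWS_SECRET_KEY"
    fi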
This is needed because `GITLAB_VERSION` has a special meaning in
`omnibus-gitlab` triggers: this is the GitLab version to build.
The problem is that `omnibus-gitlab` also has triggers to run QA for an
`omnibus-gitlab` commit, and if we used `GITLAB_VERSION` in that case,
the comment would be posted on the GitLab CE/EE commit (stored in
`GITLAB_VERSION`), which wouldn't make any sense.
Thus we need `TOP_UPSTREAM_SOURCE_SHA` to represent the commit on
which we want to leave a comment.
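For reference, a trigger call carrying both variables might look like this
(the project ID and token are placeholders):

    curl --request POST \
         --form "token=$TRIGGER_TOKEN" \
         --form "ref=master" \
         --form "variables[GITLAB_VERSION]=$CI_COMMIT_SHA" \
         --form "variables[TOP_UPSTREAM_SOURCE_SHA]=$CI_COMMIT_SHA" \
         "https://gitlab.com/api/v4/projects/$OMNIBUS_PROJECT_ID/trigger/pipeline"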
Signed-off-by: Rémy Coutable <remy@rymai.me>
Clean up code, and refactor tests that still use Rugged. After this,
there should be no Rugged code that accesses the instance's repositories
in non-test environments. There is still some Rugged code for other
tasks, like the repository import task, but since it doesn't access any
repository storage path it can stay.
- Stop review app's environment after 2 days
- Delete review app's environment after 3 days
- Delete Helm release after 4 days
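A minimal sketch of the resulting policy (the helper names are
hypothetical):

    age_days="$(days_since_last_deploy "$env")"   # hypothetical helper

    if   [ "$age_days" -ge 4 ]; then helm delete --purge "$release"
    elif [ "$age_days" -ge 3 ]; then delete_environment "$env"
    elif [ "$age_days" -ge 2 ]; then stop_environment "$env"
    fi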
Signed-off-by: Rémy Coutable <remy@rymai.me>