In 11.8, we added a fix for the SearchFilesByContent RPC in Gitaly to
send back the response in chunks. However, we kept the old code path in
place for backwards compatibility. Now that the change is fully
deployed, we can remove that old code path.
This brings back some of the changes in
https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/20339.
For users running Gitaly on top of NFS, accessing the Git data directly
via Rugged is more performant than going through Gitaly. This merge
request introduces the feature flag `rugged_find_commit` to activate
Rugged paths.
There are also Rake tasks `gitlab:features:enable_rugged` and
`gitlab:features:disable_rugged` to enable/disable these feature
flags altogether.
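For illustration, a minimal sketch of what such a task could look like,
using the standard `Feature` API; the task structure and flag list here
are assumptions, not the actual implementation:

```ruby
# Hypothetical sketch only; the real task may enable more flags.
namespace :gitlab do
  namespace :features do
    desc 'Enable all Rugged feature flags'
    task enable_rugged: :environment do
      [:rugged_find_commit].each { |flag| Feature.enable(flag) }
    end
  end
end
```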
Part of four Rugged changes identified in
https://gitlab.com/gitlab-org/gitlab-ce/issues/57317.
Adds the groundwork for writing the merge result between the source and
target branches of an MR into the merge ref
refs/merge-requests/:iid/merge, without further side effects such as
mailing, MR updates, and target branch changes.
Updates the Gitaly proto to 1.7.0, modifies the search files Gitaly
client call to use the new chunked_response flag in the RPC request, and
stitches the responses together.
Maintains backwards compatibility with older Gitaly servers.
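As an illustration of the stitching, a hedged sketch; the message and
field names (`match_data`, `end_of_match`) follow the chunked-response
pattern described above but should be treated as assumptions:

```ruby
# Sketch: concatenate chunked SearchFilesByContent messages until a
# chunk marks the end of a match, then start the next match.
def search_files_by_content(ref, query)
  request = Gitaly::SearchFilesByContentRequest.new(
    ref: ref, query: query, chunked_response: true
  )

  matches = []
  current = +''
  gitaly_stub.search_files_by_content(request).each do |message|
    current << message.match_data
    if message.end_of_match
      matches << current
      current = +''
    end
  end
  matches
end
```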
This commit, introduced in https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/23812,
fixes a problem creating and displaying image diff notes when the image
is stored in LFS. The main problem was that `Gitlab::Diff::File` was
returning an invalid value from `text?` for these kinds of files.
It also fixes a rendering problem with other LFS files, like text ones.
The LFS pointer shouldn't be shown when LFS is enabled for the project,
but it was.
When the BFG object map file is in object storage (i.e., uploads in
general are placed into object storage), we get an instance of the
Gitlab::HttpIO class. This doesn't behave as expected when you try to
read past EOF, so we need to check for this condition explicitly to
avoid ending up in a tight loop around io.read.
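A minimal sketch of such a guard, with illustrative names; the chunk
size and helper are assumptions:

```ruby
# Stop explicitly at EOF instead of relying on read() past EOF, which
# Gitlab::HttpIO doesn't handle as expected.
def each_object_map_chunk(io, chunk_size = 128 * 1024)
  until io.eof?
    chunk = io.read(chunk_size)
    break if chunk.nil? || chunk.empty?

    yield chunk
  end
end
```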
When a project is forked, the new repository used to be a deep copy of
everything stored on disk, by leveraging `git clone`. This works well
and makes isolation between repositories easy. However, at the start the
clone is 100% identical to the origin repository, and in the case of the
objects in the object directory, this is almost always going to be a lot
of duplication.
Object pools are a way to create a third repository that essentially
only exists for its 'objects' subdirectory. This third repository's
object directory will be set as the alternate location for objects. This
means that when an object is missing in the local repository, Git will
look in this other location: the object pool repository.
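For illustration, this is roughly what the alternates mechanism looks
like on disk; paths here are hypothetical:

```ruby
require 'fileutils'

fork_objects = File.join('/repositories/fork.git', 'objects')
pool_objects = File.join('/repositories/pool.git', 'objects')

# A fork borrows objects from the pool by listing the pool's object
# directory in its alternates file.
alternates = File.join(fork_objects, 'info', 'alternates')
FileUtils.mkdir_p(File.dirname(alternates))
File.write(alternates, "#{pool_objects}\n")

# From now on, Git looks in pool.git/objects for any object missing
# from fork.git/objects.
```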
When Git performs garbage collection, it's smart enough to check the
alternate location. When objects are duplicated, this allows Git to
throw one copy away: the copy in the local repository, while the pool
remains as is.
These pools have an origin location, which for now will always be a
repository that itself is not a fork. When the root of a fork network is
forked by a user, the fork still clones the full repository.
Asynchronously, the pool repository will be created.
Either one of these processes can be done earlier than the other. To
handle this race condition, the Join ObjectPool operation is
idempotent. Given it's idempotent, we can schedule it twice with the
same effect.
To accommodate the holding of state, two migrations have been added;
a sketch follows this list.
1. Added a state column to the pool_repositories table. This column is
managed by the state machine, allowing for hooks on transitions.
2. pool_repositories now has a source_project_id. This column is
convenient to have for multiple reasons: it has a unique index, allowing
the database to handle race conditions when creating a new record, and
it's nice to know who the host is, as that's a short link to the fork
network's root.
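A hedged sketch of the two migrations; class names, column types, and
options are assumptions based on the description above:

```ruby
class AddStateToPoolRepositories < ActiveRecord::Migration[5.0]
  def change
    # Managed by the state machine, allowing for hooks on transitions.
    add_column :pool_repositories, :state, :string
  end
end

class AddSourceProjectToPoolRepositories < ActiveRecord::Migration[5.0]
  def change
    # The unique index lets the database handle creation races.
    add_reference :pool_repositories, :source_project,
                  index: { unique: true },
                  foreign_key: { to_table: :projects }
  end
end
```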
Object pools are only available for public projects which use hashed
storage, and only when forking from the root of the fork network (that
is, the project being forked is not itself a fork).
In this commit message I use both ObjectPool and PoolRepository, which
are alike but different from each other. ObjectPool refers to whatever
is stored on disk and managed by Gitaly. PoolRepository is the record in
the database.
Shell out to git to write refs instead of using Rugged, hoping to avoid
creating invalid refs.
To update HEAD we switched to using `git symbolic-ref`.
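Roughly, and with an assumed `run_git` helper, the shelled-out version
could look like this sketch:

```ruby
# Illustrative only: write refs via the git binary instead of Rugged.
def write_ref(ref_path, ref_id)
  if ref_path == 'HEAD'
    run_git(%W[symbolic-ref #{ref_path} #{ref_id}])
  else
    run_git(%W[update-ref #{ref_path} #{ref_id}])
  end
end
```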
By specifying `key`, we get a different lazy batch loader for each
repository, which means that accessing a lazy object from one repository
will only result in that repository's objects being fetched, not those
of other repositories, saving us some unnecessary Gitaly lookups.
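For illustration, a sketch using the batch-loader gem's `key:` option;
the `commits_by_oids` helper is hypothetical:

```ruby
def lazy_find_commit(repository, oid)
  BatchLoader.for(oid).batch(key: repository) do |oids, loader, args|
    # args[:key] is the repository this loader was keyed on, so only
    # this repository's commits are fetched.
    args[:key].commits_by_oids(oids).each do |commit|
      loader.call(commit.id, commit)
    end
  end
end
```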
This allows users to add patches as attachments to merge requests
created via email.
When an email to create a merge request is sent, all the attachments
ending in `.patch` will be applied to the branch specified in the
subject of the email. If the branch does not exist, it will be created
from the HEAD of the repository.
When the patches cannot be applied, the error message will be sent back
to the user in a reply.
The patches can have a maximum combined size of 2MB for now.
Having this in a concern allows us to reuse it for different
single-purpose classes that call out to git without going through the
repository every time.
Inlining this code allows us to remove a dependency on gitlab_grit in
gitlab-ce. We can't stop maintaining gitlab_grit yet, since gitaly-ruby
still depends on this gem, but it moves us a step closer.
This saves about 128 MB of baseline RAM usage per Unicorn and
Sidekiq process (!).
Linguist wasn't detecting languages anymore in CE/EE since
9ae8b57467. However, Linguist::BlobHelper
was still being depended on by BlobLike and others.
This removes the Linguist gem, given it isn't required anymore.
EscapeUtils was pulled in as a dependency of Linguist, but given Banzai
depends on it, it is now added explicitly.
Previously, Linguist was used to detect the best ACE mode. Instead,
we rely on ACE to guess the best mode based on the file extension.
This was introduced back when GitLab still used NFS, which is not
required anymore in most cases. With this removal, the API it calls will
return empty responses. The interface itself has to be removed in the
next major release, expected to be 12.0.
Cleans up code and refactors tests that still use Rugged. After this,
there should be no Rugged code that accesses the instance's repositories
in non-test environments. There is still some Rugged code for other
tasks, like the repository import task, but since it doesn't access any
repository storage path it can stay.
Even if it doesn't save lines of code, people will tend to use code
they've seen. And `SafeRequestStore` is safer, since you don't have to
remember to check `RequestStore.active?`.
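As a hedged before/after sketch (the cached value and block are
illustrative):

```ruby
# Before: every caller must remember the active? check.
count = if RequestStore.active?
          RequestStore.fetch(:commit_count) { expensive_count }
        else
          expensive_count
        end

# After: SafeRequestStore does the check internally.
count = Gitlab::SafeRequestStore.fetch(:commit_count) { expensive_count }
```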
Without this parameter, every load of a Wiki page will load all the
Wiki pages in the repository for the sidebar. This is a significant
performance penalty that can significantly slow the display of all Wiki
pages.
Relates to #40101
After trying to remove the whole method in
8f69014af2902d8d53fe931268bec60f6858f160, this is a more gentle
approach to the method. :)
Prior to this change, new commit detection wasn't implemented in
Gitaly; that was done through
https://gitlab.com/gitlab-org/gitaly/merge_requests/779.
As the new implementation got moved around a bit, the whole RevList
class got removed.
Part of https://gitlab.com/gitlab-org/gitaly/issues/1233
Prior to this change, most of the commit counting was done through
Gitaly; this removes the last place where that wasn't the case.
It makes the `rugged_count_commits` method obsolete, along with its
tests.
Closes https://gitlab.com/gitlab-org/gitaly/issues/315
Batching commits for performance improvements might lead to empty
batches being used. This isn't the case yet, but to guard against it in
future cases, a guard clause is added.
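A minimal sketch of such a guard clause, with illustrative names:

```ruby
def batch_by_oid(repository, oids)
  # Guard: an empty batch should not result in a Gitaly call.
  return [] if oids.empty?

  repository.gitaly_commit_client.list_commits_by_oid(oids)
end
```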
The OPT_OUT status has been removed, along with the alternative
implementation.
Also checks if the repository exists before executing the checksum RPC,
to guard against NotFound errors.
Closes gitlab-org/gitaly#1105
Prior to this change, this was done through Unicorn. In theory this
could time out. Workhorse has been sending these raw patches and diffs
for a long time and is stable in doing so.
An added bonus is the fact that `Commit#to_patch` can be removed, and
`Commit#to_diff` too, which closes
https://gitlab.com/gitlab-org/gitaly/issues/324
Closes https://gitlab.com/gitlab-org/gitaly/issues/1196
Direct disk access is now done through Gitaly, so the legacy path was
deprecated. This path was used in Gitlab::Shell, however, which required
the refactoring in this commit.
Also added is the removal of direct path access on the project model, as
that lookup wasn't needed anymore in most cases.
Closes https://gitlab.com/gitlab-org/gitaly/issues/1111
This is called repeatedly when viewing a merge request, and this should
improve performance significantly by avoiding shelling out to git every time.
This should help https://gitlab.com/gitlab-com/infrastructure/issues/4027.
Clients can now request the attributes from `$GIT_DIR/info/attributes`
through Gitaly. The Gitaly migration is described in gitlab-org/gitaly#1082.
The parser algorithm was implemented in a way that could handle either
file contents or a File handle, and both were already tested.
Other than that, using the boy scout rule, I've removed a class,
InfoAttributes, as it was delegating everything to the parser and
therefore wasn't really needed in my opinion.
Repository archives are always named `<project>-<ref>-<sha>` even if
the ref is a commit. A consequence of always including the sha even
for tags is that packaging a release is more difficult because both
the ref and sha must be known by the packager.
- add `<project>/-/archive/<ref>/<filename>.<format>` route using the
  `-` separator to prevent namespace collisions. If the filename is
  `<project>-<ref>` or the ref is a sha, the sha will be omitted;
  otherwise the default filename will be used.
- deprecate the previous archive route `repository/<ref>/archive`
- add append_sha option (defaults to true) to provide a method for
  toggling this feature.
Support added to GitLab Workhorse by gitlab-org/gitlab-workhorse!232
When we added caching, calling `can_be_resolved_in_ui?` no longer
always called `lines`, which meant we didn't get the benefit of that
method's side effect of forcing the conflict data itself to UTF-8.
To fix that, make this explicit by separating the `raw_content` (any
encoding) from the `content` (which is either UTF-8, or an exception is
raised).
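A simplified sketch of that separation; the resolution helper and error
class are illustrative:

```ruby
def raw_content
  # Whatever encoding the conflict data happens to be in.
  @raw_content ||= resolve_conflict_content
end

def content
  content = raw_content.dup.force_encoding(Encoding::UTF_8)
  # Either valid UTF-8 comes back, or we raise.
  raise UnsupportedEncoding unless content.valid_encoding?

  content
end
```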
Prior to this change, this method was called add_namespace, which broke
the CRUD convention and made it harder to grep for what I was looking
for. Given the change was a find-and-replace kind of fix, it was made
without opening an issue and on another feature branch.
If more dynamic calls are made to add_namespace, these could've been
missed, which might lead to incorrect behaviour. However, going through
the commit log, it seems that's not the case.
By default, --prune is added to the command-line of a `git fetch` operation,
but for repositories with many references this can take a long time to run. We
shouldn't need to run --prune the first time we fetch a new repository.
A field didn't call the needed encoding helper, so some UTF-8 data
couldn't be encoded to ASCII. Using the helper method, this was fixed.
Tests are now run against Gitaly and Rugged too, to ensure both remain
working correctly.
Fixes gitlab-org/gitaly#1032, gitlab-org/gitlab-ce#43278
Adds a test where a branch name is also a valid commit ID. In that
case, the git binary creates an error message which is difficult to
parse, leading to errors later, as seen in gitlab-org/gitlab-ce#43222.
To catch these cases in the future,
gitlab-test@1942eed5cc108b19c7405106e81fa96125d0be22 was created, which
has a branch name matching a commit.
When the applied diff contains UTF-8 or some other encoded data, the
diff returned back from the git process may be in ASCII-8BIT format.
Writing this data to stdin may fail because stdin expects the data to be
in UTF-8. By switching the output to binmode, we ensure that the diff
will always be written as-is.
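A sketch of the pattern (the exact git invocation here is illustrative):

```ruby
def apply_patch(diff)
  IO.popen(%w[git apply --cached], 'r+') do |io|
    io.binmode # write ASCII-8BIT data as-is instead of expecting UTF-8
    io.write(diff)
    io.close_write
    io.read
  end
end
```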
Closes gitlab-org/gitlab-ee#4960
The refs hash is used to determine which branches and tags have a commit
as head in the network graph. The previous implementation depended on
Rugged#references. The problem with that implementation was not only
that it depended on Rugged, but also that it iterated over all
references, thus loading more data than needed if, for example, the
project uses CI/CD environments, pipelines, or merge requests.
Given only the refs the network graph cares about are checked, the
GraphHelper#refs method has no need to reject the others, simplifying
the method.
Closes gitlab-org/gitaly#880
Uses Lfs::FileModificationHandler to coordinate LFS detection, creation of the LfsObject, etc.
Caveats:
1. This isn't used by the multi-file editor / Web IDE
2. This isn't used on rename. We'd need to be able to download LFS files
and add them to the commit if they no longer match, so it's not as simple.
3. We only check the root .gitattributes file, so this should be improved
to correctly check for nested .gitattributes files in subfolders.
We stop relying on Gitlab::Git::Env for the RevList class, and use the
Gitlab::Git::Repository#run_git methods instead. The refactor also fixes
another issue, since we now stop using "path_to_repo" (which is a
Repository model method).
Migration is done through a small refactoring, which makes us call
endpoints that perform the same actions for namespaces.
Tests are added to ensure that only the project that should be removed
is removed.
Closes gitlab-org/gitaly#873
The previous implementation iterated across the entire patch set
to determine the number of lines added, deleted, and changed. Rugged
has a native method, `Rugged::Diff#stat`, that does this already,
which appears to be a little faster and to require less RAM than doing
this ourselves.
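For illustration (the repository path is hypothetical):

```ruby
require 'rugged'

repo = Rugged::Repository.new('/path/to/repo.git')
diff = repo.diff('HEAD~1', 'HEAD')

# One native call instead of walking every patch ourselves.
files_changed, additions, deletions = diff.stat
```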
Improves performance in #41524
Given the priorities shifted for the Gitaly team, this feature does not
get a dedicated endpoint yet. To make it work in a cloud native
environment, the request needs to go to Gitaly, not Rugged. This is
achieved by rerouting to the generic TreeEntry endpoint.
By importing this Ruby code into gitlab-rails (and gitaly-ruby), we avoid
200ms of startup time for each gitlab_projects subprocess we are eliminating.
By not having a gitlab_projects subprocess between gitlab-rails / sidekiq and
any git subprocesses (e.g. for fork_project, fetch_remote, etc, calls), we can
also manage these git processes more cleanly, and avoid sending SIGKILL to them.
Moving the check out of the general requests makes sure we don't have
any slowdown in the regular requests.
To keep the process performing these checks small, the check is still
performed inside a Unicorn, but it is called from a process running on
the same server.
Because the checks are now done outside normal requests, we can have a
simpler failure strategy:
The check is now performed in the background every
`circuitbreaker_check_interval`. Failures are logged in redis. The
failures are reset when the check succeeds. Per check we will try
`circuitbreaker_access_retries` times within
`circuitbreaker_storage_timeout` seconds.
When the number of failures exceeds
`circuitbreaker_failure_count_threshold`, we will block access to the
storage.
After `failure_reset_time` of no checks, we will clear the stored
failures. This could happen when the process that performs the checks
is not running.
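A rough sketch of that loop, treating everything except the setting
names above (the redis handle, key name, check and error classes) as
assumptions:

```ruby
require 'timeout'

loop do
  begin
    Timeout.timeout(circuitbreaker_storage_timeout) do
      circuitbreaker_access_retries.times { check_storage_accessible! }
    end
    redis.del(failure_count_key) # failures reset when the check succeeds
  rescue Timeout::Error, StorageError
    failures = redis.incr(failure_count_key)
    block_access! if failures >= circuitbreaker_failure_count_threshold
  end

  sleep circuitbreaker_check_interval
end
```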
Prior to this MR there were two GitHub-related importers:
* Github::Import: the main importer used for GitHub projects
* Gitlab::GithubImport: importer that's somewhat confusingly used for
importing Gitea projects (apparently they have a compatible API)
This MR renames the Gitea importer to Gitlab::LegacyGithubImport and
introduces a new GitHub importer in the Gitlab::GithubImport namespace.
This new GitHub importer uses Sidekiq for importing multiple resources
in parallel, though it also has the ability to import data sequentially
should this be necessary.
The new code is spread across the following directories:
* lib/gitlab/github_import: this directory contains most of the importer
code such as the classes used for importing resources.
* app/workers/gitlab/github_import: this directory contains the Sidekiq
workers, most of which simply use the code from the directory above.
* app/workers/concerns/gitlab/github_import: this directory provides a
few modules that are included in every GitHub importer worker.
== Stages
The import work is divided into separate stages, with each stage
importing a specific set of data. Stages will schedule the work that
needs to be performed, followed by scheduling a job for the
"AdvanceStageWorker" worker. This worker will periodically check if all
work is completed and schedule the next stage if this is the case. If
work is not yet completed this worker will reschedule itself.
Using this approach we don't have to block threads by calling `sleep()`,
as doing so for large projects could block the thread from doing any
work for many hours.
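In sketch form (class and helper names simplified from the description
above):

```ruby
class AdvanceStageWorker
  include Sidekiq::Worker

  def perform(project_id, waiters, next_stage)
    if jobs_remaining?(waiters) # hypothetical helper
      # Work still pending: reschedule instead of blocking on sleep().
      self.class.perform_in(30, project_id, waiters, next_stage)
    else
      worker_for(next_stage).perform_async(project_id) # hypothetical helper
    end
  end
end
```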
== Retrying Work
Workers will reschedule themselves whenever necessary. For example,
hitting the GitHub API's rate limit will result in jobs rescheduling
themselves. These jobs are not processed until the rate limit has been
reset.
== User Lookups
Part of the importing process involves looking up user details in the
GitHub API so we can map them to GitLab users. The old importer used
an in-memory cache, but this obviously doesn't work when the work is
spread across different threads.
The new importer uses a Redis cache and makes sure we only perform
API/database calls if absolutely necessary. Frequently used keys are
refreshed, and lookup misses are also cached, removing the need to
perform API/database calls when we know we don't have the data we're
looking for.
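An illustrative sketch of such a lookup; the key layout, TTL, and GitHub
call are assumptions:

```ruby
def github_user_id(username)
  key = "github-import/user/#{username}"

  cached = redis.get(key)
  # An empty string is a cached miss, so we don't hit the API again.
  return (cached.empty? ? nil : cached.to_i) unless cached.nil?

  id = fetch_user_from_github(username)&.id # hypothetical API call
  redis.set(key, id.to_s, ex: 24 * 60 * 60)
  id
end
```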
== Performance & Models
The new importer in various places uses raw INSERT statements (as
generated by `Gitlab::Database.bulk_insert`) instead of using Rails
models. This allows us to bypass any validations and callbacks,
drastically reducing the number of SQL queries and Gitaly RPC calls
necessary to import projects.
To ensure the code produces valid data the corresponding tests check if
the produced rows are valid according to the model validation rules.
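A sketch of the pattern; the attribute list is illustrative:

```ruby
rows = issues.map do |issue|
  {
    title: issue.title,
    state: issue.state,
    project_id: project.id,
    created_at: issue.created_at,
    updated_at: issue.updated_at
  }
end

# One INSERT statement, bypassing model validations and callbacks.
Gitlab::Database.bulk_insert(Issue.table_name, rows)
```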
This allows input to start being processed immediately, without waiting
for the process to complete. It also allows long or infinite inputs to
be partially processed, terminating the process with SIGPIPE when
reading stops.
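As an illustration of the streaming behaviour:

```ruby
# Read only what is needed; when the block returns, the pipe closes and
# git exits on SIGPIPE the next time it writes.
IO.popen(%w[git rev-list --all], 'r') do |io|
  io.each_line.lazy.take(100).each { |line| puts line }
end
```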
Also, I refactored the MergeRequest#fetch_ref method to express
the side effect that this method has:
MergeRequest#fetch_ref -> MergeRequest#fetch_ref!
Repository#fetch_source_branch -> Repository#fetch_source_branch!
Instead of only checking once within a timeout, check multiple times
within that timeout. That means that with a timeout of 30 seconds and
3 retries, each try would be allowed 10 seconds.
The circuitbreaker now has 2 failure modes:
- Backing off: This will raise the `Gitlab::Git::Storage::Failing`
exception. Access to the shard is blocked temporarily.
- Circuit broken: This will raise the
`Gitlab::Git::Storage::CircuitBroken` exception. Access to the shard
will be blocked until the failures are reset.