Remove legacy Ci::StaticModel we do not use anymore
## What does this MR do?
This removes a class that, according to our code coverage report and
`grep`, is legacy and unused.
See merge request !5710
Stop 'git push' over HTTP early
Before this change we always let users push Git data over HTTP before
deciding whether to accept the push. This was different from pushing
over SSH, where we terminate a 'git push' early if we already know the
user is not allowed to push.
This change lets Git over HTTP follow the same behavior as Git over
SSH. We also distinguish between HTTP 404 and 403 responses when
denying Git requests, depending on whether the user is allowed to know
that the project exists.
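As a rough sketch of the resulting behaviour (the method and parameter
names below are illustrative, not the actual access-check code):

    # Pick an HTTP status for a Git request before any data is accepted.
    # `can_read` and `can_push` stand in for the real permission checks.
    def git_http_status(can_read:, can_push:)
      return 404 unless can_read # hide the project from users who cannot see it
      return 403 unless can_push # the user may see the project but cannot push
      200                        # allowed: start receiving the pushed data
    end

    git_http_status(can_read: false, can_push: false) # => 404
    git_http_status(can_read: true,  can_push: false) # => 403
    git_http_status(can_read: true,  can_push: true)  # => 200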
See merge request !5639
Log base64-decoded PostReceive arguments
The change to base64-encoding the third argument to PostReceive in
gitlab-shell made our Sidekiq ArgumentsLogger a little less useful.
This change adds decoded data to the log statement.
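A minimal sketch of the idea, assuming the third job argument carries
the base64-encoded data from gitlab-shell (the method name below is
made up; it is not the actual ArgumentsLogger code):

    require 'base64'
    require 'json'
    require 'logger'

    # Log the raw job arguments together with the decoded third argument
    # so the Sidekiq log line is readable again.
    def log_post_receive_args(args, logger: Logger.new($stdout))
      decoded = Base64.decode64(args[2].to_s)
      logger.info("arguments: #{JSON.dump(args)} (decoded: #{decoded.inspect})")
    end

    log_post_receive_args(['key-1', 'project-1', Base64.encode64('old new refs/heads/master')])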
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/20381
See merge request !5547
Developer cannot push to a protected branch when the project is empty or they have not been granted permission to do so
This MR was created following !1979 and !1978. Closes #14898
See merge request !1980
Rename `add_users_into_project` and `projects_ids`
## What does this MR do?
It only renames a method to something more semantic and expressive, and renames its keyword arguments to follow the Rails convention.
## Are there points in the code the reviewer needs to double check?
Only that the method has been renamed at every call site and that the arguments passed there have been updated as well.
## Why was this MR needed?
To make the code more expressive.
## What are the relevant issue numbers?
Closes #20512.
- [x] [CHANGELOG](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/CHANGELOG) entry added
- Tests
- [x] All builds are passing
- [x] Conform by the [style guides](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/CONTRIBUTING.md#style-guides)
- [x] Branch has no merge conflicts with `master` (if you do - rebase it please)
- [x] [Squashed related commits together](https://git-scm.com/book/en/Git-Tools-Rewriting-History#Squashing-Commits)
See merge request !5659
We never add things `into` projects; we just add them `to` projects. So how about we rename this to `add_users_to_project`?
Rename `projects_ids` to `project_ids` to follow the Rails convention.
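A before/after sketch of the rename (the argument list below is
hypothetical; only the method and keyword argument names come from this
MR):

    # Before: add_users_into_project(users, projects_ids: ids)
    # After:
    def add_users_to_project(users, project_ids:)
      # body unchanged; only the method name and the keyword argument
      # (projects_ids -> project_ids) are renamed
    end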
Update gitlab-shell in the tmp/tests directory to the right version
Previously the `gitlab:shell:install` Rake task would never update the gitlab-shell version if the directory already existed. This could lead to incompatibility issues or random errors.
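A rough sketch of the version check this implies, assuming gitlab-shell
records its version in a VERSION file (the helper name is made up for
illustration):

    # Returns true when the checkout in `dir` already matches the version
    # GitLab expects, so the Rake task knows whether it has to update it.
    def gitlab_shell_up_to_date?(dir, expected_version)
      version_file = File.join(dir, 'VERSION')
      File.exist?(version_file) && File.read(version_file).strip == expected_version
    end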
See merge request !5646
By using Rouge::Lexer.find instead of find_fancy() and memoizing the
HTML formatter we can speed up the highlighting process by between 1.7
and 1.8 times (at least when measured using synthetic benchmarks). To
measure this I used the following benchmark:
    require 'benchmark/ips'

    input = ''

    Dir['./app/controllers/**/*.rb'].each do |controller|
      input << <<-EOF
      <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
      EOF
    end

    document = Nokogiri::HTML.fragment(input)
    filter = Banzai::Filter::SyntaxHighlightFilter.new(document)

    puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"

    Benchmark.ips do |bench|
      bench.report 'call' do
        filter.call
      end
    end
This benchmark produces 250 KB of input. Before these changes the timing
output would be as follows:
    Calculating -------------------------------------
                    call     1.000  i/100ms
    -------------------------------------------------
                    call     22.439 (±35.7%) i/s -     93.000
After these changes the output instead is as follows:
    Calculating -------------------------------------
                    call     1.000  i/100ms
    -------------------------------------------------
                    call     41.283 (±38.8%) i/s -    148.000
Note that due to the fairly high standard deviation and this being a
synthetic benchmark it's entirely possible the real-world improvements
are smaller.
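For reference, a condensed sketch of the two optimisations described
above (the module is illustrative, not the filter's actual code):

    require 'rouge'

    module HighlightSketch
      # Build the HTML formatter once instead of once per highlighted block.
      def self.formatter
        @formatter ||= Rouge::Formatters::HTML.new
      end

      # Look lexers up with Rouge::Lexer.find, which is cheaper than
      # find_fancy, and fall back to plain text for unknown languages.
      def self.highlight(code, language)
        lexer = Rouge::Lexer.find(language) || Rouge::Lexers::PlainText
        formatter.format(lexer.lex(code))
      end
    end

    puts HighlightSketch.highlight('puts 1 + 1', 'ruby')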
By using clever XPath queries we can significantly improve the
performance of `AutolinkFilter#text_parse`. The actual improvement
depends a bit on the number of links involved, but in my tests the new
implementation is usually around 8 times faster than the old one. This
was measured using the following benchmark:
    require 'benchmark/ips'

    text = '<p>' + Note.select("string_agg(note, '') AS note").limit(50).take[:note] + '</p>'
    document = Nokogiri::HTML.fragment(text)
    filter = Banzai::Filter::AutolinkFilter.new(document, autolink: true)

    puts "Input size: #{(text.bytesize.to_f / 1024 / 1024).round(2)} MB"

    filter.rinku_parse

    Benchmark.ips(time: 15) do |bench|
      bench.report 'text_parse' do
        filter.text_parse
      end

      bench.report 'text_parse_fast' do
        filter.text_parse_fast
      end

      bench.compare!
    end
Here the "text_parse_fast" method is the new implementation and
"text_parse" the old one. The input size was around 180 MB. Running this
benchmark outputs the following:
    Input size: 181.16 MB

    Calculating -------------------------------------
              text_parse     1.000  i/100ms
         text_parse_fast     9.000  i/100ms
    -------------------------------------------------
              text_parse     13.021 (±15.4%) i/s -    188.000
         text_parse_fast    112.741 (± 3.5%) i/s -      1.692k

    Comparison:
         text_parse_fast:      112.7 i/s
              text_parse:       13.0 i/s - 8.66x slower
Again the production timings may (and most likely will) vary depending
on the input being processed.
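A rough illustration of the XPath approach (the query and the regular
expression below are simplified stand-ins for the filter's actual
ones): select only text nodes that look like they contain a URL and are
not already inside a link or code element, then rewrite just those
nodes.

    require 'nokogiri'

    # Only visit text nodes containing '://' outside of <a> and <code>.
    QUERY = "descendant-or-self::text()[contains(., '://') " \
            "and not(ancestor::a) and not(ancestor::code)]".freeze

    doc = Nokogiri::HTML.fragment('<p>See https://example.com</p><code>http://skip.me</code>')

    doc.xpath(QUERY).each do |node|
      html = node.content.gsub(%r{https?://\S+}) { |url| %(<a href="#{url}">#{url}</a>) }
      node.replace(html)
    end

    puts doc.to_html
    # => <p>See <a href="https://example.com">https://example.com</a></p><code>http://skip.me</code>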