- Offload uploading to GitLab Workhorse
- Use the /authorize request for fast uploading
- Add backup recipes for artifacts
- Support download acceleration using X-Sendfile
Performance is improved in two steps:
1. On PostgreSQL an expression index is used for checking lower(email)
and lower(username).
2. The check to determine if we're searching for a username or email is
moved to Ruby. Thanks to @haynes for suggesting and writing the
initial implementation of this.
Moving the check to Ruby makes this method an additional 1.5 times
faster compared to doing the check in the SQL query.
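Roughly, the two steps look like this (a sketch with hypothetical class
and method names, not the actual GitLab code):

# Step 1 (sketch): expression indexes so PostgreSQL can serve
# lower(email) / lower(username) lookups from an index instead of
# scanning the users table.
class AddUsersLowerEmailUsernameIndexes < ActiveRecord::Migration
  def up
    execute 'CREATE INDEX index_users_on_lower_email ON users (lower(email))'
    execute 'CREATE INDEX index_users_on_lower_username ON users (lower(username))'
  end

  def down
    execute 'DROP INDEX index_users_on_lower_email'
    execute 'DROP INDEX index_users_on_lower_username'
  end
end

# Step 2 (sketch): decide between email and username in Ruby so the SQL
# query only has to match a single column.
def self.by_login(login)
  return nil unless login

  if login.include?('@')
    find_by('lower(email) = ?', login.downcase)
  else
    find_by('lower(username) = ?', login.downcase)
  end
end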
With performance improved, I've also tweaked the number of iterations
required by the User.by_login benchmark. This method now runs between
900 and 1000 iterations per second.
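For illustration, using the matcher conventions described below (the
login value here is made up), such a benchmark spec could look like:

describe User, benchmark: true do
  describe '.by_login' do
    # Hypothetical example; the real spec's setup may differ.
    subject { -> { described_class.by_login('alice') } }

    it { is_expected.to iterate_per_second(900) }
  end
end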
This benchmark suite uses benchmark-ips
(https://github.com/evanphx/benchmark-ips) behind the scenes. Specs can
be turned into benchmark specs by setting "benchmark" to "true" in the
top-level describe block like so:
describe SomeClass, benchmark: true do
end
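Presumably these specs are then kept out of the regular test run using
RSpec's metadata filtering; a minimal sketch of such a configuration
(the ENV switch is an assumption):

# spec/spec_helper.rb (sketch): skip benchmark specs unless explicitly
# requested, so the regular suite stays fast.
RSpec.configure do |config|
  config.filter_run_excluding benchmark: true unless ENV['BENCHMARK']
end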
Writing benchmarks can be done using custom RSpec matchers, for example:
describe MaruTheCat, benchmark: true do
  describe '#jump_in_box' do
    it 'should run 1000 iterations per second' do
      maru = described_class.new

      expect { maru.jump_in_box }.to iterate_per_second(1000)
    end
  end
end
By default the "iterate_per_second" expectation requires a standard
deviation under 30% (this is just an arbitrary default for now). You can
change this by chaining "with_maximum_stddev" on the expectation:
expect { maru.jump_in_box }.to iterate_per_second(1000)
  .with_maximum_stddev(10)
This will change the expectation to require a maximum deviation of 10%.
Alternatively, you can use the it block style to write specs:

describe MaruTheCat, benchmark: true do
  describe '#jump_in_box' do
    subject { -> { described_class.new.jump_in_box } }

    it { is_expected.to iterate_per_second(1000) }
  end
end
Because "iterate_per_second" operates on a block, opposed to a static
value, the "subject" method must return a Proc. This looks a bit goofy
but I have been unable to find a nice way around this.
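For reference, a matcher along these lines can be built with RSpec's
matcher DSL on top of benchmark-ips. This is only a sketch of the idea,
not the suite's actual implementation:

require 'benchmark/ips'

# Sketch of an iterate_per_second matcher built on benchmark-ips.
# supports_block_expectations is what allows `expect { ... }` blocks
# (and Proc subjects) to be passed in as the value under test.
RSpec::Matchers.define :iterate_per_second do |min_ips|
  supports_block_expectations

  chain :with_maximum_stddev do |percent|
    @max_stddev = percent
  end

  match do |block|
    @max_stddev ||= 30 # default maximum standard deviation in percent

    # quiet: true suppresses benchmark-ips' console output.
    report = Benchmark.ips(quiet: true) do |bench|
      bench.report(&block)
    end

    entry  = report.entries.first
    stddev = entry.ips_sd.to_f / entry.ips * 100

    entry.ips >= min_ips && stddev <= @max_stddev
  end
end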
Improve repo cleanup task
I accidentally wrote a new script, not seeing that we already had one.
But the old one did not do enough (it only handled global namespace
orphans), so I figured I should just drop in the new script.
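For illustration, handling nested (non-global) orphans roughly means
walking the repository root and checking each directory against the
database. This sketch uses hypothetical paths and lookups, not the
script's actual code:

# Hypothetical illustration: a repository directory is an orphan when no
# Namespace (top level) or Project (nested) record points at it. The
# repos_path value and model lookups are assumptions.
repos_path = '/home/git/repositories'

Dir.glob(File.join(repos_path, '*')).each do |namespace_dir|
  namespace = File.basename(namespace_dir)

  unless Namespace.find_by(path: namespace)
    puts "Orphaned namespace directory: #{namespace_dir}"
    next
  end

  Dir.glob(File.join(namespace_dir, '*.git')).each do |repo_dir|
    project_path = "#{namespace}/#{File.basename(repo_dir, '.git')}"

    unless Project.find_with_namespace(project_path)
      puts "Orphaned repository: #{repo_dir}"
    end
  end
end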
See merge request !1298