The pattern in the `::reference_pattern` class method in the
ExternalIssue model does not match all valid forms of JIRA project
names. I have updated the regex to match JIRA project names with numbers
and underscores. More information on valid JIRA project names can be
found here:
https://confluence.atlassian.com/jira/changing-the-project-key-format-192534.html
* The first character must be a letter,
* All letters used in the project key must be from the Modern Roman Alphabet and upper case, and
* Only letters, numbers or the underscore character can be used.
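For illustration, a sketch of a pattern matching these rules (the actual regex in `ExternalIssue.reference_pattern` may differ; `PROJECT_KEY` and `ISSUE_REFERENCE` are just illustrative names):

```ruby
# A project key starts with a letter, followed by uppercase letters,
# digits or underscores; an issue reference appends a dash and a number.
PROJECT_KEY     = /[A-Z][A-Z0-9_]*/
ISSUE_REFERENCE = /\b#{PROJECT_KEY.source}-\d+/

ISSUE_REFERENCE.match('See PROJECT_2-123 for details') # => #<MatchData "PROJECT_2-123">
```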
This ensures that an instrumented method that doesn't take arguments
reports an arity of 0, instead of -1.
If Ruby had a proper method for finding out the required arguments of a
method (e.g. Method#required_arguments) this would not have been an
issue. Sadly the only two methods we have are Method#parameters and
Method#arity, and both are equally painful to use.
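For illustration, a hedged sketch of the difference and of one way to compute the required-argument count from `Method#parameters` (`required_arity` is an illustrative helper, not part of the instrumentation code):

```ruby
class Foo
  def bar; end
end

Foo.instance_method(:bar).arity # => 0, what we want to report

# A wrapper defined with a splat reports -1 even though the target
# method takes no arguments.
->(*args) {}.arity # => -1

# Counting only the required parameters recovers the right value.
def required_arity(method)
  method.parameters.count { |type, _name| type == :req }
end

required_arity(Foo.instance_method(:bar)) # => 0
```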
Fixes gitlab-org/gitlab-ce#12450
This changes the metadata format to handle paths that may contain
whitespace characters, newline characters and non-UTF-8 characters.
Those paths, along with metadata in JSON format, are now stored as
length-prefixed strings (uint32 prefix).
The metadata file has a custom format:
1. The first string field is the metadata version (string)
2. The second string field is the metadata errors field (JSON string)
3. All subsequent fields are pairs of a path (string) and that path's
metadata in JSON format.
A path's metadata contains all fields that could be extracted from the
ZIP archive, such as date of modification, CRC, compressed size,
uncompressed size and comment.
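A minimal sketch of reading that layout, assuming a big-endian uint32 length prefix (the reader names are illustrative, not the actual parser):

```ruby
# Read one length-prefixed string: a 4-byte unsigned length, then the payload.
def read_string(io)
  prefix = io.read(4)
  return nil unless prefix

  length = prefix.unpack('N').first # 32-bit unsigned, big-endian
  io.read(length)
end

# Version string, errors as JSON, then alternating path/metadata pairs.
def read_metadata(io)
  version = read_string(io)
  errors  = read_string(io)
  entries = {}

  while (path = read_string(io))
    entries[path] = read_string(io)
  end

  [version, errors, entries]
end
```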
The `StringPath` class is something similar to Ruby's `Pathname` class,
but does not involve any IO operations. `StringPath` objects are
instantiated by passing the constructor a string representation of a
path and an array of paths that represents the universe.
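A hedged usage example based on that constructor shape (the `directory?` call is hypothetical, shown only to illustrate that everything is derived from the strings, with no IO):

```ruby
universe = ['path/', 'path/file.txt', 'path/dir/', 'path/dir/nested.rb']

path = StringPath.new('path/dir/', universe)
path.directory? # hypothetical predicate, e.g. based on the trailing slash
```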
LDAP Sync blocked user edge cases
Allow GitLab admins to block otherwise valid GitLab LDAP users
(https://gitlab.com/gitlab-org/gitlab-ce/issues/3462)
Based on the discussion on the original issue, we are going to differentiate "normal" block operations from the automatic LDAP ones, so we can make decisions depending on which one applies.
Expected behavior:
- [x] "ldap_blocked" users respond to both `blocked?` and `ldap_blocked?`
- [x] "ldap_blocked" users can't be unblocked by the Admin UI
- [x] "ldap_blocked" users can't be unblocked by the API
- [x] Block operations that are originated from LDAP synchronization will flag user as "ldap_blocked"
- [x] Only "ldap_blocked" users will be automatically unblocked by LDAP synchronization
- [x] When LDAP identity is removed, we should convert `ldap_blocked` into `blocked`
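A minimal sketch of how those states could be modelled with a state machine (event and state names below are illustrative, not necessarily the final implementation):

```ruby
class User < ActiveRecord::Base
  state_machine :state, initial: :active do
    event :block do
      transition active: :blocked
      # Removing the LDAP identity converts ldap_blocked into a normal block.
      transition ldap_blocked: :blocked
    end

    # Only LDAP synchronization triggers this, flagging the user as ldap_blocked.
    event :ldap_block do
      transition active: :ldap_blocked
    end

    # Regular unblocking (Admin UI / API) deliberately excludes ldap_blocked,
    # so only LDAP synchronization can lift that state.
    event :activate do
      transition blocked: :active
    end

    state :blocked, :ldap_blocked do
      # ldap_blocked users still answer true to blocked?, while the generated
      # ldap_blocked? predicate distinguishes the two.
      def blocked?
        true
      end
    end
  end
end
```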
Mockup for the Admin UI with both "ldap_blocked" and normal "blocked" users:

There will be another MR for the EE version.
See merge request !2242
Sampling data at a fixed interval means we can potentially miss data
from events occurring between sampling intervals. For example, say we
sample data every 15 seconds but Unicorn workers get killed after 10
seconds. In this particular case it's possible to miss interesting data
as the sampler never actually gets to submit its data.
To work around this (at least for the most part) the sampling interval
is randomized as follows:
1. Take the user specified sampling interval (15 seconds by default)
2. Divide it by 2 (referred to as "half" below)
3. Generate a range (using a step of 0.1) from -"half" to "half"
4. Every time the sampler goes to sleep we'll grab the user provided
interval and add a randomly chosen "adjustment" to it while making
sure we don't pick the same value twice in a row.
For a specified interval of 15 this means the actual intervals can be
anywhere between 7.5 and 22.5, but the same interval is never used
twice in a row.
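For illustration, a sketch of that randomization (class and method names are illustrative, not the actual sampler code):

```ruby
class Sampler
  def initialize(interval = 15)
    @interval = interval

    half = interval / 2.0
    # Pre-computed adjustments from -half to +half in steps of 0.1.
    @adjustments = (-half..half).step(0.1).to_a
    @last_adjustment = nil
  end

  # Picks the next sleep interval, never repeating the previous adjustment.
  def sleep_interval
    loop do
      adjustment = @adjustments.sample
      next if adjustment == @last_adjustment

      @last_adjustment = adjustment
      return @interval + adjustment
    end
  end
end
```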
The rationale behind this change is that on dev.gitlab.org I'm sometimes
seeing certain Gitlab::Git/Rugged objects being retained, but only for a
few minutes every 24 hours. Knowing GitLab's code and how much memory
it uses/leaks, I suspect we're missing data due to workers getting
terminated before the sampler can write its data to InfluxDB.