Compare commits

...

83 Commits

Author SHA1 Message Date
Harshavardhana 62383dfbfe
Fix formatting of features in README.md
2025-10-07 09:59:23 -07:00
Ravind Kumar bde0d5a291
Updating readme for MinIO docs (#21625)
2025-10-06 22:36:26 -07:00
yangw 534f4a9fb1
fix: timeN function's returned final closure was not called (#21615)
2025-09-30 23:06:01 -07:00
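For context, the bug class here is a timing helper that returns a closure which must be invoked to record the measurement. A minimal sketch of the pattern (hypothetical names, not MinIO's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// timeN starts a timer and returns a closure; the measurement is only
// recorded when the returned closure is actually called.
func timeN(label string) func() {
	start := time.Now()
	return func() {
		fmt.Printf("%s took %v\n", label, time.Since(start))
	}
}

func main() {
	done := timeN("work")
	time.Sleep(10 * time.Millisecond)
	done() // forgetting this call is the bug class the fix addresses
}
```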
Klaus Post b8631cf531
Use new gofumpt (#21613)
Update tinylib. Should fix CI.

`gofumpt -w . && go generate ./...`
2025-09-28 13:59:21 -07:00
jiuker 456d9462e5
fix: cancel will be empty after saveRebalanceStats (#21597) 2025-09-19 21:51:57 -07:00
jiuker 756f3c8142
fix: incorrect poolID when adding pools after decommission (#21590) 2025-09-18 04:47:48 -07:00
mosesdd 7a80ec1cce
fix: LDAP TLS handshake fails with StartTLS and tls_skip_verify=off (#21582)
Fixes #21581
2025-09-17 00:58:27 -07:00
M Alvee ae71d76901
fix: remove unnecessary replication checks (#21569) 2025-09-08 10:43:13 -07:00
M Alvee 07c3a429bf
fix: conditional checks write for multipart (#21567) 2025-09-07 09:13:09 -07:00
Minio Trusted 0cde982902 Update yaml files to latest version RELEASE.2025-09-06T17-38-46Z 2025-09-07 05:14:10 +00:00
Ian Roberts d0f50cdd9b
fix: use correct dummy ARN for claim-based OIDC provider when listing access keys (#21549)
fix: use correct dummy ARN for claim-based OIDC provider

When listing OIDC access keys, use the correct ARN when looking up the provider configuration for the claim-based provider.  Without this it was impossible to list access keys for a claim-based provider, only for a role-policy-based provider.

Fixes minio/minio#21548
2025-09-06 10:38:46 -07:00
WGH da532ab93d
Fix support for legacy compression env variables (#21533)
Commit b6eb8dff64 renamed compression
setting environment variables to follow consistent style.

Although it preserved backward compatibility for the most part (i.e. it
handled MINIO_COMPRESS_ALLOW_ENCRYPTION, MINIO_COMPRESS_EXTENSIONS, and
MINIO_COMPRESS_MIME_TYPES), MINIO_COMPRESS_ENABLE was left behind.

Additionally, due to incorrect fallback ordering, and because DefaultKVS
contains enable=off allow_encryption=off (so kvs.Get should have been
tried last), that commit broke MINIO_COMPRESS_ALLOW_ENCRYPTION (even
though it appeared to be handled), and even the older MINIO_COMPRESS, too.

The legacy MIME types and extensions variables take precedence over both
config and new variables, so they don't need fixing.
2025-09-06 10:37:10 -07:00
M Alvee 558fc1c09c
fix: return error on conditional write for non existing object (#21550) 2025-09-06 10:34:38 -07:00
Alex 9fdbf6fe83
Updated object-browser to the latest version v2.0.4 (#21564)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-09-06 10:33:19 -07:00
jiuker 5c87d4ae87
fix: config file not found when saving the rebalanceStats (#21547) 2025-09-04 13:47:24 -07:00
Klaus Post f0b91e5504
Run modernize (#21546)
`go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...` executed.

`go generate ./...` ran afterwards to keep generated files up to date.
2025-08-28 19:39:48 -07:00
Manuel Reis 3b7cb6512c
Revert `dns.msgUnPath`, fixes #21541 (#21542)
* Add more tests to UnPath function
* Revert implementation on dns.msgUnPath. Fixes: #21541
2025-08-28 10:31:12 -07:00
Mark Theunissen 4ea6f3b06b
fix: invalid checksum on site replication with conforming checksum types (#21535) 2025-08-22 07:15:21 -07:00
jiuker 86d9d9b55e
fix: use amqp.ParseURL to parse amqp url (#21528) 2025-08-20 21:25:07 -07:00
Denis Peshkov 5a35585acd
http/listener: fix bugs and simplify (#21514)
* Store `ctx.Done` channel in a struct instead of a `ctx`. See: https://go.dev/blog/context-and-structs
* Return from `handleListener` on `ctx` cancellation, preventing goroutine leaks
* Simplify `handleListener` by removing the `send` closure. The `handleListener` is inlined by the compiler
* Return the first error from `Close`
* Preallocate slice in `Addrs`
* Reduce duplication in handling `opts.Trace`
* http/listener: revert error propagation from Close()
* http/listener: preserve original listener address in Addr()
* Preserve the original address when calling Addr() with multiple listeners
* Remove unused listeners from the slice
2025-08-12 11:22:12 -07:00
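The first bullet follows the pattern from the linked blog post: store the context's Done channel in the struct rather than the context itself. A minimal sketch with hypothetical types (not the actual http/listener code):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// listener keeps only the Done channel it needs for shutdown,
// per https://go.dev/blog/context-and-structs.
type listener struct {
	done  <-chan struct{}
	conns chan string
}

// handle exits on cancellation instead of leaking the goroutine.
func (l *listener) handle() {
	for {
		select {
		case <-l.done:
			return
		case c := <-l.conns:
			fmt.Println("accepted", c)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	l := &listener{done: ctx.Done(), conns: make(chan string, 1)}
	go l.handle()
	l.conns <- "conn-1"
	time.Sleep(10 * time.Millisecond)
	cancel() // handle() observes done and returns
	time.Sleep(10 * time.Millisecond)
}
```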
Daryl White 0848e69602
Update docs links throughout (#21513) 2025-08-12 11:20:36 -07:00
M Alvee 02ba581ecf
custom user-agent transport wrapper (#21483) 2025-08-08 10:51:53 -07:00
Ian Roberts b44b2a090c
fix: when claim-based OIDC is configured, treat unknown roleArn as claim-based auth (#21512)
RoleARN is a required parameter in AssumeRoleWithWebIdentity, 
according to the standard AWS implementation, and the official 
AWS SDKs and CLI will not allow you to assume a role from a JWT 
without also specifying a RoleARN.  This meant that it was not 
possible to use the official SDKs for claim-based OIDC with Minio 
(minio/minio#21421), since Minio required you to _omit_ the RoleARN in this case.

minio/minio#21468 attempted to fix this by disabling the validation 
of the RoleARN when a claim-based provider was configured, but this had 
the side effect of making it impossible to have a mixture of claim-based 
and role-based OIDC providers configured at the same time - every 
authentication would be treated as claim-based, ignoring the RoleARN entirely.

This is an alternative fix, whereby:

- _if_ the `RoleARN` is one that Minio knows about, then use the associated role policy
- if the `RoleARN` is not recognised, but there is a claim-based provider configured, then ignore the role ARN and attempt authentication with the claim-based provider
- if the `RoleARN` is not recognised, and there is _no_ claim-based provider, then return an error.
2025-08-08 10:51:23 -07:00
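A minimal sketch of that three-way resolution, with hypothetical types and names (the real logic lives in MinIO's STS handlers):

```go
package main

import (
	"errors"
	"fmt"
)

// providerCfg stands in for an OIDC provider configuration.
type providerCfg struct{ name string }

// resolveOIDCProvider mirrors the fix described above: known RoleARNs
// keep their role policy; unknown ones fall back to a configured
// claim-based provider; otherwise the request is rejected.
func resolveOIDCProvider(roleArn string, roleArnMap map[string]providerCfg, claimBased *providerCfg) (providerCfg, error) {
	if cfg, ok := roleArnMap[roleArn]; ok {
		return cfg, nil
	}
	if claimBased != nil {
		return *claimBased, nil
	}
	return providerCfg{}, errors.New("unknown RoleARN and no claim-based provider configured")
}

func main() {
	roleMap := map[string]providerCfg{"arn:minio:iam:::role/known": {name: "role-based"}}
	claim := &providerCfg{name: "claim-based"}

	cfg, _ := resolveOIDCProvider("arn:aws:iam::123:role/unknown", roleMap, claim)
	fmt.Println(cfg.name) // claim-based
}
```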
dorman c7d6a9722d
Modify permission verification type (#21505) 2025-08-08 02:47:37 -07:00
jiuker a8abdc797e
fix: add name and description to ldap accesskey list (#21511) 2025-08-07 19:46:04 -07:00
M Alvee 0638ccc5f3
fix: claim based oidc for official aws libraries (#21468) 2025-08-07 19:42:38 -07:00
jiuker b1a34fd63f
fix: errUploadIDNotFound will be ignored when err is from peer client (#21504) 2025-08-07 19:38:41 -07:00
Klaus Post ffcfa36b13
Check legalHoldPerm (#21508)
The provided parameter should be checked before accepting legal hold
2025-08-07 19:38:25 -07:00
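S3 defines exactly two legal hold statuses, so the check amounts to rejecting anything else before applying it. A minimal sketch with a hypothetical helper (not MinIO's actual handler code):

```go
package main

import "fmt"

// parseLegalHoldStatus accepts only the two values the S3 API defines.
func parseLegalHoldStatus(v string) (string, error) {
	switch v {
	case "ON", "OFF":
		return v, nil
	default:
		return "", fmt.Errorf("invalid legal hold status %q", v)
	}
}

func main() {
	if _, err := parseLegalHoldStatus("MAYBE"); err != nil {
		fmt.Println(err) // invalid legal hold status "MAYBE"
	}
}
```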
Aditya Kotra 376fbd11a7
fix(helm): do not suspend versioning by default for buckets, only set versioning if specified (21349) (#21494)
Signed-off-by: Aditya Kotra <kaditya030@gmail.com>
2025-08-07 02:47:02 -07:00
dorman c76f209ccc
Optimize outdated commands in the log (#21498) 2025-08-06 16:48:58 -07:00
M Alvee 7a6a2256b1
imagePullSecrets: consistent types for global, local (#21500) 2025-08-06 16:48:24 -07:00
Johannes Horn d002beaee3
feat: add variable for datasource in grafana dashboards (#21470) 2025-08-03 18:46:49 -07:00
jiuker 71f293d9ab
fix: record extra skippedEntry for listObject (#21484)
2025-08-01 08:53:35 -07:00
jiuker e3d183b6a4
bring more idempotent behavior to AbortMultipartUpload() (#21475)
fix #21456
2025-07-30 23:57:23 -07:00
Alex 752abc2e2c
Update console to v2.0.3 (#21474)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
Co-authored-by: Benjamin Perez <benjamin@bexsoft.net>
2025-07-30 10:57:17 -07:00
Minio Trusted b9f0e8c712 Update yaml files to latest version RELEASE.2025-07-23T15-54-02Z
2025-07-23 18:28:46 +00:00
M Alvee 7ced9663e6
simplify validating policy mapping (#21450) 2025-07-23 08:54:02 -07:00
MagicPig 50fcf9b670
fix boundary value bug when objTime ends in whole seconds (without sub-second) (#21419)
2025-07-23 05:36:06 -07:00
Harshavardhana 64f5c6103f
wait for metadata reads on minDisks+1 for HEAD/GET when data==parity (#21449)
fixes a regression since #19741
2025-07-23 04:21:15 -07:00
Poorna e909be6380 send replication requests to correct pool (#1162)
Fixes incorrect application of ilm expiry rules on versioned objects
when replication is enabled.

Regression from https://github.com/minio/minio/pull/20441 which sends
DeleteObject calls to all pools. This is a problem for replication + ilm
scenario since replicated version can end up in a pool by itself instead of
pool where remaining object versions reside.

For example, if the delete marker is set on pool1 and object versions exist on
pool2, the second rule below will cause the delete marker to be expired by ilm
policy since it is the single version present in pool1
```
{
  "Rules": [
   {
    "ID": "cs6il1ri2hp48g71mdjg",
    "NoncurrentVersionExpiration": {
     "NoncurrentDays": 14
    },
    "Status": "Enabled"
   },
   {
    "Expiration": {
     "ExpiredObjectDeleteMarker": true
    },
    "ID": "cs6inj3i2hp4po19cil0",
    "Status": "Enabled"
   }
  ]
}
```
2025-07-19 13:27:52 -07:00
jiuker 83b2ad418b
fix: restrict SinglePool by the minimum free drive threshold (#21115)
2025-07-18 23:25:44 -07:00
Loganaden Velvindron 7a64bb9766
Add support for X25519MLKEM768 (#21435)
Signed-off-by: Bhuvanesh Fokeer <fokeerbhuvanesh@cyberstorm.mu>
Signed-off-by: Nakul Baboolall <nkb@cyberstorm.mu>
Signed-off-by: Sehun Bissessur <sehun.bissessur@cyberstorm.mu>
2025-07-18 23:23:15 -07:00
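On the Go side, opting into this hybrid post-quantum key exchange can be as simple as listing the curve in CurvePreferences. A minimal sketch, assuming Go 1.24+ where crypto/tls defines tls.X25519MLKEM768 (illustrative only, not MinIO's exact wiring):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// pqTLSConfig prefers the hybrid X25519 + ML-KEM-768 key exchange,
// falling back to classical curves for older peers.
func pqTLSConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS13,
		CurvePreferences: []tls.CurveID{
			tls.X25519MLKEM768,
			tls.X25519,
			tls.CurveP256,
		},
	}
}

func main() {
	cfg := pqTLSConfig()
	fmt.Println(len(cfg.CurvePreferences)) // 3
}
```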
Minio Trusted 34679befef Update yaml files to latest version RELEASE.2025-07-18T21-56-31Z
2025-07-18 23:28:59 +00:00
Harshavardhana 4021d8c8e2
fix: lambda handler response to match the lambda return status (#21436) 2025-07-18 14:56:31 -07:00
Burkov Egor de234b888c
fix: admin api - SetPolicyForUserOrGroup avoid nil deref (#21400)
2025-07-01 09:00:17 -07:00
Mark Theunissen 2718d9a430
CopyObject must preserve checksums and encrypt them if required (#21399)
2025-06-25 08:08:54 -07:00
Alex a65292cab1
Update Console to latest version (#21397)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-06-24 17:33:22 -07:00
Minio Trusted e0c79be251 Update yaml files to latest version RELEASE.2025-06-13T11-33-47Z
2025-06-23 20:28:38 +00:00
jiuker a6c538c5a1
fix: honor renamePart's PathNotFound (#21378)
2025-06-13 04:33:47 -07:00
jiuker e1fcaebc77
fix: ListMultipartUploads results appended from cache should be filtered by bucket (#21376)
2025-06-12 00:09:12 -07:00
Johannes Horn 21409f112d
add networkpolicy for job and add possibility to define egress ports (#20951)
2025-06-08 09:14:18 -07:00
Sung Jeon 417c8648f0
use provided region in tier configuration for S3 backend (#21365)
fixes #21364
2025-06-08 09:13:30 -07:00
ffgan e2245a0b12
allow cross-compiling support for RISC-V 64 (#21348)
this is a minor PR that adds build support for RISC-V 64;
it covers compilation only. There is no guarantee
that the code is tested or will work in production.
2025-06-08 09:12:05 -07:00
Shubhendu b4b3d208dd
Add `targetArn` label for bucket replication metrics (#21354)
Signed-off-by: Shubhendu Ram Tripathi <shubhendu@minio.io>
2025-06-04 13:45:31 -07:00
ILIYA 0a36d41dcd
modernize for loops in cmd/, internal/ (#21309)
2025-05-27 08:19:03 -07:00
jiuker ea77bcfc98
fix: panic for TestListObjectsWithILM (#21322) 2025-05-27 08:18:36 -07:00
jiuker 9f24ca5d66
fix: empty fileName causes nil Reader in PostPolicyBucketHandler (#21323) 2025-05-27 08:18:26 -07:00
VARUN SHARMA 816666a4c6
make some targeted updates to README.md (#21125)
2025-05-26 12:34:56 -07:00
Anis Eleuch 2c7fe094d1
s3: Fix early listing stopping when ILM is enabled (#472) (#21246)
An S3 listing call is usually sent with a 'max-keys' parameter, which is
also passed to the WalkDir() call. However, when ILM is enabled in a
bucket and some objects are skipped, the listing can return IsTruncated
set to false even if there are more entries on the drives.

The reason is that the drives stop feeding the listing code once they
hit the max-keys limit, and the listing code concludes the listing is
finished because it is not being fed anymore.

Ask the drives to not stop listing early, and rely on context
cancellation to stop listing on the drives as fast as possible.
2025-05-26 00:06:43 -07:00
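A minimal sketch of that control flow, with hypothetical names (the real walker lives in MinIO's listing code): the drive-side walker keeps feeding entries until the caller cancels, so hitting max-keys at the caller no longer masquerades as an exhausted listing.

```go
package main

import (
	"context"
	"fmt"
)

// walkDrive emulates a drive-side walker: it feeds entries until the
// caller cancels the context, rather than stopping at max-keys itself.
func walkDrive(ctx context.Context, out chan<- string) {
	defer close(out)
	for i := 0; ; i++ {
		select {
		case <-ctx.Done():
			return
		case out <- fmt.Sprintf("object-%d", i):
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	entries := make(chan string)
	go walkDrive(ctx, entries)

	maxKeys := 3
	for e := range entries {
		fmt.Println(e)
		maxKeys--
		if maxKeys == 0 {
			// Entries remain on the "drive", so the listing is truncated;
			// cancellation, not max-keys, is what stops the walker.
			cancel()
			break
		}
	}
}
```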
Harshavardhana 9ebe168782 add pull requests etiquette
2025-05-25 09:32:03 -07:00
Minio Trusted ee2028cde6 Update yaml files to latest version RELEASE.2025-05-24T17-08-30Z
2025-05-24 21:37:47 +00:00
Frank Elsinga ecde75f911
docs: use github-style-notes in the readme (#21308)
use notes in the readme
2025-05-24 10:08:30 -07:00
jiuker 12a6ea89cc
fix: use MIME encoding for non-US-ASCII metadata (#21282)
2025-05-22 08:42:54 -07:00
Anis Eleuch 63e102c049
heal: Avoid disabling scanner healing in single and dist erasure mode (#21302)
A typo disabled the scanner healing in erasure mode. Fix it.
2025-05-22 08:42:29 -07:00
Alex 160f8a901b
Update Console UI to latest version (#21294)
2025-05-21 08:59:37 -07:00
jiuker ef9b03fbf5
fix: failure to get net.Interface causes panic (#21277)
2025-05-16 07:28:04 -07:00
Andreas Auernhammer 1d50cae43d
remove support for FIPS 140-2 with boringcrypto (#21292)
This commit removes FIPS 140-2 related code for the following
reasons:
 - FIPS 140-2 is a compliance, not a security requirement. Being
   FIPS 140-2 compliant has no security implication on its own.
   From a technical perspective, a FIPS 140-2 compliant implementation
   is not necessarily secure and a non-FIPS 140-2 compliant implementation
   is not necessarily insecure. It depends on the concrete design and
   crypto primitives/constructions used.
 - The boringcrypto branch used to achieve FIPS 140-2 compliance was never
   officially supported by the Go team and is now in maintenance mode.
   It is replaced by a built-in FIPS 140-3 module. It will be removed
   eventually. Ref: https://github.com/golang/go/issues/69536
 - FIPS 140-2 modules are no longer re-certified after Sep. 2026.
   Ref: https://csrc.nist.gov/projects/cryptographic-module-validation-program

Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-05-16 07:27:42 -07:00
Klaus Post c0a33952c6
Allow FTPS to force TLS (#21251)
Fixes #21249

Example params: `-ftp=force-tls=true -ftp="tls-private-key=ftp/private.key" -ftp="tls-public-cert=ftp/public.crt"`

If MinIO is set up for TLS those certs will be used.
2025-05-09 13:10:19 -07:00
Alex 8cad40a483
Update UI console to the latest version (#21278)
Signed-off-by: Benjamin Perez <benjamin@bexsoft.net>
2025-05-09 13:09:54 -07:00
Harshavardhana 6d18dba9a2
return error for AppendObject() API (#21272)
2025-05-07 08:37:12 -07:00
jiuker 9ea14c88d8
cleanup: use NewWithOptions to replace the deprecated one (#21243)
2025-04-29 08:35:51 -07:00
jiuker 30a1261c22
fix: track object and bucket for expireAll (#21241) 2025-04-27 21:19:38 -07:00
Matt Lloyd 0e017ab071
feat: support nats nkey seed auth (#21231) 2025-04-26 21:30:57 -07:00
Harshavardhana f14198e3dc update with newer pkger release 2025-04-26 17:44:22 -07:00
Burkov Egor 93c389dbc9
typo: return actual error from RemoveRemoteTargetsForEndpoint (#21238) 2025-04-26 01:43:10 -07:00
jiuker ddd9a84cd7
allow concurrent aborts on active uploadParts() (#21229)
allow aborting active uploads in progress; these uploads will
subsequently fail during the commit phase with appropriate errors
2025-04-24 22:41:04 -07:00
Celis b7540169a2
Add documentation for replication_max_lrg_workers (#21236) 2025-04-24 16:34:26 -07:00
Klaus Post f01374950f
Use go mod tool to install tools for go generate (#21232)
Use go tool for generators

* Use go.mod tool section
* Install tools with go generate
* Update dependencies
* Remove madmin fork.
2025-04-24 16:34:11 -07:00
Taran Pelkey 18aceae620
Fix nil dereference in adding service account (#21235)
Fixes #21234
2025-04-24 11:14:00 -07:00
Andreas Auernhammer 427826abc5
update `minio/kms-go/kms` SDK (#21233)
Signed-off-by: Andreas Auernhammer <github@aead.dev>
2025-04-24 08:33:57 -07:00
Harshavardhana 2780778c10 Revert "Fix: Change TTFB metric type to histogram (#20999)"
This reverts commit 8d223e07fb.
2025-04-23 13:56:18 -07:00
Shubhendu 2d8ba15b9e
Correct spelling (#21225) 2025-04-23 08:13:23 -07:00
Minio Trusted bd6dd55e7f Update yaml files to latest version RELEASE.2025-04-22T22-12-26Z 2025-04-22 22:34:07 +00:00
490 changed files with 4033 additions and 3550 deletions

View File

@ -1,59 +0,0 @@
name: FIPS Build Test
on:
  pull_request:
    branches:
      - master

# This ensures that previous jobs for the PR are canceled when the PR is
# updated.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  build:
    name: Go BoringCrypto ${{ matrix.go-version }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        go-version: [1.24.x]
        os: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Setup dockerfile for build test
        run: |
          GO_VERSION=$(go version | cut -d ' ' -f 3 | sed 's/go//')
          echo Detected go version $GO_VERSION
          cat > Dockerfile.fips.test <<EOF
          FROM golang:${GO_VERSION}
          COPY . /minio
          WORKDIR /minio
          ENV GOEXPERIMENT=boringcrypto
          RUN make
          EOF
      - name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          file: Dockerfile.fips.test
          push: false
          load: true
          tags: minio/fips-test:latest
      # This should fail if grep returns non-zero exit
      - name: Test binary
        run: |
          docker run --rm minio/fips-test:latest ./minio --version
          docker run --rm -i minio/fips-test:latest /bin/bash -c 'go tool nm ./minio | grep FIPS | grep -q FIPS'

View File

@ -21,7 +21,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.24.0
+          go-version: 1.24.x
           cached: false
       - name: Get official govulncheck
         run: go install golang.org/x/vuln/cmd/govulncheck@latest

View File

@ -24,8 +24,6 @@ help: ## print this help
 getdeps: ## fetch necessary dependencies
 	@mkdir -p ${GOPATH}/bin
 	@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOLANGCI_DIR)
-	@echo "Installing msgp" && go install -v github.com/tinylib/msgp@v1.2.5
-	@echo "Installing stringer" && go install -v golang.org/x/tools/cmd/stringer@latest
 
 crosscompile: ## cross compile minio
 	@(env bash $(PWD)/buildscripts/cross-compile.sh)
 
@ -188,9 +186,9 @@ hotfix-vars:
 	$(eval VERSION := $(shell git describe --tags --abbrev=0).hotfix.$(shell git rev-parse --short HEAD))
 
 hotfix: hotfix-vars clean install ## builds minio binary with hotfix tags
-	@wget -q -c https://github.com/minio/pkger/releases/download/v2.3.10/pkger_2.3.10_linux_amd64.deb
-	@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.1.0/linux-systemd/distributed/minio.service
-	@sudo apt install ./pkger_2.3.10_linux_amd64.deb --yes
+	@wget -q -c https://github.com/minio/pkger/releases/download/v2.3.11/pkger_2.3.11_linux_amd64.deb
+	@wget -q -c https://raw.githubusercontent.com/minio/minio-service/v1.1.1/linux-systemd/distributed/minio.service
+	@sudo apt install ./pkger_2.3.11_linux_amd64.deb --yes
 	@mkdir -p minio-release/$(GOOS)-$(GOARCH)/archive
 	@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio
 	@cp -af ./minio minio-release/$(GOOS)-$(GOARCH)/minio.$(VERSION)

View File

@ -0,0 +1,93 @@
# MinIO Pull Request Guidelines
These guidelines ensure high-quality commits in MinIO's GitHub repositories, maintaining
a clear, valuable commit history for our open-source projects. They apply to all contributors,
fostering efficient reviews and robust code.
## Why Pull Requests?
Pull Requests (PRs) drive quality in MinIO's codebase by:
- Enabling peer review without pair programming.
- Documenting changes for future reference.
- Ensuring commits tell a clear story of development.
**A poor commit lasts forever, even if code is refactored.**
## Crafting a Quality PR
A strong MinIO PR:
- Delivers a complete, valuable change (feature, bug fix, or improvement).
- Has a concise title (e.g., `[S3] Fix bucket policy parsing #1234`) and a summary with context, referencing issues (e.g., `#1234`).
- Contains well-written, logical commits explaining *why* changes were made (e.g., “Add S3 bucket tagging support so that users can organize resources efficiently”).
- Is small, focused, and easy to review—ideally one commit, unless multiple commits better narrate complex work.
- Adheres to MinIO's coding standards (e.g., Go style, error handling, testing).
PRs must flow smoothly through review to reach production. Large PRs should be split into smaller, manageable ones.
## Submitting PRs
1. **Title and Summary**:
- Use a scannable title: `[Subsystem] Action Description #Issue` (e.g., `[IAM] Add role-based access control #567`).
- Include context in the summary: what changed, why, and any issue references.
- Use `[WIP]` for in-progress PRs to avoid premature merging or choose GitHub draft PRs.
2. **Commits**:
- Write clear messages: what changed and why (e.g., “Refactor S3 API handler to reduce latency so that requests process 20% faster”).
- Rebase to tidy commits before submitting (e.g., `git rebase -i main` to squash typos or reword messages), unless multiple contributors worked on the branch.
- Keep PRs focused—one feature or fix. Split large changes into multiple PRs.
3. **Testing**:
- Include unit tests for new functionality or bug fixes.
- Ensure existing tests pass (`make test`).
- Document testing steps in the PR summary if manual testing was performed.
4. **Before Submitting**:
- Run `make verify` to check formatting, linting, and tests.
- Reference related issues (e.g., “Closes #1234”).
- Notify team members via GitHub `@mentions` if urgent or complex.
## Reviewing PRs
Reviewers ensure MinIO's commit history remains a clear, reliable record. Responsibilities include:
1. **Commit Quality**:
- Verify each commit explains *why* the change was made (e.g., “So that…”).
- Request rebasing if commits are unclear, redundant, or lack context (e.g., “Please squash typo fixes into the parent commit”).
2. **Code Quality**:
- Check adherence to MinIO's Go standards (e.g., error handling, documentation).
- Ensure tests cover new code and pass CI.
- Flag bugs or critical issues for immediate fixes; suggest non-blocking improvements as follow-up issues.
3. **Flow**:
- Review promptly to avoid blocking progress.
- Balance quality and speed—minor issues can be addressed later via issues, not PR blocks.
- If unable to complete the review, tag another reviewer (e.g., `@username please take over`).
4. **Shared Responsibility**:
- All MinIO contributors are reviewers. The first commenter on a PR owns the review unless they delegate.
- Multiple reviewers are encouraged for complex PRs.
5. **No Self-Edits**:
- Don't modify the PR directly (e.g., fixing bugs). Request changes from the submitter or create a follow-up PR.
- If you edit, you're a collaborator, not a reviewer, and cannot merge.
6. **Testing**:
- Assume the submitter tested the code. If testing is unclear, ask for details (e.g., “How was this tested?”).
- Reject untested PRs unless testing is infeasible, then assist with test setup.
## Tips for Success
- **Small PRs**: Easier to review, faster to merge. Split large changes logically.
- **Clear Commits**: Use `git rebase -i` to refine history before submitting.
- **Engage Early**: Discuss complex changes in issues or Slack (https://slack.min.io) before coding.
- **Be Responsive**: Address reviewer feedback promptly to keep PRs moving.
- **Learn from Reviews**: Use feedback to improve future contributions.
## Resources
- [MinIO Coding Standards](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
- [Effective Commit Messages](https://mislav.net/2014/02/hidden-documentation/)
- [GitHub PR Tips](https://github.com/blog/1943-how-to-write-the-perfect-pull-request)
By following these guidelines, we ensure MinIO's codebase remains high-quality, maintainable, and a joy to contribute to. Happy coding!

View File

@ -1,7 +0,0 @@
# MinIO FIPS Builds
MinIO creates FIPS builds using a patched version of the Go compiler (that uses BoringCrypto, from BoringSSL, which is [FIPS 140-2 validated](https://csrc.nist.gov/csrc/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp2964.pdf)) published by the Golang Team [here](https://github.com/golang/go/tree/dev.boringcrypto/misc/boring).
MinIO FIPS executables are available at <http://dl.min.io> - they are only published for `linux-amd64` architecture as binary files with the suffix `.fips`. We also publish corresponding container images to our official image repositories.
We are not making any statements or representations about the suitability of this code or build in relation to the FIPS 140-2 standard. Interested users will have to evaluate for themselves whether this is useful for their own purposes.

README.md
View File

@ -4,253 +4,109 @@
 [![MinIO](https://raw.githubusercontent.com/minio/minio/master/.github/logo.svg?sanitize=true)](https://min.io)
 
-MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads. To learn more about what MinIO is doing for AI storage, go to [AI storage documentation](https://min.io/solutions/object-storage-for-ai).
-
-This README provides quickstart instructions on running MinIO on bare metal hardware, including container-based installations. For Kubernetes environments, use the [MinIO Kubernetes Operator](https://github.com/minio/operator/blob/master/README.md).
+MinIO is a high-performance, S3-compatible object storage solution released under the GNU AGPL v3.0 license.
+Designed for speed and scalability, it powers AI/ML, analytics, and data-intensive workloads with industry-leading performance.
+
+- **S3 API Compatible** – Seamless integration with existing S3 tools
+- **Built for AI & Analytics** – Optimized for large-scale data pipelines
+- **High Performance** – Ideal for demanding storage workloads.
+
+This README provides instructions for building MinIO from source and deploying onto baremetal hardware.
+For more complete documentation, see [the MinIO documentation website](https://docs.min.io/community/minio-object-store/index.html)
 
-## Container Installation
-
-Use the following commands to run a standalone MinIO server as a container.
-
-Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication
-require distributed deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically,
-with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)
-for more complete documentation.
-
-### Stable
-
-Run the following command to run the latest stable image of MinIO as a container using an ephemeral data volume:
-
-```sh
-podman run -p 9000:9000 -p 9001:9001 \
-    quay.io/minio/minio server /data --console-address ":9001"
-```
-
-The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded
-object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the
-root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
-
-You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See
-[Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers,
-see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
-
-> NOTE: To deploy MinIO on with persistent storage, you must map local persistent directories from the host OS to the container using the `podman -v` option. For example, `-v /mnt/data:/data` maps the host OS drive at `/mnt/data` to `/data` on the container.
-
-## macOS
-
-Use the following commands to run a standalone MinIO server on macOS.
-
-Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require distributed deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
-
-### Homebrew (recommended)
-
-Run the following command to install the latest stable MinIO package using [Homebrew](https://brew.sh/). Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
-
-```sh
-brew install minio/stable/minio
-minio server /data
-```
-
-> NOTE: If you previously installed minio using `brew install minio` then it is recommended that you reinstall minio from `minio/stable/minio` official repo instead.
-
-```sh
-brew uninstall minio
-brew install minio/stable/minio
-```
-
-The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
-
-You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html/> to view MinIO SDKs for supported languages.
-
-### Binary Download
-
-Use the following command to download and run a standalone MinIO server on macOS. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
-
-```sh
-wget https://dl.min.io/server/minio/release/darwin-amd64/minio
-chmod +x minio
-./minio server /data
-```
-
-The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
-
-You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
-
-## GNU/Linux
-
-Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace ``/data`` with the path to the drive or directory in which you want MinIO to store data.
-
-```sh
-wget https://dl.min.io/server/minio/release/linux-amd64/minio
-chmod +x minio
-./minio server /data
-```
-
-The following table lists supported architectures. Replace the `wget` URL with the architecture for your Linux host.
-
-| Architecture | URL |
-| -------- | ------ |
-| 64-bit Intel/AMD | <https://dl.min.io/server/minio/release/linux-amd64/minio> |
-| 64-bit ARM | <https://dl.min.io/server/minio/release/linux-arm64/minio> |
-| 64-bit PowerPC LE (ppc64le) | <https://dl.min.io/server/minio/release/linux-ppc64le/minio> |
-
-The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
-
-You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
-
-> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require distributed deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html#) for more complete documentation.
-
-## Microsoft Windows
-
-To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:
-
-```sh
-https://dl.min.io/server/minio/release/windows-amd64/minio.exe
-```
-
-Use the following command to run a standalone MinIO server on the Windows host. Replace ``D:\`` with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or powershell directory to the location of the ``minio.exe`` executable, *or* add the path to that directory to the system ``$PATH``:
-
-```sh
-minio.exe server D:\
-```
-
-The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
-
-You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
-
-> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require distributed deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html#) for more complete documentation.
+## MinIO is Open Source Software
+
+We designed MinIO as Open Source software for the Open Source software community.
+We encourage the community to remix, redesign, and reshare MinIO under the terms of the AGPLv3 license.
+
+All usage of MinIO in your application stack requires validation against AGPLv3 obligations, which include but are not limited to the release of modified code to the community from which you have benefited.
+Any commercial/proprietary usage of the AGPLv3 software, including repackaging or reselling services/features, is done at your own risk.
+
+The AGPLv3 provides no obligation by any party to support, maintain, or warranty the original or any modified work.
+All support is provided on a best-effort basis through Github and our [Slack](https://slack.min.io) channel, and any member of the community is welcome to contribute and assist others in their usage of the software.
+
+MinIO [AIStor](https://www.min.io/product/aistor) includes enterprise-grade support and licensing for workloads which require commercial or proprietary usage and production-level SLA/SLO-backed support.
+For more information, [reach out for a quote](https://min.io/pricing).
+
+## Legacy Releases
+
+MinIO has no planned or scheduled releases for this repository.
+While a new release may be cut at any time, there is no timeline for when a subsequent release may occur.
+All existing releases remain accessible through Github or at https://dl.min.io/server/minio/release/ .
 
 ## Install from Source
 
-Use the following commands to compile and run a standalone MinIO server from source. Source installation is only intended for developers and advanced users. If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.24](https://golang.org/dl/#stable)
+Use the following commands to compile and run a standalone MinIO server from source.
+If you do not have a working Golang environment, please follow [How to install Golang](https://golang.org/doc/install). Minimum version required is [go1.24](https://golang.org/dl/#stable)
 
 ```sh
 go install github.com/minio/minio@latest
 ```
 
-The MinIO deployment starts using default root credentials `minioadmin:minioadmin`. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
-
-You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool. See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool. For application developers, see <https://min.io/docs/minio/linux/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
-
-> NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require distributed deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a *minimum* of 4 drives per MinIO server. See [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html) for more complete documentation.
-
-## Deployment Recommendations
-
-### Allow port access for Firewalls
-
-By default MinIO uses the port 9000 to listen for incoming connections. If your platform blocks the port by default, you may need to enable access to the port.
-
-### ufw
-
-For hosts with ufw enabled (Debian based distros), you can use `ufw` command to allow traffic to specific ports. Use below command to allow access to port 9000
-
-```sh
-ufw allow 9000
-```
-
-Below command enables all incoming traffic to ports ranging from 9000 to 9010.
-
-```sh
-ufw allow 9000:9010/tcp
-```
-
-### firewall-cmd
-
-For hosts with firewall-cmd enabled (CentOS), you can use `firewall-cmd` command to allow traffic to specific ports. Use below commands to allow access to port 9000
-
-```sh
-firewall-cmd --get-active-zones
-```
-
-This command gets the active zone(s). Now, apply port rules to the relevant zones returned above. For example if the zone is `public`, use
-
-```sh
-firewall-cmd --zone=public --add-port=9000/tcp --permanent
-```
-
-Note that `permanent` makes sure the rules are persistent across firewall start, restart or reload. Finally reload the firewall for changes to take effect.
-
-```sh
-firewall-cmd --reload
-```
-
-### iptables
-
-For hosts with iptables enabled (RHEL, CentOS, etc), you can use `iptables` command to enable all traffic coming to specific ports. Use below command to allow
-access to port 9000
-
-```sh
-iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
-service iptables restart
-```
-
-Below command enables all incoming traffic to ports ranging from 9000 to 9010.
-
-```sh
-iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
-service iptables restart
-```
+You can alternatively run `go build` and use the `GOOS` and `GOARCH` environment variables to control the OS and architecture target.
+For example:
+
+```
+env GOOS=linux GOARCH=arm64 go build
+```
+
+MinIO strongly recommends *against* using compiled-from-source MinIO servers for production environments.
+
+Start MinIO by running `minio server PATH` where `PATH` is any empty folder on your local filesystem.
+
+The MinIO deployment starts using default root credentials `minioadmin:minioadmin`.
+You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server.
+Point a web browser running on the host machine to <http://127.0.0.1:9000> and log in with the root credentials.
+You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.
+
+You can also connect using any S3-compatible tool, such as the MinIO Client `mc` commandline tool:
+
+```sh
+mc alias set local http://localhost:9000 minioadmin minioadmin
+mc admin info local
+```
+
+See [Test using MinIO Client `mc`](#test-using-minio-client-mc) for more information on using the `mc` commandline tool.
+For application developers, see <https://docs.min.io/community/minio-object-store/developers/minio-drivers.html> to view MinIO SDKs for supported languages.
+
+> [!NOTE]
+> Production environments using compiled-from-source MinIO binaries do so at their own risk.
+> The AGPLv3 license provides no warranties nor liabilities for any such usage.
 
 ## Test MinIO Connectivity
 
 ### Test using MinIO Console
 
-MinIO Server comes with an embedded web based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
-
-> NOTE: MinIO runs console on random port by default, if you wish to choose a specific port use `--console-address` to pick a specific interface and port.
-
-### Things to consider
-
-MinIO redirects browser access requests to the configured server port (i.e. `127.0.0.1:9000`) to the configured Console port. MinIO uses the hostname or IP address specified in the request when building the redirect URL. The URL and port *must* be accessible by the client for the redirection to work.
-
-For deployments behind a load balancer, proxy, or ingress rule where the MinIO host IP address or port is not public, use the `MINIO_BROWSER_REDIRECT_URL` environment variable to specify the external hostname for the redirect. The LB/Proxy must have rules for directing traffic to the Console port specifically.
-
-For example, consider a MinIO deployment behind a proxy `https://minio.example.net`, `https://console.minio.example.net` with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set `MINIO_BROWSER_REDIRECT_URL` to `https://console.minio.example.net` to ensure the browser receives a valid reachable URL.
-
-| Dashboard | Creating a bucket |
-| ------------- | ------------- |
-| ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic1.png?raw=true) | ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic2.png?raw=true) |
-
-## Test using MinIO Client `mc`
-
-`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client [Quickstart Guide](https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart) for further instructions.
-
-## Upgrading MinIO
-
-Upgrades require zero downtime in MinIO, all upgrades are non-disruptive, all transactions on MinIO are atomic. So upgrading all the servers simultaneously is the recommended way to upgrade MinIO.
-
-> NOTE: requires internet access to update directly from <https://dl.min.io>, optionally you can host any mirrors at <https://my-artifactory.example.com/minio/>
-
-- For deployments that installed the MinIO server binary by hand, use [`mc admin update`](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-update.html)
-
-```sh
-mc admin update <minio alias, e.g., myminio>
-```
-
-- For deployments without external internet access (e.g. airgapped environments), download the binary from <https://dl.min.io> and replace the existing MinIO binary let's say for example `/opt/bin/minio`, apply executable permissions `chmod +x /opt/bin/minio` and proceed to perform `mc admin service restart alias/`.
-
-- For installations using Systemd MinIO service, upgrade via RPM/DEB packages **parallelly** on all servers or replace the binary lets say `/opt/bin/minio` on all nodes, apply executable permissions `chmod +x /opt/bin/minio` and process to perform `mc admin service restart alias/`.
-
-### Upgrade Checklist
-
-- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
-- Read the release notes for MinIO *before* performing any upgrade, there is no forced requirement to upgrade to latest release upon every release. Some release may not be relevant to your setup, avoid upgrading production environments unnecessarily.
-- If you plan to use `mc admin update`, MinIO process must have write access to the parent directory where the binary is present on the host system.
-- `mc admin update` is not supported and should be avoided in kubernetes/container environments, please upgrade containers by upgrading relevant container images.
-- **We do not recommend upgrading one MinIO server at a time, the product is designed to support parallel upgrades please follow our recommended guidelines.**
+-MinIO Server comes with an embedded web based object browser. Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
+-
+-> NOTE: MinIO runs console on random port by default, if you wish to choose a specific port use `--console-address` to pick a specific interface and port.
+-
+-### Things to consider
+-
+-MinIO redirects browser access requests to the configured server port (i.e. `127.0.0.1:9000`) to the configured Console port. MinIO uses the hostname or IP address specified in the request when building the redirect URL. The URL and port *must* be accessible by the client for the redirection to work.
+-
+-For deployments behind a load balancer, proxy, or ingress rule where the MinIO host IP address or port is not public, use the `MINIO_BROWSER_REDIRECT_URL` environment variable to specify the external hostname for the redirect. The LB/Proxy must have rules for directing traffic to the Console port specifically.
+-
+-For example, consider a MinIO deployment behind a proxy `https://minio.example.net`, `https://console.minio.example.net` with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set `MINIO_BROWSER_REDIRECT_URL` to `https://console.minio.example.net` to ensure the browser receives a valid reachable URL.
+-
+-| Dashboard | Creating a bucket |
+-| ------------- | ------------- |
+-| ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic1.png?raw=true) | ![Dashboard](https://github.com/minio/minio/blob/master/docs/screenshots/pic2.png?raw=true) |
+-
+-## Test using MinIO Client `mc`
+-
+-`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client [Quickstart Guide](https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart) for further instructions.
+-
+-## Upgrading MinIO
+-
+-Upgrades require zero downtime in MinIO, all upgrades are non-disruptive, all transactions on MinIO are atomic. So upgrading all the servers simultaneously is the recommended way to upgrade MinIO.
+-
+-> NOTE: requires internet access to update directly from <https://dl.min.io>, optionally you can host any mirrors at <https://my-artifactory.example.com/minio/>
+-
+-- For deployments that installed the MinIO server binary by hand, use [`mc admin update`](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-update.html)
+-
+-```sh
+-mc admin update <minio alias, e.g., myminio>
+-```
+-
+-- For deployments without external internet access (e.g. airgapped environments), download the binary from <https://dl.min.io> and replace the existing MinIO binary let's say for example `/opt/bin/minio`, apply executable permissions `chmod +x /opt/bin/minio` and proceed to perform `mc admin service restart alias/`.
+-
+-- For installations using Systemd MinIO service, upgrade via RPM/DEB packages **parallelly** on all servers or replace the binary lets say `/opt/bin/minio` on all nodes, apply executable permissions `chmod +x /opt/bin/minio` and process to perform `mc admin service restart alias/`.
+-
+-### Upgrade Checklist
+-
+-- Test all upgrades in a lower environment (DEV, QA, UAT) before applying to production. Performing blind upgrades in production environments carries significant risk.
+-- Read the release notes for MinIO *before* performing any upgrade, there is no forced requirement to upgrade to latest release upon every release. Some release may not be relevant to your setup, avoid upgrading production environments unnecessarily.
+-- If you plan to use `mc admin update`, MinIO process must have write access to the parent directory where the binary is present on the host system.
+-- `mc admin update` is not supported and should be avoided in kubernetes/container environments, please upgrade containers by upgrading relevant container images.
+-- **We do not recommend upgrading one MinIO server at a time, the product is designed to support parallel upgrades please follow our recommended guidelines.**
+MinIO Server comes with an embedded web based object browser.
+Point your web browser to <http://127.0.0.1:9000> to ensure your server has started successfully.
+
+> [!NOTE]
+> MinIO runs console on random port by default, if you wish to choose a specific port use `--console-address` to pick a specific interface and port.
+
+### Test using MinIO Client `mc`
+
+`mc` provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services.
+The following commands set a local alias, validate the server information, create a bucket, copy data to that bucket, and list the contents of the bucket.
+
+```sh
+mc alias set local http://localhost:9000 minioadmin minioadmin
+mc admin info
+mc mb data
+mc cp ~/Downloads/mydata data/
+mc ls data/
+```
+
+Follow the MinIO Client [Quickstart Guide](https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart) for further instructions.
 
 ## Explore Further
 
-- [MinIO Erasure Code Overview](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)
-- [Use `mc` with MinIO Server](https://min.io/docs/minio/linux/reference/minio-mc.html)
-- [Use `minio-go` SDK with MinIO Server](https://min.io/docs/minio/linux/developers/go/minio-go.html)
-- [The MinIO documentation website](https://min.io/docs/minio/linux/index.html)
+- [The MinIO documentation website](https://docs.min.io/community/minio-object-store/index.html)
+- [MinIO Erasure Code Overview](https://docs.min.io/community/minio-object-store/operations/concepts/erasure-coding.html)
+- [Use `mc` with MinIO Server](https://docs.min.io/community/minio-object-store/reference/minio-mc.html)
+- [Use `minio-go` SDK with MinIO Server](https://docs.min.io/community/minio-object-store/developers/go/minio-go.html)
 
 ## Contribute to MinIO Project
 
-Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md)
+Please follow MinIO [Contributor's Guide](https://github.com/minio/minio/blob/master/CONTRIBUTING.md) for guidance on making new contributions to the repository.
 
 ## License

View File

@ -74,11 +74,11 @@ check_minimum_version() {
 assert_is_supported_arch() {
 	case "${ARCH}" in
-	x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64)
+	x86_64 | amd64 | aarch64 | ppc64le | arm* | s390x | loong64 | loongarch64 | riscv64)
 		return
 		;;
 	*)
-		echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64]"
+		echo "Arch '${ARCH}' is not supported. Supported Arch: [x86_64, amd64, aarch64, ppc64le, arm*, s390x, loong64, loongarch64, riscv64]"
 		exit 1
 		;;
 	esac

View File

@ -9,7 +9,7 @@ function _init() {
 	export CGO_ENABLED=0
 
 	## List of architectures and OS to test coss compilation.
-	SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64"
+	SUPPORTED_OSARCH="linux/ppc64le linux/mips64 linux/amd64 linux/arm64 linux/s390x darwin/arm64 darwin/amd64 freebsd/amd64 windows/amd64 linux/arm linux/386 netbsd/amd64 linux/mips openbsd/amd64 linux/riscv64"
 }
 
 function _build() {

View File

@ -193,27 +193,27 @@ func (a adminAPIHandlers) SetConfigKVHandler(w http.ResponseWriter, r *http.Requ
 func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (result setConfigResult, err error) {
 	result.Cfg, err = readServerConfig(ctx, objectAPI, nil)
 	if err != nil {
-		return
+		return result, err
 	}
 
 	result.Dynamic, err = result.Cfg.ReadConfig(bytes.NewReader(kvBytes))
 	if err != nil {
-		return
+		return result, err
 	}
 
 	result.SubSys, _, _, err = config.GetSubSys(string(kvBytes))
 	if err != nil {
-		return
+		return result, err
 	}
 
 	tgts, err := config.ParseConfigTargetID(bytes.NewReader(kvBytes))
 	if err != nil {
-		return
+		return result, err
 	}
 	ctx = context.WithValue(ctx, config.ContextKeyForTargetFromConfig, tgts)
 	if verr := validateConfig(ctx, result.Cfg, result.SubSys); verr != nil {
 		err = badConfigErr{Err: verr}
-		return
+		return result, err
 	}
 
 	// Check if subnet proxy being set and if so set the same value to proxy of subnet
@ -222,12 +222,12 @@ func setConfigKV(ctx context.Context, objectAPI ObjectLayer, kvBytes []byte) (re
 
 	// Update the actual server config on disk.
 	if err = saveServerConfig(ctx, objectAPI, result.Cfg); err != nil {
-		return
+		return result, err
 	}
 
 	// Write the config input KV to history.
 	err = saveServerConfigHistory(ctx, objectAPI, kvBytes)
-	return
+	return result, err
 }
 
 // GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}
// GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key} // GetConfigKVHandler - GET /minio/admin/v3/get-config-kv?key={key}

View File

@ -445,8 +445,10 @@ func (a adminAPIHandlers) ListAccessKeysLDAP(w http.ResponseWriter, r *http.Requ
 	for _, svc := range serviceAccounts {
 		expiryTime := svc.Expiration
 		serviceAccountList = append(serviceAccountList, madmin.ServiceAccountInfo{
-			AccessKey:  svc.AccessKey,
-			Expiration: &expiryTime,
+			AccessKey:   svc.AccessKey,
+			Expiration:  &expiryTime,
+			Name:        svc.Name,
+			Description: svc.Description,
 		})
 	}
 	for _, sts := range stsKeys {
@ -625,8 +627,10 @@ func (a adminAPIHandlers) ListAccessKeysLDAPBulk(w http.ResponseWriter, r *http.
 	}
 	for _, svc := range serviceAccounts {
 		accessKeys.ServiceAccounts = append(accessKeys.ServiceAccounts, madmin.ServiceAccountInfo{
-			AccessKey:  svc.AccessKey,
-			Expiration: &svc.Expiration,
+			AccessKey:   svc.AccessKey,
+			Expiration:  &svc.Expiration,
+			Name:        svc.Name,
+			Description: svc.Description,
 		})
 	}
 	// if only service accounts, skip if user has no service accounts

View File

@ -173,6 +173,8 @@ func (a adminAPIHandlers) ListAccessKeysOpenIDBulk(w http.ResponseWriter, r *htt
 		if _, ok := accessKey.Claims[iamPolicyClaimNameOpenID()]; !ok {
 			continue // skip if no roleArn and no policy claim
 		}
+		// claim-based provider is in the roleArnMap under dummy ARN
+		arn = dummyRoleARN
 	}
 	matchingCfgName, ok := roleArnMap[arn]
 	if !ok {

View File

@ -61,7 +61,7 @@ func (a adminAPIHandlers) StartDecommission(w http.ResponseWriter, r *http.Reque
 		return
 	}
 
-	if z.IsRebalanceStarted() {
+	if z.IsRebalanceStarted(ctx) {
 		writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
 		return
 	}
@ -277,7 +277,7 @@ func (a adminAPIHandlers) RebalanceStart(w http.ResponseWriter, r *http.Request)
 		return
 	}
 
-	if pools.IsRebalanceStarted() {
+	if pools.IsRebalanceStarted(ctx) {
 		writeErrorResponseJSON(ctx, w, errorCodes.ToAPIErr(ErrAdminRebalanceAlreadyStarted), r.URL)
 		return
 	}
@ -380,7 +380,7 @@ func (a adminAPIHandlers) RebalanceStop(w http.ResponseWriter, r *http.Request)
 func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w http.ResponseWriter, r *http.Request) (proxy bool) {
 	host := env.Get("_MINIO_DECOM_ENDPOINT_HOST", defaultEndPoint.Host)
 	if host == "" {
-		return
+		return proxy
 	}
 	for nodeIdx, proxyEp := range globalProxyEndpoints {
 		if proxyEp.Host == host && !proxyEp.IsLocal {
@ -389,5 +389,5 @@ func proxyDecommissionRequest(ctx context.Context, defaultEndPoint Endpoint, w h
 			}
 		}
 	}
-	return
+	return proxy
 }


@ -70,7 +70,7 @@ func (a adminAPIHandlers) SiteReplicationAdd(w http.ResponseWriter, r *http.Requ
func getSRAddOptions(r *http.Request) (opts madmin.SRAddOptions) { func getSRAddOptions(r *http.Request) (opts madmin.SRAddOptions) {
opts.ReplicateILMExpiry = r.Form.Get("replicateILMExpiry") == "true" opts.ReplicateILMExpiry = r.Form.Get("replicateILMExpiry") == "true"
return return opts
} }
// SRPeerJoin - PUT /minio/admin/v3/site-replication/join // SRPeerJoin - PUT /minio/admin/v3/site-replication/join
@ -304,7 +304,7 @@ func (a adminAPIHandlers) SRPeerGetIDPSettings(w http.ResponseWriter, r *http.Re
} }
} }
func parseJSONBody(ctx context.Context, body io.Reader, v interface{}, encryptionKey string) error { func parseJSONBody(ctx context.Context, body io.Reader, v any, encryptionKey string) error {
data, err := io.ReadAll(body) data, err := io.ReadAll(body)
if err != nil { if err != nil {
return SRError{ return SRError{
@ -422,7 +422,7 @@ func (a adminAPIHandlers) SiteReplicationEdit(w http.ResponseWriter, r *http.Req
func getSREditOptions(r *http.Request) (opts madmin.SREditOptions) { func getSREditOptions(r *http.Request) (opts madmin.SREditOptions) {
opts.DisableILMExpiryReplication = r.Form.Get("disableILMExpiryReplication") == "true" opts.DisableILMExpiryReplication = r.Form.Get("disableILMExpiryReplication") == "true"
opts.EnableILMExpiryReplication = r.Form.Get("enableILMExpiryReplication") == "true" opts.EnableILMExpiryReplication = r.Form.Get("enableILMExpiryReplication") == "true"
return return opts
} }
// SRPeerEdit - PUT /minio/admin/v3/site-replication/peer/edit // SRPeerEdit - PUT /minio/admin/v3/site-replication/peer/edit
@ -484,7 +484,7 @@ func getSRStatusOptions(r *http.Request) (opts madmin.SRStatusOptions) {
opts.EntityValue = q.Get("entityvalue") opts.EntityValue = q.Get("entityvalue")
opts.ShowDeleted = q.Get("showDeleted") == "true" opts.ShowDeleted = q.Get("showDeleted") == "true"
opts.Metrics = q.Get("metrics") == "true" opts.Metrics = q.Get("metrics") == "true"
return return opts
} }
// SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove // SiteReplicationRemove - PUT /minio/admin/v3/site-replication/remove


@ -89,7 +89,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
// Create a policy // Create a policy
policy := "mypolicy" policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -104,7 +104,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
] ]
} }
] ]
}`, bucket)) }`, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -113,7 +113,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
userCount := 50 userCount := 50
accessKeys := make([]string, userCount) accessKeys := make([]string, userCount)
secretKeys := make([]string, userCount) secretKeys := make([]string, userCount)
for i := 0; i < userCount; i++ { for i := range userCount {
accessKey, secretKey := mustGenerateCredentials(c) accessKey, secretKey := mustGenerateCredentials(c)
err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled) err = s.adm.SetUser(ctx, accessKey, secretKey, madmin.AccountEnabled)
if err != nil { if err != nil {
@ -133,7 +133,7 @@ func (s *TestSuiteIAM) TestDeleteUserRace(c *check) {
} }
g := errgroup.Group{} g := errgroup.Group{}
for i := 0; i < userCount; i++ { for i := range userCount {
g.Go(func(i int) func() error { g.Go(func(i int) func() error {
return func() error { return func() error {
uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "") uClient := s.getUserClient(c, accessKeys[i], secretKeys[i], "")
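This test hunk applies two of the mechanical modernizations seen throughout this changeset: `fmt.Appendf(nil, ...)` instead of `[]byte(fmt.Sprintf(...))`, and `for i := range n` (Go 1.22+) instead of a counted loop. A minimal sketch showing both are drop-in equivalents:

```go
package main

import "fmt"

func main() {
	bucket := "testbucket"

	// Old style: format to a string, then convert to []byte (extra allocation).
	oldPolicy := []byte(fmt.Sprintf(`{"Resource":"arn:aws:s3:::%s/*"}`, bucket))

	// New style: fmt.Appendf formats directly into a byte slice.
	newPolicy := fmt.Appendf(nil, `{"Resource":"arn:aws:s3:::%s/*"}`, bucket)

	fmt.Println(string(oldPolicy) == string(newPolicy)) // true

	// Go 1.22+: ranging over an int replaces the classic counted loop.
	for i := range 3 {
		fmt.Println("user", i)
	}
}
```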


@ -24,6 +24,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"maps"
"net/http" "net/http"
"os" "os"
"slices" "slices"
@ -157,9 +158,7 @@ func (a adminAPIHandlers) ListUsers(w http.ResponseWriter, r *http.Request) {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL) writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
return return
} }
for k, v := range ldapUsers { maps.Copy(allCredentials, ldapUsers)
allCredentials[k] = v
}
// Marshal the response // Marshal the response
data, err := json.Marshal(allCredentials) data, err := json.Marshal(allCredentials)
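Here the hand-rolled map copy loop is replaced with `maps.Copy` from the standard library (Go 1.21+), which is why `"maps"` is added to the imports above. A minimal equivalent:

```go
package main

import (
	"fmt"
	"maps"
)

func main() {
	allCreds := map[string]string{"builtin-user": "enabled"}
	ldapUsers := map[string]string{"uid=alice,dc=example": "enabled"}

	// Equivalent to: for k, v := range ldapUsers { allCreds[k] = v }
	maps.Copy(allCreds, ldapUsers)

	fmt.Println(len(allCreds)) // 2
}
```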
@ -1827,16 +1826,18 @@ func (a adminAPIHandlers) SetPolicyForUserOrGroup(w http.ResponseWriter, r *http
iamLogIf(ctx, err) iamLogIf(ctx, err)
} else if foundGroupDN == nil || !underBaseDN { } else if foundGroupDN == nil || !underBaseDN {
err = errNoSuchGroup err = errNoSuchGroup
} else {
entityName = foundGroupDN.NormDN
} }
entityName = foundGroupDN.NormDN
} else { } else {
var foundUserDN *xldap.DNSearchResult var foundUserDN *xldap.DNSearchResult
if foundUserDN, err = globalIAMSys.LDAPConfig.GetValidatedDNForUsername(entityName); err != nil { if foundUserDN, err = globalIAMSys.LDAPConfig.GetValidatedDNForUsername(entityName); err != nil {
iamLogIf(ctx, err) iamLogIf(ctx, err)
} else if foundUserDN == nil { } else if foundUserDN == nil {
err = errNoSuchUser err = errNoSuchUser
} else {
entityName = foundUserDN.NormDN
} }
entityName = foundUserDN.NormDN
} }
if err != nil { if err != nil {
writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL) writeErrorResponseJSON(ctx, w, toAdminAPIErr(ctx, err), r.URL)
@ -2947,7 +2948,7 @@ func commonAddServiceAccount(r *http.Request, ldap bool) (context.Context, auth.
name: createReq.Name, name: createReq.Name,
description: description, description: description,
expiration: createReq.Expiration, expiration: createReq.Expiration,
claims: make(map[string]interface{}), claims: make(map[string]any),
} }
condValues := getConditionValues(r, "", cred) condValues := getConditionValues(r, "", cred)
@ -2959,7 +2960,7 @@ func commonAddServiceAccount(r *http.Request, ldap bool) (context.Context, auth.
denyOnly := (targetUser == cred.AccessKey || targetUser == cred.ParentUser) denyOnly := (targetUser == cred.AccessKey || targetUser == cred.ParentUser)
if ldap && !denyOnly { if ldap && !denyOnly {
res, _ := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser) res, _ := globalIAMSys.LDAPConfig.GetValidatedDNForUsername(targetUser)
if res.NormDN == cred.ParentUser { if res != nil && res.NormDN == cred.ParentUser {
denyOnly = true denyOnly = true
} }
} }
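Both fixes in this file guard against dereferencing a nil lookup result: `entityName` is now only overwritten inside an `else` branch once the DN search actually returned a result, and `res.NormDN` is only read after checking `res != nil`. A reduced sketch of the pattern, with hypothetical stand-in types for the LDAP search result:

```go
package main

import "fmt"

// DNResult stands in for xldap.DNSearchResult in this sketch.
type DNResult struct{ NormDN string }

// lookupDN stands in for GetValidatedDNForUsername: it may return
// (nil, nil) when the user simply does not exist.
func lookupDN(user string) (*DNResult, error) {
	if user == "alice" {
		return &DNResult{NormDN: "uid=alice,dc=example"}, nil
	}
	return nil, nil // not found is not an error here
}

func main() {
	name := "bob"
	res, err := lookupDN(name)
	switch {
	case err != nil:
		fmt.Println("lookup error:", err)
	case res == nil:
		fmt.Println("no such user") // would have panicked on res.NormDN
	default:
		name = res.NormDN // only dereference once res is known non-nil
	}
	fmt.Println(name)
}
```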


@ -332,7 +332,7 @@ func (s *TestSuiteIAM) TestUserPolicyEscalationBug(c *check) {
// 2.2 create and associate policy to user // 2.2 create and associate policy to user
policy := "mypolicy-test-user-update" policy := "mypolicy-test-user-update"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -355,7 +355,7 @@ func (s *TestSuiteIAM) TestUserPolicyEscalationBug(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -562,7 +562,7 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
// 1. Create a policy // 1. Create a policy
policy := "mypolicy" policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -585,7 +585,7 @@ func (s *TestSuiteIAM) TestPolicyCreate(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -680,7 +680,7 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
c.Fatalf("bucket creat error: %v", err) c.Fatalf("bucket creat error: %v", err)
} }
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -703,7 +703,7 @@ func (s *TestSuiteIAM) TestCannedPolicies(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
// Check that default policies can be overwritten. // Check that default policies can be overwritten.
err = s.adm.AddCannedPolicy(ctx, "readwrite", policyBytes) err = s.adm.AddCannedPolicy(ctx, "readwrite", policyBytes)
@ -739,7 +739,7 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
} }
policy := "mypolicy" policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -762,7 +762,7 @@ func (s *TestSuiteIAM) TestGroupAddRemove(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -911,7 +911,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
// Create policy, user and associate policy // Create policy, user and associate policy
policy := "mypolicy" policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -934,7 +934,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByUser(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -995,7 +995,7 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
// Create policy, user and associate policy // Create policy, user and associate policy
policy := "mypolicy" policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -1026,7 +1026,7 @@ func (s *TestSuiteIAM) TestServiceAccountDurationSecondsCondition(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -1093,7 +1093,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
// Create policy, user and associate policy // Create policy, user and associate policy
policy := "mypolicy" policy := "mypolicy"
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -1116,7 +1116,7 @@ func (s *TestSuiteIAM) TestServiceAccountOpsByAdmin(c *check) {
] ]
} }
] ]
}`, bucket, bucket)) }`, bucket, bucket)
err = s.adm.AddCannedPolicy(ctx, policy, policyBytes) err = s.adm.AddCannedPolicy(ctx, policy, policyBytes)
if err != nil { if err != nil {
c.Fatalf("policy add error: %v", err) c.Fatalf("policy add error: %v", err)
@ -1367,7 +1367,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
svcAK, svcSK := mustGenerateCredentials(c) svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects. // This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -1381,7 +1381,7 @@ func (s *TestSuiteIAM) TestAccMgmtPlugin(c *check) {
] ]
} }
] ]
}`, bucket)) }`, bucket)
cr, err := userAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{ cr, err := userAdmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes, Policy: policyBytes,
TargetUser: accessKey, TargetUser: accessKey,
@ -1558,7 +1558,7 @@ func (c *check) mustDownload(ctx context.Context, client *minio.Client, bucket s
func (c *check) mustUploadReturnVersions(ctx context.Context, client *minio.Client, bucket string) []string { func (c *check) mustUploadReturnVersions(ctx context.Context, client *minio.Client, bucket string) []string {
c.Helper() c.Helper()
versions := []string{} versions := []string{}
for i := 0; i < 5; i++ { for range 5 {
ui, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{}) ui, err := client.PutObject(ctx, bucket, "some-object", bytes.NewBuffer([]byte("stuff")), 5, minio.PutObjectOptions{})
if err != nil { if err != nil {
c.Fatalf("upload did not succeed got %#v", err) c.Fatalf("upload did not succeed got %#v", err)
@ -1627,7 +1627,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
svcAK, svcSK := mustGenerateCredentials(c) svcAK, svcSK := mustGenerateCredentials(c)
// This policy does not allow listing objects. // This policy does not allow listing objects.
policyBytes := []byte(fmt.Sprintf(`{ policyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -1641,7 +1641,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
] ]
} }
] ]
}`, bucket)) }`, bucket)
cr, err := madmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{ cr, err := madmClient.AddServiceAccount(ctx, madmin.AddServiceAccountReq{
Policy: policyBytes, Policy: policyBytes,
TargetUser: accessKey, TargetUser: accessKey,
@ -1655,7 +1655,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
c.mustNotListObjects(ctx, svcClient, bucket) c.mustNotListObjects(ctx, svcClient, bucket)
// This policy allows listing objects. // This policy allows listing objects.
newPolicyBytes := []byte(fmt.Sprintf(`{ newPolicyBytes := fmt.Appendf(nil, `{
"Version": "2012-10-17", "Version": "2012-10-17",
"Statement": [ "Statement": [
{ {
@ -1668,7 +1668,7 @@ func (c *check) assertSvcAccSessionPolicyUpdate(ctx context.Context, s *TestSuit
] ]
} }
] ]
}`, bucket)) }`, bucket)
err = madmClient.UpdateServiceAccount(ctx, svcAK, madmin.UpdateServiceAccountReq{ err = madmClient.UpdateServiceAccount(ctx, svcAK, madmin.UpdateServiceAccountReq{
NewPolicy: newPolicyBytes, NewPolicy: newPolicyBytes,
}) })


@ -954,7 +954,7 @@ func (a adminAPIHandlers) ForceUnlockHandler(w http.ResponseWriter, r *http.Requ
var args dsync.LockArgs var args dsync.LockArgs
var lockers []dsync.NetLocker var lockers []dsync.NetLocker
for _, path := range strings.Split(vars["paths"], ",") { for path := range strings.SplitSeq(vars["paths"], ",") {
if path == "" { if path == "" {
continue continue
} }
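`strings.SplitSeq` (Go 1.24) yields the pieces one at a time as an iterator instead of allocating the whole `[]string` that `strings.Split` returns, which suits this loop that only ever ranges over the result. A minimal standalone sketch:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	paths := "bucket/a,,bucket/b"

	// strings.Split would allocate a []string up front;
	// strings.SplitSeq (Go 1.24+) yields each piece lazily instead.
	for path := range strings.SplitSeq(paths, ",") {
		if path == "" {
			continue // same empty-segment skip as in ForceUnlockHandler
		}
		fmt.Println(path)
	}
}
```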
@ -1193,7 +1193,7 @@ type dummyFileInfo struct {
mode os.FileMode mode os.FileMode
modTime time.Time modTime time.Time
isDir bool isDir bool
sys interface{} sys any
} }
func (f dummyFileInfo) Name() string { return f.name } func (f dummyFileInfo) Name() string { return f.name }
@ -1201,7 +1201,7 @@ func (f dummyFileInfo) Size() int64 { return f.size }
func (f dummyFileInfo) Mode() os.FileMode { return f.mode } func (f dummyFileInfo) Mode() os.FileMode { return f.mode }
func (f dummyFileInfo) ModTime() time.Time { return f.modTime } func (f dummyFileInfo) ModTime() time.Time { return f.modTime }
func (f dummyFileInfo) IsDir() bool { return f.isDir } func (f dummyFileInfo) IsDir() bool { return f.isDir }
func (f dummyFileInfo) Sys() interface{} { return f.sys } func (f dummyFileInfo) Sys() any { return f.sys }
// DownloadProfilingHandler - POST /minio/admin/v3/profiling/download // DownloadProfilingHandler - POST /minio/admin/v3/profiling/download
// ---------- // ----------
@ -1243,17 +1243,17 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if hip.objPrefix != "" { if hip.objPrefix != "" {
// Bucket is required if object-prefix is given // Bucket is required if object-prefix is given
err = ErrHealMissingBucket err = ErrHealMissingBucket
return return hip, err
} }
} else if isReservedOrInvalidBucket(hip.bucket, false) { } else if isReservedOrInvalidBucket(hip.bucket, false) {
err = ErrInvalidBucketName err = ErrInvalidBucketName
return return hip, err
} }
// empty prefix is valid. // empty prefix is valid.
if !IsValidObjectPrefix(hip.objPrefix) { if !IsValidObjectPrefix(hip.objPrefix) {
err = ErrInvalidObjectName err = ErrInvalidObjectName
return return hip, err
} }
if len(qParams[mgmtClientToken]) > 0 { if len(qParams[mgmtClientToken]) > 0 {
@ -1275,7 +1275,7 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if (hip.forceStart && hip.forceStop) || if (hip.forceStart && hip.forceStop) ||
(hip.clientToken != "" && (hip.forceStart || hip.forceStop)) { (hip.clientToken != "" && (hip.forceStart || hip.forceStop)) {
err = ErrInvalidRequest err = ErrInvalidRequest
return return hip, err
} }
// ignore body if clientToken is provided // ignore body if clientToken is provided
@ -1284,12 +1284,12 @@ func extractHealInitParams(vars map[string]string, qParams url.Values, r io.Read
if jerr != nil { if jerr != nil {
adminLogIf(GlobalContext, jerr, logger.ErrorKind) adminLogIf(GlobalContext, jerr, logger.ErrorKind)
err = ErrRequestBodyParse err = ErrRequestBodyParse
return return hip, err
} }
} }
err = ErrNone err = ErrNone
return return hip, err
} }
// HealHandler - POST /minio/admin/v3/heal/ // HealHandler - POST /minio/admin/v3/heal/
@ -2022,7 +2022,7 @@ func extractTraceOptions(r *http.Request) (opts madmin.ServiceTraceOpts, err err
opts.OS = true opts.OS = true
// Older mc - cannot deal with more types... // Older mc - cannot deal with more types...
} }
return return opts, err
} }
// TraceHandler - POST /minio/admin/v3/trace // TraceHandler - POST /minio/admin/v3/trace


@ -402,7 +402,7 @@ func (b byResourceUID) Less(i, j int) bool {
func TestTopLockEntries(t *testing.T) { func TestTopLockEntries(t *testing.T) {
locksHeld := make(map[string][]lockRequesterInfo) locksHeld := make(map[string][]lockRequesterInfo)
var owners []string var owners []string
for i := 0; i < 4; i++ { for i := range 4 {
owners = append(owners, fmt.Sprintf("node-%d", i)) owners = append(owners, fmt.Sprintf("node-%d", i))
} }
@ -410,7 +410,7 @@ func TestTopLockEntries(t *testing.T) {
// request UID, but 10 different resource names associated with it. // request UID, but 10 different resource names associated with it.
var lris []lockRequesterInfo var lris []lockRequesterInfo
uuid := mustGetUUID() uuid := mustGetUUID()
for i := 0; i < 10; i++ { for i := range 10 {
resource := fmt.Sprintf("bucket/delete-object-%d", i) resource := fmt.Sprintf("bucket/delete-object-%d", i)
lri := lockRequesterInfo{ lri := lockRequesterInfo{
Name: resource, Name: resource,
@ -425,7 +425,7 @@ func TestTopLockEntries(t *testing.T) {
} }
// Add a few concurrent read locks to the mix // Add a few concurrent read locks to the mix
for i := 0; i < 50; i++ { for i := range 50 {
resource := fmt.Sprintf("bucket/get-object-%d", i) resource := fmt.Sprintf("bucket/get-object-%d", i)
lri := lockRequesterInfo{ lri := lockRequesterInfo{
Name: resource, Name: resource,


@ -22,6 +22,7 @@ import (
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"maps"
"net/http" "net/http"
"sort" "sort"
"sync" "sync"
@ -520,9 +521,7 @@ func (h *healSequence) getScannedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value // Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.scannedItemsMap)) retMap := make(map[madmin.HealItemType]int64, len(h.scannedItemsMap))
for k, v := range h.scannedItemsMap { maps.Copy(retMap, h.scannedItemsMap)
retMap[k] = v
}
return retMap return retMap
} }
@ -534,9 +533,7 @@ func (h *healSequence) getHealedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value // Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healedItemsMap)) retMap := make(map[madmin.HealItemType]int64, len(h.healedItemsMap))
for k, v := range h.healedItemsMap { maps.Copy(retMap, h.healedItemsMap)
retMap[k] = v
}
return retMap return retMap
} }
@ -549,9 +546,7 @@ func (h *healSequence) getHealFailedItemsMap() map[madmin.HealItemType]int64 {
// Make a copy before returning the value // Make a copy before returning the value
retMap := make(map[madmin.HealItemType]int64, len(h.healFailedItemsMap)) retMap := make(map[madmin.HealItemType]int64, len(h.healFailedItemsMap))
for k, v := range h.healFailedItemsMap { maps.Copy(retMap, h.healFailedItemsMap)
retMap[k] = v
}
return retMap return retMap
} }


@ -23,6 +23,7 @@ import (
"encoding/json" "encoding/json"
"encoding/xml" "encoding/xml"
"fmt" "fmt"
"mime"
"net/http" "net/http"
"strconv" "strconv"
"strings" "strings"
@ -64,7 +65,7 @@ func setCommonHeaders(w http.ResponseWriter) {
} }
// Encodes the response headers into XML format. // Encodes the response headers into XML format.
func encodeResponse(response interface{}) []byte { func encodeResponse(response any) []byte {
var buf bytes.Buffer var buf bytes.Buffer
buf.WriteString(xml.Header) buf.WriteString(xml.Header)
if err := xml.NewEncoder(&buf).Encode(response); err != nil { if err := xml.NewEncoder(&buf).Encode(response); err != nil {
@ -82,7 +83,7 @@ func encodeResponse(response interface{}) []byte {
// Do not use this function for anything other than ListObjects() // Do not use this function for anything other than ListObjects()
// variants, please open a github discussion if you wish to use // variants, please open a github discussion if you wish to use
// this in other places. // this in other places.
func encodeResponseList(response interface{}) []byte { func encodeResponseList(response any) []byte {
var buf bytes.Buffer var buf bytes.Buffer
buf.WriteString(xxml.Header) buf.WriteString(xxml.Header)
if err := xxml.NewEncoder(&buf).Encode(response); err != nil { if err := xxml.NewEncoder(&buf).Encode(response); err != nil {
@ -93,7 +94,7 @@ func encodeResponseList(response interface{}) []byte {
} }
// Encodes the response headers into JSON format. // Encodes the response headers into JSON format.
func encodeResponseJSON(response interface{}) []byte { func encodeResponseJSON(response any) []byte {
var bytesBuffer bytes.Buffer var bytesBuffer bytes.Buffer
e := json.NewEncoder(&bytesBuffer) e := json.NewEncoder(&bytesBuffer)
e.Encode(response) e.Encode(response)
@ -168,6 +169,32 @@ func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo Object
if !stringsHasPrefixFold(k, userMetadataPrefix) { if !stringsHasPrefixFold(k, userMetadataPrefix) {
continue continue
} }
// check the doc https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html
// For metadata values like "ö", "ÄMÄZÕÑ S3", and "öha, das sollte eigentlich
// funktionieren", tested against a real AWS S3 bucket, S3 itself may encode the value
// incorrectly. For example, "ö" was encoded as =?UTF-8?B?w4PCtg==?=, which is
// double-encoded UTF-8 (mojibake), instead of =?UTF-8?B?w7Y=?=. The same corruption
// shows up with other non-ASCII strings.
//
// S3 uses B-encoding (Base64) for non-ASCII-heavy metadata and Q-encoding
// (quoted-printable) for mostly ASCII strings. Long strings are split at word
// boundaries to fit RFC 2047's 75-character limit, ensuring HTTP parser
// compatibility.
//
// However, this splitting increases header size and can introduce errors, unlike Go's
// mime package in MinIO, which correctly encodes strings with fixed B/Q encodings,
// avoiding S3's heuristic-driven issues.
//
// For MinIO developers, decode S3 metadata with mime.WordDecoder, validate outputs,
// report encoding bugs to AWS, and use ASCII-only metadata to ensure reliable S3 API
// compatibility.
if needsMimeEncoding(v) {
// see https://github.com/golang/go/blob/release-branch.go1.24/src/net/mail/message.go#L325
if strings.ContainsAny(v, "\"#$%&'(),.:;<>@[]^`{|}~") {
v = mime.BEncoding.Encode("UTF-8", v)
} else {
v = mime.QEncoding.Encode("UTF-8", v)
}
}
w.Header()[strings.ToLower(k)] = []string{v} w.Header()[strings.ToLower(k)] = []string{v}
isSet = true isSet = true
break break
@ -229,3 +256,14 @@ func setObjectHeaders(ctx context.Context, w http.ResponseWriter, objInfo Object
return nil return nil
} }
// needsMimeEncoding reports whether s contains any character that needs to be encoded.
// see mime.needsEncoding
func needsMimeEncoding(s string) bool {
for _, b := range s {
if (b < ' ' || b > '~') && b != '\t' {
return true
}
}
return false
}
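A small round-trip sketch of the selection logic added above: B-encoding when the value contains RFC 2047 special characters, Q-encoding otherwise, and `mime.WordDecoder` to read the header value back on the client side. This is a standalone illustration, not the handler code itself:

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// encodeMetaValue mirrors the branch above: B-encoding for values with
// RFC 2047 specials, Q-encoding for the rest. Pure ASCII values would be
// written through unchanged by the caller.
func encodeMetaValue(v string) string {
	if strings.ContainsAny(v, "\"#$%&'(),.:;<>@[]^`{|}~") {
		return mime.BEncoding.Encode("UTF-8", v)
	}
	return mime.QEncoding.Encode("UTF-8", v)
}

func main() {
	// Contains a comma, so this takes the B-encoded branch.
	v := encodeMetaValue("öha, das sollte eigentlich funktionieren")
	fmt.Println(v) // an =?UTF-8?B?...?= encoded-word

	dec := new(mime.WordDecoder)
	decoded, err := dec.DecodeHeader(v)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // original value restored
}
```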


@ -31,7 +31,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
var err error var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil { if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys errCode = ErrInvalidMaxKeys
return return prefix, marker, delimiter, maxkeys, encodingType, errCode
} }
} else { } else {
maxkeys = maxObjectList maxkeys = maxObjectList
@ -41,7 +41,7 @@ func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string,
marker = values.Get("marker") marker = values.Get("marker")
delimiter = values.Get("delimiter") delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type") encodingType = values.Get("encoding-type")
return return prefix, marker, delimiter, maxkeys, encodingType, errCode
} }
func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimiter string, maxkeys int, encodingType, versionIDMarker string, errCode APIErrorCode) { func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimiter string, maxkeys int, encodingType, versionIDMarker string, errCode APIErrorCode) {
@ -51,7 +51,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
var err error var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil { if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys errCode = ErrInvalidMaxKeys
return return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
} }
} else { } else {
maxkeys = maxObjectList maxkeys = maxObjectList
@ -62,7 +62,7 @@ func getListBucketObjectVersionsArgs(values url.Values) (prefix, marker, delimit
delimiter = values.Get("delimiter") delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type") encodingType = values.Get("encoding-type")
versionIDMarker = values.Get("version-id-marker") versionIDMarker = values.Get("version-id-marker")
return return prefix, marker, delimiter, maxkeys, encodingType, versionIDMarker, errCode
} }
// Parse bucket url queries for ListObjects V2. // Parse bucket url queries for ListObjects V2.
@ -73,7 +73,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
if val, ok := values["continuation-token"]; ok { if val, ok := values["continuation-token"]; ok {
if len(val[0]) == 0 { if len(val[0]) == 0 {
errCode = ErrIncorrectContinuationToken errCode = ErrIncorrectContinuationToken
return return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
} }
} }
@ -81,7 +81,7 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
var err error var err error
if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil { if maxkeys, err = strconv.Atoi(values.Get("max-keys")); err != nil {
errCode = ErrInvalidMaxKeys errCode = ErrInvalidMaxKeys
return return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
} }
} else { } else {
maxkeys = maxObjectList maxkeys = maxObjectList
@ -97,11 +97,11 @@ func getListObjectsV2Args(values url.Values) (prefix, token, startAfter, delimit
decodedToken, err := base64.StdEncoding.DecodeString(token) decodedToken, err := base64.StdEncoding.DecodeString(token)
if err != nil { if err != nil {
errCode = ErrIncorrectContinuationToken errCode = ErrIncorrectContinuationToken
return return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
} }
token = string(decodedToken) token = string(decodedToken)
} }
return return prefix, token, startAfter, delimiter, fetchOwner, maxkeys, encodingType, errCode
} }
// Parse bucket url queries for ?uploads // Parse bucket url queries for ?uploads
@ -112,7 +112,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
var err error var err error
if maxUploads, err = strconv.Atoi(values.Get("max-uploads")); err != nil { if maxUploads, err = strconv.Atoi(values.Get("max-uploads")); err != nil {
errCode = ErrInvalidMaxUploads errCode = ErrInvalidMaxUploads
return return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
} }
} else { } else {
maxUploads = maxUploadsList maxUploads = maxUploadsList
@ -123,7 +123,7 @@ func getBucketMultipartResources(values url.Values) (prefix, keyMarker, uploadID
uploadIDMarker = values.Get("upload-id-marker") uploadIDMarker = values.Get("upload-id-marker")
delimiter = values.Get("delimiter") delimiter = values.Get("delimiter")
encodingType = values.Get("encoding-type") encodingType = values.Get("encoding-type")
return return prefix, keyMarker, uploadIDMarker, delimiter, maxUploads, encodingType, errCode
} }
// Parse object url queries // Parse object url queries
@ -134,7 +134,7 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
if values.Get("max-parts") != "" { if values.Get("max-parts") != "" {
if maxParts, err = strconv.Atoi(values.Get("max-parts")); err != nil { if maxParts, err = strconv.Atoi(values.Get("max-parts")); err != nil {
errCode = ErrInvalidMaxParts errCode = ErrInvalidMaxParts
return return uploadID, partNumberMarker, maxParts, encodingType, errCode
} }
} else { } else {
maxParts = maxPartsList maxParts = maxPartsList
@ -143,11 +143,11 @@ func getObjectResources(values url.Values) (uploadID string, partNumberMarker, m
if values.Get("part-number-marker") != "" { if values.Get("part-number-marker") != "" {
if partNumberMarker, err = strconv.Atoi(values.Get("part-number-marker")); err != nil { if partNumberMarker, err = strconv.Atoi(values.Get("part-number-marker")); err != nil {
errCode = ErrInvalidPartNumberMarker errCode = ErrInvalidPartNumberMarker
return return uploadID, partNumberMarker, maxParts, encodingType, errCode
} }
} }
uploadID = values.Get("uploadId") uploadID = values.Get("uploadId")
encodingType = values.Get("encoding-type") encodingType = values.Get("encoding-type")
return return uploadID, partNumberMarker, maxParts, encodingType, errCode
} }


@ -100,7 +100,6 @@ func TestObjectLocation(t *testing.T) {
}, },
} }
for _, testCase := range testCases { for _, testCase := range testCases {
testCase := testCase
t.Run("", func(t *testing.T) { t.Run("", func(t *testing.T) {
gotLocation := getObjectLocation(testCase.request, testCase.domains, testCase.bucket, testCase.object) gotLocation := getObjectLocation(testCase.request, testCase.domains, testCase.bucket, testCase.object)
if testCase.expectedLocation != gotLocation { if testCase.expectedLocation != gotLocation {
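The deleted `testCase := testCase` shadow copy became dead code in Go 1.22, where each loop iteration gets its own copy of the loop variable, so closures no longer share one mutating variable. A minimal sketch of the semantics the cleanup relies on:

```go
package main

import "fmt"

func main() {
	results := make(chan int, 3)
	for i := range 3 {
		// Before Go 1.22 every closure captured the same loop variable,
		// so code commonly added "i := i" here. With Go 1.22+ semantics
		// each iteration has its own i and the shadow copy is redundant.
		go func() { results <- i }()
	}
	sum := 0
	for range 3 {
		sum += <-results
	}
	fmt.Println(sum) // 3 (0+1+2), regardless of goroutine order
}
```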


@ -387,6 +387,11 @@ func registerAPIRouter(router *mux.Router) {
HeadersRegexp(xhttp.AmzSnowballExtract, "true"). HeadersRegexp(xhttp.AmzSnowballExtract, "true").
HandlerFunc(s3APIMiddleware(api.PutObjectExtractHandler, traceHdrsS3HFlag)) HandlerFunc(s3APIMiddleware(api.PutObjectExtractHandler, traceHdrsS3HFlag))
// AppendObject to be rejected
router.Methods(http.MethodPut).Path("/{object:.+}").
HeadersRegexp(xhttp.AmzWriteOffsetBytes, "").
HandlerFunc(s3APIMiddleware(errorResponseHandler))
// PutObject // PutObject
router.Methods(http.MethodPut).Path("/{object:.+}"). router.Methods(http.MethodPut).Path("/{object:.+}").
HandlerFunc(s3APIMiddleware(api.PutObjectHandler, traceHdrsS3HFlag)) HandlerFunc(s3APIMiddleware(api.PutObjectHandler, traceHdrsS3HFlag))
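The new route matches any PUT carrying the `x-amz-write-offset-bytes` header (the append-write API) and rejects it before the catch-all PutObject route can claim it; with an empty pattern, `HeadersRegexp` matches on mere presence of the header. A standalone sketch assuming a gorilla/mux-style router (which MinIO's router wraps); the plain-text error below is a placeholder for the real S3 XML error response:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()

	// Routes are evaluated in registration order, so the rejection route
	// must be registered before the generic PUT route.
	r.Methods(http.MethodPut).Path("/{object:.+}").
		HeadersRegexp("X-Amz-Write-Offset-Bytes", "").
		HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			http.Error(w, "append writes are not supported", http.StatusNotImplemented)
		})

	r.Methods(http.MethodPut).Path("/{object:.+}").
		HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "regular PutObject path")
		})

	log.Fatal(http.ListenAndServe(":8080", r))
}
```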


@ -43,7 +43,7 @@ func shouldEscape(c byte) bool {
// - Force encoding of '~' // - Force encoding of '~'
func s3URLEncode(s string) string { func s3URLEncode(s string) string {
spaceCount, hexCount := 0, 0 spaceCount, hexCount := 0, 0
for i := 0; i < len(s); i++ { for i := range len(s) {
c := s[i] c := s[i]
if shouldEscape(c) { if shouldEscape(c) {
if c == ' ' { if c == ' ' {
@ -70,7 +70,7 @@ func s3URLEncode(s string) string {
if hexCount == 0 { if hexCount == 0 {
copy(t, s) copy(t, s)
for i := 0; i < len(s); i++ { for i := range len(s) {
if s[i] == ' ' { if s[i] == ' ' {
t[i] = '+' t[i] = '+'
} }
@ -79,7 +79,7 @@ func s3URLEncode(s string) string {
} }
j := 0 j := 0
for i := 0; i < len(s); i++ { for i := range len(s) {
switch c := s[i]; { switch c := s[i]; {
case c == ' ': case c == ' ':
t[j] = '+' t[j] = '+'


@ -216,7 +216,7 @@ func getSessionToken(r *http.Request) (token string) {
// Fetch claims in the security token returned by the client, doesn't return // Fetch claims in the security token returned by the client, doesn't return
// errors - upon errors the returned claims map will be empty. // errors - upon errors the returned claims map will be empty.
func mustGetClaimsFromToken(r *http.Request) map[string]interface{} { func mustGetClaimsFromToken(r *http.Request) map[string]any {
claims, _ := getClaimsFromToken(getSessionToken(r)) claims, _ := getClaimsFromToken(getSessionToken(r))
return claims return claims
} }
@ -266,7 +266,7 @@ func getClaimsFromTokenWithSecret(token, secret string) (*xjwt.MapClaims, error)
} }
// Fetch claims in the security token returned by the client. // Fetch claims in the security token returned by the client.
func getClaimsFromToken(token string) (map[string]interface{}, error) { func getClaimsFromToken(token string) (map[string]any, error) {
jwtClaims, err := getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey) jwtClaims, err := getClaimsFromTokenWithSecret(token, globalActiveCred.SecretKey)
if err != nil { if err != nil {
return nil, err return nil, err
@ -275,7 +275,7 @@ func getClaimsFromToken(token string) (map[string]interface{}, error) {
} }
// Fetch claims in the security token returned by the client and validate the token. // Fetch claims in the security token returned by the client and validate the token.
func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]interface{}, APIErrorCode) { func checkClaimsFromToken(r *http.Request, cred auth.Credentials) (map[string]any, APIErrorCode) {
token := getSessionToken(r) token := getSessionToken(r)
if token != "" && cred.AccessKey == "" { if token != "" && cred.AccessKey == "" {
// x-amz-security-token is not allowed for anonymous access. // x-amz-security-token is not allowed for anonymous access.


@ -102,7 +102,7 @@ func waitForLowHTTPReq() {
func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) { func initBackgroundHealing(ctx context.Context, objAPI ObjectLayer) {
bgSeq := newBgHealSequence() bgSeq := newBgHealSequence()
// Run the background healer // Run the background healer
for i := 0; i < globalBackgroundHealRoutine.workers; i++ { for range globalBackgroundHealRoutine.workers {
go globalBackgroundHealRoutine.AddWorker(ctx, objAPI, bgSeq) go globalBackgroundHealRoutine.AddWorker(ctx, objAPI, bgSeq)
} }


@ -24,6 +24,7 @@ import (
"fmt" "fmt"
"io" "io"
"os" "os"
"slices"
"sort" "sort"
"strings" "strings"
"sync" "sync"
@ -269,12 +270,7 @@ func (h *healingTracker) delete(ctx context.Context) error {
func (h *healingTracker) isHealed(bucket string) bool { func (h *healingTracker) isHealed(bucket string) bool {
h.mu.RLock() h.mu.RLock()
defer h.mu.RUnlock() defer h.mu.RUnlock()
for _, v := range h.HealedBuckets { return slices.Contains(h.HealedBuckets, bucket)
if v == bucket {
return true
}
}
return false
} }
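`slices.Contains` (Go 1.21+) replaces the hand-rolled membership loop, which is why `"slices"` joins the imports above. A minimal equivalent:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	healedBuckets := []string{"photos", "logs"}

	// Equivalent to ranging over the slice and comparing each element.
	fmt.Println(slices.Contains(healedBuckets, "logs"))   // true
	fmt.Println(slices.Contains(healedBuckets, "backup")) // false
}
```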
// resume will reset progress to the numbers at the start of the bucket. // resume will reset progress to the numbers at the start of the bucket.


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"github.com/tinylib/msgp/msgp" "github.com/tinylib/msgp/msgp"
) )


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"time" "time"


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -25,6 +25,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"maps"
"math/rand" "math/rand"
"net/http" "net/http"
"net/url" "net/url"
@ -248,7 +249,7 @@ func (r *BatchJobReplicateV1) copyWithMultipartfromSource(ctx context.Context, a
pInfo PartInfo pInfo PartInfo
) )
for i := 0; i < partsCount; i++ { for i := range partsCount {
gopts := minio.GetObjectOptions{ gopts := minio.GetObjectOptions{
VersionID: srcObjInfo.VersionID, VersionID: srcObjInfo.VersionID,
PartNumber: i + 1, PartNumber: i + 1,
@ -574,9 +575,7 @@ func toObjectInfo(bucket, object string, objInfo minio.ObjectInfo) ObjectInfo {
oi.UserDefined[xhttp.AmzStorageClass] = objInfo.StorageClass oi.UserDefined[xhttp.AmzStorageClass] = objInfo.StorageClass
} }
for k, v := range objInfo.UserMetadata { maps.Copy(oi.UserDefined, objInfo.UserMetadata)
oi.UserDefined[k] = v
}
return oi return oi
} }
@ -997,7 +996,16 @@ func (ri *batchJobInfo) updateAfter(ctx context.Context, api ObjectLayer, durati
// a single action. e.g batch-expire has an option to expire all versions of an // a single action. e.g batch-expire has an option to expire all versions of an
// object which matches the given filters. // object which matches the given filters.
func (ri *batchJobInfo) trackMultipleObjectVersions(info expireObjInfo, success bool) { func (ri *batchJobInfo) trackMultipleObjectVersions(info expireObjInfo, success bool) {
if ri == nil {
return
}
ri.mu.Lock()
defer ri.mu.Unlock()
if success { if success {
ri.Bucket = info.Bucket
ri.Object = info.Name
ri.Objects += int64(info.NumVersions) - info.DeleteMarkerCount ri.Objects += int64(info.NumVersions) - info.DeleteMarkerCount
ri.DeleteMarkers += info.DeleteMarkerCount ri.DeleteMarkers += info.DeleteMarkerCount
} else { } else {
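The fix above hardens `trackMultipleObjectVersions` in two ways: it tolerates a nil receiver and it takes the stats mutex before mutating shared counters. A reduced sketch of both guards, with hypothetical field names:

```go
package main

import (
	"fmt"
	"sync"
)

type jobInfo struct {
	mu      sync.Mutex
	objects int64
}

// track is safe to call on a nil *jobInfo and safe to call concurrently:
// the nil check avoids a panic when no job is being tracked, and the
// mutex serializes updates to the shared counter.
func (ri *jobInfo) track(n int64) {
	if ri == nil {
		return
	}
	ri.mu.Lock()
	defer ri.mu.Unlock()
	ri.objects += n
}

func main() {
	var none *jobInfo
	none.track(5) // no panic: methods may run with a nil receiver

	ri := &jobInfo{}
	var wg sync.WaitGroup
	for range 10 {
		wg.Add(1)
		go func() { defer wg.Done(); ri.track(1) }()
	}
	wg.Wait()
	fmt.Println(ri.objects) // 10
}
```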


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"github.com/tinylib/msgp/msgp" "github.com/tinylib/msgp/msgp"
) )


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -275,7 +275,7 @@ func (sf BatchJobSizeFilter) Validate() error {
type BatchJobSize int64 type BatchJobSize int64
// UnmarshalYAML to parse humanized byte values // UnmarshalYAML to parse humanized byte values
func (s *BatchJobSize) UnmarshalYAML(unmarshal func(interface{}) error) error { func (s *BatchJobSize) UnmarshalYAML(unmarshal func(any) error) error {
var batchExpireSz string var batchExpireSz string
err := unmarshal(&batchExpireSz) err := unmarshal(&batchExpireSz)
if err != nil { if err != nil {


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"github.com/tinylib/msgp/msgp" "github.com/tinylib/msgp/msgp"
) )


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"github.com/tinylib/msgp/msgp" "github.com/tinylib/msgp/msgp"
) )


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -21,6 +21,7 @@ import (
"context" "context"
"encoding/base64" "encoding/base64"
"fmt" "fmt"
"maps"
"math/rand" "math/rand"
"net/http" "net/http"
"runtime" "runtime"
@ -110,9 +111,7 @@ func (e BatchJobKeyRotateEncryption) Validate() error {
} }
} }
e.kmsContext = kms.Context{} e.kmsContext = kms.Context{}
for k, v := range ctx { maps.Copy(e.kmsContext, ctx)
e.kmsContext[k] = v
}
ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation ctx["MinIO batch API"] = "batchrotate" // Context for a test key operation
if _, err := GlobalKMS.GenerateKey(GlobalContext, &kms.GenerateKeyRequest{Name: e.Key, AssociatedData: ctx}); err != nil { if _, err := GlobalKMS.GenerateKey(GlobalContext, &kms.GenerateKeyRequest{Name: e.Key, AssociatedData: ctx}); err != nil {
return err return err
@ -225,9 +224,7 @@ func (r *BatchJobKeyRotateV1) KeyRotate(ctx context.Context, api ObjectLayer, ob
// Since we are rotating the keys, make sure to update the metadata. // Since we are rotating the keys, make sure to update the metadata.
oi.metadataOnly = true oi.metadataOnly = true
oi.keyRotation = true oi.keyRotation = true
for k, v := range encMetadata { maps.Copy(oi.UserDefined, encMetadata)
oi.UserDefined[k] = v
}
if _, err := api.CopyObject(ctx, r.Bucket, oi.Name, r.Bucket, oi.Name, oi, ObjectOptions{ if _, err := api.CopyObject(ctx, r.Bucket, oi.Name, r.Bucket, oi.Name, oi, ObjectOptions{
VersionID: oi.VersionID, VersionID: oi.VersionID,
}, ObjectOptions{ }, ObjectOptions{


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"github.com/tinylib/msgp/msgp" "github.com/tinylib/msgp/msgp"
) )


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -51,8 +51,8 @@ func runPutObjectBenchmark(b *testing.B, obj ObjectLayer, objSize int) {
// benchmark utility which helps obtain number of allocations and bytes allocated per ops. // benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs() b.ReportAllocs()
// the actual benchmark for PutObject starts here. Reset the benchmark timer. // the actual benchmark for PutObject starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; b.Loop(); i++ {
// insert the object. // insert the object.
objInfo, err := obj.PutObject(b.Context(), bucket, "object"+strconv.Itoa(i), objInfo, err := obj.PutObject(b.Context(), bucket, "object"+strconv.Itoa(i),
mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{}) mustGetPutObjReader(b, bytes.NewReader(textData), int64(len(textData)), md5hex, sha256hex), ObjectOptions{})
@ -101,11 +101,11 @@ func runPutObjectPartBenchmark(b *testing.B, obj ObjectLayer, partSize int) {
// benchmark utility which helps obtain number of allocations and bytes allocated per ops. // benchmark utility which helps obtain number of allocations and bytes allocated per ops.
b.ReportAllocs() b.ReportAllocs()
// the actual benchmark for PutObjectPart starts here. Reset the benchmark timer. // the actual benchmark for PutObjectPart starts here. Reset the benchmark timer.
b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; b.Loop(); i++ {
// insert the object. // insert the object.
totalPartsNR := int(math.Ceil(float64(objSize) / float64(partSize))) totalPartsNR := int(math.Ceil(float64(objSize) / float64(partSize)))
for j := 0; j < totalPartsNR; j++ { for j := range totalPartsNR {
if j < totalPartsNR-1 { if j < totalPartsNR-1 {
textPartData = textData[j*partSize : (j+1)*partSize-1] textPartData = textData[j*partSize : (j+1)*partSize-1]
} else { } else {
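`b.Loop()` (Go 1.24) replaces the `b.ResetTimer()` plus `for i := 0; i < b.N; i++` idiom: it manages the timer itself and prevents the compiler from optimizing the loop body away. A minimal sketch, to live in a `_test.go` file:

```go
package bench

import (
	"strconv"
	"testing"
)

func BenchmarkItoa(b *testing.B) {
	// Go 1.24: b.Loop() handles timer management, so the explicit
	// b.ResetTimer() before the loop is no longer needed. A counter
	// can still run alongside it when each iteration needs a distinct
	// value, as in the object-name benchmarks above.
	for i := 0; b.Loop(); i++ {
		_ = strconv.Itoa(i)
	}
}
```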


@ -99,7 +99,7 @@ func BitrotAlgorithmFromString(s string) (a BitrotAlgorithm) {
return alg return alg
} }
} }
return return a
} }
func newBitrotWriter(disk StorageAPI, origvolume, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer { func newBitrotWriter(disk StorageAPI, origvolume, volume, filePath string, length int64, algo BitrotAlgorithm, shardSize int64) io.Writer {


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"github.com/tinylib/msgp/msgp" "github.com/tinylib/msgp/msgp"
) )
@ -59,19 +59,17 @@ func (z *ServerSystemConfig) DecodeMsg(dc *msgp.Reader) (err error) {
if z.MinioEnv == nil { if z.MinioEnv == nil {
z.MinioEnv = make(map[string]string, zb0003) z.MinioEnv = make(map[string]string, zb0003)
} else if len(z.MinioEnv) > 0 { } else if len(z.MinioEnv) > 0 {
for key := range z.MinioEnv { clear(z.MinioEnv)
delete(z.MinioEnv, key)
}
} }
for zb0003 > 0 { for zb0003 > 0 {
zb0003-- zb0003--
var za0002 string var za0002 string
var za0003 string
za0002, err = dc.ReadString() za0002, err = dc.ReadString()
if err != nil { if err != nil {
err = msgp.WrapError(err, "MinioEnv") err = msgp.WrapError(err, "MinioEnv")
return return
} }
var za0003 string
za0003, err = dc.ReadString() za0003, err = dc.ReadString()
if err != nil { if err != nil {
err = msgp.WrapError(err, "MinioEnv", za0002) err = msgp.WrapError(err, "MinioEnv", za0002)
@ -240,14 +238,12 @@ func (z *ServerSystemConfig) UnmarshalMsg(bts []byte) (o []byte, err error) {
if z.MinioEnv == nil { if z.MinioEnv == nil {
z.MinioEnv = make(map[string]string, zb0003) z.MinioEnv = make(map[string]string, zb0003)
} else if len(z.MinioEnv) > 0 { } else if len(z.MinioEnv) > 0 {
for key := range z.MinioEnv { clear(z.MinioEnv)
delete(z.MinioEnv, key)
}
} }
for zb0003 > 0 { for zb0003 > 0 {
var za0002 string
var za0003 string var za0003 string
zb0003-- zb0003--
var za0002 string
za0002, bts, err = msgp.ReadStringBytes(bts) za0002, bts, err = msgp.ReadStringBytes(bts)
if err != nil { if err != nil {
err = msgp.WrapError(err, "MinioEnv") err = msgp.WrapError(err, "MinioEnv")
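`clear(m)` (Go 1.21 built-in) empties a map in place, replacing the delete-in-range loop the msgp generator previously emitted. A minimal equivalent:

```go
package main

import "fmt"

func main() {
	env := map[string]string{"MINIO_ROOT_USER": "x", "MINIO_REGION": "y"}

	// Equivalent to: for k := range env { delete(env, k) }
	// but expressed with the Go 1.21 built-in.
	clear(env)

	fmt.Println(len(env)) // 0; the map's storage is retained for reuse
}
```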


@ -1,7 +1,7 @@
package cmd
// Code generated by github.com/tinylib/msgp DO NOT EDIT. // Code generated by github.com/tinylib/msgp DO NOT EDIT.
package cmd
import ( import (
"bytes" "bytes"
"testing" "testing"


@ -154,7 +154,6 @@ func initFederatorBackend(buckets []string, objLayer ObjectLayer) {
g := errgroup.WithNErrs(len(bucketsToBeUpdatedSlice)).WithConcurrency(50) g := errgroup.WithNErrs(len(bucketsToBeUpdatedSlice)).WithConcurrency(50)
for index := range bucketsToBeUpdatedSlice { for index := range bucketsToBeUpdatedSlice {
index := index
g.Go(func() error { g.Go(func() error {
return globalDNSConfig.Put(bucketsToBeUpdatedSlice[index]) return globalDNSConfig.Put(bucketsToBeUpdatedSlice[index])
}, index) }, index)
@ -593,7 +592,7 @@ func (api objectAPIHandlers) DeleteMultipleObjectsHandler(w http.ResponseWriter,
output[idx] = obj output[idx] = obj
idx++ idx++
} }
return return output
} }
// Disable timeouts and cancellation // Disable timeouts and cancellation
@ -1089,6 +1088,14 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
break break
} }
// check if we have a file
if reader == nil {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The file or text content is missing"))
writeErrorResponse(ctx, w, apiErr, r.URL)
return
}
if keyName, ok := formValues["Key"]; !ok { if keyName, ok := formValues["Key"]; !ok {
apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest) apiErr := errorCodes.ToAPIErr(ErrMalformedPOSTRequest)
apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing")) apiErr.Description = fmt.Sprintf("%s (%v)", apiErr.Description, errors.New("The name of the uploaded key is missing"))
@ -1379,10 +1386,7 @@ func (api objectAPIHandlers) PostPolicyBucketHandler(w http.ResponseWriter, r *h
// Set the correct hex md5sum for the fan-out stream. // Set the correct hex md5sum for the fan-out stream.
fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil)) fanOutOpts.MD5Hex = hex.EncodeToString(md5w.Sum(nil))
concurrentSize := 100 concurrentSize := min(runtime.GOMAXPROCS(0), 100)
if runtime.GOMAXPROCS(0) < concurrentSize {
concurrentSize = runtime.GOMAXPROCS(0)
}
fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries)) fanOutResp := make([]minio.PutObjectFanOutResponse, 0, len(fanOutEntries))
eventArgsList := make([]eventArgs, 0, len(fanOutEntries)) eventArgsList := make([]eventArgs, 0, len(fanOutEntries))
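`min` is a built-in since Go 1.21, so the cap-to-GOMAXPROCS dance above collapses to one expression. A minimal sketch:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Old form:
	//   concurrentSize := 100
	//   if runtime.GOMAXPROCS(0) < concurrentSize {
	//       concurrentSize = runtime.GOMAXPROCS(0)
	//   }
	concurrentSize := min(runtime.GOMAXPROCS(0), 100)
	fmt.Println("fan-out workers:", concurrentSize)
}
```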
@ -1653,9 +1657,11 @@ func (api objectAPIHandlers) HeadBucketHandler(w http.ResponseWriter, r *http.Re
return return
} }
if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone { if s3Error := checkRequestAuthType(ctx, r, policy.HeadBucketAction, bucket, ""); s3Error != ErrNone {
writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error)) if s3Error := checkRequestAuthType(ctx, r, policy.ListBucketAction, bucket, ""); s3Error != ErrNone {
return writeErrorResponseHeadersOnly(w, errorCodes.ToAPIErr(s3Error))
return
}
} }
getBucketInfo := objectAPI.GetBucketInfo getBucketInfo := objectAPI.GetBucketInfo


@ -657,7 +657,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
sha256sum := "" sha256sum := ""
var objectNames []string var objectNames []string
for i := 0; i < 10; i++ { for i := range 10 {
contentBytes := []byte("hello") contentBytes := []byte("hello")
objectName := "test-object-" + strconv.Itoa(i) objectName := "test-object-" + strconv.Itoa(i)
if i == 0 { if i == 0 {
@ -687,7 +687,7 @@ func testAPIDeleteMultipleObjectsHandler(obj ObjectLayer, instanceType, bucketNa
// The following block will create a bucket policy with delete object to 'public/*'. This is // The following block will create a bucket policy with delete object to 'public/*'. This is
// to test a mixed response of a successful & failure while deleting objects in a single request // to test a mixed response of a successful & failure while deleting objects in a single request
policyBytes := []byte(fmt.Sprintf(`{"Id": "Policy1637752602639", "Version": "2012-10-17", "Statement": [{"Sid": "Stmt1637752600730", "Action": "s3:DeleteObject", "Effect": "Allow", "Resource": "arn:aws:s3:::%s/public/*", "Principal": "*"}]}`, bucketName)) policyBytes := fmt.Appendf(nil, `{"Id": "Policy1637752602639", "Version": "2012-10-17", "Statement": [{"Sid": "Stmt1637752600730", "Action": "s3:DeleteObject", "Effect": "Allow", "Resource": "arn:aws:s3:::%s/public/*", "Principal": "*"}]}`, bucketName)
rec := httptest.NewRecorder() rec := httptest.NewRecorder()
req, err := newTestSignedRequestV4(http.MethodPut, getPutPolicyURL("", bucketName), int64(len(policyBytes)), bytes.NewReader(policyBytes), req, err := newTestSignedRequestV4(http.MethodPut, getPutPolicyURL("", bucketName), int64(len(policyBytes)), bytes.NewReader(policyBytes),
credentials.AccessKey, credentials.SecretKey, nil) credentials.AccessKey, credentials.SecretKey, nil)


@ -23,6 +23,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"maps"
"net/http" "net/http"
"strconv" "strconv"
"strings" "strings"
@ -959,9 +960,7 @@ func putRestoreOpts(bucket, object string, rreq *RestoreObjectRequest, objInfo O
UserDefined: meta, UserDefined: meta,
} }
} }
for k, v := range objInfo.UserDefined { maps.Copy(meta, objInfo.UserDefined)
meta[k] = v
}
if len(objInfo.UserTags) != 0 { if len(objInfo.UserTags) != 0 {
meta[xhttp.AmzObjectTagging] = objInfo.UserTags meta[xhttp.AmzObjectTagging] = objInfo.UserTags
} }


@ -248,19 +248,19 @@ func proxyRequestByToken(ctx context.Context, w http.ResponseWriter, r *http.Req
if subToken, nodeIndex = parseRequestToken(token); nodeIndex >= 0 { if subToken, nodeIndex = parseRequestToken(token); nodeIndex >= 0 {
proxied, success = proxyRequestByNodeIndex(ctx, w, r, nodeIndex, returnErr) proxied, success = proxyRequestByNodeIndex(ctx, w, r, nodeIndex, returnErr)
} }
return return subToken, proxied, success
} }
func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http.Request, index int, returnErr bool) (proxied, success bool) { func proxyRequestByNodeIndex(ctx context.Context, w http.ResponseWriter, r *http.Request, index int, returnErr bool) (proxied, success bool) {
if len(globalProxyEndpoints) == 0 { if len(globalProxyEndpoints) == 0 {
return return proxied, success
} }
if index < 0 || index >= len(globalProxyEndpoints) { if index < 0 || index >= len(globalProxyEndpoints) {
return return proxied, success
} }
ep := globalProxyEndpoints[index] ep := globalProxyEndpoints[index]
if ep.IsLocal { if ep.IsLocal {
return return proxied, success
} }
return true, proxyRequest(ctx, w, r, ep, returnErr) return true, proxyRequest(ctx, w, r, ep, returnErr)
} }


@ -472,7 +472,7 @@ func (sys *BucketMetadataSys) GetConfig(ctx context.Context, bucket string) (met
return meta, reloaded, nil return meta, reloaded, nil
} }
val, err, _ := sys.group.Do(bucket, func() (val interface{}, err error) { val, err, _ := sys.group.Do(bucket, func() (val any, err error) {
meta, err = loadBucketMetadata(ctx, objAPI, bucket) meta, err = loadBucketMetadata(ctx, objAPI, bucket)
if err != nil { if err != nil {
if !sys.Initialized() { if !sys.Initialized() {
@ -511,7 +511,6 @@ func (sys *BucketMetadataSys) concurrentLoad(ctx context.Context, buckets []stri
g := errgroup.WithNErrs(len(buckets)) g := errgroup.WithNErrs(len(buckets))
bucketMetas := make([]BucketMetadata, len(buckets)) bucketMetas := make([]BucketMetadata, len(buckets))
for index := range buckets { for index := range buckets {
index := index
g.Go(func() error { g.Go(func() error {
// Sleep and stagger to avoid blocked CPU and thundering // Sleep and stagger to avoid blocked CPU and thundering
// herd upon start up sequence. // herd upon start up sequence.


@@ -38,7 +38,6 @@ import (
 	"github.com/minio/minio/internal/bucket/versioning"
 	"github.com/minio/minio/internal/crypto"
 	"github.com/minio/minio/internal/event"
-	"github.com/minio/minio/internal/fips"
 	"github.com/minio/minio/internal/kms"
 	"github.com/minio/minio/internal/logger"
 	"github.com/minio/pkg/v3/policy"
@@ -162,7 +161,7 @@ func (b BucketMetadata) lastUpdate() (t time.Time) {
 		t = b.BucketTargetsConfigMetaUpdatedAt
 	}
-	return
+	return t
 }
 // Versioning returns true if versioning is enabled
@@ -543,26 +542,26 @@ func (b *BucketMetadata) migrateTargetConfig(ctx context.Context, objectAPI Obje
 func encryptBucketMetadata(ctx context.Context, bucket string, input []byte, kmsContext kms.Context) (output, metabytes []byte, err error) {
 	if GlobalKMS == nil {
 		output = input
-		return
+		return output, metabytes, err
 	}
 	metadata := make(map[string]string)
 	key, err := GlobalKMS.GenerateKey(ctx, &kms.GenerateKeyRequest{AssociatedData: kmsContext})
 	if err != nil {
-		return
+		return output, metabytes, err
 	}
 	outbuf := bytes.NewBuffer(nil)
 	objectKey := crypto.GenerateKey(key.Plaintext, rand.Reader)
 	sealedKey := objectKey.Seal(key.Plaintext, crypto.GenerateIV(rand.Reader), crypto.S3.String(), bucket, "")
 	crypto.S3.CreateMetadata(metadata, key.KeyID, key.Ciphertext, sealedKey)
-	_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
+	_, err = sio.Encrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20})
 	if err != nil {
 		return output, metabytes, err
 	}
 	metabytes, err = json.Marshal(metadata)
 	if err != nil {
-		return
+		return output, metabytes, err
 	}
 	return outbuf.Bytes(), metabytes, nil
 }
@@ -590,6 +589,6 @@ func decryptBucketMetadata(input []byte, bucket string, meta map[string]string,
 	}
 	outbuf := bytes.NewBuffer(nil)
-	_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
+	_, err = sio.Decrypt(outbuf, bytes.NewBuffer(input), sio.Config{Key: objectKey[:], MinVersion: sio.Version20})
 	return outbuf.Bytes(), err
 }
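These hunks drop the explicit `CipherSuites: fips.DARECiphers()` restriction, leaving `sio` to negotiate its default DARE cipher suites. A minimal sketch of round-tripping data through `sio` with such a config (a standalone example under that assumption; the key handling here is illustrative only):

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"

	"github.com/minio/sio"
)

func main() {
	key := make([]byte, 32) // DARE requires a 256-bit key
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	var sealed bytes.Buffer
	// With no CipherSuites listed, sio chooses from its defaults.
	if _, err := sio.Encrypt(&sealed, bytes.NewReader([]byte("hello")),
		sio.Config{Key: key, MinVersion: sio.Version20}); err != nil {
		panic(err)
	}
	var plain bytes.Buffer
	if _, err := sio.Decrypt(&plain, &sealed,
		sio.Config{Key: key, MinVersion: sio.Version20}); err != nil {
		panic(err)
	}
	fmt.Println(plain.String()) // hello
}
```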

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"github.com/tinylib/msgp/msgp"
 )

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"bytes"
 	"testing"

View File

@@ -297,6 +297,9 @@ func checkPutObjectLockAllowed(ctx context.Context, rq *http.Request, bucket, ob
 		if legalHold, lerr = objectlock.ParseObjectLockLegalHoldHeaders(rq.Header); lerr != nil {
 			return mode, retainDate, legalHold, toAPIErrorCode(ctx, lerr)
 		}
+		if legalHoldPermErr != ErrNone {
+			return mode, retainDate, legalHold, legalHoldPermErr
+		}
 	}
 	if retentionRequested {

View File

@@ -122,7 +122,7 @@ func testCreateBucket(obj ObjectLayer, instanceType, bucketName string, apiRoute
 	var wg sync.WaitGroup
 	var mu sync.Mutex
 	wg.Add(n)
-	for i := 0; i < n; i++ {
+	for range n {
 		go func() {
 			defer wg.Done()
 			// Sync start.
@@ -187,7 +187,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// Test case - 1.
 		{
 			bucketName:         bucketName,
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
 			policyLen:          len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
 			accessKey:          credentials.AccessKey,
@@ -199,7 +199,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// Expecting StatusBadRequest (400).
 		{
 			bucketName:         bucketName,
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
 			policyLen:          maxBucketPolicySize + 1,
 			accessKey:          credentials.AccessKey,
@@ -211,7 +211,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// Expecting the HTTP response status to be StatusLengthRequired (411).
 		{
 			bucketName:         bucketName,
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
 			policyLen:          0,
 			accessKey:          credentials.AccessKey,
@@ -258,7 +258,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// checkBucketPolicyResources should fail.
 		{
 			bucketName:         bucketName1,
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, bucketName, bucketName)),
 			policyLen:          len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
 			accessKey:          credentials.AccessKey,
@@ -271,7 +271,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// should result in 404 StatusNotFound
 		{
 			bucketName:         "non-existent-bucket",
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, "non-existent-bucket", "non-existent-bucket"))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, "non-existent-bucket", "non-existent-bucket")),
 			policyLen:          len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
 			accessKey:          credentials.AccessKey,
@@ -284,7 +284,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// should result in 404 StatusNotFound
 		{
 			bucketName:         ".invalid-bucket",
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplate, ".invalid-bucket", ".invalid-bucket"))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplate, ".invalid-bucket", ".invalid-bucket")),
 			policyLen:          len(fmt.Sprintf(bucketPolicyTemplate, bucketName, bucketName)),
 			accessKey:          credentials.AccessKey,
@@ -297,7 +297,7 @@ func testPutBucketPolicyHandler(obj ObjectLayer, instanceType, bucketName string
 		// should result in 400 StatusBadRequest.
 		{
 			bucketName:         bucketName,
-			bucketPolicyReader: bytes.NewReader([]byte(fmt.Sprintf(bucketPolicyTemplateWithoutVersion, bucketName, bucketName))),
+			bucketPolicyReader: bytes.NewReader(fmt.Appendf(nil, bucketPolicyTemplateWithoutVersion, bucketName, bucketName)),
 			policyLen:          len(fmt.Sprintf(bucketPolicyTemplateWithoutVersion, bucketName, bucketName)),
 			accessKey:          credentials.AccessKey,
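Two mechanical rewrites recur in this test file: counted loops whose index is unused become `for range n` (Go 1.22 range-over-int), and `[]byte(fmt.Sprintf(...))` becomes `fmt.Appendf(nil, ...)`, which formats directly into a byte slice without the intermediate string allocation. A small sketch with illustrative values:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	const tmpl = `{"bucket":%q}`
	// fmt.Appendf appends formatted output to a []byte (nil starts a
	// fresh slice), avoiding []byte(fmt.Sprintf(...)).
	r := bytes.NewReader(fmt.Appendf(nil, tmpl, "mybucket"))
	fmt.Println(r.Len())

	// Range over an int (Go 1.22) when the index itself is unused.
	for range 3 {
		fmt.Println("tick")
	}
}
```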

View File

@@ -19,6 +19,7 @@ package cmd
 import (
 	"encoding/json"
+	"maps"
 	"net/http"
 	"net/url"
 	"strconv"
@@ -187,9 +188,7 @@ func getConditionValues(r *http.Request, lc string, cred auth.Credentials) map[s
 	}
 	cloneURLValues := make(url.Values, len(r.Form))
-	for k, v := range r.Form {
-		cloneURLValues[k] = v
-	}
+	maps.Copy(cloneURLValues, r.Form)
 	for _, objLock := range []string{
 		xhttp.AmzObjectLockMode,
@@ -224,7 +223,7 @@ func getConditionValues(r *http.Request, lc string, cred auth.Credentials) map[s
 	// Add groups claim which could be a list. This will ensure that the claim
 	// `jwt:groups` works.
 	if grpsVal, ok := claims["groups"]; ok {
-		if grpsIs, ok := grpsVal.([]interface{}); ok {
+		if grpsIs, ok := grpsVal.([]any); ok {
 			grps := []string{}
 			for _, gI := range grpsIs {
 				if g, ok := gI.(string); ok {
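`maps.Copy` (standard-library `maps`, Go 1.21) replaces the hand-written copy loop throughout these files; it inserts every key/value pair of the source map into the destination, overwriting keys that already exist. A standalone sketch:

```go
package main

import (
	"fmt"
	"maps"
)

func main() {
	dst := map[string]string{"region": "us-east-1"}
	src := map[string]string{"owner": "minio", "region": "eu-west-1"}
	// Same semantics as: for k, v := range src { dst[k] = v }
	maps.Copy(dst, src)
	fmt.Println(dst) // map[owner:minio region:eu-west-1]
}
```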

View File

@@ -92,12 +92,12 @@ func parseBucketQuota(bucket string, data []byte) (quotaCfg *madmin.BucketQuota,
 	}
 	if !quotaCfg.IsValid() {
 		if quotaCfg.Type == "fifo" {
-			internalLogIf(GlobalContext, errors.New("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore. Please clear your quota configs using 'mc admin bucket quota alias/bucket --clear' and use 'mc ilm add' for expiration of objects"), logger.WarningKind)
+			internalLogIf(GlobalContext, errors.New("Detected older 'fifo' quota config, 'fifo' feature is removed and not supported anymore. Please clear your quota configs using 'mc quota clear alias/bucket' and use 'mc ilm add' for expiration of objects"), logger.WarningKind)
 			return quotaCfg, fmt.Errorf("invalid quota type 'fifo'")
 		}
 		return quotaCfg, fmt.Errorf("Invalid quota config %#v", quotaCfg)
 	}
-	return
+	return quotaCfg, err
 }
 func (sys *BucketQuotaSys) enforceQuotaHard(ctx context.Context, bucket string, size int64) error {

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"github.com/tinylib/msgp/msgp"
 )

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"bytes"
 	"testing"

View File

@@ -21,6 +21,7 @@ import (
 	"bytes"
 	"context"
 	"fmt"
+	"maps"
 	"net/http"
 	"net/url"
 	"regexp"
@@ -171,13 +172,13 @@ func (ri ReplicateObjectInfo) TargetReplicationStatus(arn string) (status replic
 	repStatMatches := replStatusRegex.FindAllStringSubmatch(ri.ReplicationStatusInternal, -1)
 	for _, repStatMatch := range repStatMatches {
 		if len(repStatMatch) != 3 {
-			return
+			return status
 		}
 		if repStatMatch[1] == arn {
 			return replication.StatusType(repStatMatch[2])
 		}
 	}
-	return
+	return status
 }
 // TargetReplicationStatus - returns replication status of a target
@@ -185,13 +186,13 @@ func (o ObjectInfo) TargetReplicationStatus(arn string) (status replication.Stat
 	repStatMatches := replStatusRegex.FindAllStringSubmatch(o.ReplicationStatusInternal, -1)
 	for _, repStatMatch := range repStatMatches {
 		if len(repStatMatch) != 3 {
-			return
+			return status
 		}
 		if repStatMatch[1] == arn {
 			return replication.StatusType(repStatMatch[2])
 		}
 	}
-	return
+	return status
 }
 type replicateTargetDecision struct {
@@ -309,9 +310,9 @@ func parseReplicateDecision(ctx context.Context, bucket, s string) (r ReplicateD
 		targetsMap: make(map[string]replicateTargetDecision),
 	}
 	if len(s) == 0 {
-		return
+		return r, err
 	}
-	for _, p := range strings.Split(s, ",") {
+	for p := range strings.SplitSeq(s, ",") {
 		if p == "" {
 			continue
 		}
@@ -326,7 +327,7 @@ func parseReplicateDecision(ctx context.Context, bucket, s string) (r ReplicateD
 		}
 		r.targetsMap[slc[0]] = replicateTargetDecision{Replicate: tgt[0] == "true", Synchronous: tgt[1] == "true", Arn: tgt[2], ID: tgt[3]}
 	}
-	return
+	return r, err
 }
 // ReplicationState represents internal replication state
@@ -373,7 +374,7 @@ func (rs *ReplicationState) CompositeReplicationStatus() (st replication.StatusT
 	case !rs.ReplicaStatus.Empty():
 		return rs.ReplicaStatus
 	default:
-		return
+		return st
 	}
 }
@@ -735,10 +736,8 @@ type BucketReplicationResyncStatus struct {
 func (rs *BucketReplicationResyncStatus) cloneTgtStats() (m map[string]TargetReplicationResyncStatus) {
 	m = make(map[string]TargetReplicationResyncStatus)
-	for arn, st := range rs.TargetsMap {
-		m[arn] = st
-	}
-	return
+	maps.Copy(m, rs.TargetsMap)
+	return m
 }
 func newBucketResyncStatus(bucket string) BucketReplicationResyncStatus {
@@ -775,7 +774,7 @@ func extractReplicateDiffOpts(q url.Values) (opts madmin.ReplDiffOpts) {
 	opts.Verbose = q.Get("verbose") == "true"
 	opts.ARN = q.Get("arn")
 	opts.Prefix = q.Get("prefix")
-	return
+	return opts
 }
 const (
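`strings.SplitSeq` (Go 1.24) yields the substrings lazily as an iterator instead of allocating the full `[]string` that `strings.Split` returns, which suits loops like the one above that only visit each piece once. An illustrative sketch:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Iterate the comma-separated fields without building a slice.
	for p := range strings.SplitSeq("a,b,,c", ",") {
		if p == "" {
			continue // skip empty fields, as the loop above does
		}
		fmt.Println(p)
	}
}
```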

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"github.com/minio/minio/internal/bucket/replication"
 	"github.com/tinylib/msgp/msgp"
@@ -41,19 +41,17 @@ func (z *BucketReplicationResyncStatus) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.TargetsMap == nil {
 			z.TargetsMap = make(map[string]TargetReplicationResyncStatus, zb0002)
 		} else if len(z.TargetsMap) > 0 {
-			for key := range z.TargetsMap {
-				delete(z.TargetsMap, key)
-			}
+			clear(z.TargetsMap)
 		}
 		for zb0002 > 0 {
 			zb0002--
 			var za0001 string
-			var za0002 TargetReplicationResyncStatus
 			za0001, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "TargetsMap")
 				return
 			}
+			var za0002 TargetReplicationResyncStatus
 			err = za0002.DecodeMsg(dc)
 			if err != nil {
 				err = msgp.WrapError(err, "TargetsMap", za0001)
@@ -203,14 +201,12 @@ func (z *BucketReplicationResyncStatus) UnmarshalMsg(bts []byte) (o []byte, err
 		if z.TargetsMap == nil {
 			z.TargetsMap = make(map[string]TargetReplicationResyncStatus, zb0002)
 		} else if len(z.TargetsMap) > 0 {
-			for key := range z.TargetsMap {
-				delete(z.TargetsMap, key)
-			}
+			clear(z.TargetsMap)
 		}
 		for zb0002 > 0 {
-			var za0001 string
 			var za0002 TargetReplicationResyncStatus
 			zb0002--
+			var za0001 string
 			za0001, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "TargetsMap")
@@ -288,19 +284,17 @@ func (z *MRFReplicateEntries) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.Entries == nil {
 			z.Entries = make(map[string]MRFReplicateEntry, zb0002)
 		} else if len(z.Entries) > 0 {
-			for key := range z.Entries {
-				delete(z.Entries, key)
-			}
+			clear(z.Entries)
 		}
 		for zb0002 > 0 {
 			zb0002--
 			var za0001 string
-			var za0002 MRFReplicateEntry
 			za0001, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "Entries")
 				return
 			}
+			var za0002 MRFReplicateEntry
 			var zb0003 uint32
 			zb0003, err = dc.ReadMapHeader()
 			if err != nil {
@@ -478,14 +472,12 @@ func (z *MRFReplicateEntries) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		if z.Entries == nil {
 			z.Entries = make(map[string]MRFReplicateEntry, zb0002)
 		} else if len(z.Entries) > 0 {
-			for key := range z.Entries {
-				delete(z.Entries, key)
-			}
+			clear(z.Entries)
 		}
 		for zb0002 > 0 {
-			var za0001 string
 			var za0002 MRFReplicateEntry
 			zb0002--
+			var za0001 string
 			za0001, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "Entries")
@@ -872,19 +864,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.Targets == nil {
 			z.Targets = make(map[string]replication.StatusType, zb0002)
 		} else if len(z.Targets) > 0 {
-			for key := range z.Targets {
-				delete(z.Targets, key)
-			}
+			clear(z.Targets)
 		}
 		for zb0002 > 0 {
 			zb0002--
 			var za0001 string
-			var za0002 replication.StatusType
 			za0001, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "Targets")
 				return
 			}
+			var za0002 replication.StatusType
 			err = za0002.DecodeMsg(dc)
 			if err != nil {
 				err = msgp.WrapError(err, "Targets", za0001)
@@ -902,19 +892,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.PurgeTargets == nil {
 			z.PurgeTargets = make(map[string]VersionPurgeStatusType, zb0003)
 		} else if len(z.PurgeTargets) > 0 {
-			for key := range z.PurgeTargets {
-				delete(z.PurgeTargets, key)
-			}
+			clear(z.PurgeTargets)
 		}
 		for zb0003 > 0 {
 			zb0003--
 			var za0003 string
-			var za0004 VersionPurgeStatusType
 			za0003, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "PurgeTargets")
 				return
 			}
+			var za0004 VersionPurgeStatusType
 			err = za0004.DecodeMsg(dc)
 			if err != nil {
 				err = msgp.WrapError(err, "PurgeTargets", za0003)
@@ -932,19 +920,17 @@ func (z *ReplicationState) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.ResetStatusesMap == nil {
 			z.ResetStatusesMap = make(map[string]string, zb0004)
 		} else if len(z.ResetStatusesMap) > 0 {
-			for key := range z.ResetStatusesMap {
-				delete(z.ResetStatusesMap, key)
-			}
+			clear(z.ResetStatusesMap)
 		}
 		for zb0004 > 0 {
 			zb0004--
 			var za0005 string
-			var za0006 string
 			za0005, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "ResetStatusesMap")
 				return
 			}
+			var za0006 string
 			za0006, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "ResetStatusesMap", za0005)
@@ -1236,14 +1222,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		if z.Targets == nil {
 			z.Targets = make(map[string]replication.StatusType, zb0002)
 		} else if len(z.Targets) > 0 {
-			for key := range z.Targets {
-				delete(z.Targets, key)
-			}
+			clear(z.Targets)
 		}
 		for zb0002 > 0 {
-			var za0001 string
 			var za0002 replication.StatusType
 			zb0002--
+			var za0001 string
 			za0001, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "Targets")
@@ -1266,14 +1250,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		if z.PurgeTargets == nil {
 			z.PurgeTargets = make(map[string]VersionPurgeStatusType, zb0003)
 		} else if len(z.PurgeTargets) > 0 {
-			for key := range z.PurgeTargets {
-				delete(z.PurgeTargets, key)
-			}
+			clear(z.PurgeTargets)
 		}
 		for zb0003 > 0 {
-			var za0003 string
 			var za0004 VersionPurgeStatusType
 			zb0003--
+			var za0003 string
 			za0003, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "PurgeTargets")
@@ -1296,14 +1278,12 @@ func (z *ReplicationState) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		if z.ResetStatusesMap == nil {
 			z.ResetStatusesMap = make(map[string]string, zb0004)
 		} else if len(z.ResetStatusesMap) > 0 {
-			for key := range z.ResetStatusesMap {
-				delete(z.ResetStatusesMap, key)
-			}
+			clear(z.ResetStatusesMap)
 		}
 		for zb0004 > 0 {
-			var za0005 string
 			var za0006 string
 			zb0004--
+			var za0005 string
 			za0005, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "ResetStatusesMap")
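The regenerated decoders now reset reused maps with the `clear` builtin (Go 1.21) instead of a delete-every-key loop; `clear` empties the map in place while keeping its allocated buckets, so it can be refilled without regrowing from zero. A standalone sketch:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}
	// Equivalent to: for k := range m { delete(m, k) }
	clear(m)
	fmt.Println(len(m), m == nil) // 0 false — still usable
	m["c"] = 3
	fmt.Println(m) // map[c:3]
}
```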

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"bytes"
 	"testing"

View File

@@ -24,6 +24,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	"maps"
 	"math/rand"
 	"net/http"
 	"net/url"
@@ -252,31 +253,31 @@ func getMustReplicateOptions(userDefined map[string]string, userTags string, sta
 func mustReplicate(ctx context.Context, bucket, object string, mopts mustReplicateOptions) (dsc ReplicateDecision) {
 	// object layer not initialized we return with no decision.
 	if newObjectLayerFn() == nil {
-		return
+		return dsc
 	}
 	// Disable server-side replication on object prefixes which are excluded
 	// from versioning via the MinIO bucket versioning extension.
 	if !globalBucketVersioningSys.PrefixEnabled(bucket, object) {
-		return
+		return dsc
 	}
 	replStatus := mopts.ReplicationStatus()
 	if replStatus == replication.Replica && !mopts.isMetadataReplication() {
-		return
+		return dsc
 	}
 	if mopts.replicationRequest { // incoming replication request on target cluster
-		return
+		return dsc
 	}
 	cfg, err := getReplicationConfig(ctx, bucket)
 	if err != nil {
 		replLogOnceIf(ctx, err, bucket)
-		return
+		return dsc
 	}
 	if cfg == nil {
-		return
+		return dsc
 	}
 	opts := replication.ObjectOpts{
@@ -347,16 +348,16 @@ func checkReplicateDelete(ctx context.Context, bucket string, dobj ObjectToDelet
 	rcfg, err := getReplicationConfig(ctx, bucket)
 	if err != nil || rcfg == nil {
 		replLogOnceIf(ctx, err, bucket)
-		return
+		return dsc
 	}
 	// If incoming request is a replication request, it does not need to be re-replicated.
 	if delOpts.ReplicationRequest {
-		return
+		return dsc
 	}
 	// Skip replication if this object's prefix is excluded from being
 	// versioned.
 	if !delOpts.Versioned {
-		return
+		return dsc
 	}
 	opts := replication.ObjectOpts{
 		Name: dobj.ObjectName,
@@ -616,10 +617,10 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 	if dobj.VersionID == "" && rinfo.PrevReplicationStatus == replication.Completed && dobj.OpType != replication.ExistingObjectReplicationType {
 		rinfo.ReplicationStatus = rinfo.PrevReplicationStatus
-		return
+		return rinfo
 	}
 	if dobj.VersionID != "" && rinfo.VersionPurgeStatus == replication.VersionPurgeComplete {
-		return
+		return rinfo
 	}
 	if globalBucketTargetSys.isOffline(tgt.EndpointURL()) {
 		replLogOnceIf(ctx, fmt.Errorf("remote target is offline for bucket:%s arn:%s", dobj.Bucket, tgt.ARN), "replication-target-offline-delete-"+tgt.ARN)
@@ -640,7 +641,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 		} else {
 			rinfo.VersionPurgeStatus = replication.VersionPurgeFailed
 		}
-		return
+		return rinfo
 	}
 	// early return if already replicated delete marker for existing object replication/ healing delete markers
 	if dobj.DeleteMarkerVersionID != "" {
@@ -657,13 +658,13 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 			// delete marker already replicated
 			if dobj.VersionID == "" && rinfo.VersionPurgeStatus.Empty() {
 				rinfo.ReplicationStatus = replication.Completed
-				return
+				return rinfo
 			}
 		case isErrObjectNotFound(serr), isErrVersionNotFound(serr):
 			// version being purged is already not found on target.
 			if !rinfo.VersionPurgeStatus.Empty() {
 				rinfo.VersionPurgeStatus = replication.VersionPurgeComplete
-				return
+				return rinfo
 			}
 		case isErrReadQuorum(serr), isErrWriteQuorum(serr):
 			// destination has some quorum issues, perform removeObject() anyways
@@ -677,7 +678,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 			if err != nil && !toi.ReplicationReady {
 				rinfo.ReplicationStatus = replication.Failed
 				rinfo.Err = err
-				return
+				return rinfo
 			}
 		}
 	}
@@ -708,7 +709,7 @@ func replicateDeleteToTarget(ctx context.Context, dobj DeletedObjectReplicationI
 			rinfo.VersionPurgeStatus = replication.VersionPurgeComplete
 		}
 	}
-	return
+	return rinfo
 }
 func getCopyObjMetadata(oi ObjectInfo, sc string) map[string]string {
@@ -803,9 +804,7 @@ func putReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (put
 	} else {
 		cs, mp := getCRCMeta(objInfo, 0, nil)
 		// Set object checksum.
-		for k, v := range cs {
-			meta[k] = v
-		}
+		maps.Copy(meta, cs)
 		isMP = mp
 		if !objInfo.isMultipart() && cs[xhttp.AmzChecksumType] == xhttp.AmzChecksumTypeFullObject {
 			// For objects where checksum is full object, it will be the same.
@@ -911,7 +910,7 @@ func putReplicationOpts(ctx context.Context, sc string, objInfo ObjectInfo) (put
 		}
 		putOpts.ServerSideEncryption = sseEnc
 	}
-	return
+	return putOpts, isMP, err
 }
 type replicationAction string
@@ -969,9 +968,7 @@ func getReplicationAction(oi1 ObjectInfo, oi2 minio.ObjectInfo, opType replicati
 	t, _ := tags.ParseObjectTags(oi1.UserTags)
 	oi2Map := make(map[string]string)
-	for k, v := range oi2.UserTags {
-		oi2Map[k] = v
-	}
+	maps.Copy(oi2Map, oi2.UserTags)
 	if (oi2.UserTagCount > 0 && !reflect.DeepEqual(oi2Map, t.ToMap())) || (oi2.UserTagCount != len(t.ToMap())) {
 		return replicateMetadata
 	}
@@ -1211,7 +1208,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 	if ri.TargetReplicationStatus(tgt.ARN) == replication.Completed && !ri.ExistingObjResync.Empty() && !ri.ExistingObjResync.mustResyncTarget(tgt.ARN) {
 		rinfo.ReplicationStatus = replication.Completed
 		rinfo.ReplicationResynced = true
-		return
+		return rinfo
 	}
 	if globalBucketTargetSys.isOffline(tgt.EndpointURL()) {
@@ -1223,7 +1220,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 	versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)
@@ -1247,7 +1244,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			})
 			replLogOnceIf(ctx, fmt.Errorf("unable to read source object %s/%s(%s): %w", bucket, object, objInfo.VersionID, err), object+":"+objInfo.VersionID)
 		}
-		return
+		return rinfo
 	}
 	defer gr.Close()
@@ -1271,7 +1268,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 }
@@ -1310,7 +1307,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 	var headerSize int
@@ -1347,7 +1344,7 @@ func (ri ReplicateObjectInfo) replicateObject(ctx context.Context, objectAPI Obj
 			globalBucketTargetSys.markOffline(tgt.EndpointURL())
 		}
 	}
-	return
+	return rinfo
 }
 // replicateAll replicates metadata for specified version of the object to destination bucket
@@ -1383,7 +1380,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 	versioned := globalBucketVersioningSys.PrefixEnabled(bucket, object)
@@ -1408,7 +1405,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 		})
 		replLogIf(ctx, fmt.Errorf("unable to replicate to target %s for %s/%s(%s): %w", tgt.EndpointURL(), bucket, object, objInfo.VersionID, err))
 	}
-		return
+		return rinfo
 	}
 	defer gr.Close()
@@ -1421,7 +1418,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 	if objInfo.TargetReplicationStatus(tgt.ARN) == replication.Completed && !ri.ExistingObjResync.Empty() && !ri.ExistingObjResync.mustResyncTarget(tgt.ARN) {
 		rinfo.ReplicationStatus = replication.Completed
 		rinfo.ReplicationResynced = true
-		return
+		return rinfo
 	}
 	size, err := objInfo.GetActualSize()
@@ -1434,7 +1431,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 			UserAgent: "Internal: [Replication]",
 			Host:      globalLocalNodeName,
 		})
-		return
+		return rinfo
 	}
 	// Set the encrypted size for SSE-C objects
@@ -1497,7 +1494,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 				rinfo.ReplicationAction = rAction
 				rinfo.ReplicationStatus = replication.Completed
 			}
-			return
+			return rinfo
 		}
 	} else {
 		// SSEC objects will refuse HeadObject without the decryption key.
@@ -1531,7 +1528,7 @@ func (ri ReplicateObjectInfo) replicateAll(ctx context.Context, objectAPI Object
 				UserAgent: "Internal: [Replication]",
 				Host:      globalLocalNodeName,
 			})
-			return
+			return rinfo
 		}
 	}
 applyAction:
@@ -1597,7 +1594,7 @@ applyAction:
 				UserAgent: "Internal: [Replication]",
 				Host:      globalLocalNodeName,
 			})
-			return
+			return rinfo
 		}
 		var headerSize int
 		for k, v := range putOpts.Header() {
@@ -1634,7 +1631,7 @@ applyAction:
 			}
 		}
 	}
-	return
+	return rinfo
 }
 func replicateObjectWithMultipart(ctx context.Context, c *minio.Core, bucket, object string, r io.Reader, objInfo ObjectInfo, opts minio.PutObjectOptions) (err error) {
@@ -1770,9 +1767,7 @@ func filterReplicationStatusMetadata(metadata map[string]string) map[string]stri
 		}
 		if !copied {
 			dst = make(map[string]string, len(metadata))
-			for k, v := range metadata {
-				dst[k] = v
-			}
+			maps.Copy(dst, metadata)
 			copied = true
 		}
 		delete(dst, key)
@@ -2682,7 +2677,7 @@ func (c replicationConfig) Replicate(opts replication.ObjectOpts) bool {
 // Resync returns true if replication reset is requested
 func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc ReplicateDecision, tgtStatuses map[string]replication.StatusType) (r ResyncDecision) {
 	if c.Empty() {
-		return
+		return r
 	}
 	// Now overlay existing object replication choices for target
@@ -2698,7 +2693,7 @@ func (c replicationConfig) Resync(ctx context.Context, oi ObjectInfo, dsc Replic
 	tgtArns := c.Config.FilterTargetArns(opts)
 	// indicates no matching target with Existing object replication enabled.
 	if len(tgtArns) == 0 {
-		return
+		return r
 	}
 	for _, t := range tgtArns {
 		opts.TargetArn = t
@@ -2724,7 +2719,7 @@ func (c replicationConfig) resync(oi ObjectInfo, dsc ReplicateDecision, tgtStatu
 		targets: make(map[string]ResyncTargetDecision, len(dsc.targetsMap)),
 	}
 	if c.remotes == nil {
-		return
+		return r
 	}
 	for _, tgt := range c.remotes.Targets {
 		d, ok := dsc.targetsMap[tgt.Arn]
@@ -2736,7 +2731,7 @@ func (c replicationConfig) resync(oi ObjectInfo, dsc ReplicateDecision, tgtStatu
 		}
 		r.targets[d.Arn] = resyncTarget(oi, tgt.Arn, tgt.ResetID, tgt.ResetBeforeDate, tgtStatuses[tgt.Arn])
 	}
-	return
+	return r
 }
 func targetResetHeader(arn string) string {
@@ -2755,28 +2750,28 @@ func resyncTarget(oi ObjectInfo, arn string, resetID string, resetBeforeDate tim
 	if !ok { // existing object replication is enabled and object version is unreplicated so far.
 		if resetID != "" && oi.ModTime.Before(resetBeforeDate) { // trigger replication if `mc replicate reset` requested
 			rd.Replicate = true
-			return
+			return rd
 		}
 		// For existing object reset - this condition is needed
 		rd.Replicate = tgtStatus == ""
-		return
+		return rd
 	}
 	if resetID == "" || resetBeforeDate.Equal(timeSentinel) { // no reset in progress
-		return
+		return rd
 	}
 	// if already replicated, return true if a new reset was requested.
 	splits := strings.SplitN(rs, ";", 2)
 	if len(splits) != 2 {
-		return
+		return rd
 	}
 	newReset := splits[1] != resetID
 	if !newReset && tgtStatus == replication.Completed {
 		// already replicated and no reset requested
-		return
+		return rd
 	}
 	rd.Replicate = newReset && oi.ModTime.Before(resetBeforeDate)
-	return
+	return rd
 }
 const resyncTimeInterval = time.Minute * 1
@@ -2954,7 +2949,7 @@ func (s *replicationResyncer) resyncBucket(ctx context.Context, objectAPI Object
 	}()
 	var wg sync.WaitGroup
-	for i := 0; i < resyncParallelRoutines; i++ {
+	for i := range resyncParallelRoutines {
 		wg.Add(1)
 		workers[i] = make(chan ReplicateObjectInfo, 100)
 		i := i
@@ -3063,7 +3058,7 @@ func (s *replicationResyncer) resyncBucket(ctx context.Context, objectAPI Object
 			workers[h%uint64(resyncParallelRoutines)] <- roi
 		}
 	}
-	for i := 0; i < resyncParallelRoutines; i++ {
+	for i := range resyncParallelRoutines {
 		xioutil.SafeClose(workers[i])
 	}
 	wg.Wait()
@@ -3193,11 +3188,9 @@ func (p *ReplicationPool) startResyncRoutine(ctx context.Context, buckets []stri
 			<-ctx.Done()
 			return
 		}
-		duration := time.Duration(r.Float64() * float64(time.Minute))
-		if duration < time.Second {
+		duration := max(time.Duration(r.Float64()*float64(time.Minute)),
 			// Make sure to sleep at least a second to avoid high CPU ticks.
-			duration = time.Second
-		}
+			time.Second)
 		time.Sleep(duration)
 	}
 }
@@ -3429,12 +3422,12 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
 	roi = getHealReplicateObjectInfo(oi, rcfg)
 	roi.RetryCount = uint32(retryCount)
 	if !roi.Dsc.ReplicateAny() {
-		return
+		return roi
 	}
 	// early return if replication already done, otherwise we need to determine if this
 	// version is an existing object that needs healing.
 	if oi.ReplicationStatus == replication.Completed && oi.VersionPurgeStatus.Empty() && !roi.ExistingObjResync.mustResync() {
-		return
+		return roi
 	}
 	if roi.DeleteMarker || !roi.VersionPurgeStatus.Empty() {
@@ -3464,14 +3457,14 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
 			roi.ReplicationStatus == replication.Failed ||
 			roi.VersionPurgeStatus == replication.VersionPurgeFailed || roi.VersionPurgeStatus == replication.VersionPurgePending {
 			globalReplicationPool.Get().queueReplicaDeleteTask(dv)
-			return
+			return roi
 		}
 		// if replication status is Complete on DeleteMarker and existing object resync required
 		if roi.ExistingObjResync.mustResync() && (roi.ReplicationStatus == replication.Completed || roi.ReplicationStatus.Empty()) {
 			queueReplicateDeletesWrapper(dv, roi.ExistingObjResync)
-			return
+			return roi
 		}
-		return
+		return roi
 	}
 	if roi.ExistingObjResync.mustResync() {
 		roi.OpType = replication.ExistingObjectReplicationType
@@ -3480,13 +3473,13 @@ func queueReplicationHeal(ctx context.Context, bucket string, oi ObjectInfo, rcf
 	case replication.Pending, replication.Failed:
 		roi.EventType = ReplicateHeal
 		globalReplicationPool.Get().queueReplicaTask(roi)
-		return
+		return roi
 	}
 	if roi.ExistingObjResync.mustResync() {
 		roi.EventType = ReplicateExisting
 		globalReplicationPool.Get().queueReplicaTask(roi)
 	}
-	return
+	return roi
 }
 const (
@@ -3797,14 +3790,13 @@ func getCRCMeta(oi ObjectInfo, partNum int, h http.Header) (cs map[string]string
 	meta := make(map[string]string)
 	cs, isMP = oi.decryptChecksums(partNum, h)
 	for k, v := range cs {
-		cksum := hash.NewChecksumString(k, v)
-		if cksum == nil {
+		if k == xhttp.AmzChecksumType {
 			continue
 		}
-		if cksum.Valid() {
-			meta[cksum.Type.Key()] = v
-			meta[xhttp.AmzChecksumType] = cs[xhttp.AmzChecksumType]
-			meta[xhttp.AmzChecksumAlgo] = cksum.Type.String()
+		cktype := hash.ChecksumStringToType(k)
+		if cktype.IsSet() {
+			meta[cktype.Key()] = v
+			meta[xhttp.AmzChecksumAlgo] = cktype.String()
 		}
 	}
 	return meta, isMP

View File

@@ -19,6 +19,7 @@ package cmd
 import (
 	"fmt"
+	"maps"
 	"math"
 	"sync/atomic"
 	"time"
@@ -37,7 +38,7 @@ type ReplicationLatency struct {
 // Merge two replication latency into a new one
 func (rl ReplicationLatency) merge(other ReplicationLatency) (newReplLatency ReplicationLatency) {
 	newReplLatency.UploadHistogram = rl.UploadHistogram.Merge(other.UploadHistogram)
-	return
+	return newReplLatency
 }
 // Get upload latency of each object size range
@@ -48,7 +49,7 @@ func (rl ReplicationLatency) getUploadLatency() (ret map[string]uint64) {
 		// Convert nanoseconds to milliseconds
 		ret[sizeTagToString(k)] = uint64(v.avg() / time.Millisecond)
 	}
-	return
+	return ret
 }
 // Update replication upload latency with a new value
@@ -63,7 +64,7 @@ type ReplicationLastMinute struct {
 func (rl ReplicationLastMinute) merge(other ReplicationLastMinute) (nl ReplicationLastMinute) {
 	nl = ReplicationLastMinute{rl.LastMinute.merge(other.LastMinute)}
-	return
+	return nl
 }
 func (rl *ReplicationLastMinute) addsize(n int64) {
@@ -221,9 +222,7 @@ func (brs BucketReplicationStats) Clone() (c BucketReplicationStats) {
 		}
 		if s.Failed.ErrCounts == nil {
 			s.Failed.ErrCounts = make(map[string]int)
-			for k, v := range st.Failed.ErrCounts {
-				s.Failed.ErrCounts[k] = v
-			}
+			maps.Copy(s.Failed.ErrCounts, st.Failed.ErrCounts)
 		}
 		c.Stats[arn] = &s
 	}

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"github.com/tinylib/msgp/msgp"
 )
@@ -617,19 +617,17 @@ func (z *BucketReplicationStats) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.Stats == nil {
 			z.Stats = make(map[string]*BucketReplicationStat, zb0002)
 		} else if len(z.Stats) > 0 {
-			for key := range z.Stats {
-				delete(z.Stats, key)
-			}
+			clear(z.Stats)
 		}
 		for zb0002 > 0 {
 			zb0002--
 			var za0001 string
-			var za0002 *BucketReplicationStat
 			za0001, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "Stats")
 				return
 			}
+			var za0002 *BucketReplicationStat
 			if dc.IsNil() {
 				err = dc.ReadNil()
 				if err != nil {
@@ -943,14 +941,12 @@ func (z *BucketReplicationStats) UnmarshalMsg(bts []byte) (o []byte, err error)
 		if z.Stats == nil {
 			z.Stats = make(map[string]*BucketReplicationStat, zb0002)
 		} else if len(z.Stats) > 0 {
-			for key := range z.Stats {
-				delete(z.Stats, key)
-			}
+			clear(z.Stats)
 		}
 		for zb0002 > 0 {
-			var za0001 string
 			var za0002 *BucketReplicationStat
 			zb0002--
+			var za0001 string
 			za0001, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "Stats")
@@ -1402,19 +1398,17 @@ func (z *BucketStatsMap) DecodeMsg(dc *msgp.Reader) (err error) {
 		if z.Stats == nil {
 			z.Stats = make(map[string]BucketStats, zb0002)
 		} else if len(z.Stats) > 0 {
-			for key := range z.Stats {
-				delete(z.Stats, key)
-			}
+			clear(z.Stats)
 		}
 		for zb0002 > 0 {
 			zb0002--
 			var za0001 string
-			var za0002 BucketStats
 			za0001, err = dc.ReadString()
 			if err != nil {
 				err = msgp.WrapError(err, "Stats")
 				return
 			}
+			var za0002 BucketStats
 			err = za0002.DecodeMsg(dc)
 			if err != nil {
 				err = msgp.WrapError(err, "Stats", za0001)
@@ -1526,14 +1520,12 @@ func (z *BucketStatsMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		if z.Stats == nil {
 			z.Stats = make(map[string]BucketStats, zb0002)
 		} else if len(z.Stats) > 0 {
-			for key := range z.Stats {
-				delete(z.Stats, key)
-			}
+			clear(z.Stats)
 		}
 		for zb0002 > 0 {
-			var za0001 string
 			var za0002 BucketStats
 			zb0002--
+			var za0001 string
 			za0001, bts, err = msgp.ReadStringBytes(bts)
 			if err != nil {
 				err = msgp.WrapError(err, "Stats")

View File

@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"bytes"
 	"testing"

View File

@@ -20,6 +20,7 @@ package cmd
 import (
 	"context"
 	"errors"
+	"maps"
 	"net/url"
 	"sync"
 	"time"
@@ -236,9 +237,7 @@ func (sys *BucketTargetSys) healthStats() map[string]epHealth {
 	sys.hMutex.RLock()
 	defer sys.hMutex.RUnlock()
 	m := make(map[string]epHealth, len(sys.hc))
-	for k, v := range sys.hc {
-		m[k] = v
-	}
+	maps.Copy(m, sys.hc)
 	return m
 }
@@ -286,7 +285,7 @@ func (sys *BucketTargetSys) ListTargets(ctx context.Context, bucket, arnType str
 			}
 		}
 	}
-	return
+	return targets
 }
 // ListBucketTargets - gets list of bucket targets for this bucket.
@@ -669,7 +668,7 @@ func (sys *BucketTargetSys) getRemoteTargetClient(tcfg *madmin.BucketTarget) (*T
 // getRemoteARN gets existing ARN for an endpoint or generates a new one.
 func (sys *BucketTargetSys) getRemoteARN(bucket string, target *madmin.BucketTarget, deplID string) (arn string, exists bool) {
 	if target == nil {
-		return
+		return arn, exists
 	}
 	sys.RLock()
 	defer sys.RUnlock()
@@ -683,7 +682,7 @@ func (sys *BucketTargetSys) getRemoteARN(bucket string, target *madmin.BucketTar
 		}
 	}
 	if !target.Type.IsValid() {
-		return
+		return arn, exists
 	}
 	return generateARN(target, deplID), false
 }

View File

@@ -57,11 +57,9 @@ func initCallhome(ctx context.Context, objAPI ObjectLayer) {
 			// callhome running on a different node.
 			// sleep for some time and try again.
-			duration := time.Duration(r.Float64() * float64(globalCallhomeConfig.FrequencyDur()))
-			if duration < time.Second {
+			duration := max(time.Duration(r.Float64()*float64(globalCallhomeConfig.FrequencyDur())),
 				// Make sure to sleep at least a second to avoid high CPU ticks.
-				duration = time.Second
-			}
+				time.Second)
 			time.Sleep(duration)
 		}
 	}()
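Here, and in the resync routine earlier, the if-based minimum clamp collapses into the `max` builtin (Go 1.21), which works on any ordered type including `time.Duration`. A sketch of the sleep-jitter clamp with illustrative values:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Random jitter in [0, 1min); clamp to at least one second so a
	// retry loop never spins hot.
	jitter := time.Duration(rand.Float64() * float64(time.Minute))
	duration := max(jitter, time.Second) // replaces: if jitter < time.Second { ... }
	fmt.Println(duration >= time.Second) // always true
}
```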

View File

@@ -105,7 +105,7 @@ func init() {
 	gob.Register(madmin.TimeInfo{})
 	gob.Register(madmin.XFSErrorConfigs{})
 	gob.Register(map[string]string{})
-	gob.Register(map[string]interface{}{})
+	gob.Register(map[string]any{})
 	// All minio-go and madmin-go API operations shall be performed only once,
 	// another way to look at this is we are turning off retries.
@@ -258,7 +258,7 @@ func initConsoleServer() (*consoleapi.Server, error) {
 	if !serverDebugLog {
 		// Disable console logging if server debug log is not enabled
-		noLog := func(string, ...interface{}) {}
+		noLog := func(string, ...any) {}
 		consoleapi.LogInfo = noLog
 		consoleapi.LogError = noLog
@@ -761,7 +761,7 @@ func serverHandleEnvVars() {
 	domains := env.Get(config.EnvDomain, "")
 	if len(domains) != 0 {
-		for _, domainName := range strings.Split(domains, config.ValueSeparator) {
+		for domainName := range strings.SplitSeq(domains, config.ValueSeparator) {
 			if _, ok := dns2.IsDomainName(domainName); !ok {
 				logger.Fatal(config.ErrInvalidDomainValue(nil).Msgf("Unknown value `%s`", domainName),
 					"Invalid MINIO_DOMAIN value in environment variable")
@@ -1059,6 +1059,6 @@ func (a bgCtx) Deadline() (deadline time.Time, ok bool) {
 	return time.Time{}, false
 }
-func (a bgCtx) Value(key interface{}) interface{} {
+func (a bgCtx) Value(key any) any {
 	return a.parent.Value(key)
 }
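`any` has been a predeclared alias for `interface{}` since Go 1.18, so these signature rewrites are purely cosmetic: both spellings denote the identical type, and callers need no changes. A standalone sketch:

```go
package main

import "fmt"

// func describe(v interface{}) string and func describe(v any) string
// are the same declaration; the alias just reads better.
func describe(v any) string {
	return fmt.Sprintf("%T: %v", v, v)
}

func main() {
	fmt.Println(describe(42))
	fmt.Println(describe(map[string]any{"k": "v"}))
}
```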

View File

@@ -43,7 +43,6 @@ func Test_readFromSecret(t *testing.T) {
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			tmpfile, err := os.CreateTemp(t.TempDir(), "testfile")
 			if err != nil {
@@ -155,7 +154,6 @@ MINIO_ROOT_PASSWORD=minio123`,
 		},
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			tmpfile, err := os.CreateTemp(t.TempDir(), "testfile")
 			if err != nil {


@@ -21,6 +21,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"maps"
 	"strings"
 	"sync"
@@ -78,12 +79,8 @@ func initHelp() {
 		config.BatchSubSys:   batch.DefaultKVS,
 		config.BrowserSubSys: browser.DefaultKVS,
 	}
-	for k, v := range notify.DefaultNotificationKVS {
-		kvs[k] = v
-	}
-	for k, v := range lambda.DefaultLambdaKVS {
-		kvs[k] = v
-	}
+	maps.Copy(kvs, notify.DefaultNotificationKVS)
+	maps.Copy(kvs, lambda.DefaultLambdaKVS)
 	if globalIsErasure {
 		kvs[config.StorageClassSubSys] = storageclass.DefaultKVS
 		kvs[config.HealSubSys] = heal.DefaultKVS
@@ -355,7 +352,9 @@ func validateSubSysConfig(ctx context.Context, s config.Config, subSys string, o
 	}
 	case config.IdentityOpenIDSubSys:
 		if _, err := openid.LookupConfig(s,
-			NewHTTPTransport(), xhttp.DrainBody, globalSite.Region()); err != nil {
+			xhttp.WithUserAgent(NewHTTPTransport(), func() string {
+				return getUserAgent(getMinioMode())
+			}), xhttp.DrainBody, globalSite.Region()); err != nil {
 			return err
 		}
 	case config.IdentityLDAPSubSys:
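The maps.Copy rewrite above replaces a hand-written key-by-key copy loop with the Go 1.21 maps package. A minimal sketch (map contents here are made up):

    package main

    import (
        "fmt"
        "maps"
    )

    func main() {
        kvs := map[string]string{"batch": "on"}
        defaults := map[string]string{"notify": "off", "lambda": "off"}
        maps.Copy(kvs, defaults) // keys from defaults overwrite matches in kvs
        fmt.Println(kvs)         // map[batch:on lambda:off notify:off]
    }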


@@ -38,12 +38,12 @@ import (
 )
 // Save config file to corresponding backend
-func Save(configFile string, data interface{}) error {
+func Save(configFile string, data any) error {
 	return quick.SaveConfig(data, configFile, globalEtcdClient)
 }
 // Load config from backend
-func Load(configFile string, data interface{}) (quick.Config, error) {
+func Load(configFile string, data any) (quick.Config, error) {
 	return quick.LoadConfig(configFile, globalEtcdClient, data)
 }


@@ -129,7 +129,7 @@ func saveServerConfigHistory(ctx context.Context, objAPI ObjectLayer, kv []byte)
 	return saveConfig(ctx, objAPI, historyFile, kv)
 }
-func saveServerConfig(ctx context.Context, objAPI ObjectLayer, cfg interface{}) error {
+func saveServerConfig(ctx context.Context, objAPI ObjectLayer, cfg any) error {
 	data, err := json.Marshal(cfg)
 	if err != nil {
 		return err


@@ -28,7 +28,7 @@ import (
 	"github.com/minio/madmin-go/v3/logger/log"
 	"github.com/minio/minio/internal/logger"
 	"github.com/minio/minio/internal/logger/target/console"
-	"github.com/minio/minio/internal/logger/target/types"
+	types "github.com/minio/minio/internal/logger/target/loggertypes"
 	"github.com/minio/minio/internal/pubsub"
 	xnet "github.com/minio/pkg/v3/net"
 )
@@ -101,7 +101,7 @@ func (sys *HTTPConsoleLoggerSys) Subscribe(subCh chan log.Info, doneCh <-chan st
 		lastN = make([]log.Info, last)
 		sys.RLock()
-		sys.logBuf.Do(func(p interface{}) {
+		sys.logBuf.Do(func(p any) {
 			if p != nil {
 				lg, ok := p.(log.Info)
 				if ok && lg.SendLog(node, logKind) {
@@ -113,7 +113,7 @@ func (sys *HTTPConsoleLoggerSys) Subscribe(subCh chan log.Info, doneCh <-chan st
 		sys.RUnlock()
 		// send last n console log messages in order filtered by node
 		if cnt > 0 {
-			for i := 0; i < last; i++ {
+			for i := range last {
 				entry := lastN[(cnt+i)%last]
 				if (entry == log.Info{}) {
 					continue
@@ -155,7 +155,7 @@ func (sys *HTTPConsoleLoggerSys) Stats() types.TargetStats {
 // Content returns the console stdout log
 func (sys *HTTPConsoleLoggerSys) Content() (logs []log.Entry) {
 	sys.RLock()
-	sys.logBuf.Do(func(p interface{}) {
+	sys.logBuf.Do(func(p any) {
 		if p != nil {
 			lg, ok := p.(log.Info)
 			if ok {
@@ -167,7 +167,7 @@ func (sys *HTTPConsoleLoggerSys) Content() (logs []log.Entry) {
 	})
 	sys.RUnlock()
-	return
+	return logs
 }
 // Cancel - cancels the target
@@ -181,7 +181,7 @@ func (sys *HTTPConsoleLoggerSys) Type() types.TargetType {
 // Send log message 'e' to console and publish to console
 // log pubsub system
-func (sys *HTTPConsoleLoggerSys) Send(ctx context.Context, entry interface{}) error {
+func (sys *HTTPConsoleLoggerSys) Send(ctx context.Context, entry any) error {
 	var lg log.Info
 	switch e := entry.(type) {
 	case log.Entry:
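For context, the logBuf walked with Do above is a container/ring circular buffer holding the most recent console log entries. A minimal stdlib-only sketch of that API, with plain strings standing in for MinIO's log.Info values:

    package main

    import (
        "container/ring"
        "fmt"
    )

    func main() {
        r := ring.New(3) // fixed-size circular buffer of 3 slots
        for i := 1; i <= 5; i++ {
            r.Value = fmt.Sprintf("entry-%d", i)
            r = r.Next()
        }
        // Do visits every slot once; only the newest 3 entries survive.
        r.Do(func(p any) {
            if p != nil {
                fmt.Println(p)
            }
        })
    }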


@@ -106,16 +106,14 @@ func (p *scannerMetrics) log(s scannerMetric, paths ...string) func(custom map[s
 // time n scanner actions.
 // Use for s < scannerMetricLastRealtime
-func (p *scannerMetrics) timeN(s scannerMetric) func(n int) func() {
+func (p *scannerMetrics) timeN(s scannerMetric) func(n int) {
 	startTime := time.Now()
-	return func(n int) func() {
-		return func() {
-			duration := time.Since(startTime)
+	return func(n int) {
+		duration := time.Since(startTime)
 		atomic.AddUint64(&p.operations[s], uint64(n))
 		if s < scannerMetricLastRealtime {
 			p.latency[s].add(duration)
-		}
 		}
 	}
 }
@@ -198,7 +196,7 @@ func (p *scannerMetrics) currentPathUpdater(disk, initial string) (update func(p
 func (p *scannerMetrics) getCurrentPaths() []string {
 	var res []string
 	prefix := globalLocalNodeName + "/"
-	p.currentPaths.Range(func(key, value interface{}) bool {
+	p.currentPaths.Range(func(key, value any) bool {
 		// We are a bit paranoid, but better miss an entry than crash.
 		name, ok := key.(string)
 		if !ok {
@@ -221,7 +219,7 @@ func (p *scannerMetrics) getCurrentPaths() []string {
 // (since this is concurrent it may not be 100% reliable)
 func (p *scannerMetrics) activeDrives() int {
 	var i int
-	p.currentPaths.Range(func(k, v interface{}) bool {
+	p.currentPaths.Range(func(k, v any) bool {
 		i++
 		return true
 	})
@@ -299,7 +297,7 @@ func (p *scannerMetrics) report() madmin.ScannerMetrics {
 	m.CollectedAt = time.Now()
 	m.ActivePaths = p.getCurrentPaths()
 	m.LifeTimeOps = make(map[string]uint64, scannerMetricLast)
-	for i := scannerMetric(0); i < scannerMetricLast; i++ {
+	for i := range scannerMetricLast {
 		if n := atomic.LoadUint64(&p.operations[i]); n > 0 {
 			m.LifeTimeOps[i.String()] = n
 		}
@@ -309,7 +307,7 @@ func (p *scannerMetrics) report() madmin.ScannerMetrics {
 	}
 	m.LastMinute.Actions = make(map[string]madmin.TimedAction, scannerMetricLastRealtime)
-	for i := scannerMetric(0); i < scannerMetricLastRealtime; i++ {
+	for i := range scannerMetricLastRealtime {
 		lm := p.lastMinute(i)
 		if lm.N > 0 {
 			m.LastMinute.Actions[i.String()] = lm.asTimedAction()
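The timeN hunk above is the fix from PR #21615: the old version returned a closure from a closure, so a caller that invoked only the outer function silently dropped the measurement. Flattening it to a single func(n int) makes one call do the recording. A minimal sketch of the bug shape, with hypothetical names rather than MinIO's:

    package main

    import (
        "fmt"
        "time"
    )

    // broken hands back a closure-returning closure; forgetting the
    // second call means record never runs.
    func broken(record func(time.Duration)) func(n int) func() {
        start := time.Now()
        return func(n int) func() {
            return func() { record(time.Since(start)) }
        }
    }

    // fixed records as soon as the returned function is called once.
    func fixed(record func(time.Duration)) func(n int) {
        start := time.Now()
        return func(n int) { record(time.Since(start)) }
    }

    func main() {
        calls := 0
        record := func(time.Duration) { calls++ }
        broken(record)(1) // inner closure discarded: nothing recorded
        fixed(record)(1)  // recorded immediately
        fmt.Println(calls) // 1, not 2
    }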


@@ -78,11 +78,9 @@ func initDataScanner(ctx context.Context, objAPI ObjectLayer) {
 		// Run the data scanner in a loop
 		for {
 			runDataScanner(ctx, objAPI)
-			duration := time.Duration(r.Float64() * float64(scannerCycle.Load()))
-			if duration < time.Second {
-				// Make sure to sleep at least a second to avoid high CPU ticks.
-				duration = time.Second
-			}
+			duration := max(time.Duration(r.Float64()*float64(scannerCycle.Load())),
+				// Make sure to sleep at least a second to avoid high CPU ticks.
+				time.Second)
 			time.Sleep(duration)
 		}
 	}()
@@ -332,7 +330,7 @@ func scanDataFolder(ctx context.Context, disks []StorageAPI, drive *xlStorage, c
 	}
 	var skipHeal atomic.Bool
-	if globalIsErasure || cache.Info.SkipHealing {
+	if !globalIsErasure || cache.Info.SkipHealing {
 		skipHeal.Store(true)
 	}


@@ -127,7 +127,7 @@ func TestApplyNewerNoncurrentVersionsLimit(t *testing.T) {
 		v2 uuid-2 modTime -3m
 		v1 uuid-1 modTime -4m
 	*/
-	for i := 0; i < 5; i++ {
+	for i := range 5 {
 		fivs[i] = FileInfo{
 			Volume: bucket,
 			Name:   obj,


@@ -22,6 +22,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	"maps"
 	"math/rand"
 	"net/http"
 	"path"
@@ -99,9 +100,7 @@ func (ats *allTierStats) clone() *allTierStats {
 	}
 	dst := *ats
 	dst.Tiers = make(map[string]tierStats, len(ats.Tiers))
-	for tier, st := range ats.Tiers {
-		dst.Tiers[tier] = st
-	}
+	maps.Copy(dst.Tiers, ats.Tiers)
 	return &dst
 }
@@ -347,9 +346,7 @@ func (e dataUsageEntry) clone() dataUsageEntry {
 	// We operate on a copy from the receiver.
 	if e.Children != nil {
 		ch := make(dataUsageHashMap, len(e.Children))
-		for k, v := range e.Children {
-			ch[k] = v
-		}
+		maps.Copy(ch, e.Children)
 		e.Children = ch
 	}
@@ -1224,11 +1221,11 @@ func (z *dataUsageHashMap) DecodeMsg(dc *msgp.Reader) (err error) {
 	zb0002, err = dc.ReadArrayHeader()
 	if err != nil {
 		err = msgp.WrapError(err)
-		return
+		return err
 	}
 	if zb0002 == 0 {
 		*z = nil
-		return
+		return err
 	}
 	*z = make(dataUsageHashMap, zb0002)
 	for i := uint32(0); i < zb0002; i++ {
@@ -1237,12 +1234,12 @@ func (z *dataUsageHashMap) DecodeMsg(dc *msgp.Reader) (err error) {
 		zb0003, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err)
-			return
+			return err
 		}
 		(*z)[zb0003] = struct{}{}
 	}
 	}
-	return
+	return err
 }
@@ -1250,16 +1247,16 @@ func (z dataUsageHashMap) EncodeMsg(en *msgp.Writer) (err error) {
 	err = en.WriteArrayHeader(uint32(len(z)))
 	if err != nil {
 		err = msgp.WrapError(err)
-		return
+		return err
 	}
 	for zb0004 := range z {
 		err = en.WriteString(zb0004)
 		if err != nil {
 			err = msgp.WrapError(err, zb0004)
-			return
+			return err
 		}
 	}
-	return
+	return err
 }
 // MarshalMsg implements msgp.Marshaler
@@ -1269,7 +1266,7 @@ func (z dataUsageHashMap) MarshalMsg(b []byte) (o []byte, err error) {
 	for zb0004 := range z {
 		o = msgp.AppendString(o, zb0004)
 	}
-	return
+	return o, err
 }
 // UnmarshalMsg implements msgp.Unmarshaler
@@ -1278,7 +1275,7 @@ func (z *dataUsageHashMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	zb0002, bts, err = msgp.ReadArrayHeaderBytes(bts)
 	if err != nil {
 		err = msgp.WrapError(err)
-		return
+		return o, err
 	}
 	if zb0002 == 0 {
 		*z = nil
@@ -1291,13 +1288,13 @@ func (z *dataUsageHashMap) UnmarshalMsg(bts []byte) (o []byte, err error) {
 		zb0003, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err)
-			return
+			return o, err
 		}
 		(*z)[zb0003] = struct{}{}
 	}
 	}
 	o = bts
-	return
+	return o, err
 }
 // Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message
@@ -1306,7 +1303,7 @@ func (z dataUsageHashMap) Msgsize() (s int) {
 	for zb0004 := range z {
 		s += msgp.StringPrefixSize + len(zb0004)
 	}
-	return
+	return s
 }
 //msgp:encode ignore currentScannerCycle


@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"time"
@@ -36,19 +36,17 @@ func (z *allTierStats) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Tiers == nil {
 		z.Tiers = make(map[string]tierStats, zb0002)
 	} else if len(z.Tiers) > 0 {
-		for key := range z.Tiers {
-			delete(z.Tiers, key)
-		}
+		clear(z.Tiers)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 tierStats
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Tiers")
 			return
 		}
+		var za0002 tierStats
 		var zb0003 uint32
 		zb0003, err = dc.ReadMapHeader()
 		if err != nil {
@@ -207,14 +205,12 @@ func (z *allTierStats) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Tiers == nil {
 		z.Tiers = make(map[string]tierStats, zb0002)
 	} else if len(z.Tiers) > 0 {
-		for key := range z.Tiers {
-			delete(z.Tiers, key)
-		}
+		clear(z.Tiers)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 tierStats
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Tiers")
@@ -415,19 +411,17 @@ func (z *dataUsageCache) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntry, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntry
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntry
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -543,14 +537,12 @@ func (z *dataUsageCache) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntry, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntry
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -799,19 +791,17 @@ func (z *dataUsageCacheV2) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV2, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntryV2
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntryV2
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -864,14 +854,12 @@ func (z *dataUsageCacheV2) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV2, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntryV2
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -942,19 +930,17 @@ func (z *dataUsageCacheV3) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV3, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntryV3
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntryV3
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -1007,14 +993,12 @@ func (z *dataUsageCacheV3) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV3, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntryV3
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -1085,19 +1069,17 @@ func (z *dataUsageCacheV4) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV4, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntryV4
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntryV4
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -1150,14 +1132,12 @@ func (z *dataUsageCacheV4) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV4, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntryV4
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -1228,19 +1208,17 @@ func (z *dataUsageCacheV5) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV5, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntryV5
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntryV5
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -1293,14 +1271,12 @@ func (z *dataUsageCacheV5) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV5, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntryV5
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -1371,19 +1347,17 @@ func (z *dataUsageCacheV6) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV6, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntryV6
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntryV6
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -1436,14 +1410,12 @@ func (z *dataUsageCacheV6) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV6, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntryV6
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -1514,19 +1486,17 @@ func (z *dataUsageCacheV7) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV7, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
 		zb0002--
 		var za0001 string
-		var za0002 dataUsageEntryV7
 		za0001, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
 			return
 		}
+		var za0002 dataUsageEntryV7
 		err = za0002.DecodeMsg(dc)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache", za0001)
@@ -1579,14 +1549,12 @@ func (z *dataUsageCacheV7) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.Cache == nil {
 		z.Cache = make(map[string]dataUsageEntryV7, zb0002)
 	} else if len(z.Cache) > 0 {
-		for key := range z.Cache {
-			delete(z.Cache, key)
-		}
+		clear(z.Cache)
 	}
 	for zb0002 > 0 {
-		var za0001 string
 		var za0002 dataUsageEntryV7
 		zb0002--
+		var za0001 string
 		za0001, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "Cache")
@@ -1745,19 +1713,17 @@ func (z *dataUsageEntry) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.AllTierStats.Tiers == nil {
 		z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 	} else if len(z.AllTierStats.Tiers) > 0 {
-		for key := range z.AllTierStats.Tiers {
-			delete(z.AllTierStats.Tiers, key)
-		}
+		clear(z.AllTierStats.Tiers)
 	}
 	for zb0005 > 0 {
 		zb0005--
 		var za0003 string
-		var za0004 tierStats
 		za0003, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "AllTierStats", "Tiers")
 			return
 		}
+		var za0004 tierStats
 		var zb0006 uint32
 		zb0006, err = dc.ReadMapHeader()
 		if err != nil {
@@ -2211,14 +2177,12 @@ func (z *dataUsageEntry) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.AllTierStats.Tiers == nil {
 		z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 	} else if len(z.AllTierStats.Tiers) > 0 {
-		for key := range z.AllTierStats.Tiers {
-			delete(z.AllTierStats.Tiers, key)
-		}
+		clear(z.AllTierStats.Tiers)
 	}
 	for zb0005 > 0 {
-		var za0003 string
 		var za0004 tierStats
 		zb0005--
+		var za0003 string
 		za0003, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "AllTierStats", "Tiers")
@@ -2984,19 +2948,17 @@ func (z *dataUsageEntryV7) DecodeMsg(dc *msgp.Reader) (err error) {
 	if z.AllTierStats.Tiers == nil {
 		z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 	} else if len(z.AllTierStats.Tiers) > 0 {
-		for key := range z.AllTierStats.Tiers {
-			delete(z.AllTierStats.Tiers, key)
-		}
+		clear(z.AllTierStats.Tiers)
 	}
 	for zb0005 > 0 {
 		zb0005--
 		var za0003 string
-		var za0004 tierStats
 		za0003, err = dc.ReadString()
 		if err != nil {
 			err = msgp.WrapError(err, "AllTierStats", "Tiers")
 			return
 		}
+		var za0004 tierStats
 		var zb0006 uint32
 		zb0006, err = dc.ReadMapHeader()
 		if err != nil {
@@ -3192,14 +3154,12 @@ func (z *dataUsageEntryV7) UnmarshalMsg(bts []byte) (o []byte, err error) {
 	if z.AllTierStats.Tiers == nil {
 		z.AllTierStats.Tiers = make(map[string]tierStats, zb0005)
 	} else if len(z.AllTierStats.Tiers) > 0 {
-		for key := range z.AllTierStats.Tiers {
-			delete(z.AllTierStats.Tiers, key)
-		}
+		clear(z.AllTierStats.Tiers)
 	}
 	for zb0005 > 0 {
-		var za0003 string
 		var za0004 tierStats
 		zb0005--
+		var za0003 string
 		za0003, bts, err = msgp.ReadStringBytes(bts)
 		if err != nil {
 			err = msgp.WrapError(err, "AllTierStats", "Tiers")
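Throughout this generated file the delete-every-key loops become the clear builtin (Go 1.21). A minimal sketch of the equivalence:

    package main

    import "fmt"

    func main() {
        tiers := map[string]int{"hot": 1, "warm": 2}
        clear(tiers) // same effect as ranging over the map and deleting each key
        fmt.Println(len(tiers)) // 0; the map itself stays allocated for reuse
    }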


@@ -1,7 +1,7 @@
-package cmd
 // Code generated by github.com/tinylib/msgp DO NOT EDIT.
+package cmd
 import (
 	"bytes"
 	"testing"


@@ -56,13 +56,13 @@ func TestDataUsageUpdate(t *testing.T) {
 			var s os.FileInfo
 			s, err = os.Stat(item.Path)
 			if err != nil {
-				return
+				return sizeS, err
 			}
 			sizeS.totalSize = s.Size()
 			sizeS.versions++
 			return sizeS, nil
 		}
-		return
+		return sizeS, err
 	}
 	xls := xlStorage{drivePath: base, diskInfoCache: cachevalue.New[DiskInfo]()}
 	xls.diskInfoCache.InitOnce(time.Second, cachevalue.Opts{}, func(ctx context.Context) (DiskInfo, error) {
@@ -179,7 +179,7 @@ func TestDataUsageUpdate(t *testing.T) {
 		t.Fatal(err)
 	}
 	// Changed dir must be picked up in this many cycles.
-	for i := 0; i < dataUsageUpdateDirCycles; i++ {
+	for range dataUsageUpdateDirCycles {
 		got, err = scanDataFolder(t.Context(), nil, &xls, got, getSize, 0, weSleep)
 		got.Info.NextCycle++
 		if err != nil {
@@ -279,13 +279,13 @@ func TestDataUsageUpdatePrefix(t *testing.T) {
 			var s os.FileInfo
 			s, err = os.Stat(item.Path)
 			if err != nil {
-				return
+				return sizeS, err
 			}
 			sizeS.totalSize = s.Size()
 			sizeS.versions++
-			return
+			return sizeS, err
 		}
-		return
+		return sizeS, err
 	}
 	weSleep := func() bool { return false }
@@ -428,7 +428,7 @@ func TestDataUsageUpdatePrefix(t *testing.T) {
 		t.Fatal(err)
 	}
 	// Changed dir must be picked up in this many cycles.
-	for i := 0; i < dataUsageUpdateDirCycles; i++ {
+	for range dataUsageUpdateDirCycles {
 		got, err = scanDataFolder(t.Context(), nil, &xls, got, getSize, 0, weSleep)
 		got.Info.NextCycle++
 		if err != nil {
@@ -526,13 +526,13 @@ func createUsageTestFiles(t *testing.T, base, bucket string, files []usageTestFi
 // generateUsageTestFiles create nFolders * nFiles files of size bytes each.
 func generateUsageTestFiles(t *testing.T, base, bucket string, nFolders, nFiles, size int) {
 	pl := make([]byte, size)
-	for i := 0; i < nFolders; i++ {
+	for i := range nFolders {
 		name := filepath.Join(base, bucket, fmt.Sprint(i), "0.txt")
 		err := os.MkdirAll(filepath.Dir(name), os.ModePerm)
 		if err != nil {
 			t.Fatal(err)
 		}
-		for j := 0; j < nFiles; j++ {
+		for j := range nFiles {
 			name := filepath.Join(base, bucket, fmt.Sprint(i), fmt.Sprint(j)+".txt")
 			err = os.WriteFile(name, pl, os.ModePerm)
 			if err != nil {
@@ -569,13 +569,13 @@ func TestDataUsageCacheSerialize(t *testing.T) {
 			var s os.FileInfo
 			s, err = os.Stat(item.Path)
 			if err != nil {
-				return
+				return sizeS, err
 			}
 			sizeS.versions++
 			sizeS.totalSize = s.Size()
-			return
+			return sizeS, err
 		}
-		return
+		return sizeS, err
 	}
 	xls := xlStorage{drivePath: base, diskInfoCache: cachevalue.New[DiskInfo]()}
 	xls.diskInfoCache.InitOnce(time.Second, cachevalue.Opts{}, func(ctx context.Context) (DiskInfo, error) {
@@ -618,7 +618,7 @@ func TestDataUsageCacheSerialize(t *testing.T) {
 }
 // equalAsJSON returns whether the values are equal when encoded as JSON.
-func equalAsJSON(a, b interface{}) bool {
+func equalAsJSON(a, b any) bool {
 	aj, err := json.Marshal(a)
 	if err != nil {
 		panic(err)
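The test rewrites above lean on Go 1.22's range-over-integer form. A minimal sketch of both variants:

    package main

    import "fmt"

    func main() {
        for i := range 3 { // same as: for i := 0; i < 3; i++
            fmt.Println(i)
        }
        for range 2 { // index unused: no loop variable at all
            fmt.Println("tick")
        }
    }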


@@ -87,7 +87,7 @@ func (d *DummyDataGen) Read(b []byte) (n int, err error) {
 		}
 		err = io.EOF
 	}
-	return
+	return n, err
 }
 func (d *DummyDataGen) Seek(offset int64, whence int) (int64, error) {


@@ -129,12 +129,9 @@ func (dt *dynamicTimeout) adjust(entries [dynamicTimeoutLogSize]time.Duration) {
 	if failPct > dynamicTimeoutIncreaseThresholdPct {
 		// We are hitting the timeout too often, so increase the timeout by 25%
-		timeout := atomic.LoadInt64(&dt.timeout) * 125 / 100
-		// Set upper cap.
-		if timeout > int64(maxDynamicTimeout) {
-			timeout = int64(maxDynamicTimeout)
-		}
+		timeout := min(
+			// Set upper cap.
+			atomic.LoadInt64(&dt.timeout)*125/100, int64(maxDynamicTimeout))
 		// Safety, shouldn't happen
 		if timeout < dt.minimum {
 			timeout = dt.minimum
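min is the upper-bound counterpart of the max builtin used earlier: growing a value by 25% while capping it becomes a single expression. A minimal sketch with made-up numbers:

    package main

    import "fmt"

    const maxTimeout = int64(30_000) // hypothetical cap in milliseconds

    func main() {
        cur := int64(28_000)
        next := min(cur*125/100, maxTimeout) // grow 25%, but never past the cap
        fmt.Println(next)                    // 30000
    }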


@@ -30,7 +30,7 @@ func TestDynamicTimeoutSingleIncrease(t *testing.T) {
 	initial := timeout.Timeout()
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		timeout.LogFailure()
 	}
@@ -46,13 +46,13 @@ func TestDynamicTimeoutDualIncrease(t *testing.T) {
 	initial := timeout.Timeout()
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		timeout.LogFailure()
 	}
 	adjusted := timeout.Timeout()
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		timeout.LogFailure()
 	}
@@ -68,7 +68,7 @@ func TestDynamicTimeoutSingleDecrease(t *testing.T) {
 	initial := timeout.Timeout()
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		timeout.LogSuccess(20 * time.Second)
 	}
@@ -84,13 +84,13 @@ func TestDynamicTimeoutDualDecrease(t *testing.T) {
 	initial := timeout.Timeout()
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		timeout.LogSuccess(20 * time.Second)
 	}
 	adjusted := timeout.Timeout()
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		timeout.LogSuccess(20 * time.Second)
 	}
@@ -107,8 +107,8 @@ func TestDynamicTimeoutManyDecreases(t *testing.T) {
 	initial := timeout.Timeout()
 	const successTimeout = 20 * time.Second
-	for l := 0; l < 100; l++ {
-		for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range 100 {
+		for range dynamicTimeoutLogSize {
 			timeout.LogSuccess(successTimeout)
 		}
 	}
@@ -129,8 +129,8 @@ func TestDynamicTimeoutConcurrent(t *testing.T) {
 		rng := rand.New(rand.NewSource(int64(i)))
 		go func() {
 			defer wg.Done()
-			for i := 0; i < 100; i++ {
-				for j := 0; j < 100; j++ {
+			for range 100 {
+				for range 100 {
 					timeout.LogSuccess(time.Duration(float64(time.Second) * rng.Float64()))
 				}
 				to := timeout.Timeout()
@@ -150,8 +150,8 @@ func TestDynamicTimeoutHitMinimum(t *testing.T) {
 	initial := timeout.Timeout()
 	const successTimeout = 20 * time.Second
-	for l := 0; l < 100; l++ {
-		for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range 100 {
+		for range dynamicTimeoutLogSize {
 			timeout.LogSuccess(successTimeout)
 		}
 	}
@@ -166,13 +166,9 @@ func testDynamicTimeoutAdjust(t *testing.T, timeout *dynamicTimeout, f func() float64) {
 	const successTimeout = 20 * time.Second
-	for i := 0; i < dynamicTimeoutLogSize; i++ {
+	for range dynamicTimeoutLogSize {
 		rnd := f()
-		duration := time.Duration(float64(successTimeout) * rnd)
-		if duration < 100*time.Millisecond {
-			duration = 100 * time.Millisecond
-		}
+		duration := max(time.Duration(float64(successTimeout)*rnd), 100*time.Millisecond)
 		if duration >= time.Minute {
 			timeout.LogFailure()
 		} else {
@@ -188,7 +184,7 @@ func TestDynamicTimeoutAdjustExponential(t *testing.T) {
 	initial := timeout.Timeout()
-	for try := 0; try < 10; try++ {
+	for range 10 {
 		testDynamicTimeoutAdjust(t, timeout, rand.ExpFloat64)
 	}
@@ -205,7 +201,7 @@ func TestDynamicTimeoutAdjustNormalized(t *testing.T) {
 	initial := timeout.Timeout()
-	for try := 0; try < 10; try++ {
+	for range 10 {
 		testDynamicTimeoutAdjust(t, timeout, func() float64 {
 			return 1.0 + rand.NormFloat64()
 		})


@@ -29,6 +29,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	"maps"
 	"net/http"
 	"path"
 	"strconv"
@@ -37,7 +38,6 @@ import (
 	"github.com/minio/kms-go/kes"
 	"github.com/minio/minio/internal/crypto"
 	"github.com/minio/minio/internal/etag"
-	"github.com/minio/minio/internal/fips"
 	"github.com/minio/minio/internal/hash"
 	"github.com/minio/minio/internal/hash/sha256"
 	xhttp "github.com/minio/minio/internal/http"
@@ -118,10 +118,7 @@ func DecryptETags(ctx context.Context, k *kms.KMS, objects []ObjectInfo) error {
 		names = make([]string, 0, BatchSize)
 	)
 	for len(objects) > 0 {
-		N := BatchSize
-		if len(objects) < BatchSize {
-			N = len(objects)
-		}
+		N := min(len(objects), BatchSize)
 		batch := objects[:N]
 		// We have to decrypt only ETags of SSE-S3 single-part
@@ -318,9 +315,7 @@ func rotateKey(ctx context.Context, oldKey []byte, newKeyID string, newKey []byt
 	// of the client provided context and add the bucket
 	// key, if not present.
 	kmsCtx := kms.Context{}
-	for k, v := range cryptoCtx {
-		kmsCtx[k] = v
-	}
+	maps.Copy(kmsCtx, cryptoCtx)
 	if _, ok := kmsCtx[bucket]; !ok {
 		kmsCtx[bucket] = path.Join(bucket, object)
 	}
@@ -390,9 +385,7 @@ func newEncryptMetadata(ctx context.Context, kind crypto.Type, keyID string, key
 	// of the client provided context and add the bucket
 	// key, if not present.
 	kmsCtx := kms.Context{}
-	for k, v := range cryptoCtx {
-		kmsCtx[k] = v
-	}
+	maps.Copy(kmsCtx, cryptoCtx)
 	if _, ok := kmsCtx[bucket]; !ok {
 		kmsCtx[bucket] = path.Join(bucket, object)
 	}
@@ -427,7 +420,7 @@ func newEncryptReader(ctx context.Context, content io.Reader, kind crypto.Type,
 		return nil, crypto.ObjectKey{}, err
 	}
-	reader, err := sio.EncryptReader(content, sio.Config{Key: objectEncryptionKey[:], MinVersion: sio.Version20, CipherSuites: fips.DARECiphers()})
+	reader, err := sio.EncryptReader(content, sio.Config{Key: objectEncryptionKey[:], MinVersion: sio.Version20})
 	if err != nil {
 		return nil, crypto.ObjectKey{}, crypto.ErrInvalidCustomerKey
 	}
@@ -457,7 +450,7 @@ func setEncryptionMetadata(r *http.Request, bucket, object string, metadata map[
 		}
 	}
 	_, err = newEncryptMetadata(r.Context(), kind, keyID, key, bucket, object, metadata, kmsCtx)
-	return
+	return err
 }
 // EncryptRequest takes the client provided content and encrypts the data
@@ -570,7 +563,6 @@ func newDecryptReaderWithObjectKey(client io.Reader, objectEncryptionKey []byte,
 	reader, err := sio.DecryptReader(client, sio.Config{
 		Key:            objectEncryptionKey,
 		SequenceNumber: seqNumber,
-		CipherSuites:   fips.DARECiphers(),
 	})
 	if err != nil {
 		return nil, crypto.ErrInvalidCustomerKey
@@ -863,7 +855,7 @@ func tryDecryptETag(key []byte, encryptedETag string, sses3 bool) string {
 func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, skipLen int64, seqNumber uint32, partStart int, err error) {
 	if _, ok := crypto.IsEncrypted(o.UserDefined); !ok {
 		err = errors.New("Object is not encrypted")
-		return
+		return encOff, encLength, skipLen, seqNumber, partStart, err
 	}
 	if rs == nil {
@@ -881,7 +873,7 @@ func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, sk
 			partSize, err = sio.DecryptedSize(uint64(part.Size))
 			if err != nil {
 				err = errObjectTampered
-				return
+				return encOff, encLength, skipLen, seqNumber, partStart, err
 			}
 			sizes[i] = int64(partSize)
 			decObjSize += int64(partSize)
@@ -891,7 +883,7 @@ func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, sk
 		partSize, err = sio.DecryptedSize(uint64(o.Size))
 		if err != nil {
 			err = errObjectTampered
-			return
+			return encOff, encLength, skipLen, seqNumber, partStart, err
 		}
 		sizes = []int64{int64(partSize)}
 		decObjSize = sizes[0]
@@ -900,7 +892,7 @@ func (o *ObjectInfo) GetDecryptedRange(rs *HTTPRangeSpec) (encOff, encLength, sk
 	var off, length int64
 	off, length, err = rs.GetOffsetLength(decObjSize)
 	if err != nil {
-		return
+		return encOff, encLength, skipLen, seqNumber, partStart, err
 	}
 	// At this point, we have:
@@ -1062,7 +1054,7 @@ func metadataEncrypter(key crypto.ObjectKey) objectMetaEncryptFn {
 		var buffer bytes.Buffer
 		mac := hmac.New(sha256.New, key[:])
 		mac.Write([]byte(baseKey))
-		if _, err := sio.Encrypt(&buffer, bytes.NewReader(data), sio.Config{Key: mac.Sum(nil), CipherSuites: fips.DARECiphers()}); err != nil {
+		if _, err := sio.Encrypt(&buffer, bytes.NewReader(data), sio.Config{Key: mac.Sum(nil)}); err != nil {
 			logger.CriticalIf(context.Background(), errors.New("unable to encrypt using object key"))
 		}
 		return buffer.Bytes()
@@ -1076,8 +1068,16 @@ func (o *ObjectInfo) metadataDecrypter(h http.Header) objectMetaDecryptFn {
 			return input, nil
 		}
 		var key []byte
-		if k, err := crypto.SSEC.ParseHTTP(h); err == nil {
-			key = k[:]
+		if crypto.SSECopy.IsRequested(h) {
+			sseCopyKey, err := crypto.SSECopy.ParseHTTP(h)
+			if err != nil {
+				return nil, err
+			}
+			key = sseCopyKey[:]
+		} else {
+			if k, err := crypto.SSEC.ParseHTTP(h); err == nil {
+				key = k[:]
+			}
 		}
 		key, err := decryptObjectMeta(key, o.Bucket, o.Name, o.UserDefined)
 		if err != nil {
@@ -1085,11 +1085,12 @@ func (o *ObjectInfo) metadataDecrypter(h http.Header) objectMetaDecryptFn {
 		}
 		mac := hmac.New(sha256.New, key)
 		mac.Write([]byte(baseKey))
-		return sio.DecryptBuffer(nil, input, sio.Config{Key: mac.Sum(nil), CipherSuites: fips.DARECiphers()})
+		return sio.DecryptBuffer(nil, input, sio.Config{Key: mac.Sum(nil)})
 	}
 }
-// decryptPartsChecksums will attempt to decode checksums and return it/them if set.
+// decryptPartsChecksums will attempt to decrypt and decode part checksums, and save
+// only the decrypted part checksum values on ObjectInfo directly.
 // if part > 0, and we have the checksum for the part that will be returned.
 func (o *ObjectInfo) decryptPartsChecksums(h http.Header) {
 	data := o.Checksum
@@ -1114,6 +1115,23 @@ func (o *ObjectInfo) decryptPartsChecksums(h http.Header) {
 	}
 }
+// decryptChecksum will attempt to decrypt the ObjectInfo.Checksum, returns the decrypted value
+// An error is only returned if it was encrypted and the decryption failed.
+func (o *ObjectInfo) decryptChecksum(h http.Header) ([]byte, error) {
+	data := o.Checksum
+	if len(data) == 0 {
+		return data, nil
+	}
+	if _, encrypted := crypto.IsEncrypted(o.UserDefined); encrypted {
+		decrypted, err := o.metadataDecrypter(h)("object-checksum", data)
+		if err != nil {
+			return nil, err
+		}
+		data = decrypted
+	}
+	return data, nil
+}
 // metadataEncryptFn provides an encryption function for metadata.
 // Will return nil, nil if unencrypted.
 func (o *ObjectInfo) metadataEncryptFn(headers http.Header) (objectMetaEncryptFn, error) {
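The metadata encrypt/decrypt hunks above derive a per-field key by HMAC-ing the object key with a label (the baseKey) before handing it to sio. A stdlib-only sketch of just that derivation step — the label strings and zeroed key below are stand-ins, not MinIO's actual values:

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "fmt"
    )

    // deriveKey mirrors the mac.Sum(nil) pattern seen in the hunks above.
    func deriveKey(objectKey []byte, label string) []byte {
        mac := hmac.New(sha256.New, objectKey)
        mac.Write([]byte(label))
        return mac.Sum(nil) // 32-byte key, unique per label
    }

    func main() {
        objKey := make([]byte, 32) // stand-in for the real object encryption key
        k1 := deriveKey(objKey, "object-checksum")
        k2 := deriveKey(objKey, "other-field")
        fmt.Println(hmac.Equal(k1, k2)) // false: each label yields its own key
    }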


@@ -384,7 +384,7 @@ func TestGetDecryptedRange(t *testing.T) {
 	// Simple useful utilities
 	repeat = func(k int64, n int) []int64 {
 		a := []int64{}
-		for i := 0; i < n; i++ {
+		for range n {
 			a = append(a, k)
 		}
 		return a
@@ -471,10 +471,7 @@
 			// round up the lbPartOffset
 			// to the end of the
 			// corresponding DARE package
-			lbPkgEndOffset := lbPartOffset - (lbPartOffset % pkgSz) + pkgSz
-			if lbPkgEndOffset > v {
-				lbPkgEndOffset = v
-			}
+			lbPkgEndOffset := min(lbPartOffset-(lbPartOffset%pkgSz)+pkgSz, v)
 			bytesToDrop := v - lbPkgEndOffset
 			// Last segment to update `l`
@@ -486,7 +483,7 @@
 			cumulativeSum += v
 			cumulativeEncSum += getEncSize(v)
 		}
-		return
+		return o, l, skip, sn, ps
 	}
 	for i, test := range testMPs {


@@ -22,7 +22,7 @@ import (
 	"fmt"
 	"net/url"
 	"runtime"
-	"sort"
+	"slices"
 	"strings"
 	"github.com/cespare/xxhash/v2"
@@ -122,9 +122,7 @@ func possibleSetCountsWithSymmetry(setCounts []uint64, argPatterns []ellipses.Ar
 	// eyes that we prefer a sorted setCount slice for the
 	// subsequent function to figure out the right common
 	// divisor, it avoids loops.
-	sort.Slice(setCounts, func(i, j int) bool {
-		return setCounts[i] < setCounts[j]
-	})
+	slices.Sort(setCounts)
 	return setCounts
 }
@@ -445,7 +443,7 @@ func buildDisksLayoutFromConfFile(pools []poolArgs) (layout disksLayout, err err
 			layout: setArgs,
 		})
 	}
-	return
+	return layout, err
 }
 // mergeDisksLayoutFromArgs supports with and without ellipses transparently.
@@ -477,7 +475,7 @@ func mergeDisksLayoutFromArgs(args []string, ctxt *serverCtxt) (err error) {
 			legacy: true,
 			pools:  []poolDisksLayout{{layout: setArgs, cmdline: strings.Join(args, " ")}},
 		}
-		return
+		return err
 	}
 	for _, arg := range args {
@@ -491,7 +489,7 @@ func mergeDisksLayoutFromArgs(args []string, ctxt *serverCtxt) (err error) {
 		}
 		ctxt.Layout.pools = append(ctxt.Layout.pools, poolDisksLayout{cmdline: arg, layout: setArgs})
 	}
-	return
+	return err
 }
 // CreateServerEndpoints - validates and creates new endpoints from input args, supports
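slices.Sort (Go 1.21) replaces the sort.Slice call with its hand-written less function, as the hunk above shows. A minimal sketch:

    package main

    import (
        "fmt"
        "slices"
    )

    func main() {
        setCounts := []uint64{16, 4, 8}
        slices.Sort(setCounts) // no comparator needed for ordered types
        fmt.Println(setCounts) // [4 8 16]
    }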


@@ -55,7 +55,6 @@ func TestCreateServerEndpoints(t *testing.T) {
 	}
 	for i, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			srvCtxt := serverCtxt{}
 			err := mergeDisksLayoutFromArgs(testCase.args, &srvCtxt)
@@ -85,7 +84,6 @@ func TestGetDivisibleSize(t *testing.T) {
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			gotGCD := getDivisibleSize(testCase.totalSizes)
 			if testCase.result != gotGCD {
@@ -172,7 +170,6 @@ func TestGetSetIndexesEnvOverride(t *testing.T) {
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			argPatterns := make([]ellipses.ArgPattern, len(testCase.args))
 			for i, arg := range testCase.args {
@@ -294,7 +291,6 @@ func TestGetSetIndexes(t *testing.T) {
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			argPatterns := make([]ellipses.ArgPattern, len(testCase.args))
 			for i, arg := range testCase.args {
@@ -637,7 +633,6 @@ func TestParseEndpointSet(t *testing.T) {
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			gotEs, err := parseEndpointSet(0, testCase.arg)
 			if err != nil && testCase.success {
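The deleted `testCase := testCase` lines were the pre-Go-1.22 idiom for re-declaring a loop variable so closures (like the t.Run callbacks here) would not all capture one shared variable. Since Go 1.22 each iteration gets a fresh variable, so modernize drops the shadowing. A minimal sketch of why the idiom is now redundant:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for _, v := range []string{"a", "b", "c"} {
            // Pre-1.22 this needed `v := v`; now each iteration owns its v.
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(v) // prints a, b, c (in some order), not c, c, c
            }()
        }
        wg.Wait()
    }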


@@ -267,7 +267,7 @@ func (l EndpointServerPools) ESCount() (count int) {
 	for _, p := range l {
 		count += p.SetCount
 	}
-	return
+	return count
 }
 // GetNodes returns a sorted list of nodes in this cluster
@@ -297,7 +297,7 @@ func (l EndpointServerPools) GetNodes() (nodes []Node) {
 	sort.Slice(nodes, func(i, j int) bool {
 		return nodes[i].Host < nodes[j].Host
 	})
-	return
+	return nodes
 }
 // GetPoolIdx return pool index
@@ -588,7 +588,7 @@ func (endpoints Endpoints) GetAllStrings() (all []string) {
 	for _, e := range endpoints {
 		all = append(all, e.String())
 	}
-	return
+	return all
 }
 func hostResolveToLocalhost(endpoint Endpoint) bool {


@@ -312,7 +312,6 @@ func TestCreateEndpoints(t *testing.T) {
 	}
 	for i, testCase := range testCases {
-		i := i
 		testCase := testCase
 		t.Run("", func(t *testing.T) {
 			var srvCtxt serverCtxt


@@ -69,7 +69,7 @@ func NewErasure(ctx context.Context, dataBlocks, parityBlocks int, blockSize int
 		})
 		return enc
 	}
-	return
+	return e, err
 }
 // EncodeData encodes the given data and returns the erasure-coded data.
@@ -136,10 +136,7 @@ func (e *Erasure) ShardFileOffset(startOffset, length, totalLength int64) int64
 	shardSize := e.ShardSize()
 	shardFileSize := e.ShardFileSize(totalLength)
 	endShard := (startOffset + length) / e.blockSize
-	tillOffset := endShard*shardSize + shardSize
-	if tillOffset > shardFileSize {
-		tillOffset = shardFileSize
-	}
+	tillOffset := min(endShard*shardSize+shardSize, shardFileSize)
 	return tillOffset
 }


@@ -30,7 +30,6 @@ func (er erasureObjects) getOnlineDisks() (newDisks []StorageAPI) {
 	var mu sync.Mutex
 	r := rand.New(rand.NewSource(time.Now().UnixNano()))
 	for _, i := range r.Perm(len(disks)) {
-		i := i
 		wg.Add(1)
 		go func() {
 			defer wg.Done()


@@ -251,7 +251,7 @@ func TestErasureDecodeRandomOffsetLength(t *testing.T) {
 	buf := &bytes.Buffer{}
 	// Verify erasure.Decode() for random offsets and lengths.
-	for i := 0; i < iterations; i++ {
+	for range iterations {
 		offset := r.Int63n(length)
 		readLen := r.Int63n(length - offset)
@@ -308,17 +308,16 @@ func benchmarkErasureDecode(data, parity, dataDown, parityDown int, size int64,
 		b.Fatalf("failed to create erasure test file: %v", err)
 	}
-	for i := 0; i < dataDown; i++ {
+	for i := range dataDown {
 		writers[i] = nil
 	}
 	for i := data; i < data+parityDown; i++ {
 		writers[i] = nil
 	}
-	b.ResetTimer()
 	b.SetBytes(size)
 	b.ReportAllocs()
-	for i := 0; i < b.N; i++ {
+	for b.Loop() {
 		bitrotReaders := make([]io.ReaderAt, len(disks))
 		for index, disk := range disks {
 			if writers[index] == nil {
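
This and the following benchmark hunk replace the manual `b.ResetTimer()` plus `for i := 0; i < b.N; i++` pattern with `testing.B.Loop`, added in Go 1.24. `b.Loop()` resets the timer when first called and stops it after the final iteration, which is why the explicit `ResetTimer` call is deleted rather than kept. A minimal sketch of the new shape (the benchmark body here is a stand-in, not the erasure code):

package bench_test

import "testing"

func BenchmarkSum(b *testing.B) {
	data := make([]byte, 1<<20)
	b.SetBytes(int64(len(data)))
	b.ReportAllocs()
	// No b.ResetTimer() needed: b.Loop() manages the timer itself.
	for b.Loop() {
		var sum byte
		for _, v := range data {
			sum += v
		}
		_ = sum // keep the result alive
	}
}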


@@ -172,17 +172,16 @@ func benchmarkErasureEncode(data, parity, dataDown, parityDown int, size int64,
 	buffer := make([]byte, blockSizeV2, 2*blockSizeV2)
 	content := make([]byte, size)
-	for i := 0; i < dataDown; i++ {
+	for i := range dataDown {
 		disks[i] = OfflineDisk
 	}
 	for i := data; i < data+parityDown; i++ {
 		disks[i] = OfflineDisk
 	}
-	b.ResetTimer()
 	b.SetBytes(size)
 	b.ReportAllocs()
-	for i := 0; i < b.N; i++ {
+	for b.Loop() {
 		writers := make([]io.Writer, len(disks))
 		for i, disk := range disks {
 			if disk == OfflineDisk {


@@ -102,7 +102,7 @@ func TestErasureHeal(t *testing.T) {
 	// setup stale disks for the test case
 	staleDisks := make([]StorageAPI, len(disks))
 	copy(staleDisks, disks)
-	for j := 0; j < len(staleDisks); j++ {
+	for j := range staleDisks {
 		if j < test.offDisks {
 			readers[j] = nil
 		} else {
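
`for j := range staleDisks` and the `for i := range 16` rewrites elsewhere in this diff lean on two range forms: ranging over a slice yields its indices, and since Go 1.22 ranging over an integer n yields 0 through n-1, with the variable droppable entirely when unused (as in `for range iterations` above). A small sketch of the spellings side by side (hypothetical values):

package main

import "fmt"

func main() {
	disks := make([]string, 3)

	for j := 0; j < len(disks); j++ { // classic counted loop
		disks[j] = fmt.Sprintf("disk-%d", j)
	}
	for j := range disks { // range over a slice: indices 0..len-1
		fmt.Println(disks[j])
	}
	for i := range 3 { // Go 1.22+: range over an int, 0..2
		fmt.Println(i)
	}
	for range 2 { // index dropped when unused
		fmt.Println("tick")
	}
}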


@@ -283,7 +283,7 @@ func countPartNotSuccess(partErrs []int) (c int) {
 			c++
 		}
 	}
-	return
+	return c
 }
 
 // checkObjectWithAllParts sets partsMetadata and onlineDisks when xl.meta is inexistant/corrupted or outdated
@@ -436,5 +436,5 @@ func checkObjectWithAllParts(ctx context.Context, onlineDisks []StorageAPI, part
 			dataErrsByDisk[disk][part] = dataErrsByPart[part][disk]
 		}
 	}
-	return
+	return dataErrsByDisk, dataErrsByPart
 }


@@ -175,7 +175,7 @@ func TestListOnlineDisks(t *testing.T) {
 	fourNanoSecs := time.Unix(4, 0).UTC()
 	modTimesThreeNone := make([]time.Time, 16)
 	modTimesThreeFour := make([]time.Time, 16)
-	for i := 0; i < 16; i++ {
+	for i := range 16 {
 		// Have 13 good xl.meta, 12 for default parity count = 4 (EC:4) and one
 		// to be tampered with.
 		if i > 12 {
@@ -244,7 +244,6 @@ func TestListOnlineDisks(t *testing.T) {
 	}
 	for i, test := range testCases {
-		test := test
 		t.Run(fmt.Sprintf("case-%d", i), func(t *testing.T) {
 			_, err = obj.PutObject(ctx, bucket, object, mustGetPutObjReader(t, bytes.NewReader(data), int64(len(data)), "", ""), ObjectOptions{})
 			if err != nil {
@@ -350,7 +349,7 @@ func TestListOnlineDisksSmallObjects(t *testing.T) {
 	fourNanoSecs := time.Unix(4, 0).UTC()
 	modTimesThreeNone := make([]time.Time, 16)
 	modTimesThreeFour := make([]time.Time, 16)
-	for i := 0; i < 16; i++ {
+	for i := range 16 {
 		// Have 13 good xl.meta, 12 for default parity count = 4 (EC:4) and one
 		// to be tampered with.
 		if i > 12 {
@@ -419,7 +418,6 @@ func TestListOnlineDisksSmallObjects(t *testing.T) {
 	}
 	for i, test := range testCases {
-		test := test
 		t.Run(fmt.Sprintf("case-%d", i), func(t *testing.T) {
 			_, err := obj.PutObject(ctx, bucket, object,
 				mustGetPutObjReader(t, bytes.NewReader(data), int64(len(data)), "", ""), ObjectOptions{})
@@ -753,7 +751,7 @@ func TestCommonParities(t *testing.T) {
 	}
 	for idx, test := range tests {
 		var metaArr []FileInfo
-		for i := 0; i < 12; i++ {
+		for i := range 12 {
 			fi := test.fi1
 			if i%2 == 0 {
 				fi = test.fi2


@@ -116,7 +116,6 @@ func (er erasureObjects) listAndHeal(ctx context.Context, bucket, prefix string,
 func listAllBuckets(ctx context.Context, storageDisks []StorageAPI, healBuckets *xsync.MapOf[string, VolInfo], readQuorum int) error {
 	g := errgroup.WithNErrs(len(storageDisks))
 	for index := range storageDisks {
-		index := index
 		g.Go(func() error {
 			if storageDisks[index] == nil {
 				// we ignore disk not found errors
@@ -966,7 +965,7 @@ func danglingMetaErrsCount(cerrs []error) (notFoundCount int, nonActionableCount
 			nonActionableCount++
 		}
 	}
-	return
+	return notFoundCount, nonActionableCount
 }
 
 func danglingPartErrsCount(results []int) (notFoundCount int, nonActionableCount int) {
@@ -981,7 +980,7 @@ func danglingPartErrsCount(results []int) (notFoundCount int, nonActionableCount
 			nonActionableCount++
 		}
 	}
-	return
+	return notFoundCount, nonActionableCount
 }
 
 // Object is considered dangling/corrupted if and only


@@ -296,7 +296,6 @@ func TestIsObjectDangling(t *testing.T) {
 		// Add new cases as seen
 	}
 	for _, testCase := range testCases {
-		testCase := testCase
 		t.Run(testCase.name, func(t *testing.T) {
 			gotMeta, dangling := isObjectDangling(testCase.metaArr, testCase.errs, testCase.dataErrs)
 			if !gotMeta.Equals(testCase.expectedMeta) {

Some files were not shown because too many files have changed in this diff.