Compare commits

..

1 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| 1sonofqiu | 684f3b55c3 | feat(stream): impl log tail truncate | 2025-10-28 23:55:06 +08:00 |
212 changed files with 4520 additions and 14045 deletions

View File

@@ -9,14 +9,6 @@ or [Slack](https://join.slack.com/t/automq/shared_invite/zt-29h17vye9-thf31ebIVL
Before getting started, please review AutoMQ's Code of Conduct. Everyone interacting in Slack or WeChat is expected to
follow the [Code of Conduct](CODE_OF_CONDUCT.md).
## Suggested Onboarding Path for New Contributors
If you are new to AutoMQ, it is recommended to first deploy and run AutoMQ using Docker as described in the README.
This helps you quickly understand AutoMQ's core concepts and behavior without local environment complexity.
After gaining familiarity, contributors who want to work on code can follow the steps in this guide to build and run AutoMQ locally.
## Code Contributions
### Finding or Reporting Issues

View File

@@ -13,7 +13,7 @@
</p>
[![Linkedin Badge](https://img.shields.io/badge/-LinkedIn-blue?style=flat-square&logo=Linkedin&logoColor=white&link=https://www.linkedin.com/company/automq)](https://www.linkedin.com/company/automq)
[![](https://badgen.net/badge/Slack/Join%20AutoMQ/0abd59?icon=slack)](https://go.automq.com/slack)
[![](https://badgen.net/badge/Slack/Join%20AutoMQ/0abd59?icon=slack)](https://join.slack.com/t/automq/shared_invite/zt-29h17vye9-thf31ebIVL9oXuRdACnOIA)
[![](https://img.shields.io/badge/AutoMQ%20vs.%20Kafka(Cost)-yellow)](https://www.automq.com/blog/automq-vs-apache-kafka-a-real-aws-cloud-bill-comparison?utm_source=github_automq)
[![](https://img.shields.io/badge/AutoMQ%20vs.%20Kafka(Performance)-orange)](https://www.automq.com/docs/automq/benchmarks/automq-vs-apache-kafka-benchmarks-and-cost?utm_source=github_automq)
[![Gurubase](https://img.shields.io/badge/Gurubase-Ask%20AutoMQ%20Guru-006BFF)](https://gurubase.io/g/automq)
@@ -23,8 +23,8 @@
</div>
<div align="center">
<img width="97%" alt="automq-solgan" src="https://github.com/user-attachments/assets/bdf6c5f5-7fe1-4004-8e15-54f1aa6bc32f" />
<img width="97%" alt="automq-solgan" src="https://github.com/user-attachments/assets/97fcde87-19ef-42a9-9835-01b63516d497" />
<a href="https://www.youtube.com/watch?v=IB8sh639Rsg" target="_blank">
<img alt="Grab" src="https://github.com/user-attachments/assets/01668da4-3916-4f49-97af-18f91b25f8c1" width="19%" />
@@ -85,14 +85,7 @@
- [Asia's GOAT, Poizon uses AutoMQ Kafka to build observability platform for massive data(30 GB/s)](https://www.automq.com/blog/asiax27s-goat-poizon-uses-automq-kafka-to-build-a-new-generation-observability-platform-for-massive-data?utm_source=github_automq)
- [AutoMQ Helps CaoCao Mobility Address Kafka Scalability During Holidays](https://www.automq.com/blog/automq-helps-caocao-mobility-address-kafka-scalability-issues-during-mid-autumn-and-national-day?utm_source=github_automq)
### Prerequisites
Before running AutoMQ locally, please ensure:
- Docker version 20.x or later
- Docker Compose v2
- At least 4 GB RAM allocated to Docker
- Ports 9092 and 9000 are available on your system
## ⛄ Get started with AutoMQ
> [!Tip]
> Deploying a production-ready AutoMQ cluster is challenging. This Quick Start is only for evaluating AutoMQ features and is not suitable for production use. For production deployment best practices, please [contact](https://www.automq.com/contact) our community for support.
@@ -163,7 +156,7 @@ Star AutoMQ on GitHub for instant updates on new releases.
## 💬 Community
You can join the following groups or channels to discuss or ask questions about AutoMQ:
- Ask questions or report a bug by [GitHub Issues](https://github.com/AutoMQ/automq/issues)
- Discuss AutoMQ or Kafka on [Slack](https://go.automq.com/slack) or in the [WeChat Group](docs/images/automq-wechat.png)
- Discuss AutoMQ or Kafka on [Slack](https://join.slack.com/t/automq/shared_invite/zt-29h17vye9-thf31ebIVL9oXuRdACnOIA) or in the [WeChat Group](docs/images/automq-wechat.png)
## 👥 How to contribute

View File

@@ -1,125 +0,0 @@
# AutoMQ Log Uploader Module
This module provides asynchronous S3 log upload capability based on Log4j 1.x. Other submodules only need to depend on this module and add a small amount of configuration to synchronize their logs to object storage. Core components:
- `com.automq.log.S3RollingFileAppender`: Extends `RollingFileAppender`, pushes log events to the uploader while writing to local files.
- `com.automq.log.uploader.LogUploader`: Asynchronously buffers, compresses, and uploads logs; supports configuration switches and periodic cleanup.
- `com.automq.log.uploader.S3LogConfig`: Interface that abstracts the configuration required for uploading. Implementations must provide cluster ID, node ID, object storage instance, and leadership status.
## Quick Integration
1. Add dependency in your module's `build.gradle`:
```groovy
implementation project(':automq-log-uploader')
```
2. Implement or provide an `S3LogConfig` instance and configure the appender:
```java
// Set up the S3LogConfig through your application
S3LogConfig config = // your S3LogConfig implementation
S3RollingFileAppender.setup(config);
```
3. Reference the appender in `log4j.properties` and attach it to a logger (see the note after this snippet):
```properties
log4j.appender.s3_uploader=com.automq.log.S3RollingFileAppender
log4j.appender.s3_uploader.File=logs/server.log
log4j.appender.s3_uploader.MaxFileSize=100MB
log4j.appender.s3_uploader.MaxBackupIndex=10
log4j.appender.s3_uploader.layout=org.apache.log4j.PatternLayout
log4j.appender.s3_uploader.layout.ConversionPattern=[%d] %p %m (%c)%n
```
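The snippet above only defines the appender; Log4j 1.x will not use it until it is attached to a logger. A minimal sketch, assuming you attach it to the root logger alongside whatever appenders you already have (the `stdout` name below is illustrative):
```properties
# Illustrative: keep your existing appenders and add s3_uploader to the root logger.
log4j.rootLogger=INFO, stdout, s3_uploader
```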
## S3LogConfig Interface
The `S3LogConfig` interface provides the configuration needed for log uploading:
```java
public interface S3LogConfig {
boolean isEnabled(); // Whether S3 upload is enabled
String clusterId(); // Cluster identifier
int nodeId(); // Node identifier
ObjectStorage objectStorage(); // S3 object storage instance
boolean isLeader(); // Whether this node should upload logs
}
```
The upload schedule can be overridden by environment variables:
- `AUTOMQ_OBSERVABILITY_UPLOAD_INTERVAL`: Maximum upload interval (milliseconds).
- `AUTOMQ_OBSERVABILITY_CLEANUP_INTERVAL`: Retention period (milliseconds); objects older than this are cleaned up.
## Implementation Notes
### Leader Selection
The log uploader relies on the `S3LogConfig.isLeader()` method to determine whether the current node should upload logs and perform cleanup tasks. This avoids multiple nodes in a cluster simultaneously executing these operations.
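As a rough illustration only (not the project's actual election logic), an implementation could statically designate one node as the uploader; real deployments would more likely derive this from controller or leader state:
```java
// Hypothetical fragment of an S3LogConfig implementation: treat one statically
// chosen node as the log uploader. A production implementation would typically
// consult cluster/controller state instead of a hard-coded node ID.
@Override
public boolean isLeader() {
    final int designatedUploaderNodeId = 1; // assumption: one uploader per cluster
    return nodeId() == designatedUploaderNodeId;
}
```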
### Object Storage Path
Logs are uploaded to object storage following this path pattern:
```
automq/logs/{clusterId}/{nodeId}/{hour}/{uuid}
```
Where:
- `clusterId` and `nodeId` come from the S3LogConfig
- `hour` is the timestamp hour for log organization
- `uuid` is a unique identifier for each log batch
## Usage Example
Complete example of using the log uploader:
```java
import com.automq.log.S3RollingFileAppender;
import com.automq.log.uploader.S3LogConfig;
import com.automq.stream.s3.operator.ObjectStorage;
// Implement S3LogConfig
public class MyS3LogConfig implements S3LogConfig {
@Override
public boolean isEnabled() {
return true; // Enable S3 upload
}
@Override
public String clusterId() {
return "my-cluster";
}
@Override
public int nodeId() {
return 1;
}
@Override
public ObjectStorage objectStorage() {
// Return your ObjectStorage instance
return myObjectStorage;
}
@Override
public boolean isLeader() {
// Return true if this node should upload logs
return isCurrentNodeLeader();
}
}
// Setup and use
S3LogConfig config = new MyS3LogConfig();
S3RollingFileAppender.setup(config);
// Configure Log4j to use the appender
// The appender will now automatically upload logs to S3
```
## Lifecycle Management
Remember to properly shutdown the log uploader when your application terminates:
```java
// During application shutdown
S3RollingFileAppender.shutdown();
```

View File

@@ -1,105 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.log;
import com.automq.log.uploader.LogRecorder;
import com.automq.log.uploader.LogUploader;
import com.automq.log.uploader.S3LogConfig;
import org.apache.log4j.RollingFileAppender;
import org.apache.log4j.spi.LoggingEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class S3RollingFileAppender extends RollingFileAppender {
private static final Logger LOGGER = LoggerFactory.getLogger(S3RollingFileAppender.class);
private static final Object INIT_LOCK = new Object();
private static volatile LogUploader logUploaderInstance;
private static volatile S3LogConfig s3LogConfig;
public S3RollingFileAppender() {
super();
}
public static void setup(S3LogConfig config) {
s3LogConfig = config;
synchronized (INIT_LOCK) {
if (logUploaderInstance != null) {
return;
}
try {
if (s3LogConfig == null) {
LOGGER.error("No s3LogConfig available; S3 log upload remains disabled.");
throw new RuntimeException("S3 log configuration is missing.");
}
if (!s3LogConfig.isEnabled() || s3LogConfig.objectStorage() == null) {
LOGGER.warn("S3 log upload is disabled by configuration.");
return;
}
LogUploader uploader = new LogUploader();
uploader.start(s3LogConfig);
logUploaderInstance = uploader;
LOGGER.info("S3RollingFileAppender initialized successfully using s3LogConfig {}.", s3LogConfig.getClass().getName());
} catch (Exception e) {
LOGGER.error("Failed to initialize S3RollingFileAppender", e);
throw e;
}
}
}
public static void shutdown() {
if (logUploaderInstance != null) {
synchronized (INIT_LOCK) {
if (logUploaderInstance != null) {
try {
logUploaderInstance.close();
logUploaderInstance = null;
LOGGER.info("S3RollingFileAppender log uploader closed successfully.");
} catch (Exception e) {
LOGGER.error("Failed to close S3RollingFileAppender log uploader", e);
}
}
}
}
}
@Override
protected void subAppend(LoggingEvent event) {
super.subAppend(event);
if (!closed && logUploaderInstance != null) {
LogRecorder.LogEvent logEvent = new LogRecorder.LogEvent(
event.getTimeStamp(),
event.getLevel().toString(),
event.getLoggerName(),
event.getRenderedMessage(),
event.getThrowableStrRep());
try {
logEvent.validate();
logUploaderInstance.append(logEvent);
} catch (IllegalArgumentException e) {
errorHandler.error("Failed to validate and append log event", e, 0);
}
}
}
}

View File

@@ -1,69 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.log.uploader.util;
import com.automq.stream.s3.ByteBufAlloc;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import io.netty.buffer.ByteBuf;
public class Utils {
private Utils() {
}
public static ByteBuf compress(ByteBuf input) throws IOException {
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
try (GZIPOutputStream gzipOutputStream = new GZIPOutputStream(byteArrayOutputStream)) {
byte[] buffer = new byte[input.readableBytes()];
input.readBytes(buffer);
gzipOutputStream.write(buffer);
}
ByteBuf compressed = ByteBufAlloc.byteBuffer(byteArrayOutputStream.size());
compressed.writeBytes(byteArrayOutputStream.toByteArray());
return compressed;
}
public static ByteBuf decompress(ByteBuf input) throws IOException {
byte[] compressedData = new byte[input.readableBytes()];
input.readBytes(compressedData);
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(compressedData);
try (GZIPInputStream gzipInputStream = new GZIPInputStream(byteArrayInputStream);
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = gzipInputStream.read(buffer)) != -1) {
byteArrayOutputStream.write(buffer, 0, bytesRead);
}
byte[] uncompressedData = byteArrayOutputStream.toByteArray();
ByteBuf output = ByteBufAlloc.byteBuffer(uncompressedData.length);
output.writeBytes(uncompressedData);
return output;
}
}
}

View File

@@ -1,459 +0,0 @@
# AutoMQ automq-metrics Module
## Module Structure
```
com.automq.opentelemetry/
├── AutoMQTelemetryManager.java # Main management class for initialization and lifecycle
├── TelemetryConstants.java # Constants definition
├── common/
│ ├── OTLPCompressionType.java # OTLP compression types
│ └── OTLPProtocol.java # OTLP protocol types
├── exporter/
│ ├── MetricsExporter.java # Exporter interface
│ ├── MetricsExportConfig.java # Export configuration
│ ├── MetricsExporterProvider.java # Exporter factory provider
│ ├── MetricsExporterType.java # Exporter type enumeration
│ ├── MetricsExporterURI.java # URI parser for exporters
│ ├── OTLPMetricsExporter.java # OTLP exporter implementation
│ ├── PrometheusMetricsExporter.java # Prometheus exporter implementation
│ └── s3/ # S3 metrics exporter implementation
│ ├── CompressionUtils.java # Utility for data compression
│ ├── PrometheusUtils.java # Utilities for Prometheus format
│ ├── S3MetricsExporter.java # S3 metrics exporter implementation
│ └── S3MetricsExporterAdapter.java # Adapter to handle S3 metrics export
└── yammer/
├── DeltaHistogram.java # Delta histogram implementation
├── OTelMetricUtils.java # OpenTelemetry metrics utilities
├── YammerMetricsProcessor.java # Yammer metrics processor
└── YammerMetricsReporter.java # Yammer metrics reporter
```
The AutoMQ OpenTelemetry module is a telemetry data collection and export component built on the OpenTelemetry SDK and designed specifically for AutoMQ Kafka. It provides unified telemetry management: it collects JVM, JMX, and Yammer metrics and exports them to Prometheus, OTLP-compatible backends, or S3-compatible object storage.
## Core Features
### 1. Metrics Collection
- **JVM Metrics**: Automatically collect JVM runtime metrics including CPU, memory pools, garbage collection, threads, etc.
- **JMX Metrics**: Define and collect JMX Bean metrics through configuration files
- **Yammer Metrics**: Bridge existing Kafka Yammer metrics system to OpenTelemetry
### 2. Multiple Exporter Support
- **Prometheus**: Expose metrics in Prometheus format through HTTP server
- **OTLP**: Support both gRPC and HTTP/Protobuf protocols for exporting to OTLP backends
- **S3**: Export metrics to S3-compatible object storage systems
### 3. Flexible Configuration
- Support parameter settings through Properties configuration files
- Configurable export intervals, compression methods, timeout values, etc.
- Support metric cardinality limits to control memory usage
## Quick Start
### 1. Basic Usage
```java
import com.automq.opentelemetry.AutoMQTelemetryManager;
import com.automq.opentelemetry.exporter.MetricsExportConfig;
// Implement MetricsExportConfig
public class MyMetricsExportConfig implements MetricsExportConfig {
@Override
public String clusterId() { return "my-cluster"; }
@Override
public boolean isLeader() { return true; }
@Override
public int nodeId() { return 1; }
@Override
public ObjectStorage objectStorage() {
// Return your object storage instance for S3 exports
return myObjectStorage;
}
@Override
public List<Pair<String, String>> baseLabels() {
return Arrays.asList(
Pair.of("environment", "production"),
Pair.of("region", "us-east-1")
);
}
@Override
public int intervalMs() { return 60000; } // 60 seconds
}
// Create export configuration
MetricsExportConfig config = new MyMetricsExportConfig();
// Initialize telemetry manager singleton
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"prometheus://localhost:9090", // exporter URI
"automq-kafka", // service name
"broker-1", // instance ID
config // export config
);
// Start Yammer metrics reporting (optional)
MetricsRegistry yammerRegistry = // Get Kafka's Yammer registry
manager.startYammerMetricsReporter(yammerRegistry);
// Application running...
// Shutdown telemetry system
AutoMQTelemetryManager.shutdownInstance();
```
### 2. Get Meter Instance
```java
// Get the singleton instance
AutoMQTelemetryManager manager = AutoMQTelemetryManager.getInstance();
// Get Meter for custom metrics
Meter meter = manager.getMeter();
// Create custom metrics
LongCounter requestCounter = meter
.counterBuilder("http_requests_total")
.setDescription("Total number of HTTP requests")
.build();
requestCounter.add(1, Attributes.of(AttributeKey.stringKey("method"), "GET"));
```
## Configuration
### Basic Configuration
Configuration is provided through the `MetricsExportConfig` interface and constructor parameters:
| Parameter | Description | Example |
|-----------|-------------|---------|
| `exporterUri` | Metrics exporter URI | `prometheus://localhost:9090` |
| `serviceName` | Service name for telemetry | `automq-kafka` |
| `instanceId` | Unique service instance ID | `broker-1` |
| `config` | MetricsExportConfig implementation | See example above |
### Exporter Configuration
All configuration is done through the `MetricsExportConfig` interface and constructor parameters. Export intervals, compression settings, and other options are controlled through:
1. **Exporter URI**: Determines the export destination and protocol
2. **MetricsExportConfig**: Provides cluster information, intervals, and base labels
3. **Constructor parameters**: Service name and instance ID
#### Prometheus Exporter
```java
// Use prometheus:// URI scheme
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"prometheus://localhost:9090",
"automq-kafka",
"broker-1",
config
);
```
#### OTLP Exporter
```java
// Use otlp:// URI scheme with optional query parameters
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"otlp://localhost:4317?protocol=grpc&compression=gzip&timeout=30000",
"automq-kafka",
"broker-1",
config
);
```
#### S3 Metrics Exporter
```java
// Use s3:// URI scheme
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"s3://access-key:secret-key@my-bucket.s3.amazonaws.com",
"automq-kafka",
"broker-1",
config // config.clusterId(), nodeId(), isLeader() used for S3 export
);
```
Example usage with S3 exporter:
```java
// Implementation for S3 export configuration
public class S3MetricsExportConfig implements MetricsExportConfig {
private final ObjectStorage objectStorage;
public S3MetricsExportConfig(ObjectStorage objectStorage) {
this.objectStorage = objectStorage;
}
@Override
public String clusterId() { return "my-kafka-cluster"; }
@Override
public boolean isLeader() {
// Only one node in the cluster should return true
return isCurrentNodeLeader();
}
@Override
public int nodeId() { return 1; }
@Override
public ObjectStorage objectStorage() { return objectStorage; }
@Override
public List<Pair<String, String>> baseLabels() {
return Arrays.asList(Pair.of("environment", "production"));
}
@Override
public int intervalMs() { return 60000; }
}
// Initialize telemetry manager with S3 export
ObjectStorage objectStorage = // Create your object storage instance
MetricsExportConfig config = new S3MetricsExportConfig(objectStorage);
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"s3://access-key:secret-key@my-bucket.s3.amazonaws.com",
"automq-kafka",
"broker-1",
config
);
// Application running...
// Shutdown telemetry system
AutoMQTelemetryManager.shutdownInstance();
```
### JMX Metrics Configuration
Define JMX metrics collection rules through YAML configuration files:
```java
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
exporterUri, serviceName, instanceId, config
);
// Set JMX config paths after initialization
manager.setJmxConfigPaths("/jmx-config.yaml,/kafka-jmx.yaml");
```
#### Configuration File Requirements
1. **Directory Requirements**:
- Configuration files must be placed in the project's classpath (e.g., `src/main/resources` directory)
- Support subdirectory structure, e.g., `/config/jmx-metrics.yaml`
2. **Path Format**:
- Paths must start with `/` to indicate starting from classpath root
- Multiple configuration files separated by commas
3. **File Format**:
- Use YAML format (`.yaml` or `.yml` extension)
- Filenames can be customized, meaningful names are recommended
#### Recommended Directory Structure
```
src/main/resources/
├── jmx-kafka-broker.yaml # Kafka Broker metrics configuration
├── jmx-kafka-consumer.yaml # Kafka Consumer metrics configuration
├── jmx-kafka-producer.yaml # Kafka Producer metrics configuration
└── config/
├── custom-jmx.yaml # Custom JMX metrics configuration
└── third-party-jmx.yaml # Third-party component JMX configuration
```
JMX configuration file example (`jmx-config.yaml`):
```yaml
rules:
- bean: kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
metricAttribute:
name: kafka_server_broker_topic_messages_in_per_sec
description: Messages in per second
unit: "1/s"
attributes:
- name: topic
value: topic
```
## Supported Metric Types
### 1. JVM Metrics
- Memory usage (heap memory, non-heap memory, memory pools)
- CPU usage
- Garbage collection statistics
- Thread states
### 2. Kafka Metrics
Through Yammer metrics bridging, supports the following types of Kafka metrics:
- `BytesInPerSec` - Bytes input per second
- `BytesOutPerSec` - Bytes output per second
- `Size` - Log size (for identifying idle partitions)
### 3. Custom Metrics
Custom metrics can be created through the OpenTelemetry API (see the sketch after this list):
- Counter
- Gauge
- Histogram
- UpDownCounter
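For example, a histogram and an asynchronous gauge can be registered through the standard OpenTelemetry API (a minimal sketch; metric names, attributes, and the `currentInflightRequests()` helper are made up for illustration):
```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;

Meter meter = AutoMQTelemetryManager.getInstance().getMeter();

// Histogram: record request latencies in milliseconds.
DoubleHistogram latency = meter.histogramBuilder("request_latency")
    .setDescription("Request latency")
    .setUnit("ms")
    .build();
latency.record(12.5, Attributes.of(AttributeKey.stringKey("method"), "GET"));

// Asynchronous gauge: the callback is invoked on each metric collection.
// currentInflightRequests() is a hypothetical supplier of the observed value.
meter.gaugeBuilder("inflight_requests")
    .setDescription("Requests currently in flight")
    .buildWithCallback(measurement -> measurement.record(currentInflightRequests()));
```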
## Best Practices
### 1. Production Environment Configuration
```java
public class ProductionMetricsConfig implements MetricsExportConfig {
@Override
public String clusterId() { return "production-cluster"; }
@Override
public boolean isLeader() {
// Implement your leader election logic
return isCurrentNodeController();
}
@Override
public int nodeId() { return getCurrentNodeId(); }
@Override
public ObjectStorage objectStorage() {
return productionObjectStorage;
}
@Override
public List<Pair<String, String>> baseLabels() {
return Arrays.asList(
Pair.of("environment", "production"),
Pair.of("region", System.getenv("AWS_REGION")),
Pair.of("version", getApplicationVersion())
);
}
@Override
public int intervalMs() { return 60000; } // 1 minute
}
// Initialize for production
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"prometheus://0.0.0.0:9090", // Or S3 URI for object storage export
"automq-kafka",
System.getenv("HOSTNAME"),
new ProductionMetricsConfig()
);
```
### 2. Development Environment Configuration
```java
public class DevelopmentMetricsConfig implements MetricsExportConfig {
@Override
public String clusterId() { return "dev-cluster"; }
@Override
public boolean isLeader() { return true; } // Single node in dev
@Override
public int nodeId() { return 1; }
@Override
public ObjectStorage objectStorage() { return null; } // Not needed for OTLP
@Override
public List<Pair<String, String>> baseLabels() {
return Arrays.asList(Pair.of("environment", "development"));
}
@Override
public int intervalMs() { return 10000; } // 10 seconds for faster feedback
}
// Initialize for development
AutoMQTelemetryManager manager = AutoMQTelemetryManager.initializeInstance(
"otlp://localhost:4317",
"automq-kafka-dev",
"local-dev",
new DevelopmentMetricsConfig()
);
```
### 3. Resource Management
- Set appropriate metric cardinality limits to avoid memory leaks
- Call `shutdown()` method when application closes to release resources
- Monitor exporter health status
## Troubleshooting
### Common Issues
1. **Metrics not exported**
- Check if exporter URI passed to `initializeInstance()` is correct
- Verify target endpoint is reachable
- Check error messages in logs
- Ensure `MetricsExportConfig.intervalMs()` returns reasonable value
2. **JMX metrics missing**
- Confirm JMX configuration file path set via `setJmxConfigPaths()` is correct
- Check YAML configuration file format
- Verify JMX Bean exists
- Ensure files are in classpath
3. **High memory usage**
- Lower the metric cardinality limit via `AutoMQTelemetryManager#setMetricCardinalityLimit()` (see the sketch after this list)
- Check for high cardinality labels in `baseLabels()`
- Consider increasing export interval via `intervalMs()`
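A minimal sketch of tuning the limit (the setter exists on `AutoMQTelemetryManager`; the value below is only an example):
```java
// Assumes the telemetry manager was already initialized via initializeInstance().
// Lower the per-instrument cardinality limit (default 20000, see TelemetryConstants)
// to reduce memory usage from high-cardinality metrics.
AutoMQTelemetryManager manager = AutoMQTelemetryManager.getInstance();
manager.setMetricCardinalityLimit(5000);
```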
### Logging Configuration
Enable debug logging for more information using your logging framework configuration (e.g., logback.xml, log4j2.xml):
```xml
<!-- For Logback -->
<logger name="com.automq.opentelemetry" level="DEBUG" />
<logger name="io.opentelemetry" level="INFO" />
```
## Dependencies
- Java 8+
- OpenTelemetry SDK 1.30+
- Apache Commons Lang3
- SLF4J logging framework
## License
This module is open source under the Apache License 2.0.

View File

@@ -1,330 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry;
import com.automq.opentelemetry.exporter.MetricsExportConfig;
import com.automq.opentelemetry.exporter.MetricsExporter;
import com.automq.opentelemetry.exporter.MetricsExporterURI;
import com.automq.opentelemetry.yammer.YammerMetricsReporter;
import com.yammer.metrics.core.MetricsRegistry;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.tuple.Pair;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.bridge.SLF4JBridgeHandler;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.baggage.propagation.W3CBaggagePropagator;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.common.AttributesBuilder;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.context.propagation.TextMapPropagator;
import io.opentelemetry.instrumentation.jmx.engine.JmxMetricInsight;
import io.opentelemetry.instrumentation.jmx.engine.MetricConfiguration;
import io.opentelemetry.instrumentation.jmx.yaml.RuleParser;
import io.opentelemetry.instrumentation.runtimemetrics.java8.Cpu;
import io.opentelemetry.instrumentation.runtimemetrics.java8.GarbageCollector;
import io.opentelemetry.instrumentation.runtimemetrics.java8.MemoryPools;
import io.opentelemetry.instrumentation.runtimemetrics.java8.Threads;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.SdkMeterProviderBuilder;
import io.opentelemetry.sdk.metrics.export.MetricReader;
import io.opentelemetry.sdk.metrics.internal.SdkMeterProviderUtil;
import io.opentelemetry.sdk.resources.Resource;
/**
* The main manager for AutoMQ telemetry.
* This class is responsible for initializing, configuring, and managing the lifecycle of all
* telemetry components, including the OpenTelemetry SDK, metric exporters, and various metric sources.
*/
public class AutoMQTelemetryManager {
private static final Logger LOGGER = LoggerFactory.getLogger(AutoMQTelemetryManager.class);
// Singleton instance support
private static volatile AutoMQTelemetryManager instance;
private static final Object LOCK = new Object();
private final String exporterUri;
private final String serviceName;
private final String instanceId;
private final MetricsExportConfig metricsExportConfig;
private final List<MetricReader> metricReaders = new ArrayList<>();
private final List<AutoCloseable> autoCloseableList;
private OpenTelemetrySdk openTelemetrySdk;
private YammerMetricsReporter yammerReporter;
private int metricCardinalityLimit = TelemetryConstants.DEFAULT_METRIC_CARDINALITY_LIMIT;
private String jmxConfigPath;
/**
* Constructs a new Telemetry Manager with the given configuration.
*
* @param exporterUri The metrics exporter URI.
* @param serviceName The service name to be used in telemetry data.
* @param instanceId The unique instance ID for this service instance.
* @param metricsExportConfig The metrics configuration.
*/
public AutoMQTelemetryManager(String exporterUri, String serviceName, String instanceId, MetricsExportConfig metricsExportConfig) {
this.exporterUri = exporterUri;
this.serviceName = serviceName;
this.instanceId = instanceId;
this.metricsExportConfig = metricsExportConfig;
this.autoCloseableList = new ArrayList<>();
// Redirect JUL from OpenTelemetry SDK to SLF4J for unified logging
SLF4JBridgeHandler.removeHandlersForRootLogger();
SLF4JBridgeHandler.install();
}
/**
* Gets the singleton instance of AutoMQTelemetryManager.
* Returns null if no instance has been initialized.
*
* @return the singleton instance, or null if not initialized
*/
public static AutoMQTelemetryManager getInstance() {
return instance;
}
/**
* Initializes the singleton instance with the given configuration.
* This method should be called before any other components try to access the instance.
*
* @param exporterUri The metrics exporter URI.
* @param serviceName The service name to be used in telemetry data.
* @param instanceId The unique instance ID for this service instance.
* @param metricsExportConfig The metrics configuration.
* @return the initialized singleton instance
*/
public static AutoMQTelemetryManager initializeInstance(String exporterUri, String serviceName, String instanceId, MetricsExportConfig metricsExportConfig) {
if (instance == null) {
synchronized (LOCK) {
if (instance == null) {
AutoMQTelemetryManager newInstance = new AutoMQTelemetryManager(exporterUri, serviceName, instanceId, metricsExportConfig);
newInstance.init();
instance = newInstance;
LOGGER.info("AutoMQTelemetryManager singleton instance initialized");
}
}
}
return instance;
}
/**
* Shuts down the singleton instance and releases all resources.
*/
public static void shutdownInstance() {
if (instance != null) {
synchronized (LOCK) {
if (instance != null) {
instance.shutdown();
instance = null;
LOGGER.info("AutoMQTelemetryManager singleton instance shutdown");
}
}
}
}
/**
* Initializes the telemetry system. This method sets up the OpenTelemetry SDK,
* configures exporters, and registers JVM and JMX metrics.
*/
public void init() {
SdkMeterProvider meterProvider = buildMeterProvider();
this.openTelemetrySdk = OpenTelemetrySdk.builder()
.setMeterProvider(meterProvider)
.setPropagators(ContextPropagators.create(TextMapPropagator.composite(
W3CTraceContextPropagator.getInstance(), W3CBaggagePropagator.getInstance())))
.buildAndRegisterGlobal();
// Register JVM and JMX metrics
registerJvmMetrics(openTelemetrySdk);
registerJmxMetrics(openTelemetrySdk);
LOGGER.info("AutoMQ Telemetry Manager initialized successfully.");
}
private SdkMeterProvider buildMeterProvider() {
String hostName;
try {
hostName = InetAddress.getLocalHost().getHostName();
} catch (UnknownHostException e) {
hostName = "unknown-host";
}
AttributesBuilder attrsBuilder = Attributes.builder()
.put(TelemetryConstants.SERVICE_NAME_KEY, serviceName)
.put(TelemetryConstants.SERVICE_INSTANCE_ID_KEY, instanceId)
.put(TelemetryConstants.HOST_NAME_KEY, hostName)
// Add attributes for Prometheus compatibility
.put(TelemetryConstants.PROMETHEUS_JOB_KEY, serviceName)
.put(TelemetryConstants.PROMETHEUS_INSTANCE_KEY, instanceId);
for (Pair<String, String> label : metricsExportConfig.baseLabels()) {
attrsBuilder.put(label.getKey(), label.getValue());
}
Resource resource = Resource.getDefault().merge(Resource.create(attrsBuilder.build()));
SdkMeterProviderBuilder meterProviderBuilder = SdkMeterProvider.builder().setResource(resource);
// Configure exporters from URI
MetricsExporterURI exporterURI = buildMetricsExporterURI(exporterUri, metricsExportConfig);
for (MetricsExporter exporter : exporterURI.getMetricsExporters()) {
MetricReader reader = exporter.asMetricReader();
metricReaders.add(reader);
SdkMeterProviderUtil.registerMetricReaderWithCardinalitySelector(meterProviderBuilder, reader,
instrumentType -> metricCardinalityLimit);
}
return meterProviderBuilder.build();
}
protected MetricsExporterURI buildMetricsExporterURI(String exporterUri, MetricsExportConfig metricsExportConfig) {
return MetricsExporterURI.parse(exporterUri, metricsExportConfig);
}
private void registerJvmMetrics(OpenTelemetry openTelemetry) {
autoCloseableList.addAll(MemoryPools.registerObservers(openTelemetry));
autoCloseableList.addAll(Cpu.registerObservers(openTelemetry));
autoCloseableList.addAll(GarbageCollector.registerObservers(openTelemetry));
autoCloseableList.addAll(Threads.registerObservers(openTelemetry));
LOGGER.info("JVM metrics registered.");
}
@SuppressWarnings({"NP_LOAD_OF_KNOWN_NULL_VALUE", "RCN_REDUNDANT_NULLCHECK_OF_NULL_VALUE"})
private void registerJmxMetrics(OpenTelemetry openTelemetry) {
List<String> jmxConfigPaths = getJmxConfigPaths();
if (jmxConfigPaths.isEmpty()) {
LOGGER.info("No JMX metric config paths provided, skipping JMX metrics registration.");
return;
}
JmxMetricInsight jmxMetricInsight = JmxMetricInsight.createService(openTelemetry, metricsExportConfig.intervalMs());
MetricConfiguration metricConfig = new MetricConfiguration();
for (String path : jmxConfigPaths) {
try (InputStream ins = this.getClass().getResourceAsStream(path)) {
if (ins == null) {
LOGGER.error("JMX config file not found in classpath: {}", path);
continue;
}
RuleParser parser = RuleParser.get();
parser.addMetricDefsTo(metricConfig, ins, path);
} catch (Exception e) {
LOGGER.error("Failed to parse JMX config file: {}", path, e);
}
}
jmxMetricInsight.start(metricConfig);
// JmxMetricInsight doesn't implement Closeable, but we can create a wrapper
LOGGER.info("JMX metrics registered with config paths: {}", jmxConfigPaths);
}
public List<String> getJmxConfigPaths() {
if (StringUtils.isEmpty(jmxConfigPath)) {
return Collections.emptyList();
}
return Stream.of(jmxConfigPath.split(","))
.map(String::trim)
.filter(s -> !s.isEmpty())
.collect(Collectors.toList());
}
/**
* Starts reporting metrics from a given Yammer MetricsRegistry.
*
* @param registry The Yammer registry to bridge metrics from.
*/
public void startYammerMetricsReporter(MetricsRegistry registry) {
if (this.openTelemetrySdk == null) {
throw new IllegalStateException("TelemetryManager is not initialized. Call init() first.");
}
if (registry == null) {
LOGGER.warn("Yammer MetricsRegistry is null, skipping reporter start.");
return;
}
this.yammerReporter = new YammerMetricsReporter(registry);
this.yammerReporter.start(getMeter());
}
public void shutdown() {
autoCloseableList.forEach(autoCloseable -> {
try {
autoCloseable.close();
} catch (Exception e) {
LOGGER.error("Failed to close auto closeable", e);
}
});
metricReaders.forEach(metricReader -> {
metricReader.forceFlush();
try {
metricReader.close();
} catch (IOException e) {
LOGGER.error("Failed to close metric reader", e);
}
});
if (openTelemetrySdk != null) {
openTelemetrySdk.close();
}
}
/**
* get YammerMetricsReporter instance.
*
* @return The YammerMetricsReporter instance.
*/
public YammerMetricsReporter getYammerReporter() {
return this.yammerReporter;
}
public void setMetricCardinalityLimit(int limit) {
this.metricCardinalityLimit = limit;
}
public void setJmxConfigPaths(String jmxConfigPaths) {
this.jmxConfigPath = jmxConfigPaths;
}
/**
* Gets the default meter from the initialized OpenTelemetry SDK.
*
* @return The meter instance.
*/
public Meter getMeter() {
if (this.openTelemetrySdk == null) {
throw new IllegalStateException("TelemetryManager is not initialized. Call init() first.");
}
return this.openTelemetrySdk.getMeter(TelemetryConstants.TELEMETRY_SCOPE_NAME);
}
}

View File

@@ -1,54 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry;
import io.opentelemetry.api.common.AttributeKey;
/**
* Constants for telemetry, including configuration keys, attribute keys, and default values.
*/
public class TelemetryConstants {
//################################################################
// Service and Resource Attributes
//################################################################
public static final String SERVICE_NAME_KEY = "service.name";
public static final String SERVICE_INSTANCE_ID_KEY = "service.instance.id";
public static final String HOST_NAME_KEY = "host.name";
public static final String TELEMETRY_SCOPE_NAME = "automq_for_kafka";
/**
* The cardinality limit for any single metric.
*/
public static final String METRIC_CARDINALITY_LIMIT_KEY = "automq.telemetry.metric.cardinality.limit";
public static final int DEFAULT_METRIC_CARDINALITY_LIMIT = 20000;
//################################################################
// Prometheus specific Attributes, for compatibility
//################################################################
public static final String PROMETHEUS_JOB_KEY = "job";
public static final String PROMETHEUS_INSTANCE_KEY = "instance";
//################################################################
// Custom Kafka-related Attribute Keys
//################################################################
public static final AttributeKey<Long> START_OFFSET_KEY = AttributeKey.longKey("startOffset");
public static final AttributeKey<Long> END_OFFSET_KEY = AttributeKey.longKey("endOffset");
}

View File

@@ -1,68 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry.exporter;
import com.automq.stream.s3.operator.ObjectStorage;
import org.apache.commons.lang3.tuple.Pair;
import java.util.List;
/**
* Configuration interface for metrics exporter.
*/
public interface MetricsExportConfig {
/**
* Get the cluster ID.
* @return The cluster ID.
*/
String clusterId();
/**
* Check if the current node is a primary node for metrics upload.
* @return True if the current node should upload metrics, false otherwise.
*/
boolean isLeader();
/**
* Get the node ID.
* @return The node ID.
*/
int nodeId();
/**
* Get the object storage instance.
* @return The object storage instance.
*/
ObjectStorage objectStorage();
/**
* Get the base labels to include in all metrics.
* @return The base labels.
*/
List<Pair<String, String>> baseLabels();
/**
* Get the interval in milliseconds for metrics export.
* @return The interval in milliseconds.
*/
int intervalMs();
}

View File

@@ -1,47 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry.exporter;
import java.net.URI;
import java.util.List;
import java.util.Map;
/**
* Service Provider Interface that allows extending the available metrics exporters
* without modifying the core AutoMQ OpenTelemetry module.
*/
public interface MetricsExporterProvider {
/**
* @param scheme exporter scheme (e.g. "rw")
* @return true if this provider can create an exporter for the supplied scheme
*/
boolean supports(String scheme);
/**
* Creates a metrics exporter for the provided URI.
*
* @param config metrics configuration
* @param uri original exporter URI
* @param queryParameters parsed query parameters from the URI
* @return a MetricsExporter instance, or {@code null} if unable to create one
*/
MetricsExporter create(MetricsExportConfig config, URI uri, Map<String, List<String>> queryParameters);
}

View File

@@ -1,220 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry.exporter;
import com.automq.opentelemetry.common.OTLPCompressionType;
import com.automq.opentelemetry.common.OTLPProtocol;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.URI;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.ServiceLoader;
/**
* Parses the exporter URI and creates the corresponding MetricsExporter instances.
*/
public class MetricsExporterURI {
private static final Logger LOGGER = LoggerFactory.getLogger(MetricsExporterURI.class);
private static final List<MetricsExporterProvider> PROVIDERS;
static {
List<MetricsExporterProvider> providers = new ArrayList<>();
ServiceLoader.load(MetricsExporterProvider.class).forEach(providers::add);
PROVIDERS = Collections.unmodifiableList(providers);
if (!PROVIDERS.isEmpty()) {
LOGGER.info("Loaded {} telemetry exporter providers", PROVIDERS.size());
}
}
private final List<MetricsExporter> metricsExporters;
private MetricsExporterURI(List<MetricsExporter> metricsExporters) {
this.metricsExporters = metricsExporters != null ? metricsExporters : new ArrayList<>();
}
public List<MetricsExporter> getMetricsExporters() {
return metricsExporters;
}
public static MetricsExporterURI parse(String uriStr, MetricsExportConfig config) {
LOGGER.info("Parsing metrics exporter URI: {}", uriStr);
if (StringUtils.isBlank(uriStr)) {
LOGGER.info("Metrics exporter URI is not configured, no metrics will be exported.");
return new MetricsExporterURI(Collections.emptyList());
}
// Support multiple exporters separated by comma
String[] exporterUris = uriStr.split(",");
if (exporterUris.length == 0) {
return new MetricsExporterURI(Collections.emptyList());
}
List<MetricsExporter> exporters = new ArrayList<>();
for (String uri : exporterUris) {
if (StringUtils.isBlank(uri)) {
continue;
}
MetricsExporter exporter = parseExporter(config, uri.trim());
if (exporter != null) {
exporters.add(exporter);
}
}
return new MetricsExporterURI(exporters);
}
public static MetricsExporter parseExporter(MetricsExportConfig config, String uriStr) {
try {
URI uri = new URI(uriStr);
String type = uri.getScheme();
if (StringUtils.isBlank(type)) {
LOGGER.error("Invalid metrics exporter URI: {}, exporter scheme is missing", uriStr);
throw new IllegalArgumentException("Invalid metrics exporter URI: " + uriStr);
}
Map<String, List<String>> queries = parseQueryParameters(uri);
return parseExporter(config, type, queries, uri);
} catch (Exception e) {
LOGGER.warn("Parse metrics exporter URI {} failed", uriStr, e);
throw new IllegalArgumentException("Invalid metrics exporter URI: " + uriStr, e);
}
}
public static MetricsExporter parseExporter(MetricsExportConfig config, String type, Map<String, List<String>> queries, URI uri) {
MetricsExporterType exporterType = MetricsExporterType.fromString(type);
switch (exporterType) {
case PROMETHEUS:
return buildPrometheusExporter(config, queries, uri);
case OTLP:
return buildOtlpExporter(config, queries, uri);
case OPS:
return buildS3MetricsExporter(config, uri);
default:
break;
}
MetricsExporterProvider provider = findProvider(type);
if (provider != null) {
MetricsExporter exporter = provider.create(config, uri, queries);
if (exporter != null) {
return exporter;
}
}
LOGGER.warn("Unsupported metrics exporter type: {}", type);
return null;
}
private static MetricsExporter buildPrometheusExporter(MetricsExportConfig config, Map<String, List<String>> queries, URI uri) {
// Use query parameters if available, otherwise fall back to URI authority or config defaults
String host = getStringFromQuery(queries, "host", uri.getHost());
if (StringUtils.isBlank(host)) {
host = "localhost";
}
int port = uri.getPort();
if (port <= 0) {
String portStr = getStringFromQuery(queries, "port", null);
if (StringUtils.isNotBlank(portStr)) {
try {
port = Integer.parseInt(portStr);
} catch (NumberFormatException e) {
LOGGER.warn("Invalid port in query parameters: {}, using default", portStr);
port = 9090;
}
} else {
port = 9090;
}
}
return new PrometheusMetricsExporter(host, port, config.baseLabels());
}
private static MetricsExporter buildOtlpExporter(MetricsExportConfig config, Map<String, List<String>> queries, URI uri) {
// Get endpoint from query parameters or construct from URI
String endpoint = getStringFromQuery(queries, "endpoint", null);
if (StringUtils.isBlank(endpoint)) {
endpoint = uri.getScheme() + "://" + uri.getAuthority();
}
// Get protocol from query parameters or config
String protocol = getStringFromQuery(queries, "protocol", OTLPProtocol.GRPC.getProtocol());
// Get compression from query parameters or config
String compression = getStringFromQuery(queries, "compression", OTLPCompressionType.NONE.getType());
return new OTLPMetricsExporter(config.intervalMs(), endpoint, protocol, compression);
}
private static MetricsExporter buildS3MetricsExporter(MetricsExportConfig config, URI uri) {
LOGGER.info("Creating S3 metrics exporter from URI: {}", uri);
if (config.objectStorage() == null) {
LOGGER.warn("No object storage configured, skip s3 metrics exporter creation.");
return null;
}
// Create the S3MetricsExporterAdapter with appropriate configuration
return new com.automq.opentelemetry.exporter.s3.S3MetricsExporterAdapter(config);
}
private static Map<String, List<String>> parseQueryParameters(URI uri) {
Map<String, List<String>> queries = new HashMap<>();
String query = uri.getQuery();
if (StringUtils.isNotBlank(query)) {
String[] pairs = query.split("&");
for (String pair : pairs) {
String[] keyValue = pair.split("=", 2);
if (keyValue.length == 2) {
String key = keyValue[0];
String value = keyValue[1];
queries.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
}
}
}
return queries;
}
private static String getStringFromQuery(Map<String, List<String>> queries, String key, String defaultValue) {
List<String> values = queries.get(key);
if (values != null && !values.isEmpty()) {
return values.get(0);
}
return defaultValue;
}
private static MetricsExporterProvider findProvider(String scheme) {
for (MetricsExporterProvider provider : PROVIDERS) {
try {
if (provider.supports(scheme)) {
return provider;
}
} catch (Exception e) {
LOGGER.warn("Telemetry exporter provider {} failed to evaluate support for scheme {}", provider.getClass().getName(), scheme, e);
}
}
return null;
}
}

View File

@@ -1,86 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry.exporter.s3;
import com.automq.stream.s3.ByteBufAlloc;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import io.netty.buffer.ByteBuf;
/**
* Utility class for data compression and decompression.
*/
public class CompressionUtils {
/**
* Compress a ByteBuf using GZIP.
*
* @param input The input ByteBuf to compress.
* @return A new ByteBuf containing the compressed data.
* @throws IOException If an I/O error occurs during compression.
*/
public static ByteBuf compress(ByteBuf input) throws IOException {
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
GZIPOutputStream gzipOutputStream = new GZIPOutputStream(byteArrayOutputStream);
byte[] buffer = new byte[input.readableBytes()];
input.readBytes(buffer);
gzipOutputStream.write(buffer);
gzipOutputStream.close();
ByteBuf compressed = ByteBufAlloc.byteBuffer(byteArrayOutputStream.size());
compressed.writeBytes(byteArrayOutputStream.toByteArray());
return compressed;
}
/**
* Decompress a GZIP-compressed ByteBuf.
*
* @param input The compressed ByteBuf to decompress.
* @return A new ByteBuf containing the decompressed data.
* @throws IOException If an I/O error occurs during decompression.
*/
public static ByteBuf decompress(ByteBuf input) throws IOException {
byte[] compressedData = new byte[input.readableBytes()];
input.readBytes(compressedData);
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(compressedData);
GZIPInputStream gzipInputStream = new GZIPInputStream(byteArrayInputStream);
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = gzipInputStream.read(buffer)) != -1) {
byteArrayOutputStream.write(buffer, 0, bytesRead);
}
gzipInputStream.close();
byteArrayOutputStream.close();
byte[] uncompressedData = byteArrayOutputStream.toByteArray();
ByteBuf output = ByteBufAlloc.byteBuffer(uncompressedData.length);
output.writeBytes(uncompressedData);
return output;
}
}

View File

@@ -1,276 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry.exporter.s3;
import org.apache.commons.lang3.StringUtils;
import java.util.Locale;
/**
* Utility class for Prometheus metric and label naming.
*/
public class PrometheusUtils {
private static final String TOTAL_SUFFIX = "_total";
/**
* Get the Prometheus unit from the OpenTelemetry unit.
*
* @param unit The OpenTelemetry unit.
* @return The Prometheus unit.
*/
public static String getPrometheusUnit(String unit) {
if (unit.contains("{")) {
return "";
}
switch (unit) {
// Time
case "d":
return "days";
case "h":
return "hours";
case "min":
return "minutes";
case "s":
return "seconds";
case "ms":
return "milliseconds";
case "us":
return "microseconds";
case "ns":
return "nanoseconds";
// Bytes
case "By":
return "bytes";
case "KiBy":
return "kibibytes";
case "MiBy":
return "mebibytes";
case "GiBy":
return "gibibytes";
case "TiBy":
return "tibibytes";
case "KBy":
return "kilobytes";
case "MBy":
return "megabytes";
case "GBy":
return "gigabytes";
case "TBy":
return "terabytes";
// SI
case "m":
return "meters";
case "V":
return "volts";
case "A":
return "amperes";
case "J":
return "joules";
case "W":
return "watts";
case "g":
return "grams";
// Misc
case "Cel":
return "celsius";
case "Hz":
return "hertz";
case "1":
return "";
case "%":
return "percent";
// Rate units (per second)
case "1/s":
return "per_second";
case "By/s":
return "bytes_per_second";
case "KiBy/s":
return "kibibytes_per_second";
case "MiBy/s":
return "mebibytes_per_second";
case "GiBy/s":
return "gibibytes_per_second";
case "KBy/s":
return "kilobytes_per_second";
case "MBy/s":
return "megabytes_per_second";
case "GBy/s":
return "gigabytes_per_second";
// Rate units (per minute)
case "1/min":
return "per_minute";
case "By/min":
return "bytes_per_minute";
// Rate units (per hour)
case "1/h":
return "per_hour";
case "By/h":
return "bytes_per_hour";
// Rate units (per day)
case "1/d":
return "per_day";
case "By/d":
return "bytes_per_day";
default:
return unit;
}
}
/**
* Map a metric name to a Prometheus-compatible name.
*
* @param name The original metric name.
* @param unit The metric unit.
* @param isCounter Whether the metric is a counter.
* @param isGauge Whether the metric is a gauge.
* @return The Prometheus-compatible metric name.
*/
public static String mapMetricsName(String name, String unit, boolean isCounter, boolean isGauge) {
// Replace "." into "_"
name = name.replaceAll("\\.", "_");
String prometheusUnit = getPrometheusUnit(unit);
boolean shouldAppendUnit = StringUtils.isNotBlank(prometheusUnit) && !name.contains(prometheusUnit);
// append prometheus unit if not null or empty.
// unit should be appended before type suffix
if (shouldAppendUnit) {
name = name + "_" + prometheusUnit;
}
// trim counter's _total suffix so the unit is placed before it.
if (isCounter && name.endsWith(TOTAL_SUFFIX)) {
name = name.substring(0, name.length() - TOTAL_SUFFIX.length());
}
// replace _total suffix, or add if it wasn't already present.
if (isCounter) {
name = name + TOTAL_SUFFIX;
}
// special case - gauge with intelligent Connect metric handling
if ("1".equals(unit) && isGauge && !name.contains("ratio")) {
if (isConnectMetric(name)) {
// For Connect metrics, use improved logic to avoid misleading _ratio suffix
if (shouldAddRatioSuffixForConnect(name)) {
name = name + "_ratio";
}
} else {
// For other metrics, maintain original behavior
name = name + "_ratio";
}
}
return name;
}
/**
* Map a label name to a Prometheus-compatible name.
*
* @param name The original label name.
* @return The Prometheus-compatible label name.
*/
public static String mapLabelName(String name) {
if (StringUtils.isBlank(name)) {
return "";
}
return name.replaceAll("\\.", "_");
}
/**
* Check if a metric name is related to Kafka Connect.
*
* @param name The metric name to check.
* @return true if it's a Connect metric, false otherwise.
*/
private static boolean isConnectMetric(String name) {
String lowerName = name.toLowerCase(Locale.ROOT);
return lowerName.contains("kafka_connector_") ||
lowerName.contains("kafka_task_") ||
lowerName.contains("kafka_worker_") ||
lowerName.contains("kafka_connect_") ||
lowerName.contains("kafka_source_task_") ||
lowerName.contains("kafka_sink_task_") ||
lowerName.contains("connector_metrics") ||
lowerName.contains("task_metrics") ||
lowerName.contains("worker_metrics") ||
lowerName.contains("source_task_metrics") ||
lowerName.contains("sink_task_metrics");
}
/**
* Intelligently determine if a Connect metric should have a _ratio suffix.
* This method avoids adding misleading _ratio suffixes to count-based metrics.
*
* @param name The metric name to check.
* @return true if _ratio suffix should be added, false otherwise.
*/
private static boolean shouldAddRatioSuffixForConnect(String name) {
String lowerName = name.toLowerCase(Locale.ROOT);
if (hasRatioRelatedWords(lowerName)) {
return false;
}
if (isCountMetric(lowerName)) {
return false;
}
return isRatioMetric(lowerName);
}
private static boolean hasRatioRelatedWords(String lowerName) {
return lowerName.contains("ratio") || lowerName.contains("percent") ||
lowerName.contains("rate") || lowerName.contains("fraction");
}
private static boolean isCountMetric(String lowerName) {
return hasBasicCountKeywords(lowerName) || hasConnectCountKeywords(lowerName) ||
hasStatusCountKeywords(lowerName);
}
private static boolean hasBasicCountKeywords(String lowerName) {
return lowerName.contains("count") || lowerName.contains("num") ||
lowerName.contains("size") || lowerName.contains("total") ||
lowerName.contains("active") || lowerName.contains("current");
}
private static boolean hasConnectCountKeywords(String lowerName) {
return lowerName.contains("partition") || lowerName.contains("task") ||
lowerName.contains("connector") || lowerName.contains("seq_no") ||
lowerName.contains("seq_num") || lowerName.contains("attempts");
}
private static boolean hasStatusCountKeywords(String lowerName) {
return lowerName.contains("success") || lowerName.contains("failure") ||
lowerName.contains("errors") || lowerName.contains("retries") ||
lowerName.contains("skipped") || lowerName.contains("running") ||
lowerName.contains("paused") || lowerName.contains("failed") ||
lowerName.contains("destroyed");
}
private static boolean isRatioMetric(String lowerName) {
return lowerName.contains("utilization") ||
lowerName.contains("usage") ||
lowerName.contains("load") ||
lowerName.contains("efficiency") ||
lowerName.contains("hit_rate") ||
lowerName.contains("miss_rate");
}
}
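Because the Connect-aware branch is easy to misread, a few concrete mappings help. The calls below are illustrative only and simply follow the branches shown above.

```java
import com.automq.opentelemetry.exporter.s3.PrometheusUtils;

public class PrometheusNamingDemo {
    public static void main(String[] args) {
        // monotonic sum with a byte unit: the unit is inserted before the _total suffix
        System.out.println(PrometheusUtils.mapMetricsName("kafka.network.io", "By", true, false));
        // -> kafka_network_io_bytes_total

        // plain dimensionless gauge: "_ratio" is appended
        System.out.println(PrometheusUtils.mapMetricsName("buffer.utilization", "1", false, true));
        // -> buffer_utilization_ratio

        // Connect count-style gauge: the misleading "_ratio" suffix is skipped
        System.out.println(PrometheusUtils.mapMetricsName("kafka.connect.worker.task.count", "1", false, true));
        // -> kafka_connect_worker_task_count
    }
}
```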

View File

@ -1,63 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.opentelemetry.exporter.s3;
import com.automq.opentelemetry.exporter.MetricsExportConfig;
import com.automq.opentelemetry.exporter.MetricsExporter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration;
import io.opentelemetry.sdk.metrics.export.MetricReader;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
/**
* An adapter class that implements the MetricsExporter interface and uses S3MetricsExporter
* for actual metrics exporting functionality.
*/
public class S3MetricsExporterAdapter implements MetricsExporter {
private static final Logger LOGGER = LoggerFactory.getLogger(S3MetricsExporterAdapter.class);
private final MetricsExportConfig metricsExportConfig;
/**
* Creates a new S3MetricsExporterAdapter.
*
* @param metricsExportConfig The configuration for the S3 metrics exporter.
*/
public S3MetricsExporterAdapter(MetricsExportConfig metricsExportConfig) {
this.metricsExportConfig = metricsExportConfig;
LOGGER.info("S3MetricsExporterAdapter initialized with labels :{}", metricsExportConfig.baseLabels());
}
@Override
public MetricReader asMetricReader() {
// Create and start the S3MetricsExporter
S3MetricsExporter s3MetricsExporter = new S3MetricsExporter(metricsExportConfig);
s3MetricsExporter.start();
// Create and return the periodic metric reader
return PeriodicMetricReader.builder(s3MetricsExporter)
.setInterval(Duration.ofMillis(metricsExportConfig.intervalMs()))
.build();
}
}
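For context on how the adapter is consumed, the sketch below wires it into an SDK meter provider. It assumes a `MetricsExportConfig` built elsewhere, since its construction is not part of this diff.

```java
import com.automq.opentelemetry.exporter.MetricsExportConfig;
import com.automq.opentelemetry.exporter.s3.S3MetricsExporterAdapter;

import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.MetricReader;

public class S3MetricsWiring {
    // Builds a meter provider whose metrics are periodically flushed to S3.
    public static SdkMeterProvider build(MetricsExportConfig config) {
        MetricReader reader = new S3MetricsExporterAdapter(config).asMetricReader();
        return SdkMeterProvider.builder()
                .registerMetricReader(reader)
                .build();
    }
}
```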

View File

@ -18,8 +18,7 @@ dependencies {
compileOnly libs.awsSdkAuth
implementation libs.reload4j
implementation libs.nettyBuffer
implementation project(':automq-metrics')
implementation project(':automq-log-uploader')
implementation libs.opentelemetrySdk
implementation libs.jacksonDatabind
implementation libs.jacksonYaml
implementation libs.commonLang
@ -66,4 +65,4 @@ jar {
manifest {
attributes 'Main-Class': 'com.automq.shell.AutoMQCLI'
}
}
}

View File

@ -17,7 +17,7 @@
* limitations under the License.
*/
package com.automq.log.uploader;
package com.automq.shell.log;
import org.apache.commons.lang3.StringUtils;

View File

@ -17,9 +17,10 @@
* limitations under the License.
*/
package com.automq.log.uploader;
package com.automq.shell.log;
import com.automq.log.uploader.util.Utils;
import com.automq.shell.AutoMQApplication;
import com.automq.shell.util.Utils;
import com.automq.stream.s3.operator.ObjectStorage;
import com.automq.stream.s3.operator.ObjectStorage.ObjectInfo;
import com.automq.stream.s3.operator.ObjectStorage.ObjectPath;
@ -54,14 +55,12 @@ public class LogUploader implements LogRecorder {
public static final int DEFAULT_MAX_QUEUE_SIZE = 64 * 1024;
public static final int DEFAULT_BUFFER_SIZE = 16 * 1024 * 1024;
public static final int UPLOAD_INTERVAL = System.getenv("AUTOMQ_OBSERVABILITY_UPLOAD_INTERVAL") != null
? Integer.parseInt(System.getenv("AUTOMQ_OBSERVABILITY_UPLOAD_INTERVAL"))
: 60 * 1000;
public static final int CLEANUP_INTERVAL = System.getenv("AUTOMQ_OBSERVABILITY_CLEANUP_INTERVAL") != null
? Integer.parseInt(System.getenv("AUTOMQ_OBSERVABILITY_CLEANUP_INTERVAL"))
: 2 * 60 * 1000;
public static final int UPLOAD_INTERVAL = System.getenv("AUTOMQ_OBSERVABILITY_UPLOAD_INTERVAL") != null ? Integer.parseInt(System.getenv("AUTOMQ_OBSERVABILITY_UPLOAD_INTERVAL")) : 60 * 1000;
public static final int CLEANUP_INTERVAL = System.getenv("AUTOMQ_OBSERVABILITY_CLEANUP_INTERVAL") != null ? Integer.parseInt(System.getenv("AUTOMQ_OBSERVABILITY_CLEANUP_INTERVAL")) : 2 * 60 * 1000;
public static final int MAX_JITTER_INTERVAL = 60 * 1000;
private static final LogUploader INSTANCE = new LogUploader();
private final BlockingQueue<LogEvent> queue = new LinkedBlockingQueue<>(DEFAULT_MAX_QUEUE_SIZE);
private final ByteBuf uploadBuffer = Unpooled.directBuffer(DEFAULT_BUFFER_SIZE);
private final Random random = new Random();
@ -72,42 +71,16 @@ public class LogUploader implements LogRecorder {
private volatile S3LogConfig config;
private volatile CompletableFuture<Void> startFuture;
private ObjectStorage objectStorage;
private Thread uploadThread;
private Thread cleanupThread;
public LogUploader() {
private LogUploader() {
}
public synchronized void start(S3LogConfig config) {
if (this.config != null) {
LOGGER.warn("LogUploader is already started.");
return;
}
this.config = config;
if (!config.isEnabled() || config.objectStorage() == null) {
LOGGER.warn("LogUploader is disabled due to configuration.");
closed = true;
return;
}
try {
this.objectStorage = config.objectStorage();
this.uploadThread = new Thread(new UploadTask());
this.uploadThread.setName("log-uploader-upload-thread");
this.uploadThread.setDaemon(true);
this.uploadThread.start();
this.cleanupThread = new Thread(new CleanupTask());
this.cleanupThread.setName("log-uploader-cleanup-thread");
this.cleanupThread.setDaemon(true);
this.cleanupThread.start();
LOGGER.info("LogUploader started successfully.");
} catch (Exception e) {
LOGGER.error("Failed to start LogUploader", e);
closed = true;
}
public static LogUploader getInstance() {
return INSTANCE;
}
public void close() throws InterruptedException {
@ -124,15 +97,63 @@ public class LogUploader implements LogRecorder {
@Override
public boolean append(LogEvent event) {
if (!closed) {
if (!closed && couldUpload()) {
return queue.offer(event);
}
return false;
}
private boolean couldUpload() {
initConfiguration();
boolean enabled = config != null && config.isEnabled() && config.objectStorage() != null;
if (enabled) {
initUploadComponent();
}
return enabled && startFuture != null && startFuture.isDone();
}
private void initConfiguration() {
if (config == null) {
synchronized (this) {
if (config == null) {
config = AutoMQApplication.getBean(S3LogConfig.class);
}
}
}
}
private void initUploadComponent() {
if (startFuture == null) {
synchronized (this) {
if (startFuture == null) {
startFuture = CompletableFuture.runAsync(() -> {
try {
objectStorage = config.objectStorage();
uploadThread = new Thread(new UploadTask());
uploadThread.setName("log-uploader-upload-thread");
uploadThread.setDaemon(true);
uploadThread.start();
cleanupThread = new Thread(new CleanupTask());
cleanupThread.setName("log-uploader-cleanup-thread");
cleanupThread.setDaemon(true);
cleanupThread.start();
startFuture.complete(null);
} catch (Exception e) {
LOGGER.error("Initialize log uploader failed", e);
}
}, command -> new Thread(command).start());
}
}
}
}
private class UploadTask implements Runnable {
private String formatTimestampInMillis(long timestamp) {
public String formatTimestampInMillis(long timestamp) {
return ZonedDateTime.ofInstant(Instant.ofEpochMilli(timestamp), ZoneId.systemDefault())
.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS Z"));
}
@ -144,6 +165,7 @@ public class LogUploader implements LogRecorder {
long now = System.currentTimeMillis();
LogEvent event = queue.poll(1, TimeUnit.SECONDS);
if (event != null) {
// DateTime Level [Logger] Message \n stackTrace
StringBuilder logLine = new StringBuilder()
.append(formatTimestampInMillis(event.timestampMillis()))
.append(" ")
@ -182,22 +204,25 @@ public class LogUploader implements LogRecorder {
private void upload(long now) {
if (uploadBuffer.readableBytes() > 0) {
try {
while (!Thread.currentThread().isInterrupted()) {
if (objectStorage == null) {
break;
}
try {
String objectKey = getObjectKey();
objectStorage.write(WriteOptions.DEFAULT, objectKey, Utils.compress(uploadBuffer.slice().asReadOnly())).get();
break;
} catch (Exception e) {
LOGGER.warn("Failed to upload logs, will retry", e);
Thread.sleep(1000);
if (couldUpload()) {
try {
while (!Thread.currentThread().isInterrupted()) {
if (objectStorage == null) {
break;
}
try {
String objectKey = getObjectKey();
objectStorage.write(WriteOptions.DEFAULT, objectKey, Utils.compress(uploadBuffer.slice().asReadOnly())).get();
break;
} catch (Exception e) {
e.printStackTrace(System.err);
Thread.sleep(1000);
}
}
} catch (InterruptedException e) {
//ignore
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
uploadBuffer.clear();
lastUploadTimestamp = now;
@ -212,11 +237,12 @@ public class LogUploader implements LogRecorder {
public void run() {
while (!Thread.currentThread().isInterrupted()) {
try {
if (closed || !config.isLeader()) {
if (closed || !config.isActiveController()) {
Thread.sleep(Duration.ofMinutes(1).toMillis());
continue;
}
long expiredTime = System.currentTimeMillis() - CLEANUP_INTERVAL;
List<ObjectInfo> objects = objectStorage.list(String.format("automq/logs/%s", config.clusterId())).join();
if (!objects.isEmpty()) {
@ -226,6 +252,7 @@ public class LogUploader implements LogRecorder {
.collect(Collectors.toList());
if (!keyList.isEmpty()) {
// Some S3 implementations allow only 1000 keys per request.
CompletableFuture<?>[] deleteFutures = Lists.partition(keyList, 1000)
.stream()
.map(objectStorage::delete)
@ -233,6 +260,7 @@ public class LogUploader implements LogRecorder {
CompletableFuture.allOf(deleteFutures).join();
}
}
Thread.sleep(Duration.ofMinutes(1).toMillis());
} catch (InterruptedException e) {
break;
@ -247,4 +275,5 @@ public class LogUploader implements LogRecorder {
String hour = LocalDateTime.now(ZoneOffset.UTC).format(DateTimeFormatter.ofPattern("yyyyMMddHH"));
return String.format("automq/logs/%s/%s/%s/%s", config.clusterId(), config.nodeId(), hour, UUID.randomUUID());
}
}
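With the hunk above, the uploader becomes a lazily-initialized singleton, so callers only ever go through `getInstance().append(...)`. A hedged sketch follows; the `LogRecorder.LogEvent` constructor arguments mirror the call that `S3RollingFileAppender` makes further down in this diff, and the logger name and message are placeholders.

```java
import com.automq.shell.log.LogRecorder;
import com.automq.shell.log.LogUploader;

public class LogUploaderAppendSketch {
    public static void main(String[] args) {
        // Field order mirrors the S3RollingFileAppender call: timestamp, level,
        // logger name, rendered message, throwable stack trace lines (may be null).
        LogRecorder.LogEvent event = new LogRecorder.LogEvent(
                System.currentTimeMillis(),
                "INFO",
                "com.example.Demo",   // hypothetical logger name
                "broker started",     // hypothetical message
                null);
        // append() only enqueues once the S3LogConfig bean is resolvable and enabled
        boolean accepted = LogUploader.getInstance().append(event);
        System.out.println("accepted=" + accepted);
    }
}
```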

View File

@ -17,18 +17,19 @@
* limitations under the License.
*/
package com.automq.log.uploader;
package com.automq.shell.log;
import com.automq.stream.s3.operator.ObjectStorage;
public interface S3LogConfig {
boolean isEnabled();
boolean isActiveController();
String clusterId();
int nodeId();
ObjectStorage objectStorage();
boolean isLeader();
}

View File

@ -0,0 +1,50 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.log;
import org.apache.log4j.RollingFileAppender;
import org.apache.log4j.spi.LoggingEvent;
public class S3RollingFileAppender extends RollingFileAppender {
private final LogUploader logUploader = LogUploader.getInstance();
@Override
protected void subAppend(LoggingEvent event) {
super.subAppend(event);
if (!closed) {
LogRecorder.LogEvent logEvent = new LogRecorder.LogEvent(
event.getTimeStamp(),
event.getLevel().toString(),
event.getLoggerName(),
event.getRenderedMessage(),
event.getThrowableStrRep());
try {
logEvent.validate();
} catch (IllegalArgumentException e) {
// Drop invalid log event
errorHandler.error("Failed to validate log event", e, 0);
return;
}
logUploader.append(logEvent);
}
}
}

View File

@ -0,0 +1,128 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.automq.shell.metrics;
import org.apache.commons.lang3.StringUtils;
public class PrometheusUtils {
private static final String TOTAL_SUFFIX = "_total";
public static String getPrometheusUnit(String unit) {
if (unit.contains("{")) {
return "";
}
switch (unit) {
// Time
case "d":
return "days";
case "h":
return "hours";
case "min":
return "minutes";
case "s":
return "seconds";
case "ms":
return "milliseconds";
case "us":
return "microseconds";
case "ns":
return "nanoseconds";
// Bytes
case "By":
return "bytes";
case "KiBy":
return "kibibytes";
case "MiBy":
return "mebibytes";
case "GiBy":
return "gibibytes";
case "TiBy":
return "tibibytes";
case "KBy":
return "kilobytes";
case "MBy":
return "megabytes";
case "GBy":
return "gigabytes";
case "TBy":
return "terabytes";
// SI
case "m":
return "meters";
case "V":
return "volts";
case "A":
return "amperes";
case "J":
return "joules";
case "W":
return "watts";
case "g":
return "grams";
// Misc
case "Cel":
return "celsius";
case "Hz":
return "hertz";
case "1":
return "";
case "%":
return "percent";
default:
return unit;
}
}
public static String mapMetricsName(String name, String unit, boolean isCounter, boolean isGauge) {
// Replace "." into "_"
name = name.replaceAll("\\.", "_");
String prometheusUnit = getPrometheusUnit(unit);
boolean shouldAppendUnit = StringUtils.isNotBlank(prometheusUnit) && !name.contains(prometheusUnit);
// append prometheus unit if not null or empty.
// unit should be appended before type suffix
if (shouldAppendUnit) {
name = name + "_" + prometheusUnit;
}
// trim counter's _total suffix so the unit is placed before it.
if (isCounter && name.endsWith(TOTAL_SUFFIX)) {
name = name.substring(0, name.length() - TOTAL_SUFFIX.length());
}
// replace _total suffix, or add if it wasn't already present.
if (isCounter) {
name = name + TOTAL_SUFFIX;
}
// special case - gauge
if (unit.equals("1") && isGauge && !name.contains("ratio")) {
name = name + "_ratio";
}
return name;
}
public static String mapLabelName(String name) {
if (StringUtils.isBlank(name)) {
return "";
}
return name.replaceAll("\\.", "_");
}
}

View File

@ -16,12 +16,24 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.automq.table.binder;
import org.apache.iceberg.types.Type;
@FunctionalInterface
public interface StructConverter<S> {
    Object convert(Object sourceValue, S sourceSchema, Type targetType);

package com.automq.shell.metrics;
import com.automq.stream.s3.operator.ObjectStorage;
import org.apache.commons.lang3.tuple.Pair;
import java.util.List;
public interface S3MetricsConfig {
    String clusterId();
    boolean isActiveController();
    int nodeId();
    ObjectStorage objectStorage();
    List<Pair<String, String>> baseLabels();
}
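A throwaway implementation of the `S3MetricsConfig` contract, useful only as an illustration; the values are placeholders and the `ObjectStorage` wiring is left open because its construction is not shown in this diff.

```java
import com.automq.shell.metrics.S3MetricsConfig;
import com.automq.stream.s3.operator.ObjectStorage;

import org.apache.commons.lang3.tuple.Pair;

import java.util.List;

public class StaticS3MetricsConfig implements S3MetricsConfig {
    @Override
    public String clusterId() {
        return "demo-cluster";   // hypothetical cluster id
    }

    @Override
    public boolean isActiveController() {
        return true;             // only the active controller runs cleanup
    }

    @Override
    public int nodeId() {
        return 0;
    }

    @Override
    public ObjectStorage objectStorage() {
        return null;             // plug in a real ObjectStorage here
    }

    @Override
    public List<Pair<String, String>> baseLabels() {
        return List.of(Pair.of("cluster", "demo-cluster"));
    }
}
```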

View File

@ -17,9 +17,9 @@
* limitations under the License.
*/
package com.automq.opentelemetry.exporter.s3;
package com.automq.shell.metrics;
import com.automq.opentelemetry.exporter.MetricsExportConfig;
import com.automq.shell.util.Utils;
import com.automq.stream.s3.operator.ObjectStorage;
import com.automq.stream.s3.operator.ObjectStorage.ObjectInfo;
import com.automq.stream.s3.operator.ObjectStorage.ObjectPath;
@ -60,9 +60,6 @@ import io.opentelemetry.sdk.metrics.data.HistogramPointData;
import io.opentelemetry.sdk.metrics.data.MetricData;
import io.opentelemetry.sdk.metrics.export.MetricExporter;
/**
* An S3 metrics exporter that uploads metrics data to S3 buckets.
*/
public class S3MetricsExporter implements MetricExporter {
private static final Logger LOGGER = LoggerFactory.getLogger(S3MetricsExporter.class);
@ -71,13 +68,13 @@ public class S3MetricsExporter implements MetricExporter {
public static final int MAX_JITTER_INTERVAL = 60 * 1000;
public static final int DEFAULT_BUFFER_SIZE = 16 * 1024 * 1024;
private final MetricsExportConfig config;
private final S3MetricsConfig config;
private final Map<String, String> defaultTagMap = new HashMap<>();
private final ByteBuf uploadBuffer = Unpooled.directBuffer(DEFAULT_BUFFER_SIZE);
private static final Random RANDOM = new Random();
private final Random random = new Random();
private volatile long lastUploadTimestamp = System.currentTimeMillis();
private volatile long nextUploadInterval = UPLOAD_INTERVAL + RANDOM.nextInt(MAX_JITTER_INTERVAL);
private volatile long nextUploadInterval = UPLOAD_INTERVAL + random.nextInt(MAX_JITTER_INTERVAL);
private final ObjectStorage objectStorage;
private final ObjectMapper objectMapper = new ObjectMapper();
@ -86,12 +83,7 @@ public class S3MetricsExporter implements MetricExporter {
private final Thread uploadThread;
private final Thread cleanupThread;
/**
* Creates a new S3MetricsExporter.
*
* @param config The configuration for the S3 metrics exporter.
*/
public S3MetricsExporter(MetricsExportConfig config) {
public S3MetricsExporter(S3MetricsConfig config) {
this.config = config;
this.objectStorage = config.objectStorage();
@ -109,9 +101,6 @@ public class S3MetricsExporter implements MetricExporter {
cleanupThread.setDaemon(true);
}
/**
* Starts the exporter threads.
*/
public void start() {
uploadThread.start();
cleanupThread.start();
@ -150,7 +139,7 @@ public class S3MetricsExporter implements MetricExporter {
public void run() {
while (!Thread.currentThread().isInterrupted()) {
try {
if (closed || !config.isLeader()) {
if (closed || !config.isActiveController()) {
Thread.sleep(Duration.ofMinutes(1).toMillis());
continue;
}
@ -173,11 +162,16 @@ public class S3MetricsExporter implements MetricExporter {
CompletableFuture.allOf(deleteFutures).join();
}
}
Threads.sleep(Duration.ofMinutes(1).toMillis());
if (Threads.sleep(Duration.ofMinutes(1).toMillis())) {
break;
}
} catch (InterruptedException e) {
break;
} catch (Exception e) {
LOGGER.error("Cleanup s3 metrics failed", e);
if (Threads.sleep(Duration.ofMinutes(1).toMillis())) {
break;
}
}
}
}
@ -203,33 +197,37 @@ public class S3MetricsExporter implements MetricExporter {
for (MetricData metric : metrics) {
switch (metric.getType()) {
case LONG_SUM:
String longSumMetricsName = PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), metric.getLongSumData().isMonotonic(), false);
metric.getLongSumData().getPoints().forEach(point ->
lineList.add(serializeCounter(
PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), metric.getLongSumData().isMonotonic(), false),
lineList.add(serializeCounter(longSumMetricsName,
point.getValue(), point.getAttributes(), point.getEpochNanos())));
break;
case DOUBLE_SUM:
String doubleSumMetricsName = PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), metric.getDoubleSumData().isMonotonic(), false);
metric.getDoubleSumData().getPoints().forEach(point ->
lineList.add(serializeCounter(
PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), metric.getDoubleSumData().isMonotonic(), false),
doubleSumMetricsName,
point.getValue(), point.getAttributes(), point.getEpochNanos())));
break;
case LONG_GAUGE:
String longGaugeMetricsName = PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), false, true);
metric.getLongGaugeData().getPoints().forEach(point ->
lineList.add(serializeGauge(
PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), false, true),
longGaugeMetricsName,
point.getValue(), point.getAttributes(), point.getEpochNanos())));
break;
case DOUBLE_GAUGE:
String doubleGaugeMetricsName = PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), false, true);
metric.getDoubleGaugeData().getPoints().forEach(point ->
lineList.add(serializeGauge(
PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), false, true),
doubleGaugeMetricsName,
point.getValue(), point.getAttributes(), point.getEpochNanos())));
break;
case HISTOGRAM:
String histogramMetricsName = PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), false, false);
metric.getHistogramData().getPoints().forEach(point ->
lineList.add(serializeHistogram(
PrometheusUtils.mapMetricsName(metric.getName(), metric.getUnit(), false, false),
histogramMetricsName,
point)));
break;
default:
@ -262,13 +260,13 @@ public class S3MetricsExporter implements MetricExporter {
synchronized (uploadBuffer) {
if (uploadBuffer.readableBytes() > 0) {
try {
objectStorage.write(WriteOptions.DEFAULT, getObjectKey(), CompressionUtils.compress(uploadBuffer.slice().asReadOnly())).get();
objectStorage.write(WriteOptions.DEFAULT, getObjectKey(), Utils.compress(uploadBuffer.slice().asReadOnly())).get();
} catch (Exception e) {
LOGGER.error("Failed to upload metrics to s3", e);
return CompletableResultCode.ofFailure();
} finally {
lastUploadTimestamp = System.currentTimeMillis();
nextUploadInterval = UPLOAD_INTERVAL + RANDOM.nextInt(MAX_JITTER_INTERVAL);
nextUploadInterval = UPLOAD_INTERVAL + random.nextInt(MAX_JITTER_INTERVAL);
uploadBuffer.clear();
}
}

View File

@ -37,6 +37,7 @@ import org.apache.kafka.common.requests.s3.GetKVsRequest;
import org.apache.kafka.common.requests.s3.PutKVsRequest;
import org.apache.kafka.common.utils.Time;
import com.automq.shell.metrics.S3MetricsExporter;
import com.automq.stream.api.KeyValue;
import org.slf4j.Logger;
@ -47,7 +48,7 @@ import java.util.List;
import java.util.Objects;
public class ClientKVClient {
private static final Logger LOGGER = LoggerFactory.getLogger(ClientKVClient.class);
private static final Logger LOGGER = LoggerFactory.getLogger(S3MetricsExporter.class);
private final NetworkClient networkClient;
private final Node bootstrapServer;

View File

@ -42,5 +42,4 @@ case $COMMAND in
;;
esac
export KAFKA_CONNECT_MODE=true
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@"

View File

@ -42,5 +42,4 @@ case $COMMAND in
;;
esac
export KAFKA_CONNECT_MODE=true
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectStandalone "$@"

View File

@ -40,23 +40,7 @@ should_include_file() {
fi
file=$1
if [ -z "$(echo "$file" | grep -E "$regex")" ] ; then
# If Connect mode is enabled, apply additional filtering
if [ "$KAFKA_CONNECT_MODE" = "true" ]; then
# Skip if file doesn't exist
[ ! -f "$file" ] && return 1
# Exclude heavy dependencies that Connect doesn't need
case "$file" in
*hadoop*) return 1 ;;
*hive*) return 1 ;;
*iceberg*) return 1 ;;
*avro*) return 1 ;;
*parquet*) return 1 ;;
*) return 0 ;;
esac
else
return 0
fi
return 0
else
return 1
fi

View File

@ -838,13 +838,6 @@ tasks.create(name: "jarConnect", dependsOn: connectPkgs.collect { it + ":jar" })
tasks.create(name: "testConnect", dependsOn: connectPkgs.collect { it + ":test" }) {}
// OpenTelemetry related tasks
tasks.create(name: "jarOpenTelemetry", dependsOn: ":opentelemetry:jar") {}
tasks.create(name: "testOpenTelemetry", dependsOn: ":opentelemetry:test") {}
tasks.create(name: "buildOpenTelemetry", dependsOn: [":opentelemetry:jar", ":opentelemetry:test"]) {}
project(':server') {
base {
archivesName = "kafka-server"
@ -945,8 +938,6 @@ project(':core') {
implementation project(':storage')
implementation project(':server')
implementation project(':automq-shell')
implementation project(':automq-metrics')
implementation project(':automq-log-uploader')
implementation libs.argparse4j
implementation libs.commonsValidator
@ -987,6 +978,14 @@ project(':core') {
// The `jcl-over-slf4j` library is used to redirect JCL logging to SLF4J.
implementation libs.jclOverSlf4j
implementation libs.opentelemetryJava8
implementation libs.opentelemetryOshi
implementation libs.opentelemetrySdk
implementation libs.opentelemetrySdkMetrics
implementation libs.opentelemetryExporterLogging
implementation libs.opentelemetryExporterProm
implementation libs.opentelemetryExporterOTLP
implementation libs.opentelemetryJmx
implementation libs.awsSdkAuth
// table topic start
@ -1250,10 +1249,6 @@ project(':core') {
from(project(':trogdor').configurations.runtimeClasspath) { into("libs/") }
from(project(':automq-shell').jar) { into("libs/") }
from(project(':automq-shell').configurations.runtimeClasspath) { into("libs/") }
from(project(':automq-metrics').jar) { into("libs/") }
from(project(':automq-metrics').configurations.runtimeClasspath) { into("libs/") }
from(project(':automq-log-uploader').jar) { into("libs/") }
from(project(':automq-log-uploader').configurations.runtimeClasspath) { into("libs/") }
from(project(':shell').jar) { into("libs/") }
from(project(':shell').configurations.runtimeClasspath) { into("libs/") }
from(project(':connect:api').jar) { into("libs/") }
@ -2334,107 +2329,6 @@ project(':tools:tools-api') {
}
}
project(':automq-metrics') {
archivesBaseName = "automq-metrics"
checkstyle {
configProperties = checkstyleConfigProperties("import-control-server.xml")
}
configurations {
all {
exclude group: 'io.opentelemetry', module: 'opentelemetry-exporter-sender-okhttp'
}
}
dependencies {
// OpenTelemetry core dependencies
api libs.opentelemetryJava8
api libs.opentelemetryOshi
api libs.opentelemetrySdk
api libs.opentelemetrySdkMetrics
api libs.opentelemetryExporterLogging
api libs.opentelemetryExporterProm
api libs.opentelemetryExporterOTLP
api libs.opentelemetryExporterSenderJdk
api libs.opentelemetryJmx
// Logging dependencies
api libs.slf4jApi
api libs.slf4jBridge // SLF4J Bridge
api libs.reload4j
api libs.commonLang
// Yammer metrics (for integration)
api 'com.yammer.metrics:metrics-core:2.2.0'
implementation(project(':s3stream')) {
exclude(group: 'io.opentelemetry', module: '*')
exclude(group: 'io.opentelemetry.instrumentation', module: '*')
exclude(group: 'io.opentelemetry.proto', module: '*')
exclude(group: 'io.netty', module: 'netty-tcnative-boringssl-static')
exclude(group: 'com.github.jnr', module: '*')
exclude(group: 'org.aspectj', module: '*')
exclude(group: 'net.java.dev.jna', module: '*')
exclude(group: 'net.sourceforge.argparse4j', module: '*')
exclude(group: 'com.bucket4j', module: '*')
exclude(group: 'com.yammer.metrics', module: '*')
exclude(group: 'com.github.spotbugs', module: '*')
exclude(group: 'org.apache.kafka.shaded', module: '*')
}
implementation libs.nettyBuffer
implementation libs.jacksonDatabind
implementation libs.guava
implementation project(':clients')
// Test dependencies
testImplementation libs.junitJupiter
testImplementation libs.mockitoCore
testImplementation libs.slf4jReload4j
testRuntimeOnly libs.junitPlatformLanucher
implementation('io.opentelemetry:opentelemetry-sdk:1.40.0')
implementation("io.opentelemetry.semconv:opentelemetry-semconv:1.25.0-alpha")
implementation("io.opentelemetry.instrumentation:opentelemetry-runtime-telemetry-java8:2.6.0-alpha")
implementation('com.google.protobuf:protobuf-java:3.25.5')
implementation('org.xerial.snappy:snappy-java:1.1.10.5')
}
clean.doFirst {
delete "$buildDir/kafka/"
}
javadoc {
enabled = false
}
}
project(':automq-log-uploader') {
archivesBaseName = "automq-log-uploader"
checkstyle {
configProperties = checkstyleConfigProperties("import-control-server.xml")
}
dependencies {
api project(':s3stream')
implementation project(':clients')
implementation libs.reload4j
implementation libs.slf4jApi
implementation libs.slf4jBridge
implementation libs.nettyBuffer
implementation libs.guava
implementation libs.commonLang
}
javadoc {
enabled = false
}
}
project(':tools') {
base {
archivesName = "kafka-tools"
@ -3542,8 +3436,6 @@ project(':connect:runtime') {
api project(':clients')
api project(':connect:json')
api project(':connect:transforms')
api project(':automq-metrics')
api project(':automq-log-uploader')
implementation libs.slf4jApi
implementation libs.reload4j
@ -3552,7 +3444,6 @@ project(':connect:runtime') {
implementation libs.jacksonJaxrsJsonProvider
implementation libs.jerseyContainerServlet
implementation libs.jerseyHk2
implementation libs.jaxrsApi
implementation libs.jaxbApi // Jersey dependency that was available in the JDK before Java 9
implementation libs.activation // Jersey dependency that was available in the JDK before Java 9
implementation libs.jettyServer

View File

@ -322,18 +322,6 @@ public class TopicConfig {
public static final String AUTOMQ_TABLE_TOPIC_ERRORS_TOLERANCE_CONFIG = "automq.table.topic.errors.tolerance";
public static final String AUTOMQ_TABLE_TOPIC_ERRORS_TOLERANCE_DOC = "Configures the error handling strategy for table topic record processing. Valid values are <code>none</code>, <code>invalid_data</code>, and <code>all</code>.";
public static final String AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_ENABLED_CONFIG = "automq.table.topic.expire.snapshot.enabled";
public static final String AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_ENABLED_DOC = "Enable/disable automatic snapshot expiration.";
public static final boolean AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_ENABLED_DEFAULT = true;
public static final String AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_OLDER_THAN_HOURS_CONFIG = "automq.table.topic.expire.snapshot.older.than.hours";
public static final String AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_OLDER_THAN_HOURS_DOC = "Set retention duration in hours.";
public static final int AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_OLDER_THAN_HOURS_DEFAULT = 1;
public static final String AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_RETAIN_LAST_CONFIG = "automq.table.topic.expire.snapshot.retain.last";
public static final String AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_RETAIN_LAST_DOC = "Minimum snapshots to retain.";
public static final int AUTOMQ_TABLE_TOPIC_EXPIRE_SNAPSHOT_RETAIN_LAST_DEFAULT = 1;
public static final String KAFKA_LINKS_ID_CONFIG = "automq.kafka.links.id";
public static final String KAFKA_LINKS_ID_DOC = "The unique id of a kafka link";
public static final String KAFKA_LINKS_TOPIC_START_TIME_CONFIG = "automq.kafka.links.topic.start.time";

View File

@ -20,7 +20,7 @@
"broker"
],
"name": "AutomqGetPartitionSnapshotRequest",
"validVersions": "0-2",
"validVersions": "0-1",
"flexibleVersions": "0+",
"fields": [
{

View File

@ -17,7 +17,7 @@
"apiKey": 516,
"type": "response",
"name": "AutomqGetPartitionSnapshotResponse",
"validVersions": "0-2",
"validVersions": "0-1",
"flexibleVersions": "0+",
"fields": [
{ "name": "ErrorCode", "type": "int16", "versions": "0+", "about": "The top level response error code" },
@ -51,13 +51,6 @@
"type": "string",
"versions": "1+",
"about": "The confirm WAL config."
},
{
"name": "ConfirmWalDeltaData",
"type": "bytes",
"versions": "2+",
"nullableVersions": "2+",
"about": "The confirm WAL delta data between two end offsets. It's an optional field. If not present, the client should read the delta from WAL"
}
],
"commonStructs": [

View File

@ -24,8 +24,7 @@ log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# location of the log files (e.g. ${kafka.logs.dir}/connect.log). The `MaxFileSize` option specifies the maximum size of the log file,
# and the `MaxBackupIndex` option specifies the number of backup files to keep.
#
log4j.appender.connectAppender=com.automq.log.S3RollingFileAppender
log4j.appender.connectAppender.configProviderClass=org.apache.kafka.connect.automq.log.ConnectS3LogConfigProvider
log4j.appender.connectAppender=org.apache.log4j.RollingFileAppender
log4j.appender.connectAppender.MaxFileSize=10MB
log4j.appender.connectAppender.MaxBackupIndex=11
log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log

View File

@ -21,73 +21,70 @@ log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.com.automq.log.S3RollingFileAppender=INFO, stdout
log4j.additivity.com.automq.log.S3RollingFileAppender=false
log4j.appender.kafkaAppender=com.automq.log.S3RollingFileAppender
log4j.appender.kafkaAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=14
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=com.automq.log.S3RollingFileAppender
log4j.appender.stateChangeAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.stateChangeAppender.MaxFileSize=10MB
log4j.appender.stateChangeAppender.MaxBackupIndex=11
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=com.automq.log.S3RollingFileAppender
log4j.appender.requestAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.requestAppender.MaxFileSize=10MB
log4j.appender.requestAppender.MaxBackupIndex=11
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.cleanerAppender=com.automq.log.S3RollingFileAppender
log4j.appender.cleanerAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.cleanerAppender.MaxFileSize=10MB
log4j.appender.cleanerAppender.MaxBackupIndex=11
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=com.automq.log.S3RollingFileAppender
log4j.appender.controllerAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.controllerAppender.MaxFileSize=100MB
log4j.appender.controllerAppender.MaxBackupIndex=14
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.authorizerAppender=com.automq.log.S3RollingFileAppender
log4j.appender.authorizerAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.authorizerAppender.MaxFileSize=10MB
log4j.appender.authorizerAppender.MaxBackupIndex=11
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.s3ObjectAppender=com.automq.log.S3RollingFileAppender
log4j.appender.s3ObjectAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.s3ObjectAppender.MaxFileSize=100MB
log4j.appender.s3ObjectAppender.MaxBackupIndex=14
log4j.appender.s3ObjectAppender.File=${kafka.logs.dir}/s3-object.log
log4j.appender.s3ObjectAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.s3ObjectAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.s3StreamMetricsAppender=com.automq.log.S3RollingFileAppender
log4j.appender.s3StreamMetricsAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.s3StreamMetricsAppender.MaxFileSize=10MB
log4j.appender.s3StreamMetricsAppender.MaxBackupIndex=11
log4j.appender.s3StreamMetricsAppender.File=${kafka.logs.dir}/s3stream-metrics.log
log4j.appender.s3StreamMetricsAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.s3StreamMetricsAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.s3StreamThreadPoolAppender=com.automq.log.S3RollingFileAppender
log4j.appender.s3StreamThreadPoolAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.s3StreamThreadPoolAppender.MaxFileSize=10MB
log4j.appender.s3StreamThreadPoolAppender.MaxBackupIndex=11
log4j.appender.s3StreamThreadPoolAppender.File=${kafka.logs.dir}/s3stream-threads.log
log4j.appender.s3StreamThreadPoolAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.s3StreamThreadPoolAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.autoBalancerAppender=com.automq.log.S3RollingFileAppender
log4j.appender.autoBalancerAppender=com.automq.shell.log.S3RollingFileAppender
log4j.appender.autoBalancerAppender.MaxFileSize=10MB
log4j.appender.autoBalancerAppender.MaxBackupIndex=11
log4j.appender.autoBalancerAppender.File=${kafka.logs.dir}/auto-balancer.log

View File

@ -1,221 +0,0 @@
# Kafka Connect OpenTelemetry Metrics Integration
## Overview
This integration allows Kafka Connect to export metrics through the AutoMQ OpenTelemetry module, enabling unified observability across your Kafka ecosystem.
## Configuration
### 1. Enable the MetricsReporter
Add the following to your Kafka Connect configuration file (`connect-distributed.properties` or `connect-standalone.properties`):
```properties
# Enable OpenTelemetry MetricsReporter
metric.reporters=org.apache.kafka.connect.automq.metrics.OpenTelemetryMetricsReporter
# OpenTelemetry configuration
opentelemetry.metrics.enabled=true
opentelemetry.metrics.prefix=kafka.connect
# Optional: Filter metrics
opentelemetry.metrics.include.pattern=.*connector.*|.*task.*|.*worker.*
opentelemetry.metrics.exclude.pattern=.*jmx.*|.*debug.*
```
### 2. AutoMQ Telemetry Configuration
Ensure the AutoMQ telemetry is properly configured. Add these properties to your application configuration:
```properties
# Telemetry export configuration
automq.telemetry.exporter.uri=prometheus://localhost:9090
# or for OTLP: automq.telemetry.exporter.uri=otlp://localhost:4317
# Service identification
service.name=kafka-connect
service.instance.id=connect-worker-1
# Export settings
automq.telemetry.exporter.interval.ms=30000
automq.telemetry.metric.cardinality.limit=10000
```
## S3 Log Upload
Kafka Connect bundles the AutoMQ log uploader so that worker logs are streamed to S3 and expired log objects are cleaned up from within the cluster. The uploader uses the connect-leader election mechanism by default and requires no additional configuration.
### Worker Configuration
Add the following properties to your worker configuration (ConfigMap, properties file, etc.):
```properties
# Enable S3 log upload
log.s3.enable=true
log.s3.bucket=0@s3://your-log-bucket?region=us-east-1
# Optional overrides (defaults shown)
log.s3.selector.type=connect-leader
# Provide credentials if the bucket URI does not embed them
# log.s3.access.key=...
# log.s3.secret.key=...
```
`log.s3.node.id` defaults to a hash of the pod hostname if not provided, ensuring objects are partitioned per worker.
### Log4j Integration
`config/connect-log4j.properties` switches `connectAppender` to `com.automq.log.S3RollingFileAppender` and names `org.apache.kafka.connect.automq.log.ConnectS3LogConfigProvider` as the config provider. As long as the worker config sets `log.s3.enable=true` and the bucket details, log upload is initialized automatically with the Connect process; if the property is absent or resolves to `log.s3.enable=false`, the uploader stays disabled.
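For reference, the same initialization can be driven programmatically. This sketch mirrors `ConnectLogUploader.initialize(...)` shown later in this diff; the bucket URI and property values are placeholders.

```java
import org.apache.kafka.connect.automq.log.ConnectLogUploader;

import java.util.Map;

public class ConnectLogUploadBootstrap {
    public static void main(String[] args) {
        // Placeholder bucket URI; use your real worker properties in practice.
        Map<String, String> workerProps = Map.of(
                "log.s3.enable", "true",
                "log.s3.bucket", "0@s3://your-log-bucket?region=us-east-1");
        ConnectLogUploader.initialize(workerProps);
    }
}
```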
## Programmatic Usage
### 1. Initialize Telemetry Manager
```java
import com.automq.opentelemetry.AutoMQTelemetryManager;
import java.util.Properties;
// Initialize AutoMQ telemetry before starting Kafka Connect
Properties telemetryProps = new Properties();
telemetryProps.setProperty("automq.telemetry.exporter.uri", "prometheus://localhost:9090");
telemetryProps.setProperty("service.name", "kafka-connect");
telemetryProps.setProperty("service.instance.id", "worker-1");
// Initialize singleton instance
AutoMQTelemetryManager.initializeInstance(telemetryProps);
// Now start Kafka Connect - it will automatically use the OpenTelemetryMetricsReporter
```
### 2. Shutdown
```java
// When shutting down your application
AutoMQTelemetryManager.shutdownInstance();
```
## Exported Metrics
The integration automatically converts Kafka Connect metrics to OpenTelemetry format:
### Metric Naming Convention
- **Format**: `kafka.connect.{group}.{metric_name}`
- **Example**: `kafka.connect.connector.task.batch.size.avg` → `kafka.connect.connector_task_batch_size_avg`
### Metric Types
- **Counters**: Metrics containing "total", "count", "error", "failure"
- **Gauges**: All other numeric metrics (rates, averages, sizes, etc.)
### Attributes
Kafka metric tags are converted to OpenTelemetry attributes:
- `connector` → `connector`
- `task` → `task`
- `worker-id` → `worker_id`
- Plus standard attributes: `metric.group`, `service.name`, `service.instance.id`
## Example Metrics
Common Kafka Connect metrics that will be exported:
```
# Connector metrics
kafka.connect.connector.startup.attempts.total
kafka.connect.connector.startup.success.total
kafka.connect.connector.startup.failure.total
# Task metrics
kafka.connect.connector.task.batch.size.avg
kafka.connect.connector.task.batch.size.max
kafka.connect.connector.task.offset.commit.avg.time.ms
# Worker metrics
kafka.connect.worker.connector.count
kafka.connect.worker.task.count
kafka.connect.worker.connector.startup.attempts.total
```
## Configuration Options
### OpenTelemetry MetricsReporter Options
| Property | Description | Default | Example |
|----------|-------------|---------|---------|
| `opentelemetry.metrics.enabled` | Enable/disable metrics export | `true` | `false` |
| `opentelemetry.metrics.prefix` | Metric name prefix | `kafka.connect` | `my.connect` |
| `opentelemetry.metrics.include.pattern` | Regex for included metrics | All metrics | `.*connector.*` |
| `opentelemetry.metrics.exclude.pattern` | Regex for excluded metrics | None | `.*jmx.*` |
### AutoMQ Telemetry Options
| Property | Description | Default |
|----------|-------------|---------|
| `automq.telemetry.exporter.uri` | Exporter endpoint | Empty |
| `automq.telemetry.exporter.interval.ms` | Export interval | `60000` |
| `automq.telemetry.metric.cardinality.limit` | Max metric cardinality | `20000` |
## Monitoring Examples
### Prometheus Queries
```promql
# Connector count by worker
kafka_connect_worker_connector_count
# Task failure rate
rate(kafka_connect_connector_task_startup_failure_total[5m])
# Average batch processing time
kafka_connect_connector_task_batch_size_avg
# Connector startup success rate
rate(kafka_connect_connector_startup_success_total[5m]) /
rate(kafka_connect_connector_startup_attempts_total[5m])
```
### Grafana Dashboard
Common panels to create:
1. **Connector Health**: Count of running/failed connectors
2. **Task Performance**: Batch size, processing time, throughput
3. **Error Rates**: Failed startups, task failures
4. **Resource Usage**: Combined with JVM metrics from AutoMQ telemetry
## Troubleshooting
### Common Issues
1. **Metrics not appearing**
```
Check logs for: "AutoMQTelemetryManager is not initialized"
Solution: Ensure AutoMQTelemetryManager.initializeInstance() is called before Connect starts
```
2. **High cardinality warnings**
```
Solution: Use include/exclude patterns to filter metrics
```
3. **Missing dependencies**
```
Ensure connect-runtime depends on the opentelemetry module
```
### Debug Logging
Enable debug logging to troubleshoot:
```properties
log4j.logger.org.apache.kafka.connect.automq=DEBUG
log4j.logger.com.automq.opentelemetry=DEBUG
```
## Integration with Existing Monitoring
This integration works alongside:
- Existing JMX metrics (not replaced)
- Kafka broker metrics via AutoMQ telemetry
- Application-specific metrics
- Third-party monitoring tools
The OpenTelemetry integration provides a unified export path while preserving existing monitoring setups.

View File

@ -1,95 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.az;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.Optional;
public final class AzAwareClientConfigurator {
private static final Logger LOGGER = LoggerFactory.getLogger(AzAwareClientConfigurator.class);
private AzAwareClientConfigurator() {
}
public enum ClientFamily {
PRODUCER,
CONSUMER,
ADMIN
}
public static void maybeApplyAz(Map<String, Object> props, ClientFamily family, String roleDescriptor) {
Optional<String> azOpt = AzMetadataProviderHolder.provider().availabilityZoneId();
LOGGER.info("AZ-aware client.id configuration for role {}: resolved availability zone id '{}'",
roleDescriptor, azOpt.orElse("unknown"));
if (azOpt.isEmpty()) {
LOGGER.info("Skipping AZ-aware client.id configuration for role {} as no availability zone id is available",
roleDescriptor);
return;
}
String az = azOpt.get();
String encodedAz = URLEncoder.encode(az, StandardCharsets.UTF_8);
String automqClientId;
if (props.containsKey(CommonClientConfigs.CLIENT_ID_CONFIG)) {
Object currentId = props.get(CommonClientConfigs.CLIENT_ID_CONFIG);
if (currentId instanceof String currentIdStr) {
automqClientId = "automq_az=" + encodedAz + "&" + currentIdStr;
} else {
LOGGER.warn("client.id for role {} is not a string ({});",
roleDescriptor, currentId.getClass().getName());
return;
}
} else {
automqClientId = "automq_az=" + encodedAz;
}
props.put(CommonClientConfigs.CLIENT_ID_CONFIG, automqClientId);
LOGGER.info("Applied AZ-aware client.id for role {} -> {}", roleDescriptor, automqClientId);
if (family == ClientFamily.CONSUMER) {
LOGGER.info("Applying client.rack configuration for consumer role {} -> {}", roleDescriptor, az);
Object rackValue = props.get(ConsumerConfig.CLIENT_RACK_CONFIG);
if (rackValue == null || String.valueOf(rackValue).isBlank()) {
props.put(ConsumerConfig.CLIENT_RACK_CONFIG, az);
}
}
}
public static void maybeApplyProducerAz(Map<String, Object> props, String roleDescriptor) {
maybeApplyAz(props, ClientFamily.PRODUCER, roleDescriptor);
}
public static void maybeApplyConsumerAz(Map<String, Object> props, String roleDescriptor) {
maybeApplyAz(props, ClientFamily.CONSUMER, roleDescriptor);
}
public static void maybeApplyAdminAz(Map<String, Object> props, String roleDescriptor) {
maybeApplyAz(props, ClientFamily.ADMIN, roleDescriptor);
}
}
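To make the rewrite rule concrete, here is an illustrative call; the zone id in the comments is hypothetical and depends on the resolved `AzMetadataProvider`.

```java
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.connect.automq.az.AzAwareClientConfigurator;

import java.util.HashMap;
import java.util.Map;

public class AzClientIdDemo {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put(CommonClientConfigs.CLIENT_ID_CONFIG, "sink-task-3");

        AzAwareClientConfigurator.maybeApplyConsumerAz(props, "sink-connector consumer");

        // If the provider resolves zone "use1-az1", client.id becomes
        // "automq_az=use1-az1&sink-task-3" and client.rack is set to "use1-az1";
        // if no zone id is available, the properties are left untouched.
        System.out.println(props);
    }
}
```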

View File

@ -1,64 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.az;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Map;
import java.util.ServiceLoader;
public final class AzMetadataProviderHolder {
private static final Logger LOGGER = LoggerFactory.getLogger(AzMetadataProviderHolder.class);
private static final AzMetadataProvider DEFAULT_PROVIDER = new AzMetadataProvider() { };
private static volatile AzMetadataProvider provider = DEFAULT_PROVIDER;
private AzMetadataProviderHolder() {
}
public static void initialize(Map<String, String> workerProps) {
AzMetadataProvider selected = DEFAULT_PROVIDER;
try {
ServiceLoader<AzMetadataProvider> loader = ServiceLoader.load(AzMetadataProvider.class);
for (AzMetadataProvider candidate : loader) {
try {
candidate.configure(workerProps);
selected = candidate;
LOGGER.info("Loaded AZ metadata provider: {}", candidate.getClass().getName());
break;
} catch (Exception e) {
LOGGER.warn("Failed to initialize AZ metadata provider: {}", candidate.getClass().getName(), e);
}
}
} catch (Throwable t) {
LOGGER.warn("Failed to load AZ metadata providers", t);
}
provider = selected;
}
public static AzMetadataProvider provider() {
return provider;
}
public static void setProviderForTest(AzMetadataProvider newProvider) {
provider = newProvider != null ? newProvider : DEFAULT_PROVIDER;
}
}
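Custom providers are discovered through `ServiceLoader`, so a deployment can ship its own zone lookup. The sketch below is a hypothetical provider: the interface itself is not shown in this diff, so the method names are inferred from how the holder and the configurator use it (`configure(...)` and `availabilityZoneId()`), and the worker property key is a placeholder. It would be registered under `META-INF/services/org.apache.kafka.connect.automq.az.AzMetadataProvider`.

```java
import org.apache.kafka.connect.automq.az.AzMetadataProvider;

import java.util.Map;
import java.util.Optional;

public class StaticAzMetadataProvider implements AzMetadataProvider {
    private volatile String az;

    @Override
    public void configure(Map<String, String> workerProps) {
        // hypothetical property name carrying the zone id
        this.az = workerProps.get("automq.az.id");
    }

    @Override
    public Optional<String> availabilityZoneId() {
        return Optional.ofNullable(az);
    }
}
```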

View File

@ -1,56 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.log;
import com.automq.log.S3RollingFileAppender;
import com.automq.log.uploader.S3LogConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Map;
import java.util.Properties;
/**
* Initializes the AutoMQ S3 log uploader for Kafka Connect.
*/
public final class ConnectLogUploader {
private static Logger getLogger() {
return LoggerFactory.getLogger(ConnectLogUploader.class);
}
private ConnectLogUploader() {
}
public static void initialize(Map<String, String> workerProps) {
Properties props = new Properties();
if (workerProps != null) {
workerProps.forEach((k, v) -> {
if (k != null && v != null) {
props.put(k, v);
}
});
}
ConnectS3LogConfigProvider.initialize(props);
S3LogConfig s3LogConfig = new ConnectS3LogConfigProvider().get();
S3RollingFileAppender.setup(s3LogConfig);
getLogger().info("Initialized Connect S3 log uploader context");
}
}
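
As a usage sketch, the snippet below shows the kind of worker properties initialize() expects. The log.s3.* keys are defined in LogConfigConstants further down in this diff, and the bucket value presumably follows the same 0@s3://$bucket?region=$region syntax documented for the telemetry bucket; the bucket name, region, and ids are placeholders.

import org.apache.kafka.connect.automq.log.ConnectLogUploader;

import java.util.Map;

public class LogUploaderBootstrapExample {
    public static void main(String[] args) {
        Map<String, String> workerProps = Map.of(
            "log.s3.enable", "true",
            "log.s3.bucket", "0@s3://my-log-bucket?region=us-east-1", // placeholder bucket/region
            "log.s3.cluster.id", "connect-cluster-1",                 // placeholder cluster id
            "log.s3.node.id", "1");                                   // optional: falls back to CONNECT_NODE_ID or a hostname hash
        // Copies the props, initializes the config provider, and sets up the S3 rolling appender.
        ConnectLogUploader.initialize(workerProps);
    }
}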

View File

@ -1,95 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.log;
import org.apache.kafka.connect.automq.runtime.LeaderNodeSelector;
import org.apache.kafka.connect.automq.runtime.RuntimeLeaderSelectorProvider;
import com.automq.log.uploader.S3LogConfig;
import com.automq.stream.s3.operator.BucketURI;
import com.automq.stream.s3.operator.ObjectStorage;
import com.automq.stream.s3.operator.ObjectStorageFactory;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class ConnectS3LogConfig implements S3LogConfig {
private static final Logger LOGGER = LoggerFactory.getLogger(ConnectS3LogConfig.class);
private final boolean enable;
private final String clusterId;
private final int nodeId;
private final String bucketURI;
private ObjectStorage objectStorage;
private LeaderNodeSelector leaderNodeSelector;
public ConnectS3LogConfig(boolean enable, String clusterId, int nodeId, String bucketURI) {
this.enable = enable;
this.clusterId = clusterId;
this.nodeId = nodeId;
this.bucketURI = bucketURI;
}
@Override
public boolean isEnabled() {
return this.enable;
}
@Override
public String clusterId() {
return this.clusterId;
}
@Override
public int nodeId() {
return this.nodeId;
}
@Override
public synchronized ObjectStorage objectStorage() {
if (this.objectStorage != null) {
return this.objectStorage;
}
if (StringUtils.isBlank(bucketURI)) {
LOGGER.error("Mandatory log config bucketURI is not set.");
return null;
}
String normalizedBucket = bucketURI.trim();
BucketURI logBucket = BucketURI.parse(normalizedBucket);
this.objectStorage = ObjectStorageFactory.instance().builder(logBucket).threadPrefix("s3-log-uploader").build();
return this.objectStorage;
}
@Override
public boolean isLeader() {
LeaderNodeSelector selector = leaderSelector();
return selector != null && selector.isLeader();
}
public LeaderNodeSelector leaderSelector() {
if (leaderNodeSelector == null) {
this.leaderNodeSelector = new RuntimeLeaderSelectorProvider().createSelector();
}
return leaderNodeSelector;
}
}

View File

@ -1,112 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.log;
import com.automq.log.uploader.S3LogConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.InetAddress;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
/**
* Provides S3 log uploader configuration for Kafka Connect workers.
*/
public class ConnectS3LogConfigProvider {
private static Logger getLogger() {
return LoggerFactory.getLogger(ConnectS3LogConfigProvider.class);
}
private static final AtomicReference<Properties> CONFIG = new AtomicReference<>();
private static final long WAIT_TIMEOUT_MS = TimeUnit.SECONDS.toMillis(10);
private static final CountDownLatch INIT = new CountDownLatch(1);
public static void initialize(Properties workerProps) {
try {
if (workerProps == null) {
CONFIG.set(null);
return;
}
Properties copy = new Properties();
for (Map.Entry<Object, Object> entry : workerProps.entrySet()) {
if (entry.getKey() != null && entry.getValue() != null) {
copy.put(entry.getKey(), entry.getValue());
}
}
CONFIG.set(copy);
} finally {
INIT.countDown();
}
getLogger().info("Initializing ConnectS3LogConfigProvider");
}
public S3LogConfig get() {
try {
if (!INIT.await(WAIT_TIMEOUT_MS, TimeUnit.MILLISECONDS)) {
getLogger().warn("S3 log uploader config not initialized within timeout; uploader disabled.");
}
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
getLogger().warn("Interrupted while waiting for S3 log uploader config; uploader disabled.");
return null;
}
Properties source = CONFIG.get();
if (source == null) {
getLogger().warn("S3 log upload configuration was not provided; uploader disabled.");
return null;
}
String bucketURI = source.getProperty(LogConfigConstants.LOG_S3_BUCKET_KEY);
String clusterId = source.getProperty(LogConfigConstants.LOG_S3_CLUSTER_ID_KEY);
String nodeIdStr = resolveNodeId(source);
boolean enable = Boolean.parseBoolean(source.getProperty(LogConfigConstants.LOG_S3_ENABLE_KEY, "false"));
return new ConnectS3LogConfig(enable, clusterId, Integer.parseInt(nodeIdStr), bucketURI);
}
private String resolveNodeId(Properties workerProps) {
String fromConfig = workerProps.getProperty(LogConfigConstants.LOG_S3_NODE_ID_KEY);
if (!isBlank(fromConfig)) {
return fromConfig.trim();
}
String env = System.getenv("CONNECT_NODE_ID");
if (!isBlank(env)) {
return env.trim();
}
String host = workerProps.getProperty("automq.log.s3.node.hostname");
if (isBlank(host)) {
try {
host = InetAddress.getLocalHost().getHostName();
} catch (Exception e) {
host = System.getenv().getOrDefault("HOSTNAME", "0");
}
}
return Integer.toString(host.hashCode() & Integer.MAX_VALUE);
}
private boolean isBlank(String value) {
return value == null || value.trim().isEmpty();
}
}
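
The node id resolution above falls back from the explicit log.s3.node.id setting to the CONNECT_NODE_ID environment variable and finally to a hash of the hostname. A tiny illustration of that last step, with a placeholder hostname:

public class NodeIdHashExample {
    public static void main(String[] args) {
        String host = "connect-worker-7.internal";        // placeholder hostname
        int nodeId = host.hashCode() & Integer.MAX_VALUE; // clearing the sign bit keeps the derived id non-negative
        System.out.println(nodeId);
    }
}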

View File

@ -1,30 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.log;
public class LogConfigConstants {
public static final String LOG_S3_ENABLE_KEY = "log.s3.enable";
public static final String LOG_S3_BUCKET_KEY = "log.s3.bucket";
public static final String LOG_S3_CLUSTER_ID_KEY = "log.s3.cluster.id";
public static final String LOG_S3_NODE_ID_KEY = "log.s3.node.id";
}

View File

@ -1,77 +0,0 @@
package org.apache.kafka.connect.automq.metrics;
import org.apache.kafka.connect.automq.runtime.LeaderNodeSelector;
import org.apache.kafka.connect.automq.runtime.RuntimeLeaderSelectorProvider;
import com.automq.opentelemetry.exporter.MetricsExportConfig;
import com.automq.stream.s3.operator.BucketURI;
import com.automq.stream.s3.operator.ObjectStorage;
import com.automq.stream.s3.operator.ObjectStorageFactory;
import org.apache.commons.lang3.tuple.Pair;
import java.util.List;
public class ConnectMetricsExportConfig implements MetricsExportConfig {
private final BucketURI metricsBucket;
private final String clusterId;
private final int nodeId;
private final int intervalMs;
private final List<Pair<String, String>> baseLabels;
private ObjectStorage objectStorage;
private LeaderNodeSelector leaderNodeSelector;
public ConnectMetricsExportConfig(String clusterId, int nodeId, BucketURI metricsBucket, List<Pair<String, String>> baseLabels, int intervalMs) {
this.clusterId = clusterId;
this.nodeId = nodeId;
this.metricsBucket = metricsBucket;
this.baseLabels = baseLabels;
this.intervalMs = intervalMs;
}
@Override
public String clusterId() {
return this.clusterId;
}
@Override
public boolean isLeader() {
LeaderNodeSelector selector = leaderSelector();
return selector != null && selector.isLeader();
}
public LeaderNodeSelector leaderSelector() {
if (leaderNodeSelector == null) {
this.leaderNodeSelector = new RuntimeLeaderSelectorProvider().createSelector();
}
return leaderNodeSelector;
}
@Override
public int nodeId() {
return this.nodeId;
}
@Override
public ObjectStorage objectStorage() {
if (metricsBucket == null) {
return null;
}
if (this.objectStorage == null) {
this.objectStorage = ObjectStorageFactory.instance().builder(metricsBucket).threadPrefix("s3-metric").build();
}
return this.objectStorage;
}
@Override
public List<Pair<String, String>> baseLabels() {
return this.baseLabels;
}
@Override
public int intervalMs() {
return this.intervalMs;
}
}

View File

@ -1,30 +0,0 @@
package org.apache.kafka.connect.automq.metrics;
public class MetricsConfigConstants {
public static final String SERVICE_NAME_KEY = "service.name";
public static final String SERVICE_INSTANCE_ID_KEY = "service.instance.id";
public static final String S3_CLIENT_ID_KEY = "automq.telemetry.s3.cluster.id";
/**
* The URI for configuring metrics exporters, e.g. prometheus://localhost:9090 or otlp://localhost:4317.
*/
public static final String EXPORTER_URI_KEY = "automq.telemetry.exporter.uri";
/**
* The export interval in milliseconds.
*/
public static final String EXPORTER_INTERVAL_MS_KEY = "automq.telemetry.exporter.interval.ms";
/**
* The cardinality limit for any single metric.
*/
public static final String METRIC_CARDINALITY_LIMIT_KEY = "automq.telemetry.metric.cardinality.limit";
public static final int DEFAULT_METRIC_CARDINALITY_LIMIT = 20000;
public static final String TELEMETRY_METRICS_BASE_LABELS_CONFIG = "automq.telemetry.metrics.base.labels";
public static final String TELEMETRY_METRICS_BASE_LABELS_DOC = "The base labels that will be added to all metrics. The format is key1=value1,key2=value2.";
public static final String S3_BUCKET = "automq.telemetry.s3.bucket";
public static final String S3_BUCKETS_DOC = "The bucket URL, with format 0@s3://$bucket?region=$region.\n" +
"The full URL format for S3 is 0@s3://$bucket?region=$region[&endpoint=$endpoint][&pathStyle=$enablePathStyle][&authType=$authType][&accessKey=$accessKey][&secretKey=$secretKey][&checksumAlgorithm=$checksumAlgorithm].\n" +
"- pathStyle: true|false. The object storage access path style. When using MinIO, it should be set to true.\n" +
"- authType: instance|static. When set to instance, it will use the instance profile for authentication. When set to static, it will read the accessKey and secretKey from the URL or from the environment variables KAFKA_S3_ACCESS_KEY/KAFKA_S3_SECRET_KEY.";
}
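
For orientation, the sketch below wires these keys into the Properties object that OpenTelemetryMetricsReporter.initializeTelemetry(...) (shown in the next file) consumes; the exporter endpoint, bucket, labels, and ids are placeholder values.

import org.apache.kafka.connect.automq.metrics.OpenTelemetryMetricsReporter;

import java.util.Properties;

public class TelemetryBootstrapExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("automq.telemetry.exporter.uri", "prometheus://localhost:9090");            // or otlp://localhost:4317
        props.setProperty("automq.telemetry.exporter.interval.ms", "60000");
        props.setProperty("automq.telemetry.metrics.base.labels", "env=dev,team=platform");           // placeholder labels
        props.setProperty("automq.telemetry.s3.cluster.id", "connect-cluster-1");                     // placeholder cluster id
        props.setProperty("automq.telemetry.s3.bucket", "0@s3://my-metrics-bucket?region=us-east-1"); // placeholder bucket
        props.setProperty("service.name", "kafka-connect");
        props.setProperty("service.instance.id", "1");
        OpenTelemetryMetricsReporter.initializeTelemetry(props);
    }
}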

View File

@ -1,822 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.metrics;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;
import com.automq.opentelemetry.AutoMQTelemetryManager;
import com.automq.stream.s3.operator.BucketURI;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.tuple.Pair;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.common.AttributesBuilder;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.metrics.ObservableDoubleCounter;
import io.opentelemetry.api.metrics.ObservableDoubleGauge;
import io.opentelemetry.api.metrics.ObservableLongCounter;
/**
* A MetricsReporter implementation that bridges Kafka Connect metrics to OpenTelemetry.
*
* <p>This reporter integrates with the AutoMQ OpenTelemetry module to export Kafka Connect
* metrics through various exporters (Prometheus, OTLP, etc.). It automatically converts
* Kafka metrics to OpenTelemetry instruments based on metric types and provides proper
* labeling and naming conventions.
*
* <p>Key features:
* <ul>
* <li>Automatic metric type detection and conversion</li>
* <li>Support for gauges and counters using async observable instruments</li>
* <li>Proper attribute mapping from Kafka metric tags</li>
* <li>Integration with AutoMQ telemetry infrastructure</li>
* <li>Configurable metric filtering</li>
* <li>Real-time metric value updates through callbacks</li>
* </ul>
*
* <p>Configuration options:
* <ul>
* <li>{@code opentelemetry.metrics.enabled} - Enable/disable OpenTelemetry metrics (default: true)</li>
* <li>{@code opentelemetry.metrics.prefix} - Prefix for metric names (default: "kafka.connect")</li>
* <li>{@code opentelemetry.metrics.include.pattern} - Regex pattern for included metrics</li>
* <li>{@code opentelemetry.metrics.exclude.pattern} - Regex pattern for excluded metrics</li>
* </ul>
*/
public class OpenTelemetryMetricsReporter implements MetricsReporter {
private static final Logger LOGGER = LoggerFactory.getLogger(OpenTelemetryMetricsReporter.class);
private static final String ENABLED_CONFIG = "opentelemetry.metrics.enabled";
private static final String PREFIX_CONFIG = "opentelemetry.metrics.prefix";
private static final String INCLUDE_PATTERN_CONFIG = "opentelemetry.metrics.include.pattern";
private static final String EXCLUDE_PATTERN_CONFIG = "opentelemetry.metrics.exclude.pattern";
private static final String DEFAULT_PREFIX = "kafka";
private boolean enabled = true;
private String metricPrefix = DEFAULT_PREFIX;
private String includePattern = null;
private String excludePattern = null;
private Meter meter;
private final Map<String, AutoCloseable> observableHandles = new ConcurrentHashMap<>();
private final Map<String, KafkaMetric> registeredMetrics = new ConcurrentHashMap<>();
public static void initializeTelemetry(Properties props) {
String exportURIStr = props.getProperty(MetricsConfigConstants.EXPORTER_URI_KEY);
String serviceName = props.getProperty(MetricsConfigConstants.SERVICE_NAME_KEY, "connect-default");
String instanceId = props.getProperty(MetricsConfigConstants.SERVICE_INSTANCE_ID_KEY, "0");
String clusterId = props.getProperty(MetricsConfigConstants.S3_CLIENT_ID_KEY, "cluster-default");
int intervalMs = Integer.parseInt(props.getProperty(MetricsConfigConstants.EXPORTER_INTERVAL_MS_KEY, "60000"));
BucketURI metricsBucket = getMetricsBucket(props);
List<Pair<String, String>> baseLabels = getBaseLabels(props);
AutoMQTelemetryManager.initializeInstance(exportURIStr, serviceName, instanceId, new ConnectMetricsExportConfig(clusterId, Integer.parseInt(instanceId), metricsBucket, baseLabels, intervalMs));
LOGGER.info("OpenTelemetryMetricsReporter initialized");
}
private static BucketURI getMetricsBucket(Properties props) {
String metricsBucket = props.getProperty(MetricsConfigConstants.S3_BUCKET, "");
if (StringUtils.isNotBlank(metricsBucket)) {
List<BucketURI> bucketList = BucketURI.parseBuckets(metricsBucket);
if (!bucketList.isEmpty()) {
return bucketList.get(0);
}
}
return null;
}
private static List<Pair<String, String>> getBaseLabels(Properties props) {
// Parse the base labels from a comma-separated list of key=value pairs
// (e.g. "env=prod,team=infra"); entries that are not in key=value form are skipped.
String baseLabels = props.getProperty(MetricsConfigConstants.TELEMETRY_METRICS_BASE_LABELS_CONFIG);
if (StringUtils.isBlank(baseLabels)) {
return Collections.emptyList();
}
List<Pair<String, String>> labels = new ArrayList<>();
for (String label : baseLabels.split(",")) {
String[] kv = label.split("=");
if (kv.length != 2) {
continue;
}
labels.add(Pair.of(kv[0], kv[1]));
}
return labels;
}
@Override
public void configure(Map<String, ?> configs) {
// Parse configuration
Object enabledObj = configs.get(ENABLED_CONFIG);
if (enabledObj != null) {
enabled = Boolean.parseBoolean(enabledObj.toString());
}
Object prefixObj = configs.get(PREFIX_CONFIG);
if (prefixObj != null) {
metricPrefix = prefixObj.toString();
}
Object includeObj = configs.get(INCLUDE_PATTERN_CONFIG);
if (includeObj != null) {
includePattern = includeObj.toString();
}
Object excludeObj = configs.get(EXCLUDE_PATTERN_CONFIG);
if (excludeObj != null) {
excludePattern = excludeObj.toString();
}
LOGGER.info("OpenTelemetryMetricsReporter configured - enabled: {}, prefix: {}, include: {}, exclude: {}",
enabled, metricPrefix, includePattern, excludePattern);
}
@Override
public void init(List<KafkaMetric> metrics) {
if (!enabled) {
LOGGER.info("OpenTelemetryMetricsReporter is disabled");
return;
}
try {
// Get the OpenTelemetry meter from AutoMQTelemetryManager
// This assumes the telemetry manager is already initialized
meter = AutoMQTelemetryManager.getInstance().getMeter();
if (meter == null) {
LOGGER.warn("AutoMQTelemetryManager is not initialized, OpenTelemetry metrics will not be available");
enabled = false;
return;
}
// Register initial metrics
for (KafkaMetric metric : metrics) {
registerMetric(metric);
}
LOGGER.info("OpenTelemetryMetricsReporter initialized with {} metrics", metrics.size());
} catch (Exception e) {
LOGGER.error("Failed to initialize OpenTelemetryMetricsReporter", e);
enabled = false;
}
}
@Override
public void metricChange(KafkaMetric metric) {
if (!enabled || meter == null) {
return;
}
try {
registerMetric(metric);
} catch (Exception e) {
LOGGER.warn("Failed to register metric change for {}", metric.metricName(), e);
}
}
@Override
public void metricRemoval(KafkaMetric metric) {
if (!enabled) {
return;
}
try {
String metricKey = buildMetricKey(metric.metricName());
closeHandle(metricKey);
registeredMetrics.remove(metricKey);
LOGGER.debug("Removed metric: {}", metricKey);
} catch (Exception e) {
LOGGER.warn("Failed to remove metric {}", metric.metricName(), e);
}
}
@Override
public void close() {
if (enabled) {
// Close all observable handles to prevent memory leaks
observableHandles.values().forEach(handle -> {
try {
handle.close();
} catch (Exception e) {
LOGGER.debug("Error closing observable handle", e);
}
});
observableHandles.clear();
registeredMetrics.clear();
}
LOGGER.info("OpenTelemetryMetricsReporter closed");
}
private void registerMetric(KafkaMetric metric) {
LOGGER.debug("OpenTelemetryMetricsReporter registering metric {}", metric.metricName());
MetricName metricName = metric.metricName();
String metricKey = buildMetricKey(metricName);
// Apply filtering
if (!shouldIncludeMetric(metricKey)) {
return;
}
// Check if metric value is numeric at registration time
Object testValue = safeMetricValue(metric);
if (!(testValue instanceof Number)) {
LOGGER.debug("Skipping non-numeric metric: {}", metricKey);
return;
}
Attributes attributes = buildAttributes(metricName);
// Close existing handle if present (for metric updates)
closeHandle(metricKey);
// Register the metric for future access
registeredMetrics.put(metricKey, metric);
// Determine metric type and register accordingly
if (isCounterMetric(metricName)) {
registerAsyncCounter(metricKey, metricName, metric, attributes, (Number) testValue);
} else {
registerAsyncGauge(metricKey, metricName, metric, attributes);
}
}
private void registerAsyncGauge(String metricKey, MetricName metricName, KafkaMetric metric, Attributes attributes) {
try {
String description = buildDescription(metricName);
String unit = determineUnit(metricName);
ObservableDoubleGauge gauge = meter.gaugeBuilder(metricKey)
.setDescription(description)
.setUnit(unit)
.buildWithCallback(measurement -> {
Number value = (Number) safeMetricValue(metric);
if (value != null) {
measurement.record(value.doubleValue(), attributes);
}
});
observableHandles.put(metricKey, gauge);
LOGGER.debug("Registered async gauge: {}", metricKey);
} catch (Exception e) {
LOGGER.warn("Failed to register async gauge for {}", metricKey, e);
}
}
private void registerAsyncCounter(String metricKey, MetricName metricName, KafkaMetric metric,
Attributes attributes, Number initialValue) {
try {
String description = buildDescription(metricName);
String unit = determineUnit(metricName);
// Use appropriate counter type based on initial value type
if (initialValue instanceof Long || initialValue instanceof Integer) {
ObservableLongCounter counter = meter.counterBuilder(metricKey)
.setDescription(description)
.setUnit(unit)
.buildWithCallback(measurement -> {
Number value = (Number) safeMetricValue(metric);
if (value != null) {
long longValue = value.longValue();
if (longValue >= 0) {
measurement.record(longValue, attributes);
}
}
});
observableHandles.put(metricKey, counter);
} else {
ObservableDoubleCounter counter = meter.counterBuilder(metricKey)
.ofDoubles()
.setDescription(description)
.setUnit(unit)
.buildWithCallback(measurement -> {
Number value = (Number) safeMetricValue(metric);
if (value != null) {
double doubleValue = value.doubleValue();
if (doubleValue >= 0) {
measurement.record(doubleValue, attributes);
}
}
});
observableHandles.put(metricKey, counter);
}
LOGGER.debug("Registered async counter: {}", metricKey);
} catch (Exception e) {
LOGGER.warn("Failed to register async counter for {}", metricKey, e);
}
}
private Object safeMetricValue(KafkaMetric metric) {
try {
return metric.metricValue();
} catch (Exception e) {
LOGGER.debug("Failed to read metric value for {}", metric.metricName(), e);
return null;
}
}
private void closeHandle(String metricKey) {
AutoCloseable handle = observableHandles.remove(metricKey);
if (handle != null) {
try {
handle.close();
} catch (Exception e) {
LOGGER.debug("Error closing handle for {}", metricKey, e);
}
}
}
private String buildMetricKey(MetricName metricName) {
StringBuilder sb = new StringBuilder(metricPrefix);
sb.append(".");
// Add group if present
if (metricName.group() != null && !metricName.group().isEmpty()) {
sb.append(metricName.group().replace("-", "_").toLowerCase(Locale.ROOT));
sb.append(".");
}
// Add name
sb.append(metricName.name().replace("-", "_").toLowerCase(Locale.ROOT));
return sb.toString();
}
private Attributes buildAttributes(MetricName metricName) {
AttributesBuilder builder = Attributes.builder();
// Add metric tags as attributes
Map<String, String> tags = metricName.tags();
if (tags != null) {
for (Map.Entry<String, String> entry : tags.entrySet()) {
String key = entry.getKey();
String value = entry.getValue();
if (key != null && value != null) {
builder.put(sanitizeAttributeKey(key), value);
}
}
}
// Add standard attributes
if (metricName.group() != null) {
builder.put("metric.group", metricName.group());
}
return builder.build();
}
private String sanitizeAttributeKey(String key) {
return key.replace("-", "_").replace(".", "_").toLowerCase(Locale.ROOT);
}
private String buildDescription(MetricName metricName) {
StringBuilder description = new StringBuilder();
description.append("Kafka Connect metric: ");
if (metricName.group() != null) {
description.append(metricName.group()).append(" - ");
}
description.append(metricName.name());
return description.toString();
}
private String determineUnit(MetricName metricName) {
String name = metricName.name().toLowerCase(Locale.ROOT);
String group = metricName.group() != null ? metricName.group().toLowerCase(Locale.ROOT) : "";
if (isKafkaConnectMetric(group)) {
return determineConnectMetricUnit(name);
}
if (isTimeMetric(name)) {
return determineTimeUnit(name);
}
if (isBytesMetric(name)) {
return determineBytesUnit(name);
}
if (isRateMetric(name)) {
return "1/s";
}
if (isRatioOrPercentageMetric(name)) {
return "1";
}
if (isCountMetric(name)) {
return "1";
}
return "1";
}
private boolean isCounterMetric(MetricName metricName) {
String name = metricName.name().toLowerCase(Locale.ROOT);
String group = metricName.group() != null ? metricName.group().toLowerCase(Locale.ROOT) : "";
if (isKafkaConnectMetric(group)) {
return isConnectCounterMetric(name);
}
if (isGaugeMetric(name)) {
return false;
}
return hasCounterKeywords(name);
}
private boolean isGaugeMetric(String name) {
return hasRateOrAvgKeywords(name) || hasRatioOrPercentKeywords(name) ||
hasMinMaxOrCurrentKeywords(name) || hasActiveOrSizeKeywords(name) ||
hasTimeButNotTotal(name);
}
private boolean hasRateOrAvgKeywords(String name) {
return name.contains("rate") || name.contains("avg") || name.contains("mean");
}
private boolean hasRatioOrPercentKeywords(String name) {
return name.contains("ratio") || name.contains("percent") || name.contains("pct");
}
private boolean hasMinMaxOrCurrentKeywords(String name) {
return name.contains("max") || name.contains("min") || name.contains("current");
}
private boolean hasActiveOrSizeKeywords(String name) {
return name.contains("active") || name.contains("lag") || name.contains("size");
}
private boolean hasTimeButNotTotal(String name) {
return name.contains("time") && !name.contains("total");
}
private boolean hasCounterKeywords(String name) {
String[] parts = name.split("[._-]");
for (String part : parts) {
if (isCounterKeyword(part)) {
return true;
}
}
return false;
}
private boolean isCounterKeyword(String part) {
return isBasicCounterKeyword(part) || isAdvancedCounterKeyword(part);
}
private boolean isBasicCounterKeyword(String part) {
return "total".equals(part) || "count".equals(part) || "sum".equals(part) ||
"attempts".equals(part);
}
private boolean isAdvancedCounterKeyword(String part) {
return "success".equals(part) || "failure".equals(part) ||
"errors".equals(part) || "retries".equals(part) || "skipped".equals(part);
}
private boolean isConnectCounterMetric(String name) {
if (hasTotalBasedCounters(name)) {
return true;
}
if (hasRecordCounters(name)) {
return true;
}
if (hasActiveCountMetrics(name)) {
return false;
}
return false;
}
private boolean hasTotalBasedCounters(String name) {
return hasBasicTotalCounters(name) || hasSuccessFailureCounters(name) ||
hasErrorRetryCounters(name) || hasRequestCompletionCounters(name);
}
private boolean hasBasicTotalCounters(String name) {
return name.contains("total") || name.contains("attempts");
}
private boolean hasSuccessFailureCounters(String name) {
return (name.contains("success") && name.contains("total")) ||
(name.contains("failure") && name.contains("total"));
}
private boolean hasErrorRetryCounters(String name) {
return name.contains("errors") || name.contains("retries") || name.contains("skipped");
}
private boolean hasRequestCompletionCounters(String name) {
return name.contains("requests") || name.contains("completions");
}
private boolean hasRecordCounters(String name) {
return hasRecordKeyword(name) && hasTotalOperation(name);
}
private boolean hasRecordKeyword(String name) {
return name.contains("record") || name.contains("records");
}
private boolean hasTotalOperation(String name) {
return hasPollWriteTotal(name) || hasReadSendTotal(name);
}
private boolean hasPollWriteTotal(String name) {
return name.contains("poll-total") || name.contains("write-total");
}
private boolean hasReadSendTotal(String name) {
return name.contains("read-total") || name.contains("send-total");
}
private boolean hasActiveCountMetrics(String name) {
return hasCountMetrics(name) || hasSequenceMetrics(name);
}
private boolean hasCountMetrics(String name) {
return hasActiveTaskCount(name) || hasConnectorCount(name) || hasStatusCount(name);
}
private boolean hasActiveTaskCount(String name) {
return name.contains("active-count") || name.contains("partition-count") ||
name.contains("task-count");
}
private boolean hasConnectorCount(String name) {
return name.contains("connector-count") || name.contains("running-count");
}
private boolean hasStatusCount(String name) {
return name.contains("paused-count") || name.contains("failed-count");
}
private boolean hasSequenceMetrics(String name) {
return name.contains("seq-no") || name.contains("seq-num");
}
private boolean isKafkaConnectMetric(String group) {
return group.contains("connector") || group.contains("task") ||
group.contains("connect") || group.contains("worker");
}
private String determineConnectMetricUnit(String name) {
String timeUnit = getTimeUnit(name);
if (timeUnit != null) {
return timeUnit;
}
String countUnit = getCountUnit(name);
if (countUnit != null) {
return countUnit;
}
String specialUnit = getSpecialUnit(name);
if (specialUnit != null) {
return specialUnit;
}
return "1";
}
private String getTimeUnit(String name) {
if (isTimeBasedMetric(name)) {
return "ms";
}
if (isTimestampMetric(name)) {
return "ms";
}
if (isTimeSinceMetric(name)) {
return "ms";
}
return null;
}
private String getCountUnit(String name) {
if (isSequenceOrCountMetric(name)) {
return "1";
}
if (isLagMetric(name)) {
return "1";
}
if (isTotalOrCounterMetric(name)) {
return "1";
}
return null;
}
private String getSpecialUnit(String name) {
if (isStatusOrMetadataMetric(name)) {
return "1";
}
if (isConnectRateMetric(name)) {
return "1/s";
}
if (isRatioMetric(name)) {
return "1";
}
return null;
}
private boolean isTimeBasedMetric(String name) {
return hasTimeMs(name) || hasCommitBatchTime(name);
}
private boolean hasTimeMs(String name) {
return name.endsWith("-time-ms") || name.endsWith("-avg-time-ms") ||
name.endsWith("-max-time-ms");
}
private boolean hasCommitBatchTime(String name) {
return name.contains("commit-time") || name.contains("batch-time") ||
name.contains("rebalance-time");
}
private boolean isSequenceOrCountMetric(String name) {
return hasSequenceNumbers(name) || hasCountSuffix(name);
}
private boolean hasSequenceNumbers(String name) {
return name.contains("seq-no") || name.contains("seq-num");
}
private boolean hasCountSuffix(String name) {
return name.endsWith("-count") || name.contains("task-count") ||
name.contains("partition-count");
}
private boolean isLagMetric(String name) {
return name.contains("lag");
}
private boolean isStatusOrMetadataMetric(String name) {
return isStatusMetric(name) || hasProtocolLeaderMetrics(name) ||
hasConnectorMetrics(name);
}
private boolean isStatusMetric(String name) {
return "status".equals(name) || name.contains("protocol");
}
private boolean hasProtocolLeaderMetrics(String name) {
return name.contains("leader-name");
}
private boolean hasConnectorMetrics(String name) {
return name.contains("connector-type") || name.contains("connector-class") ||
name.contains("connector-version");
}
private boolean isRatioMetric(String name) {
return name.contains("ratio") || name.contains("percentage");
}
private boolean isTotalOrCounterMetric(String name) {
return hasTotalSum(name) || hasAttempts(name) || hasSuccessFailure(name) ||
hasErrorsRetries(name);
}
private boolean hasTotalSum(String name) {
return name.contains("total") || name.contains("sum");
}
private boolean hasAttempts(String name) {
return name.contains("attempts");
}
private boolean hasSuccessFailure(String name) {
return name.contains("success") || name.contains("failure");
}
private boolean hasErrorsRetries(String name) {
return name.contains("errors") || name.contains("retries") || name.contains("skipped");
}
private boolean isTimestampMetric(String name) {
return name.contains("timestamp") || name.contains("epoch");
}
private boolean isConnectRateMetric(String name) {
return name.contains("rate") && !name.contains("ratio");
}
private boolean isTimeSinceMetric(String name) {
return name.contains("time-since-last") || name.contains("since-last");
}
private boolean isTimeMetric(String name) {
return hasTimeKeywords(name) && !hasTimeExclusions(name);
}
private boolean hasTimeKeywords(String name) {
return name.contains("time") || name.contains("latency") ||
name.contains("duration");
}
private boolean hasTimeExclusions(String name) {
return name.contains("ratio") || name.contains("rate") ||
name.contains("count") || name.contains("since-last");
}
private String determineTimeUnit(String name) {
if (name.contains("ms") || name.contains("millisecond")) {
return "ms";
} else if (name.contains("us") || name.contains("microsecond")) {
return "us";
} else if (name.contains("ns") || name.contains("nanosecond")) {
return "ns";
} else if (name.contains("s") && !name.contains("ms")) {
return "s";
} else {
return "ms";
}
}
private boolean isBytesMetric(String name) {
return name.contains("byte") || name.contains("bytes") ||
name.contains("size") && !name.contains("batch-size");
}
private String determineBytesUnit(String name) {
boolean isRate = name.contains("rate") || name.contains("per-sec") ||
name.contains("persec") || name.contains("/s");
return isRate ? "By/s" : "By";
}
private boolean isRateMetric(String name) {
return hasRateKeywords(name) && !hasExcludedKeywords(name);
}
private boolean hasRateKeywords(String name) {
return name.contains("rate") || name.contains("per-sec") ||
name.contains("persec") || name.contains("/s");
}
private boolean hasExcludedKeywords(String name) {
return name.contains("byte") || name.contains("ratio");
}
private boolean isRatioOrPercentageMetric(String name) {
return hasPercentKeywords(name) || hasRatioKeywords(name);
}
private boolean hasPercentKeywords(String name) {
return name.contains("percent") || name.contains("pct");
}
private boolean hasRatioKeywords(String name) {
return name.contains("ratio");
}
private boolean isCountMetric(String name) {
return name.contains("count") || name.contains("total") ||
name.contains("sum") || name.endsWith("-num");
}
private boolean shouldIncludeMetric(String metricKey) {
if (excludePattern != null && metricKey.matches(excludePattern)) {
return false;
}
if (includePattern != null) {
return metricKey.matches(includePattern);
}
return true;
}
}
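
A minimal sketch of how the reporter's own options from the javadoc above could be supplied through configure(); the include pattern is a placeholder, and the comment spells out the naming convention that buildMetricKey applies.

import org.apache.kafka.connect.automq.metrics.OpenTelemetryMetricsReporter;

import java.util.Map;

public class ReporterConfigExample {
    public static void main(String[] args) {
        OpenTelemetryMetricsReporter reporter = new OpenTelemetryMetricsReporter();
        reporter.configure(Map.of(
            "opentelemetry.metrics.enabled", "true",
            "opentelemetry.metrics.prefix", "kafka",                         // the default prefix, shown for clarity
            "opentelemetry.metrics.include.pattern", "kafka\\.connector.*"));// placeholder regex
        // With the default prefix, a Kafka metric in group "connector-task-metrics" named
        // "batch-size-avg" is exported under the key "kafka.connector_task_metrics.batch_size_avg".
    }
}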

View File

@ -1,34 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.runtime;
/**
* An interface for determining which node should be responsible for cleaning up metrics.
* This abstraction allows different node-selection strategies to be plugged in.
*/
public interface LeaderNodeSelector {
/**
* Determines whether the current node should be responsible for cleaning up metrics.
*
* @return true if the current node should clean up metrics, false otherwise.
*/
boolean isLeader();
}

View File

@ -1,36 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.runtime;
/**
* SPI interface for providing custom LeaderNodeSelector implementations.
* Third-party libraries can implement this interface and register their implementations
* using Java's ServiceLoader mechanism.
*/
public interface LeaderNodeSelectorProvider {
/**
* Creates a new LeaderNodeSelector instance based on the provided configuration.
*
* @return A new LeaderNodeSelector instance
* @throws Exception If the selector cannot be created
*/
LeaderNodeSelector createSelector() throws Exception;
}

View File

@ -1,46 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.runtime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.function.BooleanSupplier;
/**
* Stores the runtime-provided supplier that answers whether the current process
* should act as the leader.
*/
public final class RuntimeLeaderRegistry {
private static final Logger LOGGER = LoggerFactory.getLogger(RuntimeLeaderRegistry.class);
private static BooleanSupplier supplier = () -> false;
private RuntimeLeaderRegistry() {
}
public static void register(BooleanSupplier supplier) {
RuntimeLeaderRegistry.supplier = supplier;
LOGGER.info("Registered runtime leader supplier for log metrics.");
}
public static BooleanSupplier supplier() {
return supplier;
}
}

View File

@ -1,74 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.connect.automq.runtime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;
public class RuntimeLeaderSelectorProvider implements LeaderNodeSelectorProvider {
private static final Logger LOGGER = LoggerFactory.getLogger(RuntimeLeaderSelectorProvider.class);
@Override
public LeaderNodeSelector createSelector() {
final AtomicBoolean missingLogged = new AtomicBoolean(false);
final AtomicBoolean leaderLogged = new AtomicBoolean(false);
return () -> {
BooleanSupplier current = org.apache.kafka.connect.automq.runtime.RuntimeLeaderRegistry.supplier();
if (current == null) {
if (missingLogged.compareAndSet(false, true)) {
LOGGER.warn("leader supplier for key not yet available; treating node as follower until registration happens.");
}
if (leaderLogged.getAndSet(false)) {
LOGGER.info("Node stepped down from leadership because supplier is unavailable.");
}
return false;
}
if (missingLogged.get()) {
missingLogged.set(false);
LOGGER.info("leader supplier is now available.");
}
try {
boolean leader = current.getAsBoolean();
if (leader) {
if (!leaderLogged.getAndSet(true)) {
LOGGER.info("Node became leader");
}
} else {
if (leaderLogged.getAndSet(false)) {
LOGGER.info("Node stepped down from leadership");
}
}
return leader;
} catch (RuntimeException e) {
if (leaderLogged.getAndSet(false)) {
LOGGER.info("Node stepped down from leadership due to supplier exception.");
}
LOGGER.warn("leader supplier threw exception. Treating as follower.", e);
return false;
}
};
}
}
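
Putting the registry and the provider together: elsewhere in this diff, ConnectDistributed registers herder::isLeaderInstance with RuntimeLeaderRegistry, and the provider above turns that supplier into a LeaderNodeSelector. The self-contained sketch below uses a placeholder AtomicBoolean in place of the herder.

import org.apache.kafka.connect.automq.runtime.LeaderNodeSelector;
import org.apache.kafka.connect.automq.runtime.RuntimeLeaderRegistry;
import org.apache.kafka.connect.automq.runtime.RuntimeLeaderSelectorProvider;

import java.util.concurrent.atomic.AtomicBoolean;

public class LeaderSelectorExample {
    public static void main(String[] args) {
        AtomicBoolean amLeader = new AtomicBoolean(false);  // stands in for herder::isLeaderInstance
        RuntimeLeaderRegistry.register(amLeader::get);

        LeaderNodeSelector selector = new RuntimeLeaderSelectorProvider().createSelector();
        System.out.println(selector.isLeader());            // false: supplier reports follower
        amLeader.set(true);
        System.out.println(selector.isLeader());            // true once the supplier reports leadership
    }
}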

View File

@ -19,9 +19,6 @@ package org.apache.kafka.connect.cli;
import org.apache.kafka.common.utils.Exit;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.connect.automq.az.AzMetadataProviderHolder;
import org.apache.kafka.connect.automq.log.ConnectLogUploader;
import org.apache.kafka.connect.automq.metrics.OpenTelemetryMetricsReporter;
import org.apache.kafka.connect.connector.policy.ConnectorClientConfigOverridePolicy;
import org.apache.kafka.connect.runtime.Connect;
import org.apache.kafka.connect.runtime.Herder;
@ -39,7 +36,6 @@ import java.net.URI;
import java.util.Arrays;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
/**
* Common initialization logic for Kafka Connect, intended for use by command line utilities
@ -49,9 +45,7 @@ import java.util.Properties;
*/
public abstract class AbstractConnectCli<H extends Herder, T extends WorkerConfig> {
private static Logger getLogger() {
return LoggerFactory.getLogger(AbstractConnectCli.class);
}
private static final Logger log = LoggerFactory.getLogger(AbstractConnectCli.class);
private final String[] args;
private final Time time = Time.SYSTEM;
@ -89,6 +83,7 @@ public abstract class AbstractConnectCli<H extends Herder, T extends WorkerConfi
*/
public void run() {
if (args.length < 1 || Arrays.asList(args).contains("--help")) {
log.info("Usage: {}", usage());
Exit.exit(1);
}
@ -97,17 +92,6 @@ public abstract class AbstractConnectCli<H extends Herder, T extends WorkerConfi
Map<String, String> workerProps = !workerPropsFile.isEmpty() ?
Utils.propsToStringMap(Utils.loadProps(workerPropsFile)) : Collections.emptyMap();
String[] extraArgs = Arrays.copyOfRange(args, 1, args.length);
// AutoMQ inject start
// Initialize S3 log uploader and OpenTelemetry with worker properties
ConnectLogUploader.initialize(workerProps);
AzMetadataProviderHolder.initialize(workerProps);
Properties telemetryProps = new Properties();
telemetryProps.putAll(workerProps);
OpenTelemetryMetricsReporter.initializeTelemetry(telemetryProps);
// AutoMQ inject end
Connect<H> connect = startConnect(workerProps);
processExtraArgs(connect, extraArgs);
@ -115,7 +99,7 @@ public abstract class AbstractConnectCli<H extends Herder, T extends WorkerConfi
connect.awaitStop();
} catch (Throwable t) {
getLogger().error("Stopping due to error", t);
log.error("Stopping due to error", t);
Exit.exit(2);
}
}
@ -127,17 +111,17 @@ public abstract class AbstractConnectCli<H extends Herder, T extends WorkerConfi
* @return a started instance of {@link Connect}
*/
public Connect<H> startConnect(Map<String, String> workerProps) {
getLogger().info("Kafka Connect worker initializing ...");
log.info("Kafka Connect worker initializing ...");
long initStart = time.hiResClockMs();
WorkerInfo initInfo = new WorkerInfo();
initInfo.logAll();
getLogger().info("Scanning for plugin classes. This might take a moment ...");
log.info("Scanning for plugin classes. This might take a moment ...");
Plugins plugins = new Plugins(workerProps);
plugins.compareAndSwapWithDelegatingLoader();
T config = createConfig(workerProps);
getLogger().debug("Kafka cluster ID: {}", config.kafkaClusterId());
log.debug("Kafka cluster ID: {}", config.kafkaClusterId());
RestClient restClient = new RestClient(config);
@ -154,11 +138,11 @@ public abstract class AbstractConnectCli<H extends Herder, T extends WorkerConfi
H herder = createHerder(config, workerId, plugins, connectorClientConfigOverridePolicy, restServer, restClient);
final Connect<H> connect = new Connect<>(herder, restServer);
getLogger().info("Kafka Connect worker initialization took {}ms", time.hiResClockMs() - initStart);
log.info("Kafka Connect worker initialization took {}ms", time.hiResClockMs() - initStart);
try {
connect.start();
} catch (Exception e) {
getLogger().error("Failed to start Connect", e);
log.error("Failed to start Connect", e);
connect.stop();
Exit.exit(3);
}

View File

@ -17,7 +17,6 @@
package org.apache.kafka.connect.cli;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.connect.automq.runtime.RuntimeLeaderRegistry;
import org.apache.kafka.connect.connector.policy.ConnectorClientConfigOverridePolicy;
import org.apache.kafka.connect.json.JsonConverter;
import org.apache.kafka.connect.json.JsonConverterConfig;
@ -40,7 +39,6 @@ import org.apache.kafka.connect.util.SharedTopicAdmin;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;
import static org.apache.kafka.clients.CommonClientConfigs.CLIENT_ID_CONFIG;
@ -98,16 +96,10 @@ public class ConnectDistributed extends AbstractConnectCli<DistributedHerder, Di
// Pass the shared admin to the distributed herder as an additional AutoCloseable object that should be closed when the
// herder is stopped. This is easier than having to track and own the lifecycle ourselves.
DistributedHerder herder = new DistributedHerder(config, Time.SYSTEM, worker,
return new DistributedHerder(config, Time.SYSTEM, worker,
kafkaClusterId, statusBackingStore, configBackingStore,
restServer.advertisedUrl().toString(), restClient, connectorClientConfigOverridePolicy,
Collections.emptyList(), sharedAdmin);
// AutoMQ for Kafka connect inject start
BooleanSupplier leaderSupplier = herder::isLeaderInstance;
RuntimeLeaderRegistry.register(leaderSupplier);
// AutoMQ for Kafka connect inject end
return herder;
}
@Override

View File

@ -21,8 +21,6 @@ import org.apache.kafka.connect.runtime.distributed.DistributedHerder;
import org.apache.kafka.connect.runtime.rest.ConnectRestServer;
import org.apache.kafka.connect.runtime.rest.RestServer;
import com.automq.log.S3RollingFileAppender;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -117,9 +115,6 @@ public class Connect<H extends Herder> {
try {
startLatch.await();
Connect.this.stop();
// AutoMQ inject start
S3RollingFileAppender.shutdown();
// AutoMQ inject end
} catch (InterruptedException e) {
log.error("Interrupted in shutdown hook while waiting for Kafka Connect startup to finish");
}

View File

@ -48,7 +48,6 @@ import org.apache.kafka.common.utils.ThreadUtils;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.common.utils.Timer;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.connect.automq.az.AzAwareClientConfigurator;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.connector.Connector;
import org.apache.kafka.connect.connector.Task;
@ -842,10 +841,6 @@ public class Worker {
connectorClientConfigOverridePolicy);
producerProps.putAll(producerOverrides);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyProducerAz(producerProps, defaultClientId);
// AutoMQ for Kafka inject end
return producerProps;
}
@ -914,10 +909,6 @@ public class Worker {
connectorClientConfigOverridePolicy);
consumerProps.putAll(consumerOverrides);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyConsumerAz(consumerProps, defaultClientId);
// AutoMQ for Kafka inject end
return consumerProps;
}
@ -947,10 +938,6 @@ public class Worker {
// Admin client-specific overrides in the worker config
adminProps.putAll(config.originalsWithPrefix("admin."));
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyAdminAz(adminProps, defaultClientId);
// AutoMQ for Kafka inject end
// Connector-specified overrides
Map<String, Object> adminOverrides =
connectorClientConfigOverrides(connName, connConfig, connectorClass, ConnectorConfig.CONNECTOR_CLIENT_ADMIN_OVERRIDES_PREFIX,

View File

@ -1735,12 +1735,6 @@ public class DistributedHerder extends AbstractHerder implements Runnable {
configBackingStore.putLoggerLevel(namespace, level);
}
// AutoMQ inject start
public boolean isLeaderInstance() {
return isLeader();
}
// AutoMQ inject end
// Should only be called from work thread, so synchronization should not be needed
protected boolean isLeader() {
return assignment != null && member.memberId().equals(assignment.leader());

View File

@ -35,7 +35,6 @@ import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.common.utils.Timer;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.connect.automq.az.AzAwareClientConfigurator;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.SchemaBuilder;
@ -441,9 +440,6 @@ public class KafkaConfigBackingStore extends KafkaTopicBasedBackingStore impleme
Map<String, Object> result = new HashMap<>(baseProducerProps(workerConfig));
result.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId + "-leader");
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyProducerAz(result, "config-log-leader");
// AutoMQ for Kafka inject end
// Always require producer acks to all to ensure durable writes
result.put(ProducerConfig.ACKS_CONFIG, "all");
// We can set this to 5 instead of 1 without risking reordering because we are using an idempotent producer
@ -777,17 +773,11 @@ public class KafkaConfigBackingStore extends KafkaTopicBasedBackingStore impleme
Map<String, Object> producerProps = new HashMap<>(baseProducerProps);
producerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyProducerAz(producerProps, "config-log");
// AutoMQ for Kafka inject end
Map<String, Object> consumerProps = new HashMap<>(originals);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
consumerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyConsumerAz(consumerProps, "config-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(consumerProps, config, clusterId);
if (config.exactlyOnceSourceEnabled()) {
ConnectUtils.ensureProperty(
@ -800,9 +790,6 @@ public class KafkaConfigBackingStore extends KafkaTopicBasedBackingStore impleme
Map<String, Object> adminProps = new HashMap<>(originals);
ConnectUtils.addMetricsContextProperties(adminProps, config, clusterId);
adminProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyAdminAz(adminProps, "config-log");
// AutoMQ for Kafka inject end
Map<String, Object> topicSettings = config instanceof DistributedConfig
? ((DistributedConfig) config).configStorageTopicSettings()

View File

@ -30,7 +30,6 @@ import org.apache.kafka.common.errors.UnsupportedVersionException;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.connect.automq.az.AzAwareClientConfigurator;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.runtime.WorkerConfig;
import org.apache.kafka.connect.runtime.distributed.DistributedConfig;
@ -193,18 +192,12 @@ public class KafkaOffsetBackingStore extends KafkaTopicBasedBackingStore impleme
// gets approved and scheduled for release.
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");
producerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyProducerAz(producerProps, "offset-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(producerProps, config, clusterId);
Map<String, Object> consumerProps = new HashMap<>(originals);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
consumerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyConsumerAz(consumerProps, "offset-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(consumerProps, config, clusterId);
if (config.exactlyOnceSourceEnabled()) {
ConnectUtils.ensureProperty(
@ -216,9 +209,6 @@ public class KafkaOffsetBackingStore extends KafkaTopicBasedBackingStore impleme
Map<String, Object> adminProps = new HashMap<>(originals);
adminProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyAdminAz(adminProps, "offset-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(adminProps, config, clusterId);
NewTopic topicDescription = newTopicDescription(topic, config);

View File

@ -30,7 +30,6 @@ import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.common.utils.ThreadUtils;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.connect.automq.az.AzAwareClientConfigurator;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.SchemaBuilder;
@ -184,25 +183,16 @@ public class KafkaStatusBackingStore extends KafkaTopicBasedBackingStore impleme
// gets approved and scheduled for release.
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false"); // disable idempotence since retries is force to 0
producerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyProducerAz(producerProps, "status-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(producerProps, config, clusterId);
Map<String, Object> consumerProps = new HashMap<>(originals);
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
consumerProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyConsumerAz(consumerProps, "status-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(consumerProps, config, clusterId);
Map<String, Object> adminProps = new HashMap<>(originals);
adminProps.put(CommonClientConfigs.CLIENT_ID_CONFIG, clientId);
// AutoMQ for Kafka inject start
AzAwareClientConfigurator.maybeApplyAdminAz(adminProps, "status-log");
// AutoMQ for Kafka inject end
ConnectUtils.addMetricsContextProperties(adminProps, config, clusterId);
Map<String, Object> topicSettings = config instanceof DistributedConfig

View File

@ -1,115 +0,0 @@
package org.apache.kafka.connect.automq;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.connect.automq.az.AzAwareClientConfigurator;
import org.apache.kafka.connect.automq.az.AzMetadataProvider;
import org.apache.kafka.connect.automq.az.AzMetadataProviderHolder;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
class AzAwareClientConfiguratorTest {
@AfterEach
void resetProvider() {
AzMetadataProviderHolder.setProviderForTest(null);
}
@Test
void shouldDecorateProducerClientId() {
AzMetadataProviderHolder.setProviderForTest(new FixedAzProvider("us-east-1a"));
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
AzAwareClientConfigurator.maybeApplyProducerAz(props, "producer-1");
assertEquals("automq_type=producer&automq_role=producer-1&automq_az=us-east-1a&producer-1",
props.get(ProducerConfig.CLIENT_ID_CONFIG));
}
@Test
void shouldPreserveCustomClientIdInAzConfig() {
AzMetadataProviderHolder.setProviderForTest(new FixedAzProvider("us-east-1a"));
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.CLIENT_ID_CONFIG, "custom-id");
AzAwareClientConfigurator.maybeApplyProducerAz(props, "producer-1");
assertEquals("automq_type=producer&automq_role=producer-1&automq_az=us-east-1a&custom-id",
props.get(ProducerConfig.CLIENT_ID_CONFIG));
}
@Test
void shouldAssignRackForConsumers() {
AzMetadataProviderHolder.setProviderForTest(new FixedAzProvider("us-west-2c"));
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-1");
AzAwareClientConfigurator.maybeApplyConsumerAz(props, "consumer-1");
assertEquals("us-west-2c", props.get(ConsumerConfig.CLIENT_RACK_CONFIG));
}
@Test
void shouldDecorateAdminClientId() {
AzMetadataProviderHolder.setProviderForTest(new FixedAzProvider("eu-west-1b"));
Map<String, Object> props = new HashMap<>();
props.put(AdminClientConfig.CLIENT_ID_CONFIG, "admin-1");
AzAwareClientConfigurator.maybeApplyAdminAz(props, "admin-1");
assertEquals("automq_type=admin&automq_role=admin-1&automq_az=eu-west-1b&admin-1",
props.get(AdminClientConfig.CLIENT_ID_CONFIG));
}
@Test
void shouldLeaveClientIdWhenAzUnavailable() {
AzMetadataProviderHolder.setProviderForTest(new AzMetadataProvider() {
@Override
public Optional<String> availabilityZoneId() {
return Optional.empty();
}
});
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
AzAwareClientConfigurator.maybeApplyProducerAz(props, "producer-1");
assertEquals("producer-1", props.get(ProducerConfig.CLIENT_ID_CONFIG));
assertFalse(props.containsKey(ConsumerConfig.CLIENT_RACK_CONFIG));
}
@Test
void shouldEncodeSpecialCharactersInClientId() {
AzMetadataProviderHolder.setProviderForTest(new FixedAzProvider("us-east-1a"));
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.CLIENT_ID_CONFIG, "client-with-spaces & symbols");
AzAwareClientConfigurator.maybeApplyProducerAz(props, "test-role");
assertEquals("automq_type=producer&automq_role=test-role&automq_az=us-east-1a&client-with-spaces & symbols",
props.get(ProducerConfig.CLIENT_ID_CONFIG));
}
private static final class FixedAzProvider implements AzMetadataProvider {
private final String az;
private FixedAzProvider(String az) {
this.az = az;
}
@Override
public Optional<String> availabilityZoneId() {
return Optional.ofNullable(az);
}
}
}

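The deleted tests above pin down the decoration format: when an availability zone is known, the producer/admin client.id is prefixed with automq_type=&lt;type&gt;&automq_role=&lt;role&gt;&automq_az=&lt;az&gt;& followed by the original id, while consumers additionally get client.rack set to the AZ; when no AZ is available the properties are left untouched. A standalone sketch of that decoration logic (the class and method names here are illustrative, not the removed implementation):

import java.util.Map;
import java.util.Optional;

final class AzClientIdDecorator {
    // Prefixes the existing client.id with type/role/AZ metadata, matching the format the tests assert.
    static void decorate(Map<String, Object> props, String type, String role, Optional<String> az) {
        if (az.isEmpty()) {
            return; // leave the client.id untouched when the AZ is unknown
        }
        String original = String.valueOf(props.getOrDefault("client.id", role));
        props.put("client.id",
            "automq_type=" + type + "&automq_role=" + role + "&automq_az=" + az.get() + "&" + original);
    }

    // Consumers also advertise the AZ as client.rack so fetches can prefer same-AZ replicas.
    static void applyConsumerRack(Map<String, Object> props, Optional<String> az) {
        az.ifPresent(zone -> props.put("client.rack", zone));
    }
}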
View File

@ -19,6 +19,7 @@
package kafka.automq;
import kafka.log.stream.s3.telemetry.exporter.ExporterConstants;
import kafka.server.KafkaConfig;
import org.apache.kafka.common.config.ConfigDef;
@ -79,7 +80,7 @@ public class AutoMQConfig {
public static final String S3_WAL_UPLOAD_INTERVAL_MS_CONFIG = "s3.wal.upload.interval.ms";
public static final String S3_WAL_UPLOAD_INTERVAL_MS_DOC = "The interval at which WAL triggers upload to object storage. -1 means only upload by size trigger";
public static final long S3_WAL_UPLOAD_INTERVAL_MS_DEFAULT = 60000L;
public static final long S3_WAL_UPLOAD_INTERVAL_MS_DEFAULT = -1L;
public static final String S3_STREAM_SPLIT_SIZE_CONFIG = "s3.stream.object.split.size";
public static final String S3_STREAM_SPLIT_SIZE_DOC = "The S3 stream object split size threshold when upload delta WAL or compact stream set object.";
@ -250,11 +251,6 @@ public class AutoMQConfig {
public static final String S3_TELEMETRY_OPS_ENABLED_CONFIG = "s3.telemetry.ops.enabled";
public static final String S3_TELEMETRY_OPS_ENABLED_DOC = "[DEPRECATED] use s3.telemetry.metrics.uri instead.";
private static final String TELEMETRY_EXPORTER_TYPE_OTLP = "otlp";
private static final String TELEMETRY_EXPORTER_TYPE_PROMETHEUS = "prometheus";
private static final String TELEMETRY_EXPORTER_TYPE_OPS = "ops";
public static final String URI_DELIMITER = "://?";
// Deprecated config end
public static void define(ConfigDef configDef) {
@ -407,7 +403,7 @@ public class AutoMQConfig {
if (uri == null) {
uri = buildMetrixExporterURIWithOldConfigs(config);
}
if (!uri.contains(TELEMETRY_EXPORTER_TYPE_OPS)) {
if (!uri.contains(ExporterConstants.OPS_TYPE)) {
uri += "," + buildOpsExporterURI();
}
return uri;
@ -424,10 +420,10 @@ public class AutoMQConfig {
for (String exporterType : exporterTypeArray) {
exporterType = exporterType.trim();
switch (exporterType) {
case TELEMETRY_EXPORTER_TYPE_OTLP:
case ExporterConstants.OTLP_TYPE:
exportedUris.add(buildOTLPExporterURI(kafkaConfig));
break;
case TELEMETRY_EXPORTER_TYPE_PROMETHEUS:
case ExporterConstants.PROMETHEUS_TYPE:
exportedUris.add(buildPrometheusExporterURI(kafkaConfig));
break;
default:
@ -445,31 +441,26 @@ public class AutoMQConfig {
}
private static String buildOTLPExporterURI(KafkaConfig kafkaConfig) {
String endpoint = kafkaConfig.getString(S3_TELEMETRY_EXPORTER_OTLP_ENDPOINT_CONFIG);
if (StringUtils.isBlank(endpoint)) {
return "";
}
StringBuilder uriBuilder = new StringBuilder()
.append(TELEMETRY_EXPORTER_TYPE_OTLP)
.append("://?endpoint=").append(endpoint);
String protocol = kafkaConfig.getString(S3_TELEMETRY_EXPORTER_OTLP_PROTOCOL_CONFIG);
if (StringUtils.isNotBlank(protocol)) {
uriBuilder.append("&protocol=").append(protocol);
}
.append(ExporterConstants.OTLP_TYPE)
.append(ExporterConstants.URI_DELIMITER)
.append(ExporterConstants.ENDPOINT).append("=").append(kafkaConfig.getString(S3_TELEMETRY_EXPORTER_OTLP_ENDPOINT_CONFIG))
.append("&")
.append(ExporterConstants.PROTOCOL).append("=").append(kafkaConfig.getString(S3_TELEMETRY_EXPORTER_OTLP_PROTOCOL_CONFIG));
if (kafkaConfig.getBoolean(S3_TELEMETRY_EXPORTER_OTLP_COMPRESSION_ENABLE_CONFIG)) {
uriBuilder.append("&compression=gzip");
uriBuilder.append("&").append(ExporterConstants.COMPRESSION).append("=").append("gzip");
}
return uriBuilder.toString();
}
private static String buildPrometheusExporterURI(KafkaConfig kafkaConfig) {
return TELEMETRY_EXPORTER_TYPE_PROMETHEUS + URI_DELIMITER +
"host" + "=" + kafkaConfig.getString(S3_METRICS_EXPORTER_PROM_HOST_CONFIG) + "&" +
"port" + "=" + kafkaConfig.getInt(S3_METRICS_EXPORTER_PROM_PORT_CONFIG);
return ExporterConstants.PROMETHEUS_TYPE + ExporterConstants.URI_DELIMITER +
ExporterConstants.HOST + "=" + kafkaConfig.getString(S3_METRICS_EXPORTER_PROM_HOST_CONFIG) + "&" +
ExporterConstants.PORT + "=" + kafkaConfig.getInt(S3_METRICS_EXPORTER_PROM_PORT_CONFIG);
}
private static String buildOpsExporterURI() {
return TELEMETRY_EXPORTER_TYPE_OPS + URI_DELIMITER;
return ExporterConstants.OPS_TYPE + ExporterConstants.URI_DELIMITER;
}
private static List<Pair<String, String>> parseBaseLabels(KafkaConfig config) {

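The change above routes the exporter URI constants through ExporterConstants, but the resulting strings keep the same shape, e.g. prometheus://?host=127.0.0.1&port=9090 or otlp://?endpoint=http://collector:4317&protocol=grpc&compression=gzip, joined by commas when several exporters are configured. A small sketch of composing such URIs from plain values (the class name, defaults, and example endpoint are assumptions for illustration):

final class MetricsExporterUriBuilder {
    private static final String URI_DELIMITER = "://?";

    // prometheus://?host=<host>&port=<port>
    static String prometheus(String host, int port) {
        return "prometheus" + URI_DELIMITER + "host=" + host + "&port=" + port;
    }

    // otlp://?endpoint=<endpoint>&protocol=<protocol>[&compression=gzip]
    static String otlp(String endpoint, String protocol, boolean gzipCompression) {
        StringBuilder uri = new StringBuilder("otlp").append(URI_DELIMITER)
            .append("endpoint=").append(endpoint)
            .append("&protocol=").append(protocol);
        if (gzipCompression) {
            uri.append("&compression=gzip");
        }
        return uri.toString();
    }

    public static void main(String[] args) {
        // Prints e.g. "prometheus://?host=127.0.0.1&port=9090,ops://?"
        System.out.println(prometheus("127.0.0.1", 9090) + ",ops" + URI_DELIMITER);
    }
}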
View File

@ -27,10 +27,10 @@ public interface FailedNode {
int id();
static FailedNode from(NodeRuntimeMetadata node) {
return new DefaultFailedNode(node.id(), node.epoch());
return new K8sFailedNode(node.id());
}
static FailedNode from(FailoverContext context) {
return new DefaultFailedNode(context.getNodeId(), context.getNodeEpoch());
return new K8sFailedNode(context.getNodeId());
}
}

View File

@ -181,7 +181,7 @@ public class FailoverControlManager implements AutoCloseable {
node.getNodeId(),
// There are node epochs in both streamControlManager and nodeControlManager, and they are the same in most cases.
// However, in some rare cases, the node epoch in streamControlManager may be updated earlier than the node epoch in nodeControlManager.
// So we use the node epoch in streamControlManager as the source of truth.
// So we use the node epoch in nodeControlManager as the source of truth.
nodeEpochMap.get(node.getNodeId()),
node.getWalConfig(),
node.getTags(),

View File

@ -17,28 +17,39 @@
* limitations under the License.
*/
package org.apache.kafka.connect.automq.az;
package kafka.automq.failover;
import java.util.Map;
import java.util.Optional;
import java.util.Objects;
/**
* Pluggable provider for availability-zone metadata used to tune Kafka client configurations.
*/
public interface AzMetadataProvider {
public final class K8sFailedNode implements FailedNode {
private final int id;
/**
* Configure the provider with the worker properties. Implementations may cache values extracted from the
* configuration map. This method is invoked exactly once during worker bootstrap.
*/
default void configure(Map<String, String> workerProps) {
// no-op
public K8sFailedNode(int id) {
this.id = id;
}
/**
* @return the availability-zone identifier for the current node, if known.
*/
default Optional<String> availabilityZoneId() {
return Optional.empty();
public int id() {
return id;
}
@Override
public boolean equals(Object obj) {
if (obj == this)
return true;
if (obj == null || obj.getClass() != this.getClass())
return false;
var that = (K8sFailedNode) obj;
return this.id == that.id;
}
@Override
public int hashCode() {
return Objects.hash(id);
}
@Override
public String toString() {
return "K8sFailedNode[" +
"id=" + id + ']';
}
}

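K8sFailedNode above is a hand-written value class whose identity is the node id alone (equals, hashCode, and toString all derive from it). On a JDK that supports records, the same semantics fit in a single declaration; shown here only as a sketch of the equivalent behavior, not as a proposed change, and with a local stand-in interface instead of the real FailedNode:

// Local stand-in for the FailedNode contract used above.
interface FailedNodeSketch {
    int id();
}

// The record generates equals, hashCode, and toString over id, matching the class above;
// the record accessor id() also satisfies the interface method of the same name.
record K8sFailedNodeRecord(int id) implements FailedNodeSketch {
}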
View File

@ -25,7 +25,6 @@ import org.apache.kafka.controller.stream.NodeState;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
/**
* NodeRuntimeMetadata is a runtime view of a node's metadata.
@ -40,7 +39,6 @@ public final class NodeRuntimeMetadata {
* @see ClusterControlManager#getNextNodeId()
*/
private static final int MAX_CONTROLLER_ID = 1000 - 1;
private static final long DONT_FAILOVER_AFTER_NEW_EPOCH_MS = TimeUnit.MINUTES.toMillis(1);
private final int id;
private final long epoch;
private final String walConfigs;
@ -62,11 +60,7 @@ public final class NodeRuntimeMetadata {
}
public boolean shouldFailover() {
return isFenced() && hasOpeningStreams
// The node epoch is the start timestamp of node.
// We need to avoid failover just after node restart.
// The node may take some time to recover its data.
&& System.currentTimeMillis() - epoch > DONT_FAILOVER_AFTER_NEW_EPOCH_MS;
return isFenced() && hasOpeningStreams;
}
public boolean isFenced() {

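The hunk above drops the one-minute grace period after a node epoch bump: previously failover required the node to be fenced, to still hold opening streams, and to have started more than a minute ago, so a freshly restarted node is not failed over while it recovers its data. A compact sketch of the guarded predicate as it stood before the change (method and class names are illustrative):

import java.util.concurrent.TimeUnit;

final class FailoverGuardSketch {
    private static final long DONT_FAILOVER_AFTER_NEW_EPOCH_MS = TimeUnit.MINUTES.toMillis(1);

    // epochMs is the node start timestamp; failover is suppressed right after a restart
    // so the node has time to recover its data before being declared failed.
    static boolean shouldFailover(boolean fenced, boolean hasOpeningStreams, long epochMs, long nowMs) {
        return fenced && hasOpeningStreams && nowMs - epochMs > DONT_FAILOVER_AFTER_NEW_EPOCH_MS;
    }
}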
View File

@ -1,191 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.automq.partition.snapshot;
import org.apache.kafka.common.message.AutomqGetPartitionSnapshotResponseData;
import com.automq.stream.s3.ConfirmWAL;
import com.automq.stream.s3.model.StreamRecordBatch;
import com.automq.stream.s3.wal.RecordOffset;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
/**
* Maintains a bounded, in-memory delta of recent WAL appends so snapshot responses can
* piggy-back fresh data instead of forcing clients to replay the physical WAL.
*
* <p><strong>Responsibilities</strong>
* <ul>
* <li>Subscribe to {@link ConfirmWAL} append events and retain the encoded
* {@link StreamRecordBatch} payloads while they are eligible for delta export.</li>
* <li>Track confirm offsets and expose them via {@link #handle(short, AutomqGetPartitionSnapshotResponseData)}.</li>
* <li>Serialize buffered batches into {@code confirmWalDeltaData} for request versions
* &gt;= 2, or signal that callers must replay the WAL otherwise.</li>
* <li>Enforce {@link #MAX_RECORDS_BUFFER_SIZE} so the delta cache remains lightweight.</li>
* </ul>
*
* <p><strong>State machine</strong>
* <ul>
* <li>{@link #STATE_NOT_SYNC}: Buffer content is discarded (e.g. overflow) and only confirm
* offsets are returned until new appends arrive.</li>
* <li>{@link #STATE_SYNCING}: Buffered records are eligible to be drained and turned into a
* delta payload when {@link #handle(short, AutomqGetPartitionSnapshotResponseData)} runs.</li>
* <li>{@link #STATE_CLOSED}: Listener is torn down and ignores subsequent appends.</li>
* </ul>
*
* <p><strong>Concurrency and lifecycle</strong>
* <ul>
* <li>All public methods are synchronized to guard the state machine, queue, and
* {@link #lastConfirmOffset} tracking.</li>
* <li>Buffered batches are reference-counted; ownership transfers to this class until the
* delta is emitted or the buffer is dropped/closed.</li>
* <li>{@link #close()} must be invoked when the owning {@link PartitionSnapshotsManager.Session} ends to release buffers
* and remove the {@link ConfirmWAL.AppendListener}.</li>
* </ul>
*
* <p><strong>Snapshot interaction</strong>
* <ul>
* <li>{@link #handle(short, AutomqGetPartitionSnapshotResponseData)} always updates
* {@code confirmWalEndOffset} and, when possible, attaches {@code confirmWalDeltaData}.</li>
* <li>A {@code null} delta signals the client must replay the WAL, whereas an empty byte array
* indicates no new data but confirms offsets.</li>
* <li>When the aggregated encoded bytes would exceed {@link #MAX_RECORDS_BUFFER_SIZE}, the
* buffer is dropped and state resets to {@link #STATE_NOT_SYNC}.</li>
* </ul>
*/
public class ConfirmWalDataDelta implements ConfirmWAL.AppendListener {
static final int STATE_NOT_SYNC = 0;
static final int STATE_SYNCING = 1;
static final int STATE_CLOSED = 9;
static final int MAX_RECORDS_BUFFER_SIZE = 32 * 1024; // 32KiB
private final ConfirmWAL confirmWAL;
private final ConfirmWAL.ListenerHandle listenerHandle;
final BlockingQueue<RecordExt> records = new LinkedBlockingQueue<>();
final AtomicInteger size = new AtomicInteger(0);
private RecordOffset lastConfirmOffset = null;
int state = STATE_NOT_SYNC;
public ConfirmWalDataDelta(ConfirmWAL confirmWAL) {
this.confirmWAL = confirmWAL;
this.listenerHandle = confirmWAL.addAppendListener(this);
}
public synchronized void close() {
this.state = STATE_CLOSED;
this.listenerHandle.close();
records.forEach(r -> r.record.release());
records.clear();
}
public void handle(short requestVersion,
AutomqGetPartitionSnapshotResponseData resp) {
RecordOffset newConfirmOffset = null;
List<RecordExt> delta = null;
synchronized (this) {
if (state == STATE_NOT_SYNC) {
List<RecordExt> drainedRecords = new ArrayList<>(records.size());
records.drainTo(drainedRecords);
size.addAndGet(-drainedRecords.stream().mapToInt(r -> r.record.encoded().readableBytes()).sum());
if (!drainedRecords.isEmpty()) {
RecordOffset deltaConfirmOffset = drainedRecords.get(drainedRecords.size() - 1).nextOffset();
if (lastConfirmOffset == null || deltaConfirmOffset.compareTo(lastConfirmOffset) > 0) {
newConfirmOffset = deltaConfirmOffset;
state = STATE_SYNCING;
}
drainedRecords.forEach(r -> r.record.release());
}
} else if (state == STATE_SYNCING) {
delta = new ArrayList<>(records.size());
records.drainTo(delta);
size.addAndGet(-delta.stream().mapToInt(r -> r.record.encoded().readableBytes()).sum());
newConfirmOffset = delta.isEmpty() ? lastConfirmOffset : delta.get(delta.size() - 1).nextOffset();
}
if (newConfirmOffset == null) {
newConfirmOffset = confirmWAL.confirmOffset();
}
this.lastConfirmOffset = newConfirmOffset;
}
resp.setConfirmWalEndOffset(newConfirmOffset.bufferAsBytes());
if (delta != null) {
int size = delta.stream().mapToInt(r -> r.record.encoded().readableBytes()).sum();
byte[] data = new byte[size];
ByteBuf buf = Unpooled.wrappedBuffer(data).clear();
delta.forEach(r -> {
buf.writeBytes(r.record.encoded());
r.record.release();
});
if (requestVersion >= 2) {
// The confirmWalDeltaData is only supported in request version >= 2
resp.setConfirmWalDeltaData(data);
}
} else {
if (requestVersion >= 2) {
// - Null means the client needs replay from the physical WAL
// - Empty means there is no delta data.
resp.setConfirmWalDeltaData(null);
}
}
}
@Override
public synchronized void onAppend(StreamRecordBatch record, RecordOffset recordOffset,
RecordOffset nextOffset) {
if (state == STATE_CLOSED) {
return;
}
record.retain();
records.add(new RecordExt(record, recordOffset, nextOffset));
if (size.addAndGet(record.encoded().readableBytes()) > MAX_RECORDS_BUFFER_SIZE) {
// If the buffer is full, drop all records and switch to NOT_SYNC state.
// It's cheaper to replay from the physical WAL instead of transferring the data by network.
// It's cheaper to replay from the physical WAL than to transfer the data over the network.
state = STATE_NOT_SYNC;
records.forEach(r -> r.record.release());
records.clear();
size.set(0);
}
}
record RecordExt(StreamRecordBatch record, RecordOffset recordOffset, RecordOffset nextOffset) {
}
public static List<StreamRecordBatch> decodeDeltaRecords(byte[] data) {
if (data == null) {
return null;
}
List<StreamRecordBatch> records = new ArrayList<>();
ByteBuf buf = Unpooled.wrappedBuffer(data);
while (buf.readableBytes() > 0) {
StreamRecordBatch record = StreamRecordBatch.parse(buf, false);
records.add(record);
}
return records;
}
}

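As the Javadoc of the deleted class describes, the delta cache is a bounded buffer of recent appends: it fills while syncing, and on overflow it drops everything and signals "not in sync" so the reader falls back to replaying the physical WAL. A simplified, self-contained sketch of that state machine; it omits reference counting, confirm offsets, and the real ConfirmWAL listener types, and all names here are illustrative:

import java.util.ArrayDeque;
import java.util.Deque;

final class BoundedDeltaBufferSketch {
    private static final int MAX_BUFFER_BYTES = 32 * 1024;
    private final Deque<byte[]> buffered = new ArrayDeque<>();
    private int size;
    private boolean syncing;

    synchronized void onAppend(byte[] encoded) {
        buffered.add(encoded);
        size += encoded.length;
        if (size > MAX_BUFFER_BYTES) {
            // Cheaper to replay from the WAL than to ship a large delta over the network.
            buffered.clear();
            size = 0;
            syncing = false;
        }
    }

    // Returns the delta to attach to a response, or null if the caller must replay the WAL.
    synchronized byte[] drainDelta() {
        if (!syncing) {
            buffered.clear();
            size = 0;
            syncing = true; // start syncing from the next append onward
            return null;
        }
        int total = buffered.stream().mapToInt(b -> b.length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] b : buffered) {
            System.arraycopy(b, 0, out, pos, b.length);
            pos += b.length;
        }
        buffered.clear();
        size = 0;
        return out;
    }
}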
View File

@ -68,11 +68,13 @@ public class PartitionSnapshotsManager {
private final Map<Integer, Session> sessions = new HashMap<>();
private final List<PartitionWithVersion> snapshotVersions = new CopyOnWriteArrayList<>();
private final Time time;
private final String confirmWalConfig;
private final ConfirmWAL confirmWAL;
public PartitionSnapshotsManager(Time time, AutoMQConfig config, ConfirmWAL confirmWAL,
Supplier<AutoMQVersion> versionGetter) {
this.time = time;
this.confirmWalConfig = config.walConfig();
this.confirmWAL = confirmWAL;
if (config.zoneRouterChannels().isPresent()) {
Threads.COMMON_SCHEDULER.scheduleWithFixedDelay(this::cleanExpiredSessions, 1, 1, TimeUnit.MINUTES);
@ -120,7 +122,7 @@ public class PartitionSnapshotsManager {
newSession = true;
}
}
return session.snapshotsDelta(request, request.data().requestCommit() || newSession);
return session.snapshotsDelta(request.data().version(), request.data().requestCommit() || newSession);
}
private synchronized int nextSessionId() {
@ -133,13 +135,7 @@ public class PartitionSnapshotsManager {
}
private synchronized void cleanExpiredSessions() {
sessions.values().removeIf(s -> {
boolean expired = s.expired();
if (expired) {
s.close();
}
return expired;
});
sessions.values().removeIf(Session::expired);
}
class Session {
@ -156,23 +152,17 @@ public class PartitionSnapshotsManager {
private final List<Partition> removed = new ArrayList<>();
private long lastGetSnapshotsTimestamp = time.milliseconds();
private final Set<CompletableFuture<Void>> inflightCommitCfSet = ConcurrentHashMap.newKeySet();
private final ConfirmWalDataDelta delta;
public Session(int sessionId) {
this.sessionId = sessionId;
this.delta = new ConfirmWalDataDelta(confirmWAL);
}
public synchronized void close() {
delta.close();
}
public synchronized int sessionEpoch() {
return sessionEpoch;
}
public synchronized CompletableFuture<AutomqGetPartitionSnapshotResponse> snapshotsDelta(
AutomqGetPartitionSnapshotRequest request, boolean requestCommit) {
public synchronized CompletableFuture<AutomqGetPartitionSnapshotResponse> snapshotsDelta(short requestVersion,
boolean requestCommit) {
AutomqGetPartitionSnapshotResponseData resp = new AutomqGetPartitionSnapshotResponseData();
sessionEpoch++;
lastGetSnapshotsTimestamp = time.milliseconds();
@ -181,29 +171,23 @@ public class PartitionSnapshotsManager {
long finalSessionEpoch = sessionEpoch;
CompletableFuture<Void> collectPartitionSnapshotsCf;
if (!requestCommit && inflightCommitCfSet.isEmpty()) {
collectPartitionSnapshotsCf = collectPartitionSnapshots(request.data().version(), resp);
collectPartitionSnapshotsCf = collectPartitionSnapshots(resp);
} else {
collectPartitionSnapshotsCf = CompletableFuture.completedFuture(null);
}
boolean newSession = finalSessionEpoch == 1;
return collectPartitionSnapshotsCf
.thenApply(nil -> {
if (request.data().version() > ZERO_ZONE_V0_REQUEST_VERSION) {
if (newSession) {
if (requestVersion > ZERO_ZONE_V0_REQUEST_VERSION) {
if (finalSessionEpoch == 1) {
// return the WAL config in the session first response
// return the WAL config in the session's first response
resp.setConfirmWalConfig(confirmWAL.uri());
resp.setConfirmWalConfig(confirmWalConfig);
}
delta.handle(request.version(), resp);
resp.setConfirmWalEndOffset(confirmWAL.confirmOffset().bufferAsBytes());
}
if (requestCommit) {
// Commit after generating the snapshots.
// Then the snapshot-read partitions could read from snapshot-read cache or block cache.
CompletableFuture<Void> commitCf = newSession ?
// The proxy node's first snapshot-read request needs to commit immediately to ensure the data can be read.
confirmWAL.commit(0, false)
// The proxy node's snapshot-read cache isn't enough to hold the 'uncommitted' data,
// so the proxy node requests a commit to ensure the data can be read from the block cache.
: confirmWAL.commit(1000, false);
CompletableFuture<Void> commitCf = confirmWAL.commit(0, false);
inflightCommitCfSet.add(commitCf);
commitCf.whenComplete((rst, ex) -> inflightCommitCfSet.remove(commitCf));
}
@ -219,8 +203,7 @@ public class PartitionSnapshotsManager {
return time.milliseconds() - lastGetSnapshotsTimestamp > 60000;
}
private CompletableFuture<Void> collectPartitionSnapshots(short funcVersion,
AutomqGetPartitionSnapshotResponseData resp) {
private CompletableFuture<Void> collectPartitionSnapshots(AutomqGetPartitionSnapshotResponseData resp) {
Map<Uuid, List<PartitionSnapshot>> topic2partitions = new HashMap<>();
List<CompletableFuture<Void>> completeCfList = COMPLETE_CF_LIST_LOCAL.get();
completeCfList.clear();
@ -228,7 +211,7 @@ public class PartitionSnapshotsManager {
PartitionSnapshotVersion version = synced.remove(partition);
if (version != null) {
List<PartitionSnapshot> partitionSnapshots = topic2partitions.computeIfAbsent(partition.topicId().get(), topic -> new ArrayList<>());
partitionSnapshots.add(snapshot(funcVersion, partition, version, null, completeCfList));
partitionSnapshots.add(snapshot(partition, version, null, completeCfList));
}
});
removed.clear();
@ -238,7 +221,7 @@ public class PartitionSnapshotsManager {
if (!Objects.equals(p.version, oldVersion)) {
List<PartitionSnapshot> partitionSnapshots = topic2partitions.computeIfAbsent(p.partition.topicId().get(), topic -> new ArrayList<>());
PartitionSnapshotVersion newVersion = p.version.copy();
PartitionSnapshot partitionSnapshot = snapshot(funcVersion, p.partition, oldVersion, newVersion, completeCfList);
PartitionSnapshot partitionSnapshot = snapshot(p.partition, oldVersion, newVersion, completeCfList);
partitionSnapshots.add(partitionSnapshot);
synced.put(p.partition, newVersion);
}
@ -256,8 +239,7 @@ public class PartitionSnapshotsManager {
return retCf;
}
private PartitionSnapshot snapshot(short funcVersion, Partition partition,
PartitionSnapshotVersion oldVersion,
private PartitionSnapshot snapshot(Partition partition, PartitionSnapshotVersion oldVersion,
PartitionSnapshotVersion newVersion, List<CompletableFuture<Void>> completeCfList) {
if (newVersion == null) {
// partition is closed
@ -286,9 +268,7 @@ public class PartitionSnapshotsManager {
if (includeSegments) {
snapshot.setLogMetadata(logMetadata(src.logMeta()));
}
if (funcVersion > ZERO_ZONE_V0_REQUEST_VERSION) {
snapshot.setLastTimestampOffset(timestampOffset(src.lastTimestampOffset()));
}
snapshot.setLastTimestampOffset(timestampOffset(src.lastTimestampOffset()));
return snapshot;
});
}
@ -374,5 +354,4 @@ public class PartitionSnapshotsManager {
static LogEventListener newLogEventListener(PartitionWithVersion version) {
return (segment, event) -> version.version.incrementSegmentsVersion();
}
}

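The session logic above returns the WAL config only in the first response of a session, always publishes a confirm WAL end offset, and (on v2+ requests in one variant) may attach delta data, where null means "replay the WAL" and an empty array means "nothing new". A client-side sketch of interpreting such a response; the DTO below is a local stand-in, not the generated AutomqGetPartitionSnapshotResponseData class:

// Local stand-in for the snapshot response fields discussed above.
final class SnapshotResponseSketch {
    final String confirmWalConfig;     // non-null only on the first response of a session
    final byte[] confirmWalEndOffset;  // always present
    final byte[] confirmWalDeltaData;  // v2+: null => replay WAL, empty => nothing new

    SnapshotResponseSketch(String config, byte[] endOffset, byte[] delta) {
        this.confirmWalConfig = config;
        this.confirmWalEndOffset = endOffset;
        this.confirmWalDeltaData = delta;
    }
}

final class SnapshotClientSketch {
    private String walConfig; // cached from the session's first response

    void onResponse(SnapshotResponseSketch resp) {
        if (resp.confirmWalConfig != null) {
            walConfig = resp.confirmWalConfig; // open (or reopen) the WAL with this config
        }
        if (resp.confirmWalDeltaData == null) {
            // replay from the physical WAL up to confirmWalEndOffset
        } else if (resp.confirmWalDeltaData.length > 0) {
            // apply the piggy-backed record batches directly
        }
        // in all cases, advance the local confirm offset to resp.confirmWalEndOffset
    }
}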
View File

@ -48,10 +48,9 @@ import java.util.UUID;
*/
public abstract class AbstractTypeAdapter<S> implements TypeAdapter<S> {
@SuppressWarnings({"CyclomaticComplexity", "NPathComplexity"})
@Override
public Object convert(Object sourceValue, S sourceSchema, Type targetType, StructConverter<S> structConverter) {
public Object convert(Object sourceValue, S sourceSchema, Type targetType) {
if (sourceValue == null) {
return null;
}
@ -84,11 +83,9 @@ public abstract class AbstractTypeAdapter<S> implements TypeAdapter<S> {
case TIMESTAMP:
return convertTimestamp(sourceValue, sourceSchema, (Types.TimestampType) targetType);
case LIST:
return convertList(sourceValue, sourceSchema, (Types.ListType) targetType, structConverter);
return convertList(sourceValue, sourceSchema, (Types.ListType) targetType);
case MAP:
return convertMap(sourceValue, sourceSchema, (Types.MapType) targetType, structConverter);
case STRUCT:
return structConverter.convert(sourceValue, sourceSchema, targetType);
return convertMap(sourceValue, sourceSchema, (Types.MapType) targetType);
default:
return sourceValue;
}
@ -200,30 +197,16 @@ public abstract class AbstractTypeAdapter<S> implements TypeAdapter<S> {
if (sourceValue instanceof Temporal) return sourceValue;
if (sourceValue instanceof Date) {
Instant instant = ((Date) sourceValue).toInstant();
long micros = DateTimeUtil.microsFromInstant(instant);
return targetType.shouldAdjustToUTC()
? DateTimeUtil.timestamptzFromMicros(micros)
: DateTimeUtil.timestampFromMicros(micros);
return DateTimeUtil.timestamptzFromMicros(DateTimeUtil.microsFromInstant(instant));
}
if (sourceValue instanceof String) {
Instant instant = Instant.parse(sourceValue.toString());
long micros = DateTimeUtil.microsFromInstant(instant);
return targetType.shouldAdjustToUTC()
? DateTimeUtil.timestamptzFromMicros(micros)
: DateTimeUtil.timestampFromMicros(micros);
}
if (sourceValue instanceof Number) {
// Assume the number represents microseconds since epoch
// Subclasses should override to handle milliseconds or other units based on logical type
long micros = ((Number) sourceValue).longValue();
return targetType.shouldAdjustToUTC()
? DateTimeUtil.timestamptzFromMicros(micros)
: DateTimeUtil.timestampFromMicros(micros);
return DateTimeUtil.timestamptzFromMicros(DateTimeUtil.microsFromInstant(instant));
}
throw new IllegalArgumentException("Cannot convert " + sourceValue.getClass().getSimpleName() + " to " + targetType.typeId());
}
protected abstract List<?> convertList(Object sourceValue, S sourceSchema, Types.ListType targetType, StructConverter<S> structConverter);
protected abstract List<?> convertList(Object sourceValue, S sourceSchema, Types.ListType targetType);
protected abstract Map<?, ?> convertMap(Object sourceValue, S sourceSchema, Types.MapType targetType, StructConverter<S> structConverter);
protected abstract Map<?, ?> convertMap(Object sourceValue, S sourceSchema, Types.MapType targetType);
}

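The removed branches above use Iceberg's DateTimeUtil together with the target type's shouldAdjustToUTC() flag to decide between a timestamptz (OffsetDateTime) and a plain timestamp (LocalDateTime) value. A small sketch of that decision for a java.util.Date input, using the same Iceberg utility calls that appear in the removed code (the class name is illustrative):

import org.apache.iceberg.types.Types;
import org.apache.iceberg.util.DateTimeUtil;

import java.time.Instant;
import java.util.Date;

final class TimestampConversionSketch {
    // Converts a java.util.Date to the Iceberg value matching the target timestamp type:
    // OffsetDateTime for timestamptz (adjust-to-UTC), LocalDateTime for plain timestamp.
    static Object convert(Date source, Types.TimestampType targetType) {
        Instant instant = source.toInstant();
        long micros = DateTimeUtil.microsFromInstant(instant);
        return targetType.shouldAdjustToUTC()
            ? DateTimeUtil.timestamptzFromMicros(micros)
            : DateTimeUtil.timestampFromMicros(micros);
    }

    public static void main(String[] args) {
        System.out.println(convert(new Date(0L), Types.TimestampType.withZone()));    // 1970-01-01T00:00Z
        System.out.println(convert(new Date(0L), Types.TimestampType.withoutZone())); // 1970-01-01T00:00
    }
}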
View File

@ -116,7 +116,7 @@ public class AvroValueAdapter extends AbstractTypeAdapter<Schema> {
}
@Override
protected List<?> convertList(Object sourceValue, Schema sourceSchema, Types.ListType targetType, StructConverter<Schema> structConverter) {
protected List<?> convertList(Object sourceValue, Schema sourceSchema, Types.ListType targetType) {
Schema listSchema = sourceSchema;
Schema elementSchema = listSchema.getElementType();
@ -131,14 +131,14 @@ public class AvroValueAdapter extends AbstractTypeAdapter<Schema> {
List<Object> list = new ArrayList<>(sourceList.size());
for (Object element : sourceList) {
Object convert = convert(element, elementSchema, targetType.elementType(), structConverter);
Object convert = convert(element, elementSchema, targetType.elementType());
list.add(convert);
}
return list;
}
@Override
protected Map<?, ?> convertMap(Object sourceValue, Schema sourceSchema, Types.MapType targetType, StructConverter<Schema> structConverter) {
protected Map<?, ?> convertMap(Object sourceValue, Schema sourceSchema, Types.MapType targetType) {
if (sourceValue instanceof GenericData.Array) {
GenericData.Array<?> arrayValue = (GenericData.Array<?>) sourceValue;
Map<Object, Object> recordMap = new HashMap<>(arrayValue.size());
@ -161,8 +161,8 @@ public class AvroValueAdapter extends AbstractTypeAdapter<Schema> {
continue;
}
GenericRecord record = (GenericRecord) element;
Object key = convert(record.get(keyField.pos()), keySchema, keyType, structConverter);
Object value = convert(record.get(valueField.pos()), valueSchema, valueType, structConverter);
Object key = convert(record.get(keyField.pos()), keySchema, keyType);
Object value = convert(record.get(valueField.pos()), valueSchema, valueType);
recordMap.put(key, value);
}
return recordMap;
@ -179,32 +179,10 @@ public class AvroValueAdapter extends AbstractTypeAdapter<Schema> {
for (Map.Entry<?, ?> entry : sourceMap.entrySet()) {
Object rawKey = entry.getKey();
Object key = convert(rawKey, STRING_SCHEMA_INSTANCE, keyType, structConverter);
Object value = convert(entry.getValue(), valueSchema, valueType, structConverter);
Object key = convert(rawKey, STRING_SCHEMA_INSTANCE, keyType);
Object value = convert(entry.getValue(), valueSchema, valueType);
adaptedMap.put(key, value);
}
return adaptedMap;
}
@Override
public Object convert(Object sourceValue, Schema sourceSchema, Type targetType) {
return convert(sourceValue, sourceSchema, targetType, this::convertStruct);
}
protected Object convertStruct(Object sourceValue, Schema sourceSchema, Type targetType) {
org.apache.iceberg.Schema schema = targetType.asStructType().asSchema();
org.apache.iceberg.data.GenericRecord result = org.apache.iceberg.data.GenericRecord.create(schema);
for (Types.NestedField f : schema.columns()) {
// Convert the value to the expected type
GenericRecord record = (GenericRecord) sourceValue;
Schema.Field sourceField = sourceSchema.getField(f.name());
if (sourceField == null) {
throw new IllegalStateException("Missing field '" + f.name()
+ "' in source schema: " + sourceSchema.getFullName());
}
Object fieldValue = convert(record.get(f.name()), sourceField.schema(), f.type());
result.setField(f.name(), fieldValue);
}
return result;
}
}

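AvroValueAdapter above handles two Avro encodings of an Iceberg MAP: a native Avro map (string keys) and an array of {key, value} records. A compact sketch of folding the array form back into a java.util.Map; per-element conversion is left out here (the real code converts keys and values through the type adapter), and the class name is illustrative:

import org.apache.avro.generic.GenericRecord;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class MapAsArraySketch {
    // Avro sometimes encodes a map as an array of records with "key" and "value" fields;
    // this folds that representation back into a java.util.Map without converting elements.
    static Map<Object, Object> fromKeyValueRecords(List<GenericRecord> entries) {
        Map<Object, Object> result = new HashMap<>(entries.size());
        for (GenericRecord entry : entries) {
            result.put(entry.get("key"), entry.get("value"));
        }
        return result;
    }
}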
View File

@ -1,57 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.automq.table.binder;
import org.apache.avro.Schema;
import org.apache.iceberg.types.Type;
/**
* Represents the mapping between an Avro field and its corresponding Iceberg field.
* This class stores the position, key, schema, and type information needed to
* convert field values during record binding.
*/
public class FieldMapping {
private final int avroPosition;
private final String avroKey;
private final Type icebergType;
private final Schema avroSchema;
public FieldMapping(int avroPosition, String avroKey, Type icebergType, Schema avroSchema) {
this.avroPosition = avroPosition;
this.avroKey = avroKey;
this.icebergType = icebergType;
this.avroSchema = avroSchema;
}
public int avroPosition() {
return avroPosition;
}
public String avroKey() {
return avroKey;
}
public Type icebergType() {
return icebergType;
}
public Schema avroSchema() {
return avroSchema;
}
}

View File

@ -22,7 +22,6 @@ package kafka.automq.table.binder;
import kafka.automq.table.metric.FieldMetric;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericRecord;
import org.apache.iceberg.avro.AvroSchemaUtil;
import org.apache.iceberg.data.Record;
@ -30,14 +29,11 @@ import org.apache.iceberg.types.Type;
import org.apache.iceberg.types.Types;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import static org.apache.avro.Schema.Type.ARRAY;
import static org.apache.avro.Schema.Type.NULL;
/**
@ -52,7 +48,7 @@ public class RecordBinder {
private final FieldMapping[] fieldMappings;
// Pre-computed RecordBinders for nested STRUCT fields
private final Map<Schema, RecordBinder> nestedStructBinders;
private final Map<String, RecordBinder> nestedStructBinders;
// Field count statistics for this batch
private final AtomicLong batchFieldCount;
@ -82,9 +78,11 @@ public class RecordBinder {
}
// Initialize field mappings
this.fieldMappings = buildFieldMappings(avroSchema, icebergSchema);
this.fieldMappings = new FieldMapping[icebergSchema.columns().size()];
initializeFieldMappings(avroSchema);
// Pre-compute nested struct binders
this.nestedStructBinders = precomputeBindersMap(typeAdapter);
this.nestedStructBinders = precomputeNestedStructBinders(typeAdapter);
}
public RecordBinder createBinderForNewSchema(org.apache.iceberg.Schema icebergSchema, Schema avroSchema) {
@ -123,12 +121,15 @@ public class RecordBinder {
batchFieldCount.addAndGet(count);
}
private FieldMapping[] buildFieldMappings(Schema avroSchema, org.apache.iceberg.Schema icebergSchema) {
private void initializeFieldMappings(Schema avroSchema) {
Schema recordSchema = avroSchema;
FieldMapping[] mappings = new FieldMapping[icebergSchema.columns().size()];
// Unwrap UNION if it contains only one non-NULL type
recordSchema = resolveUnionElement(recordSchema);
if (recordSchema.getType() == Schema.Type.UNION) {
recordSchema = recordSchema.getTypes().stream()
.filter(s -> s.getType() == Schema.Type.RECORD)
.findFirst()
.orElseThrow(() -> new IllegalArgumentException("UNION schema does not contain a RECORD type: " + avroSchema));
}
for (int icebergPos = 0; icebergPos < icebergSchema.columns().size(); icebergPos++) {
Types.NestedField icebergField = icebergSchema.columns().get(icebergPos);
@ -136,210 +137,86 @@ public class RecordBinder {
Schema.Field avroField = recordSchema.getField(fieldName);
if (avroField != null) {
mappings[icebergPos] = buildFieldMapping(
fieldMappings[icebergPos] = createOptimizedMapping(
avroField.name(),
avroField.pos(),
icebergField.type(),
avroField.schema()
);
} else {
mappings[icebergPos] = null;
fieldMappings[icebergPos] = null;
}
}
return mappings;
}
private FieldMapping buildFieldMapping(String avroFieldName, int avroPosition, Type icebergType, Schema avroType) {
if (Type.TypeID.TIMESTAMP.equals(icebergType.typeId())
|| Type.TypeID.TIME.equals(icebergType.typeId())
|| Type.TypeID.MAP.equals(icebergType.typeId())
|| Type.TypeID.LIST.equals(icebergType.typeId())
|| Type.TypeID.STRUCT.equals(icebergType.typeId())) {
private FieldMapping createOptimizedMapping(String avroFieldName, int avroPosition, Type icebergType, Schema avroType) {
org.apache.iceberg.Schema nestedSchema = null;
String nestedSchemaId = null;
if (icebergType.isStructType()) {
nestedSchema = icebergType.asStructType().asSchema();
nestedSchemaId = icebergType.toString();
}
if (Type.TypeID.MAP.equals(icebergType.typeId()) || Type.TypeID.LIST.equals(icebergType.typeId())) {
avroType = resolveUnionElement(avroType);
}
return new FieldMapping(avroPosition, avroFieldName, icebergType, avroType);
return new FieldMapping(avroPosition, avroFieldName, icebergType, icebergType.typeId(), avroType, nestedSchema, nestedSchemaId);
}
private Schema resolveUnionElement(Schema schema) {
if (schema.getType() != Schema.Type.UNION) {
return schema;
}
// Collect all non-NULL types
List<Schema> nonNullTypes = new ArrayList<>();
for (Schema s : schema.getTypes()) {
if (s.getType() != NULL) {
nonNullTypes.add(s);
Schema resolved = schema;
if (schema.getType() == Schema.Type.UNION) {
resolved = null;
for (Schema unionMember : schema.getTypes()) {
if (unionMember.getType() != NULL) {
resolved = unionMember;
break;
}
}
}
if (nonNullTypes.isEmpty()) {
throw new IllegalArgumentException("UNION schema contains only NULL type: " + schema);
} else if (nonNullTypes.size() == 1) {
// Only unwrap UNION if it contains exactly one non-NULL type (optional union)
return nonNullTypes.get(0);
} else {
// Multiple non-NULL types: non-optional union not supported
throw new UnsupportedOperationException(
"Non-optional UNION with multiple non-NULL types is not supported. " +
"Found " + nonNullTypes.size() + " non-NULL types in UNION: " + schema);
}
return resolved;
}
/**
* Pre-computes RecordBinders for nested STRUCT fields.
*/
private Map<Schema, RecordBinder> precomputeBindersMap(TypeAdapter<Schema> typeAdapter) {
Map<Schema, RecordBinder> binders = new IdentityHashMap<>();
private Map<String, RecordBinder> precomputeNestedStructBinders(TypeAdapter<Schema> typeAdapter) {
Map<String, RecordBinder> binders = new HashMap<>();
for (FieldMapping mapping : fieldMappings) {
if (mapping != null) {
precomputeBindersForType(mapping.icebergType(), mapping.avroSchema(), binders, typeAdapter);
if (mapping != null && mapping.typeId() == Type.TypeID.STRUCT) {
String structId = mapping.nestedSchemaId();
if (!binders.containsKey(structId)) {
RecordBinder nestedBinder = new RecordBinder(
mapping.nestedSchema(),
mapping.avroSchema(),
typeAdapter,
batchFieldCount
);
binders.put(structId, nestedBinder);
}
}
}
return binders;
}
/**
* Recursively precomputes binders for a given Iceberg type and its corresponding Avro schema.
*/
private void precomputeBindersForType(Type icebergType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
if (icebergType.isPrimitiveType()) {
return; // No binders needed for primitive types
}
if (icebergType.isStructType() && !avroSchema.isUnion()) {
createStructBinder(icebergType.asStructType(), avroSchema, binders, typeAdapter);
} else if (icebergType.isStructType() && avroSchema.isUnion()) {
createUnionStructBinders(icebergType.asStructType(), avroSchema, binders, typeAdapter);
} else if (icebergType.isListType()) {
createListBinder(icebergType.asListType(), avroSchema, binders, typeAdapter);
} else if (icebergType.isMapType()) {
createMapBinder(icebergType.asMapType(), avroSchema, binders, typeAdapter);
}
}
/**
* Creates binders for STRUCT types represented as Avro UNIONs.
*/
private void createUnionStructBinders(Types.StructType structType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
org.apache.iceberg.Schema schema = structType.asSchema();
SchemaBuilder.FieldAssembler<Schema> schemaBuilder = SchemaBuilder.record(avroSchema.getName()).fields()
.name("tag").type().intType().noDefault();
int tag = 0;
for (Schema unionMember : avroSchema.getTypes()) {
if (unionMember.getType() != NULL) {
schemaBuilder.name("field" + tag).type(unionMember).noDefault();
tag++;
}
}
RecordBinder structBinder = new RecordBinder(schema, schemaBuilder.endRecord(), typeAdapter, batchFieldCount);
binders.put(avroSchema, structBinder);
}
/**
* Creates a binder for a STRUCT type field.
*/
private void createStructBinder(Types.StructType structType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
org.apache.iceberg.Schema schema = structType.asSchema();
RecordBinder structBinder = new RecordBinder(schema, avroSchema, typeAdapter, batchFieldCount);
binders.put(avroSchema, structBinder);
}
/**
* Creates binders for LIST type elements (if they are STRUCT types).
*/
private void createListBinder(Types.ListType listType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
Type elementType = listType.elementType();
if (elementType.isStructType()) {
Schema elementAvroSchema = avroSchema.getElementType();
createStructBinder(elementType.asStructType(), elementAvroSchema, binders, typeAdapter);
}
}
/**
* Creates binders for MAP type keys and values (if they are STRUCT types).
* Handles two Avro representations: ARRAY of key-value records, or native MAP.
*/
private void createMapBinder(Types.MapType mapType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
Type keyType = mapType.keyType();
Type valueType = mapType.valueType();
if (ARRAY.equals(avroSchema.getType())) {
// Avro represents MAP as ARRAY of records with "key" and "value" fields
createMapAsArrayBinder(keyType, valueType, avroSchema, binders, typeAdapter);
} else {
// Avro represents MAP as native MAP type
createMapAsMapBinder(keyType, valueType, avroSchema, binders, typeAdapter);
}
}
/**
* Handles MAP represented as Avro ARRAY of {key, value} records.
*/
private void createMapAsArrayBinder(Type keyType, Type valueType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
Schema elementSchema = avroSchema.getElementType();
// Process key if it's a STRUCT
if (keyType.isStructType()) {
Schema keyAvroSchema = elementSchema.getField("key").schema();
createStructBinder(keyType.asStructType(), keyAvroSchema, binders, typeAdapter);
}
// Process value if it's a STRUCT
if (valueType.isStructType()) {
Schema valueAvroSchema = elementSchema.getField("value").schema();
createStructBinder(valueType.asStructType(), valueAvroSchema, binders, typeAdapter);
}
}
/**
* Handles MAP represented as Avro native MAP type.
*/
private void createMapAsMapBinder(Type keyType, Type valueType, Schema avroSchema,
Map<Schema, RecordBinder> binders,
TypeAdapter<Schema> typeAdapter) {
// Struct keys in native MAP are not supported by Avro
if (keyType.isStructType()) {
throw new UnsupportedOperationException("Struct keys in MAP types are not supported");
}
// Process value if it's a STRUCT
if (valueType.isStructType()) {
Schema valueAvroSchema = avroSchema.getValueType();
createStructBinder(valueType.asStructType(), valueAvroSchema, binders, typeAdapter);
}
}
private static class AvroRecordView implements Record {
private final GenericRecord avroRecord;
private final org.apache.iceberg.Schema icebergSchema;
private final TypeAdapter<Schema> typeAdapter;
private final Map<String, Integer> fieldNameToPosition;
private final FieldMapping[] fieldMappings;
private final Map<Schema, RecordBinder> nestedStructBinders;
private final Map<String, RecordBinder> nestedStructBinders;
private final RecordBinder parentBinder;
AvroRecordView(GenericRecord avroRecord,
org.apache.iceberg.Schema icebergSchema,
TypeAdapter<Schema> typeAdapter,
Map<String, Integer> fieldNameToPosition,
FieldMapping[] fieldMappings,
Map<Schema, RecordBinder> nestedStructBinders,
RecordBinder parentBinder) {
org.apache.iceberg.Schema icebergSchema,
TypeAdapter<Schema> typeAdapter,
Map<String, Integer> fieldNameToPosition,
FieldMapping[] fieldMappings,
Map<String, RecordBinder> nestedStructBinders,
RecordBinder parentBinder) {
this.avroRecord = avroRecord;
this.icebergSchema = icebergSchema;
this.typeAdapter = typeAdapter;
@ -362,11 +239,25 @@ public class RecordBinder {
if (mapping == null) {
return null;
}
Object avroValue = avroRecord.get(mapping.avroPosition());
if (avroValue == null) {
return null;
}
Object result = convert(avroValue, mapping.avroSchema(), mapping.icebergType());
// Handle STRUCT type - delegate to nested binder
if (mapping.typeId() == Type.TypeID.STRUCT) {
String structId = mapping.nestedSchemaId();
RecordBinder nestedBinder = nestedStructBinders.get(structId);
if (nestedBinder == null) {
throw new IllegalStateException("Nested binder not found for struct: " + structId);
}
parentBinder.addFieldCount(1);
return nestedBinder.bind((GenericRecord) avroValue);
}
// Convert non-STRUCT types
Object result = typeAdapter.convert(avroValue, mapping.avroSchema(), mapping.icebergType());
// Calculate and accumulate field count
long fieldCount = calculateFieldCount(result, mapping.icebergType());
@ -375,17 +266,6 @@ public class RecordBinder {
return result;
}
public Object convert(Object sourceValue, Schema sourceSchema, Type targetType) {
if (targetType.typeId() == Type.TypeID.STRUCT) {
RecordBinder binder = nestedStructBinders.get(sourceSchema);
if (binder == null) {
throw new IllegalStateException("Missing nested binder for schema: " + sourceSchema);
}
return binder.bind((GenericRecord) sourceValue);
}
return typeAdapter.convert(sourceValue, (Schema) sourceSchema, targetType, this::convert);
}
/**
* Calculates the field count for a converted value based on its size.
* Large fields are counted multiple times based on the size threshold.
@ -475,20 +355,66 @@ public class RecordBinder {
public void setField(String name, Object value) {
throw new UnsupportedOperationException("Read-only");
}
@Override
public Record copy() {
throw new UnsupportedOperationException("Read-only");
}
@Override
public Record copy(Map<String, Object> overwriteValues) {
throw new UnsupportedOperationException("Read-only");
}
@Override
public <T> void set(int pos, T value) {
throw new UnsupportedOperationException("Read-only");
}
}
// Field mapping structure
private static class FieldMapping {
private final int avroPosition;
private final String avroKey;
private final Type icebergType;
private final Type.TypeID typeId;
private final Schema avroSchema;
private final org.apache.iceberg.Schema nestedSchema;
private final String nestedSchemaId;
FieldMapping(int avroPosition, String avroKey, Type icebergType, Type.TypeID typeId, Schema avroSchema, org.apache.iceberg.Schema nestedSchema, String nestedSchemaId) {
this.avroPosition = avroPosition;
this.avroKey = avroKey;
this.icebergType = icebergType;
this.typeId = typeId;
this.avroSchema = avroSchema;
this.nestedSchema = nestedSchema;
this.nestedSchemaId = nestedSchemaId;
}
public int avroPosition() {
return avroPosition;
}
public String avroKey() {
return avroKey;
}
public Type icebergType() {
return icebergType;
}
public Type.TypeID typeId() {
return typeId;
}
public Schema avroSchema() {
return avroSchema;
}
public org.apache.iceberg.Schema nestedSchema() {
return nestedSchema;
}
public String nestedSchemaId() {
return nestedSchemaId;
}
}
}

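Both versions of resolveUnionElement above unwrap an optional Avro UNION to its single non-NULL branch; the stricter variant also rejects empty unions and non-optional unions with several non-NULL branches. A standalone sketch of that stricter resolution (class name illustrative):

import org.apache.avro.Schema;

import java.util.ArrayList;
import java.util.List;

final class UnionResolutionSketch {
    // Unwraps ["null", T] to T; rejects empty unions and non-optional unions with several branches.
    static Schema resolveUnionElement(Schema schema) {
        if (schema.getType() != Schema.Type.UNION) {
            return schema;
        }
        List<Schema> nonNull = new ArrayList<>();
        for (Schema branch : schema.getTypes()) {
            if (branch.getType() != Schema.Type.NULL) {
                nonNull.add(branch);
            }
        }
        if (nonNull.isEmpty()) {
            throw new IllegalArgumentException("UNION contains only NULL: " + schema);
        }
        if (nonNull.size() > 1) {
            throw new UnsupportedOperationException("Non-optional UNION is not supported: " + schema);
        }
        return nonNull.get(0);
    }
}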
View File

@ -37,14 +37,4 @@ public interface TypeAdapter<S> {
*/
Object convert(Object sourceValue, S sourceSchema, Type targetType);
/**
* Converts a source value to the target Iceberg type with support for recursive struct conversion.
*
* @param sourceValue The source value
* @param sourceSchema The source schema
* @param targetType The target Iceberg type
* @param structConverter A callback for converting nested STRUCT types
* @return The converted value
*/
Object convert(Object sourceValue, S sourceSchema, Type targetType, StructConverter<S> structConverter);
}

View File

@ -175,7 +175,7 @@ public class TableCoordinator implements Closeable {
commitStatusMachine.nextRoundCommit();
break;
case REQUEST_COMMIT:
commitStatusMachine.tryMoveToCommittedStatus();
commitStatusMachine.tryMoveToCommitedStatus();
break;
default:
LOGGER.error("[TABLE_COORDINATOR_UNKNOWN_STATUS],{}", commitStatusMachine.status);
@ -325,7 +325,7 @@ public class TableCoordinator implements Closeable {
channel.send(topic, new Event(time.milliseconds(), EventType.COMMIT_REQUEST, commitRequest));
}
public void tryMoveToCommittedStatus() throws Exception {
public void tryMoveToCommitedStatus() throws Exception {
for (; ; ) {
boolean awaitCommitTimeout = (time.milliseconds() - requestCommitTimestamp) > commitTimeout;
if (!awaitCommitTimeout) {
@ -389,14 +389,11 @@ public class TableCoordinator implements Closeable {
delta.commit();
}
try {
LogConfig currentLogConfig = config.get();
if (currentLogConfig.tableTopicExpireSnapshotEnabled) {
transaction.expireSnapshots()
.expireOlderThan(System.currentTimeMillis() - TimeUnit.HOURS.toMillis(currentLogConfig.tableTopicExpireSnapshotOlderThanHours))
.retainLast(currentLogConfig.tableTopicExpireSnapshotRetainLast)
.executeDeleteWith(EXPIRE_SNAPSHOT_EXECUTOR)
.commit();
}
transaction.expireSnapshots()
.expireOlderThan(System.currentTimeMillis() - TimeUnit.HOURS.toMillis(1))
.retainLast(1)
.executeDeleteWith(EXPIRE_SNAPSHOT_EXECUTOR)
.commit();
} catch (Exception exception) {
// skip expire snapshot failure
LOGGER.error("[EXPIRE_SNAPSHOT_FAIL],{}", getTable().name(), exception);

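One side of the hunk above gates snapshot expiration behind table-topic config (an enable flag, an age in hours, and a retain count) instead of hard-coding "older than one hour, retain one". A sketch of the configurable form against Iceberg's Transaction API; the parameter names follow the config fields in the diff, while the surrounding wiring is assumed:

import org.apache.iceberg.Transaction;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class SnapshotExpirationSketch {
    // Expires old snapshots only when enabled, using the configured age and retention count.
    static void maybeExpire(Transaction transaction, ExecutorService executor,
                            boolean expireSnapshotEnabled, int olderThanHours, int retainLast) {
        if (!expireSnapshotEnabled) {
            return;
        }
        transaction.expireSnapshots()
            .expireOlderThan(System.currentTimeMillis() - TimeUnit.HOURS.toMillis(olderThanHours))
            .retainLast(retainLast)
            .executeDeleteWith(executor)
            .commit();
    }
}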
View File

@ -127,8 +127,8 @@ public class ProtoElementSchemaConvert implements ProtoElementConvert {
MessageDefinition.Builder mapMessage = MessageDefinition.newBuilder(mapEntryName);
mapMessage.setMapEntry(true);
mapMessage.addField(null, resolveFieldTypeName(keyType), ProtoConstants.KEY_FIELD, 1, null, null, null);
mapMessage.addField(null, resolveFieldTypeName(valueType), ProtoConstants.VALUE_FIELD, 2, null, null, null);
mapMessage.addField(null, keyType.getSimpleName(), ProtoConstants.KEY_FIELD, 1, null, null, null);
mapMessage.addField(null, valueType.getSimpleName(), ProtoConstants.VALUE_FIELD, 2, null, null, null);
message.addMessageDefinition(mapMessage.build());
message.addField("repeated", mapEntryName, field.getName(), field.getTag(),
@ -180,8 +180,4 @@ public class ProtoElementSchemaConvert implements ProtoElementConvert {
fieldName.substring(1) +
ProtoConstants.MAP_ENTRY_SUFFIX;
}
private static String resolveFieldTypeName(ProtoType type) {
return type.toString();
}
}

View File

@ -27,7 +27,6 @@ import org.apache.avro.SchemaBuilder;
import org.apache.avro.SchemaNormalization;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.util.internal.Accessor;
import java.util.ArrayList;
import java.util.List;
@ -146,9 +145,7 @@ public final class RecordAssembler {
List<Schema.Field> finalFields = new ArrayList<>(baseRecord.getSchema().getFields().size() + 3);
Schema baseSchema = baseRecord.getSchema();
for (Schema.Field field : baseSchema.getFields()) {
// Accessor keeps the original Schema instance (preserving logical types) while skipping default-value revalidation.
Schema.Field f = Accessor.createField(field.name(), field.schema(), field.doc(), Accessor.defaultValue(field), false, field.order());
finalFields.add(f);
finalFields.add(new Schema.Field(field, field.schema()));
}
int baseFieldCount = baseSchema.getFields().size();

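The hunk above swaps how RecordAssembler copies fields from the base record's schema: one side uses Avro's internal Accessor so the original field Schema instance (and its logical types) is kept without revalidating defaults, the other uses the public Schema.Field copy constructor. A sketch of extending a record schema with an extra field using only public Avro API; the appended field name and class name are illustrative:

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

import java.util.ArrayList;
import java.util.List;

final class SchemaExtensionSketch {
    // Copies the base record's fields via the public copy constructor and appends a new one.
    static Schema extend(Schema baseSchema) {
        List<Schema.Field> fields = new ArrayList<>(baseSchema.getFields().size() + 1);
        for (Schema.Field field : baseSchema.getFields()) {
            fields.add(new Schema.Field(field, field.schema())); // keeps name, doc, order, default
        }
        fields.add(new Schema.Field("_kafka_offset", Schema.create(Schema.Type.LONG), "record offset", null));
        return Schema.createRecord(baseSchema.getName() + "_assembled", baseSchema.getDoc(),
            baseSchema.getNamespace(), false, fields);
    }

    public static void main(String[] args) {
        Schema base = SchemaBuilder.record("Base").fields().requiredString("id").endRecord();
        System.out.println(extend(base));
    }
}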
View File

@ -1,77 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.automq.table.process.convert;
import com.google.protobuf.Descriptors;
import org.apache.avro.Schema;
import org.apache.avro.protobuf.ProtobufData;
import org.apache.iceberg.avro.CodecSetup;
import java.util.Arrays;
/**
* ProtobufData extension that annotates protobuf map fields with Iceberg's LogicalMap logical type so that
* downstream Avro{@literal >}Iceberg conversion keeps them as MAP instead of generic {@literal ARRAY<record<key,value>>}.
*/
public class LogicalMapProtobufData extends ProtobufData {
private static final LogicalMapProtobufData INSTANCE = new LogicalMapProtobufData();
private static final Schema NULL = Schema.create(Schema.Type.NULL);
public static LogicalMapProtobufData get() {
return INSTANCE;
}
@Override
public Schema getSchema(Descriptors.FieldDescriptor f) {
Schema schema = super.getSchema(f);
if (f.isMapField()) {
Schema nonNull = resolveNonNull(schema);
// protobuf maps are materialized as ARRAY<entry{key,value}> in Avro
if (nonNull != null && nonNull.getType() == Schema.Type.ARRAY) {
// set logicalType property; LogicalTypes is registered in CodecSetup
CodecSetup.getLogicalMap().addToSchema(nonNull);
}
} else if (f.isOptional() && !f.isRepeated() && f.getContainingOneof() == null
&& schema.getType() != Schema.Type.UNION) {
// Proto3 optional scalars/messages: wrap as union(type, null) so the protobuf default (typically non-null)
// remains valid (Avro default must match the first branch).
schema = Schema.createUnion(Arrays.asList(schema, NULL));
} else if (f.getContainingOneof() != null && !f.isRepeated() && schema.getType() != Schema.Type.UNION) {
// oneof fields: wrap as union(type, null) so that non-set fields can be represented as null
schema = Schema.createUnion(Arrays.asList(schema, NULL));
}
return schema;
}
private Schema resolveNonNull(Schema schema) {
if (schema == null) {
return null;
}
if (schema.getType() == Schema.Type.UNION) {
for (Schema member : schema.getTypes()) {
if (member.getType() != Schema.Type.NULL) {
return member;
}
}
return null;
}
return schema;
}
}

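The deleted class above adjusts the Avro schemas ProtobufData produces: map fields get a logical-map annotation so they stay MAPs downstream, while proto3 optional and oneof fields are wrapped in a union with null so unset values remain representable, keeping the concrete type first so the protobuf default stays a valid Avro default. A sketch of the nullable-wrapping part alone, using plain Avro API independent of ProtobufData (class name illustrative):

import org.apache.avro.Schema;

import java.util.Arrays;

final class NullableWrappingSketch {
    private static final Schema NULL = Schema.create(Schema.Type.NULL);

    // Wraps a non-union schema as union(type, null); the type stays first so the
    // field's existing (non-null) default remains a valid Avro default.
    static Schema nullable(Schema schema) {
        if (schema.getType() == Schema.Type.UNION) {
            return schema;
        }
        return Schema.createUnion(Arrays.asList(schema, NULL));
    }
}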
View File

@ -34,22 +34,16 @@ import org.apache.avro.protobuf.ProtoConversions;
import org.apache.avro.protobuf.ProtobufData;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
public class ProtoToAvroConverter {
private static final ProtobufData DATA = initProtobufData();
private static ProtobufData initProtobufData() {
ProtobufData protobufData = LogicalMapProtobufData.get();
protobufData.addLogicalTypeConversion(new ProtoConversions.TimestampMicrosConversion());
return protobufData;
}
public static GenericRecord convert(Message protoMessage, Schema schema) {
try {
Schema nonNull = resolveNonNullSchema(schema);
return convertRecord(protoMessage, nonNull, DATA);
ProtobufData protobufData = ProtobufData.get();
protobufData.addLogicalTypeConversion(new ProtoConversions.TimestampMicrosConversion());
return convertRecord(protoMessage, schema, protobufData);
} catch (Exception e) {
throw new ConverterException("Proto to Avro conversion failed", e);
}
@ -57,52 +51,56 @@ public class ProtoToAvroConverter {
private static Object convert(Message protoMessage, Schema schema, ProtobufData protobufData) {
Conversion<?> conversion = getConversion(protoMessage.getDescriptorForType(), protobufData);
if (conversion instanceof ProtoConversions.TimestampMicrosConversion) {
ProtoConversions.TimestampMicrosConversion timestampConversion = (ProtoConversions.TimestampMicrosConversion) conversion;
Timestamp.Builder builder = Timestamp.newBuilder();
Timestamp.getDescriptor().getFields().forEach(field -> {
Descriptors.FieldDescriptor protoField = protoMessage.getDescriptorForType().findFieldByName(field.getName());
if (protoField != null && protoMessage.hasField(protoField)) {
builder.setField(field, protoMessage.getField(protoField));
}
});
return timestampConversion.toLong(builder.build(), schema, null);
if (conversion != null) {
if (conversion instanceof ProtoConversions.TimestampMicrosConversion) {
ProtoConversions.TimestampMicrosConversion timestampConversion = (ProtoConversions.TimestampMicrosConversion) conversion;
Timestamp.Builder builder = Timestamp.newBuilder();
Timestamp.getDescriptor().getFields().forEach(field -> {
String fieldName = field.getName();
Descriptors.FieldDescriptor protoField = protoMessage.getDescriptorForType()
.findFieldByName(fieldName);
if (protoField != null) {
Object value = protoMessage.getField(protoField);
if (value != null) {
builder.setField(field, value);
}
}
});
Timestamp timestamp = builder.build();
return timestampConversion.toLong(timestamp, schema, null);
}
}
Schema nonNull = resolveNonNullSchema(schema);
if (nonNull.getType() == Schema.Type.RECORD) {
return convertRecord(protoMessage, nonNull, protobufData);
if (schema.getType() == Schema.Type.RECORD) {
return convertRecord(protoMessage, schema, protobufData);
} else if (schema.getType() == Schema.Type.UNION) {
Schema dataSchema = protobufData.getSchema(protoMessage.getDescriptorForType());
return convertRecord(protoMessage, dataSchema, protobufData);
} else {
return null;
}
return null;
}
private static Conversion<?> getConversion(Descriptors.Descriptor descriptor, ProtobufData protobufData) {
String namespace = protobufData.getNamespace(descriptor.getFile(), descriptor.getContainingType());
String name = descriptor.getName();
if ("com.google.protobuf".equals(namespace) && "Timestamp".equals(name)) {
return new ProtoConversions.TimestampMicrosConversion();
if (namespace.equals("com.google.protobuf")) {
if (name.equals("Timestamp")) {
return new ProtoConversions.TimestampMicrosConversion();
}
}
return null;
}
private static GenericRecord convertRecord(Message protoMessage, Schema recordSchema, ProtobufData protobufData) {
GenericRecord record = new GenericData.Record(recordSchema);
Descriptors.Descriptor descriptor = protoMessage.getDescriptorForType();
for (Schema.Field field : recordSchema.getFields()) {
private static GenericRecord convertRecord(Message protoMessage, Schema schema, ProtobufData protobufData) {
GenericRecord record = new GenericData.Record(schema);
for (Schema.Field field : schema.getFields()) {
String fieldName = field.name();
Descriptors.FieldDescriptor protoField = descriptor.findFieldByName(fieldName);
if (protoField == null) {
continue;
}
Descriptors.FieldDescriptor protoField = protoMessage.getDescriptorForType()
.findFieldByName(fieldName);
boolean hasPresence = protoField.hasPresence() || protoField.getContainingOneof() != null;
if (!protoField.isRepeated() && hasPresence && !protoMessage.hasField(protoField)) {
if (allowsNull(field.schema())) {
record.put(fieldName, null);
}
if (protoField == null)
continue;
}
Object value = protoMessage.getField(protoField);
Object convertedValue = convertValue(value, protoField, field.schema(), protobufData);
@ -113,23 +111,22 @@ public class ProtoToAvroConverter {
private static Object convertValue(Object value, Descriptors.FieldDescriptor fieldDesc, Schema avroSchema,
ProtobufData protobufData) {
if (value == null) {
if (value == null)
return null;
}
Schema nonNullSchema = resolveNonNullSchema(avroSchema);
// process repeated fields
if (fieldDesc.isRepeated() && value instanceof List<?>) {
List<?> protoList = (List<?>) value;
GenericData.Array<Object> avroArray = new GenericData.Array<>(protoList.size(), nonNullSchema);
Schema elementSchema = nonNullSchema.getElementType();
List<Object> avroList = new ArrayList<>();
Schema elementSchema = avroSchema.getElementType();
for (Object item : protoList) {
avroArray.add(convertSingleValue(item, elementSchema, protobufData));
avroList.add(convertSingleValue(item, elementSchema, protobufData));
}
return avroArray;
return avroList;
}
return convertSingleValue(value, nonNullSchema, protobufData);
return convertSingleValue(value, avroSchema, protobufData);
}
private static Object convertSingleValue(Object value, Schema avroSchema, ProtobufData protobufData) {
@ -138,59 +135,41 @@ public class ProtoToAvroConverter {
} else if (value instanceof ByteString) {
return ((ByteString) value).asReadOnlyByteBuffer();
} else if (value instanceof Enum) {
return value.toString();
return value.toString(); // protobuf Enum is represented as string
} else if (value instanceof List) {
throw new ConverterException("Unexpected list type found; repeated fields should have been handled in convertValue");
}
// primitive types
return convertPrimitive(value, avroSchema);
}
private static Object convertPrimitive(Object value, Schema schema) {
Schema.Type type = schema.getType();
switch (type) {
case INT:
switch (schema.getType()) {
case INT: {
return ((Number) value).intValue();
case LONG:
}
case LONG: {
return ((Number) value).longValue();
case FLOAT:
}
case FLOAT: {
return ((Number) value).floatValue();
case DOUBLE:
}
case DOUBLE: {
return ((Number) value).doubleValue();
case BOOLEAN:
}
case BOOLEAN: {
return (Boolean) value;
case BYTES:
}
case BYTES: {
if (value instanceof byte[]) {
return ByteBuffer.wrap((byte[]) value);
}
return value;
default:
}
default: {
return value;
}
}
private static Schema resolveNonNullSchema(Schema schema) {
if (schema.getType() == Schema.Type.UNION) {
for (Schema type : schema.getTypes()) {
if (type.getType() != Schema.Type.NULL) {
return type;
}
}
}
return schema;
}
private static boolean allowsNull(Schema schema) {
if (schema.getType() == Schema.Type.NULL) {
return true;
}
if (schema.getType() == Schema.Type.UNION) {
for (Schema type : schema.getTypes()) {
if (type.getType() == Schema.Type.NULL) {
return true;
}
}
}
return false;
}
}
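For intuition on the Timestamp handling in this converter: ProtoConversions.TimestampMicrosConversion maps a google.protobuf.Timestamp onto an Avro long carrying the timestamp-micros logical type. A small standalone sketch (assuming only avro-protobuf and protobuf-java on the classpath) of what that conversion should amount to:

```java
import com.google.protobuf.Timestamp;
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;
import org.apache.avro.protobuf.ProtoConversions;

public class TimestampMicrosSketch {
    public static void main(String[] args) {
        // 2021-01-01T00:00:00.123456Z expressed as a protobuf Timestamp.
        Timestamp ts = Timestamp.newBuilder()
            .setSeconds(1609459200L)
            .setNanos(123_456_000)
            .build();

        // Avro long carrying the timestamp-micros logical type.
        Schema micros = LogicalTypes.timestampMicros().addToSchema(Schema.create(Schema.Type.LONG));

        long viaConversion = new ProtoConversions.TimestampMicrosConversion()
            .toLong(ts, micros, LogicalTypes.timestampMicros());

        // Expected to match seconds * 1_000_000 + nanos / 1_000.
        long manual = ts.getSeconds() * 1_000_000L + ts.getNanos() / 1_000L;

        System.out.println(viaConversion + " == " + manual);
    }
}
```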

View File

@ -78,7 +78,7 @@ public class ProtobufRegistryConverter implements Converter {
Message protoMessage = deserializer.deserialize(topic, null, buffer);
Schema schema = avroSchemaCache.getIfPresent(schemaId);
if (schema == null) {
ProtobufData protobufData = LogicalMapProtobufData.get();
ProtobufData protobufData = ProtobufData.get();
protobufData.addLogicalTypeConversion(new ProtoConversions.TimestampMicrosConversion());
schema = protobufData.getSchema(protoMessage.getDescriptorForType());
avroSchemaCache.put(schemaId, schema);

View File

@ -152,7 +152,7 @@ public class IcebergTableManager {
*/
@VisibleForTesting
protected synchronized void applySchemaChange(Table table, List<SchemaChange> changes) {
LOGGER.info("Applying schema changes to table {}, changes {}", tableId, changes.stream().map(c -> c.getType() + ":" + c.getColumnFullName()).toList());
LOGGER.info("Applying schema changes to table {}, changes {}", tableId, changes);
Tasks.range(1)
.retry(2)
.run(notUsed -> applyChanges(table, changes));
@ -221,18 +221,16 @@ public class IcebergTableManager {
// if field doesn't exist in current schema and it's not a struct, mark it as optional (soft removal)
if (currentField == null && !tableField.isOptional()) {
changes.add(new SchemaChange(SchemaChange.ChangeType.MAKE_OPTIONAL, fieldName,
null, parentName));
tableField.type().asPrimitiveType(), parentName));
return;
}
// if it is a nested field, recursively process subfields
if (tableField.type().isStructType()) {
collectRemovedStructFields(tableField.type().asStructType().fields(), fullFieldName, currentSchema, changes);
} else if (isStructList(tableField.type())) {
collectRemovedStructFields(tableField.type().asListType().elementType().asStructType().fields(),
fullFieldName + ".element", currentSchema, changes);
} else if (isStructMap(tableField.type())) {
collectRemovedStructFields(tableField.type().asMapType().valueType().asStructType().fields(),
fullFieldName + ".value", currentSchema, changes);
List<Types.NestedField> tableSubFields = tableField.type().asStructType().fields();
for (Types.NestedField tableSubField : tableSubFields) {
collectRemovedField(tableSubField, fullFieldName, currentSchema, changes);
}
}
}
@ -243,59 +241,38 @@ public class IcebergTableManager {
Types.NestedField tableField = tableSchema.findField(fullFieldName);
if (tableField == null) {
changes.add(new SchemaChange(SchemaChange.ChangeType.ADD_COLUMN, fieldName,
currentField.type(), parentName));
return;
} else {
Type currentType = currentField.type();
Type tableType = tableField.type();
if (currentType.isStructType() && tableType.isStructType()) {
collectStructFieldChanges(currentType.asStructType().fields(), fullFieldName, tableSchema, changes);
collectOptionalFieldChanges(currentField, parentName, changes, tableField, fieldName);
} else if (isStructList(currentType) && isStructList(tableType)) {
collectStructFieldChanges(currentType.asListType().elementType().asStructType().fields(),
fullFieldName + ".element", tableSchema, changes);
} else if (isStructMap(currentType) && isStructMap(tableType)) {
collectStructFieldChanges(currentType.asMapType().valueType().asStructType().fields(),
fullFieldName + ".value", tableSchema, changes);
} else if (!currentType.isStructType() && !tableType.isStructType()) {
collectOptionalFieldChanges(currentField, parentName, changes, tableField, fieldName);
// if it is a nested field, recursively process subfields
if (currentField.type().isStructType()) {
List<Types.NestedField> currentSubFields = currentField.type().asStructType().fields();
if (!tableType.equals(currentType) && canPromoteType(tableType, currentType)) {
changes.add(new SchemaChange(SchemaChange.ChangeType.PROMOTE_TYPE, fieldName, currentType, parentName));
for (Types.NestedField currentSubField : currentSubFields) {
collectFieldChanges(currentSubField, fullFieldName, tableSchema, changes);
}
} else {
changes.add(new SchemaChange(SchemaChange.ChangeType.ADD_COLUMN, fieldName, currentField.type(), parentName));
}
} else {
// if it is a nested field, recursively process subfields
if (currentField.type().isStructType() && tableField.type().isStructType()) {
List<Types.NestedField> currentSubFields = currentField.type().asStructType().fields();
for (Types.NestedField currentSubField : currentSubFields) {
collectFieldChanges(currentSubField, fullFieldName, tableSchema, changes);
}
} else if (!currentField.type().isStructType() && !tableField.type().isStructType()) {
// process optional fields
if (!tableField.isOptional() && currentField.isOptional()) {
changes.add(new SchemaChange(SchemaChange.ChangeType.MAKE_OPTIONAL, fieldName, null, parentName));
}
// promote type if needed
if (!tableField.type().equals(currentField.type()) && canPromoteType(tableField.type(), currentField.type())) {
changes.add(new SchemaChange(SchemaChange.ChangeType.PROMOTE_TYPE, fieldName, currentField.type(), parentName));
}
}
}
}
private static void collectOptionalFieldChanges(Types.NestedField currentField, String parentName, List<SchemaChange> changes, Types.NestedField tableField, String fieldName) {
if (!tableField.isOptional() && currentField.isOptional()) {
changes.add(new SchemaChange(SchemaChange.ChangeType.MAKE_OPTIONAL, fieldName, null, parentName));
}
}
private void collectStructFieldChanges(List<Types.NestedField> currentSubFields, String parentFullName,
Schema tableSchema, List<SchemaChange> changes) {
for (Types.NestedField currentSubField : currentSubFields) {
collectFieldChanges(currentSubField, parentFullName, tableSchema, changes);
}
}
private void collectRemovedStructFields(List<Types.NestedField> tableSubFields, String parentFullName,
Schema currentSchema, List<SchemaChange> changes) {
for (Types.NestedField tableSubField : tableSubFields) {
collectRemovedField(tableSubField, parentFullName, currentSchema, changes);
}
}
private boolean isStructList(Type type) {
return type.typeId() == Type.TypeID.LIST && type.asListType().elementType().isStructType();
}
private boolean isStructMap(Type type) {
return type.typeId() == Type.TypeID.MAP && type.asMapType().valueType().isStructType();
}
private boolean canPromoteType(Type oldType, Type newType) {
if (oldType.typeId() == Type.TypeID.INTEGER && newType.typeId() == Type.TypeID.LONG) {
return true;

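For readers less familiar with the Iceberg side, the collected SchemaChange entries boil down to operations Iceberg exposes on its UpdateSchema API. Roughly what applying such changes looks like against the public API; this is a sketch, not this manager's applyChanges implementation, and the column names are made up:

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.UpdateSchema;
import org.apache.iceberg.types.Types;

public class SchemaChangeSketch {
    // Roughly what ADD_COLUMN / MAKE_OPTIONAL / PROMOTE_TYPE translate to.
    static void apply(Table table) {
        UpdateSchema update = table.updateSchema();

        // ADD_COLUMN: new optional column, possibly nested under a parent struct.
        update.addColumn("payload", "new_field", Types.StringType.get());

        // MAKE_OPTIONAL: soft removal keeps the column but relaxes it to optional.
        update.makeColumnOptional("payload.removed_field");

        // PROMOTE_TYPE: e.g. int -> long, one of the promotions Iceberg allows.
        update.updateColumn("count", Types.LongType.get());

        update.commit();
    }
}
```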
View File

@ -27,6 +27,7 @@ import org.apache.kafka.common.security.auth.SecurityProtocol;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import static scala.jdk.javaapi.CollectionConverters.asJava;
@ -34,63 +35,43 @@ import static scala.jdk.javaapi.CollectionConverters.asJava;
public class ClientUtils {
public static Properties clusterClientBaseConfig(KafkaConfig kafkaConfig) {
ListenerName listenerName = kafkaConfig.interBrokerListenerName();
List<EndPoint> endpoints = asJava(kafkaConfig.effectiveAdvertisedBrokerListeners());
EndPoint endpoint = endpoints.stream()
.filter(e -> listenerName.equals(e.listenerName()))
.findFirst()
.orElseThrow(() -> new IllegalArgumentException(
"Cannot find " + listenerName + " in endpoints " + endpoints));
List<EndPoint> endpoints = asJava(kafkaConfig.effectiveAdvertisedBrokerListeners());
Optional<EndPoint> endpointOpt = endpoints.stream().filter(e -> listenerName.equals(e.listenerName())).findFirst();
if (endpointOpt.isEmpty()) {
throw new IllegalArgumentException("Cannot find " + listenerName + " in endpoints " + endpoints);
}
EndPoint endpoint = endpointOpt.get();
SecurityProtocol securityProtocol = kafkaConfig.interBrokerSecurityProtocol();
Map<String, Object> parsedConfigs = kafkaConfig.valuesWithPrefixOverride(listenerName.configPrefix());
String listenerPrefix = listenerName.configPrefix();
// mirror ChannelBuilders#channelBuilderConfigs - SINGLE PASS FOR-LOOP (3x faster)
for (Map.Entry<String, Object> entry : kafkaConfig.originals().entrySet()) {
String key = entry.getKey();
if (parsedConfigs.containsKey(key)) continue;
// exclude listener prefix configs
if (key.startsWith(listenerPrefix)) {
String suffixKey = key.substring(listenerPrefix.length());
if (parsedConfigs.containsKey(suffixKey)) continue;
}
// exclude mechanism shadow configs
int dotIndex = key.indexOf('.');
if (dotIndex > 0) {
String shortKey = key.substring(dotIndex + 1);
if (parsedConfigs.containsKey(shortKey)) continue;
}
parsedConfigs.put(key, entry.getValue());
}
// mirror ChannelBuilders#channelBuilderConfigs
kafkaConfig.originals().entrySet().stream()
.filter(entry -> !parsedConfigs.containsKey(entry.getKey()))
// exclude already parsed listener prefix configs
.filter(entry -> !(entry.getKey().startsWith(listenerName.configPrefix())
&& parsedConfigs.containsKey(entry.getKey().substring(listenerName.configPrefix().length()))))
// exclude keys like `{mechanism}.some.prop` if "listener.name." prefix is present and key `some.prop` exists in parsed configs.
.filter(entry -> !parsedConfigs.containsKey(entry.getKey().substring(entry.getKey().indexOf('.') + 1)))
.forEach(entry -> parsedConfigs.put(entry.getKey(), entry.getValue()));
Properties clientConfig = new Properties();
// Security configs - DIRECT LOOP (no stream overhead)
for (Map.Entry<String, Object> entry : parsedConfigs.entrySet()) {
if (entry.getValue() == null) continue;
if (isSecurityKey(entry.getKey(), listenerName)) {
clientConfig.put(entry.getKey(), entry.getValue());
}
}
parsedConfigs.entrySet().stream()
.filter(entry -> entry.getValue() != null)
.filter(entry -> isSecurityKey(entry.getKey(), listenerName))
.forEach(entry -> clientConfig.put(entry.getKey(), entry.getValue()));
String interBrokerSaslMechanism = kafkaConfig.saslMechanismInterBrokerProtocol();
if (interBrokerSaslMechanism != null && !interBrokerSaslMechanism.isEmpty()) {
// SASL configs - DIRECT LOOP (no stream overhead)
for (Map.Entry<String, Object> entry :
kafkaConfig.originalsWithPrefix(listenerName.saslMechanismConfigPrefix(interBrokerSaslMechanism)).entrySet()) {
if (entry.getValue() != null) {
clientConfig.put(entry.getKey(), entry.getValue());
}
}
kafkaConfig.originalsWithPrefix(listenerName.saslMechanismConfigPrefix(interBrokerSaslMechanism)).entrySet().stream()
.filter(entry -> entry.getValue() != null)
.forEach(entry -> clientConfig.put(entry.getKey(), entry.getValue()));
clientConfig.putIfAbsent("sasl.mechanism", interBrokerSaslMechanism);
}
clientConfig.put("security.protocol", securityProtocol.toString());
clientConfig.put("bootstrap.servers", endpoint.host() + ":" + endpoint.port());
clientConfig.put("bootstrap.servers", String.format("%s:%d", endpoint.host(), endpoint.port()));
return clientConfig;
}
@ -102,4 +83,5 @@ public class ClientUtils {
|| key.startsWith("security.")
|| key.startsWith(listenerName.configPrefix());
}
}
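The Properties built above is meant to be consumed directly by a Kafka client, since bootstrap.servers, security.protocol and any SASL/SSL keys are already populated. A hypothetical usage sketch; the Admin client and the describeSelf helper are illustrative assumptions about a caller, not part of ClientUtils:

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.DescribeClusterResult;

import java.util.Properties;

public class ClusterClientSketch {
    // props would come from ClientUtils.clusterClientBaseConfig(kafkaConfig).
    static String describeSelf(Properties props) throws Exception {
        try (Admin admin = Admin.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            return cluster.clusterId().get();
        }
    }
}
```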

View File

@ -118,7 +118,7 @@ public interface AsyncSender {
brokerConfig.connectionSetupTimeoutMs(),
brokerConfig.connectionSetupTimeoutMaxMs(),
time,
true,
false,
new ApiVersions(),
logContext,
MetadataRecoveryStrategy.REBOOTSTRAP
@ -157,7 +157,7 @@ public interface AsyncSender {
if (NetworkClientUtils.isReady(networkClient, node, now)) {
connectingStates.remove(node);
Request request = queue.poll();
ClientRequest clientRequest = networkClient.newClientRequest(Integer.toString(node.id()), request.requestBuilder, now, true, 30000, new RequestCompletionHandler() {
ClientRequest clientRequest = networkClient.newClientRequest(Integer.toString(node.id()), request.requestBuilder, now, true, 10000, new RequestCompletionHandler() {
@Override
public void onComplete(ClientResponse response) {
request.cf.complete(response);

View File

@ -99,12 +99,9 @@ public class CommittedEpochManager implements RouterChannelProvider.EpochListene
break;
}
AtomicLong inflight = entry.getValue();
if (inflight.get() <= 0) {
// We only bump the commitEpoch when this epoch was fenced and has no inflight requests.
if (inflight.get() == 0) {
it.remove();
newWaitingEpoch = epoch;
} else {
break;
}
}
if (epoch2inflight.isEmpty()) {

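The comment in this hunk captures the key invariant: a fenced epoch may only be committed once its in-flight counter drains to zero, and the scan must stop at the first epoch that is still busy. A standalone sketch of that bookkeeping; the field and method names here are illustrative, not the manager's actual ones:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class EpochDrainSketch {
    // epoch -> number of requests still in flight for that epoch.
    private final ConcurrentSkipListMap<Long, AtomicLong> epoch2inflight = new ConcurrentSkipListMap<>();
    private volatile long committedEpoch = -1L;

    // Advance committedEpoch over fenced epochs that have fully drained.
    synchronized void tryAdvance(long fencedBefore) {
        Iterator<Map.Entry<Long, AtomicLong>> it = epoch2inflight.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Long, AtomicLong> entry = it.next();
            long epoch = entry.getKey();
            if (epoch >= fencedBefore) {
                break; // not fenced yet, stop scanning.
            }
            if (entry.getValue().get() == 0) {
                it.remove();
                committedEpoch = epoch; // drained: safe to bump.
            } else {
                break; // oldest fenced epoch still has in-flight requests.
            }
        }
    }
}
```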
View File

@ -31,6 +31,7 @@ import org.slf4j.LoggerFactory;
import java.util.concurrent.CompletableFuture;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
public class DefaultLinkRecordDecoder implements com.automq.stream.api.LinkRecordDecoder {
private static final Logger LOGGER = LoggerFactory.getLogger(DefaultLinkRecordDecoder.class);
@ -60,13 +61,16 @@ public class DefaultLinkRecordDecoder implements com.automq.stream.api.LinkRecor
recordBatch.setLastOffset(linkRecord.lastOffset());
recordBatch.setMaxTimestamp(linkRecord.timestampType(), linkRecord.maxTimestamp());
recordBatch.setPartitionLeaderEpoch(linkRecord.partitionLeaderEpoch());
return StreamRecordBatch.of(src.getStreamId(), src.getEpoch(), src.getBaseOffset(),
-src.getCount(), records.buffer(), SnapshotReadCache.ENCODE_ALLOC);
StreamRecordBatch streamRecordBatch = new StreamRecordBatch(src.getStreamId(), src.getEpoch(), src.getBaseOffset(),
-src.getCount(), Unpooled.wrappedBuffer(records.buffer()));
// The buf will be released after the finally block, so we need to copy the data via #encoded.
streamRecordBatch.encoded(SnapshotReadCache.ENCODE_ALLOC);
return streamRecordBatch;
} finally {
src.release();
buf.release();
}
}).whenComplete((rst, ex) -> {
src.release();
if (ex != null) {
LOGGER.error("Error while decoding link record, link={}", linkRecord, ex);
}

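The copy noted in the comment above follows from Netty's reference-counting rules: Unpooled.wrappedBuffer shares the source's memory rather than copying it, so once the source is released any wrapped view becomes invalid, and data that must outlive the source has to be copied (or the source retained). A generic sketch, independent of the decoder's own classes:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class CopyBeforeReleaseSketch {
    public static void main(String[] args) {
        ByteBuf source = Unpooled.copiedBuffer(new byte[] {1, 2, 3, 4});
        try {
            // wrappedBuffer shares the source's memory; it does not copy.
            ByteBuf view = Unpooled.wrappedBuffer(source);

            // Copy now if the data must outlive the source buffer.
            ByteBuf copy = view.copy();

            System.out.println("copied bytes: " + copy.readableBytes());
            copy.release();
        } finally {
            // After this release the shared view must no longer be read.
            source.release();
        }
    }
}
```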
View File

@ -22,7 +22,6 @@ package kafka.automq.zerozone;
import com.automq.stream.Context;
import com.automq.stream.s3.cache.SnapshotReadCache;
import com.automq.stream.s3.metadata.S3ObjectMetadata;
import com.automq.stream.s3.model.StreamRecordBatch;
import com.automq.stream.s3.wal.RecordOffset;
import com.automq.stream.s3.wal.WriteAheadLog;
@ -39,8 +38,8 @@ public class DefaultReplayer implements Replayer {
}
@Override
public CompletableFuture<Void> replay(WriteAheadLog confirmWAL, RecordOffset startOffset, RecordOffset endOffset, List<StreamRecordBatch> walRecords) {
return snapshotReadCache().replay(confirmWAL, startOffset, endOffset, walRecords);
public CompletableFuture<Void> replay(WriteAheadLog confirmWAL, RecordOffset startOffset, RecordOffset endOffset) {
return snapshotReadCache().replay(confirmWAL, startOffset, endOffset);
}
private SnapshotReadCache snapshotReadCache() {

View File

@ -27,7 +27,6 @@ import com.automq.stream.s3.wal.impl.DefaultRecordOffset;
import com.automq.stream.s3.wal.impl.object.ObjectWALService;
import com.automq.stream.utils.FutureUtil;
import com.automq.stream.utils.LogContext;
import com.automq.stream.utils.Threads;
import org.slf4j.Logger;
@ -45,7 +44,6 @@ import io.netty.buffer.ByteBuf;
public class ObjectRouterChannel implements RouterChannel {
private static final ExecutorService ASYNC_EXECUTOR = Executors.newCachedThreadPool();
private static final long OVER_CAPACITY_RETRY_DELAY_MS = 1000L;
private final Logger logger;
private final AtomicLong mockOffset = new AtomicLong(0);
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
@ -83,30 +81,21 @@ public class ObjectRouterChannel implements RouterChannel {
}
CompletableFuture<AppendResult> append0(int targetNodeId, short orderHint, ByteBuf data) {
StreamRecordBatch record = StreamRecordBatch.of(targetNodeId, 0, mockOffset.incrementAndGet(), 1, data);
for (; ; ) {
try {
record.retain();
return wal.append(TraceContext.DEFAULT, record).thenApply(walRst -> {
readLock.lock();
try {
long epoch = this.channelEpoch;
ChannelOffset channelOffset = ChannelOffset.of(channelId, orderHint, nodeId, targetNodeId, walRst.recordOffset().buffer());
channelEpoch2LastRecordOffset.put(epoch, walRst.recordOffset());
return new AppendResult(epoch, channelOffset.byteBuf());
} finally {
readLock.unlock();
}
}).whenComplete((r, e) -> record.release());
} catch (OverCapacityException e) {
logger.warn("OverCapacityException occurred while appending, err={}", e.getMessage());
// Use block-based delayed retries for network backpressure.
Threads.sleep(OVER_CAPACITY_RETRY_DELAY_MS);
} catch (Throwable e) {
logger.error("[UNEXPECTED], append wal fail", e);
record.release();
return CompletableFuture.failedFuture(e);
}
StreamRecordBatch record = new StreamRecordBatch(targetNodeId, 0, mockOffset.incrementAndGet(), 1, data);
try {
return wal.append(TraceContext.DEFAULT, record).thenApply(walRst -> {
readLock.lock();
try {
long epoch = this.channelEpoch;
ChannelOffset channelOffset = ChannelOffset.of(channelId, orderHint, nodeId, targetNodeId, walRst.recordOffset().buffer());
channelEpoch2LastRecordOffset.put(epoch, walRst.recordOffset());
return new AppendResult(epoch, channelOffset.byteBuf());
} finally {
readLock.unlock();
}
});
} catch (OverCapacityException e) {
return CompletableFuture.failedFuture(e);
}
}

View File

@ -20,7 +20,6 @@
package kafka.automq.zerozone;
import com.automq.stream.s3.metadata.S3ObjectMetadata;
import com.automq.stream.s3.model.StreamRecordBatch;
import com.automq.stream.s3.wal.RecordOffset;
import com.automq.stream.s3.wal.WriteAheadLog;
@ -38,6 +37,6 @@ public interface Replayer {
* Replay WAL to snapshot-read cache.
* If the record in WAL is a linked record, it will decode the linked record to the real record.
*/
CompletableFuture<Void> replay(WriteAheadLog confirmWAL, RecordOffset startOffset, RecordOffset endOffset, List<StreamRecordBatch> walRecords);
CompletableFuture<Void> replay(WriteAheadLog confirmWAL, RecordOffset startOffset, RecordOffset endOffset);
}

View File

@ -112,7 +112,11 @@ class RouterIn {
.thenCompose(rst -> prevLastRouterCf.thenApply(nil -> rst))
.thenComposeAsync(produces -> {
List<CompletableFuture<AutomqZoneRouterResponseData.Response>> cfList = new ArrayList<>();
produces.stream().map(this::append).forEach(cfList::add);
produces.stream().map(request -> {
try (request) {
return append(request);
}
}).forEach(cfList::add);
return CompletableFuture.allOf(cfList.toArray(new CompletableFuture[0])).thenApply(nil -> {
AutomqZoneRouterResponseData response = new AutomqZoneRouterResponseData();
cfList.forEach(cf -> response.responses().add(cf.join()));

View File

@ -47,9 +47,9 @@ import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
@ -71,7 +71,7 @@ public class RouterInV2 implements NonBlockingLocalRouterHandler {
private final String rack;
private final RouterInProduceHandler localAppendHandler;
private RouterInProduceHandler routerInProduceHandler;
private final BlockingQueue<PartitionProduceRequest> unpackLinkQueue = new ArrayBlockingQueue<>(Systems.CPU_CORES * 8192);
private final Queue<PartitionProduceRequest> unpackLinkQueue = new ConcurrentLinkedQueue<>();
private final EventLoop[] appendEventLoops;
private final FastThreadLocal<RequestLocal> requestLocals = new FastThreadLocal<>() {
@Override
@ -115,9 +115,9 @@ public class RouterInV2 implements NonBlockingLocalRouterHandler {
for (ByteBuf channelOffset : routerRecord.channelOffsets()) {
PartitionProduceRequest partitionProduceRequest = new PartitionProduceRequest(ChannelOffset.of(channelOffset));
partitionProduceRequest.unpackLinkCf = routerChannel.get(channelOffset);
addToUnpackLinkQueue(partitionProduceRequest);
unpackLinkQueue.add(partitionProduceRequest);
partitionProduceRequest.unpackLinkCf.whenComplete((rst, ex) -> {
if (ex == null) {
if (ex != null) {
size.addAndGet(rst.readableBytes());
}
handleUnpackLink();
@ -165,16 +165,6 @@ public class RouterInV2 implements NonBlockingLocalRouterHandler {
}
}
private void addToUnpackLinkQueue(PartitionProduceRequest req) {
for (;;) {
try {
unpackLinkQueue.put(req);
return;
} catch (InterruptedException ignored) {
}
}
}
@Override
public CompletableFuture<AutomqZoneRouterResponseData.Response> append(
ChannelOffset channelOffset,

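On the queue type in this hunk: an ArrayBlockingQueue is bounded, so put() blocks the producer when the queue is full and thereby applies backpressure, whereas a ConcurrentLinkedQueue is unbounded and never blocks. A generic sketch of the bounded variant; the capacity and element type are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueSketch {
    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);

    // Blocks the producer when the queue is full, which is the backpressure effect;
    // the loop mirrors the usual "retry on interrupt" pattern for uninterruptible puts.
    void enqueue(Runnable task) {
        for (;;) {
            try {
                queue.put(task);
                return;
            } catch (InterruptedException ignored) {
                // keep trying; the interrupt status could also be restored here.
            }
        }
    }
}
```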
View File

@ -109,11 +109,6 @@ public class RouterOutV2 {
}
ZeroZoneMetricsManager.PROXY_REQUEST_LATENCY.record(time.nanoseconds() - startNanos);
});
}).exceptionally(ex -> {
LOGGER.error("Exception in processing append proxies", ex);
// Make the producer retry send.
responseMap.put(tp, errorPartitionResponse(Errors.LEADER_NOT_AVAILABLE));
return null;
});
cfList.add(proxyCf);
}
@ -149,10 +144,6 @@ public class RouterOutV2 {
void send(ProxyRequest request);
}
static ProduceResponse.PartitionResponse errorPartitionResponse(Errors error) {
return new ProduceResponse.PartitionResponse(error, -1, -1, -1, -1, Collections.emptyList(), "");
}
static class LocalProxy implements Proxy {
private final NonBlockingLocalRouterHandler localRouterHandler;
@ -350,7 +341,7 @@ public class RouterOutV2 {
}
private void completeWithError(Errors errors) {
ProduceResponse.PartitionResponse rst = errorPartitionResponse(errors);
ProduceResponse.PartitionResponse rst = new ProduceResponse.PartitionResponse(errors, -1, -1, -1, -1, Collections.emptyList(), "");
cf.complete(rst);
}
}

View File

@ -366,8 +366,8 @@ public class SnapshotReadPartitionsManager implements MetadataListener, ProxyTop
replayer.reset();
}
void onNewWalEndOffset(String walConfig, RecordOffset endOffset, byte[] walDeltaData) {
replayer.onNewWalEndOffset(walConfig, endOffset, walDeltaData);
void onNewWalEndOffset(String walConfig, RecordOffset endOffset) {
replayer.onNewWalEndOffset(walConfig, endOffset);
}
void onNewOperationBatch(OperationBatch batch) {

View File

@ -43,8 +43,6 @@ import java.util.concurrent.Executors;
import java.util.function.LongConsumer;
import java.util.stream.Collectors;
import static kafka.automq.partition.snapshot.ConfirmWalDataDelta.decodeDeltaRecords;
class SubscriberReplayer {
private static final Logger LOGGER = LoggerFactory.getLogger(SubscriberReplayer.class);
private static final ExecutorService CLOSE_EXECUTOR = Executors.newCachedThreadPool();
@ -66,7 +64,7 @@ class SubscriberReplayer {
this.metadataCache = metadataCache;
}
public void onNewWalEndOffset(String walConfig, RecordOffset endOffset, byte[] walDeltaData) {
public void onNewWalEndOffset(String walConfig, RecordOffset endOffset) {
if (wal == null) {
this.wal = confirmWALProvider.readOnly(walConfig, node.id());
}
@ -79,14 +77,11 @@ class SubscriberReplayer {
return;
}
// The replayer will ensure the order of replay
this.lastDataLoadCf = wal.thenCompose(w -> replayer.replay(w, startOffset, endOffset, decodeDeltaRecords(walDeltaData)).thenAccept(nil -> {
this.lastDataLoadCf = wal.thenCompose(w -> replayer.replay(w, startOffset, endOffset).thenAccept(nil -> {
if (LOGGER.isTraceEnabled()) {
LOGGER.trace("replay {} confirm wal [{}, {})", node, startOffset, endOffset);
}
})).exceptionally(ex -> {
LOGGER.error("[UNEXPECTED] replay confirm wal fail", ex);
return null;
});
}));
}
public CompletableFuture<Void> relayObject() {

View File

@ -72,8 +72,7 @@ import io.netty.buffer.Unpooled;
private final EventLoop eventLoop;
private final Time time;
public SubscriberRequester(SnapshotReadPartitionsManager.Subscriber subscriber, Node node, AutoMQVersion version,
AsyncSender asyncSender,
public SubscriberRequester(SnapshotReadPartitionsManager.Subscriber subscriber, Node node, AutoMQVersion version, AsyncSender asyncSender,
Function<Uuid, String> topicNameGetter, EventLoop eventLoop, Time time) {
this.subscriber = subscriber;
this.node = node;
@ -116,7 +115,7 @@ import io.netty.buffer.Unpooled;
tryReset0();
lastRequestTime = time.milliseconds();
AutomqGetPartitionSnapshotRequestData data = new AutomqGetPartitionSnapshotRequestData().setSessionId(sessionId).setSessionEpoch(sessionEpoch);
AutomqGetPartitionSnapshotRequestData data = new AutomqGetPartitionSnapshotRequestData().setSessionId(sessionId).setSessionEpoch(sessionEpoch).setVersion((short) 1);
if (version.isZeroZoneV2Supported()) {
data.setVersion((short) 1);
}
@ -127,9 +126,6 @@ import io.netty.buffer.Unpooled;
requestCommit = false;
data.setRequestCommit(true);
}
if (data.requestCommit()) {
LOGGER.info("[SNAPSHOT_SUBSCRIBE_REQUEST_COMMIT],node={},sessionId={},sessionEpoch={}", node, sessionId, sessionEpoch);
}
AutomqGetPartitionSnapshotRequest.Builder builder = new AutomqGetPartitionSnapshotRequest.Builder(data);
asyncSender.sendRequest(node, builder)
.thenAcceptAsync(rst -> {
@ -202,13 +198,7 @@ import io.netty.buffer.Unpooled;
int c2 = o2.operation.code() == SnapshotOperation.REMOVE.code() ? 0 : 1;
return c1 - c2;
});
short requestVersion = clientResponse.requestHeader().apiVersion();
if (resp.confirmWalEndOffset() != null && resp.confirmWalEndOffset().length > 0) {
// zerozone v2
subscriber.onNewWalEndOffset(resp.confirmWalConfig(),
DefaultRecordOffset.of(Unpooled.wrappedBuffer(resp.confirmWalEndOffset())),
requestVersion >= 2 ? resp.confirmWalDeltaData() : null);
}
subscriber.onNewWalEndOffset(resp.confirmWalConfig(), DefaultRecordOffset.of(Unpooled.wrappedBuffer(resp.confirmWalEndOffset())));
batch.operations.add(SnapshotWithOperation.snapshotMark(snapshotCf));
subscriber.onNewOperationBatch(batch);
}

View File

@ -50,9 +50,9 @@ public class ZeroZoneMetricsManager {
.build());
private static final Metrics.HistogramBundle ROUTER_LATENCY = Metrics.instance().histogram(PREFIX + "router_latency", "ZeroZone route latency", "nanoseconds");
public static final DeltaHistogram APPEND_CHANNEL_LATENCY = ROUTER_LATENCY.histogram(MetricsLevel.INFO, Attributes.of(AttributeKey.stringKey("operation"), "out", AttributeKey.stringKey("stage"), "append_channel"));
public static final DeltaHistogram PROXY_REQUEST_LATENCY = ROUTER_LATENCY.histogram(MetricsLevel.INFO, Attributes.of(AttributeKey.stringKey("operation"), "out", AttributeKey.stringKey("stage"), "proxy_request"));
public static final DeltaHistogram GET_CHANNEL_LATENCY = ROUTER_LATENCY.histogram(MetricsLevel.INFO, Attributes.of(AttributeKey.stringKey("operation"), "in", AttributeKey.stringKey("stage"), "get_channel"));
public static final DeltaHistogram APPEND_CHANNEL_LATENCY = ROUTER_LATENCY.histogram(MetricsLevel.DEBUG, Attributes.of(AttributeKey.stringKey("operation"), "out", AttributeKey.stringKey("stage"), "append_channel"));
public static final DeltaHistogram PROXY_REQUEST_LATENCY = ROUTER_LATENCY.histogram(MetricsLevel.DEBUG, Attributes.of(AttributeKey.stringKey("operation"), "out", AttributeKey.stringKey("stage"), "proxy_request"));
public static final DeltaHistogram GET_CHANNEL_LATENCY = ROUTER_LATENCY.histogram(MetricsLevel.DEBUG, Attributes.of(AttributeKey.stringKey("operation"), "in", AttributeKey.stringKey("stage"), "get_channel"));
public static void recordRouterOutBytes(int toNodeId, int bytes) {
try {

View File

@ -1,139 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.server;
import kafka.automq.table.metric.TableTopicMetricsManager;
import org.apache.kafka.common.config.types.Password;
import org.apache.kafka.server.ProcessRole;
import org.apache.kafka.server.metrics.KafkaYammerMetrics;
import org.apache.kafka.server.metrics.s3stream.S3StreamKafkaMetricsManager;
import com.automq.opentelemetry.AutoMQTelemetryManager;
import com.automq.opentelemetry.exporter.MetricsExportConfig;
import com.automq.shell.AutoMQApplication;
import com.automq.stream.s3.metrics.Metrics;
import com.automq.stream.s3.metrics.MetricsConfig;
import com.automq.stream.s3.metrics.MetricsLevel;
import com.automq.stream.s3.metrics.S3StreamMetricsManager;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.Meter;
import scala.collection.immutable.Set;
/**
* Helper used by the core module to bootstrap AutoMQ telemetry using the AutoMQTelemetryManager implement.
*/
public final class TelemetrySupport {
private static final Logger LOGGER = LoggerFactory.getLogger(TelemetrySupport.class);
private static final String COMMON_JMX_PATH = "/jmx/rules/common.yaml";
private static final String BROKER_JMX_PATH = "/jmx/rules/broker.yaml";
private static final String CONTROLLER_JMX_PATH = "/jmx/rules/controller.yaml";
private static final String KAFKA_METRICS_PREFIX = "kafka_stream_";
private TelemetrySupport() {
// Utility class
}
public static AutoMQTelemetryManager start(KafkaConfig config, String clusterId) {
AutoMQTelemetryManager telemetryManager = new AutoMQTelemetryManager(
config.automq().metricsExporterURI(),
clusterId,
String.valueOf(config.nodeId()),
AutoMQApplication.getBean(MetricsExportConfig.class)
);
telemetryManager.setJmxConfigPaths(buildJmxConfigPaths(config));
telemetryManager.init();
telemetryManager.startYammerMetricsReporter(KafkaYammerMetrics.defaultRegistry());
initializeMetrics(telemetryManager, config);
return telemetryManager;
}
private static void initializeMetrics(AutoMQTelemetryManager manager, KafkaConfig config) {
S3StreamKafkaMetricsManager.setTruststoreCertsSupplier(() -> {
try {
Password password = config.getPassword("ssl.truststore.certificates");
return password != null ? password.value() : null;
} catch (Exception e) {
LOGGER.error("Failed to obtain truststore certificates", e);
return null;
}
});
S3StreamKafkaMetricsManager.setCertChainSupplier(() -> {
try {
Password password = config.getPassword("ssl.keystore.certificate.chain");
return password != null ? password.value() : null;
} catch (Exception e) {
LOGGER.error("Failed to obtain certificate chain", e);
return null;
}
});
Meter meter = manager.getMeter();
MetricsLevel metricsLevel = parseMetricsLevel(config.s3MetricsLevel());
long metricsIntervalMs = (long) config.s3ExporterReportIntervalMs();
MetricsConfig metricsConfig = new MetricsConfig(metricsLevel, Attributes.empty(), metricsIntervalMs);
Metrics.instance().setup(meter, metricsConfig);
S3StreamMetricsManager.configure(new MetricsConfig(metricsLevel, Attributes.empty(), metricsIntervalMs));
S3StreamMetricsManager.initMetrics(meter, KAFKA_METRICS_PREFIX);
S3StreamKafkaMetricsManager.configure(new MetricsConfig(metricsLevel, Attributes.empty(), metricsIntervalMs));
S3StreamKafkaMetricsManager.initMetrics(meter, KAFKA_METRICS_PREFIX);
TableTopicMetricsManager.initMetrics(meter);
}
private static MetricsLevel parseMetricsLevel(String rawLevel) {
if (StringUtils.isBlank(rawLevel)) {
return MetricsLevel.INFO;
}
try {
return MetricsLevel.valueOf(rawLevel.trim().toUpperCase(Locale.ENGLISH));
} catch (IllegalArgumentException e) {
LOGGER.warn("Illegal metrics level '{}', defaulting to INFO", rawLevel);
return MetricsLevel.INFO;
}
}
private static String buildJmxConfigPaths(KafkaConfig config) {
List<String> paths = new ArrayList<>();
paths.add(COMMON_JMX_PATH);
Set<ProcessRole> roles = config.processRoles();
if (roles.contains(ProcessRole.BrokerRole)) {
paths.add(BROKER_JMX_PATH);
}
if (roles.contains(ProcessRole.ControllerRole)) {
paths.add(CONTROLLER_JMX_PATH);
}
return String.join(",", paths);
}
}

View File

@ -23,10 +23,6 @@ import org.apache.avro.LogicalTypes;
public class CodecSetup {
public static LogicalMap getLogicalMap() {
return LogicalMap.get();
}
static {
LogicalTypes.register(LogicalMap.NAME, schema -> LogicalMap.get());
}

View File

@ -17,9 +17,8 @@
package kafka
import com.automq.log.S3RollingFileAppender
import com.automq.opentelemetry.exporter.MetricsExportConfig
import com.automq.shell.AutoMQApplication
import com.automq.shell.log.{LogUploader, S3LogConfig}
import com.automq.stream.s3.ByteBufAlloc
import joptsimple.OptionParser
import kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter
@ -77,7 +76,8 @@ object Kafka extends Logging {
private def enableApiForwarding(config: KafkaConfig) =
config.migrationEnabled && config.interBrokerProtocolVersion.isApiForwardingEnabled
private def buildServer(config: KafkaConfig): Server = {
private def buildServer(props: Properties): Server = {
val config = KafkaConfig.fromProps(props, doLog = false)
// AutoMQ for Kafka inject start
// set allocator's policy as early as possible
ByteBufAlloc.setPolicy(config.s3StreamAllocatorPolicy)
@ -89,24 +89,18 @@ object Kafka extends Logging {
threadNamePrefix = None,
enableForwarding = enableApiForwarding(config)
)
// AutoMQ for Kafka inject start
AutoMQApplication.setClusterId(kafkaServer.clusterId)
S3RollingFileAppender.setup(new KafkaS3LogConfig(config, kafkaServer, null))
AutoMQApplication.registerSingleton(classOf[MetricsExportConfig], new KafkaMetricsExportConfig(config, kafkaServer, null))
AutoMQApplication.registerSingleton(classOf[S3LogConfig], new KafkaS3LogConfig(config, kafkaServer, null))
kafkaServer
// AutoMQ for Kafka inject end
} else {
val kafkaRaftServer = new KafkaRaftServer(
config,
Time.SYSTEM,
)
// AutoMQ for Kafka inject start
AutoMQApplication.setClusterId(kafkaRaftServer.getSharedServer().clusterId)
S3RollingFileAppender.setup(new KafkaS3LogConfig(config, null, kafkaRaftServer))
AutoMQApplication.registerSingleton(classOf[MetricsExportConfig], new KafkaMetricsExportConfig(config, null, kafkaRaftServer))
AutoMQApplication.registerSingleton(classOf[S3LogConfig], new KafkaS3LogConfig(config, null, kafkaRaftServer))
AutoMQApplication.registerSingleton(classOf[KafkaRaftServer], kafkaRaftServer)
kafkaRaftServer
// AutoMQ for Kafka inject end
}
}
@ -130,8 +124,7 @@ object Kafka extends Logging {
val serverProps = getPropsFromArgs(args)
addDefaultProps(serverProps)
StorageUtil.formatStorage(serverProps)
val kafkaConfig = KafkaConfig.fromProps(serverProps, doLog = false)
val server = buildServer(kafkaConfig)
val server = buildServer(serverProps)
AutoMQApplication.registerSingleton(classOf[Server], server)
// AutoMQ for Kafka inject end
@ -148,7 +141,7 @@ object Kafka extends Logging {
Exit.addShutdownHook("kafka-shutdown-hook", {
try {
server.shutdown()
S3RollingFileAppender.shutdown()
LogUploader.getInstance().close()
} catch {
case _: Throwable =>
fatal("Halting Kafka.")
@ -164,6 +157,7 @@ object Kafka extends Logging {
fatal("Exiting Kafka due to fatal exception during startup.", e)
Exit.exit(1)
}
server.awaitShutdown()
}
catch {

View File

@ -1,71 +0,0 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka
import com.automq.opentelemetry.exporter.MetricsExportConfig
import com.automq.stream.s3.operator.{ObjectStorage, ObjectStorageFactory}
import kafka.server.{KafkaConfig, KafkaRaftServer, KafkaServer}
import org.apache.commons.lang3.tuple.Pair
import java.util
class KafkaMetricsExportConfig(
config: KafkaConfig,
kafkaServer: KafkaServer,
kafkaRaftServer: KafkaRaftServer
) extends MetricsExportConfig {
private val _objectStorage = if (config.automq.opsBuckets().isEmpty) {
null
} else {
ObjectStorageFactory.instance().builder(config.automq.opsBuckets().get(0)).threadPrefix("s3-metrics").build()
}
override def clusterId(): String = {
if (kafkaServer != null) {
kafkaServer.clusterId
} else {
kafkaRaftServer.getSharedServer().clusterId
}
}
override def isLeader: Boolean = {
if (kafkaServer != null) {
// For broker mode, typically only one node should upload metrics
// You can implement your own leader selection logic here
false
} else {
// For KRaft mode, only active controller uploads metrics
kafkaRaftServer.controller.exists(controller => controller.controller != null && controller.controller.isActive)
}
}
override def nodeId(): Int = config.nodeId
override def objectStorage(): ObjectStorage = {
_objectStorage
}
override def baseLabels(): util.List[Pair[String, String]] = {
config.automq.baseLabels()
}
override def intervalMs(): Int = config.s3ExporterReportIntervalMs
}

View File

@ -19,15 +19,15 @@
package kafka
import com.automq.log.uploader.S3LogConfig
import com.automq.shell.log.S3LogConfig
import com.automq.stream.s3.operator.{ObjectStorage, ObjectStorageFactory}
import kafka.server.{KafkaConfig, KafkaRaftServer, KafkaServer}
class KafkaS3LogConfig(
config: KafkaConfig,
kafkaServer: KafkaServer,
kafkaRaftServer: KafkaRaftServer
) extends S3LogConfig {
config: KafkaConfig,
kafkaServer: KafkaServer,
kafkaRaftServer: KafkaRaftServer
) extends S3LogConfig {
private val _objectStorage = if (config.automq.opsBuckets().isEmpty) {
null
@ -37,6 +37,15 @@ class KafkaS3LogConfig(
override def isEnabled: Boolean = config.s3OpsTelemetryEnabled
override def isActiveController: Boolean = {
if (kafkaServer != null) {
false
} else {
kafkaRaftServer.controller.exists(controller => controller.controller != null && controller.controller.isActive)
}
}
override def clusterId(): String = {
if (kafkaServer != null) {
kafkaServer.clusterId
@ -51,11 +60,4 @@ class KafkaS3LogConfig(
_objectStorage
}
override def isLeader: Boolean = {
if (kafkaServer != null) {
false
} else {
kafkaRaftServer.controller.exists(controller => controller.controller != null && controller.controller.isActive)
}
}
}

View File

@ -1502,7 +1502,7 @@ class Partition(val topicPartition: TopicPartition,
}
private def doAppendRecordsToFollowerOrFutureReplica(records: MemoryRecords, isFuture: Boolean): Option[LogAppendInfo] = {
val rst = if (isFuture) {
if (isFuture) {
// The read lock is needed to handle race condition if request handler thread tries to
// remove future replica after receiving AlterReplicaLogDirsRequest.
inReadLock(leaderIsrUpdateLock) {
@ -1517,11 +1517,6 @@ class Partition(val topicPartition: TopicPartition,
Some(localLogOrException.appendAsFollower(records))
}
}
// AutoMQ inject start
notifyAppendListener(records)
newAppendListener.onNewAppend(topicPartition, localLogOrException.logEndOffset)
// AutoMQ inject end
rst
}
def appendRecordsToFollowerOrFutureReplica(records: MemoryRecords, isFuture: Boolean): Option[LogAppendInfo] = {

View File

@ -755,10 +755,6 @@ private[group] class GroupMetadata(val groupId: String, initialState: GroupState
val currentOffsetOpt = offsets.get(topicPartition)
if (currentOffsetOpt.forall(_.olderThan(commitRecordMetadataAndOffset))) {
// AutoMQ for Kafka inject start
if (!offsets.contains(topicPartition))
recreateOffsetMetric(topicPartition)
// AutoMQ for Kafka inject end
trace(s"TxnOffsetCommit for producer $producerId and group $groupId with offset $commitRecordMetadataAndOffset " +
"committed and loaded into the cache.")
offsets.put(topicPartition, commitRecordMetadataAndOffset)
@ -908,3 +904,4 @@ private[group] class GroupMetadata(val groupId: String, initialState: GroupState
}
}

View File

@ -27,7 +27,7 @@ import kafka.log.stream.s3.node.NodeManagerStub;
import kafka.log.stream.s3.node.NoopNodeManager;
import kafka.log.stream.s3.objects.ControllerObjectManager;
import kafka.log.stream.s3.streams.ControllerStreamManager;
import kafka.log.stream.s3.wal.ConfirmWal;
import kafka.log.stream.s3.wal.BootstrapWalV1;
import kafka.log.stream.s3.wal.DefaultWalFactory;
import kafka.server.BrokerServer;
@ -215,7 +215,7 @@ public class DefaultS3Client implements Client {
String clusterId = brokerServer.clusterId();
WalHandle walHandle = new DefaultWalHandle(clusterId);
WalFactory factory = new DefaultWalFactory(config.nodeId(), config.objectTagging(), networkInboundLimiter, networkOutboundLimiter);
return new ConfirmWal(config.nodeId(), config.nodeEpoch(), config.walConfig(), false, factory, getNodeManager(), walHandle);
return new BootstrapWalV1(config.nodeId(), config.nodeEpoch(), config.walConfig(), false, factory, getNodeManager(), walHandle);
}
protected ObjectStorage newMainObjectStorage() {
@ -276,7 +276,7 @@ public class DefaultS3Client implements Client {
WalHandle walHandle = new DefaultWalHandle(clusterId);
WalFactory factory = new DefaultWalFactory(nodeId, config.objectTagging(), networkInboundLimiter, networkOutboundLimiter);
NodeManager nodeManager = new NodeManagerStub(requestSender, nodeId, nodeEpoch, Collections.emptyMap());
return new ConfirmWal(nodeId, nodeEpoch, request.getKraftWalConfigs(), true, factory, nodeManager, walHandle);
return new BootstrapWalV1(nodeId, nodeEpoch, request.getKraftWalConfigs(), true, factory, nodeManager, walHandle);
}
}, (wal, sm, om, logger) -> {
try {

View File

@ -31,7 +31,6 @@ import org.apache.kafka.common.requests.s3.AbstractBatchResponse;
import org.apache.kafka.server.ControllerRequestCompletionHandler;
import org.apache.kafka.server.NodeToControllerChannelManager;
import com.automq.stream.utils.Systems;
import com.automq.stream.utils.Threads;
import org.slf4j.Logger;
@ -52,7 +51,7 @@ public class ControllerRequestSender {
private static final Logger LOGGER = LoggerFactory.getLogger(ControllerRequestSender.class);
private static final long MAX_RETRY_DELAY_MS = Systems.getEnvLong("AUTOMQ_CONTROLLER_REQUEST_MAX_RETRY_DELAY_MS", 10L * 1000); // 10s
private static final long MAX_RETRY_DELAY_MS = 10 * 1000; // 10s
private final RetryPolicyContext retryPolicyContext;

View File

@ -0,0 +1,49 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.log.stream.s3.telemetry;
import com.automq.stream.s3.context.AppendContext;
import com.automq.stream.s3.context.FetchContext;
import com.automq.stream.s3.trace.context.TraceContext;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.OpenTelemetrySdk;
public class ContextUtils {
public static FetchContext creaetFetchContext() {
return new FetchContext(createTraceContext());
}
public static AppendContext createAppendContext() {
return new AppendContext(createTraceContext());
}
public static TraceContext createTraceContext() {
OpenTelemetrySdk openTelemetrySdk = TelemetryManager.getOpenTelemetrySdk();
boolean isTraceEnabled = openTelemetrySdk != null && TelemetryManager.isTraceEnable();
Tracer tracer = null;
if (isTraceEnabled) {
tracer = openTelemetrySdk.getTracer(TelemetryConstants.TELEMETRY_SCOPE_NAME);
}
return new TraceContext(isTraceEnabled, tracer, Context.current());
}
}

View File

@ -17,7 +17,13 @@
* limitations under the License.
*/
package kafka.automq.failover;
package kafka.log.stream.s3.telemetry;
public record DefaultFailedNode(int id, long epoch) implements FailedNode {
public class MetricsConstants {
public static final String SERVICE_NAME = "service.name";
public static final String SERVICE_INSTANCE = "service.instance.id";
public static final String HOST_NAME = "host.name";
public static final String INSTANCE = "instance";
public static final String JOB = "job";
public static final String NODE_TYPE = "node.type";
}

View File

@ -0,0 +1,37 @@
/*
* Copyright 2025, AutoMQ HK Limited.
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.log.stream.s3.telemetry;
import io.opentelemetry.api.common.AttributeKey;
public class TelemetryConstants {
// The maximum number of unique attribute combinations for a single metric
public static final int CARDINALITY_LIMIT = 20000;
public static final String COMMON_JMX_YAML_CONFIG_PATH = "/jmx/rules/common.yaml";
public static final String BROKER_JMX_YAML_CONFIG_PATH = "/jmx/rules/broker.yaml";
public static final String CONTROLLER_JMX_YAML_CONFIG_PATH = "/jmx/rules/controller.yaml";
public static final String TELEMETRY_SCOPE_NAME = "automq_for_kafka";
public static final String KAFKA_METRICS_PREFIX = "kafka_stream_";
public static final String KAFKA_WAL_METRICS_PREFIX = "kafka_wal_";
public static final AttributeKey<Long> STREAM_ID_NAME = AttributeKey.longKey("streamId");
public static final AttributeKey<Long> START_OFFSET_NAME = AttributeKey.longKey("startOffset");
public static final AttributeKey<Long> END_OFFSET_NAME = AttributeKey.longKey("endOffset");
public static final AttributeKey<Long> MAX_BYTES_NAME = AttributeKey.longKey("maxBytes");
}

Some files were not shown because too many files have changed in this diff.