
[RDI] Add RDI 1.18.0 release notes #3093

Open

ZdravkoDonev-redis wants to merge 3 commits into main from RDSC-4916-rdi-1-18-0-release-notes

Conversation

ZdravkoDonev-redis (Collaborator) commented Apr 23, 2026

What changed

Adds the RDI 1.18.0 release-notes page under content/integrate/redis-data-integration/release-notes/.

The page covers:

  • Snowflake source support for Helm installations, including multi-schema capture
  • API v2 additions for CDC-readiness validation, DLQ inspection, and a flush-target endpoint
  • Operational updates such as the optional AOF prerequisite-check disablement and Reloader image configuration
  • Security refreshes across the operator, collector API, and Fluentd images

Why this changed

The docs repo was missing the 1.18.0 RDI release-notes entry. The source changelog also needed product-facing wording and a few scope clarifications, including:

  • Snowflake support applies to Helm installations, not VM installs
  • CDC-readiness validation is exposed through API v2
  • The DLQ section should call out the actual API endpoints
  • The Reloader note should explain the Helm/private-registry impact

User impact

Users will see the 1.18.0 release notes in the RDI release-notes index with clearer guidance on feature scope, API surface, and operational caveats.

Root cause

The release-notes page had not yet been added to the docs repo for this version.

Validation

  • Reviewed the new Markdown page in place
  • Ran `git diff --cached --check`

Note

Low Risk
Docs-only change that adds a new release-notes page and clarifies the Helm values syntax for disabling the AOF prerequisite check.

Overview
Adds the RDI 1.18.0 release notes page, documenting Snowflake source support for Helm installs, new API v2 endpoints (DLQ inspection, flush-target, CDC-readiness validation), operational/reliability updates, and security refreshes.

Updates the RDI FAQ to clarify the Helm values path for disabling the AOF prerequisite check (`operator.prerequisiteChecks.aofRequired: false`) and includes a YAML example.
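
For reference, a minimal sketch of that FAQ example, assuming only the key path stated above (`operator.prerequisiteChecks.aofRequired`); surrounding keys in the real values file may differ:

```yaml
# custom-values.yaml (sketch; only the prerequisiteChecks path is confirmed above)
operator:
  prerequisiteChecks:
    # Defaults to true. Setting this to false allows RDI deployment without
    # AOF persistence, at the cost of durability across full cluster outages.
    aofRequired: false
```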

Reviewed by Cursor Bugbot for commit bb7b285.


jit-ci (Bot) commented Apr 23, 2026

🛡️ Jit Security Scan Results: ✅ No security findings were detected in this PR

ZdravkoDonev-redis self-assigned this Apr 23, 2026
ZdravkoDonev-redis changed the title from "[codex] Add RDI 1.18.0 release notes" to "[RDI] Add RDI 1.18.0 release notes" Apr 23, 2026
ZdravkoDonev-redis marked this pull request as ready for review April 23, 2026 19:41
### Snowflake and Source Integration

- **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status, state, and component health are now reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.
- **Snowflake multi-schema capture**: In Helm-based installations, Snowflake sources can now capture from multiple schemas in a single pipeline.

Combine with previous note; Snowflake is net-new.

Contributor: +1

Collaborator Author: Addressed in bb7b285: combined the Snowflake notes so the release note presents Snowflake support as net-new and includes multi-schema capture in the same bullet.


### Breaking Changes

- **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart, and is only rendered when `processors.type` is `classic`. Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, Helm deletes the control-plane copy of the exporter resources before the operator recreates them under the pipeline release, resulting in a brief (seconds) gap in Prometheus scraping that does not affect the data path. ([RDSC-5004](https://redislabs.atlassian.net/browse/RDSC-5004))

Contributor suggested a change (replacing the bullet above with):
- **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart, and is only rendered when `processors.type` is `classic` (the default). Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, there will be a brief (seconds) gap in Prometheus scraping that does not affect the data path.

I guess we don't want references to internal Jira tickets in the release notes.

Collaborator Author: Addressed in bb7b285: removed the internal Jira reference, removed `processors.type`, shortened the upgrade wording, and kept only the Prometheus scrape gap plus Flink-based pipeline note.
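
To make the values move concrete, a minimal before/after sketch; the two block paths come from the bullet quoted above, while the child key shown is a hypothetical placeholder:

```yaml
# Before 1.18.0 (sketch):
rdiMetricsExporter:
  someSetting: value   # hypothetical child key

# From 1.18.0 (sketch):
operator:
  dataPlane:
    metricsExporter:
      someSetting: value   # hypothetical child key
```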


### Snowflake and Source Integration

- **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status, state, and component health are now reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.

Contributor suggested a change (replacing the bullet above with):
- **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status and component health are reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.


Are there aspects that are not working correctly or are not reported correctly? If not, I wouldn't call out status and component health reporting. When reading "Snowflake support added", I expect that everything is working correctly. When reading "this aspect is working correctly", I start wondering which other aspects aren't.

It's different of course if it's a bug fix, but here we're adding support for a new source.

Collaborator Author: Yeah, I agree. I'll reword it.

Collaborator Author: Addressed in bb7b285: removed the status/component-health wording and kept this as a net-new Snowflake source support note.


- **Snowflake multi-schema capture**: In Helm-based installations, Snowflake sources can now capture from multiple schemas in a single pipeline.
- **System truststore support**: RDI can optionally use well-known root CA certificates from the system truststore, reducing the need for manual certificate configuration for cloud-hosted source databases.
- **Collector resource reservation controls**: A new `sources.advanced.resources` section lets you control memory and CPU reservation for the collector.
- **CDC-readiness validation in API v2**: RDI API v2 can optionally validate whether a source database is ready for CDC as part of pipeline validation. This is available on pipeline create, update, and patch requests by using the `validate_cdc` query parameter, including dry-run requests. If validation fails, the API returns validation errors instead of applying the change. Coverage is still limited in this release and will expand over time.

Contributor:
  • Instead of "coverage is still limited", perhaps we should mention which source databases are actually covered?
  • Since this is for now just an API change, should we mention that clients (CLI, Redis Insight) are not yet able to use this, to avoid wrong expectations?

Collaborator Author: CLI: not able, yes. Redis Insight is not able either, because it's not yet implemented there. Yeah, worth mentioning.

Collaborator Author: Addressed in bb7b285: listed the supported CDC validation sources explicitly (MariaDB, MySQL, PostgreSQL, SQL Server, Oracle, and MongoDB), called out that Spanner and Snowflake are not supported, and noted that this is API v2 only, not CLI or Redis Insight.
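
As a side note on the `sources.advanced.resources` bullet quoted at the top of this thread, a purely hypothetical sketch of what such a section might look like; the PR only confirms the section name, and every child key below (Kubernetes-style requests/limits) is an assumption:

```yaml
sources:
  advanced:
    resources:        # section name confirmed in the note above
      requests:       # child keys below are assumptions, not a documented schema
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
```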

- `GET /api/v2/pipelines/{name}/dlqs/{full_table_name}` returns the DLQ count for a specific table.
- `GET /api/v2/pipelines/{name}/dlqs/{full_table_name}/records` returns DLQ records for a specific table with pagination, sort order control, and optional field projection.
- **Flush target endpoint in API v2**: Added `POST /api/v2/pipelines/{name}/flush-target/{target_name}` so you can flush a target Redis database through the API.
- **Automated TypeScript SDK generation**: RDI now supports automated generation of the TypeScript SDK from the latest OpenAPI definition, helping keep the SDK aligned with the API.

Contributor suggested removing the **Automated TypeScript SDK generation** bullet.

I would remove this and other mentions of the SDK. The SDK is not yet exposed to customers I believe, and exposing it would be a major product decision.

Collaborator Author: Yes, it is still private.

Collaborator Author: Addressed in bb7b285: removed the TypeScript SDK bullet and renamed the section to RDI API.


- **More reliable deploy task completion**: Fixed an issue where the operator could mark a deploy task as completed before the new pipeline was fully deployed, which could lead to incorrect pipeline status reporting.
- **Safer collector property handling**: Fixed a `NullPointerException` in the collector API when intercepted connection property maps contained null values or resolved to null. Null maps are now rejected earlier with a clear error instead of failing later in the pipeline.

Contributor suggested a change (replacing the second bullet above with):
- **Safer collector API property handling**: Fixed a `NullPointerException` in the collector API when connection property maps contained null values or resolved to null. Such cases are now rejected with a clear error.


In addition, I wouldn't mention NullPointerException and instead just say "an issue" or "an exception" or "a bug"

Collaborator Author: Addressed in bb7b285: shortened the collector API note and removed the NullPointerException wording.


- **Reloader image configuration for Helm installations**: The Helm chart's bundled Reloader controller, which watches ConfigMaps and Secrets and triggers rolling upgrades when they change, now defaults to `docker.io/redis/reloader` and can be configured explicitly with `reloader.reloader.deployment.image.name`. This fixes a gap where the Reloader image could still default to `ghcr.io/stakater/reloader` instead of following the Redis-hosted image convention used elsewhere in the chart. This is especially useful for private-registry and mirrored-image deployments, where users previously had to edit the `rdi-reloader` Deployment manually after installation.

Contributor suggested a change (replacing the bullet above with):
- **Reloader image configuration for Helm installations**: The Helm chart's bundled Reloader controller, which watches ConfigMaps and Secrets and triggers rolling upgrades when they change, now defaults to `docker.io/redis/reloader` and can be configured explicitly with `reloader.reloader.deployment.image.name`. This is especially useful for private-registry and mirrored-image deployments.

Shortened; lower-level details omitted.

Collaborator Author: Addressed in bb7b285: shortened the Reloader note and removed the lower-level fallback/default detail.
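
For anyone mapping the dotted path in this bullet into a values file, a minimal sketch; the `reloader.reloader.deployment.image.name` path and the `docker.io/redis/reloader` default come from the note above, while the mirror registry shown is an illustrative placeholder:

```yaml
reloader:
  reloader:
    deployment:
      image:
        # Defaults to docker.io/redis/reloader; override for private or mirrored registries.
        name: registry.example.com/redis/reloader   # illustrative mirror, not a real default
```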


### Security Updates

- **Security refresh across operator, collector API, and Fluentd images**: Refreshed dependencies and base packages to reduce Critical and High findings in the operator, collector API, and Fluentd images.

Contributor suggested a change (replacing the bullet above with):
- **Security updates across RDI images**: Updated dependencies and base packages to remove Critical and High CVEs in RDI images.

Shortened, generalized, better terminology.

Collaborator Author: Addressed in bb7b285: replaced this with a shorter consolidated security bullet using RDI image terminology.

### Security Updates

- **Security refresh across operator, collector API, and Fluentd images**: Refreshed dependencies and base packages to reduce Critical and High findings in the operator, collector API, and Fluentd images.
- **Dependency upgrades for CVE remediation**: Updated key dependencies including Spring Boot, Spring Framework, Spring Security, Netty, Kafka clients, MySQL Connector/J, containerd, `golang.org/x/crypto`, and Stakater Reloader.

Contributor suggested removing the second bullet.

Duplicate of the above, except for the Reloader, which could be mentioned in the above bullet as well via "Updated third-party images, dependencies, and base packages ..."

Collaborator Author: Addressed in bb7b285: removed the duplicate dependency-upgrade bullet and folded third-party images/dependencies/base packages into the consolidated security bullet.


## Limitations

RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity as RDI is not synchronous with the source database commits.

Contributor:
Do we need this? It looks like a leftover from previous release notes.

Collaborator Author: Ah yes, it's a copy-paste thing. Thanks!

Collaborator Author: Addressed in bb7b285: removed the copied Limitations section from these release notes.


### Breaking Changes

- **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart, and is only rendered when `processors.type` is `classic`. Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, Helm deletes the control-plane copy of the exporter resources before the operator recreates them under the pipeline release, resulting in a brief (seconds) gap in Prometheus scraping that does not affect the data path. ([RDSC-5004](https://redislabs.atlassian.net/browse/RDSC-5004))

Do we even expose setting the processor type to users yet? I see it in the docs, but is it explained anywhere (including the Helm values)?
The docs also say it's an enum with only the value `classic` 😅 https://redis.io/docs/latest/integrate/redis-data-integration/reference/config-yaml-reference/#processors-data-processing-configuration

If it's not (fully) exposed to customers yet, I would leave out the sentence mentioning `processors.type`, and maybe replace it with a note at the end that the metrics exporter is not required and not deployed for the upcoming Flink processor.

Collaborator Author: Well, technically users are able to switch to the Flink processor; it's just not the default yet.
We synced with Pieter that we should try to push the Flink processor for testing by our on-prem customers, ideally to hit issues there during testing/QA before the new customers on the Cloud.

Collaborator Author: Addressed in bb7b285: removed `processors.type` entirely and framed the behavior as the exporter not being deployed for Flink-based pipelines.



Comment on lines +53 to +61
```yaml
# prerequisiteChecks:
# Set to `false` to allow RDI deployment without AOF persistence on the RDI Redis database.
# Note: Disabling AOF persistence means data durability is not ensured across full cluster outages.
# This may affect streamed data, deployed configurations, and offsets.
# This option should only be used in environments where disk persistence cannot be enabled due to policy constraints.
# Defaults to `true` for backward compatibility and data durability.
# aofRequired: true
```

+1, I don't think this needs to be in the release notes at all. This block with the verbose description will be in the values file anyway

Collaborator Author: Addressed in bb7b285: shortened the release note and moved the Helm values example to the FAQ persistence section instead of keeping the verbose block here.


### Operations and Reliability

- **Optional AOF prerequisite check disablement**: A new `operator.prerequisiteChecks` section in the Helm values file lets you disable the AOF prerequisite check when the RDI database does not have AOF enabled. Use this carefully, because disabling the check can lead to data loss in some failure scenarios. For example:

Do we follow a convention for the point of view used in release notes text? Here, we are addressing the reader directly ("Use this carefully..."). I prefer a third-person neutral approach for release notes: "AOF should only be disabled after careful consideration, as it can lead to data loss in some failure scenarios".

Maybe it's worth deciding on the style we want to follow and then making sure we stick to it.

Collaborator Author: Addressed in bb7b285: changed the AOF wording to a neutral third-person style and moved the detailed Helm values example to the FAQ.

ZdravkoDonev-redis (Collaborator Author) left a comment:

Addressed release note review comments in bb7b285.


