Conversation
🛡️ Jit Security Scan Results: ✅ No security findings were detected in this PR.
Security scan by Jit
> ### Snowflake and Source Integration
>
> - **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status, state, and component health are now reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.
> - **Snowflake multi-schema capture**: In Helm-based installations, Snowflake sources can now capture from multiple schemas in a single pipeline.

Combine with previous note; Snowflake is net-new.

Addressed in bb7b285: combined the Snowflake notes so the release note presents Snowflake support as net-new and includes multi-schema capture in the same bullet.
> ### Breaking Changes
>
> - **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart, and is only rendered when `processors.type` is `classic`. Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, Helm deletes the control-plane copy of the exporter resources before the operator recreates them under the pipeline release, resulting in a brief (seconds) gap in Prometheus scraping that does not affect the data path. ([RDSC-5004](https://redislabs.atlassian.net/browse/RDSC-5004))

Suggested change:

> - **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart, and is only rendered when `processors.type` is `classic` (the default). Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, there will be a brief (seconds) gap in Prometheus scraping that does not affect the data path.

I guess we don't want references to internal Jira tickets in the release notes.

Addressed in bb7b285: removed the internal Jira reference, removed processors.type, shortened the upgrade wording, and kept only the Prometheus scrape gap plus the Flink-based pipeline note.
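To make the values migration in the note above concrete, a minimal before/after sketch of a custom values file might look like this. Only the `rdiMetricsExporter:` to `operator.dataPlane.metricsExporter:` key move is taken from the release note; the nested fields shown underneath are hypothetical placeholders, not the chart's actual schema:

```yaml
# Before (RDI <= 1.17): exporter values at the top level of the values file.
# rdiMetricsExporter:
#   image:                # hypothetical nested field, for illustration only
#     repository: docker.io/redis/rdi-metrics-exporter

# After (RDI 1.18): the same block moves under the operator's data plane.
operator:
  dataPlane:
    metricsExporter:
      image:              # hypothetical nested field, for illustration only
        repository: docker.io/redis/rdi-metrics-exporter
```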
> ### Snowflake and Source Integration
>
> - **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status, state, and component health are now reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.

Suggested change:

> - **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status and component health are reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.

Are there aspects that are not working correctly or are not reported correctly? If not, I wouldn't call out status and component health reporting. When reading "Snowflake support added", I expect that everything is working correctly. When reading "this aspect is working correctly", I start wondering which other aspects aren't. It's different, of course, if it's a bug fix, but here we're adding support for a new source.

Yeah, I agree, I'll reword it.

Addressed in bb7b285: removed the status/component-health wording and kept this as a net-new Snowflake source support note.
> ### Snowflake and Source Integration
>
> - **Snowflake source support for Helm installations**: RDI now supports Snowflake as a source in Helm-based installations. Status, state, and component health are now reported correctly for Snowflake sources. Snowflake sources are not yet supported for VM installations.
> - **Snowflake multi-schema capture**: In Helm-based installations, Snowflake sources can now capture from multiple schemas in a single pipeline.
> - **System truststore support**: RDI can optionally use well-known root CA certificates from the system truststore, reducing the need for manual certificate configuration for cloud-hosted source databases.
> - **Collector resource reservation controls**: A new `sources.advanced.resources` section lets you control memory and CPU reservation for the collector.
> - **CDC-readiness validation in API v2**: RDI API v2 can optionally validate whether a source database is ready for CDC as part of pipeline validation. This is available on pipeline create, update, and patch requests by using the `validate_cdc` query parameter, including dry-run requests. If validation fails, the API returns validation errors instead of applying the change. Coverage is still limited in this release and will expand over time.

- Instead of "coverage is still limited", perhaps we should mention which source databases are actually covered?
- Since this is for now just an API change, should we mention that clients (CLI, Redis Insight) are not yet able to use this, to avoid wrong expectations?

CLI: not able, yes. Redis Insight is also not able, because it's not yet implemented. Yeah, worth mentioning.

Addressed in bb7b285: listed the supported CDC validation sources explicitly (MariaDB, MySQL, PostgreSQL, SQL Server, Oracle, and MongoDB), called out that Spanner and Snowflake are not supported, and noted that this is API v2 only, not CLI or Redis Insight.
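As a sketch of how the `validate_cdc` parameter discussed above might surface in the v2 OpenAPI definition: the parameter name and pipeline-level scope are from the notes, but the exact path, operation, and schema shown here are assumptions for illustration only:

```yaml
# Hypothetical OpenAPI fragment -- illustrative only, not the actual RDI spec.
paths:
  /api/v2/pipelines/{name}:
    post:
      summary: Create or update a pipeline
      parameters:
        - name: validate_cdc
          in: query
          required: false
          schema:
            type: boolean
          description: >
            When true, validate that the source database is ready for CDC as
            part of pipeline validation; on failure the API returns validation
            errors instead of applying the change. Also honored on dry-run
            requests.
```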
> - `GET /api/v2/pipelines/{name}/dlqs/{full_table_name}` returns the DLQ count for a specific table.
> - `GET /api/v2/pipelines/{name}/dlqs/{full_table_name}/records` returns DLQ records for a specific table with pagination, sort order control, and optional field projection.
> - **Flush target endpoint in API v2**: Added `POST /api/v2/pipelines/{name}/flush-target/{target_name}` so you can flush a target Redis database through the API.
> - **Automated TypeScript SDK generation**: RDI now supports automated generation of the TypeScript SDK from the latest OpenAPI definition, helping keep the SDK aligned with the API.

I would remove this and other mentions of the SDK. I believe the SDK is not yet exposed to customers, and exposing it would be a major product decision.

Yes, it is still private.

Addressed in bb7b285: removed the TypeScript SDK bullet and renamed the section to RDI API.
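The two DLQ endpoints above could be exercised along these lines. The paths are taken from the notes, but the query-parameter names (`page`, `page_size`, `sort`, `fields`) are hypothetical stand-ins for the "pagination, sort order control, and optional field projection" the note mentions:

```yaml
# Hypothetical request sketches -- query parameter names are illustrative only.
dlq_count:
  method: GET
  path: /api/v2/pipelines/{name}/dlqs/{full_table_name}
dlq_records:
  method: GET
  path: /api/v2/pipelines/{name}/dlqs/{full_table_name}/records
  query:
    page: 1            # pagination (hypothetical parameter name)
    page_size: 50      # pagination (hypothetical parameter name)
    sort: desc         # sort order control (hypothetical parameter name)
    fields: key,error  # optional field projection (hypothetical parameter name)
```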
> - **More reliable deploy task completion**: Fixed an issue where the operator could mark a deploy task as completed before the new pipeline was fully deployed, which could lead to incorrect pipeline status reporting.
> - **Safer collector property handling**: Fixed a `NullPointerException` in the collector API when intercepted connection property maps contained null values or resolved to null. Null maps are now rejected earlier with a clear error instead of failing later in the pipeline.

Suggested change:

> - **Safer collector API property handling**: Fixed a `NullPointerException` in the collector API when connection property maps contained null values or resolved to null. Such cases are now rejected with a clear error.

In addition, I wouldn't mention `NullPointerException` and would instead just say "an issue", "an exception", or "a bug".

Addressed in bb7b285: shortened the collector API note and removed the NullPointerException wording.
> - **Reloader image configuration for Helm installations**: The Helm chart's bundled Reloader controller, which watches ConfigMaps and Secrets and triggers rolling upgrades when they change, now defaults to `docker.io/redis/reloader` and can be configured explicitly with `reloader.reloader.deployment.image.name`. This fixes a gap where the Reloader image could still default to `ghcr.io/stakater/reloader` instead of following the Redis-hosted image convention used elsewhere in the chart. This is especially useful for private-registry and mirrored-image deployments, where users previously had to edit the `rdi-reloader` Deployment manually after installation.

Suggested change:

> - **Reloader image configuration for Helm installations**: The Helm chart's bundled Reloader controller, which watches ConfigMaps and Secrets and triggers rolling upgrades when they change, now defaults to `docker.io/redis/reloader` and can be configured explicitly with `reloader.reloader.deployment.image.name`. This is especially useful for private-registry and mirrored-image deployments.

Shortened; lower-level details omitted.

Addressed in bb7b285: shortened the Reloader note and removed the lower-level fallback/default detail.
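For the private-registry case mentioned above, the override might look like this in a custom values file. The key path `reloader.reloader.deployment.image.name` is taken from the note; the registry hostname is a placeholder:

```yaml
# Point the bundled Reloader at a mirrored image (hostname is illustrative).
reloader:
  reloader:
    deployment:
      image:
        name: registry.example.com/redis/reloader
```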
> ### Security Updates
>
> - **Security refresh across operator, collector API, and Fluentd images**: Refreshed dependencies and base packages to reduce Critical and High findings in the operator, collector API, and Fluentd images.

Suggested change:

> - **Security updates across RDI images**: Updated dependencies and base packages to remove Critical and High CVEs in RDI images.

Shortened, generalized, better terminology.

Addressed in bb7b285: replaced this with a shorter consolidated security bullet using RDI image terminology.
> ### Security Updates
>
> - **Security refresh across operator, collector API, and Fluentd images**: Refreshed dependencies and base packages to reduce Critical and High findings in the operator, collector API, and Fluentd images.
> - **Dependency upgrades for CVE remediation**: Updated key dependencies including Spring Boot, Spring Framework, Spring Security, Netty, Kafka clients, MySQL Connector/J, containerd, `golang.org/x/crypto`, and Stakater Reloader.

This duplicates the bullet above, except for the Reloader, which could be folded into that bullet as well: "Updated third-party images, dependencies, and base packages ..."

Addressed in bb7b285: removed the duplicate dependency-upgrade bullet and folded third-party images/dependencies/base packages into the consolidated security bullet.
> ## Limitations
>
> RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity, as RDI is not synchronous with the source database commits.

Do we need this? It looks like a leftover from previous release notes.

Ah yes, it's a copy-paste thing. Thanks.

Addressed in bb7b285: removed the copied Limitations section from these release notes.
> ### Breaking Changes
>
> - **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart, and is only rendered when `processors.type` is `classic`. Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, Helm deletes the control-plane copy of the exporter resources before the operator recreates them under the pipeline release, resulting in a brief (seconds) gap in Prometheus scraping that does not affect the data path. ([RDSC-5004](https://redislabs.atlassian.net/browse/RDSC-5004))

Do we even expose setting the processor type to users already? I see it in the docs, but is it explained somewhere (including the Helm values)? The docs also say it's an enum with only the value `classic` 😅: https://redis.io/docs/latest/integrate/redis-data-integration/reference/config-yaml-reference/#processors-data-processing-configuration. If it's not (fully) exposed to customers yet, I would leave out the sentence mentioning `processors.type`, and maybe replace it with a note at the end that the metrics exporter is not required and not deployed for the upcoming Flink processor.

Well, technically users are able to switch to the Flink processor; it's just not the default yet. We synced with Pieter that we should try to push the Flink processor for testing by our on-prem customers, ideally to hit issues there during testing/QA before the new customers on the Cloud.

Addressed in bb7b285: removed processors.type entirely and framed the behavior as the exporter not being deployed for Flink-based pipelines.
> ```yaml
> # prerequisiteChecks:
> # Set to `false` to allow RDI deployment without AOF persistence on the RDI Redis database.
> # Note: Disabling AOF persistence means data durability is not ensured across full cluster outages.
> # This may affect streamed data, deployed configurations, and offsets.
> # This option should only be used in environments where disk persistence cannot be enabled due to policy constraints.
> # Defaults to `true` for backward compatibility and data durability.
> # aofRequired: true
> ```

+1, I don't think this needs to be in the release notes at all. This block with the verbose description will be in the values file anyway.

Addressed in bb7b285: shortened the release note and moved the Helm values example to the FAQ persistence section instead of keeping the verbose block here.
> ### Operations and Reliability
>
> - **Optional AOF prerequisite check disablement**: A new `operator.prerequisiteChecks` section in the Helm values file lets you disable the AOF prerequisite check when the RDI database does not have AOF enabled. Use this carefully, because disabling the check can lead to data loss in some failure scenarios. For example:

Do we follow a convention for the point of view used in release-notes text? Here, we are addressing the reader directly ("Use this carefully..."). I prefer a third-person neutral approach for release notes: "AOF should only be disabled after careful consideration, as it can lead to data loss in some failure scenarios." Maybe it's worth deciding on the style we want to follow and then making sure we stick to it.

Addressed in bb7b285: changed the AOF wording to a neutral third-person style and moved the detailed Helm values example to the FAQ.
ZdravkoDonev-redis left a comment:

Addressed release note review comments in bb7b285.
What changed

Adds the RDI 1.18.0 release-notes page under `content/integrate/redis-data-integration/release-notes/`.
Why this changed

The docs repo was missing the 1.18.0 RDI release-notes entry. The source changelog also needed product-facing wording and a few scope clarifications.
User impact
Users will see the 1.18.0 release notes in the RDI release-notes index with clearer guidance on feature scope, API surface, and operational caveats.
Root cause
The release-notes page had not yet been added to the docs repo for this version.
Validation

`git diff --cached --check`

Note: Low Risk
Low-risk, docs-only change that adds a new release-notes page and clarifies the Helm values syntax for disabling the AOF prerequisite check.
Overview
Adds the RDI 1.18.0 release notes page, documenting Snowflake source support for Helm installs, new API v2 endpoints (DLQ inspection, flush-target, CDC-readiness validation), operational/reliability updates, and security refreshes.
Updates the RDI FAQ to clarify the Helm values path for disabling the AOF prerequisite check (`operator.prerequisiteChecks.aofRequired: false`) and includes a YAML example.

Reviewed by Cursor Bugbot for commit bb7b285.