
[fix](set) fix coredump of set op if total data size exceeds 4G (#61471) #61557

Merged

yiguolei merged 1 commit into apache:branch-4.0 from jacktengg:260320-pick-4.0 on Mar 20, 2026

Conversation

@jacktengg
Contributor

Pick PR: #61471

Problem Summary:
Root Cause Analysis

Root cause: in SetSinkOperatorX::sink(), build_block is overwritten multiple times, leaving the older entries in the hash table as dangling references.

The failure path

  1. build_block is overwritten

In set_sink_operator.cpp:52-56:

```cpp
if (eos || local_state._mutable_block.allocated_bytes() >= BUILD_BLOCK_MAX_SIZE) { // 4 GB
    build_block = local_state._mutable_block.to_block(); // overwrites build_block!
    RETURN_IF_ERROR(_process_build_block(local_state, build_block, state));
    local_state._mutable_block.clear();
}
```

When the total data size exceeds BUILD_BLOCK_MAX_SIZE (4 GB), this flush fires more than once:

  • First flush (when allocated_bytes >= 4 GB): build_block = batch1 (say, containing rows 0..N1); the hash table stores row_num = 0, 1, ..., N1
  • Second flush (at eos): build_block = batch2 (new data, rows 0..N2); batch1's data is destroyed. The hash table adds row_num = 0, 1, ..., N2
  2. The hash table stores only row_num, not a block reference

RowRefListWithFlags inherits from RowRef and stores only a uint32_t row_num (join_op.h:46); there is no block pointer or offset.

In hash_table_set_build.h:39, the value inserted at build time is Mapped {k}, i.e. the row number k.
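To make the layout concrete, here is a sketch of the entry shape described above, next to one possible block-aware alternative (both are hypothetical illustrations, not the actual Doris definitions):

```cpp
#include <cassert>
#include <cstdint>

// Shape of the stored entry as described in the analysis (illustrative):
// only a row number, with no block pointer or offset.
struct RowRefLike {
    uint32_t row_num;
};

// One possible fix direction (hypothetical): also record which flushed block
// the row belongs to, so multiple build blocks can be retained and addressed.
struct BlockAwareRowRef {
    uint32_t block_index; // index into a vector of retained build blocks
    uint32_t row_num;     // row within that block
};
```

The point of the second shape is that a row reference stays resolvable even after several flushes, at the cost of keeping all flushed blocks alive until the probe phase finishes.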

  3. The output stage uses a single build_block

In set_source_operator.cpp:161-162:

```cpp
auto& column = *build_block.get_by_position(idx->second).column;
local_state._mutable_cols[idx->first]->insert_from(column, it->row_num);
```

At this point build_block is batch2 from the last flush, but entries in the hash table that came from batch1 may hold a row_num beyond batch2's row count.

  4. The out-of-bounds access causes SIGSEGV

When a batch1 row_num = X (with X greater than batch2's row count) is used in insert_from(column, X):

```cpp
// column_string.h:180-197
const size_t size_to_append = src.offsets[X] - src.offsets[X - 1]; // out-of-bounds read → garbage value
const size_t offset = src.offsets[X - 1];                          // garbage value
// ...
memcpy(..., &src.chars[offset], size_to_append); // garbage offset → access to unmapped memory → SIGSEGV
```
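The out-of-bounds offsets lookup can be modeled with a bounds-checked stand-in (a simplified model, assuming offsets[i] marks the end position of row i, as in ColumnString-style layouts; this is illustrative code, not the Doris implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Simplified model of a string column's offsets array.
struct StringColumnModel {
    std::vector<uint32_t> offsets; // offsets[i] = end position of row i

    size_t rows() const { return offsets.size(); }

    // Bounds-checked length lookup; the crash described above corresponds to
    // performing this computation with row >= rows() and no check at all.
    uint32_t length_at(size_t row) const {
        if (row >= rows()) {
            throw std::out_of_range("row_num beyond block");
        }
        uint32_t start = (row == 0) ? 0 : offsets[row - 1];
        return offsets[row] - start;
    }
};
```

In the real crash there is no such check: the subtraction reads past the end of the offsets buffer, and the resulting garbage offset and length feed directly into memcpy.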

None

  • Test

    • Regression test
    • Unit Test
    • Manual test (add detailed scripts or steps below)
    • No need to test or manual test. Explain why:
      • This is a refactor/code format and no logic has been changed.
      • Previous test can cover this change.
      • No code files have been changed.
      • Other reason

  • Behavior changed:

    • No.
    • Yes.
  • Does this need documentation?

    • No.
    • Yes.

Check List (For Reviewer who merge this PR)

  • Confirm the release note

  • Confirm test cases

  • Confirm document

  • Add branch pick label

What problem does this PR solve?

Issue Number: close #xxx

Related PR: #xxx

Problem Summary:

Release note

None

Check List (For Author)

  • Test

    • Regression test
    • Unit Test
    • Manual test (add detailed scripts or steps below)
    • No need to test or manual test. Explain why:
      • This is a refactor/code format and no logic has been changed.
      • Previous test can cover this change.
      • No code files have been changed.
      • Other reason
  • Behavior changed:

    • No.
    • Yes.
  • Does this need documentation?

    • No.
    • Yes.

Check List (For Reviewer who merge this PR)

  • Confirm the release note
  • Confirm test cases
  • Confirm document
  • Add branch pick label

@jacktengg jacktengg requested a review from yiguolei as a code owner March 20, 2026 08:06
@jacktengg
Contributor Author

run buildall

@jacktengg
Contributor Author

run buildall

@doris-robot

BE UT Coverage Report

Increment line coverage 100.00% (21/21) 🎉

Increment coverage report
Complete coverage report

| Category          | Coverage               |
|-------------------|------------------------|
| Function Coverage | 52.93% (19201/36278)   |
| Line Coverage     | 36.10% (178845/495385) |
| Region Coverage   | 32.75% (138675/423489) |
| Branch Coverage   | 33.70% (60222/178695)  |

@hello-stephen
Contributor

BE Regression && UT Coverage Report

Increment line coverage 100.00% (21/21) 🎉

Increment coverage report
Complete coverage report

| Category          | Coverage               |
|-------------------|------------------------|
| Function Coverage | 71.36% (25342/35513)   |
| Line Coverage     | 54.04% (267229/494499) |
| Region Coverage   | 51.57% (220599/427763) |
| Branch Coverage   | 53.09% (95206/179338)  |

@yiguolei yiguolei merged commit 1f94ec5 into apache:branch-4.0 Mar 20, 2026
26 of 28 checks passed
