Merge tag 'REL_15_17' into merge-pg#12
Open
fizaaluthra wants to merge 390 commits into yugabyte:yb-pg15 from
Conversation
If a crash occurs while writing a WAL record that spans multiple pages, the recovery process marks the page with the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag. However, logical decoding currently attempts to read the full WAL record based on its expected size before checking this flag, which can lead to an infinite wait if the remaining data is never written (e.g., no activity after crash). This patch updates the logic to first read the page header and check for the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag before attempting to reconstruct the full WAL record. If the flag is set, decoding correctly identifies the record as incomplete and avoids waiting for WAL data that will never arrive. Discussion: https://postgr.es/m/CAAKRu_ZCOzQpEumLFgG_%2Biw3FTa%2BhJ4SRpxzaQBYxxM_ZAzWcA%40mail.gmail.com Discussion: https://postgr.es/m/CALDaNm34m36PDHzsU_GdcNXU0gLTfFY5rzh9GSQv%3Dw6B%2BQVNRQ%40mail.gmail.com Author: Vignesh C <vignesh21@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> Backpatch-through: 13
When pg_recvlogical receives a SIGHUP signal, it closes the current output file and reopens a new one. This is useful since it allows us to rotate the output file by renaming the current file and sending a SIGHUP. This behavior was previously undocumented. This commit adds the missing documentation. Back-patch to all supported versions. Author: Fujii Masao <masao.fujii@gmail.com> Reviewed-by: Shinya Kato <shinya11.kato@gmail.com> Discussion: https://postgr.es/m/0977fc4f-1523-4ecd-8a0e-391af4976367@oss.nttdata.com Backpatch-through: 13
ECPGconnect() caches established connections to the server, supporting the case of a NULL connection name when a database name is not specified by its caller. A follow-up call to ECPGget_PGconn() to get an established connection from the cached set with a non-NULL name could cause a NULL pointer dereference if a NULL connection was listed in the cache and checked for a match. At least two connections are necessary to reproduce the issue: one with a NULL name and one with a non-NULL name. Author: Aleksander Alekseev <aleksander@tigerdata.com> Discussion: https://postgr.es/m/CAJ7c6TNvFTPUTZQuNAoqgzaSGz-iM4XR61D7vEj5PsQXwg2RyA@mail.gmail.com Backpatch-through: 13
Solaris has never bothered to add "const" to the second argument of PAM conversation procs, as all other Unixen did decades ago. This resulted in an "incompatible pointer" compiler warning when building --with-pam, but had no more serious effect than that, so we never did anything about it. However, as of GCC 14 the case is an error not warning by default. To complicate matters, recent OpenIndiana (and maybe illumos in general?) *does* supply the "const" by default, so we can't just assume that platforms using our solaris template need help. What we can do, short of building a configure-time probe, is to make solaris.h #define _PAM_LEGACY_NONCONST, which causes OpenIndiana's pam_appl.h to revert to the traditional definition, and hopefully will have no effect anywhere else. Then we can use that same symbol to control whether we include "const" in the declaration of pam_passwd_conv_proc(). Bug: #18995 Reported-by: Andrew Watkins <awatkins1966@gmail.com> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/18995-82058da9ab4337a7@postgresql.org Backpatch-through: 13
If the number of sync requests is big enough, the palloc() call in AbsorbSyncRequests() will attempt to allocate more than 1 GB of memory, resulting in failure. This can lead to an infinite loop in the checkpointer process, as it repeatedly fails to absorb the pending requests. This commit limits the checkpointer requests queue size to 10M items. In addition to preventing the palloc() failure, this change helps to avoid long queue processing time. Also, this commit is for backpatching only. The master branch receives a more invasive yet comprehensive fix for this problem. Discussion: https://postgr.es/m/db4534f83a22a29ab5ee2566ad86ca92%40postgrespro.ru Backpatch-through: 13
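The arithmetic behind the failure can be sketched in standalone C. The 1 GB allocation cap mirrors palloc()'s documented limit and the 10M-item cap mirrors this commit; the request-struct layout is an assumption for illustration, not PostgreSQL's actual definition:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch, not PostgreSQL source: palloc() refuses allocations
 * of MaxAllocSize (just under 1 GB) or more, so an unbounded request queue
 * can make AbsorbSyncRequests() fail over and over. */
#define MAX_ALLOC_SIZE          ((size_t) 0x3fffffff)        /* palloc's ~1 GB cap */
#define MAX_CHECKPOINT_REQUESTS ((size_t) 10 * 1024 * 1024)  /* 10M-item cap from this commit */

/* Hypothetical fixed-size request entry; the real struct differs, but any
 * fixed entry size makes the point. */
typedef struct
{
    uint32_t    locator[4];
    uint32_t    forknum;
    uint32_t    segno;
} SyncRequest;                  /* 24 bytes */

static size_t
absorb_alloc_size(size_t num_requests)
{
    return num_requests * sizeof(SyncRequest);
}
```

With the cap in place the absorb allocation always stays well under the palloc limit; without it, a few tens of millions of queued requests push the allocation past 1 GB and the checkpointer loops on the failure.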
This mostly reverts commit 6082b3d, "Use xmlParseInNodeContext not xmlParseBalancedChunkMemory". It turns out that xmlParseInNodeContext will reject text chunks exceeding 10MB, while (in most libxml2 versions) xmlParseBalancedChunkMemory will not. The bleeding-edge libxml2 bug that we needed to work around a year ago is presumably no longer a factor, and the argument that xmlParseBalancedChunkMemory is semi-deprecated is not enough to justify a functionality regression. Hence, go back to doing it the old way. Reported-by: Michael Paquier <michael@paquier.xyz> Author: Michael Paquier <michael@paquier.xyz> Co-authored-by: Erik Wienhold <ewie@ewie.name> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/aIGknLuc8b8ega2X@paquier.xyz Backpatch-through: 13
This commit documents differences in the definition of word separators for the initcap function between libc and ICU locale providers. Backpatch to all supported branches. Discussion: https://postgr.es/m/804cc10ef95d4d3b298e76b181fd9437%40postgrespro.ru Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru> Backpatch-through: 13
When I prepared 71c0921 et al yesterday, I was thinking that the logic involving explicitly freeing the node_list output was still needed to dodge leakage bugs in libxml2. But I was misremembering: we introduced that only because with early 2.13.x releases we could not trust xmlParseBalancedChunkMemory's result code, so we had to look to see if a node list was returned or not. There's no reason to believe that xmlParseBalancedChunkMemory will fail to clean up the node list when required, so simplify. (This essentially completes reverting all the non-cosmetic changes in 6082b3d.) Reported-by: Jim Jones <jim.jones@uni-muenster.de> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/997668.1753802857@sss.pgh.pa.us Backpatch-through: 13
For many optional libraries, we extract the -L and -l switches needed to link the library from a helper program such as llvm-config. In some cases we put the resulting -L switches into LDFLAGS ahead of -L switches specified via --with-libraries. That risks breaking the user's intention for --with-libraries. It's not such a problem if the library's -L switch points to a directory containing only that library, but on some platforms a library helper may "helpfully" offer a switch such as -L/usr/lib that points to a directory holding all standard libraries. If the user specified --with-libraries in hopes of overriding the standard build of some library, the -L/usr/lib switch prevents that from happening since it will come before the user-specified directory. To fix, avoid inserting these switches directly into LDFLAGS during configure, instead adding them to LIBDIRS or SHLIB_LINK. They will still eventually get added to LDFLAGS, but only after the switches coming from --with-libraries. The same problem exists for -I switches: those coming from --with-includes should appear before any coming from helper programs such as llvm-config. We have not heard field complaints about this case, but it seems certain that a user attempting to override a standard library could have issues. The changes for this go well beyond configure itself, however, because many Makefiles have occasion to manipulate CPPFLAGS to insert locally-desirable -I switches, and some of them got it wrong. The correct ordering is any -I switches pointing at within-the-source-tree-or-build-tree directories, then those from the tree-wide CPPFLAGS, then those from helper programs. There were several places that risked pulling in a system-supplied copy of libpq headers, for example, instead of the in-tree files. (Commit cb36f8e fixed one instance of that a few months ago, but this exercise found more.)
The Meson build scripts may or may not have any comparable problems, but I'll leave it to someone else to investigate that. Reported-by: Charles Samborski <demurgos@demurgos.net> Author: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/70f2155f-27ca-4534-b33d-7750e20633d7@demurgos.net Backpatch-through: 13
The configure checks used two incorrect functions when checking the
presence of some routines in an environment:
- __get_cpuidex() for the check of __cpuidex().
- __get_cpuid() for the check of __cpuid().
This means that Postgres has never been able to detect the presence of
these functions, impacting environments where these exist, like Windows.
Simply fixing the function name does not work. For example, using
configure with MinGW on Windows causes the checks to detect all four of
__get_cpuid(), __get_cpuid_count(), __cpuidex() and __cpuid() to be
available, causing a compilation failure as this messes up with the
MinGW headers as we would include both <intrin.h> and <cpuid.h>.
The Postgres code expects only one in { __get_cpuid() , __cpuid() } and
one in { __get_cpuid_count() , __cpuidex() } to exist. This commit
reshapes the configure checks to do exactly what meson is doing, which
has been working well for us: check one, then the other, but never allow
both to be detected in a given build.
The logic has been wrong since 3dc2d62 and 792752a, where these
checks were introduced (the second case is most likely a copy-pasto
coming from the first case), with meson documenting that the configure
checks were broken. As far as I can see, they are not, once applied
consistently with what the code expects, but let's see if the buildfarm
has something different to say. The comment in meson.build is adjusted
as well, to reflect the new reality.
Author: Lukas Fittl <lukas@fittl.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aIgwNYGVt5aRAqTJ@paquier.xyz
Backpatch-through: 13
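The intended mutually exclusive detection can be sketched as a build-configuration fragment. The HAVE_* macro names match what configure/meson emit for these probes; the structure here is an illustration of the check-one-then-the-other rule, not the actual generated code:

```c
/* Within each pair, the build detects one function or the other, never both,
 * so <intrin.h> and <cpuid.h> are never pulled in together. */
#if defined(HAVE__GET_CPUID)
/* GCC/clang path: __get_cpuid() from <cpuid.h> */
#elif defined(HAVE__CPUID)
/* MSVC path: __cpuid() from <intrin.h> */
#endif

#if defined(HAVE__GET_CPUID_COUNT)
/* GCC/clang path: __get_cpuid_count() from <cpuid.h> */
#elif defined(HAVE__CPUIDEX)
/* MSVC path: __cpuidex() from <intrin.h> */
#endif
```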
Previously, we sorted rules by schema name and then rule name; if that wasn't unique, we sorted by rule OID. This can be problematic for comparing dumps from databases with different histories, especially since certain rule names like "_RETURN" are very common. Let's make the sort key schema name, rule name, table name, which should be unique. (This is the same behavior we've long used for triggers and RLS policies.) Andreas Karlsson This back-patches v18 commit 350e6b8 to all supported branches. The next commit will assert that pg_dump provides a stable sort order for all object types. That assertion would fail without stabilizing DO_RULE order as this commit did. Discussion: https://postgr.es/m/b4e468d8-0cd6-42e6-ac8a-1d6afa6e0cf1@proxel.se Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com Backpatch-through: 13-17
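The new tie-breaking can be sketched as a plain comparator. The struct and function names below are illustrative only; pg_dump's real sort works over its DumpableObject machinery:

```c
#include <assert.h>
#include <string.h>

/* Sort key for rules after this change: (schema, rule name, table name).
 * With common rule names like "_RETURN", the table name is what makes the
 * key unique, so OID no longer influences the order. */
typedef struct
{
    const char *nspname;        /* schema name */
    const char *rulename;       /* rule name, e.g. "_RETURN" */
    const char *tablename;      /* new tie-breaker instead of OID */
} RuleSortKey;

static int
rule_cmp(const RuleSortKey *a, const RuleSortKey *b)
{
    int         cmp = strcmp(a->nspname, b->nspname);

    if (cmp == 0)
        cmp = strcmp(a->rulename, b->rulename);
    if (cmp == 0)
        cmp = strcmp(a->tablename, b->tablename);
    return cmp;
}
```

Two "_RETURN" rules in the same schema now order by their table names rather than by whichever OID each database happened to assign.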
pg_dump sorts objects by their logical names, e.g. (nspname, relname, tgname), before dependency-driven reordering. That removes one source of logically-identical databases differing in their schema-only dumps. In other words, it helps with schema diffing. The logical name sort ignored essential sort keys for constraints, operators, PUBLICATION ... FOR TABLE, PUBLICATION ... FOR TABLES IN SCHEMA, operator classes, and operator families. pg_dump's sort then depended on object OID, yielding spurious schema diffs. After this change, OIDs affect dump order only in the event of catalog corruption. While pg_dump also wrongly ignored pg_collation.collencoding, CREATE COLLATION restrictions have been keeping that imperceptible in practical use. Use techniques like we use for object types already having full sort key coverage. Where the pertinent queries weren't fetching the ignored sort keys, this adds columns to those queries and stores those keys in memory for the long term. The ignorance of sort keys became more problematic when commit 172259a added a schema diff test sensitive to it. Buildfarm member hippopotamus witnessed that. However, dump order stability isn't a new goal, and this might avoid other dump comparison failures. Hence, back-patch to v13 (all supported versions). Reviewed-by: Robert Haas <robertmhaas@gmail.com> Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com Backpatch-through: 13
A deadlock can occur when the DDL command and the apply worker acquire catalog locks in different orders while dropping replication origins. The issue is rare in PG16 and higher branches because, in most cases, the tablesync worker performs the origin drop in those branches, and its locking sequence does not conflict with DDL operations. This patch ensures consistent lock acquisition to prevent such deadlocks. As per buildfarm. Reported-by: Alexander Lakhin <exclusion@gmail.com> Author: Ajin Cherian <itsajin@gmail.com> Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com> Reviewed-by: vignesh C <vignesh21@gmail.com> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com> Backpatch-through: 14, where it was introduced Discussion: https://postgr.es/m/bab95e12-6cc5-4ebb-80a8-3e41956aa297@gmail.com
Currently, ALTER DATABASE/ROLE/SYSTEM RESET [ALL] with an unknown custom GUC with a prefix reserved by MarkGUCPrefixReserved() errors (unless a superuser runs a RESET ALL variant). This is problematic for cases such as an extension library upgrade that removes a GUC. To fix, simply make sure the relevant code paths explicitly allow it. Note that we require superuser or privileges on the parameter to reset it. This is perhaps a bit more restrictive than is necessary, but it's not clear whether further relaxing the requirements is safe. Oversight in commit 8810356. The ALTER SYSTEM fix is dependent on commit 2d870b4, which first appeared in v17. Unfortunately, back-patching that commit would introduce ABI breakage, and while that breakage seems unlikely to bother anyone, it doesn't seem worth the risk. Hence, the ALTER SYSTEM part of this commit is omitted on v15 and v16. Reported-by: Mert Alev <mert@futo.org> Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at> Discussion: https://postgr.es/m/18964-ba09dea8c98fccd6%40postgresql.org Backpatch-through: 15
It was not very clear that the triggers are only allowed on plain tables (not foreign tables). Also, rephrase the documentation for better readability. Follow up to commit 9e6104c. Reported-by: Etsuro Fujita <etsuro.fujita@gmail.com> Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com> Discussion: https://postgr.es/m/CAPmGK16XBs9ptNr8Lk4f-tJZogf6y-Prz%3D8yhvJbb_4dpsc3mQ%40mail.gmail.com Backpatch-through: 13
… messages.
Previously, when running pgbench in pipeline mode with a custom script
that triggered retriable errors (e.g., serialization errors),
an assertion failure could occur:
Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3515.
The root cause was that pgbench incorrectly assumed only a single
pipeline sync message would be received at the end. In reality,
multiple pipeline sync messages can be sent and must be handled properly.
This commit fixes the issue by updating pgbench to correctly process
multiple pipeline sync messages, preventing the assertion failure.
Back-patch to v15, where the bug was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Discussion: https://postgr.es/m/CAHGQGwFAX56Tfx+1ppo431OSWiLLuW72HaGzZ39NkLkop6bMzQ@mail.gmail.com
Backpatch-through: 15
It's possible to use a CHECK (col IS NOT NULL) constraint to skip scanning a table for nulls when adding a NOT NULL constraint on the same column. However, if the CHECK constraint is dropped on the same command that the NOT NULL is added, this fails, i.e., makes the NOT NULL addition slow. The best we can do about it at this stage is to document this so that users aren't taken by surprise. (In Postgres 18 you can directly add the NOT NULL constraint as NOT VALID instead, so there's no longer much use for the CHECK constraint, therefore no point in building mechanism to support the case better.) Reported-by: Andrew <psy2000usa@yahoo.com> Reviewed-by: David G. Johnston <david.g.johnston@gmail.com> Discussion: https://postgr.es/m/175385113607.786.16774570234342968908@wrigleys.postgresql.org
Introduced by 578b229. Author: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Tender Wang <tndrwang@gmail.com> Discussion: https://postgr.es/m/CAEZATCV_CzRSOPMf1gbHQ7xTmyrV6kE7ViCBD6B81WF7GfTAEA@mail.gmail.com Backpatch-through: 13
The result of "DirectFunctionCall1(numeric_float8, d)" is already in Datum form, but the code was incorrectly applying PG_RETURN_FLOAT8() to it. On machines where float8 is pass-by-reference, this would result in complete garbage, since an unpredictable pointer value would be treated as an integer and then converted to float. It's not entirely clear how much of a problem would ensue on 64-bit hardware, but certainly interpreting a float8 bitpattern as uint64 and then converting that to float isn't the intended behavior. As luck would have it, even the complete-garbage case doesn't break BRIN indexes, since the results are only used to make choices about how to merge values into ranges: at worst, we'd make poor choices resulting in an inefficient index. Doubtless that explains the lack of field complaints. However, users with BRIN indexes that use the numeric_minmax_multi_ops opclass may wish to reindex in hopes of making their indexes more efficient. Author: Peter Eisentraut <peter@eisentraut.org> Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/2093712.1753983215@sss.pgh.pa.us Backpatch-through: 14
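The bug class can be modeled in standalone C under the assumption of a pass-by-reference float8, where a Datum carries a pointer to the double rather than the double's bits (the types and helpers below are a simplified model, not PostgreSQL's fmgr code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified model: with pass-by-reference float8, a Datum is (in effect)
 * a pointer to an allocated double. */
typedef uintptr_t Datum;

static Datum
Float8GetDatum(double v)            /* box: allocate and point */
{
    double     *p = malloc(sizeof(double));

    *p = v;
    return (Datum) p;
}

static double
DatumGetFloat8(Datum d)             /* unbox: dereference */
{
    return *(double *) d;
}

/* Buggy pattern from the commit: the value is already a Datum, but the
 * caller re-wraps it as if it were a raw double, so the pointer value
 * itself gets converted to float. */
static Datum
buggy(Datum already_a_datum)
{
    return Float8GetDatum((double) already_a_datum);
}

/* Correct pattern: a DirectFunctionCall result is already a Datum, so
 * just return it as-is. */
static Datum
correct(Datum already_a_datum)
{
    return already_a_datum;
}
```

In this model the buggy path yields the heap address reinterpreted as a number, i.e. the "complete garbage" the commit describes on pass-by-reference builds.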
Fix commit f295494 to use consistent four-space indentation for verbose messages.
Recent ICU versions have added U_SHOW_CPLUSPLUS_HEADER_API, and we need to set this to zero as well to hide the ICU C++ APIs from pg_locale.h Per discussion, we want cpluspluscheck to work cleanly in backbranches, so backpatch both this and its predecessor commit ed26c4e to all supported versions. Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/1115793.1754414782%40sss.pgh.pa.us Backpatch-through: 13
This reverts commit 1fe9e38. That commit was a documentation improvement, not a bug fix. We don't normally backpatch such changes. Discussion: https://postgr.es/m/d8eacbeb8194c578a98317b86d7eb2ef0b6eb0e0.camel%40j-davis.com
Use Min(NBuffers, MAX_CHECKPOINT_REQUESTS) instead of NBuffers in CheckpointerShmemSize() to match the actual array size limit set in CheckpointerShmemInit(). This prevents wasting shared memory when NBuffers > MAX_CHECKPOINT_REQUESTS. Also, fix the comment. Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/1439188.1754506714%40sss.pgh.pa.us Author: Xuneng Zhou <xunengzhou@gmail.com> Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
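The size/init agreement can be sketched as follows; the Min() cap mirrors the commit, while the entry struct is a placeholder rather than the real CheckpointerRequest:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CHECKPOINT_REQUESTS (10 * 1024 * 1024)
#define Min(x, y) ((x) < (y) ? (x) : (y))

typedef struct
{
    int         payload[6];     /* placeholder entry */
} CheckpointerRequest;

static size_t
checkpointer_shmem_size(int NBuffers)
{
    /* Before the fix this used NBuffers directly, so with
     * NBuffers > MAX_CHECKPOINT_REQUESTS the size estimate exceeded the
     * array CheckpointerShmemInit() actually creates, wasting shared
     * memory. */
    int         max_requests = Min(NBuffers, MAX_CHECKPOINT_REQUESTS);

    return sizeof(CheckpointerRequest) * (size_t) max_requests;
}
```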
Although the "Floating-Point Types" section says that "float" data type is taken to mean "double precision", this information was not reflected in the data type table that lists all data type aliases. Reported-by: alexander.kjall@hafslund.no Author: Euler Taveira <euler@eulerto.com> Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/175456294638.800.12038559679827947313@wrigleys.postgresql.org Backpatch-through: 13
The backport of commit f295494 introduced a format string using %m. This is not wrong, since those have been supported since commit d6c55de, but only commit 2c8118e later introduced their use in this file. This use introduces a gratuitously different translatable string and also makes it inconsistent with the rest of the file. To avoid that, switch this back to the old-style strerror() route in the appropriate backbranches.
Dropping twice a pgstats entry should not happen, and the error report generated was missing the "generation" counter (tracking when an entry is reused) that has been added in 818119a. Like d92573a, backpatch down to v15 where this information is useful to have, to gather more information from instances where the problem shows up. A report has shown that this error path has been reached on a standby based on 17.3, for a relation stats entry and an OID close to wraparound. Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com> Discussion: https://postgr.es/m/CAN4RuQvYth942J2+FcLmJKgdpq6fE5eqyFvb_PuskxF2eL=Wzg@mail.gmail.com Backpatch-through: 15
Commit 9e6104c disallowed transition tables on foreign tables, but failed to account for cases where a foreign table is a child table of a partitioned/inherited table on which transition tables exist, leading to incorrect transition tuples collected from such foreign tables for queries on the parent table triggering transition capture. This occurred not only for inherited UPDATE/DELETE but for partitioned INSERT later supported by commit 3d956d9, which should have handled it at least for the INSERT case, but didn't. To fix, modify ExecAR*Triggers to throw an error if the given relation is a foreign table requesting transition capture. Also, this commit fixes make_modifytable so that in case of an inherited UPDATE/DELETE triggering transition capture, FDWs choose normal operations to modify child foreign tables, not DirectModify; which is needed because they would otherwise skip the calls to ExecAR*Triggers at execution, causing unexpected behavior. Author: Etsuro Fujita <etsuro.fujita@gmail.com> Reviewed-by: Amit Langote <amitlangote09@gmail.com> Discussion: https://postgr.es/m/CAPmGK14QJYikKzBDCe3jMbpGENnQ7popFmbEgm-XTNuk55oyHg%40mail.gmail.com Backpatch-through: 13
This function is called from ATExecAttachPartition/ATExecAddInherit, which prevent tables with row-level triggers with transition tables from becoming partitions or inheritance children, to check if there is such a trigger on the given table, but failed to check if a found trigger is row-level, causing the caller functions to needlessly prevent a table with only a statement-level trigger with transition tables from becoming a partition or inheritance child. Repair. Oversight in commit 501ed02. Author: Etsuro Fujita <etsuro.fujita@gmail.com> Discussion: https://postgr.es/m/CAPmGK167mXzwzzmJ_0YZ3EZrbwiCxtM1vogH_8drqsE6PtxRYw%40mail.gmail.com Backpatch-through: 13
A corrupted string could cause code that iterates with pg_mblen() to overrun its buffer. Fix, by converting all callers to one of the following: 1. Callers with a null-terminated string now use pg_mblen_cstr(), which raises an "illegal byte sequence" error if it finds a terminator in the middle of the sequence. 2. Callers with a length or end pointer now use either pg_mblen_with_len() or pg_mblen_range(), for the same effect, depending on which of the two seems more convenient at each site. 3. A small number of cases pre-validate a string, and can use pg_mblen_unbounded(). The traditional pg_mblen() function and COPYCHAR macro still exist for backward compatibility, but are no longer used by core code and are hereby deprecated. The same applies to the t_isXXX() functions. Security: CVE-2026-2006 Backpatch-through: 14 Co-authored-by: Thomas Munro <thomas.munro@gmail.com> Co-authored-by: Noah Misch <noah@leadboat.com> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi> Reported-by: Paul Gerste (as part of zeroday.cloud) Reported-by: Moritz Sanft (as part of zeroday.cloud)
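The length-bounded idea behind the new pg_mblen_with_len()-style interface can be sketched for UTF-8 only (PostgreSQL's real functions are encoding-generic and raise an "illegal byte sequence" error; the helper names and error signaling here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Length implied by a UTF-8 lead byte. */
static int
utf8_char_len(unsigned char b)
{
    if (b < 0x80)
        return 1;
    if ((b & 0xE0) == 0xC0)
        return 2;
    if ((b & 0xF0) == 0xE0)
        return 3;
    if ((b & 0xF8) == 0xF0)
        return 4;
    return 1;                   /* invalid lead byte; real code validates */
}

/* Returns the character length, or -1 if the sequence would overrun the
 * 'remaining' bytes -- the overrun this commit closes off.  The real
 * functions report the truncation via ereport(ERROR) instead. */
static int
mblen_with_len(const unsigned char *s, size_t remaining)
{
    int         len;

    if (remaining == 0)
        return -1;
    len = utf8_char_len(s[0]);
    return ((size_t) len <= remaining) ? len : -1;
}
```

A loop using this helper stops at the buffer boundary instead of stepping past it when a corrupted string ends mid-character.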
A security patch changed them today, so close the coverage gap now. Test that buffer overrun is avoided when pg_mblen*() requires more than the number of bytes remaining. This does not cover the calls in dict_thesaurus.c or in dict_synonym.c. That code is straightforward. To change that code's input, one must have access to modify installed OS files, so low-privilege users are not a threat. Testing this would likewise require changing installed share/postgresql/tsearch_data, which was enough of an obstacle to not bother. Security: CVE-2026-2006 Backpatch-through: 14 Co-authored-by: Thomas Munro <thomas.munro@gmail.com> Co-authored-by: Noah Misch <noah@leadboat.com> Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
pgp_sym_decrypt() and pgp_pub_decrypt() will raise such errors, while bytea variants will not. The existing "dat3" test decrypted to non-UTF8 text, so switch that query to bytea. The long-term intent is for type "text" to always be valid in the database encoding. pgcrypto has long been known as a source of exceptions to that intent, but a report about exploiting invalid values of type "text" brought this module to the forefront. This particular exception is straightforward to fix, with reasonable effect on user queries. Back-patch to v14 (all supported versions). Reported-by: Paul Gerste (as part of zeroday.cloud) Reported-by: Moritz Sanft (as part of zeroday.cloud) Author: shihao zhong <zhong950419@gmail.com> Reviewed-by: cary huang <hcary328@gmail.com> Discussion: https://postgr.es/m/CAGRkXqRZyo0gLxPJqUsDqtWYBbgM14betsHiLRPj9mo2=z9VvA@mail.gmail.com Backpatch-through: 14 Security: CVE-2026-2006
These data types are represented like full-fledged arrays, but functions that deal specifically with these types assume that the array is 1-dimensional and contains no nulls. However, there are cast pathways that allow general oid[] or int2[] arrays to be cast to these types, allowing these expectations to be violated. This can be exploited to cause server memory disclosure or SIGSEGV. Fix by installing explicit checks in functions that accept these types. Reported-by: Altan Birler <altan.birler@tum.de> Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2003 Backpatch-through: 14
An upcoming patch requires this cache so that it can track updates in the pg_extension catalog. So far though, the EXTENSIONOID cache only exists in v18 and up (see 490f869). We can add it in older branches without an ABI break, if we are careful not to disturb the numbering of existing syscache IDs. In v16 and before, that just requires adding the new ID at the end of the hand-assigned enum list, ignoring our convention about alphabetizing the IDs. But in v17, genbki.pl enforces alphabetical order of the IDs listed in MAKE_SYSCACHE macros. We can fake it out by calling the new cache ZEXTENSIONOID. Note that adding a syscache does change the required contents of the relcache init file (pg_internal.init). But that isn't problematic since we blow those away at postmaster start for other reasons. Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2004 Backpatch-through: 14-17
Selectivity estimators come in two flavors: those that make specific assumptions about the data types they are working with, and those that don't. Most of the built-in estimators are of the latter kind and are meant to be safely attachable to any operator. If the operator does not behave as the estimator expects, you might get a poor estimate, but it won't crash. However, estimators that do make datatype assumptions can malfunction if they are attached to the wrong operator, since then the data they get from pg_statistic may not be of the type they expect. This can rise to the level of a security problem, even permitting arbitrary code execution by a user who has the ability to create SQL objects. To close this hole, establish a rule that built-in estimators are required to protect themselves against being called on the wrong type of data. It does not seem practical however to expect estimators in extensions to reach a similar level of security, at least not in the near term. Therefore, also establish a rule that superuser privilege is required to attach a non-built-in estimator to an operator. We expect that this restriction will have little negative impact on extensions, since estimators generally have to be written in C and thus superuser privilege is required to create them in the first place. This commit changes the privilege checks in CREATE/ALTER OPERATOR to enforce the rule about superuser privilege, and fixes a couple of built-in estimators that were making datatype assumptions without sufficiently checking that they're valid. Reported-by: Daniel Firer as part of zeroday.cloud Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2004 Backpatch-through: 14
While the preceding commit prevented such attachments from occurring in future, this one aims to prevent further abuse of any already-created operator that exposes _int_matchsel to the wrong data types. (No other contrib module has a vulnerable selectivity estimator.) We need only check that the Const we've found in the query is indeed of the type we expect (query_int), but there's a difficulty: as an extension type, query_int doesn't have a fixed OID that we could hard-code into the estimator. Therefore, the bulk of this patch consists of infrastructure to let an extension function securely look up the OID of a datatype belonging to the same extension. (Extension authors have requested such functionality before, so we anticipate that this code will have additional non-security uses, and may soon be extended to allow looking up other kinds of SQL objects.) This is done by first finding the extension that owns the calling function (there can be only one), and then thumbing through the objects owned by that extension to find a type that has the desired name. This is relatively expensive, especially for large extensions, so a simple cache is put in front of these lookups. Reported-by: Daniel Firer as part of zeroday.cloud Author: Tom Lane <tgl@sss.pgh.pa.us> Reviewed-by: Noah Misch <noah@leadboat.com> Security: CVE-2026-2004 Backpatch-through: 14
Backpatch-through: 14 Security: CVE-2026-2006
On the CREATE POLICY page, the description of per-command policies stated that SELECT policies are applied when an INSERT has an ON CONFLICT DO NOTHING clause. However, that is only the case if it includes an arbiter clause, so clarify that. While at it, also clarify the comment in the regression tests that cover this. Author: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Viktor Holmberg <v@viktorh.net> Discussion: https://postgr.es/m/CAEZATCXGwMQ+x00YY9XYG46T0kCajH=21QaYL9Xatz0dLKii+g@mail.gmail.com Backpatch-through: 14
On the INSERT page, mention that SELECT privileges are also required for any columns mentioned in the arbiter clause, including those referred to by the constraint, and clarify that this applies to all forms of ON CONFLICT, not just ON CONFLICT DO UPDATE. Author: Dean Rasheed <dean.a.rasheed@gmail.com> Reviewed-by: Viktor Holmberg <v@viktorh.net> Discussion: https://postgr.es/m/CAEZATCXGwMQ+x00YY9XYG46T0kCajH=21QaYL9Xatz0dLKii+g@mail.gmail.com Backpatch-through: 14
The buildfarm occasionally shows a variant row order in the output of this UPDATE ... RETURNING, implying that the preceding INSERT dropped one of the rows into some free space within the table rather than appending them all at the end. It's not entirely clear why that happens some times and not other times, but we have established that it's affected by concurrent activity in other databases of the cluster. In any case, the behavior is not wrong; the test is at fault for presuming that a seqscan will give deterministic row ordering. Add an ORDER BY atop the update to stop the buildfarm noise. The buildfarm seems to have shown this only in v18 and master branches, but just in case the cause is older, back-patch to all supported branches. Discussion: https://postgr.es/m/3866274.1770743162@sss.pgh.pa.us Backpatch-through: 14
The pg_stat_activity view shows information for aux processes, but the pg_stat_get_backend_wait_event() and pg_stat_get_backend_wait_event_type() functions did not. To fix, call AuxiliaryPidGetProc(pid) if BackendPidGetProc(pid) returns NULL, like we do in pg_stat_get_activity(). In version 17 and above, it's a little silly to use those functions when we already have the ProcNumber at hand, but it was necessary before v17 because the backend ID was different from ProcNumber. I have other plans for wait_event_info on master, so it doesn't seem worth applying a different fix on different versions now. Reviewed-by: Sami Imseih <samimseih@gmail.com> Reviewed-by: Chao Li <li.evan.chao@gmail.com> Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Discussion: https://www.postgresql.org/message-id/c0320e04-6e85-4c49-80c5-27cfb3a58108@iki.fi Backpatch-through: 14
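The fallback pattern can be sketched in standalone C. BackendPidGetProc() and AuxiliaryPidGetProc() are the real PostgreSQL function names; the PGPROC model and the lookup tables below are stubs for illustration:

```c
#include <assert.h>
#include <stddef.h>

typedef struct PGPROC
{
    int         pid;
} PGPROC;

static PGPROC backends[] = {{101}, {102}};      /* regular backends (stub data) */
static PGPROC aux[] = {{201}};                  /* aux processes, e.g. checkpointer */

static PGPROC *
BackendPidGetProc(int pid)          /* stub for the real lookup */
{
    for (size_t i = 0; i < sizeof(backends) / sizeof(*backends); i++)
        if (backends[i].pid == pid)
            return &backends[i];
    return NULL;
}

static PGPROC *
AuxiliaryPidGetProc(int pid)        /* stub for the real lookup */
{
    for (size_t i = 0; i < sizeof(aux) / sizeof(*aux); i++)
        if (aux[i].pid == pid)
            return &aux[i];
    return NULL;
}

/* The fix: fall back to the auxiliary-process table when the backend
 * lookup fails, the same pattern pg_stat_get_activity() already uses. */
static PGPROC *
find_proc(int pid)
{
    PGPROC     *proc = BackendPidGetProc(pid);

    if (proc == NULL)
        proc = AuxiliaryPidGetProc(pid);
    return proc;
}
```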
If the variable's value is null, exec_stmt_return() missed filling in estate->rettype. This is a pretty old bug, but we'd managed not to notice because that value isn't consulted for a null result ... unless we have to cast it to a domain. That case led to a failure with "cache lookup failed for type 0".

The correct way to assign the data type is known by exec_eval_datum. While we could copy-and-paste that logic, it seems like a better idea to just invoke exec_eval_datum, as the ROW case already does.

Reported-by: Pavel Stehule <pavel.stehule@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFj8pRBT_ahexDf-zT-cyH8bMR_qcySKM8D5nv5MvTWPiatYGA@mail.gmail.com
Backpatch-through: 14
The prior order caused spurious Valgrind errors. They're spurious because the ereport(ERROR) non-local exit discards the pointer in question. pg_mblen_cstr() ordered the checks correctly, but these other two did not. Back-patch to v14, like commit 1e7fe06.

Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20260214053821.fa.noahmisch@microsoft.com
Backpatch-through: 14
Commit 1e7fe06 changed pg_mbstrlen_with_len() to ereport(ERROR) if the input ends in an incomplete character. Most callers want that. text_substring() does not. It detoasts the most bytes it could possibly need to get the requested number of characters. For example, to extract up to 2 chars from UTF8, it needs to detoast 8 bytes. In a string of 3-byte UTF8 chars, 8 bytes spans 2 complete chars and 1 partial char. Fix this by replacing this pg_mbstrlen_with_len() call with a string traversal that differs by stopping upon finding as many chars as the substring could need.

This also makes SUBSTRING() stop raising an encoding error if the incomplete char is past the end of the substring. This is consistent with the general philosophy of the above commit, which was to raise errors on a just-in-time basis. Before the above commit, SUBSTRING() never raised an encoding error.

SUBSTRING() has long been detoasting enough for one more char than needed, because it did not distinguish exclusive and inclusive end position. For avoidance of doubt, stop detoasting extra.

Back-patch to v14, like the above commit. For applications using SUBSTRING() on non-ASCII column values, consider applying this to your copy of any of the February 12, 2026 releases.

Reported-by: SATŌ Kentarō <ranvis@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Bug: #19406
Discussion: https://postgr.es/m/19406-9867fddddd724fca@postgresql.org
Backpatch-through: 14
The error message added in 379695d referred to the public key being too long. This is confusing, as it is in fact the session key included in a PGP message that is too long. This is harmless, but let's be precise about what is wrong.

Per offline report.

Reported-by: Zsolt Parragi <zsolt.parragi@percona.com>
Backpatch-through: 14
'latest_page_number' is set to the correct value, according to nextOffset, early at system startup. Contrary to the comment, it hence should be set up correctly by the time we get to WAL replay.

This fixes a failure to replay WAL generated on older minor versions, before commit 789d653 (18.2, 17.8, 16.12, 15.16, 14.21). The failure occurs after a truncation record has been replayed and looks like this:

FATAL: could not access status of transaction 858112
DETAIL: Could not read from file "pg_multixact/offsets/000D" at offset 24576: read too few bytes.
CONTEXT: WAL redo at 3/2A3AB408 for MultiXact/CREATE_ID: 858111 offset 6695072 nmembers 5: 1048228 (sh) 1048271 (keysh) 1048316 (sh) 1048344 (keysh) 1048370 (sh)

Reported-by: Sebastian Webber <sebastian@swebber.me>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/20260214090150.GC2297@p46.dedyn.io;lightning.p46.dedyn.io
Backpatch-through: 14-18
The receive function of hstore was not able to correctly handle duplicate key values when a new duplicate links to a NULL value: a pfree() could be attempted on a NULL pointer, crashing due to a NULL pointer dereference. This problem would happen for a COPY BINARY when stacking values like this:

aa => 5
aa => null

The second key/value pair is discarded and pfree() calls are attempted on its key and its value, leading to a NULL pointer dereference for the value part, as the value is NULL. The first key/value pair takes priority when a duplicate is found.

Per offline report.

Reported-by: "Anemone" <vergissmeinnichtzh@gmail.com>
Reported-by: "A1ex" <alex000young@gmail.com>
Backpatch-through: 14
Various buildfarm members, having compilers like gcc 8.5 and 6.3, fail to deduce that text_substring() variable "E" is initialized if slice_size != -1. This suppression approach quiets gcc 8.5; I did not reproduce the warning elsewhere. Back-patch to v14, like commit 9f4fd11.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1157953.1771266105@sss.pgh.pa.us
Backpatch-through: 14
Commit c67bef3 introduced this test helper function for use by src/test/regress/sql/encoding.sql, but its logic was incorrect. It confused an encoding ID with a boolean, so it gave the wrong results for some inputs, and also forgot the usual return macro. The mistake didn't affect values actually used in the test, so there is no change in behavior. Also drop it and another missed function at the end of the test, for consistency.

Author: Zsolt Parragi <zsolt.parragi@percona.com>
Backpatch-through: 14
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 6b4371ce13670e5660c8660b2984e780610b2a77
jasonyb approved these changes on Apr 8, 2026 and left a comment:
- nit: Can the message say "into yb-pg15" instead of "into merge-pg"?
- configure: the resolution notes could have simply justified this as the YB cherry-pick being of a commit that is already in REL_15_17, so the conflict almost always resolves in favor of upstream REL_15_17 by default
- configure.ac: same
- arrayfuncs.c: this is the trickiest after pg_restore.sgml because the YB commit is not straightforward. You say "Resolved by keeping HEAD's version (as it is a superset) with the extra FLOAT8OID and XIDOID cases.", but what is "HEAD"? I assume you mean yb-pg15. And I assume these types are not getting exercised.
- configure:
- `for ac_func in ...`
- Both sides modified the same `for ac_func in ...` line. YB cherry-pick b47a672 removed `strchrnul`. Upstream PG commit 0de9560 also removed `strchrnul`, and upstream PG commit 00652b3 removed `memset_s`.
- Resolved by taking REL_15_17's list (which has both removals).
- ac_fn_c_check_decl block (hunk 2):
- Adjacent-line conflict. YB cherry-pick b47a672 added an `ac_fn_c_check_decl "$LINENO" "strchrnul" ...` block above this location. Upstream PG commit 00652b3 added an `ac_fn_c_check_decl "$LINENO" "memset_s" ...` block immediately after the `strchrnul` block.
- Resolved by accepting REL_15_17's `memset_s` block.
- configure.ac:
- AC_CHECK_DECLS for memset_s:
- Adjacent-line conflict. YB cherry-pick b47a672 added the line with `strchrnul`.
- Upstream PG commit 00652b3 added `AC_CHECK_DECLS([memset_s], ...)` after the line with `strchrnul`.
- Resolved by accepting REL_15_17's `AC_CHECK_DECLS` addition.
- parallel_schedule:
- test line additions:
- YB cherry-pick f885e22 added `stats_import` to the parallel test line.
- Upstream PG commit 757bf81 added `encoding euc_kr` to the same parallel test line.
- Resolved by keeping both additions: `stats_import encoding euc_kr` appended to the test line. The resolved ordering matches PG master commit c67bef3.
- doc/src/sgml/ref/pg_restore.sgml:
- Option documentation:
- YB cherry-pick b84ef44 added `--with-data`, `--with-schema`, `--with-statistics` option documentation entries.
- Upstream PG commit 4240405 added `--restrict-key` option documentation.
- Resolved by keeping both. The correct ordering is REL_15_17's `--restrict-key` first (after `--no-tablespaces`), then yb-pg15's `--with-data/schema/statistics` entries after that. On PG master, by 6a46089 the `--with-*` options were well below `--no-tablespaces`, towards the end of the list. Later, 71ea0d6 added `--restrict-key` right after `--no-tablespaces`, so `--restrict-key` belongs before the `--with-*` entries.
- src/bin/pg_dump/pg_backup.h:
- RestoreOptions struct fields:
- YB cherry-picks 0873a75 added `dumpSchema` and `dumpData`; 2a5f853 added `dumpStatistics`.
- Upstream PG commit 4240405 added `restrict_key` char pointer field.
- Resolved by keeping both: yb-pg15's dump flags first, then REL_15_17's `restrict_key`. The resolved ordering and spacing matches PG master commit 71ea0d6.
- DumpOptions struct fields:
- Same pattern as RestoreOptions. Resolved identically.
- src/bin/pg_dump/pg_backup_archiver.c:
- Forward declarations:
- YB cherry-pick 2a5f853 changed `_printTocEntry` signature from `bool isData` to `const char *pfx` and kept the `sanitize_line` static forward declaration.
- Upstream PG commit 9751f93 moved `sanitize_line` from a file-local static to an `extern` in `dumputils.h`, removing the forward declaration from this file.
- Resolved by keeping yb-pg15's `_printTocEntry(AH, te, const char *pfx)` signature and dropping the `sanitize_line` forward declaration (now extern in `dumputils.h`).
- src/bin/pg_dump/pg_dump.c:
- getopt_long switch cases:
- YB cherry-pick 2a5f853 added cases 18-21 (`--statistics-only`, `--no-data`, `--no-schema`, `--no-statistics`). YB cherry-pick b84ef44 added cases 22-24 (`--with-data`, `--with-schema`, `--with-statistics`).
- Upstream PG commit 4240405 added case 25 for `--restrict-key`.
- Resolved by keeping both: yb-pg15's cases 18-24 first, then REL_15_17's case 25. The resolved ordering matches PG master commit 71ea0d6.
- src/bin/pg_dump/pg_dumpall.c:
- long_options array:
- YB cherry-pick 2a5f853 added `{"statistics-only", ...}`.
- Upstream PG commit 4240405 added `{"restrict-key", ...}`.
- Resolved by keeping both entries. The resolved ordering matches PG master commit 71ea0d6.
- src/backend/utils/adt/arrayfuncs.c:
- construct_array_builtin() switch cases:
- YB cherry-pick f885e22 imported the `construct_array_builtin()` function along with the `FLOAT8OID` and `XIDOID` cases.
- Upstream PG commit 13dd6f7 added `construct_array_builtin()` but without the `FLOAT8OID` and `XIDOID` cases.
- Resolved by keeping HEAD's version (as it is a superset) with the extra `FLOAT8OID` and `XIDOID` cases.
- src/bin/pg_dump/pg_restore.c:
- long_options array:
- YB cherry-pick 2a5f853 added `{"no-statistics", ...}`, `{"statistics-only", ...}`. YB cherry-pick b84ef44 added `{"with-data", ...}`, `{"with-schema", ...}`, `{"with-statistics", ...}`.
- Upstream PG commit 4240405 added `{"restrict-key", ...}`.
- Resolved by keeping both sets of entries. The resolved ordering matches PG master commit 71ea0d6.
- src/bin/pg_dump/t/002_pg_dump.pl:
- %full_runs:
- YB cherry-pick 2a5f853 added `no_statistics => 1` to the `%full_runs` hash.
- Upstream PG commit c8ed160 added `no_subscriptions => 1` and `no_subscriptions_restore => 1`.
- Resolved by keeping both additions in the hash. The resolved ordering matches PG master commit b54e8db.
- 'CREATE INDEX ON ONLY measurement' in %tests:
- YB cherry-pick e9c1b29 introduced `%full_runs` and `%dump_test_schema_runs` shorthand variables to replace repeated explicit hash entries.
- Upstream PG commit c8ed160 added `no_subscriptions` and `no_subscriptions_restore` to the explicit entry lists.
- Resolved by taking HEAD's `%full_runs` / `%dump_test_schema_runs` shorthand, which now includes the new subscription entries after resolving the `%full_runs` definition conflict above.
- 'ALTER INDEX ... ATTACH PARTITION (primary key)' in %tests:
- Same as above.
- src/test/recovery/t/027_stream_regress.pl:
- pg_dumpall command arguments (two conflicts, same pattern):
- YB cherry-pick 2a5f853 added `'--no-statistics'` to the pg_dumpall invocations.
- Upstream PG commit 4240405 added `'--restrict-key=test'` to the same invocations.
- Resolved by keeping both flags in each invocation. The resolved ordering matches PG master commit 71ea0d6.