Person of the week: https://postgresql.life/post/jan_karremans/
Ora2Pg 21.1, a tool for migrating Oracle databases to PostgreSQL, released. https://github.com/darold/ora2pg/blob/master/changelog
pgtt 2.3, an extension to implement global temporary tables, released. https://github.com/darold/pgtt/releases/tag/v2.3
SB Data Generator, a GUI tool for generating and populating databases with test data, released.
PostgreSQL jobs for April: https://archives.postgresql.org/pgsql-jobs/2021-04/
Planet PostgreSQL: https://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to david@fetter.org.
David Rowley pushed:
Cache if PathTarget and RestrictInfos contain volatile functions. Here we aim to reduce duplicate work done by contain_volatile_functions() by caching whether PathTargets and RestrictInfos contain any volatile functions the first time contain_volatile_functions() is called for them. Any future calls for these nodes just use the cached value rather than going to the trouble of recursively checking the sub-node all over again. Thanks to Tom Lane for the idea. Any locations in the code which make changes to a PathTarget or RestrictInfo which could change the outcome of the volatility check must change the cached value back to VOLATILITY_UNKNOWN again. contain_volatile_functions() is the only code in charge of setting the cache value to either VOLATILITY_VOLATILE or VOLATILITY_NOVOLATILE. Some existing code does benefit from this additional caching, however, this change is mainly aimed at an upcoming patch that must check for volatility during the join search. Repeated volatility checks in that case can become very expensive when the join search contains more than a few relations. Author: David Rowley Discussion: https://postgr.es/m/3795226.1614059027@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/f58b230ed0dba2a3d396794a2ec84541e321d92d
Adjust design of per-worker parallel seqscan data struct. The design of the data structures which allow storage of the per-worker memory during parallel seq scans were not ideal. The work done in 56788d215 required an additional data structure to allow workers to remember the range of pages that had been allocated to them for processing during a parallel seqscan. That commit added a void pointer field to TableScanDescData to allow heapam to store the per-worker allocation information. However putting the field there made very little sense given that we have AM specific structs for that, e.g. HeapScanDescData. Here we remove the void pointer field from TableScanDescData and add a dedicated field for this purpose to HeapScanDescData. Previously we also allocated memory for this parallel per-worker data for all scans, regardless if it was a parallel scan or not. This was just a wasted allocation for non-parallel scans, so here we make the allocation conditional on the scan being parallel. Also, add previously missing pfree() to free the per-worker data in heap_endscan(). Reported-by: Andres Freund Reviewed-by: Andres Freund Discussion: https://postgr.es/m/20210317023101.anvejcfotwka6gaa@alap3.anarazel.de https://git.postgresql.org/pg/commitdiff/af527705edc3fd0b335264d17e0521c05edc5cca
Allow users of simplehash.h to perform direct deletions. Previously simplehash.h only exposed a method to perform a hash table delete using the hash table key. This meant that the delete function had to perform a hash lookup in order to find the entry to delete. Here we add a new function so that users of simplehash.h can perform a hash delete directly using the entry pointer, thus saving the hash lookup. An upcoming patch that uses simplehash.h already has performed the hash lookup so already has the entry pointer. This change will allow the code in that patch to perform the hash delete without the code in simplehash.h having to perform an additional hash lookup. Author: David Rowley Reviewed-by: Andres Freund Discussion: https://postgr.es/m/CAApHDvqFLXXge153WmPsjke5VGOSt7Ez0yD0c7eBXLfmWxs3Kw@mail.gmail.com https://git.postgresql.org/pg/commitdiff/ff53d7b159b93ce9fc884897f9d96b97744781e2
Fix compiler warning in unistr function. Some compilers are not aware that elog/ereport ERROR does not return. https://git.postgresql.org/pg/commitdiff/efd9d92bb39c74c2aded64fc08e2d601ce20c39d
Allow estimate_num_groups() to pass back further details about the estimation. Here we add a new output parameter to estimate_num_groups() to allow it to inform the caller of additional, possibly useful information about the estimation. The new output parameter is a struct that currently contains just a single field with a set of flags. This was done rather than having the flags as an output parameter to allow future fields to be added without having to change the signature of the function at a later date when we want to pass back further information that might not be suitable to store in the flags field. It seems reasonable that one day in the future that the planner would want to know more about the estimation. For example, how many individual sets of statistics was the estimation generated from? The planner may want to take that into account if we ever want to consider risks as well as costs when generating plans. For now, there's only 1 flag we set in the flags field. This is to indicate if the estimation fell back on using the hard-coded constants in any part of the estimation. Callers may like to change their behavior if this is set, and this gives them the ability to do so. Callers may pass the flag pointer as NULL if they have no interest in obtaining any additional information about the estimate. We're not adding any actual usages of these flags here. Some follow-up commits will make use of this feature. Additionally, we're also not making any changes to add support for clauselist_selectivity() and clauselist_selectivity_ext(). However, if this is required in the future then the same struct being added here should be fine to use as a new output argument for those functions too. Author: David Rowley Discussion: https://postgr.es/m/CAApHDvqQqpk=1W-G_ds7A9CsXX3BggWj_7okinzkLVhDubQzjA@mail.gmail.com https://git.postgresql.org/pg/commitdiff/ed934d4fa30f0f94e6f7125ad2154e6a58d1c7f7
Add Result Cache executor node. Here we add a new executor node type named "Result Cache". The planner can include this node type in the plan to have the executor cache the results from the inner side of parameterized nested loop joins. This allows caching of tuples for sets of parameters so that in the event that the node sees the same parameter values again, it can just return the cached tuples instead of rescanning the inner side of the join all over again. Internally, result cache uses a hash table in order to quickly find tuples that have been previously cached. For certain data sets, this can significantly improve the performance of joins. The best cases for using this new node type are for join problems where a large portion of the tuples from the inner side of the join have no join partner on the outer side of the join. In such cases, hash join would have to hash values that are never looked up, thus bloating the hash table and possibly causing it to multi-batch. Merge joins would have to skip over all of the unmatched rows. If we use a nested loop join with a result cache, then we only cache tuples that have at least one join partner on the outer side of the join. The benefits of using a parameterized nested loop with a result cache increase when there are fewer distinct values being looked up and the number of lookups of each value is large. Also, hash probes to lookup the cache can be much faster than the hash probe in a hash join as it's common that the result cache's hash table is much smaller than the hash join's due to result cache only caching useful tuples rather than all tuples from the inner side of the join. This variation in hash probe performance is more significant when the hash join's hash table no longer fits into the CPU's L3 cache, but the result cache's hash table does. The apparent "random" access of hash buckets with each hash probe can cause a poor L3 cache hit ratio for large hash tables. Smaller hash tables generally perform better. The hash table used for the cache limits itself to not exceeding work_mem * hash_mem_multiplier in size. We maintain a dlist of keys for this cache and when we're adding new tuples and realize we've exceeded the memory budget, we evict cache entries starting with the least recently used ones until we have enough memory to add the new tuples to the cache. For parameterized nested loop joins, we now consider using one of these result cache nodes in between the nested loop node and its inner node. We determine when this might be useful based on cost, which is primarily driven off of what the expected cache hit ratio will be. Estimating the cache hit ratio relies on having good distinct estimates on the nested loop's parameters. For now, the planner will only consider using a result cache for parameterized nested loop joins. This works for both normal joins and also for LATERAL type joins to subqueries. It is possible to use this new node for other uses in the future. For example, to cache results from correlated subqueries. However, that's not done here due to some difficulties obtaining a distinct estimation on the outer plan to calculate the estimated cache hit ratio. Currently we plan the inner plan before planning the outer plan so there is no good way to know if a result cache would be useful or not since we can't estimate the number of times the subplan will be called until the outer plan is generated. 
The functionality being added here is newly introducing a dependency on the return value of estimate_num_groups() during the join search. Previously, during the join search, we only ever needed to perform selectivity estimations. With this commit, we need to use estimate_num_groups() in order to estimate what the hit ratio on the result cache will be. In simple terms, if we expect 10 distinct values and we expect 1000 outer rows, then we'll estimate the hit ratio to be 99%. Since cache hits are very cheap compared to scanning the underlying nodes on the inner side of the nested loop join, this will significantly reduce the planner's cost for the join. However, it's fairly easy to see here that things will go bad when estimate_num_groups() incorrectly returns a value that's significantly lower than the actual number of distinct values. If this happens then that may cause us to make use of a nested loop join with a result cache instead of some other join type, such as a merge or hash join. Our distinct estimations have been known to be a source of trouble in the past, so the extra reliance on them here could cause the planner to choose slower plans than it did prior to having this feature. Distinct values are also fairly hard to estimate accurately when several tables have been joined already or when a WHERE clause filters out a set of values that are correlated to the expressions we're estimating the number of distinct values for. For now, the costing we perform during query planning for result caches does put quite a bit of faith in the distinct estimations being accurate. When these are accurate, we should generally see faster execution times for plans containing a result cache. However, in the real world, we may find that we need to either change the costings to put less trust in the distinct estimations being accurate or perhaps even disable this feature by default. There's always an element of risk when we teach the query planner to do new tricks that it decides to use that new trick at the wrong time and causes a regression. Users may opt to get the old behavior by turning the feature off using the enable_resultcache GUC. Currently, this is enabled by default. It remains to be seen if we'll maintain that setting for the release. Additionally, the name "Result Cache" is the best name I could think of for this new node at the time I started writing the patch. Nobody seems to strongly dislike the name. A few people did suggest other names but no other name seemed to dominate in the brief discussion that there was about names. Let's allow the beta period to see if the current name pleases enough people. If there's some consensus on a better name, then we can change it before the release. Please see the 2nd discussion link below for the discussion on the "Result Cache" name. Author: David Rowley Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu Tested-By: Konstantin Knizhnik Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com https://git.postgresql.org/pg/commitdiff/b6002a796dc0bfe721db5eaa54ba9d24fd9fd416
Revert b6002a796. This removes "Add Result Cache executor node". It seems that something weird is going on with the tracking of cache hits and misses as highlighted by many buildfarm animals. It's not yet clear what the problem is as other parts of the plan indicate that the cache did work correctly, it's just the hits and misses that were being reported as 0. This is especially a bad time to have the buildfarm so broken, so reverting before too many more animals go red. Discussion: https://postgr.es/m/CAApHDvq_hydhfovm4=izgWs+C5HqEeRScjMbOgbpC-jRAeK3Yw@mail.gmail.com https://git.postgresql.org/pg/commitdiff/28b3e3905c982c42fb10ee800e6f881e9742c89d
Add Result Cache executor node (take 2). Here we add a new executor node type named "Result Cache". The planner can include this node type in the plan to have the executor cache the results from the inner side of parameterized nested loop joins. This allows caching of tuples for sets of parameters so that in the event that the node sees the same parameter values again, it can just return the cached tuples instead of rescanning the inner side of the join all over again. Internally, result cache uses a hash table in order to quickly find tuples that have been previously cached. For certain data sets, this can significantly improve the performance of joins. The best cases for using this new node type are for join problems where a large portion of the tuples from the inner side of the join have no join partner on the outer side of the join. In such cases, hash join would have to hash values that are never looked up, thus bloating the hash table and possibly causing it to multi-batch. Merge joins would have to skip over all of the unmatched rows. If we use a nested loop join with a result cache, then we only cache tuples that have at least one join partner on the outer side of the join. The benefits of using a parameterized nested loop with a result cache increase when there are fewer distinct values being looked up and the number of lookups of each value is large. Also, hash probes to lookup the cache can be much faster than the hash probe in a hash join as it's common that the result cache's hash table is much smaller than the hash join's due to result cache only caching useful tuples rather than all tuples from the inner side of the join. This variation in hash probe performance is more significant when the hash join's hash table no longer fits into the CPU's L3 cache, but the result cache's hash table does. The apparent "random" access of hash buckets with each hash probe can cause a poor L3 cache hit ratio for large hash tables. Smaller hash tables generally perform better. The hash table used for the cache limits itself to not exceeding work_mem * hash_mem_multiplier in size. We maintain a dlist of keys for this cache and when we're adding new tuples and realize we've exceeded the memory budget, we evict cache entries starting with the least recently used ones until we have enough memory to add the new tuples to the cache. For parameterized nested loop joins, we now consider using one of these result cache nodes in between the nested loop node and its inner node. We determine when this might be useful based on cost, which is primarily driven off of what the expected cache hit ratio will be. Estimating the cache hit ratio relies on having good distinct estimates on the nested loop's parameters. For now, the planner will only consider using a result cache for parameterized nested loop joins. This works for both normal joins and also for LATERAL type joins to subqueries. It is possible to use this new node for other uses in the future. For example, to cache results from correlated subqueries. However, that's not done here due to some difficulties obtaining a distinct estimation on the outer plan to calculate the estimated cache hit ratio. Currently we plan the inner plan before planning the outer plan so there is no good way to know if a result cache would be useful or not since we can't estimate the number of times the subplan will be called until the outer plan is generated. 
The functionality being added here is newly introducing a dependency on the return value of estimate_num_groups() during the join search. Previously, during the join search, we only ever needed to perform selectivity estimations. With this commit, we need to use estimate_num_groups() in order to estimate what the hit ratio on the result cache will be. In simple terms, if we expect 10 distinct values and we expect 1000 outer rows, then we'll estimate the hit ratio to be 99%. Since cache hits are very cheap compared to scanning the underlying nodes on the inner side of the nested loop join, this will significantly reduce the planner's cost for the join. However, it's fairly easy to see here that things will go bad when estimate_num_groups() incorrectly returns a value that's significantly lower than the actual number of distinct values. If this happens then that may cause us to make use of a nested loop join with a result cache instead of some other join type, such as a merge or hash join. Our distinct estimations have been known to be a source of trouble in the past, so the extra reliance on them here could cause the planner to choose slower plans than it did prior to having this feature. Distinct values are also fairly hard to estimate accurately when several tables have been joined already or when a WHERE clause filters out a set of values that are correlated to the expressions we're estimating the number of distinct values for. For now, the costing we perform during query planning for result caches does put quite a bit of faith in the distinct estimations being accurate. When these are accurate, we should generally see faster execution times for plans containing a result cache. However, in the real world, we may find that we need to either change the costings to put less trust in the distinct estimations being accurate or perhaps even disable this feature by default. There's always an element of risk when we teach the query planner to do new tricks that it decides to use that new trick at the wrong time and causes a regression. Users may opt to get the old behavior by turning the feature off using the enable_resultcache GUC. Currently, this is enabled by default. It remains to be seen if we'll maintain that setting for the release. Additionally, the name "Result Cache" is the best name I could think of for this new node at the time I started writing the patch. Nobody seems to strongly dislike the name. A few people did suggest other names but no other name seemed to dominate in the brief discussion that there was about names. Let's allow the beta period to see if the current name pleases enough people. If there's some consensus on a better name, then we can change it before the release. Please see the 2nd discussion link below for the discussion on the "Result Cache" name. Author: David Rowley Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie Tested-By: Konstantin Knizhnik Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com https://git.postgresql.org/pg/commitdiff/9eacee2e62d89cab7b004f97c206c4fba4f1d745
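For a feel of the new node, here is a hedged sketch; the table names are hypothetical, and whether the planner actually picks a Result Cache depends on costs and statistics:

    SET enable_resultcache = on;  -- the default as of this commit
    EXPLAIN (COSTS OFF)
    SELECT *
    FROM orders o
    JOIN customers c ON c.id = o.customer_id;
    -- A qualifying plan would look roughly like:
    --   Nested Loop
    --     ->  Seq Scan on orders o
    --     ->  Result Cache
    --           Cache Key: o.customer_id
    --           ->  Index Scan using customers_pkey on customers c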
Attempt to fix unstable Result Cache regression tests. force_parallel_mode = regress is causing a few more problems than I thought. It seems that both the leader and the single worker can contribute to the execution. I had mistakenly thought that only the worker process would do any work. Since it's not deterministic as to which of the two processes will get a chance to work on the plan, it seems just better to disable force_parallel_mode for these tests. At least doing this seems better than changing to EXPLAIN only rather than EXPLAIN ANALYZE. Additionally, I overlooked the fact that the sub-plan below a Result Cache will execute a varying number of times depending on cache eviction. 32-bit machines will use less memory and evict fewer tuples from the cache. That results in the subnode being executed fewer times on 32-bit machines. Let's just blank out the number of loops in each node. https://git.postgresql.org/pg/commitdiff/a4fac4ffe8f8d543a10ac7debf1157e34963ece3
Remove useless Asserts in Result Cache code. Testing if an unsigned variable is >= 0 is pretty pointless. There's likely enough code in remove_cache_entry() to verify the cache memory accounting is correct in assert enabled builds. These Asserts were not adding much extra cover, even if they had been checking >= 0 on a signed variable. Reported-by: Andres Freund Discussion: https://postgr.es/m/20210402204734.6mo3nfacnljlicgn@alap3.anarazel.de https://git.postgresql.org/pg/commitdiff/1267d9862fc6a4f8cdc0ca38d1988b61f39da585
Peter Eisentraut pushed:
Reset standard_conforming_strings in strings test. After some tests relating to standard_conforming_strings behavior, the value was not reset to the default value. Therefore, the rest of the tests in that file ran with the nondefault setting, which affected the results of some tests. For clarity, reset the value and run the rest of the tests with the default setting again. https://git.postgresql.org/pg/commitdiff/ebedd0c78fc51c293abe56e99a18c67af14da0c9
Add unistr function. This allows decoding a string with Unicode escape sequences. It is similar to Unicode escape strings, but offers some more flexibility. Author: Pavel Stehule pavel.stehule@gmail.com Reviewed-by: Asif Rehman asifr.rehman@gmail.com Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRA5GnKT+gDVwbVRH2ep451H_myBt+NTz8RkYUARE9+qOQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/f37fec837ce8bf7af408ba66d32099e5a0182402
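For example (mirroring the example in the new documentation):

    SELECT unistr('d\0061t\+000061');
     unistr
    --------
     data
    (1 row)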
Clean up date_part tests a bit. Some tests for timestamp and timestamptz were in the date.sql test file. Move them to their appropriate files, or drop test cases that were already present there. https://git.postgresql.org/pg/commitdiff/efcc7572f532ea564fedc6359c2df43045ee7908
Add upper boundary tests for timestamp and timestamptz types. The existing regression tests only tested the lower boundary of the range supported by the timestamp and timestamptz types because "The upper boundary differs between integer and float timestamps, so no check". Since this is obsolete, add similar tests for the upper boundary. https://git.postgresql.org/pg/commitdiff/bc9f1afdebc98b490d0a00468d75e8e4d080afb0
Add tests for date_part of epoch near upper bound of timestamp range. This exercises a special case in the implementations of date_part('epoch', timestamp[tz]) that was previously not tested. https://git.postgresql.org/pg/commitdiff/6131ffc43ff3d2f566e93f017e56a09e4e717318
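A hedged example of the kind of case now exercised, near the documented upper bound of the timestamptz range (294276 AD):

    SELECT date_part('epoch', timestamptz '294276-12-31 23:59:59 UTC');
    -- roughly 9.2e12 seconds since 1970-01-01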
doc: Remove Cyrillic from unistr example. Not supported by PDF build right now, so let's do without it. https://git.postgresql.org/pg/commitdiff/287d2a97c1de07486e4525c8ad06258f04bd6268
Add errhint_plural() function and make use of it. Similar to existing errmsg_plural() and errdetail_plural(). Some errhint() calls hadn't received the proper plural treatment yet. https://git.postgresql.org/pg/commitdiff/91c5a8caaa61055959aa5fb68a00e5f690e39a34
Add p_names field to ParseNamespaceItem. ParseNamespaceItem had a wired-in assumption that p_rte->eref describes the table and column aliases exposed by the nsitem. This commit relaxes that assumption by creating a separate p_names field in the nsitem. This is mainly preparation for a patch for JOIN USING aliases, but it saves one indirection in common code paths, so it's possibly a win on its own. Author: Tom Lane tgl@sss.pgh.pa.us Discussion: https://www.postgresql.org/message-id/785329.1616455091@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/66392d396508c91c2ec07a61568bf96acb663ad8
Allow an alias to be attached to a JOIN ... USING. This allows something like SELECT ... FROM t1 JOIN t2 USING (a, b, c) AS x where x has the columns a, b, c and unlike a regular alias it does not hide the range variables of the tables being joined t1 and t2. Per SQL:2016 feature F404 "Range variable for common column names". Reviewed-by: Vik Fearing vik.fearing@2ndquadrant.com Reviewed-by: Tom Lane tgl@sss.pgh.pa.us Discussion: https://www.postgresql.org/message-id/flat/454638cf-d563-ab76-a585-2564428062af@2ndquadrant.com https://git.postgresql.org/pg/commitdiff/055fee7eb4dcc78e58672aef146334275e1cc40d
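A minimal sketch with hypothetical tables t1(a, b, c) and t2(a, b, d):

    SELECT x.a, x.b, t1.c, t2.d
    FROM t1 JOIN t2 USING (a, b) AS x;
    -- x exposes only the common columns a and b; t1 and t2 remain
    -- visible for their other columns, unlike with a regular alias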
Make extract(timetz) tests a bit more interesting. Use a time zone offset with nonzero minutes to make the timezone_minute test meaningful. https://git.postgresql.org/pg/commitdiff/e2639a767bfa1afebaf1877515a1187feb393443
Fix internal extract(timezone_minute) formulas. Through various refactorings over time, the extract(timezone_minute from time with time zone) and extract(timezone_minute from timestamp with time zone) implementations ended up with two different but equally nonsensical formulas by using SECS_PER_MINUTE and MINS_PER_HOUR interchangeably. Since those two are of course both the same number, the formulas do work, but for readability, fix them to be semantically correct. https://git.postgresql.org/pg/commitdiff/91e7c903291116bd081abe7d4a058d40a2a06e16
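For instance, using a zone offset with nonzero minutes (as the new tests now do):

    SELECT extract(timezone_hour   from timetz '12:00:00+05:30'),  -- 5
           extract(timezone_minute from timetz '12:00:00+05:30');  -- 30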
Add support for NullIfExpr in eval_const_expressions. Author: Hou Zhijie houzj.fnst@cn.fujitsu.com Discussion: https://www.postgresql.org/message-id/flat/7ea5ce773bbc4eea9ff1a381acd3b102@G08CNEXMBPEKD05.g08.fujitsu.local https://git.postgresql.org/pg/commitdiff/9c5f67fd6256246b2a788a8feb1d42b79dcd0448
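A small illustration: with constant arguments, the planner should now be able to fold the expression away at plan time, e.g.

    EXPLAIN (VERBOSE, COSTS OFF) SELECT NULLIF(1, 2);
    -- the plan's output list should now show the folded constant 1
    -- rather than the unevaluated NULLIF(1, 2)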
Andrew Dunstan pushed:
Allow matching the DN of a client certificate for authentication. Currently we only recognize the Common Name (CN) of a certificate's subject to be matched against the user name. Thus certificates with subjects '/OU=eng/CN=fred' and '/OU=sales/CN=fred' will have the same connection rights. This patch provides an option to match the whole Distinguished Name (DN) instead of just the CN. On any hba line using client certificate identity, there is an option 'clientname' which can have values of 'DN' or 'CN'. The default is 'CN', the current procedure. The DN is matched against the RFC2253 formatted DN, which looks like 'CN=fred,OU=eng'. This facility is probably best used in conjunction with an ident map. Discussion: https://postgr.es/m/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net Reviewed-By: Michael Paquier, Daniel Gustafsson, Jacob Champion https://git.postgresql.org/pg/commitdiff/6d7a6feac48b1970c4cd127ee65d4c487acbb5e9
Fix typo in 6d7a6feac4. Per gripe from Daniel Gustafsson https://git.postgresql.org/pg/commitdiff/1877c9ac3acc05cc787dd6392d073202f8c8ee21
Álvaro Herrera pushed:
psql: call clearerr() just before printing. We were never doing clearerr() on the output stream, which results in a message being printed after each result once an EOF is seen: could not print result table: Success This message was added by commit b03436994bcc (in the pg13 era); before that, the error indicator would never be examined. So backpatch only that far back, even though the actual bug (to wit: the fact that the error indicator is never cleared) is older. https://git.postgresql.org/pg/commitdiff/8d645a116ef6e04bfb03e259149b8e163dbdf50c
Improve PQtrace() output format. Transform the PQtrace output format from its ancient (and mostly useless) byte-level output format to a logical-message-level output, making it much more usable. This implementation allows the printing code to be written (as it indeed was) by looking at the protocol documentation, which gives more confidence that the three (docs, trace code and actual code) actually match. Author: 岩田 彩 (Aya Iwata) iwata.aya@fujitsu.com Reviewed-by: 綱川 貴之 (Takayuki Tsunakawa) tsunakawa.takay@fujitsu.com Reviewed-by: Kirk Jamison k.jamison@fujitsu.com Reviewed-by: Kyotaro Horiguchi horikyota.ntt@gmail.com Reviewed-by: Tom Lane tgl@sss.pgh.pa.us Reviewed-by: 黒田 隼人 (Hayato Kuroda) kuroda.hayato@fujitsu.com Reviewed-by: "Nagaura, Ryohei" nagaura.ryohei@jp.fujitsu.com Reviewed-by: Ryo Matsumura matsumura.ryo@fujitsu.com Reviewed-by: Greg Nancarrow gregn4422@gmail.com Reviewed-by: Jim Doty jdoty@pivotal.io Reviewed-by: Álvaro Herrera alvherre@alvh.no-ip.org Discussion: https://postgr.es/m/71E660EB361DF14299875B198D4CE5423DE3FBA4@g01jpexmbkw25 https://git.postgresql.org/pg/commitdiff/198b3716dba68544b55cb97bd120738a86d5df2d
libpq_pipeline: add PQtrace() support and tests. The libpq_pipeline program recently introduced by commit acb7e4eb6b1c is well equipped to test the PQtrace() functionality, so let's make it do that. Author: Álvaro Herrera alvherre@alvh.no-ip.org Discussion: https://postgr.es/m/20210327192812.GA25115@alvherre.pgsql https://git.postgresql.org/pg/commitdiff/7bebd0d00998a28449d83376f4bcdeec65d5eea6
Fix some libpq_pipeline test problems. Test pipeline_abort was not checking that it got the rows it expected in one mode; make it do so. This doesn't fix the actual problem (no idea what that is, yet) but at least it should make it more obvious rather than being visible only as a difference in the trace output. While at it, fix other infelicities in the test: * I reversed the order of result vs. expected in like(). * The output traces from -t are being put in the log dir, which means the buildfarm script uselessly captures them. Put them in a separate dir tmp_check/traces instead, to avoid cluttering the buildfarm results. * Test pipelined_insert was using too large a row count. Reduce that a tad and add a filler column to make each insert a little bulkier, while still keeping enough that a buffer is filled and we have to switch mode. https://git.postgresql.org/pg/commitdiff/db973ffb3ca43e65a0bf15175a35184a53bf977d
Disable force_parallel_mode in libpq_pipeline. Some buildfarm animals with force_parallel_mode=regress were failing this test because the error is reported in a parallel worker quicker than the rows that succeed. Take the opportunity to move the SET of lc_messages out of the traced section, because it's not very interesting. Diagnosed-by: Tom Lane tgl@sss.pgh.pa.us Discussion: https://postgr.es/m/3304521.1617221724@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/a6d3dea8e5e0c8a0df2f95d66b6c3903a4354ca0
Initialize conn->Pfdebug to NULL when creating a connection. Failing to do this can cause a crash, and I suspect is what has happened with a buildfarm member reporting mysterious failures. This is an ancient bug, but I'm not backpatching since evidently nobody cares about PQtrace in older releases. Discussion: https://postgr.es/m/3333908.1617227066@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/aba24b51cc1b045a9810458b4bb15fee2c182948
Remove setvbuf() call from PQtrace(). It's misplaced there -- it's not libpq's output stream to tweak in that way. In particular, POSIX says that it has to be called before any other operation on the file, so if a stream was previously used by the calling application, bad things may happen. Put setvbuf() in libpq_pipeline for good measure. Also, reduce fopen(..., "w+") to just fopen(..., "w") in libpq_pipeline.c. It's not clear that this fixes anything, but we don't use w+ anywhere. Per complaints from Tom Lane. Discussion: https://postgr.es/m/3337422.1617229905@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/6ec578e60101c3c02533f99715945a0400fb3286
libpq_pipeline: Must strdup(optarg) to avoid crash. I forgot to strdup() when processing argv[]. Apparently many platforms hide this mistake from users, but in those that don't you may get a program crash. Repair. Per buildfarm member drongo, which is the only one in all the buildfarm manifesting a problem here. While at it, move "numrows" processing out of the line of special cases, and make it getopt's -r instead. (A similar thing could be done to 'conninfo', but current use of the program doesn't warrant spending time on that -- nowhere else do we use conninfo in so simplistic a manner.) Discussion: https://postgr.es/m/20210401124850.GA19247@alvherre.pgsql https://git.postgresql.org/pg/commitdiff/dde1a35aee6266dc8105717275335c46cd2b3650
Fix setvbuf()-induced crash in libpq_pipeline. Windows doesn't like setvbuf(..., _IOLBF) and crashes if you use it, which has been causing the libpq_pipeline failures all along ... and our own port.h has known about it for a long time: it offers PG_IOLBF that's defined to _IONBF on that platform. Follow its advice. While at it, get rid of a bogus bitshift that used a constant of the wrong size. Decorate the constant as LL to fix. While at it, remove a pointless addition that only confused matters. All as diagnosed by Tom Lane. Discussion: https://postgr.es/m/3458958.1617302154@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/a68a894f0198aaeffa81b3027f135adcdaa8abf6
Etsuro Fujita pushed:
Update obsolete comment. Back-patch to all supported branches. Author: Etsuro Fujita Discussion: https://postgr.es/m/CAPmGK17DwzaSf%2BB71dhL2apXdtG-OmD6u2AL9Cq2ZmAR0%2BzapQ%40mail.gmail.com https://git.postgresql.org/pg/commitdiff/bc2797ebb14bae663da1ee7845774dd98716c0d0
Add support for asynchronous execution. This implements asynchronous execution, which runs multiple parts of a non-parallel-aware Append concurrently rather than serially to improve performance when possible. Currently, the only node type that can be run concurrently is a ForeignScan that is an immediate child of such an Append. In the case where such ForeignScans access data on different remote servers, this would run those ForeignScans concurrently, and overlap the remote operations to be performed simultaneously, so it'll improve the performance especially when the operations involve time-consuming ones such as remote join and remote aggregation. We may extend this to other node types such as joins or aggregates over ForeignScans in the future. This also adds the support for postgres_fdw, which is enabled by the table-level/server-level option "async_capable". The default is false. Robert Haas, Kyotaro Horiguchi, Thomas Munro, and myself. This commit is mostly based on the patch proposed by Robert Haas, but also uses stuff from the patch proposed by Kyotaro Horiguchi and from the patch proposed by Thomas Munro. Reviewed by Kyotaro Horiguchi, Konstantin Knizhnik, Andrey Lepikhov, Movead Li, Thomas Munro, Justin Pryzby, and others. Discussion: https://postgr.es/m/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com Discussion: https://postgr.es/m/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com Discussion: https://postgr.es/m/20200228.170650.667613673625155850.horikyota.ntt%40gmail.com https://git.postgresql.org/pg/commitdiff/27e1f14563cf982f1f4d71e21ef247866662a052
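A hedged sketch of enabling this in postgres_fdw (server and table names hypothetical):

    ALTER SERVER remote_a OPTIONS (ADD async_capable 'true');
    ALTER SERVER remote_b OPTIONS (ADD async_capable 'true');
    EXPLAIN (COSTS OFF)
    SELECT * FROM parted;  -- partitions are foreign tables on remote_a/remote_b
    -- Eligible children under the Append should now show up as
    -- "Async Foreign Scan", with their remote queries overlapped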
Amit Kapila pushed:
Add an xid argument to the filter_prepare callback for output plugins. Along with gid, this provides a different way to identify the transaction. Users that make use of the xid in some way when preparing transactions can use it to filter prepare transactions. The later commands COMMIT PREPARED or ROLLBACK PREPARED carry both identifiers, providing an output plugin the choice of what to use. Author: Markus Wanner Reviewed-by: Vignesh C, Amit Kapila Discussion: https://postgr.es/m/ee280000-7355-c4dc-e47b-2436e7be959c@enterprisedb.com https://git.postgresql.org/pg/commitdiff/f64ea6dc5c8ccaec9a3d3d39695ca261febb6883
Doc: Use consistent terminology for tablesync slots. At some places in the docs, we refer to them as tablesync slots and at other places as table synchronization slots. For consistency, we refer to them as table synchronization slots at all places. Author: Peter Smith Reviewed-by: Amit Kapila Discussion: https://postgr.es/m/CAHut+PvzYNKCeZ=kKBDkh3dw-r=2D3fk=nNc9SXSW=CZGk69xg@mail.gmail.com https://git.postgresql.org/pg/commitdiff/9f45631766bd0c51a74102770737ba3b0561977e
Remove extra semicolon in postgres_fdw tests. Author: Suraj Kharage Reviewed-by: Bharath Rupireddy, Vignesh C Discussion: https://postgr.es/m/CAF1DzPWRfxUeH-wShz7P_pK5Tx6M_nEK+TkS8gn5ngvg07Q5=g@mail.gmail.com https://git.postgresql.org/pg/commitdiff/13cb5bd84657ed49021fe6fc4ce46601c315c9a5
Ensure to send a prepare after we detect concurrent abort during decoding. It is possible that while decoding a prepared transaction, it gets aborted concurrently via a ROLLBACK PREPARED command. In that case, we were skipping all the changes and directly sending Rollback Prepared when we found it in the WAL. However, the downstream has no idea of the GID of such a transaction. So, ensure that a prepare is sent even when a concurrent abort is detected. Author: Ajin Cherian Reviewed-by: Markus Wanner, Amit Kapila Discussion: https://postgr.es/m/f82133c6-6055-b400-7922-97dae9f2b50b@enterprisedb.com https://git.postgresql.org/pg/commitdiff/4778826532a62fd6e4d3fdeef9532c943604c730
Tom Lane pushed:
Further tweaking of pg_dump's handling of default_toast_compression. As committed in bbe0a81db, pg_dump from a pre-v14 server effectively acts as though you'd said --no-toast-compression. I think the right thing is for it to act as though default_toast_compression is set to "pglz", instead, so that the tables' toast compression behavior is preserved. You can always get the other behavior, if you want that, by giving the switch. Discussion: https://postgr.es/m/1112852.1616609702@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/54bb91c30e3964fd81059e6b02e377cc9dd2d64c
Remove small inefficiency in ExecARDeleteTriggers/ExecARUpdateTriggers. Whilst poking at nodeModifyTable.c, I chanced to notice that while its calls to ExecBRTriggers and ExecIRTriggers are protected by tests to see if there are any relevant triggers to fire, its calls to ExecARTriggers are not; the latter functions do the equivalent tests themselves. This seems possibly reasonable given the more complex conditions involved, but what's less reasonable is that the ExecAR functions aren't careful to do no work when there is no work to be done. ExecARInsertTriggers gets this right, but the other two will both force creation of a slot that the query may have no use for. ExecARUpdateTriggers additionally performed a usually-useless ExecClearTuple() on that slot. This is probably all pretty microscopic in real workloads, but a cycle shaved is a cycle earned. https://git.postgresql.org/pg/commitdiff/65158f497a7d7523ad438b2034d01a560fafe6bd
Rework planning and execution of UPDATE and DELETE. This patch makes two closely related sets of changes: 1. For UPDATE, the subplan of the ModifyTable node now only delivers the new values of the changed columns (i.e., the expressions computed in the query's SET clause) plus row identity information such as CTID. ModifyTable must re-fetch the original tuple to merge in the old values of any unchanged columns. The core advantage of this is that the changed columns are uniform across all tables of an inherited or partitioned target relation, whereas the other columns might not be. A secondary advantage, when the UPDATE involves joins, is that less data needs to pass through the plan tree. The disadvantage of course is an extra fetch of each tuple to be updated. However, that seems to be very nearly free in context; even worst-case tests don't show it to add more than a couple percent to the total query cost. At some point it might be interesting to combine the re-fetch with the tuple access that ModifyTable must do anyway to mark the old tuple dead; but that would require a good deal of refactoring and it seems it wouldn't buy all that much, so this patch doesn't attempt it. 2. For inherited UPDATE/DELETE, instead of generating a separate subplan for each target relation, we now generate a single subplan that is just exactly like a SELECT's plan, then stick ModifyTable on top of that. To let ModifyTable know which target relation a given incoming row refers to, a tableoid junk column is added to the row identity information. This gets rid of the horrid hack that was inheritance_planner(), eliminating O(N^2) planning cost and memory consumption in cases where there were many unprunable target relations. Point 2 of course requires point 1, so that there is a uniform definition of the non-junk columns to be returned by the subplan. We can't insist on uniform definition of the row identity junk columns however, if we want to keep the ability to have both plain and foreign tables in a partitioning hierarchy. Since it wouldn't scale very far to have every child table have its own row identity column, this patch includes provisions to merge similar row identity columns into one column of the subplan result. In particular, we can merge the whole-row Vars typically used as row identity by FDWs into one column by pretending they are type RECORD. (It's still okay for the actual composite Datums to be labeled with the table's rowtype OID, though.) There is more that can be done to file down residual inefficiencies in this patch, but it seems to be committable now. FDW authors should note several API changes: * The argument list for AddForeignUpdateTargets() has changed, and so has the method it must use for adding junk columns to the query. Call add_row_identity_var() instead of manipulating the parse tree directly. You might want to reconsider exactly what you're adding, too. * PlanDirectModify() must now work a little harder to find the ForeignScan plan node; if the foreign table is part of a partitioning hierarchy then the ForeignScan might not be the direct child of ModifyTable. See postgres_fdw for sample code. * To check whether a relation is a target relation, it's no longer sufficient to compare its relid to root->parse->resultRelation. Instead, check it against all_result_relids or leaf_result_relids, as appropriate. 
Amit Langote and Tom Lane Discussion: https://postgr.es/m/CA+HiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/86dc90056dfdbd9d1b891718d2e5614e3e432f35
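A hedged sketch of the new plan shape on a hypothetical partitioned table:

    EXPLAIN (COSTS OFF)
    UPDATE parted SET val = val + 1 WHERE key < 10;
    -- Roughly:
    --   Update on parted
    --     Update on parted_1
    --     Update on parted_2
    --     ->  Append
    --           ->  Seq Scan on parted_1
    --           ->  Seq Scan on parted_2
    -- i.e. one SELECT-like subplan under a single ModifyTable node,
    -- rather than one full subplan per target relation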
Improve style of some replication-related error messages. Put the remote end's error message into the primary error string, instead of relegating it to errdetail(). Although this could end up being awkward if the remote sends us a really long error message, it seems more in keeping with our message style guidelines, and more helpful in situations where the errdetail could get dropped. Peter Smith Discussion: https://postgr.es/m/CAHut+Ps-Qv2yQceCwobQDP0aJOkfDzRFrOaR6+2Op2K=WHGeWg@mail.gmail.com https://git.postgresql.org/pg/commitdiff/6197db5340b8154adce1c6d07f6d3325547429c1
Suppress compiler warning in libpq_pipeline.c. Some compilers seem to be concerned about the possibility that recv_step is not any of the defined enum values. Silence warnings about uninitialized cmdtag in a different way than I did in 9fb9691a8. https://git.postgresql.org/pg/commitdiff/522d1a89f8d7ed45681988c60bd0a687332a4023
Don't prematurely cram a value into a short int. Since a4d75c86b, some buildfarm members have been warning that Assert(attnum <= MaxAttrNumber); is useless if attnum is an AttrNumber. I'm not certain how plausible it is that the value coming out of the bitmap could actually exceed MaxAttrNumber, but we seem to have thought that that was possible back in 7300a6995. Revert the intermediate variable to int so that we have the same overflow protection as before. https://git.postgresql.org/pg/commitdiff/c545e9524dcfcfce25c370f584b31562e8d7a4b7
Silence compiler warning in non-assert builds. Per buildfarm. https://git.postgresql.org/pg/commitdiff/8998e3cafa23632790787b8cc726998e84067259
Fix portability and safety issues in pqTraceFormatTimestamp. Remove confusion between time_t and pg_time_t; neither gettimeofday() nor localtime() deal in the latter. libpq indeed has no business using <pgtime.h> at all. Use snprintf not sprintf, to ensure we can't overrun the supplied buffer. (Unlikely, but let's be safe.) Per buildfarm. https://git.postgresql.org/pg/commitdiff/f1be740a991406d7885047beb971e1ff5dbe8b71
Fix unportable use of isprint(). We must cast the arguments of <ctype.h> functions to unsigned char to avoid problems where char is signed. Speaking of which, considering that this is a <ctype.h> function, it's rather remarkable that we aren't seeing more complaints about not having included that header. Per buildfarm. https://git.postgresql.org/pg/commitdiff/9e20406dd847d0f8c1cbd803786c6d0ad33bcbdd
Fix pg_restore's misdesigned code for detecting archive file format. Despite the clear comments pointing out that the duplicative code segments in ReadHead() and _discoverArchiveFormat() needed to be in sync, they were not: the latter did not bother to apply any of the sanity checks in the former. We'd missed noticing this partly because none of those checks would fail in scenarios we customarily test, and partly because the oversight would be masked if both segments execute, which they would in cases other than needing to autodetect the format of a non-seekable stdin source. However, in a case meeting all these requirements --- for example, trying to read a newer-than-supported archive format from non-seekable stdin --- pg_restore missed applying the version check and would likely dump core or otherwise misbehave. The whole thing is silly anyway, because there seems little reason to duplicate the logic beyond the one-line verification that the file starts with "PGDMP". There seems to have been an undocumented assumption that multiple major formats (major enough to require separate reader modules) would nonetheless share the first half-dozen fields of the custom-format header. This seems unlikely, so let's fix it by just nuking the duplicate logic in _discoverArchiveFormat(). Also get rid of the pointless attempt to seek back to the start of the file after successful autodetection. That wastes cycles and it means we have four behaviors to verify not two. Per bug #16951 from Sergey Koposov. This has been broken for decades, so back-patch to all supported versions. Discussion: https://postgr.es/m/16951-a4dd68cf0de23048@postgresql.org https://git.postgresql.org/pg/commitdiff/ec03f2df17a8ba5b431b34dd924e020a0be729f6
Rethink handling of pass-by-value leaf datums in SP-GiST. The existing convention in SP-GiST is that any pass-by-value datatype is stored in Datum representation, i.e. it's of width sizeof(Datum) even when typlen is less than that. This is okay, or at least it's too late to change it, for prefix datums and node-label datums in inner (upper) tuples. But it's problematic for leaf datums, because we'd prefer those to be stored in Postgres' standard on-disk representation so that we can easily extend leaf tuples to carry additional "included" columns. I believe, however, that we can get away with just up and changing that. This would be an unacceptable on-disk-format break, but there are two big mitigating factors: 1. It seems quite unlikely that there are any SP-GiST opclasses out there that use pass-by-value leaf datatypes. Certainly none of the ones in core do, nor has codesearch.debian.net heard of any. Given what SP-GiST is good for, it's hard to conceive of a use-case where the leaf-level values would be both small and fixed-width. (As an example, if you wanted to index text values with the leaf level being just a byte, then every text string would have to be represented with one level of inner tuple per preceding byte, which would be horrendously space-inefficient and slow to access. You always want to use as few inner-tuple levels as possible, leaving as much as possible in the leaf values.) 2. Even granting that you have such an index, this change only breaks things on big-endian machines. On little-endian, the high order bytes of the Datum format will now just appear to be alignment padding space. So, change the code to store pass-by-value leaf datums in their usual on-disk form. Inner-tuple datums are not touched. This is extracted from a larger patch that intends to add support for "included" columns. I'm committing it separately for visibility in our commit logs. Pavel Borisov and Tom Lane, reviewed by Andrey Borodin Discussion: https://postgr.es/m/CALT9ZEFi-vMp4faht9f9Junb1nO3NOSjhpxTmbm1UGLMsLqiEQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/1ebdec8c03294e55a9fdb6e676a9e8de680231cc
Strip file names reported in error messages on Windows, too. Commit dd136052b established a policy that error message FILE items should include only the base name of the reporting source file, for uniformity and succinctness. We now observe that some Windows compilers use backslashes in FILE strings, so truncate at backslashes as well. This is expected to fix some platform variation in the results of the new libpq_pipeline test module. Discussion: https://postgr.es/m/3650140.1617372290@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/53aafdb9ff6a561c7dea0f428a7c168f2b7e0f16
Improve psql's behavior when the editor is exited without saving. When editing the previous query buffer, if the editor is exited without modifying the temp file then clear the query buffer, rather than re-loading (and probably re-executing) the previous query buffer. This reduces the probability of accidentally re-executing something you didn't intend to. Similarly, in "\e file", if the file isn't actually modified then don't load it into the query buffer. And in "\ef" and "\ev", if no changes are made then clear the query buffer instead of loading the function or view definition into it. Cases where we fail to invoke the editor at all, or it returns a nonzero status, are treated like the no-file-modification case. Laurenz Albe, reviewed by Jacob Champion Discussion: https://postgr.es/m/0ba3f2a658bac6546d9934ab6ba63a805d46a49b.camel@cybertec.at https://git.postgresql.org/pg/commitdiff/55873a00e3c3349664e7215077dca74ccea08b4d
Fix confusion in SP-GiST between attribute type and leaf storage type. According to the documentation, the attType passed to the opclass config function (and also relied on by the core code) is the type of the heap column or expression being indexed. But what was actually being passed was the type stored for the index column. This made no difference for user-defined SP-GiST opclasses, because we weren't allowing the STORAGE clause of CREATE OPCLASS to be used, so the two types would be the same. But it's silly not to allow that, seeing that the built-in poly_ops opclass has a different value for opckeytype than opcintype, and that if you want to do lossy storage then the types must really be different. (Thus, user-defined opclasses doing lossy storage had to lie about what type is in the index.) Hence, remove the restriction, and make sure that we use the input column type not opckeytype where relevant. For reasons of backwards compatibility with existing user-defined opclasses, we can't quite insist that the specified leafType match the STORAGE clause; instead just add an amvalidate() warning if they don't match. Also fix some bugs that would only manifest when trying to return index entries when attType is different from attLeafType. It's not too surprising that these have not been reported, because the only usual reason for such a difference is to store the leaf value lossily, rendering index-only scans impossible. Add a src/test/modules module to exercise cases where attType is different from attLeafType and yet index-only scan is supported. Discussion: https://postgr.es/m/3728741.1617381471@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/ac9099fc1dd460bffaafec19272159dd7bc86f5b
Stephen Frost pushed:
Use a WaitLatch for vacuum/autovacuum sleeping. Instead of using pg_usleep() in vacuum_delay_point(), use a WaitLatch. This has the advantage that we will realize if the postmaster has been killed since the last time we decided to sleep while vacuuming. Reviewed-by: Thomas Munro Discussion: https://postgr.es/m/CAFh8B=kcdk8k-Y21RfXPu5dX=bgPqJ8TC3p_qxR_ygdBS=JN5w@mail.gmail.com https://git.postgresql.org/pg/commitdiff/4753ef37e0eda4ba0af614022d18fcbc5a946cc9
Add a docs section for obsoleted and renamed functions and settings. The new appendix groups information on renamed or removed settings, commands, etc into an out-of-the-way part of the docs. The original id elements are retained in each subsection to ensure that the same filenames are produced for HTML docs. This prevents /current/ links on the web from breaking, and allows users of the web docs to follow links from old version pages to info on the changes in the new version. Prior to this change, a link to /current/ for renamed sections like the recovery.conf docs would just 404. Similarly if someone searched for recovery.conf they would find the pg11 docs, but there would be no /12/ or /current/ link, so they couldn't easily find out that it was removed in pg12 or how to adapt. Index entries are also added so that there's a breadcrumb trail for users to follow when they know the old name, but not what we changed it to. So a user who is trying to find out how to set standby_mode in PostgreSQL 12+, or where pg_resetxlog went, now has more chance of finding that information. Craig Ringer and Stephen Frost Reviewed-by: Euler Taveira Discussion: https://postgr.es/m/CAGRY4nzPNOyYQ_1-pWYToUVqQ0ThqP5jdURnJMZPm539fdizOg%40mail.gmail.com Backpatch-through: 10 https://git.postgresql.org/pg/commitdiff/3b0c647bbfc52894d979976f1e6d60e40649bba7
Rename Default Roles to Predefined Roles. The term 'default roles' wasn't quite apt as these roles aren't able to be modified or removed after installation, so rename them to be 'Predefined Roles' instead, adding an entry into the newly added Obsolete Appendix to help users of current releases find the new documentation. Bruce Momjian and Stephen Frost Discussion: https://postgr.es/m/157742545062.1149.11052653770497832538%40wrigleys.postgresql.org and https://www.postgresql.org/message-id/20201120211304.GG16415@tamriel.snowman.net https://git.postgresql.org/pg/commitdiff/c9c41c7a337d3e2deb0b2a193e9ecfb865d8f52b
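Predefined roles are granted like any other role; for example (the grantee is hypothetical):

    GRANT pg_read_all_stats TO monitoring_user;
    GRANT pg_monitor        TO monitoring_user;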
Bruce Momjian pushed:
In messages, use singular nouns for -1, like we do for +1. This outputs "-1 year", not "-1 years". Reported-by: neverov.max@gmail.com Bug: 16939 Discussion: https://postgr.es/m/16939-cceeb03fb72736ee@postgresql.org https://git.postgresql.org/pg/commitdiff/5da9868ed983f95cc1cff80dcd81252a513774f8
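For example:

    SELECT interval '-1 year', interval '1 year';
    -- now prints:  -1 year  | 1 year
    -- previously:  -1 years | 1 year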
adjust dblink regression expected output for commit 5da9868ed9. Seems the -1/singular output is used in the dblink regression tests. Reported-by: Álvaro Herrera Discussion: https://postgr.es/m/20210330231506.GA10666@alvherre.pgsql https://git.postgresql.org/pg/commitdiff/9ee7d533dacf8594057ced2d016250f09056c284
doc: mention that intervening major releases can be skipped. Also mention that you should read the release notes of the intervening major releases. This change was also applied to the website. Discussion: https://postgr.es/m/20210330144949.GA8259@momjian.us Backpatch-through: 9.6 https://git.postgresql.org/pg/commitdiff/2bda93f813919b58225f5a0e282e10b98d7633d4
Use macro MONTHS_PER_YEAR instead of '12' in /ecpg/pgtypeslib. All other places already use MONTHS_PER_YEAR appropriately. Backpatch-through: 9.6 https://git.postgresql.org/pg/commitdiff/84bc2b17523ef485f102be7f00f7affb88f00f18
Michaël Paquier pushed:
Add support for --extension in pg_dump. When specified, only extensions matching the given pattern are included in dumps. Similarly to --table and --schema, when --strict-names is used, a perfect match is required. Also, like the two other options, this new option offers no guarantee that dependent objects have been dumped, so a restore may fail on a clean database. Tests are added in test_pg_dump/, checking a set of positive and negative cases, with or without an extension's contents added to the dump generated. Author: Guillaume Lelarge Reviewed-by: David Fetter, Tom Lane, Michael Paquier, Asif Rehman, Julien Rouhaud Discussion: https://postgr.es/m/CAECtzeXOt4cnMU5+XMZzxBPJ_wu76pNy6HZKPRBL-j7yj1E4+g@mail.gmail.com https://git.postgresql.org/pg/commitdiff/6568cef26e0f40c25ae54b8e20aad8d1410a854b
Fix comment in parsenodes.h. CreateStmt->inhRelations is a list of RangeVars, but a comment was incorrect about that. Author: Julien Rouhaud Discussion: https://postgr.es/m/20210330123015.yzekhz5sweqbgxdr@nol https://git.postgresql.org/pg/commitdiff/7ef64e7e72a65f191fc2f7d4bbe220f53dd8d5de
Move some client-specific routines from SSLServer to PostgresNode. test_connect_ok() and test_connect_fails() have always been part of the SSL tests, and check if a connection to the backend should work or not, and there are sanity checks done on specific error patterns dropped by libpq if the connection fails. This was fundamentally wrong on two aspects. First, SSLServer.pm works mostly on setting up and changing the SSL configuration of a PostgresNode, and has really nothing to do with the client. Second, the situation became worse in light of b34ca595, where the SSL tests would finish by using a psql command that may not come from the same installation as the node set up. This commit moves those client routines into PostgresNode, making easier the refactoring of SSLServer to become more SSL-implementation aware. This can also be reused by the ldap, kerberos and authentication test suites for connection checks, and a follow-up patch should extend those interfaces to match with backend log patterns. Author: Michael Paquier Reviewed-by: Andrew Dunstan, Daniel Gustafsson, Álvaro Herrera Discussion: https://postgr.es/m/YGLKNBf9zyh6+WSt@paquier.xyz https://git.postgresql.org/pg/commitdiff/0d1a33438d3a88938264e12e94c22818307d2f4d
doc: Clarify use of ACCESS EXCLUSIVE lock in various sections. Some sections of the documentation used "exclusive lock" to describe that an ACCESS EXCLUSIVE lock is taken during a given operation. This can be confusing to the reader, as ACCESS SHARE is still allowed while an EXCLUSIVE lock is held, which is not the case for the operations described in those parts of the documentation. Author: Greg Rychlewski Discussion: https://postgr.es/m/CAKemG7VptD=7fNWckFMsMVZL_zzvgDO6v2yVmQ+ZiBfc_06kCQ@mail.gmail.com Backpatch-through: 9.6 https://git.postgresql.org/pg/commitdiff/ffd3391ea94165fbb5adc9534894c62d41138505
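To illustrate the distinction being clarified (example mine; table name hypothetical), EXCLUSIVE still admits plain readers while ACCESS EXCLUSIVE does not:

    BEGIN;
    LOCK TABLE t IN EXCLUSIVE MODE;         -- concurrent SELECT (ACCESS SHARE) still allowed
    ROLLBACK;
    BEGIN;
    LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- blocks everything, including plain SELECT
    ROLLBACK;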
Improve stability of test with vacuum_truncate in reloptions.sql. This test has been using a simple VACUUM with pg_relation_size() to check if a relation gets physically truncated or not, but forgot the fact that some concurrent activity, like checkpoint buffer writes, could cause some pages to be skipped. The second test enabling vacuum_truncate could fail, seeing a non-empty relation. The first test would not have failed, but could finish by testing a behavior different from the one aimed for. Both tests gain a FREEZE option, to make the vacuums more aggressive and prevent page skips. This is similar to the issues fixed in c2dc1a7. Author: Arseny Sher Reviewed-by: Masahiko Sawada Discussion: https://postgr.es/m/87tuotr2hh.fsf@ars-thinkpad backpatch-through: 12 https://git.postgresql.org/pg/commitdiff/fe246d1c111d43fd60a1b0afff25ed09b7ae11eb
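The fix amounts to making the test's vacuums aggressive, roughly like this (sketch mine; table name hypothetical):

    VACUUM (FREEZE) reloptions_test;  -- FREEZE prevents concurrent activity from causing page skips
    SELECT pg_relation_size('reloptions_test');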
doc: Clarify how to generate backup files with non-exclusive backups. The current instructions describing how to write the backup_label and tablespace_map files are confusing. For example, opening a file in text mode on Windows and copy-pasting the file's contents would result in a failure at recovery because of the extra CRLF characters generated. The documentation did not state that clearly, and per discussion this is not considered a supported scenario. This commit extends the documentation a bit to mention that it may be necessary to open the file in binary mode before writing its data. Reported-by: Wang Shenhao Author: David Steele Reviewed-by: Andrew Dunstan, Magnus Hagander Discussion: https://postgr.es/m/8373f61426074f2cb6be92e02f838389@G08CNEXMBPEKD06.g08.fujitsu.local Backpatch-through: 9.6 https://git.postgresql.org/pg/commitdiff/6fb66c268df2de1112cac3cf0a6cf0a8b96ceaf0
Refactor HMAC implementations. Similarly to the cryptohash implementations, this refactors the existing HMAC code into a single set of APIs that can be plugged with any crypto libraries PostgreSQL is built with (only OpenSSL currently). If no such library is available, a fallback implementation is used. These new APIs are designed similarly to the existing cryptohash layer, so there is no real new design here, with the same logic around buffer bound checks and memory handling. HMAC has a dependency on cryptohashes, so all the cryptohash types supported by cryptohash{_openssl}.c can be used with HMAC. This refactoring is an advantage mainly for SCRAM, which included its own implementation of HMAC with SHA256 without relying on the existing crypto libraries even if PostgreSQL was built with their support. This code has been tested on Windows and Linux, with and without OpenSSL, across all the versions supported on HEAD from 1.1.1 down to 1.0.1. I have also checked that the implementations are working fine using some sample results, a custom extension of my own, and doing cross-checks across different major versions with SCRAM with the client and the backend. Author: Michael Paquier Reviewed-by: Bruce Momjian Discussion: https://postgr.es/m/X9m0nkEJEzIPXjeZ@paquier.xyz https://git.postgresql.org/pg/commitdiff/e6bdfd9700ebfc7df811c97c2fc46d7e94e329a2
Use more verbose matching patterns for errors in SSL TAP tests. The TAP tests of src/test/ssl/ have been using rather generic matching patterns to check some failure scenarios, like "SSL error" or just "FATAL". These were introduced in 081bfc1. Those messages are not wrong per se, but when working on the integration of new SSL libraries it becomes hard to know whether those errors are legitimate, and existing scenarios may fail in incorrect ways. This commit makes all those messages more verbose by adding the information generated by OpenSSL. Fortunately, the same error messages are used for all the versions supported on HEAD (checked by running the tests from 1.0.1 to 1.1.1), so the change is straightforward. Reported-by: Jacob Champion, Álvaro Herrera Discussion: https://postgr.es/m/YGU3AxQh0zBMMW8m@paquier.xyz https://git.postgresql.org/pg/commitdiff/8d3a4c3eae5367fba60ab77c159814defba784fe
Joe Conway pushed:
Fix has_column_privilege function corner case. According to the comments, when an invalid or dropped column oid is passed to has_column_privilege(), the intention has always been to return NULL. However, when the caller had table level privilege the invalid/missing column was never discovered, because table permissions were checked first. Fix that by introducing extended versions of pg_attribute_acl(check|mask) and pg_class_acl(check|mask) which take a new argument, is_missing. When is_missing is NULL, the old behavior is preserved. But when is_missing is passed by the caller, no ERROR is thrown for dropped or missing columns/relations, and is_missing is flipped to true. This in turn allows has_column_privilege to check for column privileges first, providing the desired semantics. Not backpatched since it is a user visible behavioral change with no previous complaints, and the fix is a bit on the invasive side. Author: Joe Conway Reviewed-By: Tom Lane Reported by: Ian Barwick Discussion: https://postgr.es/m/flat/9b5f4311-157b-4164-7fe7-077b4fe8ed84%40joeconway.com https://git.postgresql.org/pg/commitdiff/b12bd4869b5e64b742a69ca07915e2f77f85a9ae
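A minimal sketch of the corner case (names hypothetical, assuming the caller has table-level privilege):

    CREATE TABLE t (a int, b int);
    ALTER TABLE t DROP COLUMN b;
    SELECT has_column_privilege('t', 2::int2, 'SELECT');  -- attnum 2 is the dropped column
    -- before the fix: true, because table-level privilege was checked first
    -- after the fix:  NULL, as the comments always intended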
Clarify documentation of RESET ROLE. Command-line options, or previous "ALTER (ROLE|DATABASE) ... SET ROLE ..." commands, can change the value of the default role for a session. In the presence of one of these, RESET ROLE will change the current user identifier to the default role rather than the session user identifier. Fix the documentation to reflect this reality. Backpatch to all supported versions. Author: Nathan Bossart Reviewed-By: Laurenz Albe, David G. Johnston, Joe Conway Reported by: Nathan Bossart Discussion: https://postgr.es/m/flat/925134DB-8212-4F60-8AB1-B1231D750CB4%40amazon.com Backpatch-through: 9.6 https://git.postgresql.org/pg/commitdiff/174edbe9f9c1538ab3347474e96d176223591cd1
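A hedged sketch of the behavior now documented (roles alice and bob are hypothetical; alice must be a member of bob):

    ALTER ROLE alice SET ROLE bob;
    -- in a new session logged in as alice:
    SELECT current_user;  -- bob, the configured default role
    SET ROLE alice;
    RESET ROLE;
    SELECT current_user;  -- bob again, not the session user alice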
Heikki Linnakangas pushed:
Add 'noError' argument to encoding conversion functions. With the 'noError' argument, you can try to convert a buffer without knowing the character boundaries beforehand. The functions now need to return the number of input bytes successfully converted. This is a backwards-incompatible change if you have created a custom encoding conversion with CREATE CONVERSION. This adds a check to pg_upgrade for that, refusing the upgrade if there are any user-defined encoding conversions. Custom conversions are very rare; there are no commonly used extensions that I know of that use that feature. No other objects can depend on conversions, so if you do have one, you can fairly easily drop it before upgrading, and recreate it after the upgrade with an updated version. Add regression tests for built-in encoding conversions. This doesn't cover every conversion, but it covers all the internal functions in conv.c that are used to implement the conversions. Reviewed-by: John Naylor Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi https://git.postgresql.org/pg/commitdiff/ea1b99a6619cd9dcfd46b82ac0d926b0b80e0ae9
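Before such an upgrade, user-defined conversions can be located and dropped along these lines (sketch mine; conversion name hypothetical):

    SELECT n.nspname, c.conname
    FROM pg_conversion c
    JOIN pg_namespace n ON n.oid = c.connamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');
    DROP CONVERSION myconv;  -- recreate an updated version after the upgrade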
Do COPY FROM encoding conversion/verification in larger chunks. This gives a small performance gain, by reducing the number of calls to the conversion/verification function, and letting it work with larger inputs. Also, reorganizing the input pipeline makes it easier to parallelize the input parsing: after the input has been converted to the database encoding, the next stage of finding the newlines can be done in parallel, because there cannot be any newline chars "embedded" in multi-byte characters in the encodings that we support as server encodings. This changes behavior in one corner case: if client and server encodings are the same single-byte encoding (e.g. latin1), previously the input would not be checked for zero bytes ('\0'). Any fields containing zero bytes would be truncated at the zero. But if encoding conversion was needed, the conversion routine would throw an error on the zero. After this commit, the input is always checked for zeros. Reviewed-by: John Naylor Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi https://git.postgresql.org/pg/commitdiff/f82de5c46bdf8cd65812a7b04c9509c218e1545d
Robert Haas pushed:
… HeapTupleSatisfies* more closely to avoid coming to erroneous conclusions. Mark Dilger and Robert Haas Discussion: http://postgr.es/m/CA+Tgmob6sii0yTvULYJ0Vq4w6ZBmj7zUhddL3b+SKDi9z9NA7Q@mail.gmail.com https://git.postgresql.org/pg/commitdiff/3b6c1259f9ca8e21860aaf24ec6735a8e5598ea0
Fujii Masao pushed:
Fix typos in comments. Author: Masahiko Sawada Discussion: https://postgr.es/m/CAD21AoA1YL7t0nzVSEySx6zOaE7xO3r0jyu8hkitGL2_XbaMxQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/98e5bd103f887326e381c509c2fbe879ba3ea2f3
Fix pgstat_report_replslot() to use proper data types for its arguments. The caller of pgstat_report_replslot() passes int64 values to the function. Also the function stores those values in PgStat_Counter (i.e., int64) fields of PgStat_MsgReplSlot struct. But previously the function used "int" as the data types of some arguments for those values, which could lead to the overflow of values. To avoid this risk, this commit fixes pgstat_report_replslot() to use PgStat_Counter type for the arguments. Since they are the statistics counters, PgStat_Counter, the data type used for counters, is used for them instead of int64. Reported-by: Vignesh C Author: Vignesh C Reviewed-by: Jeevan Ladhe, Fujii Masao Discussion: https://postgr.es/m/CALDaNm080OpG=ZwOb0i8EyChH5SyHAMFWJCKaKTXmrfvJLbgaA@mail.gmail.com https://git.postgresql.org/pg/commitdiff/96bdb7e19de80a0c9521c5696455bca2a685c919
postgres_fdw: Add option to control whether to keep connections open. This commit adds a new option keep_connections that controls whether postgres_fdw keeps the connections to the foreign server open so that subsequent queries can re-use them. This option can only be specified for a foreign server. The default is on. If set to off, all connections to the foreign server will be discarded at the end of each transaction. Closed connections will be re-established as needed by future queries using a foreign table. This option is useful, for example, when users want to prevent the connections from eating up the foreign server's connection capacity. Author: Bharath Rupireddy Reviewed-by: Alexey Kondratov, Vignesh C, Fujii Masao Discussion: https://postgr.es/m/CALj2ACVvrp5=AVp2PupEm+nAC8S4buqR3fJMmaCoc7ftT0aD2A@mail.gmail.com https://git.postgresql.org/pg/commitdiff/b1be3074ac719ce8073fba35d4c8b52fb4ddd0c3
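Usage is a plain server option (server name hypothetical):

    ALTER SERVER loopback OPTIONS (ADD keep_connections 'off');
    -- connections to this server are now discarded at transaction end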
pg_checksums: Fix progress reporting. pg_checksums uses two counters, total size and current size, to calculate the progress. Previously the progress that pg_checksums reported could not reach 100% at the end. The cause of this issue was that the sizes of only pages excluding new ones in each file were counted as the current size while the size of each file is counted as the total size. That is, the total size of all new pages could be reported as the difference between the total size and current size. This commit fixes this issue by making pg_checksums count the sizes of all pages including new ones in each file as the current size. Back-patch to v12 where progress reporting was added to pg_checksums. Reported-by: Shinya Kato Author: Shinya Kato Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/TYAPR01MB289656B1ACA0A5E7CAD07BE3C47A9@TYAPR01MB2896.jpnprd01.prod.outlook.com https://git.postgresql.org/pg/commitdiff/2eb1fc8b1ae8b974007e85636fc7336a9b5d7222
Andres Freund pushed:
Split wait event related code from pgstat.[ch] into wait_event.[ch]. The wait event related code is independent from the rest of the pgstat.[ch] code, of nontrivial size and changes on a regular basis. Put it into its own set of files. As there doesn't seem to be a good pre-existing directory for code like this, add src/backend/utils/activity. Reviewed-By: Robert Haas robertmhaas@gmail.com Discussion: https://postgr.es/m/20210316195440.twxmlov24rr2nxrg@alap3.anarazel.de https://git.postgresql.org/pg/commitdiff/a333476b925134f6185037eaff3424c07a9f466f
Do not rely on pgstat.h to indirectly include storage/ headers. An upcoming patch might remove the (now indirect) proc.h include (which in turn includes other headers), and it's cleaner for the modified files to include their dependencies directly anyway... Discussion: https://postgr.es/m/20210402194458.2vu324hkk2djq6ce@alap3.anarazel.de https://git.postgresql.org/pg/commitdiff/1d9c5d0ce2dcac05850401cf266a9df10a68de49
Split backend status and progress related functionality out of pgstat.c. Backend status (supporting pg_stat_activity) and command progress (supporting pg_stat_progress*) related code is largely independent from the rest of pgstat.[ch] (supporting views like pg_stat_all_tables that accumulate data over time). See also a333476b925. This commit doesn't rename the functions to make the distinction from the rest of pgstat_ clearer - that'd be more invasive and not clearly beneficial. If we were to decide to do such a rename at some point, it's better done separately from moving the code. Robert's review was of an earlier version. Reviewed-By: Robert Haas robertmhaas@gmail.com Discussion: https://postgr.es/m/20210316195440.twxmlov24rr2nxrg@alap3.anarazel.de https://git.postgresql.org/pg/commitdiff/e1025044cd4e7f33f7304aed54d5778b8a82cd5d
Improve efficiency of wait event reporting, remove proc.h dependency. pgstat_report_wait_start() and pgstat_report_wait_end() required two conditional branches so far: one to check if MyProc is NULL, the other to check if pgstat_track_activities is set. As wait events are used around comparatively lightweight operations, and are inlined (reducing branch predictor effectiveness), that's not great. The dependency on MyProc has a second disadvantage: low-level subsystems, like storage/file/fd.c, report wait events, but architecturally it is preferable for them not to depend on inter-process subsystems like proc.h (defining PGPROC). After this change, including pgstat.h (or its sub-components like backend_status.h, wait_event.h, ...) does not pull in IPC-related headers anymore. These goals, efficiency and abstraction, are achieved by having pgstat_report_wait_start/end() not interact with MyProc, but with a new my_wait_event_info variable instead. At backend startup it points to a local variable, removing the need to check for MyProc being NULL. During process initialization my_wait_event_info is redirected to MyProc->wait_event_info, and at shutdown this is reversed. Because wait event reporting now does not need to know where the wait event is stored, it does not need to know about PGPROC anymore. The removal of the branch checking pgstat_track_activities is simpler: don't check anymore. The cost of the branch is often higher than that of the store - and even if not, pgstat_track_activities is rarely disabled. The main motivator to commit this work now is that removing the (indirect) pgproc.h include from pgstat.h simplifies a patch to move statistics reporting to shared memory (which still has a chance to get into 14). Author: Andres Freund andres@anarazel.de Discussion: https://postgr.es/m/20210402194458.2vu324hkk2djq6ce@alap3.anarazel.de https://git.postgresql.org/pg/commitdiff/225a22b19ed2960acc8e9c0b7ae53e0e5b0eac87
Tomáš Vondra pushed:
Fix BRIN minmax-multi distance for interval type. The distance calculation for the interval type was treating months as having 31 days, which is inconsistent with the interval comparator (which uses 30 days). Due to this, it was possible to get a negative distance (b-a) when (a<b), triggering an assert. Fixed by adopting the same logic as interval_cmp_value. Reported-by: Jaime Casanova Discussion: https://postgr.es/m/CAJKUy5jKH0Xhneau2mNftNPtTy-BVgQfXc8zQkEvRvBHfeUThQ%40mail.gmail.com https://git.postgresql.org/pg/commitdiff/2b10e0e3c2ca14d732521479123e5d5e2094e143
Fix BRIN minmax-multi distance for timetz type. The distance calculation ignored the time zone, so the result of (b-a) might have ended up negative even if (b > a). Fixed by considering the time zone difference. Reported-by: Jaime Casanova Discussion: https://postgr.es/m/CAJKUy5jLZFLCxyxfT%3DMfK5mtPfSzHA1rVLowR-j4RRsFVvKm7A%40mail.gmail.com https://git.postgresql.org/pg/commitdiff/7262f2421a1e099a631356f7b80ad198e34e2a8a
Fix BRIN minmax-multi distance for inet type. The distance calculation ignored the mask, unlike the inet comparator, which resulted in negative distance in some cases. Fixed by applying the mask in brin_minmax_multi_distance_inet. I've considered simply calling inetmi() to calculate the delta, but that does not consider mask either. Reviewed-by: Zhihong Yu Discussion: https://postgr.es/m/1a0a7b9d-9bda-e3a2-7fa4-88f15042a051%40enterprisedb.com https://git.postgresql.org/pg/commitdiff/e1fbe1181c86247eaf8b4b142b81361ce4efcc66
Fix order of parameters in BRIN minmax-multi calls. The BRIN minmax-multi consistent function incorrectly assumed it can lookup an operator, and then swap the arguments to get the commutator. For example <(a,b) would be called as <(b,a) to get >(a,b). This works when the arguments are of the same type, but with cross-type opclasses this fails. We can't swap <(float4,float8) arguments, for example. Fixed by passing arguments in the right order. Discussion: https://postgr.es/m/CAJKUy5jLZFLCxyxfT%3DMfK5mtPfSzHA1rVLowR-j4RRsFVvKm7A%40mail.gmail.com https://git.postgresql.org/pg/commitdiff/1dad2a5ea3d14dd205603c31cc94ec088183ab2a
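A sketch of the kind of cross-type case involved (example mine; opclass name as used in the development branch):

    CREATE TABLE tst (a float4);
    CREATE INDEX ON tst USING brin (a float4_minmax_multi_ops);
    SELECT * FROM tst WHERE a < 1.5::float8;  -- <(float4,float8) must not have its arguments swapped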
Add regression test for minmax-multi macaddr8 type. The regression test for BRIN minmax-multi opclasses tested almost all supported data types, with the exception of macaddr8. So this adds it. https://git.postgresql.org/pg/commitdiff/4908684ddab35135869efa2af6b49c4d67c422f9
Fix bug in brin_minmax_multi_union. When calling sort_expanded_ranges() we need to remember the return value, because the function sorts and also deduplicates the ranges. So the number of ranges may decrease. brin_minmax_multi_union failed to do that, which resulted in crashes due to bogus ranges (equal minval/maxval but not marked as compacted). Reported-by: Jaime Casanova Discussion: https://postgr.es/m/20210404052550.GA4376%40ahch-to https://git.postgresql.org/pg/commitdiff/d9c5b9a9eeb9e3061ae139e0e564ce5358c94001
James Hilliard sent in another revision of a patch to fix detection of preadv/pwritev support for OSX.
Mark Rofail sent in another revision of a patch to implement foreign key arrays.
Tomáš Vondra sent in a patch to combine statistics from child relations using a new subcommand, ANALYZE (MERGE).
Zeng Wenjing sent in another revision of a patch to implement global temporary tables.
Marcus Wanner sent in four more revisions of a patch to add an xid argument to the filter_prepare callback for output plugins.
Euler Taveira de Oliveira sent in another revision of a patch to add row filtering specified by a WHERE clause for logical replication.
Peter Smith sent in another revision of a patch to add support for prepared transactions to built-in logical replication.
Arne Roland sent in two more revisions of a patch to make ALTER TRIGGER ... RENAME TO work on partitioned tables.
Tang sent in a patch to update the copyright year for nbtsearch.c.
Paul Guo sent in another revision of a patch to support node initialization from backup with tablespaces, fix the replay of create database records on standby, and fix database create/drop wal description.
Masahiro Ikeda sent in two more revisions of a patch to speed up reporting of WAL stats.
Daniil Zakhlystov sent in two more revisions of a patch to add zlib and zstd streaming compression, and implement libpq compression.
Atsushi Torikoshi and Fujii Masao traded patches to get memory contexts of an arbitrary backend process.
John Naylor sent in two revisions of a patch to document the recently added date_bin() function.
Dean Rasheed and Fabien COELHO traded patches to add a pseudo-random permutation function to pgbench.
Isaac Moreland sent in a patch to add an abs(interval) function and the related @ operator.
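Proposed usage, per the patch (not committed; @ is PostgreSQL's existing absolute-value operator spelling):

    SELECT abs(interval '-3 days');  -- would yield '3 days'
    SELECT @ interval '-3 days';     -- equivalent operator form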
Kyotaro HORIGUCHI sent in a patch to make the box type's description clearer.
Vigneshwaran C sent in another revision of a patch to fail a prepared transaction if it has locked system tables/user catalog tables.
Douglas Hirn sent in another revision of a patch to allow multiple linear recursive self-references in WITH RECURSIVE.
Sait Talha Nisanci sent in a patch intended to fix a bug that manifested as a crash in record_type_typmod_compare.
Tomáš Vondra sent in a patch to use extended statistics to improve join estimates.
Stephen Frost sent in another revision of a patch to rename default roles to predefined roles.
Vigneshwaran C sent in three revisions of a patch to handle the overwriting of replication slot statistic issue, and add total txns and total txn bytes to replication statistics.
Peter Geoghegan sent in two more revisions of a patch to simplify the state managed by VACUUM, refactor lazy_scan_heap(), remove the tupgone special case from vacuumlazy.c, truncate line pointer array during VACUUM, and bypass index vacuuming in some cases.
Peter Geoghegan and Matthias van de Meent traded patches to truncate a page's line pointer array when it has trailing unused ItemIds, and clobber free page space in PageRepairFragmentation.
Tang sent in another revision of a patch to support tab completion with a query result for upper character inputs in psql.
Fujii Masao sent in another revision of a patch to fix an assertion failure in walreceiver.
John Naylor sent in another revision of a patch to replace pg_utf8_verifystr() with two faster implementations: one for Intel-ish processors that uses the SSE-4.1 instruction set, the other which uses a bespoke fallback function rather than one that relies on pg_utf8_verifychar() and pg_utf8_isvalid().
Peter Eisentraut sent in another revision of a patch to change the return type of EXTRACT to numeric.
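The visible effect, sketched (example mine):

    SELECT pg_typeof(extract(epoch FROM now()));
    -- currently: double precision
    -- with the patch: numeric, avoiding precision loss in large values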
Stephen Frost sent in a patch to add pg_read_all_data and pg_write_all_data roles.
Thomas Munro sent in a patch to use POSIX_NAMED_SEMAPHORES on OpenBSD.
Fujii Masao and Bharath Rupireddy traded patches to add keep_connections, a postgres_fdw server-level option controlling whether connections to the foreign server are cached.
Heikki Linnakangas sent in a patch to simplify COPY FROM parsing by forcing lookahead.
Daniel Gustafsson sent in two more revisions of a patch to support NSS as a libpq TLS backend.
Yuzuko Hosoya and Álvaro Herrera traded patches to fix autovacuum on partitioned tables.
Bharath Rupireddy sent in a patch to emit a warning when a partitioned table's persistence is changed.
Amit Langote sent in another revision of a patch to create foreign key triggers in partitioned tables too, and use same to enforce foreign key correctly during cross-partition updates.
Euler Taveira de Oliveira sent in another revision of a patch to refactor the parse_output_parameters function to use the struct PGOutputData that encapsulates all pgoutput options instead of using multiple parameters, and use same to add logical decoding message support to pgoutput.
Peter Eisentraut sent in another revision of a patch to implement the SQL-standard function body.
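The syntax under discussion, sketched from the patch; the body is parsed at definition time rather than at execution:

    CREATE FUNCTION add(int, int) RETURNS int
        LANGUAGE SQL
        RETURN $1 + $2;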
Justin Pryzby sent in another revision of a patch to implement CLUSTER of partitioned tables.
Amit Langote sent in two more revisions of a patch to export get_partition_for_tuple(), and use same to avoid using SPI for some RI checks.
Julien Rouhaud sent in three more revisions of a patch to move pg_stat_statements query jumbling to core, and use same to expose queryid in pg_stat_activity, log_line_prefix, and verbose explain.
Joel Jacobson sent in a patch to add a MotD function.
Bharath Rupireddy sent in another revision of a patch to implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ...
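The proposed syntax, roughly (subscription and publication names hypothetical):

    ALTER SUBSCRIPTION mysub ADD PUBLICATION mypub;
    ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub;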
Erik Rijkers sent in another revision of a patch to fix an old confusing JSON example.
Kazutaka Onishi sent in six more revisions of a patch to implement TRUNCATE on foreign tables.
Thomas Munro sent in another revision of a patch to add a buffer mapping table for SLRUs, and make all SLRU buffer sizes configurable.
Takamichi Osumi sent in two more revisions of a patch to add a safeguard so that archive recovery does not miss data: the server now refuses to start when it detects WAL generated with wal_level=minimal during archive recovery. This is done regardless of the value of EnableHotStandby, because passing through a period of wal_level=minimal is never expected to be safe. The motivation is to protect users from ending up with a replica that misses data in standby mode, or a server that misses data in recovery mode.
Amit Langote sent in another revision of a patch to set ForeignScanState.resultRelInfo lazily, and initialize result relation information lazily.
Justin Pryzby sent in a patch to make track_activity_query_size a STATS_COLLECTOR category, make sure log_autovacuum_min_duration is LOGGING_WHAT, make track_commit_timestamp REPLICATION_SENDING, change force_parallel_mode to a DEVELOPER GUC, and remove it from the sample configuration.
Pavel Stěhule sent in another revision of a patch to implement schema variables.
Anton Voloshin sent in a patch to fix a typo in collationcmds.c.
Zhihong Yu sent in a patch to remove an unused variable from AttrDefaultFetch.
Amit Langote sent in another revision of a patch to allow batching of inserts during cross-partition updates.
Anton Voloshin sent in a patch to use repalloc() instead of palloc() in icu_convert_case(), as the structure in question might already have been palloc()ed.
Tom Lane sent in a patch to fix some adbin inconsistencies.