With large join queries, the computing time spent on genetic query optimization seems to be a mere fraction of the time Postgres needs for freeing memory via routine MemoryContextFree, file backend/utils/mmgr/mcxt.c. Debugging showed that it gets stuck in a loop in routine OrderedElemPop, file backend/utils/mmgr/oset.c. The same problem arises with long queries when the normal Postgres query optimization algorithm is used.
In file backend/optimizer/geqo/geqo_params.c, routines gimme_pool_size and gimme_number_generations, we have to find a compromise for the parameter settings that satisfies two competing demands (see the sketch after this list):
Optimality of the query plan
Computing time
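As a rough illustration of that compromise, here is a small standalone sketch; the function names (sketch_pool_size, sketch_number_generations), formulas and limits are invented for illustration and are not the actual geqo_params.c logic. A larger pool and more generations let the genetic algorithm examine more join orders, and hence tend to find better plans, at the price of more computing time.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical pool-size heuristic: grow the GA population with the
     * number of relations in the query, but keep it within fixed bounds
     * so that planning time stays acceptable. */
    static int
    sketch_pool_size(int number_of_rels)
    {
        int size = (int) (2.0 * pow(2.0, number_of_rels / 2.0));

        if (size < 128)
            size = 128;     /* enough genetic diversity for small queries */
        if (size > 1024)
            size = 1024;    /* bound the per-generation evaluation cost */
        return size;
    }

    /* Hypothetical generation count: fewer generations for larger pools,
     * so the total number of plan evaluations stays roughly constant. */
    static int
    sketch_number_generations(int pool_size)
    {
        return pool_size / 4;
    }

    int
    main(void)
    {
        for (int rels = 4; rels <= 20; rels += 4)
        {
            int pool = sketch_pool_size(rels);

            printf("%2d relations: pool %4d, generations %4d\n",
                   rels, pool, sketch_number_generations(pool));
        }
        return 0;
    }

Tightening the upper bound shortens planning time; raising it tends to give better plans for queries with many relations.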
In file backend/optimizer/geqo/geqo_eval.c, routine geqo_joinrel_size, the present hack for MAXINT overflow is to set the Postgres integer value of rel->size to its logarithm. A clean fix would require modifying Rel in backend/nodes/relation.h, which will surely have severe impacts on the whole Postgres implementation.
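The fragment below is only a hypothetical illustration of the kind of workaround meant here (clamp_rel_size is an invented name, not the actual geqo_joinrel_size code): when an estimated join size no longer fits in an int, store its much smaller logarithm instead of letting the value overflow.

    #include <limits.h>
    #include <math.h>
    #include <stdio.h>

    /* Clamp a join-size estimate into an int.  Values beyond INT_MAX are
     * replaced by their natural logarithm: lossy, but it avoids the
     * overflow without changing the type of the size field in Rel. */
    static int
    clamp_rel_size(double estimated_size)
    {
        if (estimated_size > (double) INT_MAX)
            return (int) log(estimated_size);
        return (int) estimated_size;
    }

    int
    main(void)
    {
        /* small estimates pass through unchanged, huge ones are damped */
        printf("%d %d\n", clamp_rel_size(1000.0), clamp_rel_size(1.0e12));
        return 0;
    }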
Memory exhaustion may occur with more than 10 relations involved in a query. In file backend/optimizer/geqo/geqo_eval.c, routine gimme_tree is called recursively. Maybe I forgot to free something, but I don't know what. Of course the rel data structure of the join keeps growing as more relations are packed into it. Suggestions are welcome :-(
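The toy program below is not gimme_tree itself; the JoinNode struct and the helpers join_two and build_tree are invented. It only illustrates, with plain malloc, why memory grows during the recursion: each join step allocates a new node carrying the combined state of its inputs, nothing is released until the whole tree is built, and GEQO repeats this once per genome it evaluates, so unfreed intermediate nodes pile up quickly.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct JoinNode
    {
        struct JoinNode *left;
        struct JoinNode *right;
        int     width;          /* grows as relations are packed into the join */
    } JoinNode;

    /* Join two (base or intermediate) relations: allocates a fresh node. */
    static JoinNode *
    join_two(JoinNode *a, JoinNode *b)
    {
        JoinNode *j = malloc(sizeof(JoinNode));

        j->left = a;
        j->right = b;
        j->width = a->width + b->width;
        return j;
    }

    /* Build a left-deep tree over rels[0..n-1], the way a genome is walked:
     * every step allocates an intermediate node that is never freed here. */
    static JoinNode *
    build_tree(JoinNode **rels, int n)
    {
        JoinNode *acc = rels[0];

        for (int i = 1; i < n; i++)
            acc = join_two(acc, rels[i]);
        return acc;
    }

    int
    main(void)
    {
        int        n = 12;
        JoinNode **rels = malloc(n * sizeof(JoinNode *));

        for (int i = 0; i < n; i++)
        {
            rels[i] = calloc(1, sizeof(JoinNode));
            rels[i]->width = 1;
        }
        /* one genome evaluation: n - 1 intermediate nodes are left allocated */
        printf("final join width: %d\n", build_tree(rels, n)->width);
        return 0;
    }

The real rel nodes are of course much heavier than this toy struct, which is why the effect already shows up at around 10 relations.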
Enable bushy query tree processing within Postgres; that may improve the quality of query plans.
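For illustration (this is not PostgreSQL code; left_deep_plans and bushy_plans are invented names): a left-deep plan joins one base relation at a time, ((A JOIN B) JOIN C) JOIN D, while a bushy plan may also join two intermediate results, (A JOIN B) JOIN (C JOIN D). The program below counts the two search spaces; the bushy space is far larger, which is why it can contain better plans, and also why searching it costs more.

    #include <stdio.h>

    /* number of left-deep join orders of n relations: n! */
    static unsigned long long
    left_deep_plans(int n)
    {
        unsigned long long f = 1;

        for (int i = 2; i <= n; i++)
            f *= i;
        return f;
    }

    /* number of bushy join trees of n relations: (2(n-1))! / (n-1)! */
    static unsigned long long
    bushy_plans(int n)
    {
        unsigned long long p = 1;

        for (int i = n; i <= 2 * (n - 1); i++)
            p *= i;
        return p;
    }

    int
    main(void)
    {
        for (int n = 2; n <= 10; n++)
            printf("%2d relations: %10llu left-deep vs %12llu bushy plans\n",
                   n, left_deep_plans(n), bushy_plans(n));
        return 0;
    }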