Achieving Correct Bloat Estimates of JSON Data in PostgreSQL

How we improved PostgreSQL's perception of space occupied by JSON data.

Posted on Nov 13, 2015

Recently my team realized that tables in one of our PostgreSQL databases were showing colossal amounts of bloat: up to 40 out of 54 GB total (75%) for bigger tables, and up to 95% of 17 GB total for smaller ones. Having accurate estimates is important to make certain database administration decisions aimed at reclaiming disk space and improving database performance, but these numbers were something very unusual, so we wanted to investigate first.

For the table and index bloat estimations, we used queries based on the PostgreSQL Wiki’s database-bloat query. Only certain tables were affected, pointing to problems with the tables themselves.

To investigate possible causes of these unusual bloat estimates, we used the pgstattuple extension, which scans a table and reports exact, rather than estimated, tuple-level figures. The exact numbers showed nothing close to the claimed bloat, so the estimates themselves had to be wrong.
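For reference, pgstattuple is used roughly like this (the table name below is a placeholder, not one of our actual tables):

```sql
-- pgstattuple scans the whole relation and returns exact figures,
-- including the percentage of dead tuples and of free space.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT tuple_percent, dead_tuple_percent, free_percent
  FROM pgstattuple('some_bloated_table');  -- placeholder table name
```

Note that, unlike the estimation queries, this reads the entire table, so it is not something to run casually against large production relations.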

We then turned our attention to the actual table schema. Our discovery: All the bloated tables included columns of type JSON.

As you can see from the estimation queries, pg_stats.avg_width plays a central role: the expected table size is derived from the number of live rows and the average row width, which is the sum of avg_width over the table's columns plus per-tuple and per-page overhead. Whatever the actual size exceeds that expectation by is reported as bloat. If avg_width is missing for a column, the expected size is underestimated, and a perfectly healthy table appears massively bloated.
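In rough terms, the core of the calculation looks like the following deliberately simplified sketch (the real wiki query also accounts for page headers, alignment, NULL bitmaps and fill factor; the table name and the 24-byte tuple overhead are illustrative):

```sql
-- Roughly: expected size = row count * (tuple overhead + sum of column widths).
-- A column missing from pg_stats silently shrinks the expected size,
-- inflating the apparent bloat.
SELECT c.relname,
       pg_relation_size(c.oid)                         AS actual_bytes,
       (c.reltuples * (24 + sum(s.avg_width)))::BIGINT AS rough_expected_bytes
  FROM pg_class c
  JOIN pg_stats s ON s.tablename = c.relname
 WHERE c.relname = 'mytable'   -- placeholder table name
 GROUP BY c.relname, c.oid, c.reltuples;
```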

Checking the avg_width values in pg_stats confirmed the suspicion: for the JSON columns there was no entry at all, so their width silently dropped out of the expected-size calculation.
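The check itself is a one-liner against the pg_stats view (again, the table name is a placeholder):

```sql
-- Columns without gathered statistics simply have no row here.
SELECT attname, avg_width
  FROM pg_stats
 WHERE schemaname = 'public'
   AND tablename  = 'mytable'   -- placeholder table name
 ORDER BY attname;
```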

After reading the code in analyze.c, we figured out the cause: the standard typanalyze routine only gathers statistics for a column if its type has an equality operator, which it looks up via the type's default btree or hash operator class. Type JSON has neither, so its columns are skipped entirely.

Verifying Assumptions

In order to verify that statistics aren’t gathered for columns of type JSON, create a test table like this one —

CREATE TABLE testjson(
    id INT,
    tx TEXT,
    js JSON);

— and, after populating it with some data and running ANALYZE on it, check the entries in pg_statistic for this table:

=# SELECT staattnum, stawidth
   FROM pg_statistic AS s
   JOIN pg_class AS c ON s.starelid = c.oid
  WHERE c.relname = 'testjson'
  ORDER BY 1;

 staattnum | stawidth
-----------+----------
         1 |        4
         2 |        5
(2 rows)

The entry for the JSON column (staattnum = 3) is missing: exactly as suspected, no statistics are gathered for it. The pg_stats view, a human-readable wrapper around pg_statistic, shows the same gap.

PostgreSQL 9.4 introduced the type JSONB, which, among other interesting properties, has the equality operator defined. In our problem case, the database in question had already been upgraded to 9.4, but switching ~100 GB of tables over to a different column type, and that only on a single database shard out of eight, was not a viable option.

If only for the sake of statistics calculation, you can equip the plain-text JSON type with an equality operator yourself. Just make sure the operator is reserved exclusively for this purpose, so that an accidental JSON comparison in a query produces an error rather than a silently meaningless result.

The Straightforward Way

Create a schema with a descriptive name, and revoke public access to it:

CREATE SCHEMA dontuse;
REVOKE USAGE ON SCHEMA dontuse FROM PUBLIC;

Now you need a function to build the operator on top of. JSON is just a text type with a syntax check on input, so we can cast both parameters to TEXT and compare:

CREATE OR REPLACE FUNCTION dontuse.jsoneq(a JSON, b JSON)
RETURNS BOOL AS
$$
    SELECT a::TEXT = b::TEXT;
$$
LANGUAGE SQL SECURITY INVOKER IMMUTABLE;

And now, for the actual operator:

CREATE OPERATOR dontuse.= (
    PROCEDURE=dontuse.jsoneq,
    LEFTARG=JSON,
    RIGHTARG=JSON);

The final step is to create an operator class. The hash access method is the natural choice here, since a hash operator class needs only a single operator, equality (strictly speaking, a complete hash operator class also wants a hash support function, but for ANALYZE’s purposes the equality operator alone suffices):

CREATE OPERATOR CLASS dontuse.json_ops
    DEFAULT FOR TYPE JSON USING hash AS
    OPERATOR 1 dontuse.= ;

Declaring the operator class as the default for the type is essential: otherwise ANALYZE will not find the equality operator and will keep skipping the column.

After re-analyzing the table, you can finally find the missing statistics entry for the column.
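To confirm, re-run the earlier pg_statistic query after another ANALYZE:

```sql
ANALYZE testjson;

SELECT staattnum, stawidth
  FROM pg_statistic AS s
  JOIN pg_class AS c ON s.starelid = c.oid
 WHERE c.relname = 'testjson'
 ORDER BY 1;
-- The result should now contain a third row (staattnum = 3) holding
-- the measured average width of the js column.
```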

Case Closed?

If you now try to analyze a reasonably populated table (in our test, two million rows), the process takes about 19 seconds(!), whereas it takes only 200 milliseconds when the js column is excluded from the analysis.

If we write the same function in PL/pgSQL instead —

CREATE OR REPLACE FUNCTION dontuse.jsoneq(a JSON, b JSON)
RETURNS BOOL AS $$
BEGIN
    RETURN a::TEXT = b::TEXT;
END;
$$ LANGUAGE PLPGSQL SECURITY INVOKER IMMUTABLE;

— we can reduce the time spent in ANALYZE considerably. The difference is that a PL/pgSQL function of this shape is evaluated through the fast path for simple expressions, while the SQL-language version starts up a full executor run on every single call. The casts and the comparison are still interpreted, though, so ANALYZE remains far slower than with a native comparison function.

Can We Do Better?

Well, there are at least two possibilities: Either write the comparison function in C, taking advantage of the fact that JSON type’s internal representation is the same as that of type TEXT; or force the use of TEXT type’s equality comparison function for the defined operator.

Writing an external function is not hard, but it requires making an external loadable module, installing it on all affected systems, maintaining it, etc.

Using the existing function sounds more promising. Unfortunately, this cannot be done directly with the CREATE OPERATOR command, because the function’s declared argument types must match the operand types:

=# CREATE OPERATOR dontuse.= (
    PROCEDURE=pg_catalog.texteq,
    LEFTARG=JSON,
    RIGHTARG=JSON);
ERROR:  function pg_catalog.texteq(json, json) does not exist

What does work, however, is direct modification of the pg_operator system catalog, a trick you should not try at home (that is, anywhere near a production system):

=# BEGIN;
=# UPDATE pg_operator SET oprcode = 'pg_catalog.texteq'::regproc
    WHERE oprleft  = 114
      AND oprright = 114
      AND oprname  = '=';
UPDATE 1

In the example above, 114 is the OID of type JSON (the built-in value; 'json'::regtype resolves it portably). If you repeat the ANALYZE test now, the timing is back in the region of an ordinary TEXT column.
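You can verify that the hack took effect with a quick catalog query:

```sql
-- The equality operator for JSON should now point at texteq.
SELECT oprname, oprcode
  FROM pg_operator
 WHERE oprleft  = 'json'::regtype
   AND oprright = 'json'::regtype
   AND oprname  = '=';
```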

Digging Even Deeper

Potential performance problems remain if the statistics target is raised, either per column with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS or globally via default_statistics_target: the larger the sample, the more comparisons ANALYZE has to perform.

If the statistics target changes from 100 (the default) to 1000, for example, the ANALYZE time explodes: with only an equality operator available, ANALYZE falls back to an algorithm whose cost grows with both the sample size and the number of most-common values it tracks. If the type also has a default btree operator class, ANALYZE can instead sort the sample once, which scales much better:

CREATE OPERATOR CLASS dontuse.json_ops
    DEFAULT FOR TYPE JSON USING btree AS
    OPERATOR 1 dontuse.< ,
    OPERATOR 3 dontuse.= ,
    FUNCTION 1 dontuse.jsoncmp(JSON, JSON);

The exact definition of the functions behind the operators = and < is not important here; for the purposes of ANALYZE, only the support function jsoncmp is actually exercised, since sorting the sample goes through the operator class’s comparison function.
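For completeness, one possible set of definitions, following the same text-comparison approach as jsoneq above (the function names are our own; any equivalent implementation would do):

```sql
-- Hypothetical sketch: compare JSON values via their text representation.
-- Returns -1 / 0 / 1 as required of a btree comparison support function.
CREATE OR REPLACE FUNCTION dontuse.jsoncmp(a JSON, b JSON)
RETURNS INT AS $$
BEGIN
    RETURN CASE WHEN a::TEXT < b::TEXT THEN -1
                WHEN a::TEXT > b::TEXT THEN  1
                ELSE 0 END;
END;
$$ LANGUAGE plpgsql SECURITY INVOKER IMMUTABLE;

CREATE OR REPLACE FUNCTION dontuse.jsonlt(a JSON, b JSON)
RETURNS BOOL AS $$
BEGIN
    RETURN a::TEXT < b::TEXT;
END;
$$ LANGUAGE plpgsql SECURITY INVOKER IMMUTABLE;

CREATE OPERATOR dontuse.< (
    PROCEDURE=dontuse.jsonlt,
    LEFTARG=JSON,
    RIGHTARG=JSON);
```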

If we go the safe route again and write the support function in SQL, we might end up with ~2.5 seconds on our default statistics target of 100. If we go the hacky way and override the support function to use the one provided for type TEXT, we can get the reasonable 300 milliseconds on statistics target 100 — and we’re back to only 2.5 seconds with target 1000.

For completeness, here’s a command that you also should not try at home:

=# BEGIN;
=# UPDATE pg_amproc SET amproc = 'pg_catalog.bttextcmp'::regproc
    WHERE amproclefttype  = 114
      AND amprocrighttype = 114
      AND amprocnum = 1;
UPDATE 1

The Final Chord

Since PostgreSQL 9.2, you can provide a second support routine for a btree operator class, the so-called sortsupport function, which replaces per-comparison function-manager calls with a leaner interface and can speed up sorting noticeably; describing it in detail is beyond the scope of this post.

Yet another possibility is to provide a custom typanalyze function for the type (the typanalyze column of pg_type), which could compute the width statistics directly, without requiring any comparison operators at all; for the built-in JSON type that would again mean editing the system catalogs by hand.

Now, the proper fix can only be implemented in PostgreSQL core: even for a type that provides no equality operator, the average column width can still be estimated. The upcoming PostgreSQL 9.5 release will include exactly this fix.


We're hiring! Do you like working in an ever evolving organization such as Zalando? Consider joining our teams as a Software Engineer!
