How to index and query a JSON object in PostgreSQL®?
PostgreSQL® offers two data types to handle JSON data, JSON and JSONB. This doc showcases how to index a JSONB column with a GIN index.
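As a quick preview, here is a minimal sketch of the end result; the `orders` table and `details` column are hypothetical names used only for illustration:

```sql
-- Hypothetical table storing each order's payload in a JSONB column
CREATE TABLE orders (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    details jsonb NOT NULL
);

-- GIN index over the whole JSONB document, using the default jsonb_ops
-- operator class
CREATE INDEX orders_details_gin_idx ON orders USING GIN (details);
```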
PostgreSQL® offers two data types to handle JSON data:

- JSON stores the JSON as text, performing a validation on the correctness of the JSON syntax.
- JSONB optimizes the JSON storage in a custom binary format. Therefore, on top of validating the correctness of the JSON syntax, time is spent to properly parse and store the content.
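To make the difference concrete, here is a small example (the literals, and the `orders` table reused from the sketch above, are purely illustrative): casting the same text to `json` keeps it verbatim, while casting it to `jsonb` parses it, normalising whitespace and duplicate keys. The containment query at the end is the kind of predicate a GIN index on the column can serve.

```sql
-- json keeps the input text as-is; jsonb stores a parsed binary form,
-- dropping insignificant whitespace and keeping only the last duplicate key
SELECT '{"item": "pasta",  "item": "pizza"}'::json  AS as_json,
       '{"item": "pasta",  "item": "pizza"}'::jsonb AS as_jsonb;
-- as_json  -> {"item": "pasta",  "item": "pizza"}
-- as_jsonb -> {"item": "pizza"}

-- The default jsonb_ops GIN operator class supports the containment (@>)
-- and key-existence (?, ?|, ?&) operators, so a filter like this one can be
-- answered via the index instead of a full table scan
SELECT id, details
FROM orders
WHERE details @> '{"item": "pizza"}';
```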