Preamble
Cursors and transactions serve as the basic building blocks for creating database applications. This article describes how cursors and transactions interact and how WITH HOLD can work around their limitations. We will also see the dangers involved and how to properly use WITH HOLD cursors in a PL/pgSQL procedure.
Whenever PostgreSQL runs a query, it creates a portal from which the result rows can be fetched. With a normal query, you receive the entire result set in one step. A cursor, in contrast, lets you fetch the result rows one at a time. A cursor marks a position within a result set. Cursors are especially useful in procedural code on the client or in the database, because they let you loop through the results of a query. Another benefit is that a cursor lets you have several SQL statements active at the same time, which is normally not possible in a single database session.
The following is a simple example of PL/pgSQL code that makes use of a cursor:
DO LANGUAGE plpgsql $$DECLARE
   /* declare the cursor */
   c CURSOR FOR SELECT table_schema, table_name
                FROM information_schema.tables
                WHERE table_name LIKE 'old_%'
                  AND table_schema = 'mydata';
   v_schema text;
   v_name   text;
BEGIN
   /* open the cursor */
   OPEN c;

   LOOP
      /* get the next result row */
      FETCH c INTO v_schema, v_name;

      /* system variable FOUND is set by FETCH */
      EXIT WHEN NOT FOUND;

      /* avoid SQL injection */
      EXECUTE format('DROP TABLE %I.%I', v_schema, v_name);
   END LOOP;

   /* not required */
   CLOSE c;
END;$$;
In this example, the SELECT is executed concurrently with the DROP TABLE statements.
The above is not the most readable way to write this in PL/pgSQL (you could have used “FOR v_schema, v_name IN SELECT ... LOOP ... END LOOP;”, which uses a cursor “under the hood”), but I wanted to make the cursor explicit.
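For comparison, here is a sketch of that FOR-loop variant of the same task; the loop opens, fetches from and closes the cursor implicitly:

DO LANGUAGE plpgsql $$DECLARE
   v_schema text;
   v_name   text;
BEGIN
   /* the FOR loop manages the cursor "under the hood" */
   FOR v_schema, v_name IN
      SELECT table_schema, table_name
      FROM information_schema.tables
      WHERE table_name LIKE 'old_%'
        AND table_schema = 'mydata'
   LOOP
      /* avoid SQL injection */
      EXECUTE format('DROP TABLE %I.%I', v_schema, v_name);
   END LOOP;
END;$$;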
Keep in mind that a join in the database frequently removes the need for a cursor loop. Such a join is more efficient, because it completes the entire task in a single statement (see the sketch below). In our case, however, we have to use a cursor, because we must run a dynamic SQL statement inside the loop.
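To illustrate the join alternative in a case where it does apply, here is a hypothetical sketch: the table and column names (orders, customers and so on) are made up, but the pattern replaces an entire cursor loop with one statement:

/* hypothetical example: instead of looping over "customers" with a
   cursor and running one UPDATE per row, a single joined statement
   does the whole job */
UPDATE orders AS o
SET discount = 0.1
FROM customers AS c
WHERE c.id = o.customer_id
  AND c.status = 'gold';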
Cursors and transactions
The most important thing to know about PostgreSQL cursors is that they only exist for the duration of a database transaction. That is not surprising, since a cursor is a single SQL statement, and an SQL statement is always part of one transaction. The above example poses no problem, because a DO statement is always executed in a single transaction anyway.
Since cursors are closed automatically at the end of a transaction, you normally don't have to close them yourself. The exception is a cursor in a long-running transaction: there you may want to close it early to free the resources that the statement used.
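A minimal sketch of closing a cursor early inside a long transaction:

BEGIN;

DECLARE c CURSOR FOR SELECT * FROM generate_series(1, 1000000);

FETCH 10 FROM c;

/* release the statement's resources right away instead of
   holding them until the transaction ends */
CLOSE c;

/* ... plenty of other work in the same transaction ... */

COMMIT;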
Cursors in SQL
PostgreSQL also supports cursors on the SQL level. You create a cursor with the DECLARE statement:
DECLARE name [ BINARY ] [ ASENSITIVE | INSENSITIVE ] [ [ NO ] SCROLL ]
    CURSOR [ { WITH | WITHOUT } HOLD ] FOR query
Here is a brief explanation of the options:

- BINARY will fetch the results in the internal binary format, which may be useful if you want to read bytea columns and avoid the overhead of escaping them as strings
- SCROLL means that you can move the cursor position backwards to fetch the same rows several times
- WITH HOLD creates a cursor that is not automatically closed at the end of a transaction
- ASENSITIVE and INSENSITIVE are redundant in PostgreSQL and exist for compatibility with the SQL standard
There is also an SQL statement FETCH that is more powerful than its PL/pgSQL equivalent, in that it can fetch more than one row at a time. Like PL/pgSQL, SQL also has a MOVE statement that moves the cursor position without retrieving rows.
SQL cursors are closed with the CLOSE statement, or by the end of the transaction.
SCROLL cursors
Some execution plans, such as a sequential scan or a B-tree index scan, can be executed in both directions. A cursor for a query with such an execution plan is automatically scrollable, that is, you can move it backwards in the result set. Since PostgreSQL computes query result rows "on demand" and streams them to the client, scrollable cursors for such queries come at no extra cost.
Other, more complicated execution plans require the explicit keyword SCROLL for the cursor to become scrollable. Because the server must cache the entire result set, such cursors add overhead.
Here is a quick illustration of a scrollable cursor:
BEGIN;

/* this cursor would be scrollable even without SCROLL */
DECLARE c SCROLL CURSOR FOR SELECT * FROM generate_series(1, 10);

FETCH 5 FROM c;

 generate_series 
═════════════════
               1
               2
               3
               4
               5
(5 rows)

MOVE BACKWARD 2 FROM c;

FETCH BACKWARD 2 FROM c;

 generate_series 
═════════════════
               2
               1
(2 rows)

/* the sixth result row */
FETCH ABSOLUTE 6 FROM c;

 generate_series 
═════════════════
               6
(1 row)

FETCH ALL FROM c;

 generate_series 
═════════════════
               7
               8
               9
              10
(4 rows)

COMMIT;
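Conversely, declaring the cursor NO SCROLL forbids backward movement. A minimal sketch (the error and hint texts shown are from recent PostgreSQL versions):

BEGIN;

DECLARE c NO SCROLL CURSOR FOR SELECT * FROM generate_series(1, 10);

FETCH 5 FROM c;

FETCH BACKWARD 2 FROM c;
ERROR:  cursor can only scan forward
HINT:  Declare it with SCROLL option to enable backward scan.

ROLLBACK;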
Cursor sensitivity
The SQL standard distinguishes SENSITIVE, INSENSITIVE and ASENSITIVE cursors. A sensitive cursor changes as the data underneath it changes. This means that if you scroll back to an earlier row, you might get a different result. Since a PostgreSQL statement always sees a stable snapshot of the data, this database doesn’t have sensitive cursors.
PostgreSQL cursors are always insensitive: changes to the data underneath the cursor that happen after the cursor has started processing are not reflected in the data fetched from the cursor. "Asensitive", which means that the sensitivity is implementation-dependent, is therefore the same as "insensitive" in PostgreSQL.
Note that this insensitivity also applies if you modify a table via the special statements “UPDATE/DELETE ... WHERE CURRENT OF <cursor>”.
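A minimal session that demonstrates this insensitivity, using a throwaway table t created just for the example:

BEGIN;

CREATE TABLE t (id integer);
INSERT INTO t SELECT * FROM generate_series(1, 5);

DECLARE c CURSOR FOR SELECT id FROM t;

FETCH 2 FROM c;   /* returns 1 and 2 */

/* delete all rows in the same transaction */
DELETE FROM t;

FETCH 2 FROM c;   /* still returns 3 and 4 */

ROLLBACK;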
WITH HOLD cursors
Since WITH HOLD cursors live longer than a transaction, but statements don't, PostgreSQL must compute the entire result set at COMMIT time and cache it on the server. As a consequence, COMMIT can take an unexpectedly long time.
Furthermore, because WITH HOLD cursors are not closed automatically when the transaction ends, you must CLOSE them explicitly, or the cached result set will hold server resources until the end of the database session.
Here is an example of a WITH HOLD cursor in use:
BEGIN;

DECLARE c CURSOR WITH HOLD FOR SELECT i FROM generate_series(1, 10) AS i;

FETCH 3 FROM c;

 i 
═══
 1
 2
 3
(3 rows)

COMMIT;

FETCH 3 FROM c;

 i 
═══
 4
 5
 6
(3 rows)

/* important */
CLOSE c;
Cursors in PL/pgSQL
Cursors in PL/pgSQL are variables of the special data type refcursor. The value of such a variable is actually a string: the name of the portal that is opened when a query is bound to the cursor variable and the cursor is opened.
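You can observe this correspondence between cursor names and portals in the pg_cursors system view. A minimal sketch:

BEGIN;

DECLARE curs CURSOR FOR SELECT 1;

/* the portal behind the cursor shows up with the name "curs" */
SELECT name, statement FROM pg_cursors;

COMMIT;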
Using refcursor variables, you can also pass cursors between PL/pgSQL functions or procedures:
CREATE FUNCTION c_open(n integer) RETURNS refcursor
   LANGUAGE plpgsql AS
$$DECLARE
   /* the cursor variable is bound to a query */
   c CURSOR (x integer) FOR SELECT * FROM generate_series(1, x);
BEGIN
   /* the cursor is opened */
   OPEN c(n);
   RETURN c;
END;$$;

CREATE FUNCTION c_fetch(cur refcursor) RETURNS TABLE (r integer)
   LANGUAGE plpgsql AS
$$BEGIN
   LOOP
      FETCH cur INTO r;
      EXIT WHEN NOT FOUND;
      RETURN NEXT;
   END LOOP;
END;$$;

SELECT * FROM c_fetch(c_open(5));

 r 
═══
 1
 2
 3
 4
 5
(5 rows)
For historical reasons, cursor declarations in PL/pgSQL support SCROLL, but not WITH HOLD, since PostgreSQL functions are always executed within a single transaction. Additionally, a PL/pgSQL FETCH can only retrieve a single row at a time.
WITH HOLD cursors in PL/pgSQL procedures
Procedures, introduced in PostgreSQL v11, support transaction commands like COMMIT and ROLLBACK under certain circumstances. It would therefore be useful to have WITH HOLD cursors in procedures. There are two ways to work around the lack of WITH HOLD cursors in PL/pgSQL:
- create the cursor in SQL and pass it as a refcursor argument to the procedure (see the sketch after this list)
- use dynamic SQL to declare an SQL cursor
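A minimal sketch of the first technique, assuming an autocommit session; the procedure name consume and its notice message are made up for illustration:

CREATE PROCEDURE consume(cur refcursor)
   LANGUAGE plpgsql AS
$$DECLARE
   v integer;
BEGIN
   LOOP
      /* fetch from the WITH HOLD cursor that the caller created */
      FETCH cur INTO v;
      EXIT WHEN NOT FOUND;
      RAISE NOTICE 'processing %', v;
      /* COMMIT is allowed here, and the WITH HOLD cursor survives it */
      COMMIT;
   END LOOP;
   CLOSE cur;
END;$$;

BEGIN;
DECLARE curs CURSOR WITH HOLD FOR SELECT * FROM generate_series(1, 3);
COMMIT;

/* pass the portal name to the procedure */
CALL consume('curs');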
Here is sample code that illustrates the second technique:
CREATE PROCEDURE del_old()
   LANGUAGE plpgsql AS
$$DECLARE
   /* assign the portal name */
   c refcursor := 'curs';
   v_schema text;
   v_name   text;
BEGIN
   /* dynamic SQL to create the cursor */
   EXECUTE $_$DECLARE curs CURSOR WITH HOLD FOR
              SELECT table_schema, table_name
              FROM information_schema.tables
              WHERE table_schema = 'mydata'
                AND table_name LIKE 'old_%'$_$;

   LOOP
      FETCH c INTO v_schema, v_name;

      EXIT WHEN NOT FOUND;

      /*
       * We need to make sure that the cursor is closed
       * in the case of an error. For that, we need an
       * extra block, because COMMIT cannot be used in
       * a block with an EXCEPTION clause.
       */
      BEGIN
         /* avoid SQL injection */
         EXECUTE format(
                    'DROP TABLE %I.%I',
                    v_schema, v_name
                 );
      EXCEPTION
         WHEN OTHERS THEN
            CLOSE c;
            RAISE;
         WHEN query_canceled THEN
            CLOSE c;
            RAISE;
      END;

      /* reduce deadlock risk when dropping many tables */
      COMMIT;
   END LOOP;

   /* we need to close the cursor */
   CLOSE c;
END;$$;
Note how rigorously the code ensures that the cursor cannot "leak" out of the procedure!
Conclusion
Cursors and transactions are basic building blocks of database applications. Normally, a cursor lives only as long as the database transaction it belongs to, but WITH HOLD lets you escape that limitation. Useful as this feature is, you have to be aware of the performance impact at COMMIT time, and you have to make sure that you close the cursor to free the server's resources.
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of clouds, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.