SQL Query Optimization: How to Make Your Queries Faster

Posted on: March 16, 2026

Welcome, Developer 👋

Most slow queries are not caused by bad databases. They are caused by how we ask questions.

A query that runs in 8 seconds in development will bring your app to its knees under real load. And the frustrating part is that the fix is usually straightforward once you know where to look.

In this post I will walk through the techniques I reach for first when a query is too slow. These are not theoretical. They are the things that have actually worked in production.

The examples here are written for PostgreSQL. Most of the concepts apply to any relational database, but the syntax does vary. Wherever a command is different in MySQL or SQL Server, I call it out so you do not have to guess.

Start With EXPLAIN. Always.

Before touching anything, you need to understand what the database is actually doing.

In PostgreSQL, run EXPLAIN ANALYZE before your query. It executes the query and shows you the actual execution plan with real timing.

EXPLAIN ANALYZE
SELECT * FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'pending';

What you are looking for is a Seq Scan on a large table. That means the database is reading every single row to find the ones you want. That is almost always the root of a slow query.

Do not skip this step. Optimizing without understanding the execution plan is just guessing.

MySQL: Also supports EXPLAIN ANALYZE since version 8.0.18. On older versions, use EXPLAIN (without ANALYZE) to see the plan without executing it.

SQL Server: Use SET STATISTICS IO ON; and SET STATISTICS TIME ON; before your query for text-based stats, or enable “Include Actual Execution Plan” in SSMS (Ctrl+M) for the graphical plan. There is no EXPLAIN keyword.
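If you want to see the before-and-after with nothing but the Python standard library, SQLite's EXPLAIN QUERY PLAN is a rough stand-in for PostgreSQL's EXPLAIN (the plan text differs, but the scan-vs-index story is the same; the table and index names below mirror the article's examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL);
INSERT INTO orders (status, total) VALUES ('pending', 10), ('shipped', 20);
""")

# No index yet: the plan reports a full table scan
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"
).fetchall()
print(plan_before[0][3])  # e.g. "SCAN orders"

conn.execute("CREATE INDEX idx_orders_status ON orders(status)")

# The same query now searches the index instead of scanning
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"
).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_status (status=?)"
```

The fourth column of each plan row is the human-readable detail string; "SCAN" is SQLite's equivalent of a Seq Scan.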


Add Indexes on the Columns You Filter and Join On

If EXPLAIN shows a sequential scan, an index is usually the answer.

Without an index, the database has no choice. It reads everything. With the right index, it jumps directly to the matching rows.

-- Before: full scan on every query
SELECT * FROM orders WHERE status = 'pending';
 
-- Add the index
CREATE INDEX idx_orders_status ON orders(status);
 
-- Now it is fast regardless of table size

This basic CREATE INDEX syntax works the same in PostgreSQL, MySQL, and SQL Server.

If you regularly filter by more than one column, a composite index will serve you better than two separate ones.

CREATE INDEX idx_orders_status_date
  ON orders(status, created_at DESC);

The column order matters. An index is only usable when the query filters on its leading columns, so put the column you filter on with equality first, and range or sort columns after it.

MySQL note: MySQL supports DESC in index definitions since version 8.0. On older versions it accepts the keyword but ignores it, and all indexes are ascending. If you are on MySQL 5.7 or earlier, just drop the DESC.
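The leading-column rule is easy to verify. In this SQLite sketch (same schema as above, plus a column the index does not cover), a filter on the first indexed column uses the index; a filter on only the second column falls back to a full scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT,
                     created_at TEXT, total REAL);
CREATE INDEX idx_orders_status_date ON orders(status, created_at DESC);
INSERT INTO orders (status, created_at, total) VALUES ('pending', '2026-03-01', 10);
""")

def plan(sql):
    # Concatenate the detail column of the EXPLAIN QUERY PLAN output
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on the leading column can use the composite index
leading = plan("SELECT * FROM orders WHERE status = 'pending'")
# Filtering only on the trailing column cannot
trailing = plan("SELECT * FROM orders WHERE created_at > '2026-01-01'")

print(leading)   # SEARCH orders USING INDEX idx_orders_status_date ...
print(trailing)  # SCAN orders
```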


Stop Using SELECT *

This one is easy to fix and makes a real difference.

Fetching every column forces the database to read and transfer data you probably do not need. It also prevents the query planner from using certain optimizations.

-- What most of us write
SELECT * FROM products WHERE category_id = 5;
 
-- What we should write
SELECT id, name, price FROM products WHERE category_id = 5;

It feels like a small change. In practice, on large tables or slow networks, the difference is very noticeable.

This applies the same way in PostgreSQL, MySQL, and SQL Server.


Replace Correlated Subqueries With JOINs

A correlated subquery runs once for every row in the outer query.

If your outer query returns 50,000 customers, that subquery runs 50,000 times. It feels fine on a small dataset and becomes a disaster in production.

-- This runs the subquery once per customer
SELECT name,
  (SELECT COUNT(*) FROM orders WHERE customer_id = c.id) AS order_count
FROM customers c;
 
-- This runs a single pass over both tables
SELECT c.name, COUNT(o.id) AS order_count
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name;

The JOIN version is not just faster. It scales. The subquery version does not.

This works identically in PostgreSQL, MySQL, and SQL Server.
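You can watch the two shapes agree on a toy dataset. SQLite stands in for the real database here, and the rows are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers (id, name) VALUES (1, 'Ada'), (2, 'Grace'), (3, 'Alan');
INSERT INTO orders (customer_id) VALUES (1), (1), (3);
""")

# Correlated subquery: re-runs the inner COUNT once per customer
correlated = conn.execute("""
    SELECT name,
      (SELECT COUNT(*) FROM orders WHERE customer_id = c.id) AS order_count
    FROM customers c ORDER BY c.id
""").fetchall()

# JOIN + GROUP BY: a single pass over both tables
joined = conn.execute("""
    SELECT c.name, COUNT(o.id) AS order_count
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id, c.name ORDER BY c.id
""").fetchall()

print(correlated)            # [('Ada', 2), ('Grace', 0), ('Alan', 1)]
print(correlated == joined)  # True
```

Note the LEFT JOIN: an inner join would silently drop customers with zero orders, which the correlated version keeps.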


Use EXISTS Instead of COUNT for Existence Checks

This one shows up everywhere. You just want to know if something exists, but you write a query that counts every matching row first.

The database does not need to count 10,000 orders to tell you that a customer has at least one. It just needs to find the first match and stop.

-- Counts everything, then checks if it is more than zero
SELECT c.name
FROM customers c
WHERE (SELECT COUNT(*) FROM orders WHERE customer_id = c.id) > 0;
 
-- Stops at the first match
SELECT c.name
FROM customers c
WHERE EXISTS (
  SELECT 1 FROM orders WHERE customer_id = c.id
);

Same result. The EXISTS version short-circuits the moment it finds a row. On a table with millions of orders, that is a huge difference.

This works identically in PostgreSQL, MySQL, and SQL Server.


Use EXISTS Instead of IN for Large Subqueries

This is related but slightly different. Conceptually, IN with a subquery builds the full result set, then checks each row against it, while EXISTS probes row by row and bails out at the first match. Modern optimizers often rewrite the two into the same plan, but when they do not, EXISTS is the safer bet.

-- Builds the full list of customer IDs, then filters
SELECT name FROM customers
WHERE id IN (SELECT customer_id FROM orders WHERE total > 500);
 
-- Checks each customer one at a time, stops early
SELECT name FROM customers c
WHERE EXISTS (
  SELECT 1 FROM orders
  WHERE customer_id = c.id AND total > 500
);

For small subquery results, IN is fine. When that inner query returns thousands or millions of rows, EXISTS wins.

And here is a gotcha worth knowing: NOT IN behaves unexpectedly with NULL values. If the subquery returns even one NULL, the entire NOT IN condition evaluates to unknown and you get zero rows back. NOT EXISTS does not have this problem.

-- Returns nothing if any customer_id in orders is NULL
SELECT name FROM customers
WHERE id NOT IN (SELECT customer_id FROM orders);
 
-- Works correctly regardless of NULLs
SELECT name FROM customers c
WHERE NOT EXISTS (
  SELECT 1 FROM orders WHERE customer_id = c.id
);

If you have been burned by a query mysteriously returning zero rows, this is probably why.

This behavior is the same across PostgreSQL, MySQL, and SQL Server. The NULL trap with NOT IN is defined by the SQL standard, so every engine does it.
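The NULL trap is easy to reproduce. Two customers, two orders, one NULL customer_id, and NOT IN returns nothing (SQLite here, but any standard-compliant engine behaves the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers (id, name) VALUES (1, 'Ada'), (2, 'Grace');
-- one NULL customer_id is enough to poison NOT IN
INSERT INTO orders (customer_id) VALUES (1), (NULL);
""")

not_in = conn.execute(
    "SELECT name FROM customers WHERE id NOT IN (SELECT customer_id FROM orders)"
).fetchall()
not_exists = conn.execute(
    "SELECT name FROM customers c WHERE NOT EXISTS "
    "(SELECT 1 FROM orders WHERE customer_id = c.id)"
).fetchall()

print(not_in)      # [] -- every comparison against NULL is unknown
print(not_exists)  # [('Grace',)]
```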


Prefer UNION ALL Over UNION

UNION removes duplicate rows from the combined result. That sounds reasonable, but removing duplicates requires sorting or hashing the entire dataset. If you already know there will be no duplicates, or you do not care about them, you are paying for work that adds nothing.

-- Sorts and deduplicates the entire combined result
SELECT id, name FROM active_customers
UNION
SELECT id, name FROM premium_customers;
 
-- Just appends the results together
SELECT id, name FROM active_customers
UNION ALL
SELECT id, name FROM premium_customers;

This is one of those things you do not notice on small tables. On large result sets, the difference is significant. Get in the habit of defaulting to UNION ALL and only switching to UNION when you actually need deduplication.

This works identically in PostgreSQL, MySQL, and SQL Server.
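A quick sketch makes the difference concrete: with one overlapping row in the two (made-up) tables, UNION returns three rows and UNION ALL returns four:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE active_customers (id INTEGER, name TEXT);
CREATE TABLE premium_customers (id INTEGER, name TEXT);
INSERT INTO active_customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO premium_customers VALUES (2, 'Grace'), (3, 'Alan');
""")

union_rows = conn.execute(
    "SELECT id, name FROM active_customers "
    "UNION SELECT id, name FROM premium_customers").fetchall()
union_all_rows = conn.execute(
    "SELECT id, name FROM active_customers "
    "UNION ALL SELECT id, name FROM premium_customers").fetchall()

print(len(union_rows))      # 3 -- the duplicate (2, 'Grace') was removed
print(len(union_all_rows))  # 4 -- rows appended as-is
```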


Be Careful With LIMIT and OFFSET at Scale

Pagination with OFFSET looks harmless until your users reach page 500.

The problem is that OFFSET 10000 still requires the database to scan and discard the first 10,000 rows before returning your 20. It gets slower the deeper you go.

-- Gets painful at high page numbers
SELECT * FROM posts ORDER BY created_at DESC
LIMIT 20 OFFSET 10000;

The better approach is keyset pagination. Instead of skipping rows, you continue from where you left off.

SELECT * FROM posts
WHERE created_at < :last_seen_date
ORDER BY created_at DESC
LIMIT 20;

This stays fast at any depth because it uses an index rather than counting rows.

MySQL: The LIMIT ... OFFSET syntax shown above works the same in MySQL.

SQL Server: Does not support LIMIT. Use OFFSET ... FETCH or TOP instead:

-- Equivalent of LIMIT 20 OFFSET 10000
SELECT * FROM posts ORDER BY created_at DESC
OFFSET 10000 ROWS FETCH NEXT 20 ROWS ONLY;
 
-- Keyset pagination with TOP
SELECT TOP 20 * FROM posts
WHERE created_at < @last_seen_date
ORDER BY created_at DESC;
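Both pagination styles hand back the same page, which you can confirm in a few lines. The demo seeds distinct timestamps; in a real schema you would typically order by (created_at, id) so the keyset cursor is unambiguous when timestamps collide:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany("INSERT INTO posts (created_at) VALUES (?)",
                 [(f"2026-01-{day:02d}",) for day in range(1, 31)])

page_size = 5
page1 = conn.execute(
    "SELECT id, created_at FROM posts ORDER BY created_at DESC LIMIT ?",
    (page_size,)).fetchall()

# Keyset: continue from the last row we handed out
last_seen = page1[-1][1]
keyset_page2 = conn.execute(
    "SELECT id, created_at FROM posts WHERE created_at < ? "
    "ORDER BY created_at DESC LIMIT ?",
    (last_seen, page_size)).fetchall()

# OFFSET: scan and discard page 1 to reach page 2
offset_page2 = conn.execute(
    "SELECT id, created_at FROM posts ORDER BY created_at DESC LIMIT ? OFFSET ?",
    (page_size, page_size)).fetchall()

print(keyset_page2 == offset_page2)  # True
```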

Do Not Wrap Indexed Columns in Functions

This is one of the most common and least obvious performance killers.

When you wrap a column in a function inside a WHERE clause, the database cannot use the index on that column. It has to evaluate the function for every row first.

-- Index on created_at is ignored here
SELECT * FROM users
WHERE EXTRACT(YEAR FROM created_at) = 2025;
 
-- Index is used here
SELECT * FROM users
WHERE created_at >= '2025-01-01'
  AND created_at < '2026-01-01';

The rule is simple. The column you are filtering on should appear alone on one side of the condition. If you are applying a function to it, rewrite the condition so the function is on the value side instead. (PostgreSQL and MySQL 8.0.13+ can also index an expression directly, but the range rewrite is the portable fix.)

MySQL: Uses YEAR(created_at) instead of EXTRACT(YEAR FROM ...). Both work, but the rewritten range condition is the same across all three databases and is the one you should use regardless.

SQL Server: Also uses YEAR(created_at). Same advice: rewrite it to a range condition to let the index do its job.
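Here is the effect in a SQLite sketch (using strftime, SQLite's date-extraction function, on invented tables): the wrapped filter scans, the bare range condition searches the index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT, email TEXT);
CREATE INDEX idx_users_created ON users(created_at);
INSERT INTO users (created_at, email) VALUES ('2025-06-01', 'a@example.com');
""")

def plan(sql):
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Wrapping the column in a function hides it from the index
wrapped = plan("SELECT * FROM users WHERE strftime('%Y', created_at) = '2025'")
# The equivalent range condition leaves the column bare, so the index applies
ranged = plan("SELECT * FROM users "
              "WHERE created_at >= '2025-01-01' AND created_at < '2026-01-01'")

print(wrapped)  # SCAN users
print(ranged)   # SEARCH users USING INDEX idx_users_created ...
```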


Watch Out for Implicit Type Conversions

This is the sneaky cousin of wrapping columns in functions. When the types in your comparison do not match, the database silently casts the column value for every row. That kills your index just the same.

-- phone_number is VARCHAR, but you are comparing to an integer
-- The database casts every row's phone_number to a number to compare
SELECT * FROM users WHERE phone_number = 5551234567;
 
-- Match the type. Compare string to string.
SELECT * FROM users WHERE phone_number = '5551234567';

This also happens with dates, decimals, and character sets. If your EXPLAIN plan shows a full scan on a column you know is indexed, check whether the types on both sides of the comparison actually match. Nine times out of ten, that is the problem.

This behavior applies across PostgreSQL, MySQL, and SQL Server, though each engine handles the implicit casting rules slightly differently. The fix is always the same: make sure the types match.


Use Covering Indexes for Read-Heavy Queries

A covering index includes all the columns a query needs.

When the database can satisfy a query entirely from the index without touching the main table, the performance difference can be dramatic. We are talking 2x to 5x improvements on some read-heavy workloads.

-- Query that only needs these three columns
SELECT id, name, price FROM products WHERE category_id = 5;
 
-- A covering index for exactly this query (PostgreSQL 11 and later)
CREATE INDEX idx_products_cat_cover
  ON products(category_id) INCLUDE (name, price);

The INCLUDE clause adds columns to the index leaf pages without making them part of the index key. This is most useful for queries that run constantly, like the ones powering your main API endpoints.

SQL Server: Supports INCLUDE with the same syntax. This has been available since SQL Server 2005.

MySQL: Does not support INCLUDE. To get a covering index in MySQL, add the extra columns to the index key itself:

CREATE INDEX idx_products_cat_cover
  ON products(category_id, name, price);

This achieves the same covering index behavior, but those extra columns now influence index ordering. In practice, it still works well for this use case.
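SQLite makes the payoff visible: build the MySQL-style key-only covering index on a made-up products table, and the plan says so explicitly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, category_id INT, name TEXT,
                       price REAL, description TEXT);
-- MySQL-style covering index: the extra columns live in the key itself
CREATE INDEX idx_products_cat_cover ON products(category_id, name, price);
INSERT INTO products (category_id, name, price, description)
    VALUES (5, 'Widget', 9.99, 'a long description we never select');
""")

plan = " ".join(r[3] for r in conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, name, price FROM products WHERE category_id = 5"))

# The query never touches the table: everything comes from the index
print(plan)  # SEARCH products USING COVERING INDEX idx_products_cat_cover ...
```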


Use Window Functions Instead of Self-Joins

Self-joins are how people used to solve ranking and comparison problems before window functions existed. They still work, but they are usually slower and harder to read.

Say you want each employee’s salary alongside their department average. The old way is to join the table to itself through an aggregation. The modern way is a window function that does it in a single pass.

-- Self-join: scans the table twice
SELECT e.name, e.salary, d.avg_salary
FROM employees e
JOIN (
  SELECT department_id, AVG(salary) AS avg_salary
  FROM employees
  GROUP BY department_id
) d ON e.department_id = d.department_id;
 
-- Window function: single pass
SELECT name, salary,
  AVG(salary) OVER (PARTITION BY department_id) AS avg_salary
FROM employees;

Window functions are especially good for running totals, row numbering, and fetching the previous or next row without a join.

-- Running total of sales by date
SELECT order_date, amount,
  SUM(amount) OVER (ORDER BY order_date) AS running_total
FROM sales;
 
-- Get each row's rank within its category
SELECT name, category, price,
  ROW_NUMBER() OVER (PARTITION BY category ORDER BY price DESC) AS price_rank
FROM products;

The query planner handles these efficiently because it only needs to scan the data once. With a self-join, it has to materialize the subquery and then join back, which is more work for both the database and anyone reading the code.

Window functions work in PostgreSQL, MySQL (8.0+), and SQL Server (2012+) with the same syntax.

MySQL note: If you are on MySQL 5.7 or earlier, window functions are not available. You will need to stick with the self-join approach or upgrade.
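A running total is a two-minute experiment with the standard library, since the SQLite bundled with Python 3.8+ (SQLite 3.25+) supports window functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (order_date TEXT, amount REAL);
INSERT INTO sales VALUES ('2026-01-01', 100), ('2026-01-02', 50), ('2026-01-03', 25);
""")

# With ORDER BY, the default window frame accumulates
# from the first row up to the current row
rows = conn.execute("""
    SELECT order_date, amount,
           SUM(amount) OVER (ORDER BY order_date) AS running_total
    FROM sales
""").fetchall()

print(rows)
# [('2026-01-01', 100.0, 100.0), ('2026-01-02', 50.0, 150.0), ('2026-01-03', 25.0, 175.0)]
```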


Be Smart About CTEs

Common Table Expressions make complex queries more readable. But depending on your database, they might also make them slower.

In PostgreSQL (before version 12), CTEs were always materialized. That means the database executed them as separate queries, stored the full result, and then ran the outer query against that stored result. It could not push filters down into the CTE, which often killed performance.

-- PostgreSQL < 12 materializes this, even if you only need a few rows
WITH all_orders AS (
  SELECT * FROM orders
)
SELECT * FROM all_orders WHERE status = 'pending';

Starting with PostgreSQL 12, the planner can inline CTEs and optimize across the boundary. If you need to force the behavior either way, PostgreSQL lets you be explicit.

-- Force materialization (useful when the CTE is referenced multiple times)
WITH expensive_query AS MATERIALIZED (
  SELECT customer_id, SUM(total) AS lifetime_value
  FROM orders
  GROUP BY customer_id
)
SELECT * FROM expensive_query WHERE lifetime_value > 10000;
 
-- Force inlining (useful when you want filters to push down)
WITH filtered AS NOT MATERIALIZED (
  SELECT * FROM orders
)
SELECT * FROM filtered WHERE status = 'pending' AND created_at > NOW() - INTERVAL '7 days';

The takeaway: CTEs are great for readability. Just be aware that in some engines they create optimization barriers. If a CTE-based query is slower than you expect, try rewriting it as a subquery and compare the plans.

MySQL: CTEs are available since MySQL 8.0. The optimizer decides on its own whether to merge a CTE into the outer query or materialize it; there is no MATERIALIZED / NOT MATERIALIZED hint. If a CTE is slowing things down in MySQL, rewriting it as a subquery is your main option.

SQL Server: Supports CTEs but does not materialize them. The optimizer inlines them and treats them like subqueries. No special hints are needed, but it also means SQL Server will re-execute the CTE each time it is referenced. If the CTE is expensive and referenced multiple times, consider using a temp table instead.


Watch for the N+1 Problem in Application Code

This one is not a SQL problem. It is an ORM problem. But it kills performance just as effectively.

The pattern looks innocent. You query a list of items, then loop through them and fire a separate query for each one to get related data.

# Pseudocode that fires 101 queries
customers = db.query("SELECT * FROM customers LIMIT 100")
for customer in customers:
    # one round trip per customer (and the f-string invites SQL injection)
    orders = db.query(f"SELECT * FROM orders WHERE customer_id = {customer.id}")

That is 1 query for the customer list plus 100 queries for their orders. At scale, this destroys your database.

The fix is to fetch everything you need in a single query (or at most two).

-- One query gets everything
SELECT c.id, c.name, o.id AS order_id, o.total
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
ORDER BY c.id;

If you are using an ORM, look for eager loading or join-based loading features. In Django it is select_related and prefetch_related. In ActiveRecord it is includes. In Entity Framework it is Include(). Every major ORM has something for this, because every major ORM has caused this problem.

This is an application-level concern and applies regardless of which database you use.
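This is roughly what the "at most two queries" flavor of the fix looks like by hand, which is also what prefetch_related-style ORM features do under the hood (SQLite and the sample rows are stand-ins):

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers (id, name) VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders (customer_id, total) VALUES (1, 10.0), (1, 20.0), (2, 5.0);
""")

# Query 1: the parent rows
customers = conn.execute("SELECT id, name FROM customers").fetchall()

# Query 2: all related rows in one shot, instead of one query per customer
ids = [c[0] for c in customers]
placeholders = ", ".join("?" * len(ids))
orders_by_customer = defaultdict(list)
for customer_id, total in conn.execute(
        f"SELECT customer_id, total FROM orders "
        f"WHERE customer_id IN ({placeholders})", ids):
    orders_by_customer[customer_id].append(total)

print(sorted(orders_by_customer[1]))  # [10.0, 20.0]
print(orders_by_customer[2])          # [5.0]
```

Two queries total, regardless of how many customers come back, instead of one per customer.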


Batch Your Writes

Inserting or updating one row at a time inside a loop is slow for the same reason N+1 reads are slow. Each statement has overhead: parsing, planning, network round trip, transaction commit.

-- Slow: one insert per round trip
INSERT INTO events (user_id, action) VALUES (1, 'click');
INSERT INTO events (user_id, action) VALUES (2, 'view');
INSERT INTO events (user_id, action) VALUES (3, 'click');
-- ... 997 more
 
-- Fast: one statement, one round trip
INSERT INTO events (user_id, action) VALUES
  (1, 'click'),
  (2, 'view'),
  (3, 'click'),
  -- ... up to a few thousand at a time
  (1000, 'view');

Multi-row INSERT with VALUES works the same in PostgreSQL, MySQL, and SQL Server.

For bulk updates, the syntax diverges. In PostgreSQL you can use UPDATE ... FROM with a VALUES list.

-- PostgreSQL: bulk update using a VALUES list
UPDATE products AS p
SET price = v.new_price
FROM (VALUES
  (1, 29.99),
  (2, 49.99),
  (3, 9.99)
) AS v(id, new_price)
WHERE p.id = v.id;

MySQL: Does not support UPDATE ... FROM. Use a JOIN-based update instead:

UPDATE products p
JOIN (
  SELECT 1 AS id, 29.99 AS new_price
  UNION ALL SELECT 2, 49.99
  UNION ALL SELECT 3, 9.99
) v ON p.id = v.id
SET p.price = v.new_price;

SQL Server: Supports a similar pattern using a derived table, or the MERGE statement:

MERGE INTO products AS p
USING (VALUES (1, 29.99), (2, 49.99), (3, 9.99)) AS v(id, new_price)
ON p.id = v.id
WHEN MATCHED THEN UPDATE SET p.price = v.new_price;

If you are doing thousands of inserts, batch them in groups of 1,000 to 5,000. Going much larger than that can cause lock contention or blow out your transaction log. Find the sweet spot for your setup.
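From application code, the batching usually goes through the driver's executemany. SQLite is in-process, so there is no network round trip to save here, but the shape is the same one you would use with a PostgreSQL or MySQL driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INT, action TEXT)")

# Fabricated sample batch of 1,000 rows
rows = [(i % 7, 'click' if i % 2 else 'view') for i in range(1000)]

# One prepared statement run over the whole batch, inside a single transaction
conn.executemany("INSERT INTO events (user_id, action) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 1000
```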


Consider Table Partitioning for Large Tables

When a table grows into the hundreds of millions of rows, even good indexes start to slow down. Partitioning splits the table into smaller physical pieces that the database can query independently.

The most common strategy is range partitioning by date. Here is how it looks in PostgreSQL using declarative partitioning (available since version 10).

-- PostgreSQL: declarative partitioning
CREATE TABLE events (
  id BIGINT GENERATED ALWAYS AS IDENTITY,
  event_date DATE NOT NULL,
  user_id INT,
  payload JSONB
) PARTITION BY RANGE (event_date);
 
-- Create partitions for each month
CREATE TABLE events_2026_01 PARTITION OF events
  FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
 
CREATE TABLE events_2026_02 PARTITION OF events
  FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
 
CREATE TABLE events_2026_03 PARTITION OF events
  FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');

When you query with a date filter, the planner skips partitions that cannot possibly contain matching rows. That is called partition pruning, and it means your query only touches a fraction of the data.

-- Only scans the March partition, ignores everything else
SELECT * FROM events
WHERE event_date >= '2026-03-01' AND event_date < '2026-04-01';

Partitioning also makes maintenance operations faster. Dropping old data is just detaching a partition, which is nearly instant compared to deleting millions of rows.

-- PostgreSQL: instant removal of old data
ALTER TABLE events DETACH PARTITION events_2026_01;
DROP TABLE events_2026_01;

Do not partition everything. It adds complexity to schema management and can make some queries slower if they span many partitions. Use it when your table is large enough that index scans alone are no longer cutting it.

MySQL: Supports range partitioning with different syntax:

CREATE TABLE events (
  id BIGINT AUTO_INCREMENT,
  event_date DATE NOT NULL,
  user_id INT,
  payload JSON,
  PRIMARY KEY (id, event_date)
) PARTITION BY RANGE (TO_DAYS(event_date)) (
  PARTITION p_2026_01 VALUES LESS THAN (TO_DAYS('2026-02-01')),
  PARTITION p_2026_02 VALUES LESS THAN (TO_DAYS('2026-03-01')),
  PARTITION p_2026_03 VALUES LESS THAN (TO_DAYS('2026-04-01'))
);
 
-- Drop old data
ALTER TABLE events DROP PARTITION p_2026_01;

Note: In MySQL, every unique key on a partitioned table (including the primary key) must include the partition column, which is why event_date is part of the primary key above.

SQL Server: Uses partition functions and partition schemes, which is a more involved setup:

CREATE PARTITION FUNCTION pf_event_date (DATE)
  AS RANGE RIGHT FOR VALUES ('2026-02-01', '2026-03-01', '2026-04-01');
 
CREATE PARTITION SCHEME ps_event_date
  AS PARTITION pf_event_date ALL TO ([PRIMARY]);
 
CREATE TABLE events (
  id BIGINT IDENTITY(1,1),
  event_date DATE NOT NULL,
  user_id INT,
  payload NVARCHAR(MAX)
) ON ps_event_date(event_date);

Dropping old partitions in SQL Server requires switching the partition out to a staging table, then truncating it.


Filter Early With WHERE, Not HAVING

HAVING filters rows after aggregation. WHERE filters before. If you can eliminate rows before the database even starts grouping, you save it a lot of work.

-- Bad: aggregates every order first, then filters
SELECT customer_id, SUM(total) AS total_spent
FROM orders
GROUP BY customer_id, status
HAVING status = 'completed' AND SUM(total) > 1000;
 
-- Good: filters first, aggregates only what is needed
SELECT customer_id, SUM(total) AS total_spent
FROM orders
WHERE status = 'completed'
GROUP BY customer_id
HAVING SUM(total) > 1000;

The rule of thumb is simple. If a condition does not depend on an aggregated value, put it in WHERE. Only use HAVING for conditions on aggregates like SUM, COUNT, or AVG.

This works identically in PostgreSQL, MySQL, and SQL Server.


Keep Your Statistics Fresh

The query planner uses statistics to decide how to execute a query. Stale statistics lead to bad decisions. Bad decisions lead to plans that are 10x slower than they need to be.

After large data loads or bulk deletes, run an explicit analyze.

-- PostgreSQL
ANALYZE orders;
 
-- MySQL
ANALYZE TABLE orders;
 
-- SQL Server
UPDATE STATISTICS orders;

Most databases have auto-analyze enabled by default. Make sure yours does too, and run it manually after any operation that moves large amounts of data.


Know When to Use Materialized Views

Sometimes you have a complex aggregation query that powers a dashboard or report, and no matter how well you optimize it, it just takes too long. That is when materialized views earn their keep.

A materialized view stores the result of a query as a physical table. Instead of recalculating the aggregation every time someone hits the endpoint, you serve a precomputed result.

-- PostgreSQL: create a materialized view for a slow dashboard query
CREATE MATERIALIZED VIEW monthly_revenue AS
SELECT
  DATE_TRUNC('month', order_date) AS month,
  region,
  SUM(total) AS revenue,
  COUNT(*) AS order_count
FROM orders
GROUP BY DATE_TRUNC('month', order_date), region;
 
-- Query is instant
SELECT * FROM monthly_revenue WHERE region = 'US';
 
-- Refresh when needed (after new data loads, on a schedule, etc.)
REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_revenue;

The CONCURRENTLY keyword in PostgreSQL lets you refresh without locking reads, which is important if the view backs a live application. But it requires a unique index on the view.

CREATE UNIQUE INDEX idx_monthly_revenue
  ON monthly_revenue(month, region);

Materialized views work best for data that changes on a known schedule, like nightly data imports or batch processing jobs. They are less useful when data needs to be real-time, unless you are comfortable with a refresh interval that introduces some staleness.

The tradeoff is straightforward: you trade storage space and refresh time for faster reads.

MySQL: Does not have native materialized views. The common workaround is to create a regular table and populate it with a scheduled event or cron job:

CREATE TABLE monthly_revenue AS
SELECT
  DATE_FORMAT(order_date, '%Y-%m-01') AS month,
  region,
  SUM(total) AS revenue,
  COUNT(*) AS order_count
FROM orders
GROUP BY DATE_FORMAT(order_date, '%Y-%m-01'), region;
 
-- Refresh by truncating and reinserting
TRUNCATE TABLE monthly_revenue;
INSERT INTO monthly_revenue
SELECT
  DATE_FORMAT(order_date, '%Y-%m-01') AS month,
  region, SUM(total), COUNT(*)
FROM orders
GROUP BY DATE_FORMAT(order_date, '%Y-%m-01'), region;

SQL Server: Supports indexed views (also called materialized views). They refresh automatically when the base tables change, but come with strict requirements (no outer joins, no subqueries, schema-binding required, etc.):

CREATE VIEW dbo.monthly_revenue WITH SCHEMABINDING AS
SELECT
  CAST(DATETRUNC(MONTH, order_date) AS DATE) AS month,
  region,
  SUM(total) AS revenue,
  COUNT_BIG(*) AS order_count
FROM dbo.orders
GROUP BY CAST(DATETRUNC(MONTH, order_date) AS DATE), region;
 
CREATE UNIQUE CLUSTERED INDEX idx_monthly_revenue
  ON dbo.monthly_revenue(month, region);

Note: DATETRUNC is available in SQL Server 2022+. On older versions, use DATEFROMPARTS(YEAR(order_date), MONTH(order_date), 1).


A Quick Reference

Technique | When to use it | Cross-DB
EXPLAIN ANALYZE | Always, before anything else | Syntax varies
Add indexes | When you see sequential scans | Same
Avoid SELECT * | Always | Same
Replace correlated subqueries | Aggregated lookups | Same
EXISTS instead of COUNT | Checking if rows exist, not how many | Same
EXISTS instead of IN | Large subquery result sets, or NULLs in data | Same
UNION ALL over UNION | When you do not need deduplication | Same
Keyset pagination | Paginated APIs with deep pages | LIMIT vs TOP/FETCH
Avoid functions on columns | Date and string filters | Function names vary
Match column types | Prevent implicit casting in WHERE clauses | Same
Covering indexes (INCLUDE) | High-frequency read queries | No INCLUDE in MySQL
Window functions | Rankings, running totals, row comparisons | MySQL 8.0+, SQL Server 2012+
CTEs with awareness | Readable queries, watch materialization | Behavior varies by engine
Fix N+1 in app code | ORM loops that fire one query per row | Same
Batch writes | Bulk inserts and updates | Bulk UPDATE syntax varies
Table partitioning | Tables with hundreds of millions of rows | Syntax varies significantly
WHERE before HAVING | Filter early, aggregate only what you need | Same
Update statistics | After bulk operations | Command varies
Materialized views | Expensive aggregations on a known schedule | Native in PostgreSQL and SQL Server (indexed views); MySQL needs a workaround

Conclusion

SQL optimization is about understanding what the database is doing, then removing the expensive parts.

Most of the time, one well-placed index is the difference between a query that runs in 8 seconds and one that runs in 45 milliseconds. Start with EXPLAIN, follow the evidence, and measure everything before and after.

Stay focused, Developer!