Here are some thoughts on optimizing the performance of stored procedures.


In no particular order:

Clustered Indexes

Make sure your tables have clustered indexes.

It’s amazing to me that folks create tables without clustered indexes. A clustered index defines the physical order of the data in the table, and it’s my understanding that it is difficult for SQL Server to reclaim space from a table without a clustered index when rows are deleted.

A table without a clustered index is called a heap, and heaps are generally considered to be bad. Here’s the MSDN description of heap tables and some of the effects a heap table can have on your queries:

https://msdn.microsoft.com/en-us/library/hh213609.aspx

While choosing good indexes can be an arcane art, generally any clustered index is better than no clustered index. It’s true that there are situations where creating a table as a heap can be advantageous but identifying those situations is an advanced skill.
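As a sketch of what this looks like in practice (the table and column names here are hypothetical, not from any real schema), the most common way to get a clustered index is to create it implicitly through the primary key:

```sql
-- A hypothetical Orders table; the PRIMARY KEY CLUSTERED constraint
-- makes OrderID the physical sort order of the table.
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1,1) NOT NULL,
    OrderDate DATETIME          NOT NULL,
    Total     MONEY             NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- An existing heap (here, a hypothetical dbo.OrderHistory) can be
-- converted by adding a clustered index directly:
CREATE CLUSTERED INDEX IX_OrderHistory_OrderID
    ON dbo.OrderHistory (OrderID);
```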

There is a blog posting here that looks at some of those circumstances where using a heap might be beneficial.

Parameter Sniffing

Keep your parameters scent-free!

If you’re a victim of parameter sniffing one of the symptoms you’ll notice is that your stored procedure’s performance will vary drastically. It will execute quickly in the morning (for example), then slowly in the afternoon and performance will seem to vary independently of database workload.

This happens because, the first time a stored procedure is compiled, SQL Server builds and caches the best query plan it can determine using the parameter values supplied on that first call. This is good, usually. However, if the distribution of the data changes (perhaps because you’re inserting and deleting records), or the procedure is later called with very different parameter values, then the cached plan may no longer be a good plan for the data. The stored procedure will exhibit bad performance.

Here’s a nice blog article at mssqltips that lists some workarounds for parameter sniffing.

The one that I’ve typically used is to declare local variables, assign the parameter values to them, and use those local variables internally in the procedure.

I can then be sure that the performance that I see in development will be representative of what I see in production.
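Here’s a minimal sketch of that workaround (the procedure, table, and column names are hypothetical). Because the optimizer cannot "sniff" the value of a local variable at compile time, it builds a plan for the average data distribution rather than for one specific parameter value:

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    -- Copy the parameter into a local variable; the plan is no
    -- longer tied to the sniffed value of @CustomerID.
    DECLARE @LocalCustomerID INT = @CustomerID;

    SELECT OrderID, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;
END;
```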

Option (force order)

Do what I say, not what I mean!

I typically develop stored procedures against test databases. The stored procedures are then deployed to production databases, which have different data. Different data can mean a different–and unexpected–query plan. Sometimes that unexpected query plan is better…and sometimes it is worse.

Rather than deal with the slings and arrows of outrageous fortune, I use the OPTION(FORCE ORDER) query hint.

In a nutshell, this query hint forces the joins to be performed in the order that the tables appear in the query, bypassing much of the work of the SQL Server query optimizer. In general, this is bad, but it really helps if what you need is consistency across databases.

When using OPTION (FORCE ORDER) you would typically want to order the tables in your query from most restrictive to least restrictive.
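A sketch of the hint in use, with hypothetical tables. The filtered Orders table is listed first (most restrictive), and FORCE ORDER tells the optimizer to join in exactly that written order rather than reordering the joins itself:

```sql
SELECT c.CustomerName, o.OrderDate, o.Total
FROM dbo.Orders AS o                 -- most restrictive table first
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2024-01-01'
OPTION (FORCE ORDER);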

Temp tables

Divide and conquer!

If you have a complex subquery in your stored procedure, sometimes pre-selecting its results into a temporary table can improve the performance of the overall query.

Temporary tables have an advantage over subqueries in that they can be indexed like regular tables. This is typically something that I have done as a last resort when trying to optimize particularly time-sensitive stored procedures.
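The pattern looks like this (a sketch with hypothetical tables): materialize the subquery once into a temp table, index it, then join against it in the main query.

```sql
-- Materialize the former subquery into a temp table.
SELECT CustomerID, SUM(Total) AS TotalSpent
INTO #CustomerTotals
FROM dbo.Orders
GROUP BY CustomerID;

-- Unlike a subquery, the temp table can now be indexed.
CREATE CLUSTERED INDEX IX_CT ON #CustomerTotals (CustomerID);

-- The main query joins the indexed temp table instead of
-- re-evaluating the subquery.
SELECT c.CustomerName, t.TotalSpent
FROM dbo.Customers AS c
JOIN #CustomerTotals AS t
    ON t.CustomerID = c.CustomerID;

DROP TABLE #CustomerTotals;
```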

In-Memory tables

Why walk when you can fly!

If there is a small result set that you are considering storing in a temp table, consider storing it in a table variable instead. Table variables have less overhead than temp tables (no statistics, minimal locking and recompilation) and can be very fast for small sets. Note that, despite their reputation, table variables are not purely in-memory: like temp tables, they are backed by tempdb. Because the optimizer assumes a table variable holds very few rows, keep its size in mind; if you will be working with thousands of rows, then a traditional temp table is usually a better choice.
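A sketch of a table variable holding a small lookup set (the names are hypothetical):

```sql
-- A table variable for a small, short-lived set of keys.
DECLARE @TopCustomers TABLE
(
    CustomerID INT PRIMARY KEY    -- inline index on the variable
);

INSERT INTO @TopCustomers (CustomerID)
SELECT TOP (10) CustomerID
FROM dbo.Orders
GROUP BY CustomerID
ORDER BY SUM(Total) DESC;

-- Join the small variable against the main table.
SELECT o.OrderID, o.Total
FROM dbo.Orders AS o
JOIN @TopCustomers AS t
    ON t.CustomerID = o.CustomerID;
```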

Cursor tips

What happens in my proc — stays in my proc

Although everyone frowns on cursors, sometimes they’re the only way to get the job done. There are a lot of cursor options, and some are faster (much faster) than others.


DECLARE XCN CURSOR FOR
SELECT NAME FROM SYSOBJECTS

The cursor declared above has no explicit scope. The scope of a cursor determines whether it is visible only to the batch or stored procedure that declared it (local) or to every batch executing on the same connection (global). The default scope is controlled by the CURSOR_DEFAULT database option, which is set to GLOBAL out of the box, so cursors are global in scope unless you say otherwise.

The cursor is also updatable, which means that the table on which the cursor is based can be modified through the cursor using the Transact-SQL WHERE CURRENT OF syntax.

Neither of these capabilities is typically necessary. Usually, a cursor loops through the results of a query and, for each iteration of the loop, executes some logic and possibly other queries.

If you don’t need an updatable global cursor, you can make your cursor perform up to five times faster by declaring it as local (meaning only the declaring batch or procedure can see it) and read-only.

Here’s how you would declare a local, read-only cursor:


DECLARE XCN CURSOR LOCAL FAST_FORWARD FOR
SELECT NAME FROM SYSOBJECTS
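For completeness, here is the full open/fetch/deallocate loop around such a cursor. Iterating over object names with a PRINT is just a placeholder for your per-row logic:

```sql
DECLARE @name SYSNAME;

DECLARE xcn CURSOR LOCAL FAST_FORWARD FOR
    SELECT NAME FROM SYSOBJECTS;

OPEN xcn;
FETCH NEXT FROM xcn INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;                      -- per-row logic goes here
    FETCH NEXT FROM xcn INTO @name;
END;

CLOSE xcn;
DEALLOCATE xcn;
```

Because the cursor is LOCAL, it is cleaned up automatically when the batch ends, but explicitly closing and deallocating it is still good practice.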

Estimated Query Plans

Any sufficiently complicated technology is indistinguishable from magic.

The first place you should look when optimizing the performance of a query is the estimated execution plan. Query plans can be complex, and strategies for optimizing them arcane. Full coverage of interpreting estimated query plans is beyond the scope of this modest article; however, here is an article that touches on the topic: www.sqlshack.com/sql-server-query-execution-plans-understanding-reading-plans.

In short, when looking at an estimated query plan, the red flags are table scans (and clustered index scans on large tables), and the cost percentages shown in the plan will give you an idea of where to start optimizing first.
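In Management Studio you can get the estimated plan with Ctrl+L (Display Estimated Execution Plan). You can also request it in script form with SHOWPLAN_XML, which returns the plan instead of running the query. A sketch:

```sql
SET SHOWPLAN_XML ON;
GO
-- This statement is NOT executed; SQL Server returns its
-- estimated plan as XML instead.
SELECT NAME FROM SYSOBJECTS;
GO
SET SHOWPLAN_XML OFF;
GO
```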

Happy optimizing!