Forwarded Fetches and Bookmark Lookups

Base Table

When you forgo putting a clustered index on your table, you may find your queries using forwarded fetches — SQL Server’s little change-of-address form for rows that don’t fit on their page anymore.

This typically isn’t a good thing, though. All that jumping around means extra reads and CPU that can be really confusing to troubleshoot.

All sorts of things get blamed: parameter sniffing, out-of-date stats, eye tax frog man station, and the other common bogeypersons held responsible for all SQL Server performance issues.

Dummo

Here’s a heap. It’s not very special. When we examine it with sp_BlitzIndex, there’s not much going on.
Right in the middle of a swamp.
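If you want to play along, a heap roughly like this will do. The table, column, and index names here are made up for illustration; any heap with a nonclustered index and a column you can fatten up later behaves the same way.

CREATE TABLE dbo.Heapo
(
    Id INT IDENTITY NOT NULL,
    UserName VARCHAR(40) NOT NULL,
    Bio VARCHAR(8000) NOT NULL
);

/* Nonclustered only, so the table stays a heap */
CREATE NONCLUSTERED INDEX ix_heapo_id
    ON dbo.Heapo (Id);

/* Load it with rows that are narrow now, but can grow later */
INSERT dbo.Heapo (UserName, Bio)
SELECT TOP (100000)
       'user_' + CONVERT(VARCHAR(11), ROW_NUMBER() OVER (ORDER BY (SELECT NULL))),
       REPLICATE('x', 200)
FROM sys.all_columns AS ac1
CROSS JOIN sys.all_columns AS ac2;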

When we run a simple query against it, we get a plan with a bookmark lookup.

The MAXDOP 1 hint is just to make reading the tea leaves easier.
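Against the made-up table above, a stand-in query looks something like this. The shape is what matters: a seek on the nonclustered index, plus a lookup back into the heap for the columns the index doesn’t cover.

SELECT h.Id, h.UserName, h.Bio
FROM dbo.Heapo AS h
WHERE h.Id < 500      /* selective enough that the optimizer picks an index seek + RID lookup instead of scanning the heap */
OPTION (MAXDOP 1);    /* serial plan, easier to read */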

BML

When we update the table, the fun starts. By fun I mean terribleness.
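In the sketch, the update is just a matter of fattening up the variable-length column so rows no longer fit where they were first written:

/* Rows that outgrow their page get moved, leaving a forwarding pointer behind at the old address */
UPDATE dbo.Heapo
SET Bio = REPLICATE('x', 3000)
WHERE Id <= 20000;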

Now sp_BlitzIndex turns up something new!

Ehhh

We racked up ~6300 forwarded fetches just during the update.
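If you want to watch that counter yourself, it lives in sys.dm_db_index_operational_stats (index_id 0 is the heap):

SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.forwarded_fetch_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Heapo'), 0, NULL) AS ios;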

If we re-run our original select query, that number doubles.

Ach.

If we run Profiler to capture some query metrics, because it’s late and I’d rather enjoy this wine than use XE, the issues show themselves a bit more.

Pro-Filer.

I think the change in metrics from before and after the update speaks for itself.
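If Profiler (or XE) isn’t handy, running the select with SET STATISTICS IO, TIME ON before and after the update tells a similar story: logical reads jump once the lookup starts chasing forwarding pointers.

SET STATISTICS IO, TIME ON;

SELECT h.Id, h.UserName, h.Bio
FROM dbo.Heapo AS h
WHERE h.Id < 500
OPTION (MAXDOP 1);

SET STATISTICS IO, TIME OFF;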

This is all on a relatively small heap, with queries that touch a small number of rows.

My select returns a little under 9000 records, but with the forwarded fetches involved it takes ~6000 extra reads to get them.

CPU doesn’t move much, but it does show up where it hadn’t before.

Fear Of Endings

Heaps have their uses. Some of my favorite people love heaps.

But you have to be really careful when choosing which tables you’re going to leave clustered indexes off of.

Typically, if any part of the workload involves updates or deletes, a heap is a bad idea.

Thanks for reading!


4 Comments

  • Yes, deletes on heaps without taking a table lock are bad: they don’t de-allocate the space in the table, so new rows will not get added there. You have to rebuild the table or temporarily add a clustered index to the table to do so. We had tables with gigs of unused space in them.

    Reply
    • Rob – when you say “new rows will not get added there,” I assume you mean new rows from *other* indexes or tables? The space is available for reuse for the same object. You can prove it by creating a heap, loading it with 1GB of rows, deleting without a table lock, and then inserting another 1GB of data. The heap won’t be 2GB in size. (Just clarifying to make sure we’re on the same page.)

      Reply
  • “eye tax frog man station” ?
    Wypo (Wine typo), or just an old person’s reference that I don’t get cause I’m too darn young? 😛

    Reply
