The Transaction Log, Delayed Durability, and considerations for its use with In- ...

If Warhol did transaction Logging…

In this mini-series of posts, we will discuss how the mechanics of SQL Server’s transaction logging provide transactional durability. We will look at how Delayed Durability changes the logging landscape, and then we will see specifically how In-Memory OLTP logging builds upon the on-disk logging mechanism. Finally, we will pose the question “should we use Delayed Durability with In-Memory OLTP or not?” and discuss that scenario in detail. But in order to understand how Delayed Durability works, we first need to understand how the transaction log works, so we shall start there…

Caching, caching everywhere, nor any drop to drink!

SQL Server is a highly efficient transaction processing platform, and nearly every single operation it performs is first performed in memory. When operations are performed in memory, the need to touch physical resources (such as physical disk IO) is reduced, and reducing the need to touch physical resources means those physical boundaries (and their limitations) have less impact on overall system performance. Cool, right?!

A little bit about the Buffer Cache

All data access, such as inserts, deletes, or updates, is first made in memory, and if you are a Database Administrator, you will (or should!) already be familiar with the Buffer Cache. When data pages are needed to fulfill inserts, updates, or deletes (and assuming they are not already present in the Buffer Cache), the Storage Engine first reads those data pages into the Buffer Cache before performing any requested operations on those pages in memory. Those dirty pages will ultimately be persisted to disk (i.e. to the physical data file(s)) through automatic checkpointing, and these pages may contain changes performed by either committed or uncommitted transactions. In the latter case of uncommitted transactions, the presence of those dirty pages in the data file(s) is why UNDO is a necessary operation upon recovery, and those changes must be rolled back using the transaction log records to do so. Any dirty pages that are the result of committed transaction changes but have not yet been hardened to the physical data file(s) through a CHECKPOINT operation would require the REDO portion of the transaction log (assuming the SQL Server failed) to roll those changes forward. I will refer back to this again when we move on to talk about In-Memory OLTP.
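As a minimal sketch of the mechanics above, the following (assuming a sandbox server where you have VIEW SERVER STATE permission) uses the `sys.dm_os_buffer_descriptors` DMV to count dirty pages sitting in the Buffer Cache, and then issues a manual CHECKPOINT to harden them:

```sql
-- Count dirty (modified-in-memory) pages per database.
-- sys.dm_os_buffer_descriptors exposes one row per page in the Buffer Cache;
-- is_modified = 1 marks pages changed since they were last written to disk.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS dirty_pages
FROM   sys.dm_os_buffer_descriptors
WHERE  is_modified = 1
GROUP  BY DB_NAME(database_id)
ORDER  BY dirty_pages DESC;

-- A manual CHECKPOINT in the current database hardens its dirty pages
-- to the physical data file(s); the count above then drops accordingly.
CHECKPOINT;
```

Note that the pages remain in the Buffer Cache after the CHECKPOINT; they are simply no longer dirty.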

It’s all about the log, about the log, no trouble!

But it is not the hardening of data to the data files that we are focusing on here (it has little relevance to our subject); we are more concerned with transactional durability in SQL Server. There is a common misunderstanding among SQL Server DBAs that when transactions make changes, those changes are written immediately to the log file. SQL Server’s durability guarantees make this misunderstanding understandable, but requiring a physical IO operation upon every single transactional data modification would clearly be a huge potential bottleneck (and dependency) upon transaction log disk performance. The SQL Server architects recognised this challenge, so in a similar way to how data page modifications are first cached, so are the transactional changes. In the case of the transaction log, the “cached” area is provided by in-memory constructs known as log buffers, each of which can store up to 60 KB of transaction log records.
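You can observe the cost of these log buffer flushes indirectly. A rough sketch (assuming VIEW SERVER STATE permission; `file_id` 2 is only the log file by convention in a default single-log-file database):

```sql
-- Cumulative WRITELOG waits: time sessions have spent waiting for log
-- buffers to be flushed (hardened) to the transaction log file.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = N'WRITELOG';

-- Physical write activity against the current database's log file.
SELECT DB_NAME(database_id)  AS database_name,
       num_of_writes,
       num_of_bytes_written
FROM   sys.dm_io_virtual_file_stats(DB_ID(), 2);
```

High WRITELOG wait times relative to overall waits are a classic sign that transaction log disk performance is the bottleneck being described here.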

Now that you know that log buffers exist to batch transaction log records and maximize the efficiency of each physical IO to disk, we must consider when these structures absolutely must be flushed to physical disk and stored within the transaction log in order to still provide the Durability property of ACID that SQL Server adheres to. There are two main situations:

On transactional COMMIT. When a client application receives control back after issuing a successful COMMIT statement, SQL Server must provide durability guarantees about the changes that have occurred within the transaction. All transactional operations must have been written and persisted to the transaction log, therefore the log buffer containing those changes must be flushed on COMMIT to provide this guarantee.

On log buffer full. Strictly speaking, as long as the first rule is adhered to, there is no logical requirement that SQL Server flush the log buffer to disk when it becomes full. However, if you consider that it is paramount for SQL Server never to run out of available log buffers, then it is obvious that the best way to avoid this situation is to hold onto a log buffer only as long as it is needed. Once a log buffer is full, it serves no further purpose in the buffering story; furthermore, given that it will contain at most 60 KB of changes, there is good alignment with the physical write to the transaction log. When the log buffer is full, it makes sense to flush it to disk.
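The COMMIT case above can be seen in the log itself. A sketch, to be run only in a throwaway test database (the table name `dbo.LogDemo` is just an illustrative choice, and `fn_dblog()` is a well-known but undocumented and unsupported function):

```sql
-- Generate a small, fully committed transaction.
CREATE TABLE dbo.LogDemo (id INT IDENTITY PRIMARY KEY, payload CHAR(100));

BEGIN TRANSACTION;
INSERT INTO dbo.LogDemo (payload) VALUES ('hello');
COMMIT TRANSACTION;  -- control returns only after the log buffer is flushed

-- Peek at the log records the transaction generated: the begin, the
-- insert, and the commit record that had to be hardened to disk.
SELECT [Current LSN], Operation, [Transaction ID], [Log Record Length]
FROM   fn_dblog(NULL, NULL)
WHERE  Operation IN (N'LOP_BEGIN_XACT', N'LOP_INSERT_ROWS', N'LOP_COMMIT_XACT');
```

It is the hardening of that LOP_COMMIT_XACT record (and everything before it) that allows SQL Server to hand control back to the client with a durability guarantee.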
The Transaction Log, Delayed Durability, and considerations for its use with In- ...
Now that we have a better understanding of how the transaction log provides transactional durability to SQL Server, and how it delays physical writes to the transaction log disk to improve logging performance, we can look at how Delayed Durability changes the logging landscape, why we might use it, and what we risk by doing so, in the second post in this series.
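As a small taste of what is to come, Delayed Durability is opted into roughly like this (a sketch only; the table name `dbo.SomeTable` is purely illustrative, and you should not enable this anywhere you cannot tolerate losing the tail of the log):

```sql
-- Permit delayed durable transactions at the database level
-- (options are DISABLED, ALLOWED, or FORCED).
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

-- Then opt in per transaction: COMMIT returns to the client before the
-- log buffer is flushed, trading a small window of potential data loss
-- on failure for lower commit latency.
BEGIN TRANSACTION;
INSERT INTO dbo.SomeTable (payload) VALUES ('lazy write');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

In other words, it deliberately relaxes the first flush rule described above, which is exactly the trade-off we will examine next.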
