Slow Index creation
Performance problems and slowness are a common scenario for databases in many companies and organizations, and it normally takes time to find the root cause and decide what action to take.
In a recent case the customer already knew what the issue was and asked for some help with the improvement. Let me explain.
Outside of business hours the customer tried to create indexes on newly populated data, mostly huge tables with billions of rows each. The index builds made seemingly little to no progress, and when they checked, they found the index creation jobs were accumulating heavy waits on THROTTLE_LOG_RATE_LOG_STORAGE.
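One quick way to see what active sessions are waiting on is to query sys.dm_exec_requests while the index build is running. This is a minimal sketch, not the exact check the customer ran; the filter on the command type is an assumption, so adjust it to match your workload:

```sql
-- Show active requests and their current wait types
-- (run this while the index creation is in progress)
SELECT r.session_id,
       r.command,
       r.wait_type,
       r.wait_time        AS wait_time_ms,
       r.last_wait_type,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'CREATE INDEX%'      -- assumption: only index builds
   OR r.wait_type LIKE 'THROTTLE_LOG_RATE%';
```

If the wait_type column shows THROTTLE_LOG_RATE_LOG_STORAGE, the builds are being throttled on log write rate rather than on CPU or data file IO.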
As a first step I asked the customer to check the transaction log file for the database. Monitoring log space use with sys.dm_db_log_space_usage is always a good idea in this situation.
This DMV returns information about the amount of log space currently used and indicates when the transaction log needs truncation. See the referral link here.
Here is an example. The following query returns the total free log space in megabytes (MB) available in MyTestDB:
USE MyTestDB;
GO
SELECT (total_log_size_in_bytes - used_log_space_in_bytes)*1.0/1024/1024 AS [free log space in MB]
FROM sys.dm_db_log_space_usage;
Because creating an index can easily reach the throughput limit of either the data or the log files, I also advised the customer that they might need to increase individual file sizes. As Microsoft explains here, if you notice high IO latency on some database file, or you see that IOPS/throughput is reaching the limit, you might improve performance by increasing the file size.
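For example, a log file can be grown with ALTER DATABASE ... MODIFY FILE. This is a hedged sketch: the database name (MyTestDB), the logical file name (MyTestDB_log), and the target size are placeholders, so check sys.database_files for your actual names first:

```sql
-- Find the logical file names and current sizes (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Grow the log file to a larger fixed size
-- (database name, logical name, and size below are placeholders)
ALTER DATABASE MyTestDB
MODIFY FILE (NAME = MyTestDB_log, SIZE = 100GB);
```

On service tiers where per-file IOPS/throughput limits scale with file size, a larger file also means a higher cap.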
You might ask how to find out the IOPS/throughput seen against each database file. Here is the script on GitHub; it gives you all the details you need.
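The linked script is more complete, but the core idea can be sketched with sys.dm_io_virtual_file_stats, which returns cumulative IO counters per database file. The query below is a common pattern for per-file latency and volume, not the exact script:

```sql
-- Cumulative IO and average latency per database file
SELECT DB_NAME(vfs.database_id)             AS database_name,
       mf.name                              AS logical_file_name,
       mf.type_desc,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read    / 1048576   AS mb_read,
       vfs.num_of_bytes_written / 1048576   AS mb_written,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms  / vfs.num_of_reads  END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id     = mf.file_id
ORDER BY mb_written DESC;
```

These counters are cumulative since the database came online, so to estimate current IOPS/throughput you would sample them twice and divide the deltas by the sampling interval.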
In the case I described, the customer said at the end that increasing the log file size helped significantly, and the script also helped them see the maximum IOPS and throughput.