Monthly Archives: September 2018

How to reset password for all users in WordPress site within MYSQL databases

A typical installation of WordPress usually has a MySQL database as the backend. Recently I found a need to reset the password for multiple users in one go; here is how it was achieved.

First, install MySQL Workbench on the required machine. MySQL Workbench can be downloaded here.

Identify the database and the connection string details from the wp-config.php file.

Connect to the database from within MySQL Workbench as listed here.

Once connected, look for the table wp_users in the database.

In a new query window, execute the below query to reset the password for a single row or multiple rows as required.


UPDATE wordpress746.wp_users
SET user_pass = MD5('NewPassword')
WHERE LENGTH(user_login) = 4;

For example, in this case I reset the password for all users who have a four-letter username.
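The same pattern works for a single account. A minimal sketch, where the username 'admin' and the password value are placeholders, not values from the original post:

```sql
-- Reset one account; 'admin' and 'NewPassword' are placeholder values
UPDATE wordpress746.wp_users
SET user_pass = MD5('NewPassword')
WHERE user_login = 'admin';
```

Note that WordPress accepts the MD5 hash only as a transitional state: on the user's next login it transparently re-hashes the password with its stronger phpass scheme.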

And that's it!

SQL 2019:- sp_estimate_data_compression_savings

This procedure allows us to estimate the potential space savings on a table before we apply a compression algorithm. It's not exactly a new feature of SQL 2019, because it was also available in SQL 2017; however, it now provides information on Columnstore indexes as well. The reason this is important is that we often apply compression to very large tables that are often partitioned. This is done to improve disk utilization as well as to ensure that historical data doesn't consume excessive space. However, it's difficult to estimate how much space would be saved when we utilize the massive compression afforded by Columnstore indexes. While there are other ways to get an estimate, this procedure provides a fairly simple and easy way to do the job and, as a result, makes database administrators' lives easier when capacity planning.

Here are all the different ways you can call this procedure; it follows the typical schema, object, index, partition, compression type parameter format.

sp_estimate_data_compression_savings ‘dbo’
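The calls above were cut off after the schema argument, so as a sketch, the full parameter list looks like this (the table name and the index/partition values below are illustrative assumptions, not the ones from the original post):

```sql
-- Parameter order: @schema_name, @object_name, @index_id,
--                  @partition_number, @data_compression
-- 'FactSales' is a placeholder table name
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'FactSales',
    @index_id         = NULL,   -- NULL = all indexes
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'COLUMNSTORE';
```

In SQL 2019 the @data_compression argument accepts 'COLUMNSTORE' and 'COLUMNSTORE_ARCHIVE' alongside the existing 'NONE', 'ROW' and 'PAGE' options.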

The below screenshot shows the results, but further down I have also included an Excel calculation of the potential savings as well. One thing I would have liked to see in the output was the row count, since that would provide an easier way to figure out per-row savings too.



As can be seen below, Columnstore indexes do provide the most compression, by as much as 85%.

SQL 2019:- sys.dm_db_page_info


So we have a new DMF that allows us to view details for a page, something I do very often during trainings using the DBCC PAGE utility. I figured this DMF would be pretty useful, since it can be queried and joined with sys.dm_db_database_page_allocations to provide some interesting insight on the underlying pages, mapped directly to table-level allocation details. But after trying it out, I must say I am a bit disappointed, since it doesn't really provide the actual page internals; it just provides the page header details. To understand this, let's look at how I use the function.

When explaining indexes (you can see a video of it here), I start off by looking for pages that have been allocated to the table I am analyzing, using the query below.

select database_id, index_id, allocation_unit_type_desc, allocated_page_page_id,
       is_iam_page, is_allocated, page_free_space_percent, page_type_desc
from sys.dm_db_database_page_allocations(db_id(), object_id('dbo.MyTable'), NULL, NULL, 'DETAILED')
-- 'dbo.MyTable' stands in for the table being analyzed



The next step is to identify the IAM page, a data page and an index page and show participants what each of these pages looks like internally. E.g. in the above screenshot the IAM page is 228, the index page is 160 and the data page is 320.

So let's look at the IAM page first:

DBCC TRACEON(3604)
DBCC PAGE (5, 1, 228, 3) -- database id 5, file 1, page 228 (the IAM page above), output level 3


This gives me an idea of which pages have been allocated to this table, as we can see from above. But here is the same IAM page queried using the DMF dm_db_page_info:
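For reference, the DMF call for that page would look something like the sketch below; the database id 5, file 1 and page 228 values are taken from the DBCC example earlier, and 'DETAILED' returns the full set of header columns:

```sql
-- sys.dm_db_page_info(DatabaseId, FileId, PageId, Mode)
SELECT *
FROM sys.dm_db_page_info(5, 1, 228, 'DETAILED');
```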


So without page-level details, this DMF just shows me basic information about my IAM page. But let's move on to the data page; here is the output for data page 321 using the DMF:

And using the DBCC utility; notice how I can actually see the contents of the page and the values for the columns in the table (towards the bottom).

This is very useful when explaining things like encryption or variable datatypes. But my biggest complaint is when it comes to index pages. In the below screenshot I am showing the output from the DMF:

As opposed to the DBCC utility:

The DBCC command provides me with much more in-depth information about the contents of the index and how the B-tree looks based on the values of the key columns, which is something I use extensively to explain how queries traverse the B-tree to answer predicates. In the end, just because of this one aspect, it looks like I can't really use the DMF dm_db_page_info for my trainings.

So where can I use the DMF? At the moment I don't see any particular use case where I might need it on a regular basis as part of troubleshooting SQL Server or for trainings. If something comes up, I will post it here.









SQL 2019:- Approx_count_distinct

A function that might prove useful when querying very large datasets; it provides a count of distinct values for any column in the database except the text, ntext and image data types. Why would this be useful? In many cases we see that an approximate count of the rows is sufficient and we don't really need an exact count. However, the default behavior is to provide exact counts, mostly because of the C in ACID. So, as its name suggests, it provides an approximate count of the distinct values, which raises some questions that I have listed below and tried to answer.

How to use it?

Using the function is fairly straightforward, as shown below:

select APPROX_COUNT_DISTINCT(some_column) -- 'some_column' stands in for the column being counted
from [dbo].[counts]


How Approximate is it?

In this case the query is executed against a table with 979,265 rows. The result is



The actual row count is



So we see it's off by 9 rows, which I guess is not bad.

Let’s try something more unique

APPROX_COUNT_DISTINCT looked like it was always estimating on the higher side, but that doesn't seem to be the case, as can be seen here:

In each case I was able to calculate the error at less than 1.1%, while the official documentation suggests 2%.

Is it faster than a Normal Count Distinct?

It is hard to say if it will always give better performance, because for most runs they seem neck and neck. For example, let's examine the performance where we start off with a cold cache (no data in memory).
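A sketch of how such a cold-vs-warm comparison can be run; the column name is a placeholder, and DBCC DROPCLEANBUFFERS should only be used on a test server:

```sql
-- Flush clean pages from the buffer pool to force a cold cache (test systems only)
DBCC DROPCLEANBUFFERS;

SET STATISTICS IO, TIME ON;

-- Approximate distinct count; 'some_column' is a placeholder
SELECT APPROX_COUNT_DISTINCT(some_column) FROM [dbo].[counts];

-- Exact distinct count for comparison
SELECT COUNT(DISTINCT some_column) FROM [dbo].[counts];

SET STATISTICS IO, TIME OFF;
```

Running the same batch a second time then gives the warm-cache numbers, since the first run leaves the pages in memory.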

Notice the read-ahead reads, which indicate some kind of disk activity going on in the background.

With a warm cache we see that the query is much faster, but it is heavier on the CPU.

Now let’s try the same with a Count Distinct Query with cold cache

Here again we see that there are read-aheads happening, which is to be expected since there are no indexes on the columns being queried. Performance-wise, we see the count distinct is both accurate and faster. Let's see what happens with a warm cache.

With a warm cache it executes just as fast as an APPROX_COUNT_DISTINCT, but we see it is slightly heavier on the CPU comparatively. A look at the execution plan shows that while the count distinct does more operations, there isn't much to separate them in terms of overall cost.


From where does it get its number?

I figured it was fast because it used the statistics, so I tried an experiment. Notice the counts below; the value for APPROX_COUNT_DISTINCT is 611.

We see that the statistics show the number as 612, not 611. So I figured the stats aren't what is used to perform this operation.
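The distinct-value figure held by the statistics can be inspected with DBCC SHOW_STATISTICS; the density vector's "All density" value is 1 divided by the number of distinct values for the leading column. The table name is from this post, but the statistics object name below is a placeholder:

```sql
-- 'IX_counts_some_column' is a placeholder statistics/index name
DBCC SHOW_STATISTICS ('dbo.counts', 'IX_counts_some_column') WITH DENSITY_VECTOR;
-- All density = 1 / (number of distinct values) for the leading column
```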

Next I updated 500K records to the above value and disabled auto update stats. In spite of the stats not being updated, I see that the counts reflect the actual current state and not the values shown in the stats.

So at this moment I don't have an answer for where it gets its numbers from, but it doesn't seem like it is coming from the stats that have already been calculated. If it is calculating the number on the fly, I expect the IO performance to get worse as we query larger sets of data.

I tried updating a bunch of rows in the table to simulate a locking/blocking scenario to see if APPROX_COUNT_DISTINCT is affected by locks on the underlying data pages, and it is. This is to be expected considering the read-ahead reads we saw earlier. In summary, there is a disk-level component to the way it works; what exactly, I don't know. Yet!!
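The blocking test can be reproduced along these lines using two sessions; the column name is again a placeholder:

```sql
-- Session 1: hold exclusive locks on a range of rows
BEGIN TRAN;
UPDATE [dbo].[counts]
SET some_column = some_column        -- 'some_column' is a placeholder
WHERE some_column IS NOT NULL;
-- leave the transaction open

-- Session 2: under the default READ COMMITTED isolation level,
-- this blocks until session 1 commits or rolls back
SELECT APPROX_COUNT_DISTINCT(some_column) FROM [dbo].[counts];

-- Session 1, when done:
-- ROLLBACK;
```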

Is there a way it can be confused?


I had tried disabling auto update stats and deleting/updating records from the table, but it was still able to provide fairly reasonable counts, so I assume it isn't using prebuilt statistics to arrive at the numbers. Adding an index didn't improve the APPROX_COUNT_DISTINCT value either, though creating the index did result in an index scan, so I have to assume the index is being read in some way.

Should I use it for a KPI or an average calculation?

I would not use it for a KPI, because with discrete thresholds the KPI might end up showing Green when it is actually Yellow, or vice versa. Due to the inherent error in the number, I would also not use it for any further calculations, because the errors will add up with each step and the final result might be way off the mark.


SSMS 18.X error when creating NonClustered index

This seems to be a bug in SSMS, as the index can be created using T-SQL just fine. Here is the full error message.

TITLE: Microsoft SQL Server Management Studio



Exception has been thrown by the target of an invocation. (mscorlib)





To accomplish this action, set property PartitionScheme. (Microsoft.SqlServer.Smo)


For help, click:
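Since the error only appears in the dialog, the T-SQL route is the practical workaround. A minimal sketch for a partitioned table; every name here (table, column, index and partition scheme) is a placeholder:

```sql
-- All object names below are placeholders
CREATE NONCLUSTERED INDEX IX_MyTable_SomeColumn
ON dbo.MyTable (SomeColumn)
ON MyPartitionScheme (PartitionColumn);  -- place the index on the partition scheme explicitly
```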







Installing SQL 2019 CTP 2.0

It's finally out and it's going to be awesome, but there is only one way to find out. So here is the very first step: installing SQL Server.

The trial version, CTP 2.0, can be downloaded from the below link. You have four different installers.

  • SQL for Windows
  • SQL for Linux
  • SQL in Docker container images
  • SQL for big data in Kubernetes


In this post we are going to start with the very first one, SQL for Windows. You can run the online installer or download the media to install on other devices.

As always, the first screen is the bootstrap, as shown below.

The first link allows us to install the SQL DB engine and takes us to the edition screen below.

Next is the license screen

Next are our pre-install checks.

A bit surprised to see Windows 2008 R2 and Windows 7 in the list here; I am wondering if official support will be offered for these OSes at the actual launch.
Next we look for updates, which in my case I have bypassed for obvious reasons.

The next step is install rules

In feature selection we see a new addition: a Java connector for HDFS data sources.

Not much else has changed as far as feature selection is concerned, but we are still far off from the actual launch.

Complete screenshot below

Next we have feature-specific rules. At this point I noticed that certain screens were being skipped even when the Next button was pressed only once; luckily for us there isn't anything really important in these so far.

The above KB article is for installing on the Windows 8 OS and contains critical updates and security patches; I doubt you would see the same in a production environment.

The next step is instance configuration. As you can see, I have a number of instances installed, so I am giving my SQL 2019 instance another name.

Next we have the service account details; I am using the defaults. Considering the benefits given by the volume maintenance permission, I think it should be enabled by default.

Not much has changed collation wise either.

Leaving everything at default for now. Next is the screen where security settings are implemented, and here again we don't see much change in the way things are done. I was wondering if this screen needs to be revamped to allow backup encryption keys, secure enclaves, DAC and contained database features by default; basically the one-time security features we end up enabling anyway.



The above screen is for file paths, and I feel maybe it is time we added multiple file paths by default for data and log files. I see many customers who have implemented large installations of SQL Server, and the benefit of multiple drives is well understood; by allowing us to actually do this during install, things might get done proactively rather than after the tables become huge. I am thinking something like below.

Next we have the TempDB configuration; not much has changed in this screen, as expected.

And the last screen is FileStream access; again, not much has changed here either.

Next we have the confirmation screen, and after hitting Install we are on our way.

As expected, there isn't much difference between the install screens, but that is because this is still a preview and the real changes will come later. For the most part the real changes are within the database engine, so I doubt we will see any major changes as far as the screens are concerned.

A minor problem we face after install is that older SSMS versions will not work with the new version of SQL, as expected. A preview version of SSMS 18.X can be downloaded here.

Please note this doesn't upgrade older versions of SSMS, and my laptop is getting cluttered with all these installs; I'm guessing MS needs to invest some time to find a better way to handle these side-by-side versions.