SQL 2016 – In-Memory OLTP enhancements

January 11, 2016

A bit late in the day, but I am still posting the changes to In-Memory OLTP that are proposed for SQL 2016. Why? In-Memory OLTP in SQL 2014 wasn't production ready in my opinion, but with the upcoming changes it becomes much more compatible with what real-world OLTP databases look like. I have arranged the list in order of what I think are the most useful changes.

Altering in-memory tables and natively compiled procedures is now supported. No surprises here; this was probably the biggest flaw in in-memory tables when they launched. With all the DevOps and agile development happening, it seemed regressive to disallow DDL once an object had been created.
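As a rough sketch (the table, index, and procedure names here are hypothetical), this is the kind of DDL that SQL 2016 now accepts against existing in-memory objects:

```sql
-- Add a column to an existing memory-optimized table (impossible in SQL 2014
-- without dropping and recreating the table).
ALTER TABLE dbo.ShoppingCart
    ADD Quantity INT NOT NULL DEFAULT 1 WITH VALUES;

-- Rebuild a hash index with a new bucket count in place.
ALTER TABLE dbo.ShoppingCart
    ALTER INDEX ix_UserId REBUILD WITH (BUCKET_COUNT = 2000000);
GO

-- Natively compiled procedures can likewise be altered in place.
ALTER PROCEDURE dbo.usp_GetCartCount
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT COUNT(*) AS CartItems FROM dbo.ShoppingCart;
END
GO
```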

PK/FK constraints are now supported on in-memory OLTP tables. This will help enforce proper database design instead of treating it as optional, and it makes in-memory OLTP tables more consistent with the design principles of other tables. Developers can also write cleaner code without having to add extra referential-integrity logic inside the application.
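A minimal sketch of what this looks like, assuming two hypothetical memory-optimized tables:

```sql
CREATE TABLE dbo.Customers (
    CustomerId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Name NVARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

CREATE TABLE dbo.Orders (
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    CustomerId INT NOT NULL,
    -- A foreign key between memory-optimized tables: new in SQL 2016.
    CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerId)
        REFERENCES dbo.Customers (CustomerId)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```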

Up to 2 TB of data can now be stored in RAM. The previous recommended limit was 250 GB, and most companies would agree that this is a good amount of space and works for most tables that need to be ported. However, I always felt it should have supported at least 1 TB from the very beginning; otherwise the overhead of managing what resides in and vacates RAM becomes a burden for the DBA and developer. Now that 2 TB is supported out of the box, we can code more simply, and that's always a good thing.

Additional query constructs added to natively compiled procedures. Natively compiled procedures were pretty limited in what they could do initially, but the changes in SQL 2016 should let developers code more easily, since a lot of functions and features we take for granted in interpreted procedures are now supported in natively compiled procedures as well. For those starting new projects on SQL 2014 this was not a serious issue, but it was a pain for migration projects, since almost every procedure needed some amount of rewriting unless it was simple CRUD. The documentation for natively compiled procedures states that the best performance gains come from procedures with complicated business logic, since that logic is compiled to machine code and therefore runs much faster.
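For instance, constructs such as LEFT JOIN and OR in predicates were rejected by native compilation in SQL 2014; a sketch of a procedure (hypothetical names, assuming the Customers/Orders tables above exist as memory-optimized tables) that should now compile in 2016:

```sql
CREATE PROCEDURE dbo.usp_CustomerOrderSummary
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- LEFT JOIN and OR were not allowed in natively compiled procedures in 2014.
    SELECT c.CustomerId, c.Name, o.OrderId
    FROM dbo.Customers c
    LEFT JOIN dbo.Orders o
        ON o.CustomerId = c.CustomerId
    WHERE c.CustomerId < 100 OR o.OrderId IS NULL;
END
```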

Support for any collation. In SQL 2014 any meaningful column required a BIN2 collation, but now we can use any collation. Combine this with the fact that index columns now support NULLs, and we have a table that actually represents what most companies see in the real world.
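Both changes together might look like this (a hypothetical table for illustration):

```sql
CREATE TABLE dbo.Employees (
    EmployeeId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    -- A case-insensitive collation on an indexed string column:
    -- SQL 2014 would have demanded a *_BIN2 collation here.
    LastName NVARCHAR(100) COLLATE Latin1_General_CI_AS NOT NULL,
    -- A nullable column in an index key: also new in SQL 2016.
    MiddleName NVARCHAR(50) NULL,
    INDEX ix_LastName NONCLUSTERED (LastName),
    INDEX ix_MiddleName NONCLUSTERED (MiddleName)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```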

Multithreaded merge. In SQL 2014 the DBA was given the option of performing a manual merge of checkpoint files in case the system wasn't able to keep up with the amount of DML being performed. In 2016, multiple threads are used to persist data from in-memory tables to the checkpoint files in the filegroup. This should allow a much faster recovery time compared to SQL 2014, since I have consistently noticed that in-memory OLTP tables hold up database recovery in most cases. Since I am currently testing on a VM I am not able to see the real effects of this yet, but I will post more detailed info once I get my hands on the RTM.
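One way to keep an eye on checkpoint file pairs while testing this, assuming the `sys.dm_db_xtp_checkpoint_files` DMV (present in both versions, though its columns differ between 2014 and 2016):

```sql
-- Summarize checkpoint files in the in-memory filegroup by type and state.
SELECT file_type_desc, state_desc, COUNT(*) AS file_count
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc
ORDER BY file_type_desc, state_desc;
```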

For the complete list of changes, see the link below:

https://msdn.microsoft.com/en-us/library/bb510411.aspx#InMemory

