
Software Development News: .NET, Java, PHP, Ruby, Agile, Databases, SOA, JavaScript, Open Source

Methods & Tools


Database

Planning an efficient upgrade from SQL Server 2005: Step 2 of 3

Step 2: Target destinations for each application and workload

As you know, the end-of-support deadline for SQL Server 2005 is April 12, 2016. In this three-part blog series, we are discussing important steps to take as you plan your upgrade. In our last post, we walked through the importance of discovering what’s running in your IT environment before you begin the actual upgrade process. Running the Microsoft Assessment and Planning (MAP) Toolkit can really help get you on track for developing an efficient upgrade plan.

After you’ve collected data on what’s running in your IT environment, the next step to think about is where you’ll run your upgraded databases. When it comes to modernizing your data platform, you have a few options for where to move applications and workloads. You can upgrade your on-premises servers, implement virtual machines or move to the cloud — most likely you’ll choose some combination of the three.

We’ve put together the SQL Server Upgrade Target Decision Tool to help you think through this process, compare your options, and identify upgrade targets based on your individual business needs.

Consolidation and virtualization

Upgrading is the perfect time to look for ways to gain efficiency and manage performance — in this case, we’re talking about consolidation and virtualization. The following diagram highlights how you might approach consolidating and virtualizing your applications and workloads as well as taking Web applications to the cloud with Azure SQL Database.

You can move certain applications and processes to upgraded on-premises servers to get the reliable performance you need for mission-critical tasks.

If you choose to virtualize using Windows Server Hyper-V or Azure VMs, one of the great benefits of virtualization is that you can run multiple applications or workloads on the same server while each maintains its own separate environment. Another benefit is that Azure VMs help further reduce total cost of ownership and increase availability. Because Azure VMs are hosted in the cloud, you can leverage the high availability of Microsoft’s cloud infrastructure while reducing capital expense for purchasing your own hardware and related IT costs.

Another option is to consolidate multiple databases or instances using Windows Server and Hyper-V to reduce IT sprawl, increase efficiency and free up resources for new tasks.

Moving your Web applications to the cloud and reducing administration is possible using Azure SQL Database, which is a database-as-a-service in the cloud. With this option, you get the benefits of a cloud infrastructure with enterprise-grade security and scale, high availability, superior performance and near-zero maintenance.

Ready for increased ROI?

According to a Microsoft-commissioned Forrester Consulting study [1], analysis based on information from the interviewed businesses showed a 9.5-month payback period for migrating to SQL Server 2012 or 2014. Check out the SQL Server Upgrade Target Decision Tool to compare target impacts and specific upgrade features.

For more detailed information to help you plan your custom data platform upgrade, take a look at the Directions on Microsoft report, “Migrating From SQL Server 2005.”

In the final blog in this series, we’ll talk through some common upgrade strategies. Come back here to learn ways to approach the upgrade process effectively.

[1] Microsoft-commissioned Forrester Consulting study: The Total Economic Impact™ of Microsoft SQL Server, July 2014

Categories: Database

Enhanced Always On Availability Groups in SQL Server 2016

Always On Availability Groups

Always On Availability Groups are a fundamental component of the availability story for SQL Server, and provide a robust disaster recovery solution as well. Availability Groups were first introduced in SQL Server 2012, replacing the prior Database Mirroring technologies. They enable a DBA to configure two or more SQL Server instances to host replicas of a set of databases, which can be kept exactly in sync (synchronous replication, giving zero-data-loss guarantees) or nearly in sync (asynchronous replication, which is optimal for geographically remote replicas). This technology has become the standard that the majority of critical production SQL Server instances leverage.

Enhancements in SQL Server 2016 Availability Groups

SQL Server 2016 makes some significant improvements to the Always On Availability Groups feature set, including:

  • Round-robin load balancing in readable secondaries
  • Increased number of auto-failover targets
  • Enhanced log replication throughput and redo speed
  • Support for group-managed service accounts
  • Support for Distributed Transactions (DTC)
  • Basic HA in Standard edition
  • Direct seeding of new database replicas

These each deserve their own blog post, and I hope to get to them all. However, there is one feature area that I’d like to focus on today because it opens up some interesting scenarios, and has been requested for a very long time.

Domain Independent Availability Groups

When Always On Availability Groups was introduced in SQL Server 2012, it represented a substantial advancement in the HA story over the previous technology, Database Mirroring (DBM). By adopting certain layers of Windows Server Failover Cluster (WSFC) functionality as infrastructure, the new solution is able to be much more performant and to scale much better than the original DBM solution. Where DBM was limited to only two replicas, the Principal and the Mirror, Availability Groups could initially support five replicas (up to eight as of SQL Server 2014). Perhaps more significantly, where DBM established a relationship between one database and a copy of that database, Availability Groups, as the name implies, establish a relationship between a set or group of databases and replicas of that group on one or more other instances. This means that for applications which access more than one database, such as a SharePoint farm, all of the databases in the group can now move as a unit. We no longer need to worry about implementing complex scripting solutions to cause multiple databases to move together.

One additional restriction that came with Availability Groups was that all nodes in the Availability Group must reside in the same Active Directory domain.

While this works well for a majority of our customers, there are some deployment patterns which cannot be met with current Always On Availability Groups/Windows Server Failover Cluster solutions:

  • Some corporate environments have multiple domains that cannot be merged, but would benefit from spanning an Availability Group across them. These domains may or may not have trust relationships between them, but there is still a need for servers in them to host DR replicas.
  • Some customers or specific installations are not in the context of Active Directory domains at all.

SQL Server and Windows Server Collaboration

To relieve this situation, SQL Server had two options: we could (once again) re-implement the functionality that we currently get from WSFC, or we could collaborate with the Windows Server Failover Cluster team to eliminate the single-domain restriction. The two teams met and worked for over a year to understand the technical restrictions that led to this limitation, to envision what a solution that did not rely on AD for authentication and security would look like, and to determine how it could be made usable.

The result is that with the release of Windows Server 2016, Windows Server Failover Clusters will no longer require that all nodes in a cluster reside in the same domain. In fact, they will no longer require the nodes to live in any domain at all. You can now form a WSFC cluster from machines which are in workgroups. Because of this change, SQL Server 2016 is now able to deploy Always On Availability Groups in environments with:

  • All nodes in a single domain
  • Nodes in multiple domains with full trust
  • Nodes in multiple domains with no trust
  • Nodes in no domain at all

This opens up a great number of new scenarios for customers, and removes previous blockers that prevented migration from the deprecated Database Mirroring technology to Always On Availability Groups.

Installing and Configuring an Always On Availability Group without Domains

Although many customers will see this change as a great step forward in openness and flexibility, the AD infrastructure does make many operations simpler, so in order to operate without it we must compensate for its absence.

Setting up Windows Server Failover Clustering

In order to set up Windows Server Failover Clustering in an environment without AD security, we must provide a means to securely configure the cluster and to secure communications within the cluster. As of Windows Server 2016, WSFC creates self-signed certificates and uses them to secure intra-cluster authentication. In environments with AD security, that security infrastructure is used to establish the secure communication links; without AD, we must take an extra step. To establish the cluster in environments without AD, WSFC requires you to set up synchronized administrator accounts on all proposed cluster nodes. That means there must be an account with the same name and the same password, in the Administrators group, on each node in the cluster. You must also be able to resolve the hostnames of all nodes in the cluster so that they can find each other.

As of now, WSFC cannot be managed using the UI in environments without AD security, so you will need to form and manipulate the cluster using PowerShell cmdlets. Fortunately, the minimum set of commands for our purposes is very simple.

PS C:\> New-Cluster -Name "MyCluster" -Nodes Node1,Node2,Node3,Node4 -AdministrativeAccessPoint DNS

That’s it! It really is that simple once the basics are set up.

You can find more information on Domainless clusters from the Windows Cluster team on the Failover Clustering and Network Load Balancing team blog.

Configuring an Always On Availability Group

In environments without AD security, we must secure the database mirroring endpoints in the same way they were secured when Database Mirroring was used without domains. This involves a number of steps and can be cumbersome, especially when there are several nodes in the AG. To make this easier, I’ve created a pair of scripts to help out. The first one, CreateEndpointCert, creates a certificate based on a password and backs it up to a network share that all machines in the AG have access to. To facilitate this, I recommend that you configure the SQL Server service to run as a user account which is synchronized in the same way as the admin account used to set up the cluster. This account need not be a machine administrator, but it allows a network share to be set up with write access for this account. The script then sets up the mirroring endpoint to be authenticated using this certificate. This means that when a connection attempt is made via the endpoint to another machine, that machine will be presented with the certificate as credentials for the login attempt.

The second script, InstallEndpointCert, takes as parameters the remote computer whose certificate is to be installed on this instance, the share on which the certificates were backed up, and a strong password to be used for a local login.

The script will restore the remote certificate, and create a login based on that certificate. It will then grant that user and login access to the instance. Now, when the endpoint from the other system attempts to connect, and presents the certificate, the local node will recognize that certificate as being linked to the local user, and grant the connection based upon that.

So, the total sequence looks like the following (each node creates its own certificate, then installs the certificates of the other nodes):

Node A

EXEC CreateEndpointCert '\\NodeB\MyShare', '1R3@llyStr0ngP@ssw0rd!'
EXEC InstallEndpointCert 'NodeB', '\\NodeB\MyShare', 'D1ff3rentStr0ngP@ssw0rd!'
EXEC InstallEndpointCert 'NodeC', '\\NodeB\MyShare', 'D1ff3rentStr0ngP@ssw0rd!'

Node B

EXEC CreateEndpointCert '\\NodeB\MyShare', '1R3@llyStr0ngP@ssw0rd!'
EXEC InstallEndpointCert 'NodeA', '\\NodeB\MyShare', 'D1ff3rentStr0ngP@ssw0rd!'
EXEC InstallEndpointCert 'NodeC', '\\NodeB\MyShare', 'D1ff3rentStr0ngP@ssw0rd!'

Node C

EXEC CreateEndpointCert '\\NodeB\MyShare', '1R3@llyStr0ngP@ssw0rd!'
EXEC InstallEndpointCert 'NodeA', '\\NodeB\MyShare', 'D1ff3rentStr0ngP@ssw0rd!'
EXEC InstallEndpointCert 'NodeB', '\\NodeB\MyShare', 'D1ff3rentStr0ngP@ssw0rd!'

Finishing the Setup

If you have set up the instances with synchronized service accounts, from this point on you can create and manipulate the Availability Group much like any other. You can script the creation, or use the New Availability Group wizard to create the AG. You manipulate it like any other, and monitor it using the dashboard, DMVs or other SQL tools.
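
For example, a minimal sketch of a scripted creation (assuming hypothetical node names NodeA and NodeB, a database named MyDatabase, and the default mirroring endpoint port 5022 used by the scripts below) might look like this:

CREATE AVAILABILITY GROUP [MyAG]
FOR DATABASE [MyDatabase]
REPLICA ON
    N'NodeA' WITH (ENDPOINT_URL = N'TCP://NodeA:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = AUTOMATIC),
    N'NodeB' WITH (ENDPOINT_URL = N'TCP://NodeB:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = AUTOMATIC);

In a domainless environment, the ENDPOINT_URL values must use names (or IP addresses) that every node can resolve, matching the hostname resolution requirement noted above.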

Scripts

CreateEndpointCert

--------------------------------------------------------------------------------------------------
-- This procedure automates the creation of a local certificate and the endpoints required for a domainless AG.
-- Parameters are the strong password for the cert, and the location of a share which receives the backup of the cert.
-- The share should be accessible to all nodes in the AG, as they will need to read the certs for each other.
-- The procedure also creates the endpoint based upon the newly created cert.
--
-- EXEC CreateEndpointCert '\\Myserver\Myshare', '1R3@llyStr0ngP@ssw0rd!'
---------------------------------------------------------------------------------------------------


CREATE PROCEDURE CreateEndpointCert
 @ShareName SYSNAME ,
 @StrongPassword SYSNAME
AS BEGIN
 
 --This must be executed in the context of Master
 IF (DB_NAME() <> 'master')
 BEGIN
  PRINT N'This SP must be executed in master.  USE master and then retry.'
  RETURN (-1)
 END

    DECLARE @DynamicSQL varchar(1000);
    DECLARE @CompName varchar(250);
 DECLARE @HasMasterKey INT;
    SELECT @CompName = CONVERT(SysName, SERVERPROPERTY('MachineName'));

 -- Only create a master key if it doesn't already exist

 SELECT @HasMasterKey = is_master_key_encrypted_by_server from sys.databases where name = 'master'
 IF (@HasMasterKey = 0)
 BEGIN
 --Create a MASTER KEY to encrypt the certificate.
  SET @DynamicSQL = CONCAT('CREATE MASTER KEY ENCRYPTION BY PASSWORD = ' ,  QUOTENAME(@StrongPassword, ''''));
  EXEC (@DynamicSQL)
 END

 --Create the certificate to authenticate the endpoint
    SET @DynamicSQL = CONCAT('CREATE CERTIFICATE ', QUOTENAME(@CompName + '-Cert'), ' WITH SUBJECT = ', QUOTENAME(@CompName, '''')) ;
    EXEC (@DynamicSQL);

    --Create the database mirroring endpoint authenticated by the certificate.
    SET @DynamicSQL =
        CONCAT('CREATE ENDPOINT Endpoint_Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
            FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE ',QUOTENAME(@CompName + '-Cert'), ' , ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL)');
    EXEC (@DynamicSQL);

    --Back up the certificate to a common network share for import into other nodes in the cluster
    SET @DynamicSQL = CONCAT('BACKUP CERTIFICATE ',QUOTENAME(@CompName + '-Cert'),' To FILE = ', QUOTENAME( @ShareName + '\SQL-' + @CompName + '.cer', ''''));
    EXEC (@DynamicSQL);
END
GO

InstallEndpointCert

-------------------------------------------------------------------------------------
--  This procedure assumes that a certificate has been created on another node in the AG, and backed up to a common network share.
--  Parameters:
--    @CompName - The name of the computer whose certificate needs to be installed here.  i.e. the other replica that this node needs to communicate with.
--    @ShareName - A common network share to which certificates were backed up from each machine in the cluster/AG.
--    @StrongPassword - A strong password to be used for the login created to log in on behalf of the endpoint on the other node.
--
--  This procedure assumes that each node has run CreateEndpointCert and that all cert backup files reside on the share pointed to by the second parameter.
--  The procedure creates a login and a user for the remote machine, and then creates a certificate to authorize the user when the certificate is used as authentication from the remote endpoint.
---------------------------------------------------------------------------------------

CREATE PROCEDURE InstallEndpointCert
    @CompName SYSNAME,
 @ShareName SYSNAME,
 @StrongPassword SYSNAME
AS BEGIN
    DECLARE @DynamicSQL varchar(1000);
    DECLARE @MyCompName varchar(250);
    SELECT @MyCompName = CONVERT(SysName, SERVERPROPERTY('MachineName'));
  --Don't need to create LOGINs for the local system
    IF (@MyCompName <> @CompName)
    BEGIN
        SET @DynamicSQL = CONCAT('CREATE LOGIN ', QUOTENAME (@CompName + '-Login'), ' WITH PASSWORD= ', QUOTENAME( @StrongPassword, ''''));
        EXEC (@DynamicSQL);

        SET @DynamicSQL = CONCAT('CREATE USER ', QUOTENAME( @CompName + '-User'), ' FOR LOGIN ', QUOTENAME(@CompName + '-Login'));
        EXEC (@DynamicSQL);

        SET @DynamicSQL = CONCAT('CREATE CERTIFICATE ', QUOTENAME(@CompName +'-Cert'), ' AUTHORIZATION ', QUOTENAME(@CompName +'-User'), ' FROM FILE = ', QUOTENAME(@ShareName + '\SQL-' + @CompName + '.cer' , ''''));
        EXEC (@DynamicSQL);

        SET @DynamicSQL = CONCAT('GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO ', QUOTENAME(@CompName +'-Login'));
        EXEC (@DynamicSQL);
 END
END
GO

See the other posts in the SQL Server 2016 blogging series.

Categories: Database

Generate Random Passwords

Database Journal News - Mon, 12/14/2015 - 18:50

Use this script to generate a random password.
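
The linked script is not reproduced here, but as a rough illustration of the idea (a minimal sketch only, not the article’s script), a T-SQL approach might draw cryptographically random bytes and map them onto a character set:

-- Build a 16-character random password from a restricted character set.
-- CRYPT_GEN_RANDOM returns cryptographically random bytes.
DECLARE @chars varchar(64) = 'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789!@#$%';
DECLARE @pwd   varchar(64) = '';
DECLARE @i     int = 1;
WHILE @i <= 16
BEGIN
    SET @pwd += SUBSTRING(@chars, 1 + CAST(CRYPT_GEN_RANDOM(2) AS int) % LEN(@chars), 1);
    SET @i += 1;
END;
SELECT @pwd AS RandomPassword;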

Categories: Database

Watch the PASS Keynote Video: “Accelerating Your Business with a Modern Data Strategy”

The annual PASS Summit provides an opportunity for the SQL Server community to meet, share expertise, and gain firsthand insights into advances in the Microsoft data platform technology directly from Microsoft executives. The fall 2015 PASS Summit gave attendees the opportunity to learn about the upcoming release of SQL Server 2016 when Joseph Sirosh, corporate vice president of the Microsoft Data Group, delivered the Day 1 keynote, “SQL Server 2016: Accelerating Your Business with a Modern Data Strategy.”

Now you can watch that keynote and find out about SQL Server 2016 and its unmatched innovation across on-premises and the cloud to help you turn data into intelligent action. Some of the exciting announcements that Sirosh discloses include the availability of SQL Server 2016 Community Technology Preview (CTP) 3.0 (download the preview or use a virtual machine in Microsoft Azure) and the innovations it offers. Sirosh provides fascinating insights achieved with advanced data analytics and explains how SQL Server and Cortana Analytics Suite can bring actionable big data and operational knowledge to every organization. With examples from SQL Server customer organizations, Sirosh demonstrates how companies can benefit from new technologies such as Always Encrypted security, In-Memory OLTP and Operational Analytics, as well as new hybrid capabilities with Stretch Database.

Watch the video today and find out how you get all the built-in advantages of industry-leading mission-critical OLTP, unparalleled security, the highest performing data warehouse, end-to-end mobile BI on any device, and in-database advanced analytics with SQL Server 2016.

Share the excitement of the PASS community even if you weren’t able to attend the Summit. You’ll get the inside scoop on this new release that’s packed with innovation and valuable new capabilities—and it’s all built in. Get started now and accelerate your business with SQL Server 2016 and a modern data strategy.

Categories: Database

Top 10 MySQL Reporting Tools

Database Journal News - Mon, 12/14/2015 - 09:01

Both management and various departments rely on data reports to help guide their business decisions.  Rob Gravelle presents ten open source and commercial report generation tools for MySQL and other relational databases.

Categories: Database

Configuring SSL for IBM Data Server Driver for ODBC and CLI

IBM - DB2 and Informix Articles - Fri, 12/11/2015 - 06:00
The DB2 database system supports Transport Layer Security (TLS) and Secure Sockets Layer (SSL), which means that a DB2 client application that also supports SSL can connect to a DB2 database by using an SSL-protected socket. The TLS/SSL connection in the Call Level Interface (CLI) can be established by using server authentication or client authentication. This article discusses how a TLS/SSL connection can be configured on client and server for the different types of authentication and talks about how the TLS/SSL connection process has been simplified.
Categories: Database

Speeding up Transactions with In-Memory OLTP in SQL Server 2016 and Azure SQL Database

In-Memory OLTP, a performance feature in SQL Server since 2014, allows you to speed up transactional workloads up to 30X. SQL Server 2016 and Azure SQL DB have In-Memory OLTP in public preview, and contain many improvements to In-Memory OLTP, making it drastically easier to adopt in an existing application, and allowing many other applications to benefit from the performance improvements.

Besides speeding up transactional workloads (up to 30X), In-Memory OLTP can also be beneficial in a number of other ways, such as ETL and transient data (think temp table) scenarios. For analytics query workloads, we recommend using In-Memory Analytics (Columnstore). For mixed workloads, we recommend combining the technologies to gain real-time insight based on operational data.

In-Memory OLTP can also potentially simplify your application architecture. For scaled-out implementations, you can typically reduce the number of shards, or potentially even consolidate all shards into one machine with SQL Server 2016. With SQL Azure DB, you can improve performance without increasing the database service tier. For example, one customer was able to consolidate a deployment of 18 servers into one server using In-Memory OLTP, resulting in cost savings of up to $100K per year.

If you are currently using a caching layer in your architecture because the database could not keep up with the user requests – either there was not enough CPU, or there were locking/blocking issues – you can potentially eliminate this caching layer by using In-Memory OLTP, which uses less CPU to do the same work, and removes locking/blocking issues through its lock-free architecture.

Performance benefits

The main performance benefits of In-Memory OLTP derive from more efficient CPU utilization and the removal of any locks/latches. Memory-optimized data access and native compilation reduce the cost of data and transaction processing, allowing a higher throughput using the same set of resources – a given CPU can process more transactions per second. The lock-free transaction processing engine allows scaling linearly with the number of cores in the system, meaning transactions executing concurrently in the system performing data access do not get in each other’s way, so you can get the most out of the cores you have in the system. Lock-free in this case means doing away with any type of locks, including latches and spinlocks. See the recent post, Concurrency Control in the In-Memory OLTP Engine for more details.

Now, what does all that mean in terms of actual performance? The following numbers are based on using pre-release bits of SQL Server 2016.

In the lab, using a modern four-socket server with the ASP.NET session state scenario, we have seen a sustained throughput of 1.2 million batch requests per second, with four kilobytes per request. That is 4.8 gigabytes of data processed per second, on a single server! This workload is purely in-memory, using SCHEMA_ONLY tables, which are not logged and not persisted to disk. For durable tables, which are fully logged, using an order processing workload similar to some of the classical database benchmarks, we are seeing a sustained throughput of close to one gigabyte per second of log generation, again on a four-socket server.

Even for lower-end machines you can expect solid performance improvement from In-Memory OLTP. On my modest consumer-grade laptop that has two physical cores, I can achieve 17,000 transactions per second with the sample workload for In-Memory OLTP using AdventureWorks2016CTP3 — a 10X performance improvement compared with traditional tables. To run this sample workload on your own hardware, download the AdventureWorks2016CTP3 database and sample scripts and follow the instructions in the zip file.

Does this mean you can expect performance benefits for every workload? Well, no. For analytics queries, which scan and/or do aggregation on large data sets, you’ll want to use Columnstore indexes. You can actually add a Columnstore index to your memory-optimized table! For transactional workloads, how much gain you will see from using In-Memory OLTP in your application largely depends on where the performance bottleneck is. In-Memory OLTP really optimizes the time spent in data processing within the database engine. If, for a given user request, most of the time is being spent in the mid-tier app, or if most of the time of a given transaction is being spent waiting for the log IO device, you may not see as much performance gain. Good indicators of whether In-Memory OLTP is going to benefit a transactional workload are high CPU utilization and/or a lot of lock and latch waits.

Relieving tempdb Contention and Improving Scalar UDF performance

The performance numbers quoted above are for workloads using In-Memory OLTP’s memory-optimized tables and natively compiled stored procedures at their core. But you don’t have to use In-Memory OLTP for all of your workloads; that is the beauty of the deep integration of In-Memory OLTP in the SQL Engine. Here are a couple of thoughts on how you could use In-Memory OLTP to relieve tempdb contention and improve scalar UDF performance in your existing app with minimal changes:

  • Use SCHEMA_ONLY memory-optimized tables to replace temp tables in your app. The advantages are that tempdb is never touched, and the schema of the table is persisted, meaning you can always rely on this table being available. But do keep in mind that you’ll want to create the table before running the workload; In-Memory OLTP is optimized for runtime, not create time. (See the sketch after this list.)
  • Use memory-optimized table types to replace traditional table types in your app. Memory-optimized types can be used anywhere you use traditional table types, meaning you will no longer need to touch tempdb for these variables, since memory-optimized table variables live purely in the memory space of the user database. Plus, you benefit from the more efficient memory-optimized data access.
  • Use natively compiled scalar User-Defined Functions to replace traditional scalar UDFs that do not perform data access. Native compilation significantly reduces the runtime of the UDF. Native UDFs cannot access traditional disk-based tables; this is not an issue if the UDF does not perform data access; if it does, consider migrating those tables to memory-optimized.
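
As a minimal sketch (assuming hypothetical object names, and a database that already has a MEMORY_OPTIMIZED_DATA filegroup), the first two ideas might look like this:

-- A non-durable memory-optimized table as a temp-table replacement: data is not logged
-- or persisted, but the schema is, so the table is always available after a restart.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload   VARBINARY(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- A memory-optimized table type as a drop-in replacement for a traditional table type;
-- variables of this type never touch tempdb.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
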
Getting started

Here are a few ways you can get started with In-Memory OLTP:

See the other posts in the SQL Server 2016 blogging series.


Categories: Database

Planning an efficient upgrade from SQL Server 2005: Step 1 of 3

Step 1: Discover what’s in your database environment

Upgrading a database can seem daunting. Fortunately, Microsoft offers exceptional tools and support to help you smoothly upgrade from SQL Server 2005 to SQL Server 2014 or Microsoft Azure SQL Database before the end-of-support deadline on April 12, 2016. In this series of three blog posts, we’ll review three steps that can help make this a manageable and simplified process.

To make your upgrade more efficient with less unintended downtime, it’s important to plan before getting started. The first step in your planning process is to discover what’s in your database environment: workloads, virtual or physical servers, what’s running on which versions and editions of software, and so on. For example, you’ll want to find out if SQL Server 2005 is running in your environment. Then, discover if any applications are running on SQL Server 2005 so you can protect critical data after the end-of-support date.

One way to gather this information is to run the Microsoft Assessment and Planning Toolkit, which automates the process of gathering data about your database environment. It can also help you find unidentified databases that might be impacted by end of support. Microsoft Services and Microsoft Partners also provide offerings to help you do a detailed assessment of your database environment and recommend next steps. Finding the right partner can greatly reduce work and worries on an upgrade.

If you are running an assessment on your own, check out the recommended tasks for the discovery step below.

Assess your database environment with an automated tool

Run the Microsoft Assessment and Planning (MAP) Toolkit. The MAP toolkit is an agentless, automated inventory, assessment and reporting tool that gives you a complete, network-wide inventory of database instances, identifies different software editions and assesses hardware used. The wide-ranging details of databases and server instances that MAP provides are vital information you can use to consolidate databases and better use hardware and database resources.

Document application workloads and dependencies
  • Identify in-house applications. When documenting and identifying which standalone, in-house applications to upgrade, it’s good to have a clear understanding of the application you’re targeting for upgrade.
  • Identify ISV applications. Migrating applications from independent software vendors requires you identify ISV licensing policies and issues, and develop upgrade strategies different from those for in-house applications.
  • Find documentation for non-default configurations. If you have a customized SQL Server instance, find out if anyone documented it before you begin the migration. Your database administrators will need this documentation in the new database environment.
  • Validate inter-application connectivity. A key dependency is to identify applications that connect to the SQL Server database you’re upgrading. You’ll need to keep those applications connected when you upgrade the database.

Understand usage and compatibility requirements
  • Understand application usage and outage windows. Because databases are more than just data and log files sitting on storage somewhere, you’ll need to fit together many pieces of the DB puzzle — application, storage, operating system, network, and so on — to minimize downtime.
  • Assess compatibility requirements. When planning for a SQL Server database upgrade, you’ll want to understand what features are deprecated, discontinued or changed in the new version. Being aware of these changes in advance helps you prevent performance problems and issues related to application availability.

In the next post in this series, we’ll talk about choosing a target destination for workloads that you’re upgrading. Leveraging the data you’ve gathered in this step, you’ll now have the opportunity to evaluate your current data infrastructure. You can then upgrade to solutions that streamline your investment by moving data to the cloud or by consolidating it.

Categories: Database

Exploration of SQL Server 2016 Always Encrypted – Part 2

Database Journal News - Thu, 12/10/2015 - 09:01

It is important to only allow authorized individuals access to confidential data.  Therefore when setting up an Always Encrypted table it is important to consider splitting configuration tasks between multiple individuals/groups to improve security.   Follow Greg Larsen as he shows how to setup up SQL Server Always Encrypted tables where database administrators can’t see the clear text confidential data.

Categories: Database

Real-Time Operational Analytics Using In-Memory Technology

Operational workloads refer to the business transactions that are critical to running a business. For example, a retail store has a transactional system to create or modify new orders, and a credit card company tracks all charges made by vendors on behalf of its customers. These transactional systems are critical to businesses, as any downtime or slowdown has a direct impact on the business’s bottom line. Therefore, these systems are designed for performance and scalability and configured for high availability. Equally important to the operational workload are the analytics that businesses use to answer questions such as, “What is the average time to fulfill an order?”

Most customers implement analytics by setting up a Data Warehouse on a separate machine, similar to the configuration described in my recent post on using In-Memory technology, with a periodic flow of data through ETL (Extract, Transform and Load) from the operational system to the Data Warehouse. This approach of optimizing and isolating the operational and analytics workloads has served well, but there are some drawbacks:

  • Data Latency: At best, the data for analytics is as current as of the last ETL job. If the ETL job moves data once a day, your analytic queries are running on data up to 24 hours behind. More businesses are demanding analytics on live data to better support operations. For example, a retail chain could run analytics on transactional data to track real-time product sales and use this knowledge to offer discounts or replenish inventory in a timely fashion.
  • Complexity: Migrating data from the operational system can be complex. For example, identifying what has changed since the last ETL operation is often difficult.
  • Cost: You need to set up a separate server to host the Data Warehouse, which means another license for a database server and the overhead of creating and maintaining the ETL application.

Solution

Real-Time Operational Analytics in SQL Server 2016 enables running analytics queries directly on your operational workload using columnstore indexes. The image below shows one possible configuration using Analysis Server in Direct Query mode, but you can use any analytics tool or custom solution at your disposal. This solution addresses the drawbacks mentioned above since the analysis happens directly on the operational data.

For real-time operational analytics, take these steps:

  • Identify the operational table(s) and the columns you need to run analytics on. These can be disk-based tables, memory-optimized tables, or both.
  • Create a columnstore index on the tables identified; see the sketch after this list. Note that starting with SQL Server 2016, the nonclustered columnstore index is updateable, so your operational workload continues to work as before without requiring any application changes.
  • Set up the analytics framework to source data directly from the operational store. The SQL Server query optimizer will automatically choose the columnstore index as needed for analytics queries.
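
As a minimal sketch (assuming a hypothetical Orders table and column list), the index creation for each kind of table might look like this:

-- Disk-based operational table: add an updatable nonclustered columnstore index
-- covering only the columns the analytics queries need.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders
ON dbo.Orders (OrderId, OrderDate, CustomerId, Quantity, Price);

-- Memory-optimized operational table: add a clustered columnstore index instead.
ALTER TABLE dbo.Orders_InMemory
ADD INDEX CCI_Orders_InMemory CLUSTERED COLUMNSTORE;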

Not all workloads will fit this model. For example, a separate Data Warehouse is still needed if data needs to be aggregated from multiple sources for analytics.

While real-time operational analytics is promising, one may question its impact on the operational workload while delivering high performance analysis.

Minimizing impact of Operational Workload

SQL Server 2016 provides options to minimize the impact on operational workload as follows:

  • Filtered Columnstore Index: For many operational workloads, the data is ‘hot’ (i.e. the target of DML operations) only for a short time, and then it transitions to a ‘warm’ or ‘cold’ state. For example, consider an online retail vendor where customers browse a catalog and place orders, and those orders are then shipped. An order is ‘hot’ while it is going through the steps of being processed, but once it is shipped it transitions into a ‘cold’ state, since it may never be updated again. Creating the columnstore index only over the ‘cold’ rows eliminates the overhead of maintaining the columnstore index on the ‘hot’ data. When an analytics query runs, SQL Server generates a query plan that accesses both the rows in the columnstore index and the rows that have been filtered out. SQL Server allows you to create a columnstore index with a filter condition, as shown in the example below:

    create nonclustered columnstore index <index-name> on Orders (<list of columns>) where order_status = 'Shipped'

  • Offloading Analytics Queries to a Readable Secondary: As mentioned earlier, most operational workloads are mission critical and are likely to have high availability configured using Always On. You can choose to offload the analytics workload to a readable secondary to minimize the impact on the primary replica, as shown in the picture below.

Analytics query performance

Running analytics on an operational schema, which is highly normalized, will not be as performant as running it on a schema optimized for analytics (a star schema). However, a columnstore index speeds up analytics query performance significantly, which compensates for the extra complexity (more joins) in the query.

Microsoft Dynamics AX, currently in public preview, leverages SQL Server 2016 with real-time operational analytics.

Summary

SQL Server 2016 supports real-time operational analytics both on disk-based and memory-optimized tables so that you can leverage it without any changes to your applications. With memory-optimized and columnstore combination, you get the best of OLTP performance and analytics query performance. Columnstore Indexes for Real-Time Operational Analytics provides more detail about this solution.

See the other posts in the SQL Server 2016 blogging series.

Categories: Database

Speeding up Business Analytics Using In-Memory Technology

New challenges

Businesses have always used data and analytics to improve business decisions. As the new age of IoT (the Internet of Things) approaches, businesses are experiencing exponential data growth and the challenges that come with managing large data sets. At the same time, tools like Power Pivot democratize access to these large data sets across the whole business. As more employees gain access, the analytics queries they ask tend to be ad hoc rather than queries easily answered by pre-aggregated cubes. The In-Memory analytics improvements in SQL Server 2016 address these new challenges by providing significant data compression and speeding up analytics query performance by up to 100x.

Common configuration

Traditionally, customers have modeled their data using multidimensional online analytical processing (MOLAP) with pre-aggregated data, or with the Tabular model, with the source data in the database and periodic ETL (Extract, Transform and Load). The picture below shows a typical deployment using SQL Server for the relational DW and SQL Server Analysis Services (SSAS) for data modeling.

For both the MOLAP and Tabular models, the analytical queries run inside SSAS, as shown by the blue box. This allows customers to get the best analytical performance even when their data sources are slow, but the approach has some challenges, as described below.

  • Cube Processing: The data needs to be refreshed and aggregated periodically.
  • Data Latency: Data is as fresh as of the last refresh cycle. For many businesses, this is increasingly unacceptable. They want the ability to run analytics immediately after the data has been loaded.
  • Very Large Database Support: For massive data warehouses, it may not be feasible to load the data to a separate BI server due to resource and time limits.

In-Memory Analytics addresses these challenges by leveraging the updateable clustered columnstore index (CCI), available since SQL Server 2014. Clustered columnstore indexes (CCIs) store the data in columnar storage format, as shown below, storing each column independently. The two key benefits of storing data in columnar format are:

  • All values of a column are stored together, so they can be compressed very well, typically 10x. This not only reduces the storage footprint significantly, but also reduces IO and memory usage proportionally and improves the performance of analytics queries.
  • Only the columns referenced in the query are read into memory, which further reduces both the IO and the memory needed, and further improves the performance of analytics queries.

SQL Server executes queries on a columnstore index in a special batch mode that processes a set of rows together for execution efficiency. For example, when aggregating total sales for a quarter, SQL Server can aggregate around 1,000 values at a time instead of one value at a time.

With the combination of reduced IO and batch-mode processing, analytics query performance can improve by up to 100x. The compression achieved and the query performance will vary with the workload. Here are a couple of customer case studies using the clustered columnstore index with SQL Server 2014. Customers can expect even better query performance with columnstore indexes in SQL Server 2016.

Recommended configuration

Columnstore indexes remove the need to pre-aggregate data for reporting queries, allowing configurations such as the one shown in the picture below.

Compared with the previous configuration, Analysis Services can be configured to use either ROLAP (Multidimensional) or DirectQuery (Tabular) mode, which sends the query directly to SQL Server, where the data is stored as a clustered columnstore index. With 10x data compression and up to 100x faster analytics query processing, this can reduce storage requirements significantly and eliminate the need to pre-aggregate the data for many workloads. It also means the data is available for analytics immediately after it is loaded through ETL. Another point to note is that SSAS Tabular (DirectQuery) mode has been improved in SQL Server 2016 to generate better queries, which can give an order-of-magnitude query performance improvement over SSAS 2014.

Migrating a workload to use a columnstore index

To add a clustered columnstore index, simply execute the following command; the DROP_EXISTING = ON option replaces the existing clustered index with the new columnstore index. Also, with columnstore indexes you may not need some of the other indexes, so you can consider dropping them.

CREATE CLUSTERED COLUMNSTORE INDEX <index-name> ON <table>
WITH (DROP_EXISTING = ON);

Changes in SQL Server 2016

SQL Server 2016 has significant advancements over SQL Server 2014 for In-Memory analytics. Some highlights are functionality (e.g. the ability to create traditional nonclustered indexes to enforce PK/FK constraints), performance (e.g. the addition of new batch-mode operators and aggregate pushdown), online index defragmentation, and supportability (e.g. new DMVs, Perfmon counters and XEvents).

Recommendation

We recommend using clustered columnstore indexes (CCIs) for all Data Warehouse workloads. The feature is available in the Enterprise Edition of SQL Server and in Azure SQL Database. It will benefit you in all configurations, with or without Analysis Services or any other third-party visualization tools.

See the other posts in the SQL Server 2016 blogging series.

Categories: Database

Azure SQL Database Resiliency - Point-in-Time Restore

Database Journal News - Mon, 12/07/2015 - 09:01

Azure SQL Database provides a number of benefits that leverage resiliency and redundancy built into the underlying cloud infrastructure. This includes a range of options that facilitate automatic backup and failover, which help you recover from human errors, hardware component failures, or even regional disasters. In this article, we will review these options.

Categories: Database

PoWA 3.0.0 released

PostgreSQL News - Mon, 12/07/2015 - 01:00

Paris, December 07 2015

DALIBO is proud to present the third release of PoWA, the PostgreSQL Workload Analyzer.

Realtime traffic analysis and dynamic graphs

PoWA is a workload analyzer that gives a clear view of the activity of your PostgreSQL servers with a query runtime graph and a block hit/read graph along with a chart of time-consuming queries over the specified time period.

If you zoom anywhere in one of the graphs, the chart will adjust and show you which queries were running at that time. If you click on a specific query, you will get additional graphs such as read/write time, number of rows affected, local and shared hit, etc.

Better predicate analyzer

The pg_qualstats extension stores new counters. It’s now possible to see the most-executed predicates in relation to all the related queries. It also tracks non-normalized queries, so it’s possible to run an EXPLAIN of any query tracked by pg_stat_statements.

Database global optimization

PoWA is now able to use statistics about every predicate used by any query executed on a database to suggest the smallest index set that optimizes every one of those predicates.

In particular, the heuristics place heavy emphasis on consolidating many indexes into one by giving preference to definitions spanning multiple columns. This can surface new information about the actual load and the correlation between predicates, which are traditionally hard for the DBA to discover.

Index suggestion check

Thanks to the HypoPG extension, the benefit of each suggested index creation can automatically be checked by running the queries against hypothetical indexes. You can see instantly whether the suggested index is relevant and how much it will improve the query.

Credits

DALIBO would like to thank the users and developers who contributed to this release.

PoWA is an open project available under the PostgreSQL License. Any contribution to build a better tool is welcome. You just have to send your ideas, patches or feature requests using the GitHub tools or to powa-users@googlegroups.com.

Links

For PoWA 3.0.0:

For the stats extensions:

About PoWA

PoWA is the PostgreSQL Workload Analyzer, a tool that gathers performance stats and provides real-time charts and graphs to help monitor and tune your PostgreSQL servers.

Code & Demo at dalibo.github.io/powa

About DALIBO

DALIBO is the leading PostgreSQL company in France, providing support, training and consulting to its customers since 2005. The company contributes to the PostgreSQL community in various ways, including: code, articles, translations, free conferences and workshops.

Check out DALIBO's open source projects at dalibo.github.io

Categories: Database, Open Source

SQL Server 2016: Advanced Capabilities without Expensive Add-Ons

What does the industry-leading SQL Server 2016 have in common with spaghetti sauce? Just the fact that if you need something, “it’s in there.” With SQL Server 2016, the advanced capabilities you need in a data platform are built in. You don’t need to purchase expensive add-ons to get necessary functionality: Looking for real-time operational analytics that will help you gain insights to advance your business? It’s in there. Need outstanding performance and scalability? It’s in there: SQL Server 2016 includes optimized in-memory technology that delivers up to 30x faster transactional performance and over 100x faster queries for data warehouse and BI workloads.

As the digital economy evolves, it’s ever more crucial for your data platform to include capabilities that allow you to study past data—such as operational and social media data—to identify potential trends, to analyze the results of campaigns or events, or to predict outcomes based on past and current data. Therefore, if you’re like most decision makers, you’re deeply concerned about how technology can enable your organization to keep up with industry trends and adapt business strategies to meet demands. Indeed, industry research shows that such concerns have been growing in the past few years. For example, in 2010, 37% of those surveyed by MIT Sloan Management Review believed that business analytics created a competitive advantage in their organization. In 2011, that percentage increased to 58%; in 2012 it had grown to 67%, and it continues to rise.

“Today the adoption of in-memory analytics is growing in the hopes it can deliver speed, deeper insights and allow companies to do more with the data they have to solve a variety of business problems” (Tapan Patel, In-Memory Analytics: Get Faster, Better Insights from Big Data). SQL Server 2016 addresses this need, and In-Memory processing will now enable real-time operational insights on data, both in-memory and on disk. In contrast to other in-memory technologies, SQL Server 2016 uniquely provides the speed of in-memory with operational analytics. Not only will OLTP performance be enhanced, but an increase in concurrency means customers can now take advantage of even higher parallel compute (double the compute, from 64 CPUs to 128) and memory allocations (in Terabytes).

With SQL Server 2016, operational analytics “is in there.”

SQL Server 2016 was designed to enable advanced operational analytics and to do so at industry-leading scale. Witness the fact that the top three TPC-H performance benchmarks for data warehousing workloads are held by SQL Server for non-clustered scale-up performance. And the right to deploy the SQL Server massive parallel processing (MPP) data warehousing appliance is included with the SQL Server 2016 Enterprise Edition license. Again, “it’s in there,” and at half the cost of Oracle Exadata.

As the leader in the most recent Gartner Magic Quadrant for Operational Database Management Systems, surpassing Oracle in both execution and vision, SQL Server is the advanced data platform for mission-critical applications, analytics, scalability, and performance—with everything, including innovation, built in. Upcoming blog posts in this series will dive into the technical details and explain how built-in in-memory analytics, real-time operational analytics, and in-memory OLTP can give your organization an edge in today’s digital economy.

To learn more about SQL Server 2016, see what’s new in SQL Server 2016.

See the other posts in the SQL Server 2016 blogging series.

Categories: Database

Exploration of SQL Server 2016 Always Encrypted – Part 1

Database Journal News - Thu, 12/03/2015 - 09:01

With the introduction of SQL Server 2016 you now have a new way to encrypt columns called Always Encrypted. With Always Encrypted, data is encrypted at the application layer via ADO.NET. This means you can encrypt your confidential data with your .NET application prior to the data being sent across the network to SQL Server. Follow Greg Larsen as he explores setting up a table that stores always encrypted data.
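
As a rough illustration (a minimal sketch with hypothetical names, not the article’s example), a column protected by Always Encrypted is declared with an ENCRYPTED WITH clause that references a previously created column encryption key:

-- Assumes a column master key and a column encryption key (MyCEK) have already been created.
CREATE TABLE dbo.Customers
(
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);

Encryption and decryption then happen in the client driver (for example, ADO.NET with Column Encryption Setting=Enabled in the connection string), so SQL Server itself only ever sees ciphertext.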

Categories: Database

Troubleshooting SQL problems in your database

IBM - DB2 and Informix Articles - Thu, 12/03/2015 - 06:00
An application's poor performance is sometimes caused by a performance problem in the database. The way the SQL statements in the database are performing will give you a good indication of how the database is performing. In this article, learn how to query your SQL statements to see how they rank and to identify possible performance issues in DB2 for Linux, UNIX, and Windows databases.
Categories: Database

Get the most out of SQL Server 2016

Blog post series to highlight new features and enhancements available with SQL Server 2016.

Data, and the ability to extract actionable intelligence from it, is driving transformation in how we do business today. With SQL Server 2016, it’s never been easier to capture, transform, mash-up, analyze and visualize any data, of any size, at any scale in its native format. Plus, you can do all of this while using familiar tools, languages and frameworks in a trusted environment, both on-premises and in the cloud — with what Corporate Vice President of Microsoft’s Data Group Joseph Sirosh calls “the best SQL Server release in history.”

In the recent Gartner Magic Quadrant for Operational Database Management Systems, Microsoft is positioned as an industry leader, rated the highest in execution and furthest in vision. SQL Server 2016 builds on this leadership, and comes packed with powerful built-in features.

In our upcoming SQL Server 2016 post series, kicking off December 3, 2015, you can learn about our new technologies, discover best practices, and gain product insights from the engineers who built it. Our engineers are lined up to share specifics from their area of expertise.

The series will cover:

  • App and data performance features: Discover enhancements of In Memory OLTP and real-time Operational Analytics.
  • Data security features: Learn about Always Encrypted technology, an industry-first that helps protect data at rest and in motion, on-premises and in the cloud.
  • Business intelligence: Gain insights into rich visualizations as we cover SQL Server 2016’s BI capabilities and additions, including the upcoming release of Mobile BI resulting from our Datazen acquisition.
  • Big Data and analytics: Uncover ways to use Advanced Analytics, including a walk-through of how to use R, the popular Big Data scientist language which is now a part of SQL Server. We’ll also provide details on PolyBase, which allows you to extract value from unstructured and structured data using your existing T-SQL skills.
  • Hybrid and cloud: Learn about SQL Database, backup improvements and Stretch Database, which enables stretching a single database between on-premises and Azure.

Read the current posts from this blog series:

Start using SQL Server Community Technology Preview (CTP) 3.2 today

To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the CTP 3.2 Preview or try it in Microsoft Azure. Then, start evaluating the impact these new innovations can have for your business.

Have questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know using Microsoft’s Connect tool. We look forward to hearing from you.

Categories: Database

Always Encrypted Data Security and Competitive Advantage

As data becomes the center of digital business, your organization’s ability to secure data is crucial for compliance and confidentiality. But data security is so important that it can also be a competitive differentiator. Indeed, Forrester Research reports that a focus on data security can help drive customer loyalty and retention, create opportunities for premium offers and new sources of revenue, and protect future revenue streams [1].

For example, suppose your data platform’s security technology can assure customers that their credit card transactions complete quickly at the point of sale. In addition, suppose your data platform allows you to protect customer data by means of encryption throughout that transaction. A data platform that includes built-in technology to encrypt the customer’s sensitive data within the database and during transaction-processing provides the foundation for customer satisfaction—and differentiates your organization from competitors that rely on less advanced data security.

Such assurance is built into the SQL Server 2016 data platform with Always Encrypted technology. (For details on the security built into the SQL Server platform, see “Always Encrypted in SQL Server & Azure SQL Database.”) And Always Encrypted capabilities are just one example of Microsoft’s innovative and comprehensive approach to security. Microsoft has made security a priority for more than a decade. As a result, according to the National Institute of Standards and Technology (NIST) public vulnerability database [2], SQL Server has the fewest security vulnerabilities of the major database vendors. In addition, SQL Server has been deemed “the most secure database” by the Information Technology Industry Council (ITIC).

Besides devoting internal efforts to security, Microsoft works closely with customers and the SQL Server community to ensure that security is properly implemented. As noted by Douglas McDowell, SQL Server MVP and North America CEO of the consulting company SolidQ: “SQL Server continues to be one of the most secure database platforms. Microsoft increases SQL Server security with each release (check out Always Encrypted in SQL Server 2016). SQL Server security is all about the platform being secure by design and by default, running it under a hardened configuration, and then making sure best practice security processes and procedures are followed.”

This across-the-board focus on security contrasts with the position of other database vendors: Recall the news in August 2015 when Oracle’s chief security officer berated customers for performing their own security testing on Oracle software and demanded they stop. Although Mary Ann Davidson’s statements were later removed from her blog, her comments reveal a lack of understanding when it comes to security and customer needs.

As the Forrester study highlighted, excellent data security can help organizations compete. And if there is any doubt about how data professionals rate the importance of advanced security technology, consider the results of a recent independent study by King Research: More than 400 security professionals rated the importance of various criteria for selecting security products on a scale of 1 to 10. Respondents placed “Security Advantage by Using Superior Technology” at a very high 7.5 on that scale.

Microsoft recognizes that a commitment to security is a key criterion when you evaluate data platforms. SQL Server provides superior data platform security technology that can serve as the foundation for a comprehensive data security strategy to help your organization compete. Find out how to take advantage of that technology by attending the webinar, "Tackle the top five data challenges with SQL Server." For a more detailed overview download the white paper on SQL Server 2016 mission-critical capabilities.

References:

  1. “The Future of Data Security And Privacy: Growth And Competitive Differentiation,” Vision: The Data Security And Privacy Playbook, John Kindervag, Heidi Shey, and Kelley Mak, Forrester Research, July 10, 2015
  2. National Institute of Standards and Technology, Comprehensive Vulnerability Database, update October 2015
  3. “Companies Lack Security Controls for Accessing Enterprise Applications,” King Research
Categories: Database

Introducing: SQL Server 2016 Community Technology Preview 3.1

The SQL Server 2016 Community Technology Preview (CTP) 3.1 is now available for download! This release fully embraces the “Cloud First” tenet: the build was deployed to the Azure SQL Database service first and ran in production there before being released for download.

In SQL Server 2016 CTP 3.1, part of our new rapid preview model, we made enhancements to several features. You can try them now in your SQL Server 2016 development and test environments.

As part of our new rapid release model, you don’t have to wait several months for responses to feedback or to test new capabilities. This means you can accelerate testing and time to production. The frequent updates are also engineered to be of the same quality as major CTPs, so you don’t have to be concerned about build quality. You also have the flexibility to choose which CTPs you will deploy for development and test environments and won’t be forced to upgrade to the most recent preview release.

In SQL Server 2016 CTP 3.1, available for download or in an Azure VM today, you will see enhancements in several areas, including:

  • New In-Memory OLTP improvements, including Unique indexes, LOB data types, and Indexes with NULLable key columns
  • Programmability improvement for the AT TIME ZONE clause
  • Enhancements to SQL Server Analysis Services
  • PowerPivot and SSRS/Power View are now available for SharePoint Server 2016 Beta 2

For additional details, check out the detailed SQL Server 2016 CTP 3.1 Engineering blog post.

Download SQL Server 2016 CTP 3.1 today!

As the foundation of our end-to-end data platform, SQL Server 2016 is the biggest leap forward in Microsoft's data platform history, with real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, and new advanced security technology, available both on-premises and in the cloud.

To learn more about SQL Server 2016, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview or try the preview by using a virtual machine in Microsoft Azure and start evaluating the impact these new innovations can have for your business.

Have questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know at Connect. We look forward to hearing from you!

For additional information about CTP 3.1, see What’s New in SQL Server 2016 and SQL Server 2016 Release Notes.

Categories: Database

SQL Server 2016 Community Technology Preview 3.1 is available

The SQL Server engineering team is pleased to announce the immediate availability of SQL Server 2016 November public preview release CTP 3.1.

To learn more about SQL Server 2016, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview or try the preview by using a virtual machine in Microsoft Azure and start evaluating the impact these new innovations can have for your business.

Have questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know through Microsoft’s Connect tool. We look forward to hearing from you. 

New In-Memory OLTP improvements in CTP3.1 include:

  • Unique indexes in memory-optimized tables, to complement the support for unique constraints that was released in CTP3
  • LOB data types varchar(max), nvarchar(max), and varbinary(max) in memory-optimized tables and natively compiled modules
  • Indexes with NULLable key columns in memory-optimized tables

Example: The following script illustrates the new features using a memory-optimized table and a natively compiled stored procedure.

CREATE TABLE Sales.Store_inmem (
    [BusinessEntityID] int IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    [Name] nvarchar(200) NOT NULL,
    [SalesPersonID] int NULL,
    [Demographics] nvarchar(max) NULL,                         -- LOB column
    [ModifiedDate] datetime2 NOT NULL DEFAULT (sysdatetime()),
    INDEX ix_SalesPersonID NONCLUSTERED (SalesPersonID),       -- index on NULLable key column
    INDEX ix_Name UNIQUE NONCLUSTERED (Name)                   -- UNIQUE index
) WITH (MEMORY_OPTIMIZED=ON)
GO

CREATE PROCEDURE Sales.usp_InsertStore
    @Name nvarchar(200) NOT NULL,
    @SalesPersonID int = NULL,
    @Demographics nvarchar(max) = NULL
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'Dutch')
    INSERT Sales.Store_inmem (Name, SalesPersonID, Demographics)
    VALUES (@Name, @SalesPersonID, @Demographics)
END
GO

DECLARE @Demographics nvarchar(max) = REPLICATE(N'LOBtest', 10000)
EXECUTE Sales.usp_InsertStore @Name='test', @Demographics=@Demographics
SELECT * FROM Sales.Store_inmem
GO

Programmability improvement - AT TIME ZONE clause
Support for the AT TIME ZONE clause allows customers to convert datetime values across different time zones while honoring time zone conversion rules. It can be used to present datetime values appropriately in applications that target international markets.

Syntax: inputdate AT TIME ZONE timezone

  • inputdate: An expression that can be resolved to a smalldatetime, datetime, datetime2, or datetimeoffset value.
  • timezone: The name of the destination time zone, in the standard format enumerated by Windows. Available time zones can be found by querying sys.time_zone_info (a sample query follows the examples below).
Examples

SELECT SalesOrderID, OrderDate, TODATETIMEOFFSET (OrderDate, '-07:00') as OrderDatePST,
TODATETIMEOFFSET (OrderDate, '-07:00') AT TIME ZONE 'Central European Standard Time' AS OrderCETTime
FROM Sales.[SalesOrderHeader]

If inputdate is provided without offset information, the function applies the offset of the specified time zone, assuming the value was originally given in that time zone:

SELECT convert(datetime2, '2015-06-27 11:20:08.5783475')
AT TIME ZONE 'Central European Standard Time' as CETTimeWithOffset

SELECT convert(datetime2, '2014-11-04 11:15:30.3214560')
AT TIME ZONE 'Central European Standard Time' as CETTimeWithOffset

If inputdate already contains an offset, the function converts the value to the destination time zone:

SELECT convert(datetimeoffset, '2015-06-27 11:20:08.5783475 +02:00')
AT TIME ZONE 'Pacific Standard Time' as PacificStandardTime

SELECT convert(datetimeoffset, '2014-11-04 10:15:30.3214560 +00:00')
AT TIME ZONE 'Pacific Standard Time' as PacificStandardTime

SQL Server Analysis Services (SSAS) updates let you upgrade existing models to the 1200 compatibility level and add a JSON editor for SSDT; please visit the SSAS team blog post to learn more.

SQL Server PowerPivot and Reporting Services/Power View for SharePoint 2016 are available now with CTP 3.1. Make sure you use SharePoint Server 2016 Beta 2; earlier versions of SharePoint Server are not supported. You also need Office Online Server Preview, because SharePoint Server 2016 no longer includes Excel Services; that functionality is now available only through Office Online Server. PowerPivot and Reporting Services/Power View for SharePoint 2016 therefore require Office Online Server.

For more information, see What's New in SQL Server 2016 and SQL Server 2016 Release Notes, and read our individual Engineering team CTP 3.1 blog posts for SQL Server Analysis Services (SSAS) and SQL Server Management Studio (SSMS).

Categories: Database