
Methods & Tools


Database

Processing and content analysis of various document types using MapReduce and InfoSphere BigInsights

IBM - DB2 and Informix Articles - Tue, 07/29/2014 - 05:00
Businesses often need to analyze large numbers of documents of various file types. Apache Tika is a free open source library that extracts text content from a variety of document formats, such as Microsoft Word, RTF, and PDF. Learn how to run Tika in a MapReduce job within InfoSphere BigInsights to analyze a large set of binary documents in parallel. Explore how to optimize MapReduce for the analysis of a large number of smaller files. Finally, learn how to create a Jaql module that makes MapReduce available to non-Java programmers, so they can run scalable MapReduce jobs to process, analyze, and convert data within Hadoop.
Categories: Database

Understanding Hugepages in Oracle Database

Database Journal News - Mon, 07/28/2014 - 08:01

Large Oracle instances running on Linux can benefit from using hugepages.  Read on to see what hugepages are and how they help Oracle run better.

Categories: Database

For proven in-memory technology without costly add-ons, migrate your Oracle databases to SQL Server 2014

Today, we are making available a new version of SQL Server Migration Assistant (SSMA), a free tool to help customers migrate their existing Oracle databases to SQL Server 2014. Microsoft released SQL Server 2014 earlier this year, after months of customer testing, with features such as In-Memory OLTP to speed up transaction performance, In-Memory Columnstore to speed up query performance, and other great hybrid cloud features such as backup to cloud directly from SQL Server Management Studio and the ability to utilize Azure as a disaster recovery site using SQL Server 2014 AlwaysOn.

Available now, SQL Server Migration Assistant version 6.0 for Oracle databases greatly simplifies the migration process from Oracle databases to SQL Server. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing, to reduce the cost and risk of database migration projects. Moreover, SSMA version 6.0 for Oracle databases brings additional features, such as automatically moving Oracle tables into SQL Server 2014 in-memory tables, the ability to process 10,000 Oracle objects in a single migration, and increased performance in database migration and report generation.

Many customers have realized the benefits of migrating their database to SQL Server using previous versions of SSMA. For example:

SSMA for Oracle is designed to support migration from Oracle 9i or later to all editions of SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014. The SSMA product team is also available to answer your questions and provide technical support at ssmahelp@microsoft.com.

To download SSMA for Oracle, go here. To evaluate SQL Server 2014, go here.  

Categories: Database

Get started backing up to the cloud with SQL Server Backup to Microsoft Azure Tool

There are many compelling reasons to back up your SQL Server database to the cloud. Not only will you have an offsite copy of your data for business continuity and disaster recovery purposes, but you can also save on CAPEX by using Microsoft Azure for cost-effective storage. And now you can back up to Microsoft Azure even for databases that aren’t running the latest version of SQL Server, creating a consistent backup strategy across your database environment.

SQL Server has these tools and features to help you back up to the cloud:

  • In SQL Server 2014, Managed Backup to Microsoft Azure manages your backups to Microsoft Azure, setting backup frequency based on data activity. It is available inside SQL Server Management Studio in SQL Server 2014.
  • In SQL Server 2012 and 2014, Backup to URL provides backup to Microsoft Azure using T-SQL and PowerShell scripting (a minimal T-SQL sketch follows this list).
  • For prior versions, the SQL Server Backup to Microsoft Azure Tool enables you to back up all supported versions of SQL Server to the cloud, including older ones. It can also provide encryption and compression for your backups, even for versions of SQL Server that don’t support these functions natively.
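
To make the first two options concrete, here is a minimal T-SQL sketch; the storage account, access key, container, and database names are hypothetical placeholders, so adjust all of them to your environment.

-- Store the Azure storage account name and access key in a credential
-- (Backup to URL ships in SQL Server 2012 and 2014)
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',   -- storage account name
SECRET = '<storage access key>';
GO

-- Back up directly to a blob in an existing container
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
GO

-- In SQL Server 2014, Managed Backup can instead schedule backups automatically
EXEC msdb.smart_admin.sp_set_db_backup
    @database_name = 'AdventureWorks2012',
    @retention_days = 30,
    @credential_name = 'AzureBackupCredential',
    @encryption_algorithm = 'NO_ENCRYPTION',
    @enable_backup = 1;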

To show you how easy it is to get started with SQL Server Backup to Microsoft Azure Tool, we’ve outlined the four simple steps you need to follow:

Prerequisites: a Microsoft Azure subscription and a Microsoft Azure Storage account. You can log in to the Microsoft Azure Management Portal using your Microsoft account. In addition, you will need to create a Microsoft Azure Blob storage container: SQL Server uses the Microsoft Azure Blob storage service and stores the backups as blobs.

Step 1: Download the SQL Server Backup to Microsoft Azure Tool, which is available on the Microsoft Download Center.

Step 2: Install the tool. From the download page, download the MSI (x86/x64) to the local machine that has the SQL Server instances installed, or to a local share with access to the Internet. Use the MSI to install the tool on your production machines; double-click it to start the installation.

Step 3: Create your rules. Start the Microsoft SQL Server Backup to Microsoft Azure Tool service by running SQLBackup2Azure.exe. Going through the wizard to set up rules tells the program which backup files should be encrypted, compressed, or uploaded to Azure storage. The Tool does not do job scheduling or error tracking, so you should continue to use SQL Server Management Studio for that functionality.

On the Rules page, click Add to create a new rule. This launches a three-screen rule-entry wizard.

The rule will tell the Tool what local folder to watch for backup file creation. You must also specify the file name pattern that this rule should apply to.

To store the backup in Microsoft Azure Storage, you must specify the name of the account, the storage access key, and the name of the container.  You can retrieve the name of the storage account and the access key information by logging into the Microsoft Azure management portal.

At this time, you can also specify whether or not you wish to have the backup files encrypted or compressed.

Once you have created one or more rules, you will see the existing rules and the option to Modify or Delete the rule.

Step 4: Restore a database from a backup taken with the SQL Server Backup to Microsoft Azure Tool in place. The Tool creates a ‘stub’ file with some metadata to use during restore. Use this file like your regular backup file when you wish to restore a database; SQL Server uses the metadata from this file and the backup on Microsoft Azure storage to complete the restore.

If the stub file is ever deleted, you can recover a copy of it from the Microsoft Azure storage container in which the backups are stored.  Place the stub file into a folder on the local machine where the Tool is configured to detect and upload backup files.
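
Because the stub file stands in for an ordinary backup file, restoring is a plain RESTORE DATABASE statement pointed at the stub; a minimal sketch, with a hypothetical path and database name:

-- Restore using the local stub file; SQL Server reads its metadata and
-- fetches the actual backup from Azure Blob storage
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'C:\Backups\AdventureWorks2012.bak'   -- the stub file
WITH RECOVERY;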

That’s all it takes!  Now you’re up and running with Backup to and Restore from Microsoft Azure.

To learn more about backing up to the cloud, join Forrester Research analyst Noel Yuhanna in a webinar on Database Cloud Backup and Disaster Recovery. You’ll find out why enterprises should make database cloud backup and DR part of their enterprise database strategy.

The webinar takes place on Tuesday, 7/29 at 9 AM Pacific time; register now.

Categories: Database

Getting Started with Hashing in SQL Server

Database Journal News - Thu, 07/24/2014 - 08:01

Encryption brings data into a state that cannot be interpreted by anyone who does not have access to the decryption key, password, or certificate. Hashing maps a string of characters of arbitrary size to a fixed-length, usually shorter, value or key. Read on to learn about hashing in SQL Server and how it differs from encryption.
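
As a minimal taste of what the article covers, SQL Server exposes hashing through the built-in HASHBYTES function; for example:

-- Returns a fixed-length 32-byte hash regardless of input length
-- (the SHA2_256 algorithm requires SQL Server 2012 or later)
SELECT HASHBYTES('SHA2_256', N'Hello, world') AS hash_value;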

Categories: Database

Use the DB2 with BLU Acceleration Pattern to easily deploy a database

IBM - DB2 and Informix Articles - Thu, 07/24/2014 - 05:00
The database as a service (DBaaS) 1.1.0.8 component of the IBM PureApplication System introduced many new features. This article describes some of them, including deployment of DB2 with BLU, increasing the database resources, and backup. You also will learn how the DB2 with BLU Acceleration Pattern can make it easier and faster to create and deploy BLU-enabled datasets in DBaaS 1.1.0.8.
Categories: Database

DB2 monitoring enhancements for BLU Acceleration

IBM - DB2 and Informix Articles - Thu, 07/24/2014 - 05:00
BLU Acceleration is a collection of technologies for analytic queries that was introduced in IBM DB2 for Linux, UNIX and Windows (LUW) Version 10.5. BLU Acceleration can provide significant benefits in many areas including performance, storage savings, and overall time to value. This article provides an overview of the monitoring capabilities that support BLU Acceleration. These capabilities provide insight into the behavior of the database server and assist with tuning and problem determination activities. Extensive example queries help you start monitoring workloads that take advantage of BLU Acceleration.
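
As a flavor of those capabilities, the in-memory monitoring interfaces are table functions you query with plain SQL. A hedged sketch against MON_GET_DATABASE, which DB2 10.5 introduced (-2 means all members; the two counters contrast column-organized reads with overall data reads):

-- Compare column-organized logical reads against row-organized data reads
SELECT POOL_COL_L_READS, POOL_DATA_L_READS
FROM TABLE(MON_GET_DATABASE(-2)) AS T;
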
Categories: Database

PostgreSQL updates 9.3.5, 9.2.9, 9.1.14, 9.0.18, and 8.4.22 released

PostgreSQL News - Thu, 07/24/2014 - 01:00

The PostgreSQL Global Development Group has released an update to all supported versions of the database system, including versions 9.3.5, 9.2.9, 9.1.14, 9.0.18, and 8.4.22. This minor release fixes a number of issues discovered and reported by users over the last four months, including some data corruption issues, and is the last update for version 8.4. Users of version 9.3 will want to update at the earliest opportunity; users of version 8.4 will want to schedule an upgrade to a supported PostgreSQL version.

Among the notable issues fixed in this release are:

PostgreSQL 9.3 and pg_upgrade: Users who upgraded to version 9.3 using pg_upgrade may have an issue with transaction information which causes VACUUM to eventually fail. These users should run the script provided in the release notes to determine if their installation is affected, and then take the remedial steps outlined there.

PostgreSQL 9.3 crash recovery: Three issues which could compromise data integrity during crash recovery on master or standby servers in PostgreSQL 9.3 have been fixed.

GIN and GiST indexes: Three issues with GIN and GiST indexes, which are used for PostGIS and full-text indexing, could cause corruption or incorrect query responses. Any indexes on bit or bit varying columns should be rebuilt following the instructions in the release notes.
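
A minimal sketch of such a rebuild, assuming hypothetical index and table names; the release notes remain authoritative for identifying affected indexes:

-- Rebuild one index on a bit or bit varying column
REINDEX INDEX my_bit_varying_idx;

-- Or rebuild every index on the affected table in one pass
REINDEX TABLE my_table;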

Security during make check: The insecure socket permissions during "make check", reported in a previous security announcement, have now been fixed.

With this release, version 8.4 is now End-of-Life (EOL), per our Versioning Policy. This means that no additional updates will be released for version 8.4, and users should plan to upgrade to a later version very soon.

In addition to the above, this update release includes the following fixes, which affect multiple PostgreSQL versions:

  • Fix race condition with concurrent tuple updating
  • Prevent "could not find pathkey item to sort" planner error
  • Properly optimize subqueries with set-returning functions
  • Repair planner regression in optimizing AND/OR NULL
  • Fix planner handling of VARIADIC functions
  • Make json_populate_recordset handle nested JSON properly
  • Prevent corruption of TOAST values when creating complex types
  • Prevent "record type has not been registered" query error
  • Fix a possible crash condition with functions and rewinding cursors
  • Patch three memory leaks
  • Fix row checks for rows deleted by subtransactions
  • Change how pg_stat_activity displays sessions during PREPARE TRANSACTION
  • Prevent multixact ID corruption during VACUUM FULL
  • Fix indentation when displaying complex view definitions
  • Fix client hostname lookup in pg_hba.conf
  • Fix libpython linking on OSX
  • Avoid buffer bloat in libpq
  • Fix an issue with dumping materialized views
  • Fix pg_upgrade's handling of multixact IDs
  • Make sure that pgcrypto clears sensitive information from memory
  • Time zone updates for Crimea, Egypt, and Morocco

Four Windows-specific fixes are included in this release:

  • Prevent tablespace creation recovery errors
  • Fix detection of socket failures
  • Allow users to change parameters after startup
  • Properly quote executable names so they don't fail

A few of the issues above require post-update steps to be carried out by affected users. Please see the release notes for details.

As with other minor releases, users are not required to dump and reload their database or use pg_upgrade in order to apply this update release; you may simply shut down PostgreSQL and update its binaries. Users who have skipped multiple update releases may need to perform additional post-update steps; see the Release Notes for details.
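
After swapping in the new binaries and restarting, you can confirm which minor release is running:

-- Reports the full version string of the running server
SELECT version();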

Categories: Database, Open Source

PostgreSQL 9.4 Beta 2 Released

PostgreSQL News - Thu, 07/24/2014 - 01:00

The PostgreSQL Global Development Group has made available the second beta release of PostgreSQL 9.4. This beta contains previews of all of the features which will be available in version 9.4, plus corrections for many of the issues discovered by users who tested 9.4 Beta 1. Please download, test, and report what you find.

Among the changes made since 9.4 Beta 1 are:

  • Fix handling of two-phase commit and prepared statements for logical decoding
  • Multiple fixes for bugs in pg_recvlogical
  • Change slot handling logic for replication slots
  • Add support for BSD and e2fsprogs UUID libraries
  • Multiple jsonb bug fixes
  • Remove use_json_as_text options from json functions
  • Make json_build_* functions STABLE instead of IMMUTABLE
  • Prevent ALTER SYSTEM from changing the data directory
  • Prevent autovacuum-related crash
  • Many documentation improvements and changes

Beta 2 includes changes to pg_control and to the system catalogs. As such, users who have been testing Beta 1 will need to upgrade in order to test Beta 2. We suggest using pg_upgrade for this upgrade in order to test that as well.

For a full listing of the features in version 9.4 Beta, please see the release notes. Additional descriptions and notes on the new features are available on the 9.4 Features Wiki Page.

We depend on our community to help test the next version in order to guarantee that it is high-performance and bug-free. Please download PostgreSQL 9.4 Beta 2 and try it with your workloads and applications as soon as you can, and give feedback to the PostgreSQL developers. Features and APIs in Beta 2 will not change substantially before the final release, so it is now safe to start building applications against the new features. More information on how to test and report issues is available.

Get PostgreSQL 9.4 Beta 2, including binaries and installers for Windows, Linux and Mac, from our download page.

Full documentation of the new version is available online, and also installs with PostgreSQL.

Categories: Database, Open Source

Managing Big Data DBAs

Database Journal News - Mon, 07/21/2014 - 08:01

Technical support teams usually support familiar hardware and software configurations. Specialization in particular combinations of operating systems and database management software is common, and this allows some team members to gain in-depth experience that is extremely valuable in an enterprise IT setting. How has big data changed this paradigm?

Categories: Database

IBM InfoSphere Optim Data Growth: Setting up your first Archive

IBM - DB2 and Informix Articles - Mon, 07/21/2014 - 05:00
This article introduces the IBM InfoSphere Optim product. It explains the product's fundamental elements and shows, step by step, how to set up your first archive request, helping technical audiences get comfortable with the product quickly.
Categories: Database

Sentiment Analysis with Microsoft APS and StreamInsight

In this overview and demo, we will show you what sentiment analysis is and how to build a quick mashup that combines real-time access to multiple data sources using tools from Microsoft.

Sentiment analysis is one of the hottest topics in the Big Data space. Sentiment analysis is the process of analyzing customer comments and feedback from Facebook, Twitter, email, and more. The purpose of the analysis is to understand the overall sentiment the customer is trying to convey. The sentiment may be negative, when the customer is unhappy with a company or its product; neutral, when the customer only mentions a company or product in passing, without a good or a bad feeling; or positive, when a customer is happy or excited about a company or its product.

Traditionally, sentiment analysis was complicated because it required a mixture of very complex platforms and tools. Each component required for sentiment analysis was offered by a different company and required a large amount of custom work. The difficulty is further exacerbated by hard-to-achieve business requirements. When we discuss sentiment analysis, there are three key business requirements we see repeated:

  • Real-time access
  • Full granular data set (structured & unstructured)
  • BI and SQL front-end

Real-time Access

In the case of real-time access, business users need access to fresh data. In the world of social media, customer sentiment can change rapidly. With images and videos quickly being posted, and with re-tweet and Facebook ‘like’ capabilities, a good or bad aspect of a company’s product can go viral in minutes. Business users need the ability to analyze data as it comes in, in real time. In our overview video and demo, we show how to utilize Microsoft’s StreamInsight technology for real-time data analysis and complex event processing.

Full Granular Data Set

In the case of the full granular data set, we have seen in practice that a traditional database system can hinder development. Much of the data that comes in for sentiment analysis, such as email, is in a semi-structured or unstructured format, which means it is not easily modeled into a database; the data does not arrive in a simple row/column format. Thus we utilize our Big Data technology that is designed for this type of data: HDInsight (Hadoop). HDInsight is essentially the Hortonworks Data Platform running on Windows. In our case, we utilize HDInsight to land all of the data, in its raw original format, in the distributed file system HDFS. This allows us to ingest any kind of data, regardless of structure, and store it online for further analysis at low cost. The Hadoop software is open source and readily available.

BI and SQL Front-End

The most important area in delivering sentiment analysis to the business is access: making sure we are able to provide the data, in real time and at high fidelity, within the tools that our business users know and love. Previously, when our customers were doing sentiment analysis on Hadoop systems, BI and SQL access was not available. This was not because the tools could not integrate with Hadoop systems; it was because they could not scale or offer the same level of functionality. Some BI users have chosen Hive ODBC in Hadoop, which many claim to be slow and ‘buggy’. Instead, here we utilize one of our flagship technologies: PolyBase. With PolyBase we expose the data in Hadoop, and in relational SQL Server, with one T-SQL query. This means users can use BI tools like Excel, SSAS, or other third-party tools. They can then utilize PolyBase within the Analytics Platform System (APS) to query data in Hadoop, in Parallel Data Warehouse (SQL Server), or mashed up from both systems, as sketched below.
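
To make that concrete, here is a hedged sketch in the APS-era PolyBase dialect: an external table is declared over files sitting in HDFS, after which ordinary T-SQL can join it against relational tables. All object names, the HDFS location, and the file format are hypothetical.

-- Declare an external table over delimited tweet data in HDFS
CREATE EXTERNAL TABLE dbo.TweetSentiment (
    TweetId        BIGINT,
    CampaignName   NVARCHAR(100),
    SentimentScore FLOAT
)
WITH (
    LOCATION = 'hdfs://namenode:8020/data/sentiment/',
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- One T-SQL query mashing up Hadoop data with a relational campaign table
SELECT c.CampaignName, AVG(t.SentimentScore) AS AvgSentiment
FROM dbo.TweetSentiment AS t
JOIN dbo.Campaigns AS c
    ON c.CampaignName = t.CampaignName
GROUP BY c.CampaignName;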

How It Works

Now we will show you how to use the tools from the SQL Server data platform to achieve sentiment analysis. This will allow you to quickly deploy and meet all three business requirements through a set of tools and platforms that are very easy to use, fully integrated, and ‘just work’ together.

Let’s get started with the first video (~5 minutes), where we present sentiment analysis using Microsoft technologies. We show you how sentiment analysis works and how the Microsoft products fit. We then follow up by discussing in detail the architecture surrounding StreamInsight, HDInsight, and the Analytics Platform System.

Watch the overview video:

Demo

In the second video (~7 minutes), we show you sentiment analysis in action. The demo includes a full sentiment-analysis engine running in real time against Twitter data, along with a web dashboard. We then stream Twitter data to both HDInsight and Parallel Data Warehouse. Finally, we end the demo by showcasing PolyBase, our flagship technology. With PolyBase we can do data mashups combining data from relational and non-relational systems. We use PolyBase to write standard T-SQL queries against this data to determine tweet analytics and how social sentiment is faring for our marketing campaigns and products.

Watch the demo video:

Categories: Database

Exadata: When A Smart Scan Isn't

Database Journal News - Thu, 07/17/2014 - 08:01

Exadata is known far and wide for Smart Scans, but sometimes Oracle can do better without one. Read on to see how to know when Oracle decided not to continue with a Smart Scan.

Categories: Database

Deploy and explore the DB2 10.5 pureScale Feature with WebSphere Commerce V7

IBM - DB2 and Informix Articles - Thu, 07/17/2014 - 05:00
The IBM DB2 pureScale Feature for Advanced Enterprise Server Edition is designed for continuous availability and tolerance of both planned maintenance and unplanned accidental component failure. This article describes how to deploy the DB2 10.5 pureScale Feature with IBM WebSphere Commerce V7 for both new and existing WebSphere Commerce applications, including the instance setup and application configuration from the Admin Console of WebSphere Application Server.
Categories: Database

DB2 monitoring: Migrate from snapshot monitor interfaces to in-memory metrics monitor interfaces

IBM - DB2 and Informix Articles - Thu, 07/17/2014 - 05:00
This article helps you migrate from the snapshot monitor interfaces to the in-memory metrics monitor interfaces that were first introduced in DB2 for Linux, UNIX, and Windows Version 9.7.
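
As a flavor of the shift, where you once ran the snapshot command GET SNAPSHOT FOR DATABASE, the in-memory metrics interfaces are ordinary table functions you query with SQL. A minimal sketch (NULL means all workloads, -2 means all members):

-- In-memory metrics interface, available since DB2 9.7
SELECT WORKLOAD_NAME, TOTAL_CPU_TIME
FROM TABLE(MON_GET_WORKLOAD(NULL, -2)) AS T;
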
Categories: Database

Microsoft named a Leader in Agile Business Intelligence by Forrester

We are pleased to see Microsoft acknowledged by Forrester Research as a Leader in The Forrester Wave™: Agile Business Intelligence Platforms, Q3 2014.  

We are happy to see what we believe to be an affirmation of our approach and of the strength of our technologies. Our placement in this report reflects both high scores from our clients for product vision and client feedback collected as part of the customer survey. Forrester notes that “Microsoft received high client feedback scores for its agile, business user self-service and [advanced data visualization] ADV functionality. Clients also gave Microsoft BI a high score for its product vision”. This feedback from our customers is especially gratifying to see.

Microsoft is delivering on our vision of making business intelligence more agile and accessible through the tools that people use every day. With the accessibility of Excel and the recent release of Power BI for Office 365, we aim to lower the barrier of entry for users and reduce the complexity of deploying business intelligence solutions for IT. Using Microsoft’s business intelligence solution, companies such as MediaCom have reduced time to reporting from weeks to days, Carnegie Mellon is using data to reduce energy consumption by 30%, and Helse Vest is combining hospital data to visualize trends in real time.

We appreciate the recognition of our software in this report. Above all, we value our customers’ voice in helping shape and validate this approach.

Categories: Database

Build a simple web app for student math drills using the Bluemix SQLDB service

IBM - DB2 and Informix Articles - Tue, 07/15/2014 - 05:00
Learn how to create a Node.js application that relies on a managed database service, SQLDB, to handle the demanding web and transactional workloads for your application.
Categories: Database

Importing Into MySQL from Other Databases

Database Journal News - Mon, 07/14/2014 - 08:01

Importing into MySQL from databases of different types is challenging because vendors have their own proprietary tools and SQL extensions. Rob Gravelle presents some software products that can abstract each vendor's particular language so that data may be transferred between them in a seamless process.

Categories: Database

New: ASP.NET Session State Provider for SQL Server In-Memory OLTP

Microsoft SQL Server 2014 brings new performance and scalability gains by introducing In-Memory OLTP. In-Memory OLTP provides tables and indexes that are optimized for in-memory processing. Transactions execute under lock-free algorithms to provide linear scalability, and Transact-SQL stored procedures can be compiled into native machine code for maximum processing efficiency.
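
A minimal sketch of those two building blocks, in SQL Server 2014 syntax; the table and procedure below are hypothetical illustrations, not the provider’s actual schema:

-- A memory-optimized table; string index keys require a BIN2 collation in 2014
CREATE TABLE dbo.Sessions (
    SessionId   NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    SessionData VARBINARY(7000) NULL,
    ExpiresAt   DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- A natively compiled stored procedure reading that table
CREATE PROCEDURE dbo.usp_GetSession @SessionId NVARCHAR(88)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT SessionData
    FROM dbo.Sessions
    WHERE SessionId = @SessionId;
END;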

Working with SQL Server customers on In-Memory OLTP engagements, we saw a common pattern emerge around the desire for increased performance and scalability when using ASP.NET session state. Some early adopters modified their SQL Server objects to take advantage of In-Memory OLTP for ASP.NET session state, with great success. To learn more, read the bwin.party case study “Gaming site can scale to 250,000 requests per second and improve player experience”. To further enhance this scenario, we have created a new provider that makes it easier for customers to take advantage of SQL Server In-Memory OLTP when using ASP.NET session state.

This ASP.NET session state provider is fully optimized for In-Memory OLTP: it calls natively compiled Transact-SQL stored procedures and creates all tables as memory-optimized. The functionality of the provider was tested both internally and by external customers. The results showed that the implementation provided significant gains at scale levels that would previously have bottlenecked on the database.

NOTE: While some testing has been done before the release, we recommend executing your own testing and validation to understand how this implementation behaves in your specific environment.

Getting Started

Setting up the provider requires two steps: installing the provider into the ASP.NET application, and creating the In-Memory OLTP database and objects in Microsoft SQL Server 2014.

The provider and scripts can be accessed in two ways:

1. The package has been uploaded to NuGet: https://www.nuget.org/packages/Microsoft.Web.SessionState.SqlInMemory/

2. The source code is also accessible through CodePlex: https://msftdbprodsamples.codeplex.com/releases/view/125282

NuGet Installation

Download the ASP.NET Session State Provider for SQL Server In-Memory OLTP from the NuGet gallery by running the following command from the Visual Studio Package Manager Console:

PM> Install-Package Microsoft.Web.SessionState.SqlInMemory

More information about the NuGet package can be found here:

https://www.nuget.org/packages/Microsoft.Web.SessionState.SqlInMemory/

Installing the package will do the following things:

  • Adds references to the ASP.NET Session State Provider assembly.
  • Adds a custom provider named "SqlInMemoryProvider" to the web.config file; update the connectionString attribute for your environment:
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.web>
        <sessionState mode="Custom" customProvider="SqlInMemoryProvider">
          <providers>
            <add name="SqlInMemoryProvider"
                 type="Microsoft.Web.SessionState.SqlInMemoryProvider"
                 connectionString="data source=sqlserver;initial catalog=ASPStateInMemory;User ID=user;Password=password;" />
          </providers>
        </sessionState>
      </system.web>
    </configuration>
  • Adds an ASPStateInMemory.sql file that includes the script for creating the SQL Server database configured to support In-Memory OLTP.
Setting up In-Memory OLTP Database and objects

Open the T-SQL script file "ASPStateInMemory.sql" and update the 'CREATE DATABASE' statement, replacing the 'FILENAME' attributes with paths that exist on your SQL Server machine, where the memory-optimized filegroup should live. For further considerations on placement of this filegroup, see the Books Online section Creating and Managing Storage for Memory-Optimized Objects.

CREATE DATABASE [ASPStateInMemory]
ON PRIMARY (
  NAME = ASPStateInMemory, FILENAME = 'D:\SQL\data\ASPStateInMemory_data.mdf'
),
FILEGROUP ASPStateInMemory_xtp_fg CONTAINS MEMORY_OPTIMIZED_DATA (
  NAME = ASPStateInMemory_xtp, FILENAME = 'D:\SQL\data\ASPStateInMemory_xtp'
)
GO

After updating the 'FILENAME' attributes, run the entire script for creating the In-Memory tables and the natively compiled stored procedures.

Additionally, create a periodic task in SQL Server to run the stored procedure 'dbo.DeleteExpiredSessions'. This procedure removes expired sessions and frees up the memory they consumed. One way to schedule it is sketched below.
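
A minimal SQL Server Agent sketch, assuming the database name from the script above; the job and schedule names are hypothetical, and the ten-minute cadence is only an example:

USE msdb;
GO
-- Hypothetical Agent job that purges expired sessions every 10 minutes
EXEC dbo.sp_add_job @job_name = N'Purge expired ASP.NET sessions';
EXEC dbo.sp_add_jobstep @job_name = N'Purge expired ASP.NET sessions',
    @step_name = N'Run cleanup',
    @subsystem = N'TSQL',
    @database_name = N'ASPStateInMemory',
    @command = N'EXEC dbo.DeleteExpiredSessions;';
EXEC dbo.sp_add_schedule @schedule_name = N'Every 10 minutes',
    @freq_type = 4,                -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,         -- repeat on a minutes interval
    @freq_subday_interval = 10;
EXEC dbo.sp_attach_schedule @job_name = N'Purge expired ASP.NET sessions',
    @schedule_name = N'Every 10 minutes';
EXEC dbo.sp_add_jobserver @job_name = N'Purge expired ASP.NET sessions';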

NOTE: The memory-optimized tables are created with a durability of SCHEMA_ONLY to optimize for performance. If session data durability is required, then change the 'DURABILITY' attribute from 'SCHEMA_ONLY' to 'SCHEMA_AND_DATA'. More information can be found in Books Online sections Defining Durability for Memory-Optimized Objects and Durability for Memory-Optimized Tables.

Conclusion

SQL Server In-Memory OLTP has been shown to greatly improve the performance of ASP.NET session state applications. This provider is a packaged solution that lets customers easily optimize ASP.NET web farms to take advantage of SQL Server In-Memory OLTP.

For further considerations on session state with In-Memory OLTP, along with other solution patterns which have shown success with SQL Server In-Memory OLTP, please reference the whitepaper: In-Memory OLTP – Common Workload Patterns and Migration Considerations.  

Download the Microsoft SQL Server 2014 Evaluation and see how in-memory processing built into SQL Server 2014 delivers breakthrough performance.

Categories: Database

Increase DSS Performance by 100x and OLTP by 2x: Switch to Oracle 12c

Database Journal News - Thu, 07/10/2014 - 15:07

The next big thing after Exadata is the new Oracle 12c Database In-Memory feature, which will dramatically improve database performance for analytical queries and OLTP without application re-coding.
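
For context, enabling the feature on a table is declarative. A hedged sketch in Oracle 12c syntax, with a hypothetical table name; it assumes the instance has a nonzero INMEMORY_SIZE so the in-memory column store has memory allocated:

-- Populate a table into the In-Memory column store
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Remove it again
ALTER TABLE sales NO INMEMORY;
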

Categories: Database