Software Development News: .NET, Java, PHP, Ruby, Agile, Databases, SOA, JavaScript, Open Source


Database

How to Maintain Enterprise-Level Service with Oracle Standard Edition

Database Journal News - Wed, 11/26/2014 - 09:01

For business executives looking to save on information technology costs, it may seem like a win-win scenario in Oracle database environments to transition from Oracle Enterprise to Standard Edition. However, executives making this decision may not always be aware of or truly understand how transitioning from Enterprise to Standard Edition will affect the delivery of critical IT services, ultimately creating the potential to impact end user efficiency and revenue.

Categories: Database

Oracle: Can Adaptive Cursor Sharing Plans Depend On Execution Order?

Database Journal News - Mon, 11/24/2014 - 09:01

Adaptive cursor sharing is a great feature that can tailor execution plans to bind variable values.  Read on to see how it behaves when query order is reversed and if it chooses 'bad' execution plans.

Categories: Database

Using the IBM InfoSphere Guardium REST API

IBM - DB2 and Informix Articles - Mon, 11/24/2014 - 06:00
Organizations that use InfoSphere Guardium for data security and compliance can take advantage of a rich set of APIs to automate processes and maintain the system in a more efficient manner. As of InfoSphere Guardium 9.1, the Guardium API is exposed to external systems as online RESTful web services, which provide organizations with a modern interface to expose Guardium capabilities in a Web portal or via the Cloud.
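The specific Guardium resource names and authentication flow are covered in the article itself; purely as a hedged sketch of what an external RESTful caller looks like, the Python snippet below builds (but does not send) a GET request. The host, port, resource path, and token are placeholders, not documented Guardium values.

```python
# Hedged sketch of a RESTful call to a Guardium-style appliance. The host,
# port, resource path, and token are placeholders, NOT documented values.
import urllib.request

base_url = "https://guardium.example.com:8443/restAPI"  # placeholder host/path
token = "EXAMPLE_OAUTH_TOKEN"  # in practice, obtained via the appliance's auth flow

# Build (but do not send) a GET request for a hypothetical resource.
req = urllib.request.Request(
    url=base_url + "/group",  # hypothetical resource name
    method="GET",
    headers={
        "Authorization": "Bearer " + token,
        "Accept": "application/json",
    },
)
print(req.full_url)
# urllib.request.urlopen(req) would perform the call against a real appliance.
```

The value of the REST interface, as the article notes, is exactly this: any language or portal that can issue an HTTP request with a token can automate Guardium tasks.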
Categories: Database

Enterprises Gain New Power to Optimize Project Portfolio Investment with New Version of Primavera Portfolio Management

Oracle Database News - Thu, 11/20/2014 - 16:00
New release helps users expedite project portfolio decisions, boost productivity, and improve transparency

Redwood Shores, Calif. – November 20, 2014

News Summary

When it comes to making critical enterprise investment decisions, the stakes have never been higher. Business leaders must choose carefully from an ever-expanding set of initiatives, and uninformed decisions about which projects to undertake can adversely affect operations, competitiveness, and profitability. Oracle’s Primavera Portfolio Management 9.1 features new workflow and governance capabilities as well as expanded browser support to provide increased levels of flexibility, insight, and productivity for project portfolio decision-making. The new release gives organizations additional power to prioritize and optimize their enterprise investment portfolio and reduce wasteful and redundant spending.

News Facts

  • Oracle today announced the release of Primavera Portfolio Management 9.1, which features new workflow and governance capabilities that help organizations make more informed and timely decisions about strategic enterprise investments, optimize resource use, and reduce waste.
  • Expanded workflows and early alert triggers bring new levels of automation to project portfolio management, enabling organizations to expedite and improve the precision of strategic investment decisions, boost resource productivity, and increase transparency.
  • Users can conduct searches based on multiple criteria, such as status or start date, and act on the search results—expanding insight into projects and investments and improving governance to optimize investment and resource use.
  • Organizations can enhance understanding of governance processes with new links to multiple workflow features, such as workflow diagrams and instance reports.
  • Managers can address approvals, make decisions more quickly, and avoid project delays thanks to e-mail notifications that now contain links to specific tasks.
  • Expanded support for internet browsers makes it easier for users to access the solution. Primavera Portfolio Management 9.1 supports Microsoft’s Internet Explorer versions 8, 9, 10, and 11.
  • A new planning and control process enables organizations to clarify key strategic enterprise investment objectives by presenting them in measurable ways, propose initiatives that align with strategies and missions, and prioritize and select investments with strong business cases to justify action.
Enhancements also enable proactive portfolio management with the ability to track activities, review portfolio performance in real time, compare past data to identify gaps and potential problems, and adjust course—increasing, decreasing, or withdrawing investment funding where appropriate.

Supporting Quotes

“Every organization faces critical enterprise investment decisions—no company would function or grow without them,” said Mike Sicilia, senior vice president and general manager, Oracle Primavera Global Business Unit. “Oracle’s Primavera Portfolio Management 9.1 provides organizations with a seamless decision-making process through its unmatched configurability and flexibility, enabling organizations, in both the private and public sectors, to deliver measurable results through an easy-to-use solution. It helps organizations keep a laser focus on where they need to invest while remaining open to alternative opportunities that might accelerate those initiatives, reduce their cost, and optimize resource use enterprisewide.”

Supporting Resources

  • Oracle Primavera
  • Oracle Primavera Portfolio Management Release 9.1
  • Oracle Primavera on Facebook
  • Oracle Primavera on Twitter

 

About Oracle

Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Contact Info

Valerie Beaudett
Oracle
+1.650.400.7833
Valerie.beaudett@oracle.com

Mary Tobin
O’Keeffe & Company
+1.503.658.7396

mtobin@okco.com

Categories: Database, Vendor

Lead and Lag Functions in SQL Server 2012

Database Journal News - Thu, 11/20/2014 - 09:01

Arshad Ali discusses how to use CTE and the ranking function to access or query data from previous or subsequent rows. He also shows you how to leverage the LEAD and LAG analytics functions to achieve the same result without writing a self-join query using CTE and ranking function.
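The article works in T-SQL on SQL Server 2012, but LAG and LEAD are standard SQL window functions. As a quick, self-contained illustration of the idea, the snippet below runs them through Python's sqlite3 module (this assumes a bundled SQLite of 3.25 or newer, which added window functions; the table and column names are invented).

```python
# Illustrative LAG/LEAD demo via SQLite window functions (SQLite >= 3.25).
# Table and column names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales(day TEXT, amount INT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("2014-11-17", 100), ("2014-11-18", 150), ("2014-11-19", 120)],
)

# LAG/LEAD fetch the previous/next row's value without a self-join.
rows = conn.execute("""
    SELECT day,
           amount,
           LAG(amount)  OVER (ORDER BY day) AS prev_amount,
           LEAD(amount) OVER (ORDER BY day) AS next_amount
    FROM sales
    ORDER BY day
""").fetchall()

for row in rows:
    print(row)
```

The first row's LAG and the last row's LEAD come back as NULL, which is exactly the edge case the self-join-with-ranking approach has to handle by hand.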

Categories: Database

PostgreSQL 9.4 RC1 Released

PostgreSQL News - Thu, 11/20/2014 - 01:00

The PostgreSQL Global Development Group has released 9.4 RC 1, the first release candidate for the next version of PostgreSQL. This release should be identical to the final version of PostgreSQL 9.4, excepting any fixes for bugs found in the next two weeks. Please download, test, and report what you find.

For a full listing of the features in version 9.4, please see the release notes. Additional descriptions and notes on the new features are available on the 9.4 Features Wiki Page.

We depend on our community to help test the next version in order to guarantee that it is high-performance and bug-free. Please download PostgreSQL 9.4 RC 1 and try it with your workloads and applications as soon as you can, and give feedback to the PostgreSQL developers. Features and APIs in this release candidate should be identical to 9.4.0, allowing you to build and test your applications against it. More information on how to test and report issues is available on the PostgreSQL wiki.

Get the PostgreSQL 9.4 RC 1, including binaries and installers for Windows, Linux and Mac from our download page.

Full documentation of the new version is available online, and also installs with PostgreSQL.

Categories: Database, Open Source

Azure HDInsight Adds Deeper Tooling Experience in Visual Studio

To allow developers in Visual Studio to more easily incorporate the benefits of “big data” with their custom applications, Microsoft is adding a deeper tooling experience for HDInsight in Visual Studio in the most recent version of the Azure SDK. This extension to Visual Studio helps developers to visualize their Hadoop clusters, tables and associated storage in familiar and powerful tools. Developers can now create and submit ad hoc Hive queries for HDInsight directly against a cluster from within Visual Studio, or build a Hive application that is managed like any other Visual Studio project.

Download the Azure SDK now for VS 2013 | VS 2012 | VS 2015 Preview.

Integration of HDInsight objects into the “Server Explorer” brings your Big Data assets onto the same page as other cloud services under Azure. This allows for quick and simple exploration of clusters, Hive tables and their schemas, down to querying the first 100 rows of a table.  This helps you to quickly understand the shape of the data you are working with in Visual Studio.

Also, there is tooling to create Hive queries and submit them as jobs. Use the context menu against a Hadoop cluster to immediately begin writing Hive query scripts. For example, you can write a simple query against a Hive table with geographic information that counts entries per country and sorts the results by country. The Job Browser tool helps you visualize job submissions and status. Double-click on any job to get a summary and details in the Hive Job Summary window.

You can also navigate to any Azure Blob container and open it to work with the files contained there. The backing store is associated with the Hadoop cluster during cluster creation in the Azure dashboard. Management of the Hadoop cluster is still performed in the same Azure dashboard.

For more complex script development and lifecycle management, you can create Hive projects within Visual Studio. In the new project dialog you will find a new HDInsight Template category. A helpful starting point is the Hive Sample project type. This project is pre-populated with a more complex Hive query and sample data for the case of processing web server logs.

To get started visit the Azure HDInsight page to learn about Hadoop features on Azure. 

Categories: Database

Oracle Helps Midsize Organizations Quickly and Easily Migrate to Oracle Sales Cloud

Oracle Database News - Wed, 11/19/2014 - 14:00
New Oracle partner solutions streamline CRM migration for midsize organizations by reducing complexity, cost, and business downtime

Redwood Shores, Calif. – November 19, 2014

News Summary

The rise of the always-connected customer, third-party data providers, and data-driven marketing has significantly increased the amount of customer data available to sales teams. To capitalize on the opportunities this wealth of data presents, growing midsize organizations often need to migrate to more advanced customer relationship management (CRM) solutions that can use the data to increase sales and optimize efficiency through enhanced mobility, analytics, partner relationship management, and industry-specific capabilities. Oracle’s new Oracle Accelerate for Oracle Sales Cloud simplifies and streamlines such migrations, enabling partners to help customers quickly and easily transform their sales operations with Oracle Sales Cloud.

News Facts

  • To simplify and streamline CRM migration, Oracle has introduced Oracle Accelerate for Oracle Sales Cloud, a new solution that enables partners to minimize business downtime for customers migrating to Oracle Sales Cloud.
  • With Oracle Accelerate for Oracle Sales Cloud, Oracle PartnerNetwork (OPN) partners can help midsize customers accelerate transitions from incumbent CRM providers and reduce complexity in the migration process, while enhancing data integrity.
  • The program offers rapid implementation tools, templates, and process flows to reduce time to productivity by simplifying the migration to Oracle Sales Cloud. In some cases, customers can be migrated to Oracle Sales Cloud in just a few weeks.
  • With the transition complete, midsize organizations can benefit from the enhanced mobility, analytics, partner relationship management, and industry-specific solutions delivered by the Oracle Sales Cloud.
  • By providing a simple, intuitive, insight-driven, and mobile-enabled solution, Oracle Sales Cloud equips sales teams with the processes, tools, and resources they need to help increase sales, reporting capabilities, and customer understanding.
  • Additionally, customers can leverage powerful integrations with other best-of-breed customer experience (CX) applications, including Oracle CX Cloud’s Oracle Marketing Cloud, Oracle Social Cloud, Oracle Service Cloud, and Oracle Configure, Price, and Quote Cloud (Oracle CPQ Cloud) solutions.
  • Part of Oracle Applications Cloud, Oracle CX Cloud applications empower organizations to improve experiences, enhance loyalty, differentiate their brands, and drive measurable results by creating consistent, connected, and personalized brand experiences across all channels and devices.
  • Sixteen Oracle partners, including Hitachi Consulting, BPI OnDemand, and Enigen UK, now offer Oracle Accelerate for Oracle Sales Cloud, with more partners expected to offer the solution over the coming months.
Supporting Quotes

“We are excited to see so many partners embrace Oracle Accelerate for Oracle Sales Cloud. Clearly the market is looking for fast ways to migrate from legacy systems, and our partners are embracing this trend,” said Steve Cox, vice president, midsize applications business, Oracle. “These fast migrations help our midsize customers rapidly transform their sales organizations with a new set of capabilities designed to enhance sales team efficiency and help drive revenues.”

“Our customers are asking for much more than a pipeline tool. They demand real productivity gains,” said Fred Wilkinson, managing director, BPI OnDemand. “Thanks to Oracle Sales Cloud, we now have access to insights anywhere, anytime, and on any device, which has enabled us to improve sales productivity.”

“We’ve created accelerators right across Oracle’s customer experience suite to help midsize enterprises simply and easily move to Oracle Sales Cloud, Oracle Marketing Cloud, and Oracle Service Cloud from other cloud-based applications,” said Alex Love, managing director, Enigen UK. “We’ve tested them in our own business and designed a blueprint to ensure every project is quick and cost effective.”

Supporting Resources

  • Oracle Accelerate
  • Oracle Customer Experience Applications
  • Oracle PartnerNetwork
  • Oracle Sales Cloud
  • Oracle Sales Cloud Blog
  • Oracle Sales Cloud on Facebook, YouTube, and Twitter
  • Connect with Oracle Accelerate on Facebook, Twitter, and LinkedIn
  • Read the Oracle Midsize Blog

About Oracle

Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NYSE:ORCL), visit oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

# # #

Chaundera Wolfe
Oracle
+1.650.506.9857

chaundera.wolfe@oracle.com

Simon Jones
Blanc & Otus
+1.415.856.5155
sjones@blancandotus.com

Categories: Database, Vendor

Oracle Advances Data Integration Portfolio with Major Enhancements to Oracle GoldenGate 12c

Oracle Database News - Tue, 11/18/2014 - 14:00

Industry-leading real-time data integration solution enables organizations to support emerging technology trends including cloud, big data, and real-time analytics

Redwood Shores, Calif. – Nov. 18, 2014

News Summary

Data volumes are growing at unprecedented rates, making it more complex for organizations to effectively use that information to make strategic business decisions and gain a competitive advantage. To drive greater value from their data and take advantage of emerging technology trends such as cloud computing and big data, companies must have the ability to move data across highly diverse IT environments and firewalls with very low latency. With Oracle GoldenGate 12c, customers can implement real-time data integration and transactional data replication between on-premises and cloud environments and across a broader set of heterogeneous platforms, achieving faster time to value and a greater return from their data assets.

News Facts

  • Reinforcing its leadership in the data integration and replication space, Oracle announced an enhanced version of Oracle GoldenGate 12c with expanded support for heterogeneous databases and big data solutions, improved manageability, and support for hybrid cloud environments.
  • Oracle GoldenGate 12c provides best-of-breed, real-time data integration and heterogeneous database replication. New features include:
  • Migration utility for Oracle Streams: A new migration utility, Streams2OGG, helps Oracle Streams customers move to Oracle GoldenGate and leverage its latest features, such as integrated capture and delivery processes and advanced conflict management.
  • Support for IBM Informix: Oracle GoldenGate 12c now supports real-time data capture and delivery for the latest Informix database versions on all major platforms. Support has also been extended to Oracle GoldenGate Veridata so IBM Informix customers can integrate and replicate high volumes of real-time data throughout their heterogeneous business environments.
  • Extended support for Microsoft SQL Server and MySQL: The new release adds support for real-time, log-based capture and delivery for Microsoft SQL Server 2012 and 2014 as well as MySQL Community Edition databases.
  • SOCKS5 compliance: Oracle GoldenGate 12c now leverages customers’ SOCKS compliance settings for data transfer, enabling customers to replicate between on-premises and cloud environments without keeping an extra VPN connection open.
  • Support for big data: Oracle GoldenGate Adapter for Java enables integration with Oracle NoSQL, Apache Hadoop, Apache HDFS, Apache HBase, Apache Storm, Apache Flume, Apache Kafka, and others, and allows real-time, noninvasive data streaming into big data targets to give customers new insights into business and improve the customer experience.
  • Out-of-sync data repair: In addition to comparing heterogeneous databases and reporting data discrepancies without interrupting business operations, Oracle GoldenGate Veridata 12c now provides data repair and revalidation capabilities for out-of-sync data.
  • Data capture from Oracle Active Data Guard: Oracle Active Data Guard customers who want to remove any replication impact on their production environment can use Oracle GoldenGate 12c on their standby systems to capture data in real time.
  • Simplified application upgrades: The new release simplifies the upgrade process for customers leveraging the Oracle Database Edition-Based Redefinition feature by coordinating the upgrades and bringing the target database to the same edition-based version.
  • Improved management and monitoring: Customers using the new release also gain the ability to start and stop processes, edit parameter files, collect information about operations, and diagnose issues easily with Oracle Enterprise Manager Plug-In 12.1.3, part of Oracle Management Pack for Oracle GoldenGate 12c.
  • Oracle GoldenGate is part of the industry-leading Oracle Data Integration portfolio, which includes real-time and bulk data movement, transformations, data governance, data virtualization, and data quality, and enables organizations to easily keep pace with new data-oriented technology trends such as cloud computing, big data analytics, real-time business intelligence, and continuous data availability.
  • Oracle Data Integration has also gained notable industry recognition, with Gartner, Inc. naming Oracle a Leader in its July 24, 2014 Magic Quadrant for Data Integration Tools and Database Trends and Applications naming Oracle GoldenGate 12c a winner in the “Best Data Replication Solution” category of its 2014 Readers’ Choice Awards.
Oracle Data Integration is part of Oracle Fusion Middleware, the leading business innovation platform for the enterprise and the cloud.

Supporting Quotes

“Real-time data integration is a critical enabling technology for organizations that want to get more and faster value from their data and connect systems deployed in diverse and hybrid cloud environments,” said Jeff Pollock, vice president of product management at Oracle. “By continuously broadening Oracle GoldenGate’s support for non-Oracle databases, deepening integration with Oracle technologies, and streamlining cloud integration capabilities, we’re helping customers to easily and cost-effectively achieve pervasive, continuous access to timely data.”

“Rakuten welcomes the release of the new Oracle GoldenGate. It adds key solutions to the Oracle GoldenGate product family that expand heterogeneous data synchronization support,” said Yuji Takahashi, group manager, EC Database Administration Group, Rakuten Ichiba Development Department at Rakuten, Inc. “We have used Oracle GoldenGate solutions with Oracle Exadata Database Machine, and we strongly expect continued product enhancement and improvement to support our further business requirements.”

“Before embarking on a big data or cloud journey, enterprises need to ensure that they are not creating any more data silos – disjointed analytics leads to partial insights,” said Surya Mukherjee, senior analyst, Information Management at Ovum. “Oracle GoldenGate 12c is a step in the right direction; its new features allow real-time data integration across their diverse data platforms, so that enterprises can build strategies for their data and application universe, whether it be in the cloud, on-premises, structured, or semi-structured.”

“In recent years, we have been collaborating with Oracle — within the framework of CERN openlab (www.cern.ch/openlab) — on the development of Oracle GoldenGate. This work has primarily focused on enabling replication within our challenging, workload-intensive and dynamic environment. In addition to easier deployment and administration, Oracle GoldenGate has proven to be more easily scalable than Oracle Streams. Its heterogeneity and its in-database monitoring and reporting functions are also beneficial. With Oracle GoldenGate 12c, we have seen a performance increase of over 50 percent compared to Oracle Streams 11g Release 2 — using our production workload on the same hardware. We are, therefore, confident about its advantages over the technology we were using in the past. Finally, we were able to complete our migration from Oracle Streams deployments to Oracle GoldenGate without any issues using the migration tool provided by Oracle,” said Lorena Lobato, CERN openlab researcher, Database Services Group, CERN IT department.

Supporting Resources

  • Oracle GoldenGate 12c
  • Register for the Oracle GoldenGate 12c for the Cloud Webcast
  • Oracle GoldenGate 12c New Features Whitepaper
  • Oracle Streams to Oracle GoldenGate Migration
  • Replication Technologies at Worldwide LHC Computing Grid (WLCG)
  • Oracle GoldenGate 12c Whitepaper
  • Oracle Data Integration
  • Oracle Fusion Middleware
  • Watch the Big Data Integration Webcast
  • Connect with Oracle Data Integration via Blog, Facebook, Twitter, and LinkedIn

About the Magic Quadrant

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle

Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NYSE:ORCL), visit oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Contact Info

Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com

Drew Smith
Blanc & Otus
+1.415.856.5127
drew.smith@blancandotus.com

Categories: Database, Vendor

Azure HDInsight Clusters Allow Custom Installation of Spark Using Script Action

Apache Spark is a popular open source framework for distributed cluster computing. Spark has been gaining popularity for its ability to handle both batch and stream processing, as well as supporting in-memory and conventional disk processing. Starting today, Azure HDInsight makes it possible to install Spark, as well as other Hadoop sub-projects, on its clusters. This is delivered through a new customization feature called Script Action, which lets you experiment with and deploy Hadoop projects on HDInsight clusters in ways that were not possible before. We are making this easier specifically for Spark and R by documenting the process to install these modules.

To do this, you create an HDInsight cluster with a Spark Script Action. Script Action allows users to specify PowerShell scripts that will be executed on cluster nodes during cluster setup. One of the sample scripts released with the preview is a Script Action that installs Spark. During the preview, the feature is available through PowerShell, so you will need to run PowerShell scripts to create your Spark cluster. Below is a snippet of PowerShell code in which “spark-installer-v01.ps1” is the Script Action that installs Spark on HDInsight:

# Build the cluster config, attach default storage and the Spark Script
# Action, then create the cluster. Each pipe ends its line so PowerShell
# continues the pipeline; backticks continue a single command.
New-AzureHDInsightClusterConfig -ClusterSizeInNodes $clusterNodes |
    Set-AzureHDInsightDefaultStorage -StorageAccountName $storageAccountName `
        -StorageAccountKey $storageAccountKey -StorageContainerName $containerName |
    Add-AzureHDInsightScriptAction -Name "Install Spark" `
        -ClusterRoleCollection HeadNode,DataNode `
        -Uri https://hdiconfigactions.blob.core.windows.net/sparkconfigactionv01/spark-installer-v01.ps1 |
    New-AzureHDInsightCluster -Name $clusterName -Location $location

Once the cluster is provisioned, it will have the Spark component installed. You can RDP into the cluster and use the Spark shell:

  • In the Hadoop command-line window, change directory to C:\apps\dist\spark-1.0.2
  • Run the following command to start the Spark shell.

.\bin\spark-shell

  • At the Scala prompt, enter the following Spark code to count words in a sample file stored in the Azure Blob storage account:

// Load a sample text file from the cluster's default Azure Blob storage
val file = sc.textFile("example/data/gutenberg/davinci.txt")
// Split each line on spaces and count the occurrences of each word
val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
// Collect the results to the driver and print them
counts.collect().foreach(println)

Read more on installing and using Spark on HDInsight here:

Read more on Script Action to make other customizations here:

For more information on Azure HDInsight:

Categories: Database

IT Budget Planning for Big Data

Database Journal News - Mon, 11/17/2014 - 09:01

Big data software, hardware, application suites, business analytics solutions ... suddenly, it seems, IT enterprises are deluged with vendor offerings that solve problems they didn't know they had. As you dive into what will most likely be your largest IT project of the year, ensure that you have planned and budgeted for the following items that are unique to big data implementations.

Categories: Database

APS Best Practice: How to Optimize Query Performance by Minimizing Data Movement

by Rob Farley, LobsterPot Solutions

The Analytics Platform System, with its MPP SQL Server engine (SQL Server Parallel Data Warehouse), can deliver performance and scalability for analytics workloads that you may not have expected from SQL Server. But there are key differences between working with SQL Server PDW and SQL Server Enterprise Edition that one should be aware of in order to take full advantage of the SQL Server PDW capabilities. One of the most important considerations when tuning queries in Microsoft SQL Server Parallel Data Warehouse is the minimisation of data movement. This post shows a useful technique for identifying redundant joins through additional predicates that simulate check constraints.

Microsoft’s PDW, part of the Analytics Platform System (APS), offers scale-out technology for data warehouses. This involves spreading data across a number of SQL Server nodes and distributions, such that systems can host up to many petabytes of data. To achieve this, queries which use data from multiple distributions to satisfy joins must leverage the Data Movement Service (DMS) to relocate data during the execution of the query. This data movement is both a blessing and a curse; a blessing because it is the fundamental technology which allows the scale-out features to work, and a curse because it can be one of the most expensive parts of query execution. Furthermore, tuning to avoid data movement is something with which many SQL Server query tuning experts have little experience, as it is unique to the Parallel Data Warehouse edition of SQL Server.

Regardless of whether data in PDW is stored in a column-store or row-store manner, or whether it is partitioned or not, there is a decision to be made as to whether a table is to be replicated or distributed. Replicated tables store a full copy of their data on each compute node of the system, while distributed tables distribute their data across distributions, of which there are eight on each compute node. In a system with six compute nodes, there would be forty-eight distributions, with an average of less than 2.1% (100% / 48) of the data in each distribution.

When deciding whether to distribute or replicate data, there are a number of considerations to bear in mind. Replicated data uses more storage and also has a larger management overhead, but can be more easily joined to data, as every SQL node has local access to replicated data. By distributing larger tables according to the hash of one of the table columns (known as the distribution key), the overhead of both reading and writing data is reduced – effectively reducing the size of databases by an order of magnitude.

Having decided to distribute data, choosing which column to use as the distribution key is driven by factors including the minimisation of data movement and the reduction of skew. Skew is important because if a distribution has much more than the average amount of data, this can affect query time. However, the minimisation of data movement is probably the most significant factor in distribution-key choice.

Joining two tables together involves identifying whether rows from each table match according to a number of predicates, but to do this, the two rows must be available on the same compute node. If one of the tables is replicated, this requirement is already satisfied (although it might need to be ‘trimmed’ to enable a left join), but if both tables are distributed, then the data is only known to be on the same node if one of the join predicates is an equality predicate between the distribution keys of the tables, and the data types of those keys are exactly identical (including nullability and length). More can be read about this in the excellent whitepaper about Query Execution in Parallel Data Warehouse.
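As a toy illustration (this is deliberately not PDW's real hash function, just a stand-in), the co-location guarantee can be sketched like this: equal key values always map to the same distribution, so an equality predicate between distribution keys means matching rows never need to move.

```python
# Toy model of hash distribution; NOT PDW's real hash function.
NUM_DISTRIBUTIONS = 48  # e.g. 6 compute nodes x 8 distributions each

def distribution_of(key: int) -> int:
    """Map a distribution-key value to one of the distributions."""
    return key % NUM_DISTRIBUTIONS  # stand-in for the real hash

# Two distributed tables, both distributed on CustomerKey (invented data).
orders = [(101, "order-A"), (149, "order-B")]   # (CustomerKey, detail)
payments = [(101, 250.0), (149, 99.0)]          # (CustomerKey, amount)

# Any pair of rows that could satisfy "o.CustomerKey = p.CustomerKey"
# necessarily lives in the same distribution, so no movement is needed.
for cust, _ in orders:
    for pcust, _ in payments:
        if cust == pcust:
            assert distribution_of(cust) == distribution_of(pcust)
print("matching keys are always co-located")
```

The same reasoning breaks down as soon as the join predicate is on anything other than the distribution key, which is why distribution-key choice dominates the tuning discussion below.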

To avoid data movement in commonly performed joins, data warehouse designers often need to be creative. This could involve adding extra columns to tables, such as adding the CustomerKey to many fact tables (and using it as the distribution key), since the orders, items, payments, and other information required for a given report are all ultimately about a customer. It also involves adding redundant predicates to each join to alert the PDW Engine that only rows within the same distribution could possibly match. This is thinking that is alien to most data warehouse designers, who would typically feel that adding CustomerKey to a table not directly related to a Customer dimension is against best-practice advice.
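In rough SQL terms, the redundant-predicate technique looks like the sketch below, with invented table names. The extra CustomerKey predicate is logically redundant; on PDW it signals distribution compatibility to the engine, while here SQLite (which has no distributions, and is used purely as a harness) simply confirms that the results are unchanged.

```python
# Sketch of the redundant-predicate technique; table names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders(OrderKey INT, CustomerKey INT);
CREATE TABLE SaleItems(OrderKey INT, CustomerKey INT, Amount INT);
INSERT INTO Orders VALUES (1, 101), (2, 149);
INSERT INTO SaleItems VALUES (1, 101, 50), (1, 101, 25), (2, 149, 99);
""")

# Ordinary join on the natural key.
plain = conn.execute("""
    SELECT o.OrderKey, s.Amount
    FROM Orders o JOIN SaleItems s ON s.OrderKey = o.OrderKey
    ORDER BY o.OrderKey, s.Amount
""").fetchall()

# Same join with the logically redundant distribution-key predicate that,
# on PDW, tells the engine the join is distribution-compatible.
hinted = conn.execute("""
    SELECT o.OrderKey, s.Amount
    FROM Orders o JOIN SaleItems s
      ON s.OrderKey = o.OrderKey
     AND s.CustomerKey = o.CustomerKey   -- redundant, but guides PDW
    ORDER BY o.OrderKey, s.Amount
""").fetchall()

print(plain == hinted)
```

The extra predicate only helps, of course, when the data genuinely guarantees it (here, a sale item always shares its order's customer), which is exactly the check-constraint-style knowledge the post describes encoding into the query.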

 

Another technique commonly used by PDW data warehouse designers, though rarely seen in other SQL Server data warehouses, is splitting a table into two, either vertically or horizontally. Both approaches are relatively common in PDW as a way of avoiding problems that can otherwise occur.

Splitting a table horizontally is frequently done to reduce the impact of skew when the ideal distribution key for joins is not evenly distributed. Imagine the scenario of identifiable and unidentifiable customers, which is increasingly common as stores introduce loyalty programs that allow them to identify a large portion (but not all) of their customers. For the analysis of shopping trends, it could be very useful to have data distributed by customer, but if half the customers are unknown, there will be a large amount of skew.

To solve this, sales could be split into two tables, such as Sales_KnownCustomer (distributed by CustomerKey) and Sales_UnknownCustomer (distributed by some other column). When analysing by customer, the table Sales_KnownCustomer could be used, including the CustomerKey as an additional (even if redundant) join predicate. A view performing a UNION ALL over the two tables could be used to allow reports that need to consider all Sales.
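A sketch of this split (column definitions here are hypothetical) might look like the following; note that each half uses a different distribution key:

```sql
-- Sales for identified customers: distributed by CustomerKey,
-- so customer-based joins and analysis stay local to a distribution
CREATE TABLE dbo.Sales_KnownCustomer
(
    SalesKey    bigint NOT NULL,
    CustomerKey int    NOT NULL,
    SalesDate   date   NOT NULL
)
WITH (DISTRIBUTION = HASH(CustomerKey));

-- Sales for unidentified customers: distributed by SalesKey,
-- avoiding the skew a single 'unknown customer' value would cause
CREATE TABLE dbo.Sales_UnknownCustomer
(
    SalesKey    bigint NOT NULL,
    CustomerKey int    NOT NULL,  -- always 0 for unknown customers
    SalesDate   date   NOT NULL
)
WITH (DISTRIBUTION = HASH(SalesKey));

-- A view so reports can still see a single Sales table
CREATE VIEW dbo.Sales AS
SELECT SalesKey, CustomerKey, SalesDate FROM dbo.Sales_KnownCustomer
UNION ALL
SELECT SalesKey, CustomerKey, SalesDate FROM dbo.Sales_UnknownCustomer;
```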

The query overhead of having the two tables is potentially high, especially once we consider that Sales, SaleItems, Deliveries, and more might all need to be split in two to avoid skew while minimising data movement: using CustomerKey as the distribution key when the customer is known (to allow customer-based analysis), and SalesKey when the customer is unknown.

By distributing on a common key the impact is to effectively create mini-databases which are split out according to groups of customers, with all of the data about a particular customer residing in a single database. This is similar to the way that people scale out when doing so manually, rather than using a system such as PDW. Of course, there is a lot of additional overhead when trying to scale out manually, such as working out how to execute queries that do involve some amount of data movement.

By splitting up the tables into ones for known and unknown customers, queries that looked something like the following:

SELECT …
FROM Sales AS s
JOIN SaleItems AS si
   ON si.SalesKey = s.SalesKey
JOIN Delivery_SaleItems AS dsi
   ON dsi.LineItemKey = si.LineItemKey
JOIN Deliveries AS d
   ON d.DeliveryKey = dsi.DeliveryKey

…would become something like:

SELECT …
FROM Sales_KnownCustomer AS s
JOIN SaleItems_KnownCustomer AS si
   ON si.SalesKey = s.SalesKey
   AND si.CustomerKey = s.CustomerKey
JOIN Delivery_SaleItems_KnownCustomer AS dsi
   ON dsi.LineItemKey = si.LineItemKey
   AND dsi.CustomerKey = s.CustomerKey
JOIN Deliveries_KnownCustomer AS d
   ON d.DeliveryKey = dsi.DeliveryKey
   AND d.CustomerKey = s.CustomerKey
UNION ALL
SELECT …
FROM Sales_UnknownCustomer AS s
JOIN SaleItems_UnknownCustomer AS si
   ON si.SalesKey = s.SalesKey
JOIN Delivery_SaleItems_UnknownCustomer AS dsi
   ON dsi.LineItemKey = si.LineItemKey
   AND dsi.SalesKey = s.SalesKey
JOIN Deliveries_UnknownCustomer AS d
   ON d.DeliveryKey = dsi.DeliveryKey
   AND d.SalesKey = s.SalesKey

I’m sure you can appreciate that this becomes a much larger effort for query writers, so views that simplify querying back to the earlier shape could be useful. If both CustomerKey and SalesKey were being used as distribution keys, then joins between the views would require both, but this can be incorporated into logical layers such as Data Source Views much more easily than writing UNION ALL across the results of many joins. A DSV or Data Model could easily define relationships between tables using multiple columns so that self-service reporting environments leverage the additional predicates.

The use of views should be considered very carefully, as it is easily possible to end up with views that nest views that nest views that nest views, and an environment that is very hard to troubleshoot and performs poorly. With sufficient care and expertise, however, there are some advantages to be had.

 

The resultant query would look something like:

SELECT …
FROM Sales AS s
JOIN SaleItems AS si
   ON si.SalesKey = s.SalesKey
   AND si.CustomerKey = s.CustomerKey
JOIN Delivery_SaleItems AS dsi
   ON dsi.LineItemKey = si.LineItemKey
   AND dsi.CustomerKey = s.CustomerKey
   AND dsi.SalesKey = s.SalesKey
JOIN Deliveries AS d
   ON d.DeliveryKey = dsi.DeliveryKey
   AND d.CustomerKey = s.CustomerKey
   AND d.SalesKey = s.SalesKey

Joining multiple sets of tables which have been combined using UNION ALL is not the same as performing a UNION ALL of sets of tables which have been joined. Much as any high school mathematics teacher will happily explain that (a*b)+(c*d) is not the same as (a+c)*(b+d), additional combinations need to be considered when the logical order of joins and UNION ALLs is changed.
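Written out as SQL, with hypothetical tables A1/A2 and B1/B2 standing in for the known/unknown pairs (A1 and B1 sharing one distribution key, A2 and B2 sharing another), the expansion that must be considered looks like this:

```sql
-- (A1 UNION ALL A2) JOIN (B1 UNION ALL B2) is equivalent to
-- the UNION ALL of all four pairwise joins:
SELECT a.KeyCol FROM A1 AS a JOIN B1 AS b ON b.KeyCol = a.KeyCol  -- co-located
UNION ALL
SELECT a.KeyCol FROM A1 AS a JOIN B2 AS b ON b.KeyCol = a.KeyCol  -- requires data movement
UNION ALL
SELECT a.KeyCol FROM A2 AS a JOIN B1 AS b ON b.KeyCol = a.KeyCol  -- requires data movement
UNION ALL
SELECT a.KeyCol FROM A2 AS a JOIN B2 AS b ON b.KeyCol = a.KeyCol; -- co-located
```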

Notice that when we have (TableA1 UNION ALL TableA2) JOIN (TableB1 UNION ALL TableB2), we must perform joins not only between TableA1 and TableB1, and TableA2 and TableB2, but also TableA1 and TableB2, and TableB1 and TableA2. These last two combinations do not involve tables with common distribution keys, and therefore we would see data movement. This is despite the fact that we know that there can be no matching rows in those combinations, because some are for KnownCustomers and the others are for UnknownCustomers. Effectively, the relationships between the tables would be more like the following diagram:

There is an important stage of Query Optimization which must be considered here, and which can be leveraged to remove the need for data movement when this pattern is applied – that of Contradiction.

The contradiction algorithm is an incredibly useful but underappreciated stage of Query Optimization. Typically it is explained using an obvious contradiction such as WHERE 1=2. Notice the effect on the query plans of using this predicate.
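As a minimal sketch (using a hypothetical Sales table), the pair of queries being compared would be:

```sql
SELECT SalesKey FROM dbo.Sales;              -- plan accesses the table's data structures
SELECT SalesKey FROM dbo.Sales WHERE 1 = 2;  -- contradiction detected: the plan reduces to a
                                             -- Constant Scan, with no table access at all
```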

Because the Query Optimizer recognises that no rows can possibly satisfy the predicate WHERE 1=2, it does not access the data structures seen in the first query plan.

This is useful, but many readers may not expect such an obvious contradiction to appear in their code.

But suppose the views that perform a UNION ALL are expressed in this form:

CREATE VIEW dbo.Sales AS
SELECT *
FROM dbo.Sales_KnownCustomer
WHERE CustomerKey > 0
UNION ALL
SELECT *
FROM dbo.Sales_UnknownCustomer
WHERE CustomerKey = 0;

Now, we see a different kind of behaviour.

Without these additional predicates, the query on the views is rewritten as follows (with SELECT clauses replaced by ellipses).

SELECT …
FROM   (SELECT …
        FROM   (SELECT ...
                FROM   [sample_vsplit].[dbo].[Sales_KnownCustomer] AS T4_1
                UNION ALL
                SELECT …
                FROM   [tempdb].[dbo].[TEMP_ID_4208] AS T4_1) AS T2_1
               INNER JOIN
               (SELECT …
                FROM   (SELECT …
                        FROM   [sample_vsplit].[dbo].[SaleItems_KnownCustomer] AS T5_1
                        UNION ALL
                        SELECT …
                        FROM   [tempdb].[dbo].[TEMP_ID_4209] AS T5_1) AS T3_1
                       INNER JOIN
                       (SELECT …
                        FROM   (SELECT …
                                FROM   [sample_vsplit].[dbo].[Delivery_SaleItems_KnownCustomer] AS T6_1
                                UNION ALL
                                SELECT …
                                FROM   [tempdb].[dbo].[TEMP_ID_4210] AS T6_1) AS T4_1
                               INNER JOIN
                               (SELECT …
                                FROM   [sample_vsplit].[dbo].[Deliveries_KnownCustomer] AS T6_1
                                UNION ALL
                                SELECT …
                                FROM   [tempdb].[dbo].[TEMP_ID_4211] AS T6_1) AS T4_2
                               ON (([T4_2].[CustomerKey] = [T4_1].[CustomerKey])
                                   AND ([T4_2].[SalesKey] = [T4_1].[SalesKey])
                                       AND ([T4_2].[DeliveryKey] = [T4_1].[DeliveryKey]))) AS T3_2
                       ON (([T3_1].[CustomerKey] = [T3_2].[CustomerKey])
                           AND ([T3_1].[SalesKey] = [T3_2].[SalesKey])
                               AND ([T3_2].[SaleItemKey] = [T3_1].[SaleItemKey]))) AS T2_2
               ON (([T2_2].[CustomerKey] = [T2_1].[CustomerKey])
                   AND ([T2_2].[SalesKey] = [T2_1].[SalesKey]))) AS T1_1

Whereas with the inclusion of the additional predicates, the query simplifies to:

SELECT …
FROM   (SELECT …
        FROM   (SELECT …
                FROM   [sample_vsplit].[dbo].[Sales_KnownCustomer] AS T4_1
                WHERE  ([T4_1].[CustomerKey] > 0)) AS T3_1
               INNER JOIN
               (SELECT …
                FROM   (SELECT …
                        FROM   [sample_vsplit].[dbo].[SaleItems_KnownCustomer] AS T5_1
                        WHERE  ([T5_1].[CustomerKey] > 0)) AS T4_1
                       INNER JOIN
                       (SELECT …
                        FROM   (SELECT …
                                FROM   [sample_vsplit].[dbo].[Delivery_SaleItems_KnownCustomer] AS T6_1
                                WHERE  ([T6_1].[CustomerKey] > 0)) AS T5_1
                               INNER JOIN
                               (SELECT …
                                FROM   [sample_vsplit].[dbo].[Deliveries_KnownCustomer] AS T6_1
                                WHERE  ([T6_1].[CustomerKey] > 0)) AS T5_2
                               ON (([T5_2].[CustomerKey] = [T5_1].[CustomerKey])
                                   AND ([T5_2].[SalesKey] = [T5_1].[SalesKey])
                                       AND ([T5_2].[DeliveryKey] = [T5_1].[DeliveryKey]))) AS T4_2
                       ON (([T4_1].[CustomerKey] = [T4_2].[CustomerKey])
                           AND ([T4_1].[SalesKey] = [T4_2].[SalesKey])
                               AND ([T4_2].[SaleItemKey] = [T4_1].[SaleItemKey]))) AS T3_2
               ON (([T3_2].[CustomerKey] = [T3_1].[CustomerKey])
                   AND ([T3_2].[SalesKey] = [T3_1].[SalesKey]))
        UNION ALL
        SELECT …
        FROM   (SELECT …
                FROM   [sample_vsplit].[dbo].[Sales_UnknownCustomer] AS T4_1
                WHERE  ([T4_1].[CustomerKey] = 0)) AS T3_1
               INNER JOIN
               (SELECT …
                FROM   (SELECT …
                        FROM   [sample_vsplit].[dbo].[SaleItems_UnknownCustomer] AS T5_1
                        WHERE  ([T5_1].[CustomerKey] = 0)) AS T4_1
                       INNER JOIN
                       (SELECT …
                        FROM   (SELECT …
                                FROM   [sample_vsplit].[dbo].[Delivery_SaleItems_UnknownCustomer] AS T6_1
                                WHERE  ([T6_1].[CustomerKey] = 0)) AS T5_1
                               INNER JOIN
                               (SELECT …
                                FROM   [sample_vsplit].[dbo].[Deliveries_UnknownCustomer] AS T6_1
                                WHERE  ([T6_1].[CustomerKey] = 0)) AS T5_2
                               ON (([T5_2].[CustomerKey] = [T5_1].[CustomerKey])
                                   AND ([T5_2].[SalesKey] = [T5_1].[SalesKey])
                                       AND ([T5_2].[DeliveryKey] = [T5_1].[DeliveryKey]))) AS T4_2
                       ON (([T4_1].[CustomerKey] = [T4_2].[CustomerKey])
                           AND ([T4_1].[SalesKey] = [T4_2].[SalesKey])
                               AND ([T4_2].[SaleItemKey] = [T4_1].[SaleItemKey]))) AS T3_2
               ON (([T3_2].[CustomerKey] = [T3_1].[CustomerKey])
                   AND ([T3_2].[SalesKey] = [T3_1].[SalesKey]))) AS T1_1

This may seem more complex (it’s certainly longer), but it has the preferred shape: a UNION ALL of joins, rather than joins of UNION ALLs, with each branch touching only tables that share a distribution key. This is a powerful rewrite of the query.

Furthermore, the astute PDW-familiar reader will quickly realise that the UNION ALL of two local queries (queries that don’t require data movement) is also local, and that therefore, this query is completely local. The TEMP_ID_NNNNN tables in the first rewrite are more evidence that data movement has been required.

When the two plans are shown using PDW’s EXPLAIN keyword, the significance becomes even clearer.

The first plan appears as follows, and it is obvious that there is a large amount of data movement involved.

The queries passed in are identical, but the altered definitions of the views have removed the need for any data movement at all. This should allow your query to run a little faster. Ok, a lot faster.

Summary

When splitting distributed tables horizontally to avoid skew, views over those tables should include predicates which reiterate the conditions that determine which table each row is populated into. This provides additional information to the PDW Engine that can remove unnecessary data movement, resulting in much-improved performance, both for standard reports using designed queries, and for ad hoc reports that use a data model.

Categories: Database

Preview Release of the SQL Server JDBC Driver

Today we are pleased to announce the availability of a community technology preview release of the Microsoft JDBC Driver for SQL Server! Download the preview driver today here.

The JDBC Driver for SQL Server is a Java Database Connectivity (JDBC) 4.1 compliant driver that provides robust data access to Microsoft SQL Server and Microsoft Azure SQL Database.  Microsoft JDBC Driver 4.1 (Preview) for SQL Server now supports Java Development Kit (JDK) version 7.0.

The updated driver is part of SQL Server’s wider interoperability program, which includes the recent announcement of a preview driver for SQL Server compatible with PHP 5.5.

We look forward to hearing your feedback about the new driver. Let us know what you think on Microsoft Connect.

Categories: Database

In-Memory Technology in SQL Server 2014 Provides Samsung ElectroMechanics with Huge Performance Gains

We’ve been talking a lot lately about our in-memory technology in SQL Server. If you attended the PASS Summit last week you likely heard a fair share. So, why all the fuss? Simply put, SQL Server 2014’s in-memory delivers serious business impact. According to CMS Wire “Microsoft SQL 2014 just may be the most complete in-memory solution on the market.”

Last week we told you the story of Dell and how they have boosted website performance and enabled faster online shopping experiences with SQL Server’s in-memory online transaction processing technology. Dell is not alone. Nasdaq, Bwin and EdgeNet all have seen significant performance gains. Let’s take a look at another customer, Samsung Electro-Mechanics.

Samsung Electro-Mechanics, an electrical and mechanical devices manufacturer, uses its Statistical Process Control system to manage quality control for its large-scale manufacturing facilities.  As the system evolved and became more complex, database performance suffered, impacting manufacturing quality.  To stabilize and increase performance, Samsung Electro-Mechanics implemented SQL Server 2014 in-memory OLTP and CCI (Clustered Columnstore Indexes).

By doing so, Samsung Electro-Mechanics was able to increase transactional performance by 24x using in-memory OLTP, and improve query and reporting by 22x using the in-memory Columnstore.  These performance gains far exceeded their initial goal of improving overall performance by 2x.

So consider what impact SQL Server in-memory could have on your business.

Learn more about SQL Server 2014 in-memory, or try SQL Server 2014 now.

Categories: Database

AlwaysOn Availability Groups Now Support Internal Listeners on Azure Virtual Machines

We’re excited to announce that AlwaysOn Availability Groups now support Internal Listeners on Azure Virtual Machines. Today we updated our official documentation accordingly.

Availability Groups and Listeners on Azure Virtual Machines

Availability Groups, released in SQL Server 2012 and enhanced in SQL Server 2014, detect conditions impacting SQL Server availability (e.g. SQL service being down or losing connectivity).  When detecting these conditions, the Availability Group fails over a group of databases to a secondary replica. In the context of Azure Infrastructure Services, this significantly increases the availability of these databases during Microsoft Azure’s VM Service Healing (e.g. due to physical hardware failures), platform upgrades, or your own patching of the guest OS or SQL Server.

Client applications connect to the primary replica of an availability group using an Availability Group Listener. The Listener specifies a DNS name that remains the same, irrespective of the number of replicas or where these are located.  

For example: Server=tcp:ListenerName,1433;Database=DatabaseName;

To support this in Azure Virtual Machines, the Listener must be assigned the IP address of an Azure Load Balancer. The Load Balancer routes connections to the endpoint of the primary replica of the Availability Group.

Internal Availability Group Listeners

Until now, the IP address of the Azure Load Balancer had to be a public IP reachable over the Internet. To restrict access to the listener only to trusted entities, you could configure an access control list for the Load Balancer IP. However, maintaining this list could be cumbersome over time.

To simplify this, you can now configure an Internal Azure Load Balancer. This has an internal IP address reachable only within a Virtual Network. This makes the Listener accessible only to client applications located within the same Virtual Network, or within Virtual Networks connected to it.

This is depicted in the picture below. An availability group has three replicas, two in Virtual Network 1 and one in Virtual Network 2. The Virtual Networks are connected via a VPN tunnel. The Availability Group has a Listener configured using an Internal Load Balancer. This disallows access outside of the connected Virtual Networks.

To create an Internal Azure Load Balancer execute the Powershell cmdlet Add-AzureInternalLoadBalancer. As depicted below, this cmdlet receives the name of the Load Balancer, the Cloud Service where it’ll be created, and a static IP address in the Virtual Network. This is the internal IP address that should be used for the listener.

Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -ServiceName $ServiceName -StaticVNetIPAddress $ILBStaticIP

Check our official documentation and start using Internal Availability Groups today!

To learn more about SQL Server in Azure Virtual Machines check our start page.

Categories: Database

Oracle: Deferred Segment Creation And Tablespace Restrictions

Database Journal News - Thu, 11/13/2014 - 09:01

Oracle offers Deferred Segment Creation for tables and indexes, which allows users with no access to a tablespace to create tables and indexes successfully.  Read on to see why this is a problem.

Categories: Database

SQL Server 2014 In-Memory Gives Dell the Boost it Needed to Turn Time into Money

There’s an old adage: time is money. Technology and the internet have changed the value of time and created a very speed-oriented culture. The pace at which you as a business deliver information, react to customers, enable online purchases, etc. directly correlates with your revenue. For example, reaction times and processing speeds can mean the difference between making a sale and a consumer losing interest. This is where the right data platform comes into play.

If you attended PASS Summit or watched the keynotes online, you saw us speak about Dell and the success they’ve had in using technology performance to drive their business. For Dell, providing its customers with the best possible online experience is paramount. That meant boosting its website performance so that each day its 10,000 concurrent shoppers (this number jumps to nearly 1 million concurrent shoppers during the holiday season) could enjoy faster, frustration-free shopping experiences. For Dell, time literally means money.

With a very specific need and goal in mind Dell evaluated numerous other in-memory tools and databases, but ultimately selected SQL Server 2014.

Dell turned to Microsoft’s in-memory OLTP (online transaction processing) technology because of its unique lock- and latch-free table architecture, which removed database contention while still guaranteeing 100 percent durability. By removing database contention, Dell could utilize far more parallel processors to not only improve transactional speed but also significantly increase the number of concurrent users. And because SQL Server 2014 has in-memory built in, Dell did not have to learn new APIs or tools; their developers could use familiar SQL Server tools and T-SQL to easily implement the new in-memory technologies.

All of this meant Dell was able to double its application speeds and process transactions 9x faster. Like Dell, you too can take advantage of the workload-optimized in-memory technologies built into the SQL Server 2014 data platform for faster transactions, faster queries and faster analytics. And you can do it all without expensive add-ons, utilizing your existing hardware and existing development skills.

Learn more about SQL Server 2014 in-memory technology

Categories: Database

Oracle to Showcase Carrier Ethernet 2.0 Network-as-a-Service Orchestration with Full Business System Integration at Metro Ethernet Forum’s Global Ethernet Networking 2014

Oracle Database News - Wed, 11/12/2014 - 15:10
Oracle to Showcase Carrier Ethernet 2.0 Network-as-a-Service Orchestration with Full Business System Integration at Metro Ethernet Forum’s Global Ethernet Networking 2014

Redwood Shores, Calif. – November 12, 2014

News Summary

Oracle and InfoVista have been selected to demonstrate “CE 2.0 Network-as-a-Service (NaaS) Orchestrated and Assured” at the Metro Ethernet Forum’s Global Ethernet Networking (MEF GEN14) event’s Proof of Concept Showcase. Taking place November 17–20 in Washington D.C., the showcase is the focal point of the GEN14 event and will highlight leading-edge implementations of dynamic cloud-centric services featuring innovations in software-defined networking (SDN), network functions virtualization (NFV), service orchestration, and automated provisioning.

News Facts

- Oracle today announced that Oracle and InfoVista will demonstrate a joint proof of concept (PoC), “CE 2.0 Network-as-a-Service (NaaS) Orchestrated and Assured,” at the MEF GEN14 Proof of Concept Showcase.
- The PoC will demonstrate self-serve ordering of NaaS, dynamically orchestrated and assured, across service provider and wholesale partner networks, with full integration into critical business processes and designed for network technology abstraction.
- Oracle and InfoVista will present industry-leading capabilities for service providers and enterprises alike in the PoC, including:
  - Self-serve ordering of a complex Carrier Ethernet product as if it were a simple IT service: unassisted customer ordering experience from any device, anywhere, with full integration into the service provider’s critical business processes
  - NaaS orchestration, including automated ordering of wholesale e-access: dynamic order decomposition and orchestration, leveraging end-to-end service design and automated service delivery within a multiprovider, multinetwork, and multivendor environment
  - NaaS assurance: real-time performance assurance of delivered NaaS capabilities, with dynamically synchronized service visualization, on-demand SLA monitoring, and self-service dashboards
  - NaaS agility: layered IT architecture and design environment that decouples commercially offered products from the underlying technical services and network technologies to abstract and localize the impact of changes for faster and more flexible operations
  - Virtual network control layer leveraging SDN principles: abstracting complex multiprotocol label switching (MPLS) and Metro Ethernet service implementations from the NaaS orchestration layer leveraging service-to-resource abstractions

Additional Information

The “Oracle, InfoVista Network-as-a-Service Orchestrated and Assured” demonstration will be located at the MEF GEN14 Proof of Concept Showcase in the main exhibition hall at the Gaylord National Resort and Conference Center in National Harbor, Maryland.

Supporting Quotes

“We are delighted that industry leaders such as Oracle and InfoVista are bringing their innovative approach to the Proof of Concept Showcase,” said Nan Chen, president of the MEF. “The showcase will display how Carrier Ethernet, SDN, NFV, and service orchestration technology can be used to create new business models for all stakeholders. MEF GEN14 attendees will be able to witness, firsthand, what is possible in this new world of service delivery at this exciting new event.”

“As enterprises continue to embrace cloud-based services and applications, service providers need to offer a more dynamic network control experience to their key customers,” said Bhaskar Gorti, senior vice president and general manager, Oracle Communications. “Oracle and InfoVista’s demonstration of dynamic NaaS orchestration and assurance over a complex network, and integrated with business processes, provides an MEF standards-aligned design blueprint for how service providers can turn this opportunity into reality.”

Supporting Resources

- Oracle Communications
- Oracle Communications Rapid Offer Design and Order Delivery
- Oracle Communications Rapid Service Design and Order Delivery
- Oracle Communications Network Resource Management
- Oracle Communications Business Services
- Videos: Business Services Transformation, Offer Design, Order Capture, Order Lifecycle Management
- Awards: Oracle Communications Business Services Solution Wins TM Forum 2014 Solution Excellence Award
- Related Articles: TM Forum Quick Insights: Enterprise Services—Show Me the Money
- Connect with Oracle Communications on Diigo, Facebook, LinkedIn, Twitter, and YouTube

About MEF

The MEF is the defining body behind the global market for Ethernet services. It will be showcasing its Third Network vision at the industrywide Global Ethernet Networking 2014 (GEN14) event taking place November 17–20, 2014, in Washington DC. With an anticipated audience exceeding 1,200 attendees, GEN14 is the must-attend networking event of the year for professionals involved in the CE services and technology industry. GEN14 will bring together 115+ CE, SDN, NFV, and cloud expert speakers from across the globe. Participants will consist of retail, wholesale, and mobile service providers; data center providers; cloud service providers; midsize to large businesses; government organizations; utilities; network technology vendors; press; analysts; and investors. For GEN14 registration and other information, see www.gen14.com.

About Oracle

Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NYSE:ORCL), visit oracle.com.

Trademark

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

# # #

Contacts

Katie Barron

Oracle

+1.202.904.1138

katie.barron@oracle.com

Janice Clayton

O’Keeffe & Company

+1.443.759.8151

jclayton@okco.com

Categories: Database, Vendor

Five Things to Know About SQL Server’s In-Memory Technology

Last week was an exciting week for the SQL Server team, as one of our favorite events happened – PASS Summit. If you attended PASS, you probably heard a ton about the latest version of SQL Server 2014.

One of the key drivers of SQL 2014’s design was the in-memory technology that is built into the product. These capabilities and the way they were designed are a key differentiator for SQL Server 2014. Recently we discussed how using SQL Server 2014’s in-memory technology can have a dramatic impact on your business – speeding transactions, queries, and insights. Today let’s delve a little deeper into our in-memory solution and our unique approach to its design.

We built in-memory technology into SQL Server from the ground up, making it the first in-memory database that works across all workloads. These in-memory capabilities are available not only on-premises, but also in the cloud when you use SQL Server in an Azure VM or use the upcoming in-memory columnstore capabilities within Azure SQL Database. So just what makes our approach so unique? This video describes it well.

We have five core design points for SQL Server in-memory. These are: 

  1. It’s built-in. If you know SQL Server, you’re ready to go. You don’t need new development tools, to rewrite the entire app, or learn new APIs.
  2. It increases speed and throughput. SQL Server’s in-memory OLTP design removes database contention with lock and latch-free table architecture while maintaining 100 percent data durability. This means you can take advantage of all your compute resources in parallel, for more concurrent users.
  3. It’s flexible. Your entire database doesn’t need to be in-memory. You can choose to store hot data in-memory and cold data on disk, while still being able to access both with a single query. This gives you the ability to optimize new or existing hardware.
  4. It’s easy to implement. The new migration advisor built right into SQL Server Management Studio lets you easily decide what to migrate to memory.
  5. It’s workload-optimized. In-memory OLTP is optimized for faster transactions, enhanced in-memory ColumnStore gives you faster queries and reports, and in-memory built into Excel and Analysis Services speeds analytics.  
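The third point can be sketched as follows. This is a hypothetical example (the table names and columns are invented, and the memory-optimized table requires a database with a MEMORY_OPTIMIZED_DATA filegroup), showing a hot memory-optimized table and a cold disk-based table read in a single query:

```sql
-- Hot data: a memory-optimized table
CREATE TABLE dbo.Orders_Hot
(
    OrderID   int       NOT NULL PRIMARY KEY NONCLUSTERED,
    OrderDate datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Cold data: a traditional disk-based table
CREATE TABLE dbo.Orders_Cold
(
    OrderID   int       NOT NULL PRIMARY KEY,
    OrderDate datetime2 NOT NULL
);

-- A single interpreted T-SQL query can span both storage engines
SELECT OrderID, OrderDate FROM dbo.Orders_Hot
UNION ALL
SELECT OrderID, OrderDate FROM dbo.Orders_Cold;
```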

All of this combined leads to up to 30x faster transactions, over 100x faster queries and reporting, and easy management of millions of rows of data in Excel. Think about what this can do for your business.

Learn more about SQL Server 2014 in-memory, or try SQL Server 2014 now. 

Categories: Database

Oracle Dramatically Improves Developer Productivity with New Oracle Solaris Studio 12.4

Oracle Database News - Tue, 11/11/2014 - 17:00
Oracle Dramatically Improves Developer Productivity with New Oracle Solaris Studio 12.4

Enhanced analysis tools and support for the latest programming language standards in Oracle’s popular developer tool suite help companies easily build powerful, portable applications

Redwood Shores, Calif. – November 11, 2014

News Summary

Oracle Solaris Studio 12.4 enables developers to generate higher performing, reliable, and secure applications in record time. The latest release of Oracle’s #1 C, C++ and Fortran development environment for Oracle systems includes dramatically enhanced software analysis tools to help developers optimize application performance and quickly identify memory errors for improved application reliability. It also includes enhanced standards-based, high performance compilers with advanced optimization capabilities, resulting in the best application performance on Oracle’s newest SPARC and x86 systems.

Oracle Solaris Studio is included in Oracle’s Software in Silicon Cloud and uses the Software in Silicon Application Data Integrity (ADI) feature to help developers find and fix memory errors with minimal overhead.

Oracle Solaris Studio 12.4 is free to download from the Oracle Technology Network site.

News Facts Oracle Solaris Studio provides a complete software development environment for building enterprise applications for deployment on Oracle Solaris, Oracle Linux, and other Linux–based systems. Oracle Solaris Studio 12.4 includes support for C++ 2011, a significant update to the C++ programming language standard. It also has enhanced support for popular Boost libraries and provides compatibility with GCC shared libraries, making it easy to deliver feature-rich, portable applications. The Oracle Solaris Studio Performance Analyzer has been completely redesigned, giving developers unprecedented insight into application performance. New features such as intuitive data organization, timeline visualization, code navigation, versatile data filtering, remote data analysis, and cross-architecture support, are available via a click of the mouse, significantly increasing developer efficiency. The Oracle Solaris Studio Code Analyzer protects applications from coding vulnerabilities, including memory leaks and memory access issues. It provides fast and accurate identification of common coding errors and includes patented technology that ranks untested functions, helping increase overall code coverage and ensure application reliability. Oracle Solaris Studio high performance compilers support code generation for the industry’s latest generation of processors, including Oracle’s SPARC M6 and T5 systems, Fujitsu M10 systems, and Intel® Haswell-based systems, and deliver up to 4.8x greater performance compared to open source alternatives on industry standard benchmarks. Oracle Solaris Studio 12.4 supports the OpenMP 4.0 parallel programming specification, the latest specification available from the OpenMP industry group. 
Most Oracle products for the Oracle Solaris platform are built using Oracle Solaris Studio, giving developers confidence that they are using tools that have been thoroughly tested and optimized.

Supporting Quotes

“Oracle Solaris Studio engineers work closely with the Oracle Solaris and SPARC design teams to ensure the compilers generate the best possible code for Oracle servers,” said Don Kretsch, senior director, Software Development, Oracle. “The new analysis tools in Solaris Studio 12.4 offer features typically found in expensive third-party products and give developers the ability to create fast, reliable code.”

“SAS delivers enterprise applications for high-end business analytics, and we require development and deployment platforms that are specifically optimized for business-critical applications,” said Bob Huemmer, software development manager, UNIX Platform Delivery, SAS. “The Oracle Solaris Studio development tools are world class; we use the Performance Analyzer on Oracle Solaris to tune and optimize our applications, which also typically yields performance benefits across all of our platforms. We’ve seen impressive long-term and consistent innovation for both Oracle Solaris Studio and Oracle Solaris. Rock-solid, worry-free binary compatibility translates to lower development and support costs and higher productivity, and affords our Solaris customers the confidence to readily upgrade their systems to enable innovative functionality such as Kernel Zones.”

“AsiaInfo, an industry-leading supplier of software solutions and services for telecommunications, has been an Oracle partner for many years, and our SMS Gateway solution is supported on Oracle SPARC T5 and Oracle Solaris 11,” said Mr. Fu Tingsheng, director of engineering, China Mobile Customer Data Business Division of AsiaInfo.
“Oracle Solaris has proven to be a trusted platform for our applications, and the latest release of Oracle Solaris Studio delivers high-productivity features and tools that help improve our time to market. We used the Oracle Solaris Studio Code Analyzer for memory leak protection, and it helped us be more proactive and improve our efficiency by 50 percent. Our developers were impressed with the ease of use and depth of data provided by the Oracle Solaris Studio analysis tool suite.”

Capitek, one of the leading suppliers of wireless access software in China, has a long history of using Oracle Solaris and Oracle Solaris Studio for its Authentication, Authorization, Accounting (AAA) solution. “We are excited about the latest innovations in Oracle Solaris and Oracle Solaris Studio for developing, analyzing and running our mission-critical applications. The Oracle Solaris Studio compilers are highly optimized for the latest Oracle systems, and advanced analysis tools, such as the Performance Analyzer, allow us to easily profile our technology solutions for optimal scalability. The combination of Oracle Solaris and Oracle Solaris Studio delivers a robust and reliable platform with high performance, high efficiency and high value,” said Jerry Chen, senior manager, Telecom Software Product Department at Capitek.

Supporting Resources

Oracle Solaris Studio
Download: Oracle Solaris Studio 12.4
Oracle Solaris
Oracle Linux
Oracle Software in Silicon Cloud
Connect with Oracle Solaris Studio on Facebook, Twitter

About Oracle

Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Contact Info

Michelle Jenkins
Oracle
+1.425.945.8306
michelle.jenkins@oracle.com

Drew Smith
Blanc & Otus
+1.415.856.5127
dsmith@blancandotus.com

Categories: Database, Vendor