
Software Development News: .NET, Java, PHP, Ruby, Agile, Databases, SOA, JavaScript, Open Source

Methods & Tools



Decisions @ the speed of thought with SQL Server 2016

In his 1999 book, “Business @ the Speed of Thought,” Bill Gates predicted the future “will be about velocity. About how quickly business itself will be transacted. About how information access will alter the lifestyle of consumers and their expectations of business…. When the increase in velocity is great enough, the very nature of business changes.”

That prediction seems almost prescient in 2016, as data has become the engine of business velocity, and anytime, anywhere access to that data is a requirement. The role of data in business decision-making is taking on ever greater importance, and the difference between success and failure can hinge on how fast you’re able to analyze data. As a result, business intelligence (BI) and advanced analytics (AA) are indeed changing the very nature of business.

Attesting to this point, research firms such as Gartner, Aberdeen and Forrester emphasize the business importance of advanced, agile, mobile — and speedy — data analytics. Given Bill Gates’ prediction and the rapidly expanding roles of data, BI and AA, it is notable that the February 2016 Gartner Magic Quadrant for Business Intelligence and Analytics Platforms positions Microsoft as a leader in both ability to execute and completeness of vision. Microsoft’s wide range of on-premises and cloud capabilities, along with Power BI, contributes to Gartner’s assessment of Microsoft’s leadership.

SQL Server 2016: Technology @ the speed of business

With the recent launch of SQL Server 2016, it’s worth taking a look at how this release plays into Microsoft’s leadership and vision for BI and analytics. One example is the set of in-memory analytics improvements that deliver significant data compression, accelerate analytics query performance by up to 100 times and empower employees to run ad hoc queries much more quickly. Because the enhancements in SQL Server 2016 build on technology IT is already familiar with, IT can use existing skill sets to take on modern BI and AA responsibilities and support a modern data strategy. For example, SQL Server 2016 lets IT transform data into easily understood, trusted data models with user access control, ensuring only appropriate users see sensitive data. This means IT can turn complex data from multiple sources into powerful, scalable models that business analysts can easily understand and access using familiar data analytics and discovery tools such as Excel or Power BI.
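As a rough sketch of the in-memory analytics feature described above: in SQL Server 2016, an updatable nonclustered columnstore index can be added to an operational table so that analytics queries scan compressed column segments instead of rows. The table and column names here are hypothetical, used only for illustration.

```sql
-- Hypothetical operational sales table.
CREATE TABLE dbo.Sales (
    SaleID    INT IDENTITY PRIMARY KEY,
    ProductID INT NOT NULL,
    Quantity  INT NOT NULL,
    Amount    MONEY NOT NULL,
    SaleDate  DATETIME2 NOT NULL
);

-- A nonclustered columnstore index enables real-time operational analytics:
-- analytics queries read highly compressed column segments while OLTP
-- inserts and updates continue against the rowstore.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales
    ON dbo.Sales (ProductID, Quantity, Amount, SaleDate);

-- An ad hoc aggregation that can now be served from the columnstore.
SELECT ProductID, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY ProductID;
```

The same tables remain fully writable, so no separate data-warehouse copy is needed for this kind of query.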

In addition, SQL Server 2016 includes R, the language widely used by data scientists for advanced analytics. With R and the in-memory capabilities of SQL Server, data scientists can build models and algorithms and run them in-database, reducing both time and data movement. With ScaleR built into SQL Server 2016 R Services, the typical memory limitations of open-source R are eliminated, and parallelization of scripts provides additional benefits.
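In-database R is invoked through the `sp_execute_external_script` stored procedure. A minimal sketch follows, assuming R Services is installed and the `external scripts enabled` server option has been turned on; the `dbo.Sales` table and its `Amount` column are hypothetical.

```sql
-- Prerequisite (one-time, requires a restart of the Launchpad service):
-- EXEC sp_configure 'external scripts enabled', 1; RECONFIGURE;

-- Run an R script in-database: the T-SQL query feeds InputDataSet,
-- and the R data.frame assigned to OutputDataSet is returned as a result set.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(avg_amount = mean(InputDataSet$Amount))',
    @input_data_1 = N'SELECT CAST(Amount AS FLOAT) AS Amount FROM dbo.Sales'
WITH RESULT SETS ((avg_amount FLOAT));
```

Because the script executes where the data lives, only the small result set crosses the wire rather than the full table.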

PolyBase in SQL Server 2016 enables database administrators and data analysts to use their existing T-SQL skills to extract value from both unstructured data in Hadoop and structured relational data. On the import side, PolyBase lets analysts import external Hadoop and Azure Blob storage data for persistent columnar storage. On the export side, data can be exported from SQL Server tables to Hadoop or Azure Blob storage for cold storage or archival while remaining queryable. PolyBase integrates with BI tools including Microsoft’s SQL Server Analysis Services, SQL Server Reporting Services and Power BI, as well as third-party tools such as Tableau, MicroStrategy and Cognos. In terms of mobile access, SQL Server 2016 delivers built-in mobile BI capabilities for both IT and business users, and IT professionals have the tools they need to administer user access and shared data sets in one place.
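The PolyBase workflow described above boils down to defining an external data source, a file format and an external table, after which Hadoop data is queryable with plain T-SQL. The sketch below assumes PolyBase is installed and the Hadoop connectivity level is configured; the cluster address, paths and column layout are hypothetical.

```sql
-- Point at a Hadoop cluster (address is illustrative).
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://10.0.0.1:8020'
);

-- Describe the file layout of the external data.
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- Expose a Hadoop directory as a table.
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate  DATETIME2,
    Url      NVARCHAR(400),
    Visitors INT
)
WITH (
    LOCATION = '/weblogs/',
    DATA_SOURCE = HadoopCluster,
    FILE_FORMAT = TextFileFormat
);

-- Ordinary T-SQL now reaches into Hadoop.
SELECT TOP 10 Url, COUNT(*) AS Hits
FROM dbo.WebLogs
GROUP BY Url
ORDER BY Hits DESC;
```

The same external table can feed a `SELECT ... INTO` for persistent columnar import, or serve as the target of `INSERT` for export scenarios.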

Business users can then access insights in an intuitive, engaging way from their desktop and mobile devices and expect the insights to be optimized for different form factors and major mobile platforms. Find out more about these enhancements and innovations in upcoming blogs from the SQL Server engineering team. They’ll provide detailed discussions about the technology behind these enhancements and innovations.

Decisions @ the speed of thought

The amount of data available is expanding hugely, and the ability to quickly use that data in decision making is crucial. To this point, Michael Lock, vice president and principal analyst, Analytics and Business Intelligence, Aberdeen, emphasizes that his “research shows that companies experiencing rapid data expansion… are actually more likely to exploit the information to uncover business opportunities and drive growth,” and “expediting the transfer of information between relevant parties will pave the way for quicker decisions, fewer missed opportunities, and enhanced business performance as a result.”

Adding perspective to the relevance of this assertion, Forrester reports that 86 percent of marketing executives they surveyed agree that “predictive analytics helps evaluate opportunities to enter new markets.” And Gartner believes that half of large enterprises will be using advanced analytics tools by 2018 to gain competitive advantage. With all this in mind, find out how SQL Server 2016 can enable your business to take advantage of data to make decisions @ the speed of thought and transform your business.

See the other posts in the SQL Server 2016 blogging series.

Try SQL Server 2016 RC

Categories: Database

Getting Started with Temporal Table in SQL Server 2016 – Part 1

Database Journal News - Mon, 03/21/2016 - 08:01

With the new temporal table feature, SQL Server 2016 internally manages two tables: a base table, which always contains the latest data, and a history table, which contains the full history of changes. Read on to learn about this new feature, how it works, and how to either create a new table with it enabled or enable it for an existing table.
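In outline, a system-versioned temporal table pairs two `GENERATED ALWAYS` period columns with a `SYSTEM_TIME` period and `SYSTEM_VERSIONING = ON`. A minimal sketch, with hypothetical table and column names:

```sql
-- Base table with system-versioning enabled; SQL Server maintains
-- dbo.EmployeeHistory automatically on every UPDATE and DELETE.
CREATE TABLE dbo.Employee (
    EmployeeID INT NOT NULL PRIMARY KEY CLUSTERED,
    Name       NVARCHAR(100) NOT NULL,
    Salary     DECIMAL(10, 2) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Query the data as it looked at a point in time; SQL Server merges
-- base and history rows transparently.
SELECT EmployeeID, Name, Salary
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2016-03-01T00:00:00';
```

The `FOR SYSTEM_TIME` clause also supports `FROM`, `BETWEEN`, `CONTAINED IN` and `ALL` for range-style history queries.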

Categories: Database

Database .NET v18 released

PostgreSQL News - Mon, 03/21/2016 - 01:00

I am happy to announce the new major release of Database .NET v18. It is an innovative, powerful and intuitive multiple database management tool. With it, you can browse objects, design tables, edit rows, export data, run queries and migrate data through a consistent interface.

It is free, all-in-one and portable: a single executable file with multi-language support.

Major new features from version 17.1 to 18.0:

  • Compatible with PostgreSQL 8.4 ~ 9.5+
  • Added Visual Execution Plan for PostgreSQL 9.0+
  • Massive performance improvements
  • Added support for Cancelling executing command
  • Added Inspect Object
  • Added Dark Mode
  • Added support for Renaming Results tab
  • Single click for Browsing and Editing
  • Improved Data Import and Export
  • Improved Data Editor and Browser
  • Improved Generating Batch Scripts
  • Improved SQL Editor
  • Improved AutoComplete and IntelliSense
  • Improved SQL File Version Control
  • Improved Database Migration and Data Synchronization (Pro)
  • ...and more

The new version is immediately available for download.

Categories: Database, Open Source

SQL Server 2016 Release Candidate 1 now available

We are excited to announce that our second SQL Server 2016 release candidate, SQL Server 2016 Release Candidate (RC) 1, is now available for download.

This is an important milestone in the release of SQL Server 2016, as the product is nearly feature complete, and means that a very rich set of capabilities is now available. These include real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technologies and new hybrid scenarios that allow you to securely stretch data to the cloud.

As part of our new rapid release model, we will be publishing multiple release candidates on a more frequent cadence. SQL Server 2016 RC 1, which you can try in your development and test environments, is available for download today.

In SQL Server 2016 RC 1, we made enhancements to SQL Server Reporting Services, including:

  • Updated preview of the new web portal: The new web portal is now shown by default, and the classic Report Manager has been removed. Additionally, you can open the Mobile Report Publisher and Report Builder from the new web portal using any modern browser.
  • Custom branding: Customize the web portal with your organization’s logo and colors.
  • KPIs and mobile reports: Click a KPI and see a view with more details, and connect KPIs and mobile reports to parameterized datasets.
  • Modern paginated reports: Design beautiful, modern paginated reports with new styles for charts, gauges, maps and other data visualizations.

For additional detail, please read the detailed SSRS RC 1 blog post.

Download SQL Server 2016 RC 1 today!

To learn more, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview and start evaluating the impact these new innovations can have for your business.

Questions?

Join the discussion of the new SQL Server 2016 capabilities at MSDN and Stack Overflow. If you run into an issue or would like to make a suggestion, you can let us know at Connect. We look forward to hearing from you!

Try SQL Server 2016 RC

Categories: Database

Enable business insights for everyone with SQL Server 2016: Part 2

This blog post was authored by Kasper de Jonge, Senior Program Manager, SQL Server Analysis Services.


This is the second installment of a two-part series. If you missed it, please have a look at part one and learn how SQL Server Analysis Services (SSAS) provides fast access to data to allow analysis at the speed of thought. Read on to learn about the specific improvements made to SSAS for SQL Server 2016.

SQL Server 2016 Analysis Services improvements

SQL Server 2016 offers a number of significant enhancements compared to previous versions, such as:

  • The improved Tabular model allows BI developers to solve increasingly complex business problems in an agile and straightforward manner with out-of-the-box support for modeling and calculation scenarios such as percentile and many-to-many patterns.
  • Improvements to the DirectQuery storage mode allow BI developers to utilize data sets from different data sources directly without the need to move the data.
  • Improvements to the high-performing in-memory (VertiPaq) storage mode can enable analytics on large data sets at the speed of thought.

Many organizations today have built their infrastructure around the rich Semantic model and highly scalable Multidimensional models. In this release of SQL Server, we have improved or added some highly requested performance and manageability features, such as:

  • Distinct count ROLAP optimization for data sources like DB2 and Oracle allows for faster ROLAP performance.
  • Drill-through, multi-selection support with Excel 2016 allows for better drill-through support when using slicers and other filters on the PivotTable.
  • Check for corruption issues on database or individual objects with DBCC support for Analysis Services (supported for both Tabular and Multidimensional models).
  • Diagnose problems with xEvents, a lightweight tracing and performance monitoring system in SQL Server Management Studio (supported for both Tabular and Multidimensional models).
  • Visualize and explore existing models with Power BI and Power BI Desktop directly or through the enterprise gateway.

Tabular Semantic Model improvements

Organizations that want a lighter, in-memory and more agile way of building Semantic models can use the Tabular model, introduced in SQL Server 2012. In SQL Server 2016, Tabular models received a number of updates; some of the enhancements are highlighted below:

  • Bi-directional cross filtering allows modelers to determine how filters flow between two tables. In SQL Server 2014 and earlier, Tabular models required complicated DAX expressions for scenarios such as many-to-many patterns. Now, with support for bi-directional cross filtering, these and many other scenarios are available simply by changing the type of relationship.
  • An improved diagram view layout helps navigate complex models with ease. The design has been optimized to help BI professionals understand relationships in models easily and be able to immediately see the filter direction of a relationship and the cardinality of data.
  • More than 50 additional DAX functions and added support for variables in DAX help solve complex business problems faster.
  • A translated Semantic model enables business users in multinational organizations to access the BI Semantic Model in their preferred language(s).

DirectQuery improvements

Some companies may prefer to access data sources directly, because their data may be too large to move or because they need real-time access. In SQL Server 2016, BI developers can connect the Tabular BI Semantic Model directly to its underlying data with DirectQuery.

Here is an overview of the improvements to DirectQuery in SQL Server 2016:

Analysis Services improvements

With SQL Server 2016, Analysis Services is designed to be faster out of the box. By upgrading your existing server, you can benefit from these enhancements:

  • Improvements to existing DAX functions and query engine help increase performance for client tools like Excel and Power View when using Tabular models.
  • Additional DAX functions can be used to further optimize measure and query performance. Power BI can detect that these functions are available in SQL Server 2016 Analysis Services and use them when querying the data. This can result in significant performance gains compared with Analysis Services in SQL Server 2014.
  • Significantly reduced data load times through parallel partition processing.

BI developer productivity improvements

The following enhancements have been made to improve BI professional and developer productivity in SQL Server 2016:

  • SQL Server Data Tools (SSDT) for Analysis Services is now available as part of SSDT Visual Studio 2015 Preview and can be downloaded here. This provides a single simplified installation experience for all of your SQL Server data tools that are now available in Visual Studio 2015.
  • Updates to the formula bar make it easier to write formulas by differentiating functions, fields and measures with syntax coloring. The formula bar provides intelligent function and field suggestions, flags incorrect parts of a DAX expression with error “squiggles,” supports multiple lines (Alt + Enter) and indentation (Tab), and lets BI professionals write comments as part of their measures.
  • The new JSON-based Tabular Model Scripting Language (TMSL) allows for simplified scripting and development for Tabular models. It uses Tabular concepts instead of Multidimensional concepts, as was the case in SQL Server 2014. The changes to the metadata only impact a single object, resulting in faster metadata operations and enabling simple code merges in SSDT.
  • The new Tabular Object Model is part of Analysis Management Objects (AMO), the complete library of programmatically accessed objects that enables an application to manage a running instance of Microsoft SQL Server Analysis Services. With the Tabular Object Model, BI professionals can now use concepts familiar to the Tabular developer instead of using Multidimensional concepts. This allows for simpler and more readable code when developing against a Tabular model.

Getting started

This is an exciting time for SQL Server Analysis Services. SQL Server 2016 will provide many improvements for a wide range of use cases, from DirectQuery to modeling enhancements and developer productivity. Many of the performance enhancements can be enjoyed by just upgrading the server to SQL Server 2016, with no other changes required.

For more information and details on the changes made for SQL Server Analysis Services in SQL Server 2016, please visit the Analysis Services and PowerPivot Team Blog and part one of this blog post. Looking for a deep-dive into the updates to the Tabular semantic model and Analysis Services DirectQuery? Be sure to reference our video sessions from Data Driven 2016 to gain an overview and learn how to implement these updates.

See the other posts in the SQL Server 2016 blogging series.

Try SQL Server 2016 RC

Categories: Database

Exploring SQL Server 2016 Dynamic Data Masking – Part Two - Masking and Exporting Data in Existing Tables

Database Journal News - Thu, 03/17/2016 - 08:01

Dynamic Data Masking allows you to obscure your confidential data column values at the database engine level for both new and existing SQL Server data. Being able to alter the definition of an existing column to add a masking rule makes it very simple to obscure your existing column values without even changing your application code. Read on to learn more.
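The "alter an existing column" approach mentioned above comes down to a single DDL statement per column. A minimal sketch, with a hypothetical `dbo.Customer` table and user names:

```sql
-- A new table can declare masks inline.
CREATE TABLE dbo.Customer (
    CustomerID INT IDENTITY PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    Email      NVARCHAR(100) MASKED WITH (FUNCTION = 'email()'),
    CreditCardNumber VARCHAR(19)
);

-- An existing column can be masked without touching application code:
-- show only the last 4 digits, pad the rest.
ALTER TABLE dbo.Customer
    ALTER COLUMN CreditCardNumber
    ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Ordinary readers see masked values...
GRANT SELECT ON dbo.Customer TO AppUser;
-- ...while privileged principals can be granted UNMASK to see real data.
GRANT UNMASK TO Auditor;
```

Masking happens in the query results at the engine level; the stored data itself is unchanged, which is why exports by unmasked users still contain the original values.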

Categories: Database

Enable business insights for everyone with SQL Server 2016: Part 1

This post was authored by Kasper de Jonge, Senior Program Manager, SQL Server Analysis Services.


This is the first installment of a two-part series. Read on to learn how SQL Server 2016 Analysis Services (SSAS) can provide fast access to data to allow analysis at the speed of thought and stay tuned for part two, where we discuss the specific improvements made to SSAS for SQL Server 2016.

Many companies are generating more and more data, and the increased use of mobile devices, sensors and business applications adds to this trend every day. Given this wealth of information, and the business opportunity that unlocking it provides, it has become increasingly important to let more business users easily access data so they can make better decisions when and where they need to.

Many organizations have two types of data users. The first type geeks out on data, is passionate about it, and knows where to locate information and shape it into insights using tools such as Excel or Power BI Desktop. These users often discover hidden insights, answer important ad hoc questions that arise in an organization and tend to be on the cutting edge of technology and data knowledge. The second type (and the majority of data users) often doesn’t have the same inclination to search for data but still needs access to make informed decisions. This group benefits specifically from SQL Server Analysis Services (SSAS) by accessing data models it can easily understand, using familiar tools. IT developers can use SSAS to empower these users to unlock business intelligence based on data that can be trusted, is reusable and is easy to interpret.

Create powerful BI Semantic Models and transform complex data

IT developers play an important role in helping business users unlock data that produces actionable insights. For example, they can create an Analysis Services BI Semantic Model that allows business users to explore data and surface insights through visualization tools. When connecting to a BI Semantic Model, business users don’t have to worry about where the data comes from or how it is joined together.

The model provides an intuitive abstraction over complex data; BI professionals build it as either a traditional multidimensional model or a simpler tabular model. On top of that, they can apply specific business logic using a powerful calculation language that describes logic such as year-to-date or year-over-year change.

Flexible data access layer

The BI Semantic Model also provides fast access to data to allow analysis at the speed of thought. Historically, this has been done by loading data into the analysis storage engine. Given that the amount of data at some companies is growing at an unprecedented rate, organizations have needed to increase scale by investing in infrastructure and databases, often utilizing in-memory to handle the increased volume. However, when more data needs to be moved from source systems to the BI system to widen access, the length of time required to retrieve data increases every day.

For some organizations, the volume of data provides a challenge and they need a solution that no longer has a dependency on moving data from the source system. In addition to the option to load the data into memory, SQL Server Analysis Services provides the capability to directly connect models to the data sources. This option might be especially attractive when the source data is available on high-performing infrastructure.

The image below shows an overview of the BI Semantic Model:

Figure 1: Overview of SQL Server Analysis Services (SSAS).

We have only scratched the surface on this important topic. In a follow-up blog post, we will share more information on the enhancements made for SQL Server Analysis Services in SQL Server 2016. Further information on these updates can also be found in our video sessions from Data Driven 2016 on the Tabular semantic model and Analysis Services DirectQuery. The latest details can always be found on the Analysis Services and PowerPivot Team Blog.

See the other posts in the SQL Server 2016 blogging series.

Try SQL Server 2016 RC

Categories: Database

ORACLE REPORTS GAAP EPS OF $0.50; NON-GAAP EPS OF $0.64

Oracle Database News - Tue, 03/15/2016 - 23:35
Press Release

Without the effect of U.S. dollar strengthening, both would have been 4 cents higher.

Redwood Shores, Calif.—Mar 15, 2016

CLOUD SAAS AND PAAS REVENUES UP 57% IN U.S. DOLLARS AND UP 61% IN CONSTANT CURRENCY

Oracle Corporation (NYSE: ORCL) today announced fiscal 2016 Q3 results. The strengthening of the U.S. dollar compared to foreign currencies had a significant impact on results in the quarter. Total Revenues were $9.0 billion, down 3% in U.S. dollars and up 1% in constant currency. Cloud plus On-Premise Software Revenues were $7.1 billion, down 1% in U.S. dollars and up 3% in constant currency. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $583 million, up 57% in U.S. dollars and up 61% in constant currency. Cloud infrastructure as a service (IaaS) revenues were $152 million, down 2% in U.S. dollars and up 2% in constant currency. Total Cloud Revenues were $735 million, up 40% in U.S. dollars and up 44% in constant currency. Total On-Premise Software Revenues were $6.3 billion, down 4% in U.S. dollars and unchanged in constant currency. Total Hardware Revenues were $1.1 billion, down 13% in U.S. dollars and down 8% in constant currency. Total Services Revenues were $793 million, down 7% in U.S. dollars and down 2% in constant currency.

Operating Income was $3.0 billion and Operating Margin was 34%. Non-GAAP Operating Income was $3.8 billion and non-GAAP Operating Margin was 42%. Net Income was $2.1 billion while non-GAAP Net Income was $2.7 billion. Earnings Per Share was $0.50, while non-GAAP Earnings Per Share was $0.64. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP and non-GAAP Earnings Per Share would have been 4 cents higher.

Short-term deferred revenues were $6.9 billion, up 7% in U.S. dollars and up 11% in constant currency compared with a year ago. Operating cash flow on a trailing twelve-month basis was $14.1 billion.

“Our Cloud SaaS and PaaS revenue growth rate accelerated to 61% in constant currency in Q3,” said Oracle CEO, Safra Catz. “This dramatic revenue increase drove our non-GAAP SaaS and PaaS gross margins up to 51% in Q3 as compared with 43% in Q2. Our cloud business is now in a hyper-growth phase. Our gross margins are climbing toward our target of 80%. These two factors will ignite substantial EPS and cash flow growth over Oracle’s next few quarters.”

“Our SaaS and PaaS gross deferred revenue grew 96% in Q3—twice as fast as Workday and three times faster than Salesforce.com reported in their most recent quarters,” said Oracle CEO, Mark Hurd. “Q3 SaaS and PaaS bookings were up 77% in constant currency. We added 942 new SaaS customers in the quarter, including several customers that switched from Workday HCM to Oracle Fusion HCM. We had more than 250 customers go live on Fusion SaaS HCM and Fusion ERP in Q3 alone. We now have over 11,000 SaaS customers with nearly 2,000 Fusion ERP customers—ten times more ERP customers than Workday claims to have.”

“In absolute dollar terms, Oracle is already selling more enterprise SaaS and PaaS new cloud revenue than any other company in the world—including Salesforce.com,” said Larry Ellison, Oracle Chairman and CTO. “We are growing much faster than Salesforce.com. We also have many more SaaS products than Salesforce.com. In some of our most important SaaS markets, such as ERP, HCM, Supply Chain and Manufacturing, Salesforce.com does not participate at all. By successfully competing in all of these markets, Oracle has the ability to sustain its high growth over a long period of time. That should make it easy for us to pass Salesforce.com and become the largest SaaS and PaaS cloud company in the world.”

The Board of Directors also declared a quarterly cash dividend of $0.15 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on April 14, 2016, with a payment date of April 28, 2016.

Oracle also announced that its Board of Directors authorized the repurchase of up to an additional $10 billion of common stock under its existing share repurchase program in future quarters.

Q3 Fiscal 2016 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast of this event, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q3 results and Fiscal 2016 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Pass Code: 67000932.

Contact Info

Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com

Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding our expectations of future growth in our gross margins, EPS and cash flow; the timing of such growth; the growth of our cloud business generally; and the competitive landscape for enterprise cloud are all "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Economic, geopolitical and market conditions, including the continued slow economic recovery in Europe, parts of the U.S. and other parts of the world, can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (2) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for contracts. (3) Our cloud computing strategy, including our Cloud SaaS, PaaS, IaaS and Database as a Service offerings, may not be successful. (4) If we are unable to develop new or sufficiently differentiated products and services, or to enhance and improve our products and support services in a timely manner or to position and/or price our products and services to meet market demand, customers may not buy new software licenses, cloud software subscriptions or hardware products or purchase or renew support contracts. 
(5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. (6) If the security measures for our software, hardware, services or Oracle Cloud offerings are compromised or subject to a successful cyber-attack, or if such offerings contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and financial exposure. (7) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (SEC) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of March 15, 2016. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Follow Oracle Corporate

Categories: Database, Vendor

Time is running out: Upgrade SQL Server 2005 now

Less than one month remains before support ends for SQL Server 2005 on April 12, 2016. If you’re still using this version or other legacy versions of SQL Server, there’s never been a better time to upgrade to SQL Server 2014 and Microsoft Azure SQL Database to safeguard your business and reap all the benefits of a modern data platform.

With just a few weeks remaining, we wanted to remind you of the many resources available to help you make the move:

Understand the process

Read our blog post series for a full description of the upgrade process:

  • Step One: Understand and map out your database and its dependencies.
  • Step Two: Target the destination for each application and workload.
  • Step Three: Identify your upgrade strategy.

Visit the SQL Server 2005 End of Support page for detailed information on planning and executing the migration, along with options for your new database strategy.

Understand the benefits

Plan your upgrade

Upgrade to SQL Server 2014 now.

Microsoft’s deep commitment to this industry-leading technology continues with the availability of the release candidate for SQL Server 2016, the biggest leap forward in Microsoft’s data platform history. SQL Server 2016 will include real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technology and new hybrid cloud scenarios.

See what’s coming in SQL Server 2016.

Categories: Database

Data Driven Top 10

Last week at our Data Driven event in New York City, we had the opportunity to meet with several hundred customers to talk about data in person, as well as many more online. More specifically, we talked about the opportunity and challenges facing our customers, how data can help them innovate and get ahead and how Microsoft, and SQL Server 2016, can help.

A few themes were clear throughout these great conversations:

  • A massive explosion of data from things, apps and services is affecting every customer.
  • To drive impact from all their data, customers showed high interest in using the new advanced analytics capabilities to derive intelligent action.
  • TCO matters to customers; there was high interest in the Oracle migration offer, given the nearly 12X savings from SQL Server over Oracle.*

So, for our many customers and partners who couldn’t be there in person I wanted to summarize the top 10 questions I heard – and the answers.

Thank you for being a SQL Server customer. I encourage you to learn more about SQL Server 2016 and find out how we can help you transform your business.

– Takeshi

Top 10 customer questions

1. What’s the biggest challenge customers face today in managing their data strategy and how is what you announced helping them?

Customers want to be more agile to use and respond to new data and data sources as they become available. But they can’t completely replace their IT environment, so those new solutions need to work with what they already have and be deployed where their data lives. Our commitment is to help customers turn data into intelligent action with solutions that meet them where they are today, with the tools and languages they use most and the platform and applications they need, on-premises or in the cloud. At the Data Driven event we talked about:

  • How the rapid release model of the cloud enables us to quickly iterate on new capabilities and test features at scale. SQL Server is truly the first “born in the cloud” database, where features such as Always Encrypted and Row-Level Security were first validated in Azure by hundreds of thousands of customers and billions of queries.
  • Why our consistent model across on-premises and cloud delivers unique hybrid scenarios such as Stretch Database, which lets on-premises customers take advantage of the economics of the cloud by keeping terabytes of historical data online at a fraction of the cost of an on-premises SAN.
  • How SQL Server continues to deliver the performance customers want with the applications they need, as evidenced by holding the #1 SAP benchmark on Windows.**
  • Our future plans for SQL Server on Linux which is part of our strategy to meet customers where they are.
2. Can you help better define what you mean by the core relational database being available on Linux?

We’ll first release the core relational database capabilities on Linux targeting mid calendar year 2017 and will work with customers to prioritize the additional capabilities. The core relational database capabilities, inclusive of transaction processing and data warehousing, are the core foundation of building intelligent applications and will enable customers to get started quickly with their deployments. To find out more about SQL Server on Linux, you can sign up to get regular updates and provide input to the team.

3. When SQL Server is available on Linux, how will customers license it?

A customer who buys a SQL Server license (per-server or per-core) will be able to use it on Windows Server or Linux. This is just part of how we are enabling greater customer choice and meeting customers where they are.

4. When SQL Server is available on Linux, what if customers have Software Assurance on their existing SQL Server licenses?

For customers who have made an investment in Software Assurance, as always, they will have rights to the future releases of SQL Server as we make them generally available.

5. What can customers expect in the future for SQL Server on Windows?

We have a 23-year history of Windows Server and SQL Server joint engineering and proven innovation. Windows Server and SQL Server today are the most widely deployed database and operating system pairing on the planet, with industry-leading price/performance, unparalleled security, and simply configured, powerful high availability and disaster recovery. We will continue to invest in helping SQL Server customers take advantage of the latest innovations in Windows Server to increase cost efficiencies and maximize performance.

6. Why should customers migrate off Oracle to use SQL Server today?

SQL Server 2016 is the database built for mission critical intelligence. It is the biggest leap forward in Microsoft’s data platform history with real-time operational analytics, rich visualizations on any mobile device, built-in advanced analytics, new advanced security technology to encrypt data at rest, in motion and in-memory, and new hybrid cloud scenarios. All of these capabilities are built into this latest release, offering mission-critical capabilities at significantly lower TCO.

Microsoft is a leader in the Gartner Operational Database Management Systems Magic Quadrant, positioned furthest to the top in ability to execute and furthest to the right in completeness of vision, which is a strong proof point for SQL Server 2014 as well as SQL Server 2016.

Customers can also choose to deploy SQL Server 2014 today or any supported version of SQL Server. Customers who take advantage of the Oracle migration offer with SQL Server 2014 today will have rights to deploy future versions of SQL Server once generally available during the term of Software Assurance.

7. Given this event celebrated the launch of SQL Server 2016, when is the general availability of SQL Server 2016?

Customers can expect SQL Server 2016 to be generally available this calendar year (2016). Last week we shipped the first release candidate (RC0), in which the core database is feature complete. With the new rapid release model, customers can expect multiple release candidates prior to general availability, so stay tuned for future updates.

8. When does the competitive migration offer expire? Is it only for Oracle?

To take advantage of the offer, customers would need to purchase by June 30, but have flexibility on the deployment timeline. Customers can use the migration offer to move off of all commercial databases including Oracle, IBM DB2 and SAP Sybase ASE to any supported release of SQL Server.

9. What are early adopter customers saying about SQL Server 2016?

Hear what DocuSign had to say about SQL Server 2016 and how they help organizations build entire approval workflows without a single sheet of paper or filing cabinet in sight. DocuSign partnered with Microsoft to help secure their customers’ data, realize insights with SQL Server analytics and BI capabilities and receive world-class support. Hear directly from their chief architect and vice president of platforms, Eric Fleischman:

10. How are partners responding to SQL 2016?

Partners are excited about the capabilities of SQL Server 2016. At the Data Driven event, SQL Server partner OpenText shared that they are piloting use of the Stretch Database feature with Content Suite 16. Stretch Database will address OpenText’s customers’ hot and cold data storage and compute needs and enable customers to better leverage the mission-critical data that powers their businesses. According to Adam Howatson, CMO, “The Stretch DB capability in SQL Server 2016 addresses this need and allows us to stretch information to the cloud to optimize their spend and the way that they manage information.”

Likewise, Royce Kallesen, senior director of data science at PROS, says of their pricing management software, “Getting data to the customers as soon as possible is vitally important. SQL Server with R Services is a big step forward for us and a great opportunity in the fact that we can have that Advanced Analytics embedded in with our database.”

 

* Annualized pricing based on Oracle US commercial list price and SQL Server open ERP EE price, assuming 16 core servers (two procs with eight cores each) running OLTP, BI, DW, AA and ETL Tools, 1000 BI users. http://www.oracle.com/us/corporate/pricing/technology-price-list-070617.pdf

** Benchmark Certification #2016002: Two-tier configured SAP SD Standard Application Benchmark. Using SAP ERP 6.0 Enhancement Package 5, achieving the results of 100000 SD benchmark users using HPE Integrity Superdome X, 16 processors/288 cores/576 threads Intel E7-8890 v3 with 4096GB of main memory. Operating System: Windows Server 2012R2 with SQL Server 2014 Enterprise Edition as DBMS. For more details see: http://www.sap.com/benchmark.

Categories: Database

SQL Server 2016 documentation survey

The SQL Server documentation team is working to improve the SQL Server documentation to help you be more successful with, and gain more value from, SQL Server. Please let us know how we can improve the documentation by taking this brief survey by March 18.

We appreciate and are listening to your feedback! Thank you.

Categories: Database

Partition Limits In Oracle

Database Journal News - Mon, 03/14/2016 - 08:01

Partitioning can be a real asset when used properly, as long as you don't run afoul of the maximum number of partitions Oracle will allow.  Read on to learn the maximum number of partitions and discover some of the errors you can see if you try to exceed the limit.

Categories: Database

Mapping the Universe with SQL Server

This blog post was co-authored by Joseph Sirosh, Corporate Vice President, and Rimma V. Nehme, Principal Software Engineer, at the Data Group at Microsoft.

 

Figure 1: Visible objects of the Sloan Digital Sky Survey (SDSS) DR7 dataset.

Over the last 15 years, a database has helped revolutionize an entire field of science. Astronomical discovery and sophisticated analysis of the properties of the aggregate universe were turbocharged by a vast public effort to map the sky, called the Sloan Digital Sky Survey, whose data was served from a public database built with Microsoft SQL Server. It was the first of its kind in the field and opened up an entirely new window into the Universe.

Figure 2: The Fourth Paradigm: Data-Intensive Scientific Discovery book dedicated to Jim Gray.

Now scientists in every field, from astronomy to zoology, are recognizing that the rate of accumulation of data in their fields is greatly outstripping the rate of accumulation of interpretation, i.e., the rate at which the scientific community can assimilate data into an interpretive framework. And there’s widespread recognition that powerful scientific discoveries lie hidden in such massive data. The Fourth Paradigm of scientific discovery, driven by novel techniques for analyzing massive data, is a driving force in science like never before.

Sloan Digital Sky Survey: The Cosmic Genome Project

It all started in the early 1990s, when Dr. Alex Szalay, together with the late Dr. Jim Gray, took on a daring endeavor to build what could be called the first “DataScope” – an efficient data-intensive computing infrastructure for astronomers called the Sloan Digital Sky Survey (SDSS), using Microsoft SQL Server as the back-end database.

Figure 3: Jim Gray, Alex Szalay and other astronomers at Super Computing 2003

SDSS had a bold goal – to create a map of the universe in a database for exploration by all. It is often referred to as the Cosmic Genome Project. A dedicated 2.5-m-diameter telescope in New Mexico used a 120-megapixel camera to image more than one-quarter of the entire night sky, 1.5 square degrees of sky at a time, about eight times the area of the full Moon, both inside and outside of the Milky Way, and helped create a three-dimensional (3D) map of millions of galaxies, quasars and stars.

The SDSS maps sparked a revolution in the way astronomy is practiced. No longer did scientists have to wait months for access to a telescope to learn about the night sky; instead, entire research projects could be accomplished by querying the online database. The SDSS made its entire data set available through the SkyServer database – an online portal for public use – and invited volunteer contributions to scientific research. Prior to SDSS, only leading scientists and astronomers had the telescopes and instruments to collect data for serious research, with most others largely excluded from direct and active engagement with astronomy. Now, with access to the visual data that SkyServer offers, anyone with Internet access can explore the universe with data just as the top scientists do.

Figure 4: SDSS-IV can view the whole Milky Way

SkyServer’s architecture was fairly simple to start with: a front-end IIS web server accepted HTTP requests processed by JavaScript Active Server Pages (ASP). These scripts used Active Data Objects (ADO) to query the backend Microsoft SQL Server database. SQL Server returned record sets that the JavaScript formatted into pages. The website was about 40,000 lines of code and was originally built by two people as a spare-time activity.

Why Microsoft SQL Server?

While building applications to study the correlation properties of galaxies, Szalay and his team discovered that many of the patterns in their statistical analysis involved tasks that were much better performed inside the database engine than outside, on flat files. Microsoft SQL Server gave them high-speed sequential search of complex predicates using multiple CPUs, multiple disks and large main memories. It also had sophisticated indexing and data-joining algorithms that far outperformed hand-written programs against flat files. Many multi-day batch jobs were replaced with database queries that ran in minutes, thanks to the sophisticated query optimizer.

Impact

Figure 5: Dr. Jim Gray in front of the Sloan telescope in Apache Point, NM

The most recent version of the database has a 15TB queryable public dataset, with about 150TB of additional raw and calibrated files. A recent scan of the logs showed more than 1.6 billion web hits in the past 14 years and more than four million distinct IP addresses accessing the site. The total number of professional astronomers worldwide is only about 15,000. Furthermore, CasJobs, the multiuser collaborative environment in SDSS that allows users to launch extensive analyses, has more than 6,820 registered users – almost half of the professional astronomy community.

SDSS has been successful in generating new scientific discoveries, including the measurements of thousands of asteroids, maps of the complicated merger history of the outer Milky Way, and the first detection of the baryon acoustic peak – a measurement of how structure formed from ultra-low frequency standing sound waves in the early universe. These surveys have produced data to support 5,800 papers with more than 245,000 citations.  This has made SDSS one of the highest impact projects in the field of astronomy.

SkyServer data

The amount of astronomical data in SkyServer is truly unprecedented. When the SDSS began in 1998, astronomers had data for less than 200,000 galaxies. Within five years after SDSS began, SkyServer had data on 200 million galaxies in the database. Today, the SDSS data exceeds 150 terabytes, covering more than 220 million galaxies and 260 million stars. The images alone include 2.5 trillion pixels of original raw data. SkyServer allows users to search for stars at a given position in the sky, or they can search for galaxies brighter than a certain limit. Users can also enter queries to the database in SQL directly, which allows more flexible and sophisticated searches.

Examples of queries users can ask in SkyServer:

  • What resources are in this part of the sky?
  • What is the common area of these surveys?
  • Is this point in the survey?
  • Give me all objects in this region
  • Give me all “good” objects (exclude “bad” areas)
  • Give me the cumulative counts over areas
  • Compute fast spherical transforms of densities
  • Interpolate sparsely sampled functions (extinction maps, dust temperature, …)
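The first kind of question maps directly onto a short query. As a hedged illustration – the coordinates here are made up, and it assumes SkyServer’s documented fGetNearbyObjEq table-valued function, which returns objects within a given radius in arcminutes:

```sql
-- Hypothetical cone search: objects within 1 arcminute of (ra, dec) = (185.0, -0.5).
-- fGetNearbyObjEq is a SkyServer helper function; the coordinates are illustrative.
SELECT p.objID, p.ra, p.dec, p.psfMag_r, n.distance
FROM   dbo.fGetNearbyObjEq(185.0, -0.5, 1.0) n   -- 1-arcminute search radius
JOIN   PhotoObj p ON p.objID = n.objID
ORDER  BY n.distance
```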

Figure 6: SkyServer portal

Galaxy Zoo

Another project that SDSS data access has enabled is a “citizen science” website called Galaxy Zoo, where Internet volunteers have classified galaxies using SDSS images. Typically, astronomers used to classify galaxies by eye. With 200 million galaxies, at an average of three minutes per galaxy, classification would take 600 million minutes – 1,142 years of work at 24 hours per day, seven days per week. Galaxy Zoo was the first astronomy crowdsourcing portal, allowing private citizens to look at the data by eye and contribute classifications to scientists in a much shorter time.

Figure 7: Hanny’s Voorwerp. The mass (shown here in green) is a new cosmic object discovered by a Dutch school teacher, an astronomy novice, while using Galaxy Zoo.

There have been a number of scientific discoveries using Galaxy Zoo including determination of the relation between the morphology of galaxies and their environment and the discovery by a Dutch school teacher of Hanny’s Voorwerp – a very rare type of astronomical object called a quasar ionization echo. These discoveries would not have been possible without the participation of thousands of Galaxy Zoo volunteers – between them, they have visually classified over 40 million galaxies to date.

From SkyServer to SciServer: Big Data infrastructure for science

A new effort called SciServer, a descendant of SkyServer, aims to go beyond astronomy and build a long-term, flexible ecosystem that gives scientists access to enormous data sets from observation and simulation and enables collaborative research. SciServer aims to meet the challenges of Big Data in the scientific world. By building a common infrastructure, the goal is to create data access and analysis tools useful to all areas of science. Led by Alex Szalay, the work on SciServer will deliver significant benefits to the scientific community by extending the infrastructure developed for SDSS astronomy data to many other areas of science.

Figure 8: SciServer: A collaborative research environment for large-scale data-driven science.

The approach in designing SciServer is the same as in SkyServer: bring the analysis to the data. This means that scientists can search and analyze Big Data without downloading terabytes or petabytes of it, resulting in much faster processing times. Bringing analysis to the data also makes it much easier to compare and combine datasets, allowing researchers to discover new and surprising connections between data and make experiments more reproducible.

To help ease the burden on researchers, the team developed SciDrive, a cloud data storage system for scientific data that allows scientists to upload and share data using a Dropbox-like interface. The interface automatically reads the data into a database, where one can search online and cross-correlate with other data sources. SciDrive tries to address the “long tail” of a huge number of small data sets that scientists have. The goal is to try to bring many small, seemingly unrelated data sets to a single place and see if new value emerges. People can simply drag and drop (and share) their data, without any metadata required.

At the heart of it all is SQL Server

The SDSS team, in collaboration with Jim Gray, took on the enormous task of putting all of the astronomy data into a SQL Server database, preserving as much provenance as possible, and making the data as accessible and queryable as possible.

Database logical design

Figure 9: Dr. Alex Szalay

The processed image data were stored in databases. The logical database design consisted of photographic and spectrographic objects. They were organized into a pair of snowflake schemas. Sub-setting views and many indices gave convenient and fast access to the conventional subsets (such as stars and galaxies). Procedures and indices were defined to make spatial lookups even more convenient and faster.
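In simplified form, the pair of schemas might be sketched as DDL like the following. The column lists are illustrative only, not the real SDSS schema – the actual photoObj table has hundreds of attributes:

```sql
-- Hypothetical, heavily abbreviated sketch of the two central fact tables.
CREATE TABLE photoObj (
    objID     bigint   NOT NULL PRIMARY KEY,  -- photometric object ID
    fieldID   bigint   NOT NULL,              -- field that contains the object
    ra        float    NOT NULL,              -- right ascension (degrees)
    [dec]     float    NOT NULL,              -- declination (degrees)
    type      smallint NOT NULL,              -- e.g. 3 = galaxy, 6 = star
    psfMag_r  real     NOT NULL               -- r-band PSF magnitude
);

CREATE TABLE specObj (
    specObjID bigint   NOT NULL PRIMARY KEY,  -- spectroscopic object ID
    objID     bigint   NOT NULL,              -- joins back to photoObj.objID
    z         real     NOT NULL               -- measured redshift
);

-- A sub-setting view of the kind the text mentions:
CREATE VIEW Galaxy AS
    SELECT * FROM photoObj WHERE type = 3;    -- photometrically classified galaxies
```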

Database physical design

SkyServer initially took a simple approach to database design (see Figure 11 below) and it worked right from the beginning. The design counted on the SQL storage engine and the query optimizer to make all the intelligent decisions about data layout and data access. As Alex Szalay put it: “Great query optimizer made all the difference. Even ‘the worst’ query plans were actually quite good!”

Figure 11: The photoObj table at left is the center of one star schema describing photographic objects. The specObj table at right is the center of a star schema describing spectrograms and the extracted spectral lines. The photoObj and specObj tables are joined by objectId. Not shown are the dataConstants table that names the photoObj flags and tables that support web access and data loading.

“Indexing the Sky”

To speed up access, the base tables were heavily indexed (these indices also benefited view access). In addition to the indices, the database design includes a fairly complete set of foreign key declarations to ensure that every profile has an object, every object is within a valid field, and so on. The design also insisted that all fields were non-null. These integrity constraints were invaluable tools in detecting errors during loading, and they aided tools that automatically navigated the databases.
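In T-SQL, such declarations look roughly like this (the constraint and index names are hypothetical, not the actual SkyServer DDL):

```sql
-- Foreign key: every spectrum must point at a valid photometric object.
ALTER TABLE specObj
    ADD CONSTRAINT fk_specObj_photoObj
    FOREIGN KEY (objID) REFERENCES photoObj (objID);

-- Indexes serving the most common predicates directly:
CREATE INDEX ix_photoObj_radec    ON photoObj (ra, [dec]);       -- positional lookups
CREATE INDEX ix_photoObj_type_mag ON photoObj (type, psfMag_r);  -- "galaxies brighter than..."
```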

Beyond the filegroup striping (to automatically get the sum of the disk bandwidths without any special user effort), SkyServer used, for the most part, all of the SQL Server default values; there was not much special tuning. This is the hallmark of SQL Server – the system aims for great out-of-the-box performance, and the SkyServer project has been a true testimonial to that goal.

Spatial data access

Figure 12: Hierarchical triangular mesh

“Spatial was special.” Astronomers are particularly interested in executing spatial queries to study galactic clustering and the large-scale structure of the universe. The common theme in the SDSS experience was that spatial concepts could be embedded in a relational framework in a very simple manner. To make spatial area queries run quickly, the SDSS team integrated the hierarchical triangular mesh (HTM) code with SQL Server, which became a new “spatial access method” in the engine. HTM is a method to subdivide the surface of a sphere into spherical triangles of similar, but not identical, shapes and sizes. It is basically a quad-tree that is particularly good at supporting searches at different resolutions, from arc seconds to hemispheres. The HTM library was an external stored procedure wrapped in a table-valued stored procedure spHTM_Cover(<area>).

So all users had to do was invoke a procedure call similar to this: select * from spHTM_Cover('Circle J2000 12 5.5 60.2 1'), which returns a table with four rows, each row defining the start and end of a 12-deep HTM triangle, as below.

HTMIDstart                      HTMIDend
3,3,2,0,0,1,0,0,1,3,2,2,2,0     3,3,2,0,0,1,0,0,1,3,2,2,2,1
3,3,2,0,0,1,0,0,1,3,2,2,2,2     3,3,2,0,0,1,0,0,1,3,2,2,3,0
3,3,2,0,0,1,0,0,1,3,2,3,0,0     3,3,2,0,0,1,0,0,1,3,2,3,1,0
3,3,2,0,0,1,0,0,1,3,2,3,3,1     3,3,2,0,0,1,0,0,1,3,3,0,0,0

Another optimization technique used by SkyServer was the zoning idea (segmenting space into zone buckets and then segmenting zones by an offset). The main idea behind zoning was to push the logic entirely into SQL (the zone code was all native SQL), which allowed the query optimizer to do a very efficient job of filtering the objects. In particular, the zone design gave a three-fold speedup for the table-valued functions.
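A hedged sketch of the zoning idea in plain SQL (table and parameter names are illustrative, not the actual SkyServer code): each object is assigned zoneID = floor((dec + 90) / zoneHeight), and a neighbors search then reduces to simple range predicates that the optimizer can serve with ordinary B-tree indexes.

```sql
-- Hypothetical neighbors query over a precomputed zone table
-- (zoneID, objID, ra, dec), with zoneID = floor((dec + 90) / @zoneHeight).
DECLARE @zoneHeight float = 0.05;  -- zone height in degrees
DECLARE @r float = 0.01;           -- search radius in degrees (<= @zoneHeight)

SELECT o1.objID AS obj1, o2.objID AS obj2
FROM   ZoneTable o1
JOIN   ZoneTable o2
  ON   o2.zoneID BETWEEN o1.zoneID - 1 AND o1.zoneID + 1  -- own zone plus offsets
 AND   o2.ra  BETWEEN o1.ra  - @r AND o1.ra  + @r         -- coarse box filter
 AND   o2.dec BETWEEN o1.dec - @r AND o1.dec + @r
WHERE  o1.objID < o2.objID;                               -- report each pair once
```

A final exact distance test would normally follow the coarse box filter; the point is that everything above is plain SQL the optimizer can turn into index range scans.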

CLR support

Integration of the .NET common language runtime (CLR) with SQL Server in 2005 enabled astronomers to implement user code that runs inside the database server process. CLR integration was a particularly important feature for SDSS, as it gave astronomers the ability to write astronomy-specific logic in the form of user-defined functions, aggregates and stored procedures to build critical science functionality and run the compiled code in the database. As Alex put it, “Support for object-oriented types made a dramatic change for SkyServer.”
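As a hedged sketch, surfacing a CLR routine takes two T-SQL statements; the assembly, class and method names below are hypothetical, not the actual SDSS code:

```sql
-- Register a compiled .NET assembly inside the server...
CREATE ASSEMBLY SphericalLib
    FROM 'C:\lib\SphericalLib.dll'
    WITH PERMISSION_SET = SAFE;
GO
-- ...then expose one of its static methods as an ordinary scalar function.
CREATE FUNCTION dbo.fDistanceArcMin (@ra1 float, @dec1 float,
                                     @ra2 float, @dec2 float)
RETURNS float
AS EXTERNAL NAME SphericalLib.[Spherical.Geometry].DistanceArcMin;
```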

SQL queries

Astronomers wanted a tool that would be able to quickly answer questions like: “find asteroid candidates” or “find other objects like this one”, which originally gave the motive to build the SQL-based backend. Indeed, right from the beginning Jim Gray asked Alex Szalay to define 20 typical queries astronomers might want to ask and then together they designed the SkyServer database to answer those queries. The anecdote is that the conversation went as follows:

Jim: What are the 20 questions you want to ask?
Alex: Astronomers want to ask anything! Not just 20 queries.
Jim: Ok, start with 5 queries.
[it took Alex 30 minutes to write them all down]
Jim: Ok, add another 5 queries.
[it took Alex 1 hour to write them all down]
Jim: Ok, now add another 5 queries.
[Alex gave up and went home to think about them]

Alex (said later): In 1.5 hours, Jim taught me a lot of humility!

Alex (said later): It also taught us the importance of long-tail distribution and how to prioritize.

The queries corresponded to typical tasks astronomers would do. Translating the queries into SQL required a good understanding of astronomy, a good understanding of SQL, and a good understanding of the databases. As Alex put it: “We were surprised and pleased to discover that all 20 queries had fairly simple SQL equivalents.” Below is one of the query examples used in SkyServer to detect asteroids:

Q: Provide a list of moving objects consistent with an asteroid.

 select objID,                                              -- return object ID
        sqrt(power(rowv,2) + power(colv,2)) as velocity,    -- velocity
        dbo.fGetUrlExpId(objID) as Url                      -- url of image to examine it
 into   ##results
 from   PhotoObj                                            -- check each object
 where  (power(rowv,2) + power(colv,2)) between 50 and 1000 -- square of velocity
   and  rowv >= 0 and colv >= 0                             -- negative values indicate error

This is a sequential scan of the PhotoObj table that evaluates the predicate on each of the objects. It finds asteroid candidates. Here is a picture of one such object:

Asteroid Candidate

The query above returns ‘slow-moving’ objects. To find fast-moving objects, one can write a slightly different query that looks for streaks in the sky that line up. These streaks are not close enough together to be identified as a single object.

SELECT r.objID as rId, g.objID as gId,
       dbo.fGetUrlExpEq(g.ra, g.dec) as url
FROM   PhotoObj r, PhotoObj g
WHERE  r.run = g.run and r.camcol = g.camcol
       and abs(g.field - r.field) < 2                     -- nearby
       -- the red selection criteria
       and ((power(r.q_r,2) + power(r.u_r,2)) > 0.111111)
       and r.fiberMag_r between 6 and 22
       and r.fiberMag_r < r.fiberMag_g
       and r.fiberMag_r < r.fiberMag_i
       and r.fiberMag_r < r.fiberMag_u
       and r.fiberMag_r < r.fiberMag_z
       and r.parentID = 0
       -- the green selection criteria
       and ((power(g.q_g,2) + power(g.u_g,2)) > 0.111111)
       and g.fiberMag_g between 6 and 22
       and g.fiberMag_g < g.fiberMag_r
       and g.fiberMag_g < g.fiberMag_i
       and g.fiberMag_g < g.fiberMag_u
       and g.fiberMag_g < g.fiberMag_z
       and g.parentID = 0
       -- the matchup of the pair
       and sqrt(power(r.cx-g.cx,2) + power(r.cy-g.cy,2) + power(r.cz-g.cz,2)) * (10800/PI()) < 4.0
       and abs(r.fiberMag_r - g.fiberMag_g) < 2.0
 
And you can also add a third query:

select top 10 ra, dec, (rowv*rowv + colv*colv) as velocityVector, *
from   PhotoObj
where
  -- object is not SATURATED | BRIGHT | BLENDED, and is DEBLENDED_AS_MOVING
  (flags & (cast(0x0000000000040000 as bigint) |
            cast(0x0000000000000002 as bigint) |
            cast(0x0000000000000008 as bigint))) = 0
  and (flags & cast(0x0000000100000000 as bigint)) > 0
  -- PSF magnitude (psfMag_r) between 14.5 and 21.5
  and type = 6
  and (psfMag_r > 14.5)
  and (psfMag_r < 21.5)
  -- velocity vector larger than 0.05 deg/day and smaller than 0.5 deg/day
  and (rowv*rowv + colv*colv > 0.0025)
  and (rowv*rowv + colv*colv < 0.25)
  and dec > -1.25
  and dec < 1.25
  -- limit to a specific part of the Stripe 82 region
  and (ra > 300 or ra < 60)
order by (rowv*rowv + colv*colv) desc

Here is a picture of one such ‘faster-moving’ object:

Fast Moving Objects

When asked about T-SQL, one of the astronomers said that it was “almost like English” to them, and they could easily understand what was going on. As another astronomer put it: “SQL can serve as a ‘helpdesk’ – if somebody has a problem, another person can answer the question when the query is sent to them.” A graphical query plan, viewable before submitting an MS-SQL query, provided details on which query steps would take the largest fraction of execution time and, in most cases, gave users all the information necessary to improve query performance.

Hardware configuration

The configuration for multiple-release support in SDSS is shown in Figure 13 below. The DR12 (latest release) DB servers have the following hardware configuration today:

  • Total data size: 12 TB
  • Number of filegroups: Two (Primary has 8 files, Secondary has one file, see Figure 13)
  • Servers: Four identical nodes with one copy of DB on each
  • System manufacturer: Supermicro
  • System type: x64-based PC
  • Processor(s): Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (2 processors)
  • Logical cores: 32
  • Physical CPUs: 2
  • Total physical memory: 128 GB
  • HDD size: 3.0 TB
  • Total HDDs: 24

Figure 13: DR12 Hardware configuration at JHU for SDSS Servers.

In the production cluster, there are three to four DB servers per release, so that the public and collaboration users can be adequately supported, and the queries can be load-balanced on different boxes. Quick and long queries are pointed to separate servers.

Figure 14: DR12 database properties

Conclusion

“When we started working on SDSS, we had fun, and we were hoping people will use it. Working with SQL Server was fun, but astronomy is also fun” – said Alex Szalay. Science is increasingly driven by data (big and small), combined with changing sociology – surveys today are analyzed by individuals from all over the world, not just by a few experts. The move from hypothesis-driven to data-driven science is a reality, and the SQL Server-powered SDSS is the first true “telescope” for data that has made the vision of a ‘DataScope’ a reality.

 

Joseph & Rimma
Follow Joseph on Twitter at @josephsirosh and Rimma at @rimmanehme.

Categories: Database

MySQL Collaboration in the Cloud

Database Journal News - Thu, 03/10/2016 - 09:01

When you think of Cloud services for database administration, Database as a Service (DBaaS) is what tends to comes to mind, but one of the ways that the Cloud has had an enormous impact on our work productivity is in the area of collaboration.

Categories: Database

Microsoft's SQL Server Database Is Heading to Linux

Database Journal News - Wed, 03/09/2016 - 05:25

Next year, Microsoft plans to release a version of its hybrid-cloud-friendly database software for the open-source operating system.

Categories: Database

Oracle Unleashes New Innovations in Data Analytics with Free and Open Interfaces to On-chip Accelerators

Oracle Database News - Tue, 03/08/2016 - 23:37
Press Release Oracle Unleashes New Innovations in Data Analytics with Free and Open Interfaces to On-chip Accelerators Analytics Added to Open Platform for Security to Enable Breakthroughs in Data Analytics Using Oracle’s Software in Silicon Technology

Redwood Shores, Calif.—Mar 8, 2016

Empowering developers to build the next generation of big data analytics platforms, Oracle released a free and open API and developer kit for its Data Analytics Accelerator (DAX) in SPARC processors through its Software in Silicon Developer Program. The program also lets developers view sample use cases and program code, and to test and validate how DAX can speed up analytics applications while test driving Software in Silicon Technology.   “High performance data analytics are critical to a range of key use cases like click stream data, social media sentiment, buying behavior, and more,” said John Fowler, Executive Vice President of Systems, Oracle. “Through our Software in Silicon Developer Program, developers can now apply our DAX technology to a broad spectrum of previously unsolvable challenges in the analytics space because we have integrated data analytics acceleration into processors, enabling unprecedented data scan rates of up to 170 billion rows per second.”   With the release of the 32-core, 256 thread SPARC M7 processor, Oracle created a number of Software in Silicon features by building in higher-level software functions into processor design. One of the most exciting new capabilities introduced in the SPARC M7 processor as part of the Software in Silicon innovations in SPARC M7 is DAX, which delivers unprecedented analytics efficiency.   Data Analytics Accelerator on SPARC M7 DAX adds processing capability that can run selective functionality – Scan, Extract, Select and Translate – at incredibly fast speeds. The SPARC M7 DAX accelerates these analytics primitives on a dedicated physical unit separate from the standard compute cores.   Initial software development enabled DAX for Oracle Database 12c, and all the applications above it. This extends analytics acceleration to all Oracle, ISV, and customer applications.   
Large-scale scan and filter operations are made trivial by transparent use of 32 dedicated DAX co-processors in the SPARC microprocessor, which operate at memory bus speeds of up to 160 GB/s between cache and DRAM. These accelerators, implemented for the first time on-chip for the highest level of performance and efficiency, can now be used by developers through APIs in Oracle Solaris 11 and applied to a variety of use cases.

As one notable example of Data Analytics Accelerator integration into machine learning and Big Data use cases, Oracle engineers have shown that the DAX can significantly accelerate Apache Spark, which has become one of the most popular methods for processing large data sets. Through this project, engineers used the DAX with Apache Spark to take one billion rows of data in memory and filter it into a 3D cube so fast that interactive data analytics are now possible.

SPARC M7 and DAX design advantages include:
 
  • Industry-leading delivered memory bandwidth: at an industry-leading 160GB/s memory bandwidth, the SPARC M7 processor provides enough capacity to feed both the DAX units as well as processor cores.
     
  • DAX offload: frees the processor cores for other processing.
     
  • Efficient decompressing combined with in-memory processing: putting decompression in the DAX unit is much faster than software implementations. Designing decompression with scanning means needless back and forth memory transfers are avoided. Results from the DAX are entered into the CPU cache for better CPU efficiency.
     
  • DAX range comparisons: many real-world database analytics queries are written to find data transacted between certain dates, or product cost ranges, etc. The DAX processes range comparisons at the same rate as individual comparisons. Other processors require additional computational time for each comparison.
     
  • Avoiding cache pollution: the DAX does much of its computation without needing to store intermediate data in a cache, freeing the CPU’s cache for other processing.
Partnerships with Developer Community and Leading Higher Education Institutions

Oracle continues to deliver traditional processor enhancements to improve performance of traditional workloads, with more than 20 world-record results over the competition. Software in Silicon can deliver previously unattainable step-function improvements required in areas such as security and data analytics by embedding the functionality to handle particular algorithms on the processor with greater performance and efficiency.

Oracle has also published several use cases with code samples to maximize developer productivity and expedite projects, as well as a detailed example of DAX integration with Apache Spark. Resources can be accessed now via the Oracle Software in Silicon Cloud, a freely available cloud service for developers and researchers that provides direct access to this technology. Additionally, Oracle is partnering with leading higher-education institutions, such as Brown University, on innovative research projects with Software in Silicon.

“We are currently working on characterizing the performance of DAX across a suite of modern in-memory data layout schemes. After completing this study, we will work on the optimal use of DAX in accelerating interactive data exploration and visualization with the Tupleware main-memory database system and S-Store real-time stream processing system,” stated Ugur Cetintemel, Chairman of the Computer Science Department, Brown University. “Through these studies, we will quantify the performance and scalability of M7 and DAX on real workloads involving sophisticated search and machine learning over large data sets.”

Open APIs for Oracle’s Data Analytics Accelerator are now available for free via the Software in Silicon Cloud. Developers can join the community now to get started on developing the next generation of big data and analytics applications.

Contact Info

Diana Wong
Oracle
+1.650.506.2757
diana.wong@oracle.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Categories: Database, Vendor

Announcing SQL Server on Linux

Today, we announced our plans to bring SQL Server to Linux. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.  Read Scott Guthrie’s blog post to learn more.

Categories: Database

First release candidate of SQL Server 2016 now available

We are excited to announce that our first SQL Server 2016 release candidate, SQL Server 2016 Release Candidate (RC) 0, is now available for download. This is an important milestone in the release of SQL Server 2016, as this marks feature completion for most dimensions of the product. It also means a very rich set of capabilities is now available, including real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technologies, and new hybrid scenarios allowing you to securely stretch data to the cloud.

As part of our new rapid release model, we will be publishing multiple release candidates on a more frequent cadence. SQL Server 2016 RC 0, which you can try in your development and test environments, is available for download or in an Azure VM today.

In SQL Server 2016 RC 0, we made enhancements to several features, including:

  • Stretch Database service: With this preview, SQL Server 2016 users can dynamically stretch their on-premises warm and cold data to an Azure service that is engineered from the ground up for compute capacity and virtually infinite storage.
  • Enhancements to In-Memory OLTP: In-Memory OLTP, which dramatically improves transaction processing performance, now has reduced downtime during upgrades.
  • Enhancements to SQL Server Analysis Services: A new Tabular Object model is released to allow easier development and maintenance through code against tabular models.
  • Enhancements to SQL Server Reporting Services: You can do much more in this next preview of the new Reporting Services web portal, including subscribe to reports, manage shared datasets, manage data caching and refresh, and more.

For additional detail, please visit the detailed RC 0 technical overview blog post and the RC 0 Analysis Services blog post.

To learn more, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview or try the preview by using a virtual machine in Microsoft Azure and start evaluating the impact these new innovations can have for your business. To learn more about the SQL Server Stretch Database service preview, visit the product page in Azure.

Questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know with the Connect tool. We look forward to hearing from you!

Categories: Database

Technical Overview: SQL Server 2016 Release Candidate 0

The SQL Server engineering team is pleased to announce the availability of SQL Server 2016 Release Candidate 0. This is an important milestone in the release of SQL Server 2016, as it marks feature completion for most dimensions of the product and means a very rich set of capabilities are now available. These include: real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technologies, and new hybrid scenarios allowing you to securely stretch data to the cloud.

To learn more about the release, visit the SQL Server 2016 preview page. To experience the new, exciting features in SQL Server 2016 and the new rapid release model, download the preview and start evaluating the impact these new innovations can have for your business.

Questions? Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know using Microsoft’s Connect tool. We look forward to hearing from you.

For more information on this release, see SQL Server 2016 Release Notes and What’s New in SQL Server 2016.

Database scoped configuration

This release introduces a new database-level object that holds optional configuration values affecting application behavior at the database level. This support is available in both SQL Server 2016 Release Candidate (RC0) and SQL Database V12 through the new ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL) statement, which modifies the default SQL Server 2016 Database Engine behavior for a particular database. Note that there is no generic mechanism for setting these configurations at database creation time.

These options are:

  • Clear procedure cache.
  • Set the MAXDOP parameter to an arbitrary value (1, 2, …) for the primary database, based on what works best for that particular database, and set a different value (e.g. 0) for all secondary databases (e.g. those used for reporting queries).
  • Set the query optimizer cardinality estimation model independently of the database compatibility level.
  • Enable or disable parameter sniffing at the database level.
  • Enable or disable query optimization hotfixes at the database level.

The following T-SQL Syntax is supported:

ALTER DATABASE SCOPED CONFIGURATION
{
    { [ FOR SECONDARY ] SET <set_options> }
    | CLEAR PROCEDURE_CACHE
}
[;]

<set_options> ::=
{
    MAXDOP = { <value> | PRIMARY }
    | LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }
    | PARAMETER_SNIFFING = { ON | OFF | PRIMARY }
    | QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY }
}

The new functionality is supported for both Azure SQL Database and SQL Server.
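As a sketch of how the options above might be combined (the values chosen here are illustrative, not recommendations):

```sql
-- Tune a database used for both OLTP (primary) and reporting (secondaries).
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;                  -- primary
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 0;    -- secondaries
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = ON;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;

-- Flush this database's plan cache without affecting other databases.
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
```

Because each statement scopes to the current database, different databases on the same instance can carry different settings.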

PolyBase enhancements in RC0 include:

  • Support for the latest Cloudera distribution CDH5.5 on Linux
  • Support for public containers and blobs in Azure blob storage

SQL Server Management Studio improvements in this release include:

In-Memory OLTP new features in RC0:

  • Parallel scan of non-clustered indexes. All indexes on memory-optimized tables now support parallel scan, leading to increased performance of analytics-style queries that scan large sets of data.
  • Reduced downtime during upgrade: upgrade from earlier builds of SQL Server 2016 no longer runs database recovery; the time it takes to run upgrade is no longer a factor of the data size. For upgrade and attach/restore from SQL Server 2014 the number of database restarts is reduced to one: during upgrade, the database is restarted once, thus the time it takes to upgrade is a factor of the data size.
  • Built-in function @@SPID is now supported in natively compiled T-SQL modules and in constraints on memory-optimized tables.
  • Log-optimized and parallel ALTER: most ALTER TABLE operations now run in parallel and result in only the metadata change to be written to the transaction log, reducing the time required to run the ALTER operation.
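As a sketch of the @@SPID support mentioned above, a natively compiled module can now return the current session ID (the procedure name is hypothetical; SQL Server 2016 no longer requires an EXECUTE AS clause on natively compiled procedures):

```sql
-- Minimal natively compiled procedure using @@SPID.
CREATE PROCEDURE dbo.usp_WhoAmI
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- @@SPID was previously unavailable inside natively compiled modules.
    SELECT @@SPID AS session_id;
END;
```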
Improved support for large number of foreign key references

With CTP3.3, we added support for efficient DELETE operations for tables with up to 10,000 incoming foreign key references. With RC0, we are adding support for the following:

  • Efficient UPDATE operations on tables with up to 10,000 incoming foreign key references.
  • Efficient DELETE and UPDATE operations for partitioned tables with large numbers of foreign key references.

The following limitations still apply:

  • MERGE operations are not supported for tables with large numbers of foreign key references
  • Large numbers of foreign key references are not currently supported for memory-optimized tables, or stretch database.
New Built-in Table-Valued Function STRING_SPLIT

STRING_SPLIT is a T-SQL function that splits an input character expression by a specified separator and returns the result as a table.

Syntax:

STRING_SPLIT ( string, separator )
returns: table with a column named value

Examples:

Split a space-separated literal string
SELECT * FROM STRING_SPLIT('Lorem ipsum dolor sit amet.', ' ')

The result is:

value
—–
Lorem
ipsum
dolor
sit
amet.

Split a comma-separated value string in a column
SELECT id, title, value
FROM Article
CROSS APPLY STRING_SPLIT(tags, ',')
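Building on the column example above, CROSS APPLY also lets you filter rows by a single split value; the `Article` table and `tags` column are from the example above, and the tag value here is illustrative:

```sql
-- Find articles carrying a given tag.
SELECT a.id, a.title
FROM Article AS a
CROSS APPLY STRING_SPLIT(a.tags, ',') AS s
WHERE s.value = 'sql-server';
```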

Support for import and export of UTF-8 data

Data stored in a UTF-8 encoded file can now be imported into SQL Server and exported from SQL Server into a UTF-8 encoded file, using BULK INSERT T-SQL command and bcp command line utility. As a part of the import process, the UTF-8 encoded strings are read from the file, and converted and stored as native data types that correspond to the target columns in SQL Server tables. As a part of the export process, data are converted from native data types into UTF-8 encoded strings and written into the file. All native data types are supported except Xml, SqlVariant, Text, NText, and Image. The UTF-8 code page is specified by providing the following arguments:

  • CODEPAGE = '65001', DATAFILETYPE = 'Char', in case of BULK INSERT command,
  • -c -C 65001, in case of bcp utility.

Examples:

Importing with BULK INSERT into SQL Server from a file
BULK INSERT MyTable
FROM 'path\file.csv'
WITH ( FIELDTERMINATOR = ',', CODEPAGE = '65001', DATAFILETYPE = 'Char')

Importing with bcp utility into SQL Server from a file
bcp MyTable in "path\file.csv" -T -t , -c -C 65001

Exporting with bcp utility from SQL Server to a file
bcp MyTable out "path\file.csv" -T -t , -c -C 65001

AlwaysOn Availability Groups add support for:

  • Distributed Availability Groups: It is now possible to configure replication between availability groups. Note: these different availability groups can live on different Windows Clusters.
  • Streaming seeding of replicas: New secondary replicas in an Availability Group can now be seeded directly from the primary replica without the need for database backup/restore.
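Streaming seeding is enabled per replica with the SEEDING_MODE option; the sketch below assumes two hypothetical nodes and database (all names and endpoint URLs are illustrative):

```sql
-- Create an availability group whose secondary is seeded over the network,
-- with no manual backup/restore of the database.
CREATE AVAILABILITY GROUP MyAG
FOR DATABASE MyDatabase
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC,
                      SEEDING_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC,
                      SEEDING_MODE = AUTOMATIC);

-- On the secondary, allow the group to create the seeded database:
ALTER AVAILABILITY GROUP MyAG GRANT CREATE ANY DATABASE;
```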
SQL Server Analysis Services (SSAS)

This month’s update to Analysis Services delivers support for display folders in Tabular models; in addition, any model created with the new SQL Server 2016 compatibility level can now be used with PowerShell and SSIS. Finally, the new Tabular Object Model is released to allow easier development and maintenance of tabular models through code. See the Analysis Services blog for more details.

SQL Server Reporting Services (SSRS)

Much more functionality is now available in this next preview of the new Reporting Services web portal. You can now:

  • Subscribe to reports to receive them in your email inbox, download, rename, move, and delete reports and other catalog items
  • Manage shared datasets
  • Manage data caching and refresh
  • Create “linked” reports with different default parameter values
  • Pin to a dashboard in a group when pinning a report chart, gauge, map, or image to a Power BI dashboard
  • Choose the currency for monetary values when creating a KPI or mobile report
SQL Server Integration Services (SSIS)

SSIS projects in Visual Studio SQL Server Data Tools (SSDT) can now target the 2012, 2014, or 2016 version of SQL Server by switching a single project-level property. SSIS now supports setting a server-wide customized logging level. Finally, the SSIS HDFS connector now supports the ORC file format, in addition to CSV and Avro. See the Integration Services blog for more details.

Master Data Services (MDS) adds a New Derived Hierarchy Management web page. See the Master Data Services blog for more details.

For more information on this release, see SQL Server 2016 Release Notes and What’s New in SQL Server 2016.

Categories: Database

Announcing SQL Server Stretch Database service preview

Microsoft is pleased to announce a preview of the new SQL Server Stretch Database service. With this preview, SQL Server 2016 users can dynamically stretch their on-premises warm and cold data to an Azure service that is engineered from the ground up for compute capacity and virtually infinite storage. Unlike typical cold data management solutions, your data is always at hand and secure by design. You can now keep as much data as you need indefinitely without long maintenance windows or the high costs of traditional enterprise storage.

The Stretch Database service makes remote query processing possible by providing compute and storage in a way that’s completely transparent to the application; using Stretch Database typically doesn’t require any application changes. If added security is required you can use Stretch Database with new Always Encrypted technology, where sensitive data is encrypted before sending to Azure and the encryption key always remains on-premises—extending data in a more secured manner for greater peace of mind.
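SSMS provides a wizard for enabling Stretch, but the underlying steps can also be sketched in T-SQL (the instance, database, table, and credential names below are hypothetical; the exact option syntax may differ across preview builds):

```sql
-- 1. Allow this instance to use remote data archiving.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- 2. Point the database at the remote Azure endpoint.
ALTER DATABASE MyDatabase
    SET REMOTE_DATA_ARCHIVE = ON
        (SERVER = N'mystretchserver.database.windows.net',
         CREDENTIAL = [MyStretchCredential]);

-- 3. Start migrating a cold table's rows to the remote store.
ALTER TABLE dbo.ColdOrders
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```

Queries against `dbo.ColdOrders` continue to work unchanged; the engine transparently spans local and remote rows.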

How to use Stretch Database with SQL Server 2016

Right now, with SQL Server 2016 RC0, when you enable stretching to Azure from SQL Server Management Studio, the default behavior is to stretch to Azure SQL Database. If you would like to preview the new Stretch Database service, you can enroll in the preview and we’ll provide instructions for configuring your SQL Server 2016 database for this new service. In future builds of SQL Server 2016, the Stretch Database service will be the default target location for stretching.

Getting started with Stretch Database

If you’re ready to learn more about Stretch Database, read our getting started articles about Stretch Database in SQL Server books online.

If you’re ready to sign up for the Stretch Database preview, register for preview access. To trial SQL Server 2016, you can download an evaluation copy here or try the preview by using a virtual machine in Microsoft Azure.

Questions?

Join the discussion of the new SQL Server 2016 capabilities at MSDN and StackOverflow. If you run into an issue or would like to make a suggestion, you can let us know on Connect. We look forward to hearing from you!

Categories: Database