Software Development News: .NET, Java, PHP, Ruby, Agile, Databases, SOA, JavaScript, Open Source

Methods & Tools



Using Docker to develop with Couchbase

NorthScale Blog - 15 hours 42 min ago

Rafael Ugolini is a full stack software developer currently based in Brussels, Belgium. He has been working with software development for more than 10 years and is lately focused on designing web solutions and developing using Python and JavaScript. Rafael Ugolini is Senior Software Developer at Famoco.

Introduction

Docker is a great project that helps developers around the world run applications in containers. Containers not only help you ship software faster, they also put an end to the famous “it works on my machine” excuse. In this article I will explain how to create a modular Couchbase image that gives you a ready-to-go database without any Web UI interaction.

All the code is available online here.


The first step is to create the Dockerfile.

Couchbase Version

FROM couchbase/server:enterprise-4.6.1

This example is based on Couchbase Server Enterprise 4.6.1, but feel free to change it to the specific version you run in your environment.

Memory Configuration


ENV MEMORY_QUOTA "256"
ENV INDEX_MEMORY_QUOTA "256"
ENV FTS_MEMORY_QUOTA "256"

All the values here are in MB (256 MB is the minimum quota each service accepts; raise these defaults to fit your environment):

– MEMORY_QUOTA: per node data service RAM quota

– INDEX_MEMORY_QUOTA: per node index service RAM quota

– FTS_MEMORY_QUOTA: per node full-text search service RAM quota


ENV SERVICES "kv,n1ql,index,fts"

These are the services that will be available on the node being created:

– kv: Data

– n1ql: Query

– index: Index

– fts: Full-Text Search


ENV USERNAME "Administrator"
ENV PASSWORD "password"

Username and password to be used in Couchbase Server.

Cluster Options


ENV CLUSTER_HOST ""
ENV CLUSTER_REBALANCE ""

These options are only used if you want to add more than one node to the cluster:

– CLUSTER_HOST: hostname of the cluster for this node to join

– CLUSTER_REBALANCE: set to “true” if you want the cluster to rebalance after the node joins




The Couchbase Server image already ships with an ENTRYPOINT script, and we don’t want to override it. The trick here is to copy our own script into the image, run Couchbase Server in the background, and, after configuring the node, attach the script back to the original ENTRYPOINT.


The ENTRYPOINT is used in combination with the original script from the Couchbase Server image. Let’s go line by line to understand how it works.

Initialize Couchbase Server

# Monitor mode (used to attach into couchbase entrypoint)
set -m
# Send it to background
/entrypoint.sh couchbase-server &

First we use set -m to enable job control; processes running in the background (like the original ENTRYPOINT) run in a separate process group. This option is turned off by default in non-interactive shells, such as scripts.

Util Functions

# Check if couchbase server is up
check_db() {
  # /pools answers on the admin port (8091) once the server is up
  curl --silent http://127.0.0.1:8091/pools > /dev/null
  echo $?
}

This function is used to check when Couchbase Server starts answering HTTP calls.

# Variable used in echo
i=1
# Echo with an incrementing step number
numbered_echo() {
  echo "[$i] $@"
  i=`expr $i + 1`
}

This is just a utility function that prefixes every echo with an incrementing number, so the steps taken by the script are counted automatically.

# Parse JSON and get nodes from the cluster
read_nodes() {
  cmd="import sys,json;"
  cmd="${cmd} print(','.join([node['otpNode']"
  cmd="${cmd} for node in json.load(sys.stdin)['nodes']"
  cmd="${cmd} ]))"
  python -c "${cmd}"
}

In order to parse the output of the Couchbase Server API, I’m using a function that runs Python to read STDIN, parse it as JSON, and extract the names of the Couchbase nodes. This is used for rebalancing.
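To see what that one-liner does, here is the same extraction as a standalone Python sketch, fed with a minimal, made-up /pools/default response (the otpNode values are illustrative; real responses carry many more fields per node):

```python
import json

# A trimmed, made-up example of what /pools/default returns.
sample = json.dumps({
    "nodes": [
        {"otpNode": "ns_1@node1.cluster"},
        {"otpNode": "ns_1@node2.cluster"},
    ]
})

# Same logic as the shell one-liner: read JSON, join the otpNode names.
names = ",".join(node["otpNode"] for node in json.loads(sample)["nodes"])
print(names)  # ns_1@node1.cluster,ns_1@node2.cluster
```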

Configure the Node

# Wait until it's ready
until [[ $(check_db) = 0 ]]; do
  >&2 numbered_echo "Waiting for Couchbase Server to be available"
  sleep 1
done

echo "# Couchbase Server Online"
echo "# Starting setup process"

The first step is to wait until the server is ready; thanks to numbered_echo, you can also see how many iterations it took before Couchbase Server made its API calls available.
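The same wait-until-ready idea can be sketched in Python. The endpoint, retry count, and delay below are illustrative assumptions, not values from the original script:

```python
import time
import urllib.error
import urllib.request

def wait_for_couchbase(url, retries=30, delay=1.0):
    """Poll `url` until it answers, returning the number of attempts used."""
    for attempt in range(1, retries + 1):
        try:
            urllib.request.urlopen(url, timeout=2)
            return attempt
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
    raise TimeoutError(f"{url} not reachable after {retries} attempts")

# Example: wait_for_couchbase("http://node1.cluster:8091/pools")
```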

HOSTNAME=`hostname -f`

# Reset steps
i=1

Then we set a variable HOSTNAME to be used in all the API calls we do and we also reset the counter from numbered_echo by setting it to 1.

numbered_echo "Initialize the node"
curl --silent "http://${HOSTNAME}:8091/nodes/self/controller/settings" \
-d path="/opt/couchbase/var/lib/couchbase/data" \
-d index_path="/opt/couchbase/var/lib/couchbase/data"

numbered_echo "Setting hostname"
curl --silent "http://${HOSTNAME}:8091/node/controller/rename" \
-d hostname=${HOSTNAME}


The first thing to do is set up the disk storage configuration; then we set the node’s hostname.
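For readers who prefer Python to curl, the two calls above can be sketched with the standard library. The endpoints and form fields mirror the script; actually sending the requests against a live node is left out:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_init_requests(hostname):
    """Prepare (but do not send) the two node-initialization requests."""
    base = f"http://{hostname}:8091"
    storage = Request(
        f"{base}/nodes/self/controller/settings",
        data=urlencode({
            "path": "/opt/couchbase/var/lib/couchbase/data",
            "index_path": "/opt/couchbase/var/lib/couchbase/data",
        }).encode(),
    )
    rename = Request(
        f"{base}/node/controller/rename",
        data=urlencode({"hostname": hostname}).encode(),
    )
    return storage, rename

storage, rename = build_init_requests("node1.cluster")
print(storage.full_url)  # http://node1.cluster:8091/nodes/self/controller/settings
```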

Joining a Cluster

if [[ ${CLUSTER_HOST} ]];then
numbered_echo "Joining cluster ${CLUSTER_HOST}"
curl --silent -u ${USERNAME}:${PASSWORD} \
"http://${CLUSTER_HOST}:8091/controller/addNode" \
-d hostname="${HOSTNAME}" \
-d user="${USERNAME}" \
-d password="${PASSWORD}" \
-d services="${SERVICES}" > /dev/null

If CLUSTER_HOST is set, the script will try to add the current container to the cluster.

if [[ ${CLUSTER_REBALANCE} ]]; then
  # "Unexpected server error" without the sleep 2
  sleep 2
  numbered_echo "Retrieving nodes"
  known_nodes=$(curl --silent -u ${USERNAME}:${PASSWORD} http://${CLUSTER_HOST}:8091/pools/default | read_nodes)

  numbered_echo "Rebalancing cluster"
  curl -u ${USERNAME}:${PASSWORD} \
    "http://${CLUSTER_HOST}:8091/controller/rebalance" \
    -d knownNodes="${known_nodes}"
fi


After adding the node to the cluster, the script also checks the CLUSTER_REBALANCE variable to see if it needs to rebalance the cluster automatically. This is where we use the Python function to read the nodes from the /pools/default endpoint, so their names can be passed as knownNodes.
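As a small illustration of what the rebalance call sends, here is the knownNodes form body built in Python; the node names are made up:

```python
from urllib.parse import urlencode

def rebalance_payload(otp_nodes):
    """Build the form body for /controller/rebalance from parsed node names."""
    # knownNodes must list every node currently in the cluster.
    return urlencode({"knownNodes": ",".join(otp_nodes)})

body = rebalance_payload(["ns_1@node1.cluster", "ns_1@node2.cluster"])
print(body)  # knownNodes=ns_1%40node1.cluster%2Cns_1%40node2.cluster
```

Note that urlencode percent-escapes the “@” and “,” characters, which is exactly what curl -d does not do for you; curl sends the raw string, and the server accepts either form.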

Not joining a cluster

numbered_echo "Setting up memory"
curl --silent "http://${HOSTNAME}:8091/pools/default" \
-d memoryQuota=${MEMORY_QUOTA} \
-d indexMemoryQuota=${INDEX_MEMORY_QUOTA} \
-d ftsMemoryQuota=${FTS_MEMORY_QUOTA}

Memory settings for the services.

numbered_echo "Setting up services"
curl --silent "http://${HOSTNAME}:8091/node/controller/setupServices" \
-d services="${SERVICES}"

Services to be used by the node.

numbered_echo "Setting up user credentials"
curl --silent "http://${HOSTNAME}:8091/settings/web" \
-d port=8091 \
-d username=${USERNAME} \
-d password=${PASSWORD} > /dev/null


Set up the credentials for the node.


# Attach to couchbase entrypoint
numbered_echo "Attaching to couchbase-server entrypoint"
fg 1

To end the script, we attach it to the original ENTRYPOINT.


To demonstrate how to use it, I will be using the image registered in Docker Hub, built from the code here.

Single node


docker run -ti --name couchbase-server-nosetup \
  -h node1.cluster \
  -p 8091-8093:8091-8093 \
  -p 11210:11210 \
  -p 4369:4369 \
  -p 21100-21299:21100-21299 \
  <image>  # image name omitted in the original; use the image built from the Dockerfile above

This runs a single node using the minimum required memory and the default credentials (Administrator/password) registered in the image. All the network ports Couchbase Server uses are exposed as well.

docker run -ti --name couchbase-server-nosetup \
  -h node1.cluster \
  -p 8091-8093:8091-8093 \
  -p 11210:11210 \
  -p 4369:4369 \
  -e USERNAME=admin \
  -e PASSWORD=adminadmin \
  -p 21100-21299:21100-21299 \
  <image>  # image name omitted in the original

The command above overrides some of the environment variables declared in the Dockerfile, in this case the credentials.


In this example, we will connect 3 nodes in a cluster.

docker network create couchbase


We must first create a Docker network named couchbase, to which all the nodes will be connected.

docker run -ti --name node1.cluster \
  -p 8091-8093:8091-8093 \
  -p 11210:11210 \
  -p 4369:4369 \
  -p 21100-21299:21100-21299 \
  -h node1.cluster \
  --network=couchbase \
  <image>  # image name omitted in the original

Then we create the first node.

docker run -ti --name node2.cluster \
  --network=couchbase \
  -h node2.cluster \
  -e CLUSTER_HOST=node1.cluster \
  -e CLUSTER_REBALANCE=true \
  <image>  # image name omitted in the original


Since all the network ports are exposed in the first node, it’s not necessary to expose them here.

Note that CLUSTER_HOST is set to node1.cluster, the hostname of the first node, and that CLUSTER_REBALANCE is set to true. Once the node is added to the cluster, the cluster will rebalance automatically.

docker run -ti --name node3.cluster \
  --network=couchbase \
  -h node3.cluster \
  -e CLUSTER_HOST=node1.cluster \
  <image>  # image name omitted in the original


Node 3 is also added to the cluster, but since CLUSTER_REBALANCE wasn’t set, the cluster will require a manual rebalance before the node becomes available.


This post is part of the Couchbase Community Writing Program

The post Using Docker to develop with Couchbase appeared first on The Couchbase Blog.

Categories: Architecture, Database

Engaging for growth: Introducing the industry’s first Engagement Database

NorthScale Blog - Tue, 05/23/2017 - 13:55

Today at Couchbase, we tread a new path by carving out a new category of database: the Engagement Database.

What is an Engagement Database, you ask me – a guy steeped in the technology side of the business? It’s something that addresses a need we discovered when speaking with our customers over the past year. We asked them, “What kind of applications are you building?” It became obvious that just being “NoSQL” didn’t address the challenges our customers were facing.

The applications our customers are focused on, as part of their organizations’ overall digital innovation initiatives, really come down to enhancing the customer experience. With consumers’ diminishing attention spans and increasing fickleness when it comes to final purchase decisions, customer experience is a fierce battleground on which today’s businesses are fighting.

With more and more businesses seeing customer experience equating to competitive advantage, new technologies continue to evolve to unleash data’s potential and create those amazing experiences. However, it’s a simple fact that databases – from transactional databases to the majority of NoSQL solutions – have not been optimized to support and nurture the interactions customers desire. And scale only makes the problem worse.

So far, attempts to solve this problem have led to organizations using a variety of different databases for specific use cases – leading to a difficult-to-manage ‘database sprawl.’ The solution is a comprehensive ‘engagement database,’ or one that allows enterprises to strategically and expeditiously serve up data when designing customer interactions and experiences – especially when they need to perform at scale across all channels and devices. This is the Couchbase Data Platform.

The Couchbase Data Platform was designed to harness the full potential of dynamic data, at any scale, across any channel or device. Built on the most powerful NoSQL technology, the Couchbase Data Platform – as the industry’s first Engagement Database – makes it simple to continually reinvent the customer experience. No other database provides the capabilities that are required to create these experiences from a single platform.

Today we also unveiled innovation behind the industry’s first Engagement Database: the latest Beta release of Couchbase Server 5.0 and preview of Couchbase Mobile 2.0, both part of the Couchbase Data Platform.  The enhanced product suite provides improved developer agility, query performance, and easier cluster management, all of which enable enterprises to build amazing customer engagement applications. Finally, the jewel in the crown is the fully integrated full text search capability–not just for server deployments in the cloud, but also for mobile devices on the edge. To summarize, key enhancements include:

Couchbase Server 5.0 Beta
  • Richer customer experiences with built-in full text search with independent scaling all in the same Couchbase cluster  
  • More responsive applications and efficient data management, with in-memory data sets (no disk needed) including support for query, indexing and high availability replication for in-memory data.
  • Simpler application development that leverages enhanced N1QL query monitoring and debugging, and technology leading adaptive indexing, and built-in multi data center programmability.  
  • Enhanced security options, including fine-grained role based access control (RBAC) support for users and applications.
  • More available applications, from fast node failover in seconds and automatic index replication.
  • Strong ecosystem improvements for ETL, and for public and hybrid cloud solutions
Couchbase Mobile 2.0 Preview
  • Richer customer experiences by providing built-in full text search for mobile applications.
  • Simplified application development with support for N1QL-like API, as well as a new API that provides built-in domain data modeling support
  • Agile development of collaborative, multi-user apps with automatic conflict management on the edge.
  • Faster apps with delta synchronization for more efficient data management from the edge to the cloud

We are very excited to share this news in front of hundreds of customers and partners at Couchbase Connect in New York this week. Stay tuned as we share additional feedback and reaction from attendees and industry pundits next week.

The post Engaging for growth: Introducing the industry’s first Engagement Database appeared first on The Couchbase Blog.

Categories: Architecture, Database

Oracle Utilities Unveils the Perfect Customer Platform for the Modern Utility

Oracle Database News - Tue, 05/23/2017 - 13:00
Press Release

Oracle Utilities Unveils the Perfect Customer Platform for the Modern Utility

New customer to meter offering slashes tech costs by 25% and combines full power of customer information systems and meter data management systems

CS Week – Fort Worth, Texas—May 23, 2017

Oracle today unveiled Oracle Utilities Customer to Meter, a comprehensive meter-to-cash solution for today’s customer-first utility. Oracle Utilities Customer to Meter is the first offering to combine a market-leading customer information system (CIS) with a market-leading meter data management system (MDMS) into one solution with a single user interface.

Leveraging a single shared technology stack, this new solution can reduce utility costs due to faster implementation times, fewer integration points, and greater operational efficiencies. Oracle Utilities Customer to Meter delivers all of the benefits of a complete meter solution and a powerful customer platform, enabling utilities to more immediately and efficiently extract value from advanced metering infrastructure data to improve customer experience. With this streamlined approach, utilities can more easily design customer-centric, personalized programs and services, and prepare for the continued growth of smart meter programs. This holistic solution lays the groundwork for an evolving utility that wants to roll out smart meters in the future, without a major IT project.

“Utilities that leverage data to deliver an improved customer experience and more personalized programs - such as tailored time-of-use billing, or targeted home energy management advice - and do this with a single, integrated solution that combines customer and meter data, will be well poised to take advantage of the continued growth of smart meters and a smarter, more customer-centric grid”, according to Roberta Bigliani, Vice President, IDC Energy Insights.

With rising customer expectations and expanding smart grids, utilities are turning to modern, comprehensive technologies that deliver world-class customer engagement and operational efficiencies. Oracle Utilities Customer to Meter provides the platform to respond to evolving market dynamics and quickly implement new business requirements that span metering, rate analysis, billing, collections and customer programs. For example, as electric utilities face increasing distributed generation they may test new rate structures to better manage demand. With Oracle Utilities Customer to Meter, months of customization can be reduced to hours of configuration and utilities can easily test and implement the changes necessary to evolve. Oracle Utilities Customer to Meter consolidates advanced usage and billing capabilities for all meters – from scalar to interval—so utilities can manage those meters and their data in one place and derive greater value from grid investments.

“Oracle Utilities continues to partner with utilities around the globe to solve the issue of increasing complexity in this rapidly transforming industry. Simplifying meter-to-cash processes is an important part of those partnerships. This new solution does exactly that: it allows utilities to get up and running in a matter of months with a complete meter-to-cash solution and allows them to leverage that complete solution to streamline business processes and easily stay ahead of rapidly evolving business drivers impacting how they serve their customers,” said Rodger Smith, senior vice president and general manager, Oracle Utilities. 

Oracle Utilities Customer to Meter is redefining the utility customer platform by enabling utilities to:

  • Implement a full meter-to-cash solution in a matter of months
  • Leverage one technology stack and reduce technology costs
  • Achieve service excellence in every customer interaction with a single, intuitive user interface
  • Deliver a powerful, streamlined customer experience across every channel
  • Expand smart meter programs seamlessly and derive more value from AMI data

Additional Resources

Contact Info

Valerie Beaudett
Oracle Corporation
+1 650.400.7833

Christina McDonald
+1 212.614.4221

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Christina McDonald

  • +1 212.614.4221

Follow Oracle Corporate

Categories: Database, Vendor

What's new in IBM Security Guardium V10

IBM - DB2 and Informix Articles - Tue, 05/23/2017 - 05:00
In Version 10, IBM Security Guardium takes a major step forward with intelligence and automation to safeguard data, enterprise-readiness features, and increased breadth of data sources, including file systems. This article provides an in-depth technical review of all new and changed capabilities including database activity monitoring, vulnerability assessment, and file activity monitoring. This article, originally published in September 2015, was updated to include enhancements for the updates in Version 10.1 (delivered in June 2016).
Categories: Database

Announcing Couchbase Server 5.0 Beta

NorthScale Blog - Mon, 05/22/2017 - 23:21

We’re excited to pull back the curtain on the beta release of Couchbase Server 5.0. With this release, Couchbase provides the world’s first Engagement Database, built on the most powerful NoSQL technology. This platform delivers unparalleled performance at any scale, while providing unmatched agility and manageability.

To meet the requirements of an Engagement Database, this beta release comes with new key capabilities that further strengthen our core characteristics of agility, performance, and manageability.

  1. Full-text search
  2. Powerful indexing & querying
  3. Secure platform
  4. Performance, scalability, and high-availability
  5. Enhanced management, developer tools
  6. Big data connectors

With these additions and much more, Couchbase Server 5.0 Beta is a huge release that packs a massive punch for enterprises looking to transform their digital business.

Download Couchbase Server 5.0 Beta  What’s new in 5.0 Beta?

Let’s dive right in!

Full-text Search – built right in

Need to enable flexible and powerful Search capabilities? No digital application is complete without search. With 5.0, Couchbase search is becoming an integral part of the scalable data platform. Full-text search (FTS) provides the ability to index Couchbase documents and query them rapidly using a variety of indexing methods, text analyzers, and languages – without leaving the Couchbase Data Platform and without duplicating any data. The search index gets in-memory updates as the data changes. To learn more about FTS, see Full-text search reference.

Secure Platform

For digital applications, being compliant is not just nice-to-have, but a must-have. Built on the role-based access control (RBAC) security model from 4.5, RBAC for applications in 5.0 Beta allows you to segregate access and closely align user roles in Couchbase with the actual roles they hold within your organization. This allows your application services and users to access the information they need – nothing more, nothing less.

The bottom line is that with RBAC for applications you are now closer to meeting your security compliance requirements. To learn more, read about RBAC in our documentation.

Powerful indexing, querying, and search

Couchbase Server 5.0 Beta adds a unique new way to provide ad hoc search queries through N1QL. Adaptive indexes can efficiently look up any of the indexed fields – without requiring you to create multiple composite indexes or different index key combinations! For more information, read about adaptive indexes in our documentation.

CREATE INDEX `aidx_self` ON `travel-sample`((distinct (PAIRS(self))));

EXPLAIN SELECT * FROM `travel-sample` USE INDEX(aidx_self) WHERE (city LIKE "San%") and type = 'airport';

Want to identify a long-running query and tune your query performance? With Couchbase Server 5.0, you get a detailed visual query plan with execution timing and other query monitoring statistics that provide detailed insights into the query execution process. Check our documentation to learn how you can monitor your N1QL queries.

Want to join external data with the data stored in Couchbase? Whether it’s social data, map data, or any other JSON data on the web, now there’s a way with N1QL CURL to interact with it, and integrate to build powerful applications. Check our documentation to learn more about N1QL CURL.

Performance, scalability, and high availability

From the very beginning, performance has been one of the key reasons why enterprises have chosen Couchbase for mission-critical applications. With 5.0 Beta we have continued to push the boundaries even further to provide you with unparalleled performance at any scale!

Want better performance at a much lower cost? Now, with the new ephemeral buckets feature in Couchbase 5.0 Beta, you can reduce the total cost of ownership (TCO) by eliminating the disk component for your Couchbase buckets, and get highly consistent in-memory performance without disk-based fluctuations. For more information, see here.

For a mission-critical Engagement Database, robust failure detection and recovery are key. The new fast-failover feature in Couchbase 5.0 Beta provides a robust failure detection mechanism to reduce the time window of a failure detection from 30 seconds to less than 10 seconds. This means, increased 9’s of availability for your application. For more information, see here.

With Couchbase 5.0 Beta, you can not only create indexes to speed up and scale queries to a new level, but also enjoy better index availability and manageability. Just specify the number of index replicas to create, and the system will dynamically manage the placements of the index replicas across different nodes, server groups, and availability zones. Couchbase Server 5.0 also brings support for rebalancing indexes without any system downtime. For more information, see here.

5.0 Beta also adds several other N1QL performance enhancements. Some of the notable ones are:

  1. Indexing and querying on meta() fields
  2. Complex predicate pushdowns
  3. Pagination pushdowns 
  4. Operator pushdown
Enhanced management & developer tools

When you launch Couchbase Server 5.0 Beta, you will notice a modern web console interface that is engineered for intelligence, comfort, and speed. This redesigned interface offers a new look and streamlines your common tasks and workflows.


Find it hard to read JSON or text-based N1QL query plans? 5.0 Beta has the answer – the visual query plan feature has arrived!

The visual query plan feature provides a graphical representation of the query execution process as data flows visually from one query operator to another, highlighting the most expensive operations.

You can also use query monitoring to see currently active queries and how long they’ve been running, to view the longest running completed queries, and see statistics on the execution of prepared queries.

For building applications with Couchbase Server 5.0 Beta as fast as possible we have updated our SDKs to support many of the new critical features you’ve seen above – including RBAC, fast failover, ephemeral buckets, and full-text search. We’ve also introduced many other language-specific improvements, as well as improved integration with Spring Data and .NET Core support. For more information, check out the release notes of each language.  

Couchbase 5.0 Beta also provides integrations with Spark 2.1, and a developer preview connector for Talend Open Studio 6.3.1.

Dig deeper!

We have just scratched the surface of Couchbase Server 5.0 with the capabilities mentioned above and there are many more. Our new 5.0 documentation will help you to dig deeper.

OK! How do I get my hands on 5.0 beta?

Remember this before you take the plunge:

This is a beta version of Couchbase Server 5.0 and is intended for development purposes only.

To use 5.0 Beta, you need to do a fresh install. This release is not supported in production.

We consider beta releases to have some rough edges and bugs.

Overall, the release is still under active development, which means that you can have a big impact on the final version of the product by providing feedback and observations.

It is easy to get your hands on the beta:

  1. Download Couchbase Server 5.0 Beta from our download page.
  2. Looking to develop in Java, .Net, Node.js, PHP, or other languages with native SDKs? Download the latest SDK version under the “client libraries” section on the downloads page.
  3. Don’t forget to check out our 5.0 Beta release notes.

Finally, good luck building your applications with Couchbase Server 5.0 Beta, and we look forward to your valuable feedback.

The post Announcing Couchbase Server 5.0 Beta appeared first on The Couchbase Blog.

Categories: Architecture, Database

Announcing Couchbase Mobile 2.0 Developer Preview

NorthScale Blog - Mon, 05/22/2017 - 23:16

Couchbase Mobile 2.0 is a groundbreaking new release for Couchbase Mobile. We’ve reimagined the developer experience with a cross-platform common core, new simplified API, and automated conflict resolution that can be customized. In this release, we are bringing N1QL queries and full-text search capabilities to mobile. Read on to learn all about the most advanced NoSQL mobile database on the planet!

Couchbase Lite

We have rewritten Couchbase Lite in the 2.0 version. The database core engine, internally referred to as “Couchbase Lite core,” has been implemented in C/C++. With a common core, the size of the codebase has been significantly reduced resulting in better manageability. It also allows for easy porting to low fidelity devices that enable new IoT use cases and opens new markets for Couchbase mobile developers.

There are language-specific bindings on top of the common core for iOS, .NET, and Java. During development, we’ve seen a 6x improvement in performance between 1.x and 2.0.

New simplified API

With built-in thread safety, mutable properties, typed accessors and blobs for accessing attachments, you will find the reimagined API easy to learn. We’ve gone through multiple iterations of the API as we engaged with our developer community early on. Your feedback has been invaluable in shaping the current revision of the APIs and we look forward to your continued support.

Fluent API for N1QL queries

Database queries have changed significantly in this release. Couchbase Lite 2.0 uses semantics based on the N1QL query language defined by Couchbase. The query API has two styles, builder and fluent; which one you use depends on the development platform.

The API allows you to access multiple Couchbase Lite databases with cross-database joins. This API will be familiar if you’ve used Core Data, or other query APIs based on SQL (like jOOQ).

Full-text search

In this release, users can perform full-text searches on the JSON documents stored in Couchbase Lite. You can now bring to your mobile applications what Google, Yahoo, and Bing do with HTML on the web. The API for using full-text search is not very different from the query API – users can search for text, text fragments, and text connected by binary operators, and Couchbase Lite finds the set of JSON documents that best match those terms.


Couchbase Mobile 2.0 uses a new replication protocol, based on WebSockets. This protocol has been designed to be fast, efficient, easier to implement, and symmetrical between client/server. Even though the replication protocol has changed, Couchbase Lite 1.x clients will be able to work with Couchbase Mobile 2.0 deployments since Couchbase Sync Gateway continues to support both clients.

However, the new replication protocol is incompatible with version 1.x, and with CouchDB-based databases including PouchDB and Cloudant.

But the new replicator is faster than the old one – we’ve seen up to twice the speed on iOS devices, and even greater improvement on Android.

Automated conflict management

We’ve taken a completely different approach to conflict management. An application is no longer required to handle conflicts out of band and keep track of various conflicting revisions. Couchbase Lite 2.0 will detect a conflict while saving a document or during replication and invoke an app-defined conflict resolver. The conflict resolution is designed for flexibility that will allow developers to tailor it for their specific needs.

To get started  

The post Announcing Couchbase Mobile 2.0 Developer Preview appeared first on The Couchbase Blog.

Categories: Architecture, Database

AS ONE Leverages Oracle Database Cloud to Publish Inventory Data to 4,300 Dealers in Real Time

Oracle Database News - Mon, 05/22/2017 - 13:00
Press Release

AS ONE Leverages Oracle Database Cloud to Publish Inventory Data to 4,300 Dealers in Real Time

Oracle Database Exadata Express Cloud Service to help facilitate collaboration with expanding dealer network, supporting business growth and cost reduction

Redwood Shores, Calif.—May 22, 2017

Oracle today announced that AS ONE Corporation has released APIs using Oracle Database Exadata Express Cloud Service, which provides Oracle's latest database, Oracle Database 12c Release 2, in the cloud. With these offerings, AS ONE will be able to provide its 4,300 dealerships with real-time information for over 300,000 inventory datasets. Prior to the investment in Oracle’s solutions, the data could only be published once a day.

AS ONE is a general trading company of scientific instruments, as well as industrial, physics and chemistry equipment, that provides hospital and nursing care products. It offers more than 1.4 million items of product information through catalogs and the company's Web shop—AXEL—and sells products through a delivery system that connects users with dealers and manufacturers. AS One had been looking for a mechanism for real-time external collaboration of over 300,000 inventory data sets with minimum load and processing time.

In July 2014, the company consolidated its database infrastructure—which originally existed in six disparate systems—onto Oracle Exadata. According to AS ONE, the integration reduced operation time by a factor of five and improved processing performance by up to 20 times. Oracle Database Exadata Express Cloud Service provides an effective way to access real-time data, and with Oracle Exadata and Oracle REST Data Services the company could easily expose REST APIs on Oracle Database to its 4,300 dealerships. Initially, AS ONE plans to use the solution for dealership data lookups, and eventually expand it to functions such as delivery-date replies and order reception. In addition, AS ONE will use chatbot technology to support inventory inquiries from a business chat application.

System integration in this project was handled by Kanden System Solutions Co., Inc.

Additionally, AS ONE implemented Oracle Data Visualization Service to visualize the 1.4 million data points at the core of the system.

“AS ONE is a catalog- and web-based general trading company of physics and chemistry equipment, founded in 1933. It provides creative value through advanced IT and logistics in the research, industrial, and medical fields. It opened an office in Shanghai, China in 2007 and a base in Santa Clara, California in 2017. In addition to catalogs, AS ONE’s sales channels have expanded to e-commerce. As the number of products we handle continues to increase, Oracle Database Exadata Express Cloud Service helped us realize a mechanism to link inventory information to external sites via REST API in a short period of time. Having published the API to dealers, data can now be obtained automatically in real time,” said Mr. Tomohiro Fukuda, General Manager, IT Department, AS ONE Corporation.

Additional Information

Contact Info

Nicole Maloney
+1.415.235.4033

Sarah Fraser
+1.650.743.0660

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE: ORCL), please visit us online.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.415.235.4033

Sarah Fraser

  • +1.650.743.0660

Follow Oracle Corporate

Categories: Database, Vendor

Getting Started with Azure SQL Data Warehouse - Part 5

Database Journal News - Mon, 05/22/2017 - 08:01

In part 5 of this series covering Azure SQL Data Warehouse, Arshad Ali covers performance optimization and the different aspects that impact performance.

Categories: Database

PostgresOpen SV 2017 Registration Opens; 2 weeks for CFP!

PostgreSQL News - Fri, 05/19/2017 - 01:00

PostgresOpen and PGConf SV have joined forces this year to put together a fantastic PostgreSQL conference, PostgresOpen SV 2017, being held in downtown San Francisco from September 6th to 8th.

Early Bird Registration for PostgresOpen SV 2017 is now open!

Simply go to our tickets page and register to attend the longest running annual PostgreSQL conference in the US.

The Program Committee is excited to be able to offer tickets for PostgresOpen SV at the same rate as last year, with a $200 discount for early bird registrations!

We also want to remind you that the Call for Papers is only open until May 30th, Anywhere on Earth (AoE). This is your last chance to submit a talk for PostgresOpen SV 2017; only two weeks remain!

Presentations on any topic related to PostgreSQL including, but not limited to, case studies, experiences, tools and utilities, application development, data science, migration stories, existing features, new feature development, benchmarks, and performance tuning are encouraged.

Tutorials will be announced in the coming weeks. Watch our blog for updates!

The Program Committee looks forward to bringing the best PostgreSQL presentations and tutorials from speakers around the world to the fantastic Parc55 in downtown San Francisco.

Speakers will be notified by June 6th, 2017 AoE, with the schedule to be published once selected speakers have confirmed.

PostgresOpen SV 2017 is only able to happen with the support of our fantastic sponsors. We are extremely pleased to recognize our Diamond launch sponsors, and we invite you to head to our site to see all of our Gold, Silver, and Supporter sponsors.

Sponsorship opportunities are still available!

We look forward to seeing everyone in San Francisco!

Any questions? Please contact:

Categories: Database, Open Source

Synchronizing Images Between Android and iOS with NativeScript, Angular, and Couchbase

NorthScale Blog - Thu, 05/18/2017 - 15:00

A few weeks ago I had written a guide that demonstrated saving captured images to Couchbase Lite as base64 encoded string data in a NativeScript with Angular mobile application. While the previous guide worked for both Android and iOS, the data was localized to the device. What if you wanted to synchronize the images between devices or even store them in the cloud?
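As a reminder of what that base64 storage amounts to, here is a tiny, framework-free Python sketch of the encode/decode round trip (in the app itself this is done with picture.toBase64String and ImageSource.fromBase64; the byte string below is a made-up stand-in for real image data):

```python
import base64

def image_to_base64(raw_bytes: bytes) -> str:
    # Encode raw image bytes into the base64 string stored in the document.
    return base64.b64encode(raw_bytes).decode("ascii")

def base64_to_image(encoded: str) -> bytes:
    # Decode the stored string back into the original image bytes.
    return base64.b64decode(encoded)

# Made-up stand-in for camera output; in the app this comes from takePicture().
picture = b"\x89PNG\r\n\x1a\n...not a real image..."
doc = {"type": "image", "image": image_to_base64(picture), "timestamp": 0}
assert base64_to_image(doc["image"]) == picture
```

Because the encoded string is plain text, it travels through Couchbase Lite, Sync Gateway, and Couchbase Server unchanged.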

We’re going to see how to use Couchbase Mobile to synchronize image data between devices and platforms in a NativeScript with Angular application.

Going forward you should note that this is part two in the series.  This means that if you haven’t already followed the previous tutorial and gotten a working version of the project, you should put this tutorial on hold. Start with the guide, Save Captured Images in a NativeScript Angular Application to Couchbase, then work your way into synchronizing the images.

NativeScript Couchbase Photos

The above animated image will give you a rough idea of what we’re after.  We want to be able to synchronize the saved images between Android and iOS using Sync Gateway and optionally Couchbase Server.

The Requirements

The prerequisites for this guide are similar to what was found in the previous.  You’ll need the following:

  • NativeScript CLI
  • Android SDK for Android or Xcode for iOS
  • Couchbase Sync Gateway
  • Couchbase Server (optional)

You’ll notice that Sync Gateway and optionally Couchbase Server are the new requirements of this guide in the series.  We’ll need these for synchronization to actually happen.  If you’re unfamiliar, Sync Gateway is the synchronization middleware and Couchbase Server is the remote database server.

Configuring Sync Gateway for Replication

To use Sync Gateway we’ll need to define a configuration as to how synchronization happens and things like that.

Create a sync-gateway-config.json file somewhere on your computer that contains the following information:

{
    "log": ["CRUD+", "REST+", "Changes+", "Attach+"],
    "databases": {
        "image-database": {
            "server": "walrus:data",
            "sync": `function (doc) { channel(doc.channels); }`,
            "users": {
                "GUEST": {
                    "disabled": false,
                    "admin_channels": ["*"]
                }
            }
        }
    }
}
In the above configuration file we are saving everything to walrus:data which is an in-memory solution rather than persisting it to Couchbase Server.  The remote database is called image-database, but it doesn’t have to match what we have in our mobile application code.

For simplicity everyone will be able to read and write data in the same channel as a guest.
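Before starting Sync Gateway, it can be worth sanity-checking the config file. Below is a small illustrative Python sketch that parses a config shaped like the one described above (the sync function is omitted here because plain JSON cannot hold a function literal; expected keys are assumptions based on this tutorial's config, so adjust them to your own file):

```python
import json

# A minimal stand-in for sync-gateway-config.json, shaped like the
# image-database / GUEST configuration used in this tutorial.
config_text = """
{
  "log": ["CRUD+", "REST+", "Changes+", "Attach+"],
  "databases": {
    "image-database": {
      "server": "walrus:data",
      "users": {
        "GUEST": { "disabled": false, "admin_channels": ["*"] }
      }
    }
  }
}
"""

config = json.loads(config_text)  # raises ValueError on malformed JSON
db = config["databases"]["image-database"]
assert db["users"]["GUEST"]["disabled"] is False
print("config OK, databases:", list(config["databases"].keys()))
```

A check like this catches a stray comma or brace before Sync Gateway refuses to start.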

To run Sync Gateway, execute the following:

/path/to/sync_gateway /path/to/sync-gateway-config.json

You should be able to access Sync Gateway from your web browser at http://localhost:4984/_admin/ and view everything that is being synchronized (often referred to as replication).

Adding the Logic for Synchronizing Image Data

The actual code involved in getting replication working in our NativeScript with Angular application is minimal.

Open the project’s app/app.component.ts file and include the following TypeScript code:

import { Component, OnInit, NgZone } from "@angular/core";
import { Couchbase } from "nativescript-couchbase";
import * as Camera from "camera";
import * as ImageSource from "image-source";

@Component({
    selector: "ns-app",
    templateUrl: "app.component.html",
})
export class AppComponent implements OnInit {

    private database: any;
    private pushReplicator: any;
    private pullReplicator: any;
    public images: Array<any>;

    public constructor(private zone: NgZone) {
        this.database = new Couchbase("image-database");
        this.database.createView("images", "1", function(document, emitter) {
            if(document.type && document.type == "image") {
                emitter.emit(document._id, document);
            }
        });
        // Use the host or IP address where Sync Gateway is reachable from your device
        this.pushReplicator = this.database.createPushReplication("http://localhost:4984/image-database");
        this.pullReplicator = this.database.createPullReplication("http://localhost:4984/image-database");
        this.images = [];
    }

    public ngOnInit() {
        this.pushReplicator.setContinuous(true);
        this.pullReplicator.setContinuous(true);
        this.pushReplicator.start();
        this.pullReplicator.start();
        this.database.addDatabaseChangeListener(changes => {
            for(let i = 0; i < changes.length; i++) {
                this.zone.run(() => {
                    let image = ImageSource.fromBase64(this.database.getDocument(changes[i].getDocumentId()).image);
                    this.images.push(image);
                });
            }
        });
        let rows = this.database.executeQuery("images");
        for(let i = 0; i < rows.length; i++) {
            this.images.push(ImageSource.fromBase64(rows[i].image));
        }
    }

    public capture() {
        Camera.takePicture({ width: 300, height: 300, keepAspectRatio: true, saveToGallery: false }).then(picture => {
            let base64 = picture.toBase64String("png", 70);
            this.database.createDocument({
                "type": "image",
                "image": base64,
                "timestamp": (new Date()).getTime()
            });
        }, error => {
            console.log(error);
        });
    }

}
The above code includes everything from the first part of the series as well as this part of the series.  We’re going to break down only what has been added in regards to replication.

In the constructor method we define where we are going to push data to and where we are going to pull data from.

this.pushReplicator = this.database.createPushReplication("http://localhost:4984/image-database");
this.pullReplicator = this.database.createPullReplication("http://localhost:4984/image-database");

This is to be done continuously for as long as the application is open.

Make sure you use the correct host or IP address for Sync Gateway.  If you’re using Genymotion like I am, localhost will not work.  You’ll have to figure out the correct IP addresses.

In the ngOnInit method we start the replication process and configure a listener.

this.database.addDatabaseChangeListener(changes => {
    for(let i = 0; i < changes.length; i++) {
        this.zone.run(() => {
            let image = ImageSource.fromBase64(this.database.getDocument(changes[i].getDocumentId()).image);
            this.images.push(image);
        });
    }
});
Any time there is a change in the database, we loop through the change list and load the base64 data. This example is simple, so there are no updates or deletions of images; if there were, our listener would need more complex logic.

The reason we are using an Angular NgZone is because the listener operates on a different thread.  By zoning, we can take the data and make sure the UI updates correctly.

That’s all we had to do to get the images synchronizing between device and server.  Easy right?


You just saw how to synchronize image data between devices and platforms using NativeScript, Angular, and Couchbase. This was a followup to the previous tutorial I wrote called, Save Captured Images in a NativeScript Angular Application to Couchbase, where we got the initial application up and running.

In case you’d rather not store your images in the database, you might consider creating an API that uses an object storage service like Minio or Amazon S3.  I’ve written a tutorial on creating an API that saves to Minio that might help.

For more information on using Couchbase with Android and iOS, check out the Couchbase Developer Portal.

The post Synchronizing Images Between Android and iOS with NativeScript, Angular, and Couchbase appeared first on The Couchbase Blog.

Categories: Architecture, Database

PostgreSQL 10 Beta 1 Released

PostgreSQL News - Thu, 05/18/2017 - 01:00

The PostgreSQL Global Development Group announces today that the first beta release of PostgreSQL 10 is available for download. This release contains previews of all of the features which will be available in the final release of version 10, although some details will change before then. Users are encouraged to begin testing their applications against this latest release.

Major Features of 10

The new version contains multiple features that will allow users to both scale out and scale up their PostgreSQL infrastructure:

  • Logical Replication: built-in option for replicating specific tables or using replication to upgrade
  • Native Table Partitioning: range and list partitioning as native database objects
  • Additional Query Parallelism: including index scans, bitmap scans, and merge joins
  • Quorum Commit for Synchronous Replication: ensure against loss of multiple nodes

We have also made three improvements to PostgreSQL connections, which we are calling on driver authors to support, and users to test:

  • SCRAM Authentication, for more secure password-based access
  • Multi-host "failover", connecting to the first available in a list of hosts
  • target_session_attrs parameter, so a client can request a read/write host
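For example, the multi-host and target_session_attrs features combine into a single libpq-style connection URI. A hedged Python sketch that just assembles such a URI (hostnames are placeholders; the parameter syntax follows the PostgreSQL 10 connection-string format, no driver required):

```python
def build_pg_uri(hosts, dbname, user, want_writable=True):
    # hosts: list of (host, port) pairs tried in order until one accepts.
    host_part = ",".join(f"{h}:{p}" for h, p in hosts)
    # target_session_attrs=read-write asks for the first writable server.
    attrs = "read-write" if want_writable else "any"
    return f"postgresql://{user}@{host_part}/{dbname}?target_session_attrs={attrs}"

uri = build_pg_uri([("db1.example.com", 5432), ("db2.example.com", 5432)],
                   dbname="appdb", user="app")
print(uri)
# postgresql://app@db1.example.com:5432,db2.example.com:5432/appdb?target_session_attrs=read-write
```

A libpq-based client given this URI will skip a standby and connect to the primary, which is the "failover" behavior described above.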
Additional Features

Many other new features and improvements have been added to PostgreSQL 10, some of which may be as important, or more important, to specific users than the above. Certainly all of them require testing. Among them are:

  • Crash-safe and replicable Hash Indexes
  • Multi-column Correlation Statistics
  • New "monitoring" roles for permission grants
  • Latch Wait times in pg_stat_activity
  • XMLTABLE query expression
  • Restrictive Policies for Row Level Security
  • Full Text Search support for JSON and JSONB
  • Compression support for pg_receivewal
  • ICU collation support
  • Push Down Aggregates to foreign servers
  • Transition Tables in trigger execution

Further, developers have contributed performance improvements in the SUM() function, character encoding conversion, expression evaluation, grouping sets, and joins against unique columns. Analytics queries against large numbers of rows should be up to 40% faster. Please test if these are faster for you and report back.

See the Release Notes for a complete list of new and changed features.

Test for Bugs and Compatibility

We count on you to test the altered version with your workloads and testing tools in order to find bugs and regressions before the release of PostgreSQL 10. As this is a Beta, minor changes to database behaviors, feature details, and APIs are still possible. Your feedback and testing will help determine the final tweaks on the new features, so test soon. The quality of user testing helps determine when we can make a final release.

Additionally, version 10 contains several changes that are incompatible with prior major releases, particularly renaming "xlog" to "wal" and a change in version numbering. We encourage all users to test it against their applications, scripts, and platforms as soon as possible. See the Release Notes and the What's New in 10 page for more details.

Beta Schedule

This is the first beta release of version 10. The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2017. For further information please see the Beta Testing page.

Categories: Database, Open Source

Try new SQL Server command line tools to generate T-SQL scripts and monitor Dynamic Management Views

This post was authored by Tara Raj and Vinson Yu, Program Managers – SQL Server Team

We are excited to announce the public preview availability of two new command line tools for SQL Server:

  • The mssql-scripter tool enables developers, DBAs, and sysadmins to generate CREATE and INSERT T-SQL scripts for database objects in SQL Server, Azure SQL DB, and Azure SQL DW from the command line.
  • The DBFS tool enables DBAs and sysadmins to monitor SQL Server more easily by exposing live data from SQL Server Dynamic Management Views (DMVs) as virtual files in a virtual directory on Linux operating systems.

Read on for detailed usage examples, try out these new command line tools, and give us your feedback.


Mssql-scripter is the multiplatform command line equivalent of the widely used Generate Scripts Wizard experience in SSMS.

You can use mssql-scripter on Linux, macOS, and Windows to generate data definition language (DDL) and data manipulation language (DML) T-SQL scripts for database objects in SQL Server running anywhere, Azure SQL Database, and Azure SQL Data Warehouse. You can save the generated T-SQL script to a .sql file or pipe it to standard *nix utilities (for example, sed, awk, grep) for further transformations. You can edit the generated script or check it into source control and subsequently execute the script in your existing SQL database deployment processes and DevOps pipelines with standard multiplatform SQL command line tools such as sqlcmd.

Mssql-scripter is built using Python and incorporates the usability principles of the new Azure CLI 2.0 tools. The source code can be found on GitHub, and we welcome your contributions and pull requests!

Get started with mssql-scripter

$ pip install mssql-scripter
For additional installation tips, see the project documentation.

Script Your First Database Objects
For usage and help content, pass in the -h parameter, which will also show all options:
mssql-scripter -h

Here are some example commands:
# Generate DDL scripts for all database objects (default) in the AdventureWorks database and output to stdout
$ mssql-scripter -S localhost -d AdventureWorks -U sa

# Generate DDL scripts for all database objects and DML scripts (INSERT statements) for all tables in the AdventureWorks database and save the script to a file
$ mssql-scripter -S localhost -d AdventureWorks -U sa --schema-and-data > ./adventureworks.sql

# Generate DDL scripts for objects that contain "Employee" in their name to stdout
$ mssql-scripter -S localhost -d AdventureWorks -U sa --include-objects Employee

# Change a schema name in the generated DDL script on Linux, macOS, and bash on Windows 10:
# 1) Generate DDL scripts for the AdventureWorks database
# 2) Pipe the results, changing all occurrences of SalesLT to SalesLT_test using sed, and save the script to a file
$ mssql-scripter -S localhost -d AdventureWorks -U sa | sed -e "s/SalesLT./SalesLT_test./g" > adventureworks_SalesLT_test.sql
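If sed is not available, the same SalesLT to SalesLT_test schema rename can be done in a few lines of Python; a sketch over a made-up DDL fragment (note that the sed pattern above leaves the dot unescaped, so it matches any character, while the Python version escapes it properly):

```python
import re

# Fabricated sample of generated DDL; real input would come from mssql-scripter.
ddl = ("CREATE TABLE SalesLT.Customer (CustomerID int);\n"
       "CREATE VIEW SalesLT.vCustomers AS SELECT CustomerID FROM SalesLT.Customer;")

# Replace the schema prefix "SalesLT." with "SalesLT_test." everywhere.
renamed = re.sub(r"SalesLT\.", "SalesLT_test.", ddl)
print(renamed.splitlines()[0])
```

The output script can then be fed to sqlcmd exactly like the sed-produced file.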


A big part of operationalizing SQL Server is monitoring to ensure that SQL Server is performant, highly available, and secure for your applications. With SQL Server 2017, the Dynamic Management Views (DMVs) available on Windows are also accessible on Linux, allowing your existing scripts and tools that rely on DMVs to continue to work. Traditionally, to get this information you would use GUI admin tools such as SSMS or command line tools such as SQLCMD to run queries.

Today, we are also introducing a new experimental Linux tool, DBFS, which exposes live DMVs mounted to a virtual filesystem using FUSE. All you need to do is view the contents of the virtual files in the mounted virtual directory to see the same data you would see if you ran a SQL query against the DMV. There is no need to log in to SQL Server using a GUI or command line tool or to run SQL queries. DBFS can also be used in scenarios where you want to access DMV data from the context of a script with CLI tools such as grep, awk, and sed.

DBFS uses the FUSE file system module to create two zero byte files for each DMV—one for showing the data in CSV format and one for showing the data in JSON format. When a file is “read,” the relevant information from the corresponding DMV is queried from SQL Server and displayed just like the contents of any CSV or JSON text file.
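Because the virtual files are ordinary CSV or JSON text, any text-processing tool, or a few lines of Python, can consume them. A sketch over a fabricated wait-stats snapshot (column names are illustrative, not an exact DMV schema):

```python
import csv, io, json

# Fabricated stand-in for what reading a DBFS CSV virtual file might yield.
snapshot = """wait_type,waiting_tasks_count,wait_time_ms
PAGEIOLATCH_SH,120,4500
WRITELOG,88,1300
"""

rows = list(csv.DictReader(io.StringIO(snapshot)))
# The JSON twin of the same DMV file would carry the same records.
as_json = json.dumps(rows)
# Find the heaviest wait, the kind of question a monitoring script asks.
top = max(rows, key=lambda r: int(r["wait_time_ms"]))
print(top["wait_type"])  # PAGEIOLATCH_SH
```

In practice you would read the mounted file path instead of an inline string; everything after that is identical.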


  • Access data in .json format if you are connected to SQL Server 2016 or later
  • Compatible with Bash tools such as grep, sed, and awk
  • Live DMV data at time of access
  • Works with both SQL Server on Windows and SQL Server on Linux


  • This tool is currently only available for Ubuntu, Red Hat, and CentOS (SUSE coming soon!).

Next Steps:
See more usage examples of mssql-scripter and get started with DBFS today in their respective GitHub repositories.

We are open to suggestions, feedback, questions, and of course contributions to the project itself.

Categories: Database

SQL Server 2017 CTP 2.1 now available

Microsoft is excited to announce a new preview for the next version of SQL Server (SQL Server 2017). Community Technology Preview (CTP) 2.1 is available on both Windows and Linux. In this preview, we added manageability features to make it easier to configure SQL Server in Docker containers. We’re also introducing some new command line tools for managing SQL Server in our GitHub repo. And, there’s a preview of SQL Server Integration Services on Linux to try! You can try the SQL Server 2017 preview in your choice of development and test environments now:

Key CTP 2.1 enhancements

The primary enhancement to SQL Server 2017 in this release is the ability to configure SQL Server settings through environment variables passed as parameters to docker run. This enables many SQL Server configuration scenarios in Docker containers, such as setting the collation.
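As a rough illustration, the docker run invocation can be assembled like this (a hedged Python sketch: ACCEPT_EULA and SA_PASSWORD are the variables the image has required since the first previews, while MSSQL_COLLATION stands in for the new configuration settings; check the current image documentation for exact names):

```python
def mssql_docker_cmd(sa_password, collation=None, image="microsoft/mssql-server-linux"):
    # Required environment variables for the SQL Server on Linux image.
    env = {"ACCEPT_EULA": "Y", "SA_PASSWORD": sa_password}
    if collation:
        # Example of a CTP 2.1-style configuration setting; name is an assumption.
        env["MSSQL_COLLATION"] = collation
    cmd = ["docker", "run", "-d", "-p", "1433:1433"]
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]
    return cmd + [image]

cmd = mssql_docker_cmd("Str0ngPassw0rd!", collation="Latin1_General_CI_AS")
print(" ".join(cmd))
```

Run the printed command with a real password to start a configured container in one step.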

For additional detail on CTP 2.1, please visit What’s New in SQL Server 2017, Release Notes and Linux documentation.

SQL Server Integration Services on Linux

SQL Server Integration Services now supports Linux for the first time! Today we are also releasing a preview of SQL Server Integration Services for Ubuntu. The preview enables you to run SSIS packages on the Linux OS, extract data from and load it to most common sources and targets, and perform common transformations. It has a simple command line installation. For more information, see our SSIS blog.

Updated SQL Server Tooling

The latest release of SQL Server Management Studio is out! It features improvements to how it works with SQL Server on Linux so make sure you have the latest. In addition, we are excited to announce the public preview availability of two new command line tools for SQL Server:

  • The mssql-scripter tool enables developers, DBAs, and sysadmins to generate CREATE and INSERT T-SQL scripts for database objects in SQL Server, Azure SQL DB, and Azure SQL DW from the command line.
  • The DBFS tool enables DBAs and sysadmins to monitor SQL Server more easily by exposing live data from SQL Server Dynamic Management Views (DMVs) as virtual files in a virtual directory on Linux operating systems.
New lightweight installer for SQL Server Reporting Services (SSRS)

In CTP 2.1, we moved Reporting Services installation from the SQL Server installer to a separate installer. This is a packaging change, not a product change; access to SQL Server Reporting Services is still included with your SQL Server license. The new installation process keeps our packages lean and enables customers to deploy and update Reporting Services with zero impact on your SQL Server deployments and databases.

To learn more about what’s new in the SQL Server 2017 Reporting Services preview, read our Reporting Services Release Notes, where you can also find the latest preview in the new lightweight installer.

To learn more about the recent announcement of a Power BI Report Server preview, which includes the capabilities of SQL Server 2017 Reporting Services and support for Power BI reports, you can read this blog article.

Get SQL Server 2017 CTP 2.1 today!

Try the preview of SQL Server 2017 today! Get started with our updated developer tutorials that show you how to install and use SQL Server 2017 on macOS, Docker, Windows, and Linux and quickly build an app in a programming language of your choice.

Have questions? Join the discussion of SQL Server 2017 at MSDN. If you run into an issue or would like to make a suggestion, you can let us know through Connect. We look forward to hearing from you!

Categories: Database

SQL Server 2017 on Linux surpasses 1 million Docker pulls as the next preview version rolls out

This post was authored by Rohan Kumar, General Manager, Database Systems Group

SQL Server 2017 makes it easier and simpler to work with data, with more deployment options than before and monthly preview releases offering regular innovation and improvements. The momentum behind these new options is clear. We are excited to mark a new milestone: Last week, SQL Server on Linux passed 1 million pulls of its container image! The image has been on Docker Hub for the six months since we first launched the SQL Server on Linux public preview in November 2016, with steadily growing customer use. In fact, we now have customers like dv01 going into production with SQL Server 2017 in Docker containers using the production support agreement from our Early Adoption Program (EAP). The container image is also available in the Docker Store, where it’s currently one of the featured images.

Customer interest in containers is high because of the benefits for production, and especially development and test: consistent and reliable behavior across environments, in a lightweight and easy-to-use format. Containers are fast to set up, can easily be stopped and started, and give users the ability to spin up multiple containers together using tools like docker-compose to easily start and interconnect database, application, and other services containers in a micro-services architecture.

SQL Server on Linux containers has been tested extensively in our test lab over the course of SQL Server 2017 public previews. We have been deploying SQL Server on a 150-node Kubernetes cluster in Azure to test each successive monthly Community Technology Preview (CTP). For each test pass, we automatically deploy 750 containers and run over a million tests. In addition to Kubernetes, we are testing on other container platforms with our partners and the community, including Red Hat OpenShift, Docker Swarm, and Mesosphere DC/OS.

Financial technology startup cuts database management time by 90 percent

Customers are already adopting SQL Server in containers. dv01 is a Wall Street startup, offering a reporting and analytics platform to institutional investors interested in greater insight into consumer lending markets. dv01 had initially based its solution on PostgreSQL and Amazon Redshift, but moved to SQL Server 2016 in Windows Azure Virtual Machines for faster query response times and scalability as its data grew. Because the firm runs all its other workloads on Linux, dv01 signed up for the Early Adoption Program for SQL Server 2017 to get Microsoft advice and assistance on migrating its solution to SQL Server on Linux. This move will help the company avoid managing multiple operating systems within its environment. It opted to deploy the application to production on Docker Engine, using a SQL Server 2017 on Linux image. Its choice to implement SQL Server and Docker containers has cut database management time by 90 percent, freeing its development team to focus on adding new capabilities to the product. To learn more about dv01’s SQL Server 2017 journey, you can read its story here.

“SQL Server 2016 offered the combination of performance and scalability that we needed,” said Dean Chen, VP of Engineering, dv01. “Expensive queries that were taking 30 seconds or more with our previous system now take 1-2 seconds, which means we’re able to do analytics queries in close to real time for our users.”

Making SQL Server on a Linux Docker container easy

With SQL Server 2017 CTP 2.1, available today, we continue to add to the manageability features for SQL Server on Linux Docker containers. We have introduced the ability to configure the SQL Server configuration settings through environment variables passed as parameters to docker run. This enables some of the most common SQL Server configuration scenarios in Docker containers, such as setting the server collation when creating a new SQL Server instance in a container. If you’d like to learn more about the SQL Server 2017 CTP 2.1 release, read our detailed blog for information on the other enhancements and how to get started with the preview.

We want to make it as easy as possible to get started with this technology. If you’d like to learn about how to get started with building a data-centric CI/CD pipeline using SQL Server on Linux containers, join SQL Server engineers Travis Wright and Tobias Ternstrom for this how-to video from the Microsoft Build event for developers.

Reasons to consider running SQL Server in containers

In many ways, container technology is at an inflection point much like hypervisors were 15 years ago. The benefits are immense and increasing every day and include the following:

  • Reduced size on disk for better hardware utilization
  • Reduced CPU/memory consumption, which also results in better hardware utilization
  • Reduced deployment size for faster deployments and scale up/down
  • Reduced patching for less effort, less vulnerability, less down time
  • Better composability using layers of images, with applications defined as multiple containers
  • Easier sharing with Docker Hub and Registry

But in some cases, there are still areas for improvement. For example, configuring high availability in a container platform is not well defined yet. Persistence to local and remote storage is still relatively new and is a complex area of any container platform. Because containers are still new, finding people that are experienced in working with containers can be a challenge. We look forward to working with the community to expand on and refine the capabilities of container platforms in the months to come.

The road ahead for SQL Server in containers

We are targeting support for SQL Server on Linux containers by General Availability of SQL Server 2017 later this year. Customers in our Early Adoption Program can deploy into production on containers right now, fully backed by our support and engineering teams. We have created a GitHub repository called mssql-docker where you can get Dockerfiles and example entrypoint scripts, and provide us with feedback and feature requests. It’s also a great place to engage with other people running SQL Server in containers.

We are also working on testing SQL Server in Windows containers, including SQL Server 2016 SP1 Developer and Express editions and SQL Server 2017 Evaluation edition. The Windows container images are available now on Docker Hub for testing and experimentation as well.

Thanks again to our community for your interest in and support for SQL Server in containers. We look forward to your continued feedback.

–Rohan Kumar, General Manager, Database Systems Group

Categories: Database

Free SQL Multi Select Basic Edition 3.2 released

PostgreSQL News - Wed, 05/17/2017 - 01:00

SQL Multi Select 3.2 Basic Edition is now free.

Run multiple scripts on multiple PostgreSQL databases with a single click. A description of how to use SQL Multi Select with other PostgreSQL tools is available here.

Changes in version 3.2:

  • various GUI changes to improve Wine compatibility.
  • added an option to define default scripts for PostgreSQL, MySQL, and Oracle servers.
  • modified the upgrade process to avoid having to reboot the Linux OS.

System requirements:

  • Runs on Windows XP to Windows 10.
  • Runs on Wine, tested on Ubuntu and Fedora.
  • Supports PostgreSQL 8.3 to 9.6, without the need for any additional database drivers.

For more information about SQL Multi Select for PostgreSQL, please visit, or download a free 14-day trial.

About Yohz Software

Yohz Software is a developer of free and commercial database applications for most popular database engines. Visit our site at

Categories: Database, Open Source

Announcing Access to comprehensive PostgreSQL on Mapt

PostgreSQL News - Wed, 05/17/2017 - 01:00

Mapt - powered by Packt - is a comprehensive tech library stuffed full of the latest PostgreSQL knowledge. Mapt boasts over 170 hours of PostgreSQL courses, including the latest Packt PostgreSQL eBooks. It’s been designed for developers who need answers fast.

Newly released on Mapt, the PostgreSQL Administration Cookbook, High Performance Cookbook and High Availability Cookbook give you essential answers right at your fingertips. Now PostgreSQL community members can access these titles and more with an exclusive 50% off discount on Mapt Annual subscription.

You can receive a discount off a whole year of Mapt with the discount code MptPgSQL50.

Mapt’s PostgreSQL courses cover everything you need to know, whether you’re just starting out with the basics or looking for advanced tips and tricks to get the most from PostgreSQL. Get practical insight from real PostgreSQL experts with decades of database experience. Mapt authors include Simon Riggs, CTO of PostgreSQL consultancy 2ndQuadrant; Greg Smith, principal consultant for 2ndQuadrant; and PostgresOpen speaker Shaun M. Thomas.

On top of all that PostgreSQL knowledge, a Mapt subscription nets even more insight. Access to over 4,500 eBook and video courses on data, development, and more. Get career development guidance from Mapt’s Skill Plans. Make your learning stick with assessments, and take advantage of a global author community.

Categories: Database, Open Source

Keeping freight moving in Denmark with SQL Server 2016

Danske Logo

Efficient data management keeps goods flowing smoothly in Denmark. Danske Fragtmaend, the country’s largest national transport and distribution firm, has been moving freight for more than a century. Today, Danske Fragtmaend delivers more than 40,000 consignments each day throughout Denmark, and businesses from small mom-and-pop operations to factories rely on its services.

The firm handles logistics in a central location, where 200 dispatchers keep an eye on the movement of thousands of trucks and their cargo. Both drivers and dispatchers need the latest information to operate efficiently, so they rely on a data platform based on SQL Server 2016. The storage system includes 160 terabytes of flash memory for fast I/O and high uptimes. Throughout the day, drivers continually scan transactions with PDAs and send shipping information including GPS coordinates to the data platform. Fast access to information is essential. Ulf Preisler, chief information officer at Danske Fragtmaend, says, “When it comes to short-term logistics, you’ve got to think like an air traffic controller more than a traditional radio dispatcher.”

Because the data changes rapidly, asynchronous replication between geographically disparate datacenters was inadequate. Instead, Danske Fragtmaend runs SQL Server on Windows Server 2016, which introduces a new disaster recovery and preparedness feature, Storage Replica. Storage Replica enables storage-agnostic, synchronous replication of data across geographically diverse datacenters. Even if disaster strikes one location, all the data exists elsewhere, so there is no possibility of loss.


Best of all, companies that combine flash storage with the latest versions of SQL Server and Windows Server can achieve a multiplying effect on performance. Danske Fragtmaend’s lead software developer, Morten Vinther, ran several tests to compare the old storage stack with the new one. “After combining the new all-flash infrastructure and the features from SQL Server 2016 on Windows Server 2016, one of our BI queries ran 9,521 times faster than on the prior infrastructure. That is much more than we expected.”

To find out more about Danske Fragtmaend’s SQL Server 2016 implementation, read the customer story.

Customer Name: Danske Fragtmaend
Industry: Transportation and logistics
Country or Region: Denmark
Customer Website:
Employee Size: 900

Categories: Database

SQL Server Command Line Tools for macOS released

This post was authored by Meet Bhagdev, Program Manager, Microsoft

We are delighted to share the production-ready release of the SQL Server Command Line Tools (sqlcmd and bcp) on macOS El Capitan and Sierra.

The sqlcmd utility is a command-line tool that lets you submit T-SQL statements or batches to local and remote instances of SQL Server. The utility is extremely useful for repetitive database tasks such as batch processing or unit testing.

The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files.
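Since bcp's character mode (-c) simply reads delimited text, preparing an import file is straightforward. Below is a minimal Python sketch, with a hypothetical file name and columns, that writes a file suitable for loading with -c -t ',':

```python
import csv

def write_bcp_file(path, rows, delimiter=","):
    """Write rows as a character-format file that bcp can load with
    -c -t ','. Fields must not themselves contain the delimiter,
    since bcp does not interpret CSV-style quoting."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter=delimiter, lineterminator="\n")
        writer.writerows(rows)

# Hypothetical rows matching an (id, name, city) table
write_bcp_file("test_data.txt",
               [(1, "Anna", "Copenhagen"), (2, "Lars", "Aarhus")])
```

The resulting file matches what the `bcp <your table> in ~/test_data.txt ... -c -t ','` form expects.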

Install the tools for macOS El Capitan and Sierra

/usr/bin/ruby -e "$(curl -fsSL <Homebrew install script URL>)"
brew tap microsoft/mssql-release
brew update
brew install mssql-tools
# for a silent install: ACCEPT_EULA=y brew install mssql-tools

Get started

sqlcmd -S localhost -U sa -P <your_password> -Q "<your_query>"

bcp <your table> in ~/test_data.txt -S localhost -U sa -P <your password> -d <your database> -c -t ','
bcp <your table> out ~/test_export.txt -S localhost -U sa -P <your password> -d <your database> -c -t ','
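For repetitive batch work, it is common to wrap sqlcmd in a small script. Here is a hedged Python sketch that assembles one sqlcmd invocation per .sql file in a directory; the server, credentials, and directory are placeholders, and you would swap the print for subprocess.run(cmd) to actually execute:

```python
from pathlib import Path

def sqlcmd_commands(server, user, password, sql_dir):
    """Build one sqlcmd invocation per .sql file in sql_dir, sorted
    so the scripts run in a stable, predictable order."""
    path = Path(sql_dir)
    scripts = sorted(path.glob("*.sql")) if path.is_dir() else []
    return [["sqlcmd", "-S", server, "-U", user, "-P", password,
             "-i", str(script)] for script in scripts]

# Placeholder credentials and directory; print the dry-run plan.
for cmd in sqlcmd_commands("localhost", "sa", "<your_password>", "./migrations"):
    print(" ".join(cmd))
```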

For more information, check out some examples for sqlcmd and bcp.

Please file bugs, questions or issues on our Issues page. We welcome contributions, questions and issues of any kind.


Categories: Database

NDP Episode #18: Microsoft DocumentDB for NoSQL in the Cloud

NorthScale Blog - Tue, 05/16/2017 - 16:00

I am pleased to announce that the latest episode of the NoSQL Database Podcast, “Microsoft DocumentDB for NoSQL in the Cloud,” has been published to all the major podcasting networks. In this episode I am joined by Kirill Gavrylyuk from Microsoft’s Azure team to talk about the NoSQL database DocumentDB, now known as Azure Cosmos DB.

This episode can be found for free on iTunes, Pocket Casts, and various other networks, but it can also be found below.

If you want to learn more about DocumentDB, I encourage you to follow Kirill on Twitter.  If you’re interested in more information about the podcast or want to make suggestions, contact me on Twitter at @nraboy.

Want to learn about the NoSQL database, Couchbase?  Check out the Couchbase Developer Portal for documentation and examples.

The post NDP Episode #18: Microsoft DocumentDB for NoSQL in the Cloud appeared first on The Couchbase Blog.

Categories: Architecture, Database

Five reasons to run SQL Server 2016 on Windows Server 2016 – No. 5: Consistent data environment across hybrid cloud environments

COnsistent data

Have you ever seen a tree that simultaneously bears completely different species of fruit? It’s a real thing: apples, plums, oranges, lemons, and peaches all growing on the same tree. The growers have the advantage of a consistent environment (the same tree) that allows them to be efficient with resources, pick the type of fruit they need when they need it, and always have the right kind of fruit without having to invest in specialized plants.

Those trees are like the consistent foundation shared by SQL Server 2016, Windows Server 2016, and Microsoft Azure: Common code underlying the Microsoft platform makes it possible to run your data workloads seamlessly on-premises, in a hybrid environment, or strictly in the cloud—and to pick the option you need, while moving easily from one environment to the other.

Common code = Unique value

The common code base creates a write-once-deploy-anywhere SQL Server and Windows Server experience. You have flexibility across physical on-premises machines, private cloud environments, third-party hosted private cloud environments, public cloud, and hybrid deployments. Figure 1 diagrams this unique platform.

Figure 1: Microsoft Data Platform: On-premises, hybrid, and cloud

Figure 1

This means that you can choose a hybrid deployment and take advantage of any of the four basic options for hosting SQL Server:

  1. SQL Server in on-premises non-virtualized physical machines
  2. SQL Server in on-premises virtualized machines
  3. SQL Server on Azure Virtual Machine. This is SQL Server installed and hosted in the cloud on Windows Server virtual machines (VMs) running on Azure. Also known as infrastructure as a service (IaaS), it is optimized to “lift and shift” existing SQL Server applications to the cloud. All versions and editions of SQL Server are available, including free ones for dev/test and lightweight workloads.
  4. Azure SQL Database (Microsoft public cloud). This is a SQL Server database native to the cloud and compatible with most SQL Server features. It is also known as a platform as a service (PaaS) database or a database as a service (DBaaS). It delivers all the agility and world-class security features of Azure and is ideal for software as a service (SaaS) app development.
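That consistency shows up at the client level too: the same connection code works against any of the four options, with only the server address changing. The sketch below uses an ODBC-style connection string; the server names and driver name are placeholders, not taken from the article.

```python
def connection_string(server, database, user, password):
    """Build an ODBC connection string that works unchanged against
    on-premises SQL Server, SQL Server in an Azure VM, or Azure SQL
    Database; only the server value differs per deployment option."""
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server={server};Database={database};Uid={user};Pwd={password};"
    )

# Placeholder servers for three of the deployment options
on_prem = connection_string("sqlprod01", "Sales", "app", "secret")
azure_vm = connection_string("myvm.westeurope.cloudapp.azure.com",
                             "Sales", "app", "secret")
azure_sql = connection_string("myserver.database.windows.net",
                              "Sales", "app", "secret")
```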

When you run SQL Server on Windows Server, whether on-premises or in an IaaS virtual machine, you get the benefit of:

  • Improved database performance and availability with support for up to 24 terabytes of memory and 640 cores on a single server.
  • Built-in security at the operating system level. For example, when database admins can use a single Active Directory management pane across Azure and on-premises machines to set policies, enable/disable access, etc., it truly raises the security bar across the organization.
  • Simple and seamless upgrades with Rolling Upgrades.
  • Ability to make SQL highly available on any cloud with Storage Spaces Direct to create virtual shared storage across VMs.
  • Access to new classes of direct-attach storage (such as NVMe) for applications that require redundant storage across machines.
  • Reduced costs of hosting additional VMs by leveraging a Cloud Witness.

You benefit from the ability to use familiar server products, development tools, and technical expertise across all environments. No other platform delivers across this spectrum of implementations and builds in hybrid capabilities everywhere. Learn how to choose Azure SQL (PaaS) Database or SQL Server on Azure VMs (IaaS).

Free migration tools

Further easing the way to hybrid and cloud solutions are the SQL Azure Migration Wizard and other free migration tools. These are designed to provide easy migration of Windows Server 2016 servers to virtual machines in the cloud.

When determining how much hardware to allocate for certain applications, downsizing datacenters, or migrating existing workloads to virtual machines (VMs), you can tap into cloud capabilities in several ways:

  • Backup to Azure, including managed backup, backup to Azure Block Blobs, and Azure Storage snapshot backup.
  • The Azure Site Recovery tool to migrate workloads on on-premises VMs and physical servers to run on Azure VMs, with full replication and backup; to move Azure IaaS VMs between Azure regions; and to migrate AWS Windows instances to Azure IaaS VMs.
  • Easy addition of an Azure node to an AlwaysOn Availability Group in a hybrid environment.
  • Two new limited previews, Azure Database Migration Service and Azure SQL Database – Managed Instance, create a great path for customers looking for a way to easily modernize their existing database environment to a fully managed PaaS service without application redesign.

SQL Server License Mobility and Azure Hybrid Use Benefit for Windows Server

Even licensing is designed to ensure that wherever you deploy, you can cost-effectively take advantage of all the options.

  • SQL Server customers with active Software Assurance can use existing licenses on Azure Virtual Machines with no extra charges to SQL Server licensing. Simply assign core licenses equal to the virtual cores in the VM, and pay only for VM compute costs.
  • License Mobility ensures you can easily move SQL Server databases to the cloud using your existing licensing agreement with active Software Assurance. No additional licensing is required for SQL Server passive high availability (HA) nodes; you can configure a passive VM with up to the same compute as your active node to deliver uptime.
  • Windows Server customers with Software Assurance can save up to 40 percent by leveraging on-premises licenses to move workloads to Azure VMs with this Azure Hybrid Use Benefit.
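The licensing arithmetic behind these bullets is simple enough to sketch. Only the up-to-40-percent figure comes from the text above; the hourly rate and hours below are invented placeholders:

```python
def azure_hybrid_cost(rate_per_hour, hours, savings_pct):
    """Return (full, discounted) compute cost after applying a Hybrid
    Use Benefit discount of savings_pct (up to 40% for Windows Server
    customers with Software Assurance, per the text above)."""
    full = rate_per_hour * hours
    discounted = full * (1 - savings_pct / 100)
    return full, discounted

# Hypothetical rate: $1.00/hour for 730 hours, with the up-to-40% benefit
full, discounted = azure_hybrid_cost(1.00, 730, 40)
print(f"without benefit: ${full:.2f}, with benefit: ${discounted:.2f}")
```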

SQL Server 2016 with Windows Server 2016: Built for hybrid cloud

Microsoft continues to build in innovation so that organizations do not have to purchase expensive add-ins to get the benefits of the cloud with security features, simplicity, and consistency across on-premises and the cloud. Together, SQL Server 2016 and Windows Server 2016 will bear fruit for your organization. Get started on hybrid now.

Learn more about SQL Server in Azure VM in this datasheet.

Try SQL Server in Azure.

Improve security, performance, and flexibility with SQL Server 2016 and Windows Server 2016

By running SQL Server 2016 and Windows Server 2016 together you can unlock the full potential of the Microsoft data platform. This series of blogs on five reasons to run these two new releases together barely scratches the surface. What’s the best way to find out just how powerful this combination is? Try it out! Download your free trial of Windows Server 2016 and SQL Server 2016 today.

Read more
Categories: Database