
Workshop Content for Full-Stack Java and NoSQL Development Now Available

NorthScale Blog - Thu, 03/02/2017 - 22:20

About a week ago I was at DevNexus 2017 in Atlanta, Georgia, giving a workshop on creating full stack applications using a variety of technologies such as Java, Angular, Docker, Couchbase, and JavaFX.

DevNexus 2017 Workshop

Everyone who participated in the full-day workshop made it through successfully. So what did it consist of, specifically?

The workshop was broken into six parts:

  • Deploying an automatically provisioned Couchbase Cluster with Docker
  • Developing a RESTful API with Java and the Couchbase Java SDK
  • Designing a client front-end application using Angular
  • Designing a client front-end application using JavaFX for Desktop
  • Synchronizing a JavaFX application with Couchbase Mobile
  • Developing with REST and sync in the same application

So where am I going with this? I am pleased to announce that this same workshop, with all instructions and slides, is available for free on GitHub.

To get access to this Java workshop, visit the repository on GitHub and follow the README that is included. For more help developing with Couchbase, visit the Couchbase Developer Portal.

The post Workshop Content for Full-Stack Java and NoSQL Development Now Available appeared first on The Couchbase Blog.

Categories: Architecture, Database

Clean Your Buffers for Accurate Performance Testing

Database Journal News - Thu, 03/02/2017 - 09:01

In order to do accurate performance testing across multiple runs of a SQL Server command or script, you need to remember to clear the buffer, procedure, and system caches between test runs.
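For SQL Server, clearing those caches typically looks like the following commands; they flush caches for the entire instance, so run them only on a test server:

-- Write dirty pages to disk so clean buffers can be dropped
CHECKPOINT;
-- Clear the data cache (buffer pool)
DBCC DROPCLEANBUFFERS;
-- Clear the plan (procedure) cache
DBCC FREEPROCCACHE;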

Categories: Database

Who Changed My Database Schema?

Database Journal News - Thu, 03/02/2017 - 09:01

Have you ever wanted to know who made a schema change to your database?  If so, read on to learn how.
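The article's full approach isn't reproduced here, but one common way to capture this information is a database-level DDL trigger that logs every schema change; a minimal sketch, with a hypothetical audit table:

-- Hypothetical audit table; the article's actual approach may differ
CREATE TABLE dbo.SchemaChangeLog (
    EventTime DATETIME NOT NULL DEFAULT GETDATE(), -- when the change happened
    LoginName SYSNAME  NOT NULL,                   -- who made it
    EventData XML      NOT NULL                    -- full DDL event details
);
GO

CREATE TRIGGER trgCaptureSchemaChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.SchemaChangeLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
GO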

Categories: Database

Now available! SQL Server Premium Assurance provides six more years of product support

Today we are announcing general availability of SQL Server Premium Assurance, a new offering that enables flexibility to keep systems running without disruption while modernizing on your own schedule.

When you purchase Premium Assurance, you receive “critical” and “important” security updates and bulletins during the six years after the End of Extended Support. This means you can get up to 16 years of total support beginning with SQL Server 2008 and 2008 R2 versions.

To learn more about SQL Server Premium Assurance and its companion offering Windows Server Premium Assurance, visit the announcement on Hybrid Cloud blog. You can get the lowest price and lock in savings if you purchase Premium Assurance through June 2017. Prices will increase over time, so act now!

Categories: Database

Using Couchbase Full Text Search Service in Java

NorthScale Blog - Wed, 03/01/2017 - 17:46

Ratnopam Chakrabarti is a software developer currently working for Ericsson Inc. He has been focused on IoT, machine-to-machine technologies, connected cars, and smart city domains for quite a while. He loves learning new technologies and putting them to work. When he’s not working, he enjoys spending time with his 3-year-old son.

Full-text based search is a feature that allows users to search based on texts and keywords, and is very popular among users and the developer community. So it’s a no-brainer that there are lots of APIs and frameworks that offer full-text search, including Apache Solr, Lucene, and Elasticsearch, just to name a few. Couchbase, one of the leading NoSQL giants, started rolling out this feature in their Couchbase Server 4.5 release.

In this post, I am going to describe how to integrate the full-text search service into your application using the Couchbase Java SDK.

Set Up

Go to start.spring.io and select Couchbase as a dependency for your Spring Boot application.

Once you have the project set up, you should see the following dependency in your project object model (pom.xml) file. It ensures that all Couchbase libraries are in place for the app.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-couchbase</artifactId>
</dependency>

You need to set up a Couchbase bucket to house your sample dataset to search on.

I have created a bucket named “conference” in the Couchbase admin console.

Couchbase admin console

The “conference” bucket currently has three documents, which hold data about different conferences held across the world. You can extend this data model or create your own if you would like to experiment. For instance, résumés, product catalogs, or even tweets make a good use case for full-text search. For this example, though, let’s stick to the conference data as shown below:


{
  "title": "DockerCon",
  "type": "Conference",
  "location": "Austin",
  "start": "04/17/2017",
  "end": "04/20/2017",
  "topics": [
    "containers",
    "devops",
    "microservices",
    "product development",
    "virtualization"
  ],
  "attendees": 20000,
  "summary": "DockerCon will feature topics and content covering all aspects of Docker and its ecosystem and will be suitable for Developers, DevOps, System Administrators and C-level executives",
  "social": {
    "facebook": "https://www.facebook.com/dockercon",
    "twitter": "https://www.twitter.com/dockercon"
  },
  "speakers": [
    {
      "name": "Arun Gupta",
      "talk": "Docker with couchbase",
      "date": "04/18/2017",
      "duration": "2"
    },
    {
      "name": "Laura Frank",
      "talk": "Opensource",
      "date": "04/19/2017",
      "duration": "2"
    }
  ]
}

In order to use full-text search on the above dataset, you first need to create a full-text search index. Follow these steps:

In the Couchbase admin console, click on the Indexes tab.

Click on the Full Text link, which will list the current full text indexes.


As you can guess, I have created an index named “conference-search” which I will use from the Java code to search the conference-related data.

Click on the New Full Text Index button to create a new index.


Yes, it’s that easy. Once you have created the index, you are ready to use the index from the app you are building.
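If you would rather script the index than click through the UI, the same index can also be created through the full-text search REST API; a minimal sketch, assuming the default FTS port and no custom type mappings:

curl -X PUT http://localhost:8094/api/index/conference-search \
  -H 'Content-Type: application/json' \
  -d '{"type": "fulltext-index", "sourceType": "couchbase", "sourceName": "conference"}'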

Before we dive into the code, let’s have a look at the other two documents that are already in the bucket.


Conference::2

{
  "title": "Devoxx UK",
  "type": "Conference",
  "location": "Belgium",
  "start": "05/11/2017",
  "end": "05/12/2017",
  "topics": [
    "cloud",
    "iot",
    "big data",
    "machine learning",
    "virtual reality"
  ],
  "attendees": 10000,
  "summary": "Devoxx UK returns to London in 2017. Once again we will welcome amazing speakers and attendees for the very best developer content and awesome experiences",
  "social": {
    "facebook": "https://www.facebook.com/devoxxUK",
    "twitter": "https://www.twitter.com/devoxxUK"
  },
  "speakers": [
    {
      "name": "Viktor Farcic",
      "talk": "Cloudbees",
      "date": "05/11/2017",
      "duration": "2"
    },
    {
      "name": "Patrick Kua",
      "talk": "Thoughtworks",
      "date": "05/12/2017",
      "duration": "2"
    }
  ]
}

 

Conference::3

{
  "title": "ReInvent",
  "type": "Conference",
  "location": "Las Vegas",
  "start": "11/28/2017",
  "end": "11/30/2017",
  "topics": [
    "aws",
    "serverless",
    "microservices",
    "cloud computing",
    "augmented reality"
  ],
  "attendees": 30000,
  "summary": "Amazon web services reInvent 2017 promises a larger venue, more sessions and a focus on technologies like microservices and Lambda.",
  "social": {
    "facebook": "https://www.facebook.com/reinvent",
    "twitter": "https://www.twitter.com/reinvent"
  },
  "speakers": [
    {
      "name": "Ryan K",
      "talk": "Amazon Alexa",
      "date": "11/28/2017",
      "duration": "2.5"
    },
    {
      "name": "Anthony J",
      "talk": "Lambda",
      "date": "11/29/2017",
      "duration": "1.5"
    }
  ]
}

Invoking Full-Text Search from Java Code

Connect to the Couchbase Bucket from Code

Spring Boot offers a convenient way to connect to the Couchbase environment by allowing us to specify certain Couchbase environment details as Spring configuration. We normally specify the following parameters in the application.properties file:

spring.couchbase.bootstrap-hosts=127.0.0.1
spring.couchbase.bucket.name=conference
spring.couchbase.bucket.password=

Here, I have specified the localhost IP, since I am running Couchbase Server on my laptop. Note: if you run Couchbase as a Docker container, provide the IP address of the container instead.

The bucket name has to match the name of the bucket created using the Couchbase console.

We can also specify several IP addresses as bootstrap-hosts, and Spring will build a Couchbase cluster environment from all the nodes running Couchbase. If a password was set up when the bucket was created, we can specify that as well; otherwise we leave that field empty, as in our case.
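For example, a three-node cluster could be bootstrapped like this (the IP addresses are placeholders):

spring.couchbase.bootstrap-hosts=10.0.0.1,10.0.0.2,10.0.0.3
spring.couchbase.bucket.name=conference
spring.couchbase.bucket.password=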

In order to run queries against our desired bucket, we first need a reference to the bucket object, and the spring-couchbase configuration does all the heavy lifting behind the scenes for us. All we have to do is inject the bucket through the constructor of the Spring service bean class.

Here is the code:


@Service
public class FullTextSearchService {

    private Bucket bucket;

    public FullTextSearchService(Bucket bucket) {
        this.bucket = bucket;
        log.info("******** Bucket :: = " + bucket.name());
    }

    public void findByTextMatch(String searchText) throws Exception {
        SearchQueryResult result = bucket.query(
            new SearchQuery(FtsConstants.FTS_IDX_CONF, SearchQuery.matchPhrase(searchText)).fields("summary"));
        for (SearchQueryRow hit : result.hits()) {
            log.info("****** score := " + hit.score() + " and content := "
                + bucket.get(hit.id()).content().get("title"));
        }
    }
}

We can also customize some of the CouchbaseEnvironment settings. For a detailed list of the parameters we can customize, take a look at the Couchbase reference documentation.
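Spring Boot also exposes several of these environment settings as properties; a sketch of the kind of tuning that is possible (the timeout values here are arbitrary, in milliseconds):

spring.couchbase.env.timeouts.connect=10000
spring.couchbase.env.timeouts.key-value=3000
spring.couchbase.env.timeouts.query=15000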

At this point, we can invoke the service from the CommandLineRunner bean.


@Configuration
public class FtsRunner implements CommandLineRunner {

    @Autowired
    FullTextSearchService fts;

    @Override
    public void run(String... arg0) throws Exception {
        fts.findByTextMatch("developer");
    }
}

Using Full-Text Search Service

At the core of the Java SDK, Couchbase offers the query() method as a way of querying a specified bucket. If you are familiar with N1QL queries or view queries, the query() method follows a similar pattern; the only difference for search is that it accepts a SearchQuery object as an argument.

Following is the code that searches for a given text in the “conference” bucket. The getBucket() method returns a handle to the bucket.

When creating a SearchQuery, you need to supply the name of the index that you created in the Set Up section above. Here, I am using “conference-search” as the index, which is specified in FtsConstants.FTS_IDX_CONF. By the way, the full source code of the app has been uploaded to GitHub and is available for download; the link is at the end of the post.


public static void findByTextMatch(String searchText) throws Exception {
    SearchQueryResult result = getBucket().query(
        new SearchQuery(FtsConstants.FTS_IDX_CONF, SearchQuery.matchPhrase(searchText)).fields("summary"));
    log.info("****** total hits := " + result.hits().size());
    for (SearchQueryRow hit : result.hits()) {
        log.info("****** score := " + hit.score() + " and content := " + bucket.get(hit.id()).content().get("title"));
    }
}

The above code is searching on the “summary” field of the documents in the bucket by using the matchPhrase(searchText) method.

The code is invoked by a simple call:

findByTextMatch("developer");

So, the full-text search should return all documents in the conference bucket that have the text “developer” in their summary field. Here’s the output:

Opened bucket conference

****** total hits := 1

****** score := 0.036940739161339185 and content := Devoxx UK

The total hits value represents the total number of matches found. Here it’s 1, and the corresponding score for that match is also shown. The code doesn’t print the entire document; it just outputs the conference title. You can print the other attributes of the document if you wish.

There are other ways of using the SearchQuery which are discussed next.

Fuzzy Text Search

You can perform a fuzzy query by specifying a maximum Levenshtein distance, via fuzziness(), to allow on the term. The default fuzziness is 2.

For example, let’s say I want to find the conference where “sysops” is one of the “topics”. From the dataset above, you can see there’s no “sysops” topic present in any of the conferences. The closest match is “devops”; however, that is a Levenshtein distance of 3 away. So, if I run the following code with a fuzziness of 1 or 2, it shouldn’t bring back any results, and indeed it doesn’t.


SearchQueryResult resultFuzzy = getBucket().query(
    new SearchQuery(FtsConstants.FTS_IDX_CONF, SearchQuery.match(searchText).fuzziness(2)).fields("topics"));
log.info("****** total hits := " + resultFuzzy.hits().size());
for (SearchQueryRow hit : resultFuzzy.hits()) {
    log.info("****** score := " + hit.score() + " and content := " + bucket.get(hit.id()).content().get("topics"));
}

findByTextFuzzy("sysops"); gives the following output:

total hits := 0

Now, if I change the fuzziness to “3” and invoke the same code again, I get a document back. Here goes:

****** total hits := 1

****** score := 0.016616112953992054 and content := ["containers","devops","microservices","product development","virtualization"]

Since “devops” matches “sysops” with a fuzziness of 3, the search is able to find the document.

Regular Expression Query

You can do regular expression-based queries using SearchQuery. The following code makes use of RegexpQuery to search on “topics” based on a supplied pattern.

RegexpQuery rq = new RegexpQuery(regexp).field("topics");
SearchQueryResult resultRegExp = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, rq));
log.info("****** total hits := " + resultRegExp.hits().size());
for (SearchQueryRow hit : resultRegExp.hits()) {
    log.info("****** score := " + hit.score() + " and content := " + bucket.get(hit.id()).content().get("topics"));
}


When invoked as

findByRegExp("[a-z]*\\s*reality");

it returns the following two documents:

****** total hits := 2

****** score := 0.11597946228887497 and content := ["aws","serverless","microservices","cloud computing","augmented reality"]

****** score := 0.1084888528694293 and content := ["cloud","iot","big data","machine learning","virtual reality"]

Querying by Prefix

Couchbase enables you to query based on a “prefix” of a text element. The API searches for text that starts with the specified prefix. The code is simple to use; it searches the “summary” field of the documents for text with the supplied prefix.


PrefixQuery pq = new PrefixQuery(prefix).field("summary");
SearchQueryResult resultPrefix = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, pq).fields("summary"));
log.info("****** total hits := " + resultPrefix.hits().size());
for (SearchQueryRow hit : resultPrefix.hits()) {
    log.info("****** score := " + hit.score() + " and content := " + bucket.get(hit.id()).content().get("summary"));
}

If you invoke the code as

findByPrefix("micro");

you get the following output:

****** total hits := 1

****** score := 0.08200986407165835 and content := Amazon web services reInvent 2017 promises a larger venue, more sessions and a focus on technologies like microservices and Lambda.

Query by Phrase

The following code lets you search for a phrase within a text field.


MatchPhraseQuery mpq = new MatchPhraseQuery(matchPhrase).field("speakers.talk");
SearchQueryResult resultPrefix = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, mpq).fields("speakers.talk"));
log.info("****** total hits := " + resultPrefix.hits().size());
for (SearchQueryRow hit : resultPrefix.hits()) {
    log.info("****** score := " + hit.score() + " and content := " + bucket.get(hit.id()).content().get("title") + " speakers = " + bucket.get(hit.id()).content().get("speakers"));
}


Here, the query is looking for a phrase in the “speakers.talk” field and returns the match if found.

A sample invocation of the above code with

findByMatchPhrase("Docker with couchbase");

gives the following expected output:

****** total hits := 1

****** score := 0.25054427342401087 and content := DockerCon speakers = [{"duration":"2","date":"04/18/2017","talk":"Docker with couchbase","name":"Arun Gupta"},{"duration":"2","date":"04/19/2017","talk":"Opensource","name":"Laura Frank"}]

Range Query

Full-text search is also pretty useful when it comes to range-based searching, be it a numeric range or even a date range. For example, if you want to find the conference(s) where the number of attendees falls within a given range, you can easily do that with:

findByNumberRange(5000, 30000);

Here, the first argument is the min of the range and the second argument is the max of the range.

Here’s the code that gets triggered:


NumericRangeQuery nrq = new NumericRangeQuery().min(min).max(max).field("attendees");
SearchQueryResult resultPrefix = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, nrq).fields("title", "attendees", "location"));
log.info("****** total hits := " + resultPrefix.hits().size());
for (SearchQueryRow hit : resultPrefix.hits()) {
    JsonDocument row = bucket.get(hit.id());
    log.info("****** score := " + hit.score() + " and title := " + row.content().get("title") + " attendees := " + row.content().get("attendees") + " location := " + row.content().get("location"));
}


It gives the following output; the conferences whose attendee counts fall within the supplied range are returned.

****** total hits := 2

****** score := 5.513997563179222E-5 and title := DockerCon attendees := 20000 location := Austin

****** score := 5.513997563179222E-5 and title := Devoxx UK attendees := 10000 location := Belgium
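Date ranges follow the same pattern; a hedged sketch against the conference “start” field, assuming the index maps those fields as dates (this snippet is not from the original app):

// Find conferences starting in a given window; date strings follow the sample data's format
DateRangeQuery drq = new DateRangeQuery().start("04/01/2017").end("06/30/2017").field("start");
SearchQueryResult resultDate = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, drq).fields("title", "start"));
for (SearchQueryRow hit : resultDate.hits()) {
    log.info("****** score := " + hit.score() + " and title := " + bucket.get(hit.id()).content().get("title"));
}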

Combination Query

The Couchbase full-text search service allows you to combine queries according to your needs. To demonstrate this, let’s first invoke the API by supplying two arguments.

findByMatchCombination("aws", "containers");

Here, the client code is trying to use the combination search based on “aws” and “containers”. Let’s look at the query API now.


MatchQuery mq1 = new MatchQuery(text1).field("topics");
MatchQuery mq2 = new MatchQuery(text2).field("topics");

SearchQueryResult match1Result = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, mq1).fields("title", "attendees", "location", "topics"));
log.info("****** total hits for match1 := " + match1Result.hits().size());
for (SearchQueryRow hit : match1Result.hits()) {
    JsonDocument row = bucket.get(hit.id());
    log.info("****** scores for match 1 := " + hit.score() + " and title := " + row.content().get("title") + " attendees := " + row.content().get("attendees") + " topics := " + row.content().get("topics"));
}

SearchQueryResult match2Result = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, mq2).fields("title", "attendees", "location", "topics"));
log.info("****** total hits for match2 := " + match2Result.hits().size());
for (SearchQueryRow hit : match2Result.hits()) {
    JsonDocument row = bucket.get(hit.id());
    log.info("****** scores for match 2 := " + hit.score() + " and title := " + row.content().get("title") + " attendees := " + row.content().get("attendees") + " topics := " + row.content().get("topics"));
}

ConjunctionQuery conjunction = new ConjunctionQuery(mq1, mq2);
SearchQueryResult result = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, conjunction).fields("title", "attendees", "location", "topics"));
log.info("****** total hits for conjunction query := " + result.hits().size());
for (SearchQueryRow hit : result.hits()) {
    JsonDocument row = bucket.get(hit.id());
    log.info("****** scores for conjunction query := " + hit.score() + " and title := " + row.content().get("title") + " attendees := " + row.content().get("attendees") + " topics := " + row.content().get("topics"));
}

DisjunctionQuery dis = new DisjunctionQuery(mq1, mq2);
SearchQueryResult resultDis = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, dis).fields("title", "attendees", "location", "topics"));
log.info("****** total hits for disjunction query := " + resultDis.hits().size());
for (SearchQueryRow hit : resultDis.hits()) {
    JsonDocument row = bucket.get(hit.id());
    log.info("****** scores for disjunction query := " + hit.score() + " and title := " + row.content().get("title") + " attendees := " + row.content().get("attendees") + " topics := " + row.content().get("topics"));
}

BooleanQuery bool = new BooleanQuery().must(mq1).mustNot(mq2);
SearchQueryResult resultBool = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, bool).fields("title", "attendees", "location", "topics"));
log.info("****** total hits for boolean query := " + resultBool.hits().size());
for (SearchQueryRow hit : resultBool.hits()) {
    JsonDocument row = bucket.get(hit.id());
    log.info("****** scores for resultBool query := " + hit.score() + " and title := " + row.content().get("title") + " attendees := " + row.content().get("attendees") + " topics := " + row.content().get("topics"));
}


First, individual matches are found based on the texts. We find the documents matching “aws” as one of the conference topics. In the same way, we find the documents having “containers” among their topics.

Next, we start combining the individual results to form combination queries.

Conjunction Query

A conjunction query returns all matching conferences that have both “aws” and “containers” listed as topics. Our current dataset doesn’t have such a conference yet, so as expected, when we run the query we don’t get back any matching documents.

****** total hits for match1 := 1   -- this matches "aws"

****** scores for match 1 := 0.11597946228887497 and title := ReInvent attendees := 30000 topics := ["aws","serverless","microservices","cloud computing","augmented reality"]

****** total hits for match2 := 1   -- this matches "containers"

****** scores for match 2 := 0.12527214351929328 and title := DockerCon attendees := 20000 topics := ["containers","devops","microservices","product development","virtualization"]

****** total hits for conjunction query := 0

Disjunction Query

A disjunction query returns all matching conferences if any one of the candidate queries returns a match. Since each of the individual match queries returns one conference, when we run our disjunction query we get back both of those results.

****** total hits for disjunction query := 2

****** scores for disjunction query := 0.018374455634478874 and title := DockerCon attendees := 20000 topics := ["containers","devops","microservices","product development","virtualization"]

****** scores for disjunction query := 0.01701143945069833 and title := ReInvent attendees := 30000 topics := ["aws","serverless","microservices","cloud computing","augmented reality"]

Boolean Query

Using a Boolean query, we can combine match queries in different ways. For example, BooleanQuery bool = new BooleanQuery().must(mq1).mustNot(mq2) returns all conferences that match the first query (mq1) and, at the same time, do not match mq2. You can also flip the combination around, as the sketch below shows.
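For instance, flipping the combination would look like this (a sketch; against our dataset it would return DockerCon instead):

// Must match "containers" (mq2) and must not match "aws" (mq1)
BooleanQuery flipped = new BooleanQuery().must(mq2).mustNot(mq1);
SearchQueryResult resultFlipped = getBucket().query(new SearchQuery(FtsConstants.FTS_IDX_CONF, flipped).fields("title", "topics"));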

The output of the original must(mq1).mustNot(mq2) query is as follows:

****** total hits for boolean query := 1

****** scores for resultBool query := 0.11597946228887497 and title := ReInvent attendees := 30000 topics := ["aws","serverless","microservices","cloud computing","augmented reality"]

It returns the conference that has a topic named “aws” (which matches mq1) and does not have a topic named “containers” (i.e., mq2). The only conference that satisfies both these conditions is titled “ReInvent”, and that gets returned as output.

I hope you found the post useful. The source code can be found online. For a general idea of the Couchbase full-text search service, the Couchbase blog has some useful introductory posts.

The post Using Couchbase Full Text Search Service in Java appeared first on The Couchbase Blog.

Categories: Architecture, Database

Oracle and Pluralsight Bring New Cloud Learning and Enablement Resources to Developers

Oracle Database News - Wed, 03/01/2017 - 14:00
Press Release

Oracle and Pluralsight Bring New Cloud Learning and Enablement Resources to Developers

Pluralsight Developer Members to Gain Access to New Oracle Cloud Courses

Redwood Shores, Calif.—Mar 1, 2017

Oracle today announced a new collaboration with Pluralsight, a technology learning platform for software developers and Silver level member of Oracle PartnerNetwork.

Through the collaboration, developers will gain access to three new Oracle learning pathways on the Pluralsight platform including Oracle Cloud: Java Development, Oracle Cloud: Node.js Development and Oracle Cloud: IaaS Foundations. They will also be able to leverage two new Oracle Cloud courses including Oracle Cloud for Developers and Oracle Compute Cloud Service Foundations. The announcement was made at Oracle Code San Francisco, the first of a new global series of developer-focused events.

“Pluralsight has built a rich community and library of content that help developers take their skills to the next level,” said Damien Carey, Senior Vice President, Oracle University. “By expanding Pluralsight’s offering with new Oracle courses, we are providing increased opportunity for developers to learn valuable new skills and techniques to keep up with the latest and ever-changing demands of the software development world.”

Created by industry experts and curated by Pluralsight in collaboration with Oracle University, the Oracle learning pathways and new courses are available in English and will be delivered in all countries where Pluralsight is available. The new courses, designed to empower Oracle developers to advance their skillset, will be offered through Pluralsight’s existing subscription options.

“As the technology landscape evolves at a rapid pace, it’s critical for software developers to continually build their skillset to remain competitive and at the top of their game,” said Pluralsight co-founder and CEO, Aaron Skonnard. “We’re excited to team up with Oracle to activate their community of technology professionals with the tools they need to benchmark and master key skills and continue building next generation technology.”

Contact Info

Scott Thornburg
Oracle
+1.415.816.8844
scott.thornburg@oracle.com

Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com

Mariangel Babbel
Pluralsight
+1.801.784.9150
mariangel-babbel@pluralsight.com

About Pluralsight

Pluralsight is an enterprise technology learning platform that delivers a unified, end-to-end learning experience for businesses across the globe. Through a subscription service, companies are empowered to move at the speed of technology, increasing proficiency, innovation and efficiency. Founded in 2004 and trusted by Fortune 500 companies, Pluralsight provides members with on-demand access to a digital ecosystem of learning tools, including adaptive skill tests, directed learning paths, expert-authored courses, interactive labs and live mentoring. For more information, visit www.pluralsight.com.

About Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data. The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day. For more information, please visit us at http://cloud.oracle.com.


About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle's partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Scott Thornburg

  • +1.415.816.8844

Kristin Reeves

  • +1.415.856.5145

Mariangel Babbel

  • +1.801.784.9150

Follow

Follow Oracle Corporate

Categories: Database, Vendor

Couchbase Mobile Changes Explorer – Part. 1

NorthScale Blog - Wed, 03/01/2017 - 00:58

The Couchbase Sync Gateway changes feed can be useful for driving various kinds of logic besides replications. To help understand the intricacies of the changes feed, I developed a simple tool. In this blog post I’ll talk about what the app does. In subsequent posts I’ll walk through the code, and then talk about key output to know.

Introduction

The Couchbase Mobile stack comprises three components: Couchbase Server, Sync Gateway, and Couchbase Lite. Each of these elements is useful in its own right. In a typical scenario, though, they all function together.

Sophisticated, robust synchronization of data is tricky. This is where Couchbase Mobile shines. Rather than rely on an always-on network or inconsistent clocks, it uses Multiversion Concurrency Control (MVCC). This approach gives developers both reliability and the flexibility to handle document conflicts.

Sync Gateway is the key to enabling data synchronization amongst all the pieces. To do this, Couchbase Mobile uses something called the changes feed. Couchbase Lite clients access the feed to drive data replication. Although the changes feed was designed with only that purpose in mind, it can serve other goals as well.  This makes it worth understanding in more depth.

CBM Changes Explorer

I call the tool the CBM Changes Explorer. The app allows you to manipulate data via Couchbase Lite while simultaneously monitoring a Sync Gateway changes feed. You can also run more than one instance at a time, making it possible to see how clients and Sync Gateway interact when going on and off line.

Couchbase Mobile Changes Explorer Animated Gif

This animation shows the UI in action. On top we have three panes. The leftmost pane shows a list of all documents in the local (Couchbase Lite) database. The center pane both shows the contents of any document selected from the left-hand list and allows editing and saving of new documents or document revisions.

The rightmost pane shows the output of the changes feed from Sync Gateway.

The username and password fields set the values used in basic authentication between Couchbase Lite and Sync Gateway. If you enable the GUEST user, no authentication is needed. The “Apply” button toggles authentication use on and off. (This means even if you have supplied credentials, you can turn off authentication with the toggle.)

The Save button tries to save the current JSON in the Contents pane as a new revision. You can create a new document by entering a structure with no _rev entry. You can also easily create new revisions of an existing document by first selecting it in the document list, modifying the contents, then saving it. The _rev entry will automatically have the current value, which Couchbase Lite needs to create a descendant revision.

The Sync button toggles both continuous push and pull replications between the Couchbase Lite client and Sync Gateway on and off.

Example: Showing Conflicts in the Changes Feed Output

Combining these features, you can create revision conflicts to see how they work. Sync a document to two separate instances of the app. Turn off sync. Then make different changes to the document in each instance of the client. Turn sync back on and you’ll have a conflict.

To demonstrate, here’s the changes feed for one update of a document without conflicts

{
    "changes": [
        {
            "rev": "3-dcb456e2abf57fcedd3c912d73f0dc47"
        }
    ],
    "doc": {
        "_id": "doc",
        "_rev": "3-dcb456e2abf57fcedd3c912d73f0dc47",
        "channels": [
            "105.3"
        ],
        "key": "#2"
    },
    "id": "doc",
    "seq": 7
}

and here’s the same document with a conflict, this time shown with the enclosing results wrapper of the response:

{
    "results": [ {
        "seq": 7,
        "id": "doc",
        "doc": {
            "_id": "doc",
            "_rev": "3-dcb456e2abf57fcedd3c912d73f0dc47",
            "channels": [ "105.3" ],
            "key": "#2"
        },
        "changes": [ {
            "rev": "3-dcb456e2abf57fcedd3c912d73f0dc47"
        }, {
            "rev": "3-07e37dc9e819d7a4c20e5d125f56c714"
        } ]
    } ],
    "last_seq": "7"
}

(Note: The output from the changes feed is controlled by several parameters. To get the feed shown above, you need to set active_only to false, include_docs to true, and style to all_docs.)
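For reference, a raw request with those parameters might look like this; a sketch against the Sync Gateway admin port, with a database named “db” as a placeholder:

curl 'http://localhost:4985/db/_changes?active_only=false&include_docs=true&style=all_docs'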

Notice the difference in the changes array. It lists two revisions. The document data still shows only the default winning revision. The data for other revisions remains available in order to let us choose how to resolve conflicts. In a real-world app we’d want to resolve this conflict. Otherwise the alternate data remains in the database and can cause it to grow unnecessarily large.

Next Steps

I’ve only touched on a small portion of what you can try out. You can also use command line tools and the Sync Gateway admin interface. Take a look here for more on that.

The application code itself consists of about 400 lines of code and another 100 lines for the UI xml. I’ll walk through the code in part 2 of this post. There are quite a few simple features that would make the app more useful, so contributions are welcome.

The project is now up on GitHub.

Postscript

Check out more resources on our developer portal and follow us on Twitter @CouchbaseDev.

You can post questions on our forums. And we actively participate on Stack Overflow.

Hit me up on Twitter with any questions, comments, topics you’d like to see, etc. @HodGreeley

The post Couchbase Mobile Changes Explorer – Part. 1 appeared first on The Couchbase Blog.

Categories: Architecture, Database

Graphql server with node and couchbase, ottoman and spatial view

NorthScale Blog - Tue, 02/28/2017 - 17:41

Jose Navarro is a full stack developer at FAMOCO in Brussels, Belgium. He has been working for the last three years as a web developer with Node.js, Java, AngularJS, and ReactJS, and has a deep interest in web development and mobile technologies.

We are going to develop a GraphQL server in Node.js with Express. GraphQL is a query language for APIs; it was developed by Facebook and released in 2015. It is designed for building client applications by providing an intuitive and flexible syntax and system for describing their data requirements and interactions. One of the biggest differences from REST APIs is that you have a single entry endpoint for all resources, instead of one endpoint per resource. With GraphQL you specify which attributes you want in each request, instead of receiving whatever the REST API service returns, so we can be sure that we have all the data we need while reducing the size of our requests.

Facebook has been using GraphQL for a few years in their mobile apps, for example in the iOS app. Old versions of their mobile app still work without problems because the GraphQL endpoint and schema did not change; this probably would not be possible with REST APIs, because when you release a new version of your API the endpoints tend to change, so clients need to adapt to the new endpoints and data. GitHub has also opened up a GraphQL server, so users can query their services using GraphQL instead of the REST APIs.

With the server we are going to query and create Places. To store the data we are going to use Couchbase, and we will use spatial views to query the Places by their geographical location. I wrote a previous post about Node and Couchbase, so I will skip the database configuration that I covered there.
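For reference, the database wiring from that earlier post boils down to something like the following; a minimal sketch, where the module layout and connection string are assumptions:

// db.js - assumed setup following the earlier post
const couchbase = require('couchbase');
const ottoman = require('ottoman');

// Connect to the local cluster and open the bucket created below
const cluster = new couchbase.Cluster('couchbase://localhost');
const bucket = cluster.openBucket('graphql');

// Point ottoman at the same bucket so models are stored there
ottoman.bucket = bucket;

module.exports = { couchbase, bucket, ottoman };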

Requirements

You need to have installed in your computer:

– Node.js

– Couchbase Server

You can find the code in the GitHub repo.

Spatial View

First of all we need to create the spatial view. Go to the admin page (in my case http://localhost:8091/) and log in with your user and password. Then click on Data Buckets and create a bucket; I called it graphql. After that, click on Views, then click on Create Development Spatial View, and type in the values.

Create Spatial Development View

I used place_by_location as both the Design Document Name and the View Name. Now click on Edit, and add the following code:


function (doc) {
  if (doc._type === 'Place' && doc.location) {
    emit([{
      "type": "Point",
      "coordinates": [doc.location.lon, doc.location.lat]
    }], doc);
  }
}


and click on Save.

Here you can also test the view against the documents in the bucket.

Place Model

For our places, we are going to store the name of the place and a description as strings.

Since we want to use the spatial view that Couchbase provides, we are going to store the location of the place in an object called location, holding the latitude and longitude.

We will also set the created date by default when we add the place.


const PlaceModel = ottoman.model('Place', {
  name: 'string',
  description: 'string',
  location: {
    lat: 'number',
    lon: 'number'
  },
  created: {
    type: 'Date',
    default: Date.now
  }
});


For the spatial query, we are going to define a function that performs it, using a spatial query object from the couchbase package.


const queryByLocation = (bbox = [0, 0, 0, 0], next) => {
  const query = couchbase.SpatialQuery.from('dev_place_by_location', 'place_by_location').bbox(bbox);
  bucket.query(query, next);
};


In the from function, we have to provide the design document name and the view name. Then, for bbox (the bounding box), we need to provide an array of four floats: [min Longitude, min Latitude, max Longitude, max Latitude].

The last step is to perform the query in the bucket.

Graphql Server

We are going to use an Express server and the express-graphql package.

We import the schema of our GraphQL server, which we are going to define later.


const express = require('express');
const graphqlHTTP = require('express-graphql');
const PORT = process.env.PORT || 5000;
const schema = require('./schemas');

const app = express();

app.use('/graphql', graphqlHTTP({
  schema: schema,
  graphiql: true,
}));

// start server
const server = app.listen(PORT, () => {
  console.log(`Server started at ${server.address().port}`);
});


In the GraphQL server, we are going to use the route /graphql. We also set some options, like graphiql, which gives us a graphical interface to execute queries.

The last step is to start our Express server.

Graphql Schemas

GraphQL queries and mutations rely on the schemas that we define, so we have to create a schema for our Place object.

First we are going to define a Schema for Location.


const {
  GraphQLObjectType,
  GraphQLFloat,
  GraphQLNonNull,
} = require('graphql');

const LocationSchema = new GraphQLObjectType({
  name: 'Location',
  description: 'Geographical location',
  fields: {
    lat: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Latitude',
    },
    lon: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Longitude',
    },
  }
});

module.exports = LocationSchema;

We need to import the types from the graphql package. In our schema we can define a name and a description; these fields are useful for documenting our queries, so users can understand what each field means.

Then we have to define fields, where we specify the fields inside our schema; in this case, we have defined lat and lon. For every field we have to specify a type. Here both fields are float values and they are required, so we use GraphQLNonNull with the type GraphQLFloat. We add a description so we know what they mean.

Now we are going to define the Place schema.

Here we import the types from graphql along with the Location schema that we have defined.


const {
  GraphQLObjectType,
  GraphQLString,
  GraphQLNonNull,
} = require('graphql');

const LocationSchema = require('./location');

const PlaceSchema = new GraphQLObjectType({
  name: 'Place',
  description: 'Place description',
  fields: {
    id: {
      type: GraphQLString,
      resolve(place) {
        return place._id;
      }
    },
    name: {
      type: GraphQLString,
    },
    description: {
      type: GraphQLString,
    },
    location: {
      type: LocationSchema,
    },
    created: {
      type: GraphQLString,
    }
  }
});

module.exports = PlaceSchema;

We are matching the fields from the model, so we do not have to provide a resolve function, except for the field id, because Couchbase returns that value in the field _id.

Graphql Query

Queries are the way we retrieve data from the server.

The query object is also a schema, like the previous ones. In this case the fields are the queries we allow the user to perform. We are going to define three queries.

allPlaces

In this query we are going to ask for all the places in the database, ordered by the created field so that we return the newest places first.

Since we are going to return an array of Places, we set the type to GraphQLList and provide the Place schema.


...
const PlaceSchema = require('./place');
const Place = require('../models/place');
...

allPlaces: {
  type: new GraphQLList(PlaceSchema),
  description: 'Query for all places',
  resolve(root, args) {
    return new Promise((resolve, reject) => {
      Place.find({}, {
        sort: {
          created: -1
        },
      }, (err, places) => {
        if (err) {
          reject(err);
        }
        resolve(places);
      })
    });
  }
}

We also add a description; this field is optional.

The last parameter is the resolve function, which specifies how we are going to retrieve the data from our database. As our calls to the database are asynchronous, we return a promise that uses the Place model we defined with Ottoman. With the model, we use find to query for documents. We pass an empty object as the first parameter because we want all the documents; the second parameter holds the options of our query, in this case ordering by the created field in descending order. Finally, we provide the callback function that resolves the promise with the values, or rejects it in case of an error.

Places

In this query we are going to use the spatial view, so we have to pass the bbox points as parameters of the query.


...
const queryByLocation = require('../models/place').queryByLocation;
...

Places: {
  type: new GraphQLList(PlaceSchema),
  description: 'Query for all places inside the boundary box',
  args: {
    minLon: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Min Longitude of the boundary box',
    },
    maxLon: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Max Longitude of the boundary box',
    },
    minLat: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Min Latitude of the boundary box',
    },
    maxLat: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Max Latitude of the boundary box',
    },
  },
  resolve(root, args) {
    // bbox = [ min Longitude, min Latitude, max Longitude, max Latitude ]
    const bbox = [
      args.minLon,
      args.minLat,
      args.maxLon,
      args.maxLat,
    ];
    return new Promise((resolve, reject) => {
      queryByLocation(bbox, (err, places) => {
        if (err) {
          reject(err);
        }
        resolve(places.map((place) => place.value));
      })
    });
  }
}

First we import the function that we defined to perform the spatial query.

As in the previous query, we define the type as an array of places and add a description.

This query needs parameters, so we define args; each value inside args corresponds to a parameter. In this case we define four, minLon, maxLon, minLat and maxLat, all of them required floats.

The resolve function is again a promise. First we build the bbox array to pass to queryByLocation. In case of an error, we reject the promise with the error; in case of success, we map the objects returned from the DB, because the spatial view returns the geopoint and a value, where the value is the full document (this would change if we defined a different spatial view).

Place

The last query we are going to define is the one that fetches a single place by its id.


Place: {
  type: PlaceSchema,
  description: 'Query for a place by the place id',
  args: {
    id: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'Place id',
    }
  },
  resolve(root, args) {
    return new Promise((resolve, reject) => {
      Place.getById(args.id, (err, place) => {
        if (err) {
          reject(err);
        }
        resolve(place);
      });
    });
  }
}


In this case, the type is the Place schema; in args, we only need to define the id, which we set to a required string.

The resolve function is again a promise; in this case we use the getById function from the model and pass the id value from the args object.

Graphql Mutation

With mutations we can modify the data on our server. Like queries, mutation objects are schemas, so we have to define the same fields as in the previous schemas.

When we perform a mutation we provide the values between parentheses and, as with queries, we specify the fields we want to retrieve from the modified object.

Here we are going to perform the creation, update and delete of Places.

createPlace

In this mutation we are going to create a new place.

We define the schema type as the Place schema, because we are going to return the created Place.


createPlace: {
  type: PlaceSchema,
  description: 'Create a place',
  args: {
    name: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'Name of the place',
    },
    description: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'Description of the place',
    },
    latitude: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Latitude of the place',
    },
    longitude: {
      type: new GraphQLNonNull(GraphQLFloat),
      description: 'Longitude of the place',
    }
  },
  resolve(source, args) {
    return new Promise((resolve, reject) => {
      const place = new Place({
        name: args.name,
        description: args.description,
        location: {
          lat: args.latitude,
          lon: args.longitude,
        },
      });
      place.save((err) => {
        if (err) {
          reject(err);
        }
        resolve(place);
      })
    });
  }
}


Like the queries, we define args with the values we require to create a new Place: name and description as strings, and latitude and longitude as floats; all fields are required.

In the resolve function, we return a promise. Inside the promise we create the place with the values from args, then call save on the place object. In case of an error saving the place, we reject the promise with the error; otherwise we resolve the promise with the place data.

updatePlace

The updatePlace mutation is similar to createPlace. The differences are that all the values in args are optional except the id field, which is a required string; and in the resolve function, we first look up the object by its id, then update the place with the fields provided by the user, and finally save it and return the updated object.


updatePlace: {
  type: PlaceSchema,
  description: 'Update a place',
  args: {
    id: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'Id of the place',
    },
    name: {
      type: GraphQLString,
      description: 'Name of the place',
    },
    description: {
      type: GraphQLString,
      description: 'Description of the place',
    },
    latitude: {
      type: GraphQLFloat,
      description: 'Latitude of the place',
    },
    longitude: {
      type: GraphQLFloat,
      description: 'Longitude of the place',
    }
  },
  resolve(source, args) {
    return new Promise((resolve, reject) => {
      Place.getById(args.id, (err, place) => {
        if (err) {
          reject(err);
        } else {
          if (args.name) {
            place.name = args.name;
          }
          if (args.description) {
            place.description = args.description;
          }
          if (args.latitude) {
            place.location.lat = args.latitude;
          }
          if (args.longitude) {
            place.location.lon = args.longitude;
          }
          place.save((err) => {
            if (err) {
              reject(err);
            }
            resolve(place);
          });
        }
      })
    });
  }
}

deletePlace

The last mutation is the delete. Here we define the type as the Place schema, because we are going to return the object we delete.

In args, we only need to define the id of the place to delete.

In the resolve function, we return a promise that searches for the place by its id and then performs the remove. We reject the promise if the place is not found or if there is an error while removing it; otherwise we resolve the promise with the place data once it has been removed successfully.


deletePlace: {
  type: PlaceSchema,
  description: 'Delete a place',
  args: {
    id: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'Id of the place',
    },
  },
  resolve(source, args) {
    return new Promise((resolve, reject) => {
      Place.getById(args.id, (err, place) => {
        if (err) {
          reject(err);
        } else {
          place.remove((err) => {
            if (err) {
              reject(err);
            }
            resolve(place);
          });
        }
      })
    });
  }
}
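The queries and mutations above still need to be assembled into the root schema that the server imports; a minimal sketch, where the module names are assumptions:

const { GraphQLSchema, GraphQLObjectType } = require('graphql');

// Assumed modules exporting the fields defined above
// (allPlaces, Places, Place / createPlace, updatePlace, deletePlace)
const queryFields = require('./queries');
const mutationFields = require('./mutations');

module.exports = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: queryFields,
  }),
  mutation: new GraphQLObjectType({
    name: 'Mutation',
    fields: mutationFields,
  }),
});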

Test

To test our app, we are going to use GraphiQL, which we enabled on our server. To open it, visit http://localhost:5000/graphql.

graphiql

On this page, we can perform the queries and mutations that we defined previously.

Create


mutation {
  createPlace(
    name: "testplace"
    description: "testdescription"
    latitude: 1.36
    longitude: 18.36
  ) {
    id
  }
}

mutation create

Update


mutation {
  updatePlace(
    id: "41133f98-18e8-4979-89e0-7af012b0e14f"
    name: "updateplace"
    description: "updatedescription"
    latitude: 2.36
    longitude: 15.96
  ) {
    id
    name
    description
  }
}

mutation update

Delete


mutation {
  deletePlace(id: "41133f98-18e8-4979-89e0-7af012b0e14f") {
    id
  }
}


mutation delete

Query All


query {
  allPlaces {
    id
    name
    location {
      lat
      lon
    }
  }
}

query all

Query by boundary box


query {
  Places(
    minLon: 3
    maxLon: 5
    minLat: 49
    maxLat: 51
  ) {
    name
    location {
      lat
      lon
    }
  }
}


query bbox

Query a place by id


query {
  Place(id: "41133f98-18e8-4979-89e0-7af012b0e14f") {
    id
    name
    description
  }
}


query id

Conclusion

GraphQL is a good query language that allows us to query only for the information we specify, so we can avoid underfetching or overfetching, and we can be sure that we always have the data we need.

In a GraphQL server, clients use one single endpoint, which hides the complexity and logic of the backend. The server can connect to different backends or use different databases, and if those change, the client logic does not have to change because the endpoint stays the same.

We have also seen how to perform geographical queries on our data with Couchbase.


The post Graphql server with node and couchbase, ottoman and spatial view appeared first on The Couchbase Blog.

Categories: Architecture, Database

Flattening and Querying NoSQL Array Data with Couchbase N1QL

NorthScale Blog - Tue, 02/28/2017 - 15:46

I was browsing the Couchbase forums and came across a question regarding queries against array data in Couchbase. Coming from a relational database, I too once struggled to grasp the concept of querying complex JSON documents with SQL.

How do you query within these embedded NoSQL documents?  There are numerous ways, none of which are particularly difficult.  We’re going to examine some of the complex query possibilities.

In case you’re curious, the question I stumbled upon can be found here. The user wanted to know how to query for objects nested in an array within a single document. The proposed document model looked similar to this:

{
  "id": "order-1",
  "type": "order",
  "items": [
    {
      "id": "pokemon-blue",
      "type": "gaming",
      "name": "Pokemon Blue"
    },
    {
      "id": "ms-surface-book",
      "type": "computing",
      "name": "Microsoft Surface Book"
    }
  ]
}

The end goal was to be able to get each object in a query based on a WHERE condition that included the nested type property.

One way to do this is to write a N1QL query that looks like the following:

SELECT 
    forum.id, forum.type, item
FROM forum
UNNEST items AS item
WHERE item.type != "computing";

In the above query we are performing a SELECT from a Couchbase Bucket called forum and flattening the array using the UNNEST keyword.  The flattened result set would look like the following before applying the WHERE condition:

[
  {
    "id": "order-1",
    "item": {
      "id": "pokemon-blue",
      "name": "Pokemon Blue",
      "type": "gaming"
    },
    "type": "order"
  },
  {
    "id": "order-1",
    "item": {
      "id": "ms-surface-book",
      "name": "Microsoft Surface Book",
      "type": "computing"
    },
    "type": "order"
  }
]

The WHERE condition returns a single result instead of two; per our query, that result is of the gaming type.

So is this the only way to accomplish what we’ve just done?  Absolutely not!

Take the following N1QL query in Couchbase:

SELECT 
    forum.id, 
    forum.type, 
    ARRAY item FOR item IN forum.items WHEN item.type != 'computing' END AS item
FROM forum

In the above query we are not first flattening the array through an UNNEST operation.  Instead we are using one of the collection operators to find array items that meet our criteria.
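With this form each order document comes back intact, with the items array filtered in place; against the sample document the result would look roughly like this:

[
  {
    "id": "order-1",
    "type": "order",
    "item": [
      {
        "id": "pokemon-blue",
        "name": "Pokemon Blue",
        "type": "gaming"
      }
    ]
  }
]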

Are there other ways to get the job done?  Of course there are, but these two should be enough to get you started when it comes to querying arrays in Couchbase with N1QL.

If you need more help with N1QL, check out the Couchbase Developer Portal for other examples.

The post Flattening and Querying NoSQL Array Data with Couchbase N1QL appeared first on The Couchbase Blog.

Categories: Architecture, Database

CAP Theorem and Chaos at CodeMash Conference – January 2017 (Videos)

NorthScale Blog - Mon, 02/27/2017 - 19:00

I was again honored to be selected as a speaker at the great CodeMash conference in Ohio.

I took part in three speaking events:

Are You Ready for Chaos? Horizontal Scaling in a Briefcase

This session was partly a demonstration of the CouchCase project that I worked on in the summer, but also a demonstration of the CAP Theorem and Couchbase in action. The CouchCase did not cooperate, but I had a backup plan of using Docker, which went okay. I give a very high-level overview in the video, but for more detail on Couchbase’s architecture, check out the Couchbase Developer Portal.

Have Your Best Season Yet: Becoming a (Microsoft) MVP

I was asked by Microsoft’s Lisa Anderson to be on a panel of Microsoft MVPs to answer questions about the Microsoft MVP program. I didn’t talk about Couchbase or technology very much, and I don’t have a video for you. But, if you are a Microsoft developer who is helping to build a developer community, you should definitely look into the MVP program.

Lego Bucket Races

This was my first time presenting at the “KidzMash” event for children that runs in parallel to CodeMash. I asked the kids to put different colored blocks into different buckets according to rules as fast as they can. The idea was to demonstrate how a database works, how an indexed database works, and how performance testing works.

I really had no idea what to expect, as I’ve never presented anything technical to kids before. There was a wide range of ages, so the races didn’t always go to plan. I won’t embed the video here, since most of it is kids yelling. Think of it as an “explain it to me like I’m five” explanation of database indexing.

Summary

CodeMash is my favorite conference. I always have a great time and learn a lot. I highly recommend that you check it out!

If you enjoyed the CouchCase video, please let me know by leaving a comment below. If you have any questions about Couchbase, you can always ask me on Twitter @mgroves, or ask a question on the Couchbase Forums.

The post CAP Theorem and Chaos at CodeMash Conference – January 2017 (Videos) appeared first on The Couchbase Blog.

Categories: Architecture, Database

Couchbase Meets .Net Core and Docker

NorthScale Blog - Mon, 02/27/2017 - 16:55

Brant Burnett is a Couchbase Expert, systems architect, and .NET developer experienced in full-stack desktop and web development.  For the last 12 years, he has been working with CenterEdge Software, a family entertainment software company based in Roxboro, NC.  Brant is experienced in developing applications for all segments of their software suite.  Over the last 4 years, he has worked to transition the company’s cloud infrastructure from a Microsoft SQL platform to a pure Couchbase NoSQL platform.  Through his work at CenterEdge, Brant has been able to focus on creating serious software solutions for fun businesses.

With the release of the Couchbase .NET SDK 2.4.0, Couchbase now has official support for .NET Core. This opens up a wide new world for .NET Couchbase developers. In particular, we can now use Docker to easily manage our applications and improve our deployment process, something previously reserved for the likes of Java and Node.js.

At CenterEdge Software, we’re quickly moving to break our ASP.NET monolithic applications into Docker-based ASP.NET Core microservices. We’re very excited about the new possibilities that it provides, and the improvements to our application’s robustness and ease of deployments.  Hopefully, this overview of the approaches we’re using to make this transition will help others follow suit.

Configuration and Environments

In most ASP.NET Core applications, configuration is based on settings read from the appsettings.json file in the root of your project. These settings are then overridden by environment-specific settings (such as appsettings.Development.json). These settings can then be overridden in turn by environment variables present when the application is started.
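
In an ASP.NET Core 1.x project this layering is typically wired up in the Startup constructor. Here is a minimal sketch of what that might look like (later sources override earlier ones):

public Startup(IHostingEnvironment env)
{
    // Base settings first, then environment-specific overrides,
    // then environment variables, which win over both files.
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }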

At CenterEdge, we’ve defined the .NET Core environments to mean specific things relative to our real-world environments. Note that you can also add your own environment names; you don’t need to use the defaults, but the defaults worked for us.

  • Development – Local machine development using Visual Studio. The configuration points to Couchbase Server on the local machine, etc.
  • Staging – In-cloud testing environments
  • Production – Both the pre-production environment (for final tests before deployment) and the final production environment. These environments are generally the same as Staging but with lighter logging by default.

So our base appsettings.json usually looks something like this:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information",
      "Couchbase": "Debug"
    }
  },
  "Couchbase": {
    "Buckets": [
      {
        "Name": "my-bucket"
      }
    ]
  }
}

The above configuration uses localhost for Couchbase Server by default, since we don’t have any server URLs specified. Next we’ll create appsettings.Staging.json and/or appsettings.Production.json like this:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Couchbase": "Information"
    }
  },
  "CouchbaseServiceDiscovery": "_couchbase._tcp.services.local"
}

This reduces our log levels to something more reasonable, and also has a setting for service discovery (discussed later).

Dependency Injection

ASP.NET Core uses a lot of techniques that are different from the traditional ASP.NET model, which means integrating Couchbase into .NET Core applications is a bit different. In particular, ASP.NET Core is built from the ground up to work with dependency injection.

To support this, we use the Couchbase.Extensions.DependencyInjection package to bridge the gap between Couchbase SDK bucket objects and the dependency injection system. Couchbase is registered during ConfigureServices in the Startup class, passing the configuration section from above. We also add some shutdown code to close connections when the web application is exiting.

public void ConfigureServices(IServiceCollection services)
{
    // Register Couchbase with the configuration section
    services
        .AddCouchbase(Configuration.GetSection("Couchbase"))
        .AddCouchbaseBucket<IMyBucketProvider>("my-bucket");

    if (!Environment.IsDevelopment())
    {
        services.AddCouchbaseDnsDiscovery(Configuration["CouchbaseServiceDiscovery"]);
    }

    services.AddMvc();
    // Register other services here
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env,
    ILoggerFactory loggerFactory, IApplicationLifetime applicationLifetime)
{
    // ...
    // Not showing standard application startup here
    // ...

    // When the application is stopped, gracefully shut down Couchbase connections
    applicationLifetime.ApplicationStopped.Register(() =>
    {
        app.ApplicationServices.GetRequiredService<ICouchbaseLifetimeService>().Close();
    });
}

You can access any bucket in any controller by injecting IBucketProvider via the constructor. However, you may note that the above example also makes a call to AddCouchbaseBucket<IMyBucketProvider>("my-bucket").

This method allows you to register an empty interface inherited from INamedBucketProvider:

public interface IMyBucketProvider : INamedBucketProvider
{
}

And then inject it into a controller or business logic service. It will always provide the same bucket, based on the configuration you provided during ConfigureServices.

public class HomeController : Controller
{
    private readonly IMyBucketProvider _bucketProvider;

    public HomeController(IMyBucketProvider bucketProvider)
    {
        _bucketProvider = bucketProvider;
    }

    public async Task<IActionResult> Index()
    {
        var bucket = _bucketProvider.GetBucket();

        // Bucket names containing hyphens must be escaped with backticks in N1QL
        var result =
            await bucket.QueryAsync<Model>(
                "SELECT Extent.* FROM `my-bucket` AS Extent");

        if (!result.Success)
        {
            throw new Exception("Couchbase Error", result.Exception);
        }

        return View(result.Rows);
    }
}

Service Discovery

When working with microservices, service discovery is a common problem. Each environment that you run will tend to have different services at different endpoints. Couchbase is one such service, which may exist at a different address in each environment. There are many solutions for service discovery, but at CenterEdge we decided to stick with a simple solution for now, DNS SRV records.

To support this, we use the Couchbase.Extensions.DnsDiscovery package, which finds DNS SRV records listing the nodes in the cluster. We create a private DNS domain in AWS Route 53 named “services.local”, and create a SRV recordset named “_couchbase._tcp.services.local” that has the list of Couchbase nodes. The Route 53 recordset looks something like this:

10 10 8091 couchbasedata1.int.dev.centeredgeonline.com
10 10 8091 couchbasedata2.int.dev.centeredgeonline.com
10 10 8091 couchbasedata3.int.dev.centeredgeonline.com
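
You can sanity-check the record with a standard DNS lookup tool before wiring it into the application, assuming the private zone is resolvable from your host:

dig SRV _couchbase._tcp.services.local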

In the above example for ConfigureServices in startup, you may have noticed the following section:

if (!Environment.IsDevelopment())
{
    services.AddCouchbaseDnsDiscovery(Configuration["CouchbaseServiceDiscovery"]);
}

This will replace any servers passed via configuration with the servers found by looking up the DNS SRV record. We also provide the DNS name via configuration, making it easy to override if necessary. We specifically don’t use this extension in our Development environment, where we’re using localhost to access the Couchbase cluster.

That’s Cool, What About Docker?

So far, everything we’ve done is applicable to ASP.NET Core in general, and is not necessarily specific to Docker. So how do we move from a general application to one that runs in a Docker container?

First, there a few preparatory steps you’ll need to complete on your development machine:

  1. Ensure that you have Hyper-V enabled in Windows
  2. Install Docker for Windows
  3. Configure a Shared Drive in Docker for the drive where your application lives
  4. Install Visual Studio Tools for Docker
  5. Ensure that Docker is started (you can configure Docker to autostart on login)

Now, you’re ready to go. Just right click on your project in Visual Studio, and go to Add > Docker Support. This adds the necessary files to your project.

(Screenshot: Add Docker Support)

While several files are added, there are some files that are particularly important. The first file I’d like to point out is Dockerfile:

FROM microsoft/aspnetcore:1.0.1
ENTRYPOINT ["dotnet", "TestApp.dll"]
ARG source=.
WORKDIR /app
EXPOSE 80
COPY $source .

There are two key lines in this file that you might need to modify:

FROM microsoft/aspnetcore:1.0.1

You must change this line if you’re using a different version of .NET Core, such as 1.0.3 or 1.1.0. The version tag on this line should match the version of .NET Core used in your project.json file.

ENTRYPOINT ["dotnet", "TestApp.dll"]

If you rename your project, it will output a different DLL filename. Change this line to reference the correct DLL filename.

The next file is docker-compose.yml. This file, along with some related files, controls the nature of the Docker containers started when you click Run. We’ll need to make a change in docker-compose.yml to get the Couchbase Server connection working.

Our Development-environment configuration uses “localhost” to reach Couchbase Server. This approach works fine if the application is running in IIS Express. However, inside a Docker container “localhost” no longer points to your development computer.  Instead it refers to the isolated Docker container, much like it would within a virtual machine.

To fix this, we need to add an environment section to docker-compose.yml to use your computer’s name instead of “localhost”:

version: '2'

services:
  testapp:
    image: user/testapp${TAG}
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80"
    environment:
      - Couchbase:Servers:0=http://$COMPUTERNAME:8091/

Just add the last two lines above to your file. Docker Compose will automatically substitute $COMPUTERNAME with the name of your computer, which is helpful when sharing the application with your team via source control.

Now you’re ready to test in Docker. Just change the Run drop down in your Visual Studio toolbar to Docker instead of IIS Express before you start your app. It even supports debugging and shows logs in the Debug window.

If you want to get really fancy, you can also tweak docker-compose.yml to do things like launch additional required containers, override other settings via environment variables, and more. For example, at CenterEdge we use this approach to launch additional microservices that are dependencies of the application being developed.

Deployment

Your exact deployment approach will vary depending on your Docker platform. For example, CenterEdge uses Amazon AWS, so we’ll deploy using EC2 Container Service. Regardless of your platform of choice, you’ll need to make a Docker image from your application and publish it to a Docker container registry.

At CenterEdge we’ve added this to our continuous integration process, but here’s a summary of the steps involved:

  1. Run "dotnet publish path/to/your/app -c Release" to publish your application. This will publish to "bin/Release/netcoreapp1.0/publish" by default, but this can be controlled with the "-o some/path" parameter. For .NET Core 1.1, it will be netcoreapp1.1 instead of netcoreapp1.0 by default.
  2. Run "docker build -t myappname path/to/your/app/bin/Release/netcoreapp1.0/publish" to build a Docker image. It will be tagged as "myappname".
  3. Run "docker tag myappname yourdockerregistry/myappname:sometag" to tag the Docker image for your Docker registry. Substitute "yourdockerregistry" with the path to your Docker registry. For Docker Hub, this is just your username. Substitute "sometag" with the tag you want to use, such as "latest" or "1.0.5".
  4. Run "docker push yourdockerregistry/myappname:sometag" to push the image to your Docker container registry. This assumes that you've already used "docker login" to authenticate with your registry.
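
Putting those four steps together, a minimal CI script might look like the following; the application path, image name, registry, and tag are placeholders to substitute for your own project:

#!/bin/sh
# Publish the app, build an image from the publish output,
# then tag and push it to the container registry.
dotnet publish path/to/your/app -c Release
docker build -t myappname path/to/your/app/bin/Release/netcoreapp1.0/publish
docker tag myappname yourdockerregistry/myappname:sometag
docker push yourdockerregistry/myappname:sometag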

Regarding versioning, at CenterEdge we use NuGet-style version numbering for our microservices. For example, “1.1.0” or “2.0.5-beta002”. This version number is the tag we use in our Docker container registry. We also follow SemVer, meaning that increments to different parts of the number have specific meanings. If we increment the first digit, it means the API has breaking changes and is not fully backwards compatible. Incrementing the second digit indicates significant new features. The third digit is incremented for bug fixes.

Conclusion

Hopefully, you now have the basic tools you’ll need to transition your .NET applications using Couchbase to .NET Core and Docker. We’ve found the transition to be fun and exciting.  While ASP.NET Core has changed some approaches and other things have been deprecated, the overall platform feels much cleaner and easier to use. And I’m sure even more great things are coming in the future.

The post Couchbase Meets .Net Core and Docker appeared first on The Couchbase Blog.

Categories: Architecture, Database

Joining NoSQL Documents with the MongoDB Query Language vs Couchbase N1QL

NorthScale Blog - Mon, 02/27/2017 - 16:19

One of the most frequent questions I receive when it comes to NoSQL is on the subject of joining data from multiple documents into a single query result. While this question is brought up more frequently by RDBMS developers, I also receive it from NoSQL developers.

When it comes to joining data, every database does it differently; some require it to be done through the application layer rather than the database layer. We’re going to explore some data joining options between database technologies.

MongoDB is a popular NoSQL technology, so we’ll be seeing how much easier it is to join documents in Couchbase by comparison.

The Sample Data

For this example, we’ll be basing both MongoDB and Couchbase off two sample documents. Assume we’re working with a classic order and inventory example. For inventory, our documents might look something like this:

{
    "id": "product-1",
    "type": "product",
    "name": "Pokemon Red",
    "price": 29.99
}

While flat, the above document can properly explain one particular product. It has a unique id which will be involved during the join process. For orders, we might have a document that looks like the following:

{
    "id": "order-1",
    "type": "order",
    "products": [
        {
            "product_id": "product-1",
            "quantity": 2
        }
    ]
}

The goal here will be to join these two documents in a single query using both MongoDB and Couchbase. However, query language aside, these documents can always be joined via the application layer through multiple queries. This is not the result we’re after though.

Joining Documents with MongoDB and the $lookup Operator

In recent versions of MongoDB there is a $lookup operator that is part of the aggregation queries. The MongoDB documentation describes it as follows:

Performs a left outer join to an unsharded collection in the same database to filter in documents from the “joined” collection for processing. The $lookup stage does an equality match between a field from the input documents with a field from the documents of the “joined” collection.

To use the $lookup operator, you’d have something like this:

db.collection.aggregate([
    {
       $lookup:
         {
           from: <collection to join>,
           localField: <field from the input documents>,
           foreignField: <field from the documents of the "from" collection>,
           as: <output array field>
         }
    }
])

Now this is great, but it doesn’t work on relationships found in arrays. This means that the $lookup operation cannot join the product_id found in the products array to another document. Instead, the array must be “unwound” or “unnested” first, which adds extra complexity to our query:

db.orders.aggregate([
    { $unwind: "$products" },
    {
        $lookup: {
            from: "products",
            localField: "products.product_id",
            foreignField: "_id",
            as: "productObjects"
        }
    }
])

The $unwind operator will flatten the array and then do a join on the now flat objects that were produced. The result of such a query would look like this:

{
    "_id" : ObjectId("58a3869acbf64c4ace55e713"),
    "products" : {
        "product_id" : ObjectId("58a3851b2f14a900caa7a731"),
        "quantity" : 2
    },
    "productObjects" : [
        {
            "_id" : ObjectId("58a3851b2f14a900caa7a731"),
            "name" : "Pokemon Red",
            "price" : 29.99
        }
    ]
}

Had there been more than one reference in the array, more results would have been returned. However, what is returned isn’t very attractive. We still have the old products object and now a productObjects array. Further manipulation of the data stream needs to happen.

The productObjects array should be “unwound” and then reconstructed into the shape we want. This can be accomplished by doing the following:

db.orders.aggregate([
    { $unwind: "$products" },
    {
        $lookup: {
            from: "products",
            localField: "products.product_id",
            foreignField: "_id",
            as: "productObjects"
        }
    },
    { $unwind: "$productObjects"},
    {
        $project: {
            products: {
                "quantity": "$products.quantity",
                "name": "$productObjects.name",
                "price": "$productObjects.price"
            }
        }
    }
])

Notice that the aggregate query is now getting more complex. After doing the join, the result is “unwound” and then the result is reconstructed using the $project operator.

At this point further manipulations to the result can be made such as grouping the results so that the products objects become a single array again. Each manipulation to the data set requires more aggregation code which can easily become messy, complicated, and difficult to read.
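
For example, re-grouping the flattened results back into a single array would mean appending yet another stage to the pipeline, something like this sketch:

{ $group: {
    _id: "$_id",
    products: { $push: "$products" }
} }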

This is where Couchbase N1QL becomes so much more pleasant to work with.

Using Couchbase and N1QL to Join NoSQL Documents

Let’s use the same document example that we used for MongoDB. This time we’re going to write SQL queries with N1QL to get the job done.

The first thing that comes to mind might be to use a JOIN in SQL. Our query might look something like this:

SELECT orders.*, product
FROM example AS orders
JOIN example AS product ON KEYS orders.products[*].product_id
WHERE orders.type = 'order'

In the above example, both the documents exist in the same Couchbase Bucket. A JOIN against document ids happens based on the product_id values found in the products array. The above query would yield results that look like this:

[
  {
    "id": "order-1",
    "product": {
      "id": "product-1",
      "name": "Pokemon Red",
      "price": 29.99,
      "type": "product"
    },
    "products": [
      {
        "product_id": "product-1",
        "quantity": 2
      }
    ],
    "type": "order"
  }
]

Like with MongoDB, there will be a result for every item of the products array that matches. In fairness, the N1QL version was easier to write, but at this point the MongoDB Query Language wasn’t much more difficult. As we manipulate the data more, though, Couchbase becomes a lot easier by comparison.

For example, let’s say we wanted to clean up the results:

SELECT orders.id, orders.type, OBJECT_PUT(product, "quantity", products.quantity) AS product
FROM example AS orders
UNNEST orders.products AS products
JOIN example AS product ON KEYS products.product_id
WHERE orders.type = 'order'

There are some major differences in what we’re doing in the above, but minor differences in how we’re doing them. Instead of joining directly on the array, we are first flattening or “unnesting” the array, like what we saw in the MongoDB $unwind operator. The join is now happening on each of the flattened results. Finally, the quantity from the original object is added to the new object.

The result to the above query would look something like this:

[
  {
    "id": "order-1",
    "product": {
      "id": "product-1",
      "name": "Pokemon Red",
      "price": 29.99,
      "quantity": 2,
      "type": "product"
    },
    "type": "order"
  }
]

Let’s say that the original products array had more than one product reference in it. Instead of returning several objects based on the JOIN criteria we saw above, it might make sense to re-pack that original array.

SELECT orders.id, orders.type, ARRAY_AGG(OBJECT_PUT(product, "quantity", products.quantity)) AS products
FROM example AS orders
UNNEST orders.products AS products
JOIN example AS product ON KEYS products.product_id
WHERE orders.type = 'order'
GROUP BY orders

In the above query we’ve only added ARRAY_AGG and a GROUP BY, but as a result, each joined document is packed into the products array in place of the bare product_id references.
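
The result would look something like this:

[
  {
    "id": "order-1",
    "products": [
      {
        "id": "product-1",
        "name": "Pokemon Red",
        "price": 29.99,
        "quantity": 2,
        "type": "product"
      }
    ],
    "type": "order"
  }
]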

Don’t want to use an actual JOIN operator? Try using a SQL subquery instead.
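
For example, a correlated subquery with USE KEYS can produce a similar shape without a JOIN. Here is a sketch; note that it omits the quantity merge shown above for brevity:

SELECT orders.id, orders.type,
    (
        SELECT p.name, p.price
        FROM example AS p
        USE KEYS orders.products[*].product_id
    ) AS products
FROM example AS orders
WHERE orders.type = 'order'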

Conclusion

Joining data in NoSQL is a very popular concern for developers who are seasoned RDBMS veterans. Because MongoDB is a very popular NoSQL technology, I thought it would be good to use as a comparison to how Couchbase handles document joining. For light operations, MongoDB’s $lookup operator is tolerable, but as queries become more complex, you may need to take a step back. With N1QL, writing complex queries that include joining operations becomes very easy and stays easy regardless of how complex the query is.

For more information on N1QL and Couchbase, visit the Couchbase Developer Portal.

The post Joining NoSQL Documents with the MongoDB Query Language vs Couchbase N1QL appeared first on The Couchbase Blog.

Categories: Architecture, Database

Oracle Expands Oracle Cloud at Customer Portfolio to Database Workloads with Oracle Exadata Cloud Machine

Oracle Database News - Mon, 02/27/2017 - 14:00
Press Release Oracle Expands Oracle Cloud at Customer Portfolio to Database Workloads with Oracle Exadata Cloud Machine Organizations equipped to reap the benefits of the Oracle Cloud Platform in their own datacenter

Redwood Shores, Calif.—Feb 27, 2017

Continuing to help organizations simplify cloud adoption by bringing the benefits of the cloud inside their own datacenters, Oracle today announced the expansion of the Oracle Cloud at Customer portfolio with the availability of Oracle Exadata Cloud Machine.  With today’s news, Oracle is offering organizations the ultimate in choice and flexibility in where they deploy the world’s most advanced database cloud for mission-critical workloads. Organizations can now deploy Oracle Exadata in a number of ways, including as a cloud service inside their own datacenter, in the Oracle Cloud, and in a traditional on-premises environment.

Since its introduction just over a year ago, Oracle Cloud at Customer has seen tremendous popularity as organizations look for ways to bridge the gap between the public cloud and on-premises in their journey to the cloud.  While organizations look forward to moving their enterprise workloads to the public cloud, many have been constrained by business, legislative, and regulatory requirements that have prevented them from moving their data and applications outside their own datacenter. Oracle Exadata Cloud Machine delivers the full power of the Oracle Exadata Cloud Service that resides in Oracle’s public cloud to customers who require or prefer their databases to be located on-premises.

“Oracle Exadata Cloud Machine is an ideal platform for organizations that want the benefits of the cloud brought to their datacenter,” said Juan Loaiza, senior vice president of systems technologies, Oracle. “For many years, Oracle Exadata has been the platform of choice for running mission critical Oracle databases at thousands of customers, and the Oracle Exadata Cloud Machine extends this value proposition to those customers who want cloud benefits but cannot or aren’t yet ready to move to a public cloud.”

With Oracle Exadata Cloud Machine, customers have subscription access to the most powerful Oracle Database with all options and features, like Real Application Clusters, Database In-Memory, Active Data Guard and Advanced Security, offering extremely high levels of performance, availability and security features for mission-critical workloads.  Additionally, the Oracle Exadata Cloud Machine is 100 percent compatible with on-premises and Oracle Cloud applications and databases, ensuring any existing application can be quickly migrated to the cloud without changes. 

The Oracle Exadata Cloud Service and Oracle Exadata Cloud Machine provide leading functionality, including:

  • Mission-critical database for OLTP, analytics, mixed workloads, and consolidation—all options included
  • Highly proven database hardware platform with NVMe Flash, InfiniBand networking, and the fastest servers
  • Intelligent database platform with Smart Database Algorithms in storage, networking, and compute
  • Advanced database cloud platform with subscription based pricing and real-time online capacity bursting
  • Flexible cloud that can be deployed in Oracle's public cloud or inside the customer's data center with Oracle managing all infrastructure
  • Simple and straightforward migration to the cloud—software and hardware are identical and 100 percent compatible

“Every IT organization is making plans to move to the public cloud, and Oracle customers are no different,” said Carl Olofson, Research Vice President for structured data management software at IDC. “The Oracle Cloud at Customer program provides a means of transitioning to the cloud by starting right in the datacenter, thereby maintaining direct interaction with the applications that remain on the premises. The Oracle Exadata Cloud Machine extends that capability with all the features of Exadata, managed remotely by the Oracle Cloud team. It is a great first step toward eventual cloud deployment.” 

The Oracle Cloud at Customer portfolio of services enables organizations to get all of the benefits of Oracle’s public cloud services in their datacenter. The business model is just like a public cloud subscription; the hardware and software is the same; Oracle experts monitor and manage the infrastructure; and the same tools used in Oracle’s public cloud are used to provision resources on the Cloud Machine.  This is the only offering from a major public cloud vendor that delivers a stack that is 100 percent compatible with the Oracle Cloud but available on-premises. Since the software is seamless with the Oracle Cloud, customers can use it for a number of use cases, including disaster recovery, elastic bursting, dev/test, lift-and-shift workload migration, and a single API and scripting toolkit for DevOps. Additionally, as a fully managed Oracle offering, customers get the same experience and the latest innovations and benefits using it in their datacenter as in the Oracle Cloud.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data.  The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info

Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Follow Oracle Corporate

Categories: Database, Vendor

Oracle Industry Connect 2017 Convenes Community of Business Leaders to Share Insight, Expertise and Industry-specific Best Practices

Oracle Database News - Mon, 02/27/2017 - 13:59
Press Release Oracle Industry Connect 2017 Convenes Community of Business Leaders to Share Insight, Expertise and Industry-specific Best Practices Veteran journalist Tom Brokaw, Oracle Chief Executive Officer Mark Hurd and Oracle Executive Vice President Bob Weiler to Headline

Redwood Shores, Calif.—Feb 27, 2017

Oracle is hosting its fourth-annual Oracle Industry Connect, an exclusive, executive-level event by key industry experts for industry experts, to share strategies for innovation and organizational transformation and success. The event takes place March 20-22 in Orlando, Fla.

The conference features distinguished subject matter experts and keynotes from Oracle CEO Mark Hurd; Bob Weiler, executive vice president of Oracle’s Global Business Units; and Tom Brokaw, NBC News correspondent and New York Times best-selling author.

Brokaw, who was awarded the Presidential Medal of Freedom by President Barack Obama, draws on his rich career in network news covering elections, summits, war, political turmoil and other major news events around the world. The legendary newsman entertains and enlightens audiences with his experiences and observations.

The keynotes will be followed by seven industry-specific tracks with business leaders in communications, construction and engineering, energy and utilities, financial services and insurance, hospitality, life sciences and healthcare and retail. Featured speakers include:

  • Robert Hackl, Senior Vice President, Leasing, Sprint
  • Dr. Philip Tetlock, Ph.D, Annenberg University Professor, University of Pennsylvania
  • Kimberly Greene, Executive Vice President and Chief Operating Officer, Southern Company Services, Inc.
  • Lisa Davis, Global Managing Director, Treasury and Trade Solutions, Citi
  • Steven Marks, Founder, Guzman y Gomez
  • Robert B. Darnell, MD, Ph.D, Heilbrunn Professor and Senior Physician at The Rockefeller Center, Investigator at Howard Hughes Medical Institute, Founding Director at NY Genome Center
  • Jeff Wollen, CIO, Wiggle

“Every day, Oracle provides companies the most sophisticated applications in order to transform their businesses,” Oracle CEO Mark Hurd said. “From planning retailers' merchandise, to running wireless communications networks, to serving as the backbone of our power grids, no technology company can provide the range of industry-specific specialization that Oracle can and no other company can provide it in the cloud today.”

“As the cloud transforms the way industries operate, our team works tirelessly to get our customers where they need to go next,” said Bob Weiler, executive vice president of Oracle’s Global Business Units. “Oracle Industry Connect provides a community of distinguished industry innovators to share ideas and collaborate on the path and solutions for their success.”

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2017, go to oracle.com/oracleindustryconnect.

Contact Info

Katie Barron
Oracle Communications
+1 202-904-1138
katie.barron@oracle.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1 202-904-1138

Follow Oracle Corporate

Categories: Database, Vendor

Is Oracle Enabling Compulsive Tuning Disorder?

Database Journal News - Mon, 02/27/2017 - 09:01

With all of the metrics Oracle provides, it can be easy to want to tune a lot of areas. Read on to see why not every metric needs tuning.

Categories: Database

NoSQL Simplifies Database DevOps

NorthScale Blog - Sat, 02/25/2017 - 07:16

Does your organization want to simplify Database DevOps?
Is your database becoming a bottleneck to innovate rapidly?
Do you want to save millions of $$ in database licensing cost?

Read on!

State of Database DevOps

State of Database DevOps is a survey on DevOps adoption rates among SQL Server professionals. Over 1000 SQL Server professionals responded to the survey. The respondents came from across the globe and represent a wide range of job roles, company sizes and industries.

There are some good findings in the survey results. A few key findings worth highlighting here:

the greatest challenge with integrating database changes into a DevOps process would still be synchronizing application and database changes

Another one …

The greatest drawback identified with traditional siloed database development is the increased risk of failed deployments or downtime when introducing changes. This is closely followed by slow development and release cycles and the inability to respond quickly to changing business requirements

And another one …

Increasing the speed of delivery of database changes and freeing developers up to do more value added work are the key drivers for automating the delivery of database changes

The challenges highlighted here are not specific to SQL Server; they apply to any relational database. You may be using Oracle, Postgres, MySQL, MariaDB or any other relational database and still face these same issues. Why?

Why is Relational not well suited for Database DevOps?

It’s common for an application to operate on data from multiple tables in an RDBMS. For example, placing an order may use Customer, Order and Product tables. Each table has multiple columns with standard data types specific to the database. Tables may have primary, reference and foreign key constraints. Developers building applications on a relational database typically use an Object Relational Mapper (ORM); Java developers, for example, use Hibernate or the Java Persistence API. There are similar ORMs for other languages as well. ORMs capture the underlying complex database structure and allow programmers to build applications naturally using their language.

ORMs also use a persistence provider that allows your application to be independent of the underlying database. This persistence provider creates a binding between the language-specific class and the database structure. For example, it maps a class to one or more tables, binds the language data types to the types defined in the database, and captures the relationships between tables. Theoretically, a programmer can use a different persistence provider to use a different RDBMS for the application. But this is far from a practical experience!

Any database change requires the ORM classes to be updated otherwise the application may not work. For example, adding a new table may mean a new Java class or updating an existing class. Change of a data type in a column requires the class definition to be updated otherwise the application will not even compile. Adding a new column means adding a new field in the class. Any change requires the classes to be updated and the application needs to be repackaged.
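
To make that coupling concrete, here is a minimal, hypothetical JPA entity; any rename, type change, or new column on the underlying table means editing this class and repackaging the application:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "PRODUCT")
public class Product {

    @Id
    private String id;

    // Bound to a specific column name and SQL type; changing either
    // in the database breaks the mapping until this class is updated.
    @Column(name = "PRICE", precision = 10, scale = 2)
    private java.math.BigDecimal price;

    // A new column in the table means adding a new field here
    // and redeploying the application.
}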

Changes in database structure are required all the time to meet the evolving needs of business. But if the DBAs make a database change and the ORM classes are not updated, there is a disconnect. Application deployment needs to be coordinated with updating the database schema. There are tools like Flyway, Liquibase and others that integrate application and database deployment. But developers are often not allowed to make any direct changes to the production database. A disconnect would result in your application not working and the business suffering. DevOps practices can definitely help solve these issues, as they require close collaboration between the developers who are building applications and the DBAs who are updating database scripts.

But as the survey reports, more than 50% of the respondents have not adopted DevOps today.

Database DevOps Adoption

There are challenges even if you were to integrate database changes into a DevOps process.

Database DevOps Challenges

Synchronizing application and database changes, where the ORM classes must be kept in step with the backend database structure, is the biggest challenge. DBAs may want to structure the database in a certain way that may not be optimal for application development. Applying consistency across application and database development is the next major challenge for ensuring seamless database DevOps.

A siloed development process has serious consequences for your ability to rapidly innovate and deliver value to your business.

Database DevOps Drawbacks

As shown in this image, failed deployments when introducing changes, slow development/release cycles and inability to respond to business needs account for over 60% of the drawbacks.

Speed of delivery of database changes is the biggest concern for database DevOps.

Database DevOps Driver

So what do you do?

How does NoSQL simplify Database DevOps?

A NoSQL document database helps simplify database DevOps!

How does NoSQL simplify database DevOps?

  • Schema flexibility – Developers need a single database that can store rapidly changing structured, semi-structured and unstructured data. A NoSQL document database offers schema flexibility by allowing developers to operate directly on JSON data and derive meaning from it (see the example after this list)
  • No impedance mismatch – With no ORM in the application, there is no impedance mismatch between domain classes and database structure. Only the application code needs to be updated, and no coordination is required with schema changes
  • Scalability – One of the drawbacks mentioned in the report is the inability to adapt to changing business requirements. This highlights scalability as a major DevOps challenge. If the volume of data, the number of queries, or the types of indexes required to support the application changes, the database needs to change to accommodate those changes. Not in weeks or months, but today! NoSQL databases run on commodity hardware and have a scale-out architecture, as opposed to the scale-up architecture of an RDBMS. Sharding can help with scalability in an RDBMS, but that’s extra complexity that now needs to be dealt with.
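
As an example of that schema flexibility, two documents in the same bucket can carry different fields without any schema migration; the field names below are purely illustrative:

{ "type": "customer", "name": "Jane", "email": "jane@example.com" }

{ "type": "customer", "name": "Joe", "phones": ["555-0100", "555-0199"], "loyaltyTier": "gold" }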

Learn more about why enterprises move to NoSQL.

Which NoSQL database is preferred by GE, Marriott, Verizon, United, LinkedIn, DIRECTV and many others?

What are some other advantages of Couchbase?

NoSQL is not a panacea by any means. If you are building a system that needs complex transaction logic or real-time data warehousing, then an RDBMS may be a better fit. However, NoSQL addresses your scalability and agility concerns and simplifies database DevOps.

Here is a great video on migrating from a relational database to NoSQL:

Here is another interesting video that shows why Marriott transitioned from Relational to NoSQL:

A lot more videos are available on Couchbase Connect 2016.


The post NoSQL Simplifies Database DevOps appeared first on The Couchbase Blog.

Categories: Architecture, Database

Microsoft Azure and Couchbase Hands on Lab (Detroit)

NorthScale Blog - Fri, 02/24/2017 - 02:17

Microsoft Azure and Couchbase are presenting a free hands-on lab “Lunch & Learn” on using NoSQL with Docker Containers.

  • When: Wednesday, March 8th, 2017 – 11:00am – 2:00pm
  • Where: Microsoft Technology Center
    1000 Town Center, Suite 250, Room MPR3
    Southfield, MI 48075

Sign up today to reserve your seat.

Event details

On Wednesday March 8th, Microsoft and Couchbase are holding a joint Lunch & Learn from 11:00 am to 2:00 pm to introduce you to the fundamentals of today’s quickly maturing NoSQL technology. Specifically, we will show you how easy it is to add Couchbase to an Azure cloud or hybrid cloud environment.

Whether you are new to NoSQL technologies or have had experience with Couchbase, we hope you can join this informative session showcasing how the world’s leading companies are utilizing Couchbase’s NoSQL solutions to power their mission-critical applications on Azure.

During our Lunch & Learn, we’ll discuss:

  • The basics of using Couchbase NoSQL for Azure cloud or hybrid cloud environments
  • Using Containers – Couchbase Server on Azure
  • Why leading organizations are using Azure & Couchbase with their modern web and mobile applications
  • Provisioning VMs in Azure and setting up Couchbase
  • Good general practices for Couchbase Server on Azure

Register Now to reserve your seat, and please share this invitation with your coworkers or anyone else who might be interested. If you have any questions, please leave a comment, email me at matthew.groves@couchbase.com, or contact me on Twitter @mgroves.

You may want to try out Couchbase on Azure before you come to the lab: you can find the latest Couchbase Server 4.6.0 release in the Azure marketplace.

The post Microsoft Azure and Couchbase Hands on Lab (Detroit) appeared first on The Couchbase Blog.

Categories: Architecture, Database

Oracle and Tech Mahindra Deliver Industry’s First VoLTE as a Service Offering

Oracle Database News - Thu, 02/23/2017 - 14:00
Press Release Oracle and Tech Mahindra Deliver Industry’s First VoLTE as a Service Offering Oracle Communications and Tech Mahindra helping drive VoLTE adoption by bringing operators an affordable, powerful VoLTE solution

Redwood Shores, Calif.—Feb 23, 2017

Oracle today announced that Tech Mahindra, a leading system integrator for network infrastructure services, and Oracle Communications have partnered to deliver an end-to-end VoLTE-as-a-Managed-Service solution based on Oracle’s IMS Core and Signaling products. The partnership represents the industry’s first end-to-end VoLTE solution built on best-of-breed technology. The solution offers operators the ability to achieve a faster time to market with new VoLTE services, increased voice quality and greater network efficiency while significantly reducing cost and complexity.

Today’s connected world places considerable demands on traditional communication services and the underlying networks. As service providers grapple with the move to an all-IP future, the resources needed to upgrade networks and services are a significant obstacle. Wireless operators have long recognized the need to adopt VoLTE in order to remain relevant and prepare for interoperability with other networks in the future, but the price and difficulty of this adjustment have been prohibitive.

Tech Mahindra’s VoLTE-as-a-Managed-Service solution, powered by Oracle Communications technology, simplifies the path to an all-IP network by offering a fully virtualized solution that runs on common off the shelf hardware rather than relying on proprietary networking equipment. A typical service provider with an LTE data network can expect to service its first Oracle-enabled VoLTE call within 3-6 months of deploying the solution, often at significant cost savings compared to traditional vendors and in-house solutions.

“The need to drive increased network efficiency and coverage while offering enhanced voice quality necessitates the move to Voice-over-Packet technologies,” said Manish Vyas, CEO Tech Mahindra Network Services. “Leveraging Oracle technology, Tech Mahindra is enabling service providers to adopt VoLTE in a simpler and more cost-effective way, with a powerful end-to-end pre-integrated solution that is virtualized and offers industry leading capabilities at each function.”

 The VoLTE-as-a-Managed-Service solution is built on Oracle products that are used today in service providers around the world. Designed, deployed and operated by Tech Mahindra, it empowers service providers to offer the VoLTE services their customers demand with reduced operational costs and without requiring any internal skillset realignment.

“Oracle Communications is laser-focused on accelerating service providers’ transformation toward the software-centric networks of the future,” said Douglas Suriano, Senior Vice President and General Manager at Oracle Communications. “Tech Mahindra brings valuable experience in managed services, and this partnership will enable us to deliver the industry’s first end-to-end VoLTE solution to service providers globally.”

The Oracle Communications technologies supporting the new VoLTE as a Service offering include Oracle Communications Core Session Manager, Oracle Communications Session Border Controller, Oracle Communications Evolved Communications Application Server, Oracle Communications Policy Management, Oracle Communications Diameter Signaling Router and Oracle Communications Applications Orchestrator. To learn more about these products and other Oracle Communications offerings, visit: http://bit.ly/2kLCqqZ.

Contact Info

Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com

Shalini Singh
Tech Mahindra
+91.965.446.3108
shalini.singh7@techmahindra.com

About Tech Mahindra

Tech Mahindra represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates and the Society to Rise™. We are a USD 4.2 billion company with 117,000+ professionals across 90 countries, helping over 837 global customers including Fortune 500 companies. Our convergent, digital, design experiences, innovation platforms and reusable assets connect across a number of technologies to deliver tangible business value and experiences to our stakeholders. Tech Mahindra is amongst the Fab 50 companies in Asia (Forbes 2016 list).

We are part of the USD 17.8 billion Mahindra Group that employs more than 200,000 people in over 100 countries. The Group operates in the key industries that drive economic growth, enjoying a leadership position in tractors, utility vehicles, after- market, information technology and vacation ownership.

Connect with us on www.techmahindra.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Shalini Singh

  • +91.965.446.3108

Follow Oracle Corporate

Categories: Database, Vendor

BBVA Banks on Oracle to Deliver a Better Mobile Experience to Customers

Oracle Database News - Thu, 02/23/2017 - 14:00
Press Release BBVA Banks on Oracle to Deliver a Better Mobile Experience to Customers Spanish financial services provider chooses Oracle to enable customers to open accounts with mobile devices

Redwood Shores, Calif.—Feb 23, 2017

Differentiating itself from competitors while offering an enhanced experience to customers, Spanish bank BBVA is using Oracle Communications technology to enable customers to open new accounts via their mobile devices in minutes.

The banking industry is under heavy scrutiny to validate and protect customer information.  BBVA has chosen a solution with comprehensive security features to enhance efforts to meet EU compliance requirements for confidential documentation and secure management of personal data, as well as standards for authentication, reporting and monitoring. BBVA chose Oracle Communications WebRTC Session Controller and Quobis Sippo WebRTC Application Controller as the foundation for its new platform because the technology is easily configured and integrates directly with the company’s existing internal systems.

“We live in an age of convenience where people can do everything from their mobile phones, whether it is to open a new account or to pay,” said Ignacio Teulon Ramírez, Digital Transformation - Customer Experience Director, BBVA. “We want to provide our customers with services in the way they prefer to consume them, and we want to provide them the best experience possible.”

Today, BBVA can provide a rich, real-time audio and video experience on a mobile phone or tablet. Jointly delivered by Quobis and in partnership with BT, the solution enables BBVA to validate customers’ identity so customers and prospects can quickly open a new account. The sessions can also be recorded for compliance purposes.

“Digital technologies are giving the financial services industry the opportunity to leap forward and provide products and services that match the digital lifestyle of their customers,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “Our project with BBVA shows how large banks can differentiate themselves by creating a new banking experience. They have a clear vision and an understanding of their customers’ needs, as well as the technology that allows them to innovate while integrating seamlessly with their existing systems.”

Quobis and BT are Gold level members of the Oracle PartnerNetwork (OPN).

Contact Info

Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com

Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Quobis

Quobis is leading the industry of browser-based communication solutions for service providers and enterprises with its award-winning Sippo product family. For more information about Quobis, visit www.quobis.com

About BT

BT is one of the world’s leading providers of communications services and solutions, serving customers in 180 countries. For more information about BT visit http://www.bt.com.

About BBVA

BBVA is a customer-centric global financial services group founded in 1857. The Group is the largest financial institution in Spain and Mexico and it has leading franchises in South America and the Sunbelt Region of the United States; it is also the leading shareholder in Garanti, Turkey’s biggest bank by market capitalization. Its diversified business is focused on high-growth markets and it relies on technology as a key sustainable competitive advantage. Corporate responsibility is at the core of its business model. BBVA fosters financial education and inclusion, and supports scientific research and culture. It operates with the highest integrity, a long-term vision and applies best practices.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.415.856.5145

Follow Oracle Corporate

Categories: Database, Vendor

Getting Started with Azure SQL Data Warehouse - Part 2

Database Journal News - Thu, 02/23/2017 - 09:01

Arshad Ali discusses the architecture of Azure SQL Data Warehouse and how you can scale up or down, based on your need.

Categories: Database