
Microsoft Azure and Couchbase Hands on Lab (Detroit)

NorthScale Blog - Fri, 02/24/2017 - 02:17

Microsoft Azure and Couchbase are presenting a free hands-on lab “Lunch & Learn” on using NoSQL with Docker Containers.

  • When: Wednesday, March 8th, 2017 – 11:00am – 2:00pm
  • Where: Microsoft Technology Center
    1000 Town Center, Suite 250, Room MPR3
    Southfield, MI 48075

Sign up today to reserve your seat.

Event details

On Wednesday, March 8th, Microsoft and Couchbase are holding a joint Lunch & Learn from 11:00 am to 2:00 pm to introduce you to the fundamentals of today’s quickly maturing NoSQL technology. Specifically, we will show you how easy it is to add Couchbase to an Azure Cloud or Hybrid Cloud environment.

Whether you are new to NoSQL technologies or have had experience with Couchbase, we hope you can join this informative session showcasing how the world’s leading companies are utilizing Couchbase’s NoSQL solutions to power their mission-critical applications on Azure.

During our Lunch & Learn, we’ll discuss:

  • The basics of using Couchbase NoSQL for Azure cloud or hybrid cloud environments
  • Using Containers – Couchbase Server on Azure
  • Why leading organizations are using Azure & Couchbase with their modern web and mobile applications
  • Provisioning VMs in Azure and setting up Couchbase
  • Good general practices for Couchbase Server on Azure

Register Now to reserve your seat, and please share this invitation with your coworkers or anyone else who might be interested. If you have any questions, please leave a comment, email me at matthew.groves@couchbase.com, or contact me on Twitter @mgroves.

You may want to try out Couchbase on Azure before you come to the lab: you can find the latest Couchbase Server 4.6.0 release in the Azure marketplace.

The post Microsoft Azure and Couchbase Hands on Lab (Detroit) appeared first on The Couchbase Blog.

Categories: Architecture, Database

Oracle and Tech Mahindra Deliver Industry’s First VoLTE as a Service Offering

Oracle Database News - Thu, 02/23/2017 - 14:00
Press Release

Oracle Communications and Tech Mahindra helping drive VoLTE adoption by bringing operators an affordable, powerful VoLTE solution

Redwood Shores, Calif.—Feb 23, 2017

Oracle today announced that Tech Mahindra, a leading system integrator for network infrastructure services, and Oracle Communications have partnered to deliver an end-to-end VoLTE-as-a-Managed-Service solution based on Oracle’s IMS Core and Signaling products. The partnership represents the industry’s first end-to-end VoLTE solution built on best-of-breed technology. The solution offers operators the ability to achieve a faster time to market with new VoLTE services, increased voice quality, and greater network efficiency while significantly reducing cost and complexity.

Today’s connected world places considerable demands on traditional communication services and the underlying networks. As service providers grapple with the move to an all-IP future, the resources needed to upgrade networks and services are a significant obstacle. Wireless operators have long recognized the need to adopt VoLTE in order to remain relevant and prepare for interoperability with other networks in the future, but the price and difficulty of this adjustment have been prohibitive.

Tech Mahindra’s VoLTE-as-a-Managed-Service solution, powered by Oracle Communications technology, simplifies the path to an all-IP network by offering a fully virtualized solution that runs on common off-the-shelf hardware rather than relying on proprietary networking equipment. A typical service provider with an LTE data network can expect to service its first Oracle-enabled VoLTE call within 3-6 months of deploying the solution, often at significant cost savings compared to traditional vendors and in-house solutions.

“The need to drive increased network efficiency and coverage while offering enhanced voice quality necessitates the move to Voice-over-Packet technologies,” said Manish Vyas, CEO, Tech Mahindra Network Services. “Leveraging Oracle technology, Tech Mahindra is enabling service providers to adopt VoLTE in a simpler and more cost-effective way, with a powerful end-to-end pre-integrated solution that is virtualized and offers industry-leading capabilities at each function.”

The VoLTE-as-a-Managed-Service solution is built on Oracle products that are in use today at service providers around the world. Designed, deployed, and operated by Tech Mahindra, it empowers service providers to offer the VoLTE services their customers demand, with reduced operational costs and without requiring any internal skillset realignment.

“Oracle Communications is laser-focused on accelerating service providers’ transformation toward the software-centric networks of the future,” said Douglas Suriano, Senior Vice President and General Manager at Oracle Communications. “Tech Mahindra brings valuable experience in managed services, and this partnership will enable us to deliver the industry’s first end-to-end VoLTE solution to service providers globally.”

The Oracle Communications technologies supporting the new VoLTE as a Service offering include Oracle Communications Core Session Manager, Oracle Communications Session Border Controller, Oracle Communications Evolved Communications Application Server, Oracle Communications Policy Management, Oracle Communications Diameter Signaling Router and Oracle Communications Applications Orchestrator. To learn more about these products and other Oracle Communications offerings, visit: http://bit.ly/2kLCqqZ.

Contact Info

Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com

Shalini Singh
Tech Mahindra
+91.965.446.3108
shalini.singh7@techmahindra.com

About Tech Mahindra

Tech Mahindra represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates and the Society to Rise™. We are a USD 4.2 billion company with 117,000+ professionals across 90 countries, helping over 837 global customers including Fortune 500 companies. Our convergent, digital, design experiences, innovation platforms and reusable assets connect across a number of technologies to deliver tangible business value and experiences to our stakeholders. Tech Mahindra is amongst the Fab 50 companies in Asia (Forbes 2016 list).

We are part of the USD 17.8 billion Mahindra Group that employs more than 200,000 people in over 100 countries. The Group operates in the key industries that drive economic growth, enjoying a leadership position in tractors, utility vehicles, aftermarket, information technology and vacation ownership.

Connect with us on www.techmahindra.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Shalini Singh

  • +91.965.446.3108


Categories: Database, Vendor

BBVA Banks on Oracle to Deliver a Better Mobile Experience to Customers

Oracle Database News - Thu, 02/23/2017 - 14:00
Press Release

Spanish financial services provider chooses Oracle to enable customers to open accounts with mobile devices

Redwood Shores, Calif.—Feb 23, 2017

Differentiating itself from competitors while offering an enhanced experience to customers, Spanish bank BBVA is using Oracle Communications technology to enable customers to open new accounts via their mobile devices in minutes.

The banking industry is under heavy scrutiny to validate and protect customer information.  BBVA has chosen a solution with comprehensive security features to enhance efforts to meet EU compliance requirements for confidential documentation and secure management of personal data, as well as standards for authentication, reporting and monitoring. BBVA chose Oracle Communications WebRTC Session Controller and Quobis Sippo WebRTC Application Controller as the foundation for its new platform because the technology is easily configured and integrates directly with the company’s existing internal systems.

“We live in an age of convenience where people can do everything from their mobile phones, whether it is to open a new account or to pay,” said Ignacio Teulon Ramírez, Digital Transformation - Customer Experience Director, BBVA. “We want to provide our customers with services in the way they prefer to consume them, and we want to provide them the best experience possible.”

Today, BBVA can provide a rich, real-time audio and video experience on a mobile phone or tablet. Jointly delivered by Quobis and in partnership with BT, the solution enables BBVA to validate customers’ identity so customers and prospects can quickly open a new account. The sessions can also be recorded for compliance purposes.

“Digital technologies are giving the financial services industry the opportunity to leap forward and provide products and services that match the digital lifestyle of their customers,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “Our project with BBVA shows how large banks can differentiate themselves by creating a new banking experience. They have a clear vision and an understanding of their customers’ needs, as well as the technology that allows them to innovate while integrating seamlessly with their existing systems.”

Quobis and BT are Gold level members of the Oracle PartnerNetwork (OPN).

Contact Info

Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com

Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Quobis

Quobis is leading the industry in browser-based communication solutions for service providers and enterprises with its award-winning Sippo product family. For more information about Quobis, visit www.quobis.com.

About BT

BT is one of the world’s leading providers of communications services and solutions, serving customers in 180 countries. For more information about BT visit http://www.bt.com.

About BBVA

BBVA is a customer-centric global financial services group founded in 1857. The Group is the largest financial institution in Spain and Mexico, and it has leading franchises in South America and the Sunbelt Region of the United States; it is also the leading shareholder in Garanti, Turkey’s biggest bank by market capitalization. Its diversified business is focused on high-growth markets and it relies on technology as a key sustainable competitive advantage. Corporate responsibility is at the core of its business model. BBVA fosters financial education and inclusion, and supports scientific research and culture. It operates with the highest integrity, a long-term vision and applies the best practices.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.415.856.5145


Categories: Database, Vendor

Getting Started with Azure SQL Data Warehouse - Part 2

Database Journal News - Thu, 02/23/2017 - 09:01

Arshad Ali discusses the architecture of Azure SQL Data Warehouse and how you can scale up or down, based on your need.

Categories: Database

Migrating Your MongoDB with Mongoose RESTful API to Couchbase with Ottoman

NorthScale Blog - Mon, 02/20/2017 - 20:54

When talking to Node.js developers, it is common to hear about NoSQL as the database of choice for development. JavaScript and JSON go hand in hand because, after all, JSON stands for JavaScript Object Notation. This is the format most common in document-oriented databases, which Node.js developers tend to use.

A very popular stack of development technologies is the MongoDB, Express Framework, Angular, and Node.js (MEAN) stack, but similarly there is also the Couchbase, Express Framework, Angular, and Node.js (CEAN) stack. Now don’t get me wrong, every technology I listed is great, but when your applications need to scale and maintain their performance, you might have better luck with Couchbase because of how it functions by design.

So what if you’re already using MongoDB in your Node.js application?

Chances are you’re using Mongoose, an Object Document Mapper (ODM), for interacting with the database. Couchbase also has an ODM, and it is called Ottoman. The great thing about these two ODM technologies is that they share pretty much the same set of APIs, making any transition incredibly easy.

We’re going to see how to take a Node.js application driven by MongoDB and Mongoose and migrate it to Couchbase using Ottoman.

The Requirements

This tutorial is going to be a little different because of all the technologies involved. We’re going to be building everything from scratch for simplicity, so you will need Node.js, MongoDB, and Couchbase Server available.

We’re going to start by building a MongoDB with Mongoose RESTful API in Node.js, hence the Node.js and MongoDB requirement. Then we’re going to take this application and migrate it to Couchbase.

For purposes of this example, we won’t be seeing how to configure Node.js, MongoDB, or Couchbase Server.

Understanding our NoSQL Data Model

Both MongoDB and Couchbase are document databases. One stores BSON data and the other stores JSON; however, from the developer’s perspective they are incredibly similar. That said, let’s design a few models based around students attending courses at a school. The first model we create might be for actual courses, where a single course might look like the following:

{
    "id": "course-1",
    "type": "course",
    "name": "Computer Science 101",
    "term": "F2017",
    "students": [
        "student-1",
        "student-2"
    ]
}

In the above, notice that the course has a unique id, and we’ve defined it as being a course. The course has naming information as well as a list of students that are enrolled.

Now let’s say we want to define our model for student documents:

{
    "id": "student-1",
    "type": "student",
    "firstname": "Nic",
    "lastname": "Raboy",
    "courses": [
        "course-1",
        "course-25"
    ]
}

Notice that the above model has a similar format to that of the courses. What we’re saying here is that both documents are related, but still semi-structured. We’re saying that each course keeps track of its students and each student keeps track of their courses. This is useful when we try to query the data.

There are unlimited possibilities when it comes to modeling your NoSQL data. In fact, there are probably more than one hundred ways to define a courses-and-students model beyond the one I decided on. It is totally up to you, and that is the flexibility that NoSQL brings. More information on data modeling can be found here.

With a data model in mind, we can create a simple set of API endpoints using each MongoDB with Mongoose and Couchbase with Ottoman.

Developing an API with Express Framework and MongoDB

Because we’re, in theory, migrating away from MongoDB to Couchbase, it would make sense to figure out what we want in a MongoDB application first.

Create a new directory somewhere on your computer to represent the first part of our project. Within this directory, execute the following:

npm init --y
npm install express body-parser mongodb mongoose --save

The above commands will create a file called package.json that will keep track of each of the four project dependencies. The express dependency is for the Express framework, and the body-parser dependency allows request bodies to exist in POST, PUT, and DELETE requests, all of which are common for altering data. Then mongodb and mongoose are required for working with the database.

The project we build will have the following structure:

app.js
routes/
    courses.js
    students.js
models/
    models.js
package.json
node_modules/

Go ahead and create those directories and files if they don’t already exist. The app.js file will be the application driver, whereas the routes will contain our API endpoints and the models will contain the database definitions for our application.

Defining the Mongoose Schemas

So let’s work backwards, starting with the Mongoose model that will communicate with MongoDB. Open the project’s models/models.js file and include the following:

var Mongoose = require("mongoose");
var ObjectId = Mongoose.Schema.Types.ObjectId;

// Each array element stores an ObjectId; ref names the referenced model so populate() can resolve it
var CourseSchema = new Mongoose.Schema({
    name: String,
    term: String,
    students: [
        {
            type: ObjectId,
            ref: "Student"
        }
    ]
});

var StudentSchema = new Mongoose.Schema({
    firstname: String,
    lastname: String,
    courses: [
        {
            type: ObjectId,
            ref: "Course"
        }
    ]
});

module.exports.CourseModel = Mongoose.model("Course", CourseSchema);
module.exports.StudentModel = Mongoose.model("Student", StudentSchema);

In the above we’re creating MongoDB document schemas and then creating models out of them. Notice how similar the schemas are to the JSON models that we had defined previously outside of the application. We’re not declaring an id and type because the ODM handles this for us. In each of the arrays we store a reference to another model, identified by its name. What actually gets saved is a document id, but we can leverage Mongoose’s populate feature to load that id into the actual referenced data when querying.

So how do we use those models?

Creating the RESTful API Routes

Now we want to create routing information, or in other words, API endpoints. For example, let’s create all the CRUD endpoints for course information. In the project’s routes/courses.js file, add the following:

var CourseModel = require("../models/models").CourseModel;

var router = function(app) {

    app.get("/courses", function(request, response) {
        CourseModel.find({}).populate("students").then(function(result) {
            response.send(result);
        }, function(error) {
            response.status(401).send({ "success": false, "message": error});
        });
    });

    app.get("/course/:id", function(request, response) {
        CourseModel.findOne({"_id": request.params.id}).populate("students").then(function(result) {
            response.send(result);
        }, function(error) {
            response.status(401).send({ "success": false, "message": error});
        });
    });

    app.post("/courses", function(request, response) {
        var course = new CourseModel({
            "name": request.body.name
        });
        course.save(function(error, course) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(course);
        });
    });

}

module.exports = router;

In the above example we have three endpoints. We can view all available courses, view courses by id, and create new courses. Each endpoint is powered by Mongoose.

app.post("/courses", function(request, response) {
    var course = new CourseModel({
        "name": request.body.name
    });
    course.save(function(error, course) {
        if(error) {
            return response.status(401).send({ "success": false, "message": error});
        }
        response.send(course);
    });
});

When creating a document, the request POST data is added to a new model instantiation. Once save is called, it gets saved to MongoDB. Similar things happen when reading data from the database.

app.get("/courses", function(request, response) {
    CourseModel.find({}).populate("students").then(function(result) {
        response.send(result);
    }, function(error) {
        response.status(401).send({ "success": false, "message": error});
    });
});

In the case of the above, the find function is called and parameters are passed in. When there are no parameters, all documents are returned from the Course collection; otherwise, data is queried by the properties passed. The populate function allows the document references to be loaded, so instead of returning id values, the actual documents are returned.

Now let’s take a look at the other route.

The second route is responsible for creating student data, with one addition: we’re also going to be managing the document relationships here. Open the project’s routes/students.js file and include the following source code:

var CourseModel = require("../models/models").CourseModel;
var StudentModel = require("../models/models").StudentModel;

var router = function(app) {

    app.get("/students", function(request, response) {
        StudentModel.find({}).populate("courses").then(function(result) {
            response.send(result);
        }, function(error) {
            response.status(401).send({ "success": false, "message": error});
        });
    });

    app.get("/student/:id", function(request, response) {
        StudentModel.findOne({"_id": request.params.id}).populate("courses").then(function(result) {
            response.send(result);
        }, function(error) {
            response.status(401).send({ "success": false, "message": error});
        });
    });

    app.post("/students", function(request, response) {
        var student = new StudentModel({
            "firstname": request.body.firstname,
            "lastname": request.body.lastname
        });
        student.save(function(error, student) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(student);
        });
    });

    app.post("/student/course", function(request, response) {
        CourseModel.findOne({"_id": request.body.course_id}).then(function(course) {
            StudentModel.findOne({"_id": request.body.student_id}).then(function(student) {
                if(course != null && student != null) {
                    if(!student.courses) {
                        student.courses = [];
                    }
                    if(!course.students) {
                        course.students = [];
                    }
                    student.courses.push(course._id);
                    course.students.push(student._id);
                    student.save();
                    course.save();
                    response.send(student);
                } else {
                    return response.status(401).send({ "success": false, "message": "The `student_id` or `course_id` was invalid"});
                }
            }, function(error) {
                return response.status(401).send({ "success": false, "message": error});
            });
        }, function(error) {
            return response.status(401).send({ "success": false, "message": error});
        });
    });

}

module.exports = router;

The first three API endpoints should look familiar. The new endpoint student/course is responsible for adding students to a course and courses to a student.

The first thing that happens is a course is found based on a request id. Next, a student is found based on a different request id. If both documents are found then the ids are added to each of the appropriate arrays and the documents are saved once again.

The final step here is to create our application driver. This will connect to the database and serve the application to be consumed by clients.

Connecting to MongoDB and Serving the Application

Open the project’s app.js file and add the following code:

var Mongoose = require("mongoose");
var Express = require("express");
var BodyParser = require("body-parser");

var app = Express();
app.use(BodyParser.json());

Mongoose.Promise = Promise;
var studentRoutes = require("./routes/students")(app);
var courseRoutes = require("./routes/courses")(app);
Mongoose.connect("mongodb://localhost:27017/example", function(error, database) {
    if(error) {
        return console.log("Could not establish a connection to MongoDB");
    }
    var server = app.listen(3000, function() {
        console.log("Connected on port 3000...");
    });
});

In the above code we are importing each of the dependencies that we previously installed. Then we are initializing Express and telling it to accept JSON bodies in requests.

The routes that were previously created need to be linked to Express, so we’re importing them and passing the Express instance. Finally, a connection to MongoDB is made with Mongoose and the application starts serving.
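To quickly sanity-check the API once it’s running, you might POST a course to it. Here’s a minimal sketch using only Node’s built-in http module; it assumes the server is listening on localhost:3000 as configured above, and the course name is just an example value:

var http = require("http");

// Example payload for the POST /courses endpoint defined earlier
var body = JSON.stringify({ "name": "Computer Science 101" });

var request = http.request({
    host: "localhost",
    port: 3000,
    path: "/courses",
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Content-Length": Buffer.byteLength(body)
    }
}, function(response) {
    var data = "";
    response.on("data", function(chunk) { data += chunk; });
    // The API responds with the saved course document
    response.on("end", function() { console.log(data); });
});
request.write(body);
request.end();

A follow-up GET request to /courses should then return the newly created course.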

Not particularly difficult, right?

Developing an API with Express Framework and Couchbase

So we saw how to create an API with Node.js, Mongoose, and MongoDB; now we need to accomplish the same thing with Node.js, Ottoman, and Couchbase. Again, this is to show how easy it is to transition from MongoDB to Couchbase and get all the benefits of an enterprise-ready, powerful NoSQL database.

Create a new directory somewhere on your computer and within it, execute the following to create a new project:

npm init --y
npm install express body-parser couchbase ottoman --save

The above commands are similar to what we saw previously, with the exception that now we’re using Couchbase and Ottoman. The project we build will have exactly the same structure, and as a refresher, it looks like the following:

app.js
routes/
    courses.js
    students.js
models/
    models.js
package.json
node_modules/

All Ottoman models will exist in the models directory, all API endpoints and Ottoman logic will exist in the routes directory and all driver logic will exist in the app.js file.

Defining the Ottoman Models

We’re going to work in the same direction that we did for the MongoDB application to show the ease of transition. This means starting with the Ottoman models that will represent our data in Couchbase Server.

Open the project’s models/models.js file and include the following:

var Ottoman = require("ottoman");

var CourseModel = Ottoman.model("Course", {
    name: { type: "string" },
    term: { type: "string" },
    students: [
        {
            ref: "Student"
        }
    ]
});

var StudentModel = Ottoman.model("Student", {
    firstname: { type: "string" },
    lastname: { type: "string" },
    courses: [
        {
            ref: "Course"
        }
    ]
});

module.exports.StudentModel = StudentModel;
module.exports.CourseModel = CourseModel;

The above should look familiar, even though these are two very different ODMs. Instead of designing MongoDB schemas through Mongoose, we can go straight to designing JSON models for Couchbase with Ottoman. Remember, there are no schemas in Couchbase buckets.

Each Ottoman model has a set of properties and an array referencing other documents. While the syntax is slightly different, it accomplishes the same thing.

This brings us to the API endpoints that use these models.

Creating the RESTful API Endpoints

The first set of endpoints that we want to create are in relation to managing courses. Open the project’s routes/courses.js file and include the following JavaScript code:

var CourseModel = require("../models/models").CourseModel;

var router = function(app) {

    app.get("/courses", function(request, response) {
        CourseModel.find({}, {load: ["students"]}, function(error, result) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(result);
        });
    });

    app.get("/course/:id", function(request, response) {
        CourseModel.getById(request.params.id, {load: ["students"]}, function(error, result) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(result);
        });
    });

    app.post("/courses", function(request, response) {
        var course = new CourseModel({
            "name": request.body.name
        });
        course.save(function(error, result) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(course);
        });
    });

}

module.exports = router;

In the above code we have three endpoints structured in a nearly identical way to what we saw with MongoDB and Mongoose. However, there are some minor differences. For example, instead of using promises we’re using callbacks.

One of the more visible differences is how querying is done. Not only do we have access to a find function like we saw in Mongoose, but we also have access to a getById function. In both scenarios we can pass information on how we expect a query to happen. Instead of using a populate function, we can use the load option and specify which referenced documents we wish to load. The concepts between Mongoose and Ottoman are very much the same.

This brings us to our second set of routes. Open the project’s routes/students.js file and include the following JavaScript code:

var StudentModel = require("../models/models").StudentModel;
var CourseModel = require("../models/models").CourseModel;

var router = function(app) {

    app.get("/students", function(request, response) {
        StudentModel.find({}, {load: ["courses"]}, function(error, result) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(result);
        });
    });

    app.get("/student/:id", function(request, response) {
        StudentModel.getById(request.params.id, {load: ["courses"]}, function(error, result) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(result);
        });
    });

    app.post("/students", function(request, response) {
        var student = new StudentModel({
            "firstname": request.body.firstname,
            "lastname": request.body.lastname
        });
        student.save(function(error, result) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            response.send(student);
        });
    });

    app.post("/student/course", function(request, response) {
        CourseModel.getById(request.body.course_id, function(error, course) {
            if(error) {
                return response.status(401).send({ "success": false, "message": error});
            }
            StudentModel.getById(request.body.student_id, function(error, student) {
                if(error) {
                    return response.status(401).send({ "success": false, "message": error});
                }
                if(!student.courses) {
                    student.courses = [];
                }
                if(!course.students) {
                    course.students = [];
                }
                student.courses.push(CourseModel.ref(course._id));
                course.students.push(StudentModel.ref(student._id));
                student.save(function(error, result) {});
                course.save(function(error, result) {});
                response.send(student);
            });
        });
    })
}

module.exports = router;

We already know the first three endpoints are going to be of the same format. We want to pay attention to the last endpoint which manages our relationships.

With this endpoint we are obtaining a course by its id value and a student by its id value. As long as both documents are found, we add a reference to each document in the other’s array and re-save both documents. Nearly the same code accomplished the same thing in the Mongoose version.

Now we can look at the logic to start serving the application after connecting to the database.

Connecting to Couchbase and Serving the Application

Open the project’s app.js file and include the following JavaScript:

var Couchbase = require("couchbase");
var Ottoman = require("ottoman");
var Express = require("express");
var BodyParser = require("body-parser");

var app = Express();
app.use(BodyParser.json());

var bucket = (new Couchbase.Cluster("couchbase://localhost")).openBucket("example");
Ottoman.store = new Ottoman.CbStoreAdapter(bucket, Couchbase);
var studentRoutes = require("./routes/students")(app);
var courseRoutes = require("./routes/courses")(app);
var server = app.listen(3000, function() {
    console.log("Connected on port 3000...");
});

Does the above look familiar? It should! We are just swapping out the Mongoose connection information with the Couchbase connection information. After connecting to the database we can start serving the application.
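One thing worth keeping in mind: Ottoman’s query methods are ultimately backed by N1QL, so the bucket needs appropriate indexes before the find calls above will work. Depending on your version, Ottoman can build the indexes its models need for you. A rough sketch, assuming Ottoman 1.x and its ensureIndices helper:

// Ask Ottoman to create any indexes its models rely on.
// ensureIndices is an Ottoman 1.x helper; adjust for your version.
Ottoman.ensureIndices(function(error) {
    if(error) {
        return console.log("Could not create indexes: " + error);
    }
    console.log("Indexes are in place");
});

If you want to be sure the indexes exist before the first request arrives, you could move the app.listen call into this callback.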

Conclusion

You just saw how to build a RESTful API with Node.js, Mongoose, and MongoDB, then bring it to Couchbase in a very seamless fashion. This was meant to prove that the migration process is nothing to be scared of if you’re using Node.js as your backend technology.

With Couchbase you have a high-performance, distributed NoSQL database that works at any scale. The need to use caching in front of your database is eliminated because it is built into Couchbase. For more information on using Ottoman, you can check out a previous blog post I wrote. More information on using Couchbase with Node.js can be found in the Couchbase Developer Portal.

The post Migrating Your MongoDB with Mongoose RESTful API to Couchbase with Ottoman appeared first on The Couchbase Blog.

Categories: Architecture, Database

New Profiling and Monitoring in Couchbase Server 5.0 Preview

NorthScale Blog - Mon, 02/20/2017 - 20:07

N1QL query monitoring and profiling updates are just some of the goodness you can find in February’s developer preview release of Couchbase Server 5.0.0.

Go download the February 5.0.0 developer release of Couchbase Server today, click the “Developer” tab, and check it out. You still have time to give us some feedback before the official release.

As always, keep in mind that I’m writing this blog post on early builds, and some things may change in minor ways by the time you get the release.

What is profiling and monitoring for?

When I’m writing N1QL queries, I need to be able to understand how well (or how badly) my query (and my cluster) is performing in order to make improvements and diagnose issues.

With this latest developer version of Couchbase Server 5.0, some new tools have been added to your N1QL-writing toolbox.

N1QL Writing Review

First, some review.

There are multiple ways for a developer to execute N1QL queries.

In this post, I’ll be mainly using Query Workbench.

There are two system catalogs that are already available to you in Couchbase Server 4.5 that I’ll be talking about today.

  • system:active_requests – This catalog lists all the currently executing active requests or queries. You can execute the N1QL query SELECT * FROM system:active_requests; and it will list all those results.

  • system:completed_requests – This catalog lists all the recently completed requests (that have run longer than some threshold of time, 1 second by default). You can execute SELECT * FROM system:completed_requests; and it will list these queries.

New to N1QL: META().plan

Both active_requests and completed_requests return not only the original N1QL query text, but also related information: request time, request id, execution time, scan consistency, and so on. This can be useful information. Here’s an example that looks at a simple query (select * from travel-sample) while it’s running by executing select * from system:active_requests;

{
	"active_requests": {
	  "clientContextID": "805f519d-0ffb-4adf-bd19-15238c95900a",
	  "elapsedTime": "645.4333ms",
	  "executionTime": "645.4333ms",
	  "node": "10.0.75.1",
	  "phaseCounts": {
		"fetch": 6672,
		"primaryScan": 7171
	  },
	  "phaseOperators": {
		"fetch": 1,
		"primaryScan": 1
	  },
	  "phaseTimes": {
		"authorize": "500.3µs",
		"fetch": "365.7758ms",
		"parse": "500µs",
		"primaryScan": "107.3891ms"
	  },
	  "requestId": "80787238-f4cb-4d2d-999f-7faff9b081e4",
	  "requestTime": "2017-02-10 09:06:18.3526802 -0500 EST",
	  "scanConsistency": "unbounded",
	  "state": "running",
	  "statement": "select * from `travel-sample`;"
	}
}

First, I want to point out that phaseTimes is a new addition to the results. It’s a quick and dirty way to get a sense of the query cost without looking at the whole profile. It gives you the overall cost of each request phase without going into detail of each operator. In the above example, for instance, you can see that parse took 500µs and primaryScan took 107.3891ms. This might be enough information for you to go on without diving into META().plan.

However, with the new META().plan, you can get very detailed information about the query plan. This time, I’ll execute SELECT *, META().plan FROM system:active_requests;

[
  {
    "active_requests": {
      "clientContextID": "75f0f401-6e87-48ae-bca8-d7f39a6d029f",
      "elapsedTime": "1.4232754s",
      "executionTime": "1.4232754s",
      "node": "10.0.75.1",
      "phaseCounts": {
        "fetch": 12816,
        "primaryScan": 13231
      },
      "phaseOperators": {
        "fetch": 1,
        "primaryScan": 1
      },
      "phaseTimes": {
        "authorize": "998.7µs",
        "fetch": "620.704ms",
        "primaryScan": "48.0042ms"
      },
      "requestId": "42f50724-6893-479a-bac0-98ebb1595380",
      "requestTime": "2017-02-15 14:44:23.8560282 -0500 EST",
      "scanConsistency": "unbounded",
      "state": "running",
      "statement": "select * from `travel-sample`;"
    },
    "plan": {
      "#operator": "Sequence",
      "#stats": {
        "#phaseSwitches": 1,
        "kernTime": "1.4232754s",
        "state": "kernel"
      },
      "~children": [
        {
          "#operator": "Authorize",
          "#stats": {
            "#phaseSwitches": 3,
            "kernTime": "1.4222767s",
            "servTime": "998.7µs",
            "state": "kernel"
          },
          "privileges": {
            "default:travel-sample": 1
          },
          "~child": {
            "#operator": "Sequence",
            "#stats": {
              "#phaseSwitches": 1,
              "kernTime": "1.4222767s",
              "state": "kernel"
            },
            "~children": [
              {
                "#operator": "PrimaryScan",
                "#stats": {
                  "#itemsOut": 13329,
                  "#phaseSwitches": 53319,
                  "execTime": "26.0024ms",
                  "kernTime": "1.3742725s",
                  "servTime": "22.0018ms",
                  "state": "kernel"
                },
                "index": "def_primary",
                "keyspace": "travel-sample",
                "namespace": "default",
                "using": "gsi"
              },
              {
                "#operator": "Fetch",
                "#stats": {
                  "#itemsIn": 12817,
                  "#itemsOut": 12304,
                  "#phaseSwitches": 50293,
                  "execTime": "18.5117ms",
                  "kernTime": "787.9722ms",
                  "servTime": "615.7928ms",
                  "state": "services"
                },
                "keyspace": "travel-sample",
                "namespace": "default"
              },
              {
                "#operator": "Sequence",
                "#stats": {
                  "#phaseSwitches": 1,
                  "kernTime": "1.4222767s",
                  "state": "kernel"
                },
                "~children": [
                  {
                    "#operator": "InitialProject",
                    "#stats": {
                      "#itemsIn": 11849,
                      "#itemsOut": 11848,
                      "#phaseSwitches": 47395,
                      "execTime": "5.4964ms",
                      "kernTime": "1.4167803s",
                      "state": "kernel"
                    },
                    "result_terms": [
                      {
                        "expr": "self",
                        "star": true
                      }
                    ]
                  },
                  {
                    "#operator": "FinalProject",
                    "#stats": {
                      "#itemsIn": 11336,
                      "#itemsOut": 11335,
                      "#phaseSwitches": 45343,
                      "execTime": "6.5002ms",
                      "kernTime": "1.4157765s",
                      "state": "kernel"
                    }
                  }
                ]
              }
            ]
          }
        },
        {
          "#operator": "Stream",
          "#stats": {
            "#itemsIn": 10824,
            "#itemsOut": 10823,
            "#phaseSwitches": 21649,
            "kernTime": "1.4232754s",
            "state": "kernel"
          }
        }
      ]
    }
  }, ...
]

The above output comes from the Query Workbench.

Note the new “plan” part. It contains a tree of operators that combine to execute the N1QL query. The root operator is a Sequence, which itself has a collection of child operators like Authorize, PrimaryScan, Fetch, and possibly even more Sequences.

Enabling the profile feature

To get this information when using cbq or the REST API, you’ll need to turn on the “profile” feature.

You can do this in cbq by entering set -profile timings; and then running your query.

You can also do this with the REST API on a per request basis (using the /query/service endpoint and passing a querystring parameter of profile=timings, for instance).
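As a rough illustration, a per-request profile with curl might look like the following; it assumes the query service is on localhost:8093, the credentials are placeholders, and the statement is just an example (here the parameters are sent in the request body, which the query service also accepts):

curl -u Administrator:password http://localhost:8093/query/service \
  -d 'statement=SELECT * FROM `travel-sample` LIMIT 1;' \
  -d 'profile=timings'

The profiling information comes back in the response alongside the usual results and metrics.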

You can turn on the setting for the entire node by making a POST request to http://localhost:8093/admin/settings, using Basic authentication, and a JSON body like:

{
  "completed-limit": 4000,
  "completed-threshold": 1000,
  "controls": false,
  "cpuprofile": "",
  "debug": false,
  "keep-alive-length": 16384,
  "loglevel": "INFO",
  "max-parallelism": 1,
  "memprofile": "",
  "pipeline-batch": 16,
  "pipeline-cap": 512,
  "pretty": true,
  "profile": "timings",
  "request-size-cap": 67108864,
  "scan-cap": 0,
  "servicers": 32,
  "timeout": 0
}

Notice the profile setting. It was previously set to off, but I set it to “timings”.

You may not want to do that, especially on nodes being used by other people and programs, because it will affect other queries running on the node. It’s better to do this on a per-request basis.

It’s also what Query Workbench does by default.

Using the Query Workbench

There’s a lot of information in META().plan about how the plan is executed. Personally, I prefer to look at a simplified graphical version of it in Query Workbench by clicking the “Plan” icon (which I briefly mentioned in a previous post about the new Couchbase Web Console UI).

Query Workbench plan results

Let’s look at a slightly more complex example. For this exercise, I’m using the travel-sample bucket, but I have removed one of the indexes (DROP INDEX `travel-sample`.def_sourceairport;).

I then execute a N1QL query to find flights between San Francisco and Miami:

SELECT r.id, a.name, s.flight, s.utc, r.sourceairport, r.destinationairport, r.equipment
FROM `travel-sample` r
UNNEST r.schedule s
JOIN `travel-sample` a ON KEYS r.airlineid
WHERE r.sourceairport = 'SFO'
AND r.destinationairport = 'MIA'
AND s.day = 0
ORDER BY a.name;

Executing this query (on my single-node local machine) takes about 10 seconds. That’s definitely not an acceptable amount of time, so let’s look at the plan to see what the problem might be (I broke it into two lines so the screenshots will fit in the blog post).

Query Workbench plan part 1

Query Workbench plan part 2

Looking at that plan, it seems like the costliest parts of the query are the Filter and the Join. JOIN operations work on keys, so they should normally be very quick. But it looks like there are a lot of documents being joined.

The Filter (the WHERE part of the query) is also taking a lot of time. It’s looking at the sourceairport and destinationairport fields. Looking elsewhere in the plan, I see that there is a PrimaryScan. This should be a red flag when you are trying to write performant queries. PrimaryScan means that the query couldn’t find an index other than the primary index. This is roughly the equivalent of a “table scan” in relational database terms. (You may want to drop the primary index so that these issues get bubbled-up faster, but that’s a topic for another time).

Let’s add an index on the sourceairport field and see if that helps.

CREATE INDEX `def_sourceairport` ON `travel-sample`(`sourceairport`);

Now, running the same query as above, I get the following plan:

Query Workbench improved plan part 1

Query Workbench improved plan part 2

This query took ~100ms (on my single-node local machine), which is much more acceptable. The Filter and the Join still take up a large percentage of the time, but thanks to the IndexScan replacing the PrimaryScan, there are far fewer documents for those operators to deal with. Perhaps the query could be improved even more with an additional index on the destinationairport field.
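If you want to experiment with that, the statement would mirror the one above; the index name here is just illustrative, and it’s worth checking the plan again afterward to confirm the new index actually helps:

CREATE INDEX `def_destinationairport` ON `travel-sample`(`destinationairport`);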

Beyond Tweaking Queries

The answer to performance problems is not always in tweaking queries. Sometimes you might need to add more nodes to your cluster to address the underlying problem.

Look at the PrimaryScan information in META().plan. Here’s a snippet:

"~children": [
  {
    "#operator": "PrimaryScan",
    "#stats": {
      "#itemsOut": 13329,
      "#phaseSwitches": 53319,
      "execTime": "26.0024ms",
      "kernTime": "1.3742725s",
      "servTime": "22.0018ms",
      "state": "kernel"
    },
    "index": "def_primary",
    "keyspace": "travel-sample",
    "namespace": "default",
    "using": "gsi"
  }, ... ]

The servTime value indicates how much time the Query service spends waiting on the Key/Value data storage. If the servTime is very high but only a small number of documents is being processed, that indicates that the indexer (or the key/value service) can’t keep up, perhaps because of load coming from somewhere else. So either something unexpected is running elsewhere, or your cluster is trying to handle too much load. It might be time to add some more nodes.

Similarly, the kernTime is how much time is spent waiting on other N1QL routines. This might mean that something else downstream in the query plan has a problem, or that the query node is overrun with requests and is having to wait a lot.

We want your feedback!

The new META().plan functionality and the new Plan UI combine in Couchbase Server 5.0 to improve the N1QL writing and profiling process.

Stay tuned to the Couchbase Blog for information about what’s coming in the next developer build.

Interested in trying out some of these new features? Download Couchbase Server 5.0 today!

We want feedback! Developer releases are coming every month, so you have a chance to make a difference in what we are building.

Bugs: If you find a bug (something that is broken or doesn’t work how you’d expect), please file an issue in our JIRA system at issues.couchbase.com or submit a question on the Couchbase Forums. Or, contact me with a description of the issue. I would be happy to help you or submit the bug for you (my Couchbase handlers high-five me every time I submit a good bug).

Feedback: Let me know what you think. Something you don’t like? Something you really like? Something missing? Now you can give feedback directly from within the Couchbase Web Console. Look for the feedback icon at the bottom right of the screen.

In some cases, it may be tricky to decide if your feedback is a bug or a suggestion. Use your best judgement, or again, feel free to contact me for help. I want to hear from you. The best way to contact me is either Twitter @mgroves or email me matthew.groves@couchbase.com.

The post New Profiling and Monitoring in Couchbase Server 5.0 Preview appeared first on The Couchbase Blog.

Categories: Architecture, Database

Timestamp-based conflict resolution in XDCR – a QE’s perspective

NorthScale Blog - Mon, 02/20/2017 - 15:01

Introduction

Cross Datacenter Replication (XDCR) is an important core feature of Couchbase that helps users with disaster recovery and data locality. Conflict resolution is an inevitable challenge faced by XDCR when a document is modified in two different locations before it has been synchronized between the locations.

Until 4.6, Couchbase only supported a revision ID-based strategy to handle conflict resolution. In this strategy, a document’s revision ID, which is updated every time the document is modified, is used as the first field to decide the winner. If the revision IDs of both contestants are the same, then CAS, TTL, and flags are used, in that order, to resolve the conflict. This strategy works best for applications designed around a “most updates is best” policy. For example, a ticker app used by train conductors, which updates a counter stored in Couchbase Server to count the number of passengers, will work best with this policy and hence perform accurately with revision ID-based conflict resolution.

Starting with 4.6, Couchbase supports an additional strategy called timestamp-based conflict resolution. Here, the timestamp of a document, which is stored in CAS, is used as the first field to decide the winner. In order to keep a consistent ordering of mutations, Couchbase uses a hybrid logical clock (HLC), which is a combination of a physical clock and a logical clock. If the timestamps of both contestants are the same, then revision ID, TTL, and flags are used, in that order, to resolve the conflict. This strategy is adapted to facilitate applications which are designed around a “most recent update is best” policy. For example, a flight tracking app that stores the estimated arrival time of a flight in Couchbase Server will perform accurately with this conflict resolution. Precisely, this mechanism can be summarized as “last write wins”.

One should understand that the most updated document need not be the most recent document, and vice versa. So the user really needs to understand the application’s design, needs, and data pattern before deciding which conflict resolution mechanism to use. For the same reason, Couchbase has made the conflict resolution mechanism a bucket-level parameter. Users need to decide and select the strategy they wish to follow while creating the bucket. Once a bucket is created with a particular conflict resolution mechanism via the UI, REST API, or CLI, it cannot be changed; the user will have to delete and recreate the bucket to change the strategy. Also, to avoid confusion and complications, Couchbase restricts XDCR from being set up in mixed mode, i.e., the source and destination buckets cannot have different conflict resolution strategies selected. They both have to use either revision ID-based or timestamp-based conflict resolution. If the user tries to set it up otherwise via the UI, REST API, or CLI, an error message will be displayed.
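For reference, the strategy is chosen when the bucket is created. With couchbase-cli it might look roughly like the following; the bucket name is just an example, and the --conflict-resolution flag name and values are from memory and may differ by version, so treat this as an assumption and check couchbase-cli bucket-create --help on your installation:

couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password \
  --bucket flights --bucket-type couchbase --bucket-ramsize 256 \
  --conflict-resolution timestamp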

Timestamp-based conflict resolution Use Cases

High Availability with Cluster Failover

Here, all database operations go to Datacenter A and are replicated via XDCR to Datacenter B. If the cluster located in Datacenter A fails then the application fails all traffic over to Datacenter B.

Datacenter Locality

Here, two active clusters operate on discrete sets of documents. This ensures no conflicts are generated during normal operation. A bi-directional XDCR relationship is configured to replicate their updates to each other. When one cluster fails, application traffic can be failed over to the remaining active cluster.

How does Timestamp-based conflict resolution ensure safe failover?

Timestamp-based conflict resolution requires that applications only allow traffic to the other Datacenter after the maximum of the following two time periods has elapsed:

  1. The replication latency between A and B. This allows any mutations in-flight to be received by Datacenter B.
  2. The absolute time skew between Datacenter A and Datacenter B. This ensures that any writes to Datacenter B occur after the last write to Datacenter A, after the calculated delay, at which point all database operations would go to Datacenter B.

When availability is restored to Datacenter A, applications must observe the same time period before redirecting their traffic. For both of the use cases described above, using timestamp-based conflict resolution ensures that the most recent version of any document will be preserved.

How to configure NTP for Timestamp-based conflict resolution?

A prerequisite that users should keep in mind before opting for timestamp-based conflict resolution is that they need to use synchronized clocks to ensure the accuracy of this strategy. Couchbase advises them to use Network Time Protocol (NTP) to synchronize time across multiple servers. The users will have to configure their clusters to periodically synchronize their wall clocks with a particular NTP server or a pool of NTP peers to ensure availability. Clock synchronization is key to the accuracy of the Hybrid Logical Clock used by Couchbase to resolve conflicts based on timestamps.

As a QE, I found testing timestamp-based conflict resolution to be a good learning experience. One of the major challenges was learning how NTP works. The default setup for all the test cases is to enable NTP, start the service, sync up the wall clock with 0.north-america.pool.ntp.org, and then proceed with the test. These steps were achieved using the following commands in setup:

~$ chkconfig ntpd on

~$ /etc/init.d/ntpd start

~$ ntpdate -q 0.north-america.pool.ntp.org

Once the test is done and results are verified, NTP service is stopped and disabled using the following commands:

~$ chkconfig ntpd off

~$ /etc/init.d/ntpd stop

This is a vanilla setup where each individual node syncs up its wall clock with 0.north-america.pool.ntp.org. It was interesting to automate test cases where nodes sync up their wall clocks with a pool of NTP peers, where the source and destination clusters sync with different NTP pools (A (0.north-america.pool.ntp.org) -> B (3.north-america.pool.ntp.org)), and where each cluster in a chain topology of length 3 (A (EST) -> B (CST) -> C (PST)) is in a different timezone. We had to manually configure these scenarios, observe the behaviour, and then automate them.
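For reference, pointing a node at a pool of peers rather than a single server is just a matter of listing several server lines in /etc/ntp.conf. A minimal sketch (driftfile, restrict, and other typical directives omitted):

server 0.north-america.pool.ntp.org iburst
server 1.north-america.pool.ntp.org iburst
server 2.north-america.pool.ntp.org iburst
server 3.north-america.pool.ntp.org iburst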

How did we test NTP-based negative scenarios?

The next challenge was to test scenarios where NTP is not running on the Couchbase nodes and there is a time skew between the source and destination. Time skew might also occur if the wall clock time difference across clusters is high: any time synchronization mechanism will take some time to sync the clocks, resulting in a time-skewed window. Note that Couchbase only gives an advisory warning while creating a bucket with timestamp-based conflict resolution, stating that the user should ensure a time synchronization mechanism is in place on all the nodes. It does not validate and restrict users from creating such a bucket if a time synchronization mechanism is not present. So it is quite possible that the user might ignore this warning, create a bucket with timestamp-based conflict resolution, and see weird behaviour when there is a time skew.

Let us consider one such situation here:

  1. Create default bucket on source and target cluster with timestamp based conflict resolution
  2. Setup XDCR from source to target
  3. Disable NTP on both clusters
  4. Make wall clock of target cluster slower than source cluster by 5 minutes
  5. Pause replication
  6. Create a doc D1 at time T1 in target cluster
  7. Create a doc D2 with same key at time T2 in source cluster
  8. Update D1 in target cluster at time T3
  9. Resume replication
  10. Observe that D2 overwrites D1 even though T3 > T2 > T1 and the last update to D1 in the target cluster should have won

Here the last write by timeline did not win: because the clocks were skewed and out of sync, the wrong document was declared the winner. This shows how important time synchronization is for the timestamp-based conflict resolution strategy. Figuring out all such scenarios and automating them was indeed a challenge.
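
To make the failure mode concrete, here is a minimal Python sketch (an illustration only, not Couchbase internals) of a last-write-wins comparison when the target cluster's wall clock runs five minutes behind, mirroring the steps above:

# Minimal last-write-wins sketch: the document with the larger timestamp wins.
SKEW = -300  # target cluster's wall clock runs 5 minutes behind the source

def lww_winner(a, b):
    return a if a["ts"] >= b["ts"] else b

# Real-world order of events is T1 < T2 < T3
d1_created = {"doc": "D1", "ts": 100 + SKEW}   # step 6: written on the skewed target at T1
d2_created = {"doc": "D2", "ts": 160}          # step 7: written on the source at T2
d1_updated = {"doc": "D1", "ts": 220 + SKEW}   # step 8: updated on the target at T3

print(lww_winner(d2_created, d1_updated)["doc"])  # "D2": the earlier write wins because of the skew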

How did we test complex scenarios with Timestamp-based Conflict Resolution?

Up next was finding a way to validate the correctness of timestamp-based conflict resolution against the revision-ID-based strategy. We needed to perform the same steps in an XDCR setup and verify that the results differed based on the bucket's conflict resolution strategy. To achieve this, we created two different buckets, one configured to use revID-based conflict resolution and the other to use timestamp-based conflict resolution, and then ran these steps on both buckets in parallel:

  1. Setup XDCR and pause replication
  2. Create doc D1 in target at time T1
  3. Create doc D2 with same key in source at time T2
  4. Update doc D2 in source at time T3
  5. Update doc D2 in source again at time T4
  6. Update doc D1 in target at time T5
  7. Resume replication

In the first bucket, which is configured to use revID-based conflict resolution, doc D1 at the target will be overwritten by D2, as D2 has been mutated the most. In the second bucket, which is configured to use timestamp-based conflict resolution, doc D1 at the target will be declared the winner and retained, as it is the latest to be mutated. Figuring out such scenarios and automating them made our regression suite exhaustive and robust.
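
A rough Python sketch of how the two policies diverge on this scenario (a simplified model, not the server's implementation) looks like this:

# Simplified model: each write bumps a revision counter and records a timestamp.
source_d2 = {"rev": 3, "ts": 4}  # created at T2, updated at T3 and T4
target_d1 = {"rev": 2, "ts": 5}  # created at T1, updated at T5 (the latest write)

def revid_winner(a, b):
    return a if a["rev"] >= b["rev"] else b   # most-mutated document wins

def timestamp_winner(a, b):
    return a if a["ts"] >= b["ts"] else b     # most recent write wins

print(revid_winner(source_d2, target_d1) is source_d2)      # True: D2 overwrites D1
print(timestamp_winner(source_d2, target_d1) is target_d1)  # True: D1 is retained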

How did we test HLC correctness?

The final challenge was to test the monotonicity of the hybrid logical clock (HLC) used by Couchbase in timestamp-based conflict resolution. Apart from verifying that the HLC remained the same between an active vbucket and its replica, we had some interesting scenarios, as follows:

  1. C1 (slower) -> C2 (faster) – mutations made in C1 will lose based on timestamp and C2 will always win – so HLC of C2 should not change after replication
  2. C1 (faster) -> C2 (slower) – mutations made in C1 will always win based on timestamp – so HLC of C2 should be greater than what it was before replication due to monotonicity
  3. Same scenario as 1, even though HLC of C2 did not change due to replication, any updates on C2 should increase its HLC owing to monotonicity
  4. Similarly, for scenario described in 2, apart from C2’s HLC being greater than what it was before replication, more updates to docs on C2 should keep its HLC increasing owing to monotonicity
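
The property under test can be summarised with a simplified hybrid logical clock sketch in Python (an illustration of the idea only, not the server's implementation): both local mutations and replicated mutations feed into max() updates, so the clock can never move backwards.

import time

class HLC:
    def __init__(self):
        self.last = 0

    def now(self, wall=None):
        # Timestamp a local mutation: monotonic even if the wall clock goes backwards.
        physical = int((wall if wall is not None else time.time()) * 1000)
        self.last = max(self.last + 1, physical)
        return self.last

    def observe(self, remote_ts):
        # Merge a replicated mutation's timestamp, preserving monotonicity.
        self.last = max(self.last, remote_ts)
        return self.last

clock = HLC()
t1 = clock.now()
t2 = clock.now(wall=time.time() - 300)  # wall clock jumps back 5 minutes
assert t2 > t1                          # scenarios 3 and 4: local updates still move the HLC forward
clock.observe(t1 - 10)                  # scenario 1: a losing remote mutation leaves the HLC unchanged
assert clock.last == t2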

Thus, all these challenges made testing timestamp based conflict resolution a rewarding QE feat.

The post Timestamp-based conflict resolution in XDCR – a QE’s perspective appeared first on The Couchbase Blog.

Categories: Architecture, Database

Digital Intelligence Transcends BI for The Dynamic Online Businesses

Database Journal News - Mon, 02/20/2017 - 09:01

Business intelligence has evolved significantly, but the challenge of bringing it to the masses as a powerful and simple solution remains. One of the major challenges is handling and managing big data and unifying the different data sources into a steady stream that can be easily analyzed to tell a story.

Categories: Database

Managing Secrets in Couchbase 4.6

NorthScale Blog - Mon, 02/20/2017 - 07:55

Every software application has secrets. Passwords, API keys, and secure tokens all fall into the category of secrets. There are dire consequences if your production secret keys get into the wrong hands, so you'll want to tightly control how and when your secret keys are accessible.

Couchbase has added more services to its infrastructure, and these services have internal and external credentials; storing those credentials securely is a challenge. Another challenge is rotating the secrets for all internal and external services.

Couchbase 4.6 introduces Secret Management, where all secrets are encrypted when stored and passed correctly to nodes and services, along with easy rotation of secrets. There is no impact on SDK clients, the UI, or performance.

Couchbase maintains a two-level key hierarchy to make it easier to rotate the master password without re-encrypting data, to support multiple master passwords, and to simplify future integration with KMIP servers. At node startup, a master password is created or supplied by the user, from which a master key is derived using a strong key derivation function; Couchbase uses PBKDF2 for key generation.

A random data_key is also created on server startup, which is then encrypted with the master key. All secrets on disk are encrypted using the data_key. Couchbase uses the AES 256-bit algorithm in GCM mode to encrypt secrets.

To bootstrap the system, the master key is used to open the encrypted data key. The decrypted data key is then used to open the encrypted secrets, and the secrets are used to start Couchbase Server.  Couchbase recommends using a strong master password.
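
Conceptually, the key hierarchy works something like the following Python sketch (an illustration only; the PBKDF2 parameters and the third-party cryptography library are assumptions, not Couchbase's actual implementation):

import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Level 1: master key derived from the master password with PBKDF2
salt = os.urandom(16)
master_key = hashlib.pbkdf2_hmac("sha256", b"strong-master-password", salt, 100000, dklen=32)

# Level 2: random data key, stored on disk only in encrypted form
data_key = AESGCM.generate_key(bit_length=256)
dk_nonce = os.urandom(12)
encrypted_data_key = AESGCM(master_key).encrypt(dk_nonce, data_key, None)

# Secrets on disk are encrypted with the data key using AES-256-GCM
s_nonce = os.urandom(12)
encrypted_secret = AESGCM(data_key).encrypt(s_nonce, b"bucket password", None)

# Bootstrapping reverses the chain: master key -> data key -> secrets.
# Rotating the master password only re-encrypts the small data key, not the data.
recovered = AESGCM(master_key).decrypt(dk_nonce, encrypted_data_key, None)
assert AESGCM(recovered).decrypt(s_nonce, encrypted_secret, None) == b"bucket password"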

With Secret Management in 4.6, you can rotate your secrets at different levels of the key hierarchy periodically or in the event of a breach.

The first level of rotation, rotating or resetting the master password, can be done using the REST API or the CLI. Couchbase allows the flexibility of setting one master password per node. If the master password is lost and the server is stopped, the node is lost; data from the node can then only be recovered using other tools shipped with the server.

The second level of rotation can be done by changing the data key using the REST API or the CLI.

All rotation and setting of the master password is audited.

Here is an example of setting up the master password using the CLI on Ubuntu 14:

  • Install and configure Couchbase Server.
  • Set up the master password using the CLI: execute the command below and enter the password at the prompt.

/opt/couchbase/bin/couchbase-cli master-password -c 192.168.0.1:8091 -u Administrator -p password --new-password

  • Stop the server – /etc/init.d/couchbase-server stop
  • Configure an environment variable

export CB_MASTER_PASSWORD=<password>

  • Start the server – /etc/init.d/couchbase-server start

Note: if you are using sudo to start the server, pass the -E option to sudo so that the environment variable is preserved.

  • Rotate the data key using the CLI by executing the command below:

/opt/couchbase/bin/couchbase-cli master-password -c 192.168.0.1:8091 -u Administrator -p password --rotate-data-key

  • To change the master password, execute the command below and enter the new password at the prompt:

/opt/couchbase/bin/couchbase-cli master-password -c 192.168.0.1:8091 -u Administrator -p password --new-password

 

Logging Details:

 

Babysitter log on password change:

[ns_server:info,2017-01-20T13:12:30.079Z,babysitter_of_ns_1@127.0.0.1:encryption_service<0.65.0>:encryption_service:call_gosecrets_and_store_data_key:227]Master password change succeded

Babysitter log on incorrect master password during server start (or when the environment variable is set incorrectly):

[ns_server:error,2017-01-20T13:13:07.066Z,babysitter_of_ns_1@127.0.0.1:encryption_service<0.65.0>:encryption_service:init:174]Incorrect master password. Error: {error,"cipher: message authentication failed"}

Babysitter log when the master password is set correctly for Couchbase Server:

=========================PROGRESS REPORT=========================
supervisor: {local,ns_babysitter_sup}
started: [{pid,<0.65.0>},
          {name,encryption_service},
          {mfargs,{encryption_service,start_link,[]}},
          {restart_type,permanent},
          {shutdown,1000},
          {child_type,worker}]

[ns_server:debug,2017-01-22T12:08:46.432Z,babysitter_of_ns_1@127.0.0.1:<0.70.0>:supervisor_cushion:init:39]starting ns_port_server with delay of 5000

The post Managing Secrets in Couchbase 4.6 appeared first on The Couchbase Blog.

Categories: Architecture, Database

JDBC 42.0.0 Released

PostgreSQL News - Mon, 02/20/2017 - 01:00

The JDBC group is proud to release a new version, and in keeping with the version renumbering meme we have released version 42.0.0.

Notable changes include:

  • Support for PostgreSQL versions below 8.2 was dropped
  • java.util.logging is now used for logging: logging documentation
  • Ensure executeBatch() can be used with pgbouncer. Previously pgjdbc could use server-prepared statements for batch execution even with prepareThreshold=0 (see issue 742)
  • Replication protocol API was added: replication API documentation, PR#550
  • Version bumped to 42.0.0 to avoid version clash with PostgreSQL version
  • Error position is displayed when SQL has unterminated literals, comments, etc (see issue 688)
Categories: Database, Open Source

Deploy Docker Container to Oracle Container Cloud Service

NorthScale Blog - Sat, 02/18/2017 - 16:07

Getting Started with Oracle Container Cloud Service explained how to get started with Oracle's managed container service. Well, the intent was to show how to get started, but getting to “getting started” was itself quite involved. This blog will now really show how to run a simple Docker container on Oracle Container Cloud Service.

Oracle Container Service is built upon Oracle's StackEngine acquisition, which was completed 1.5 years ago. The basis for this blog is a 4-node cluster (1 manager and 3 workers) created following the steps in Getting Started with Oracle Container Cloud Service.

Make sure you note down the user name and password of the service specified during the creation. It is golden, and there is no way to either retrieve it or reset it afterwards. UPDATE: @kapmani clarified that the password can be reset by logging into the manager node.

Anyway, the dashboard looks like:

Oracle Cloud Dashboard

Similarly, Container Cloud Console with 4 nodes looks like:

Container Cloud Service is accessible using REST API as explained in About Oracle Container Cloud Service REST API. The console itself uses the REST API for fulfilling all the commands.

Oracle Container Cloud Service Concepts

Let's learn about some concepts first:

  • Service – A service comprises the necessary configuration for running a Docker image as a container on a host, plus default deployment directives. A service is neither a container nor an image running in a container. It is a high-level configuration object that you can create, deploy, and manage using Oracle Container Cloud Service. Think of a service as a container 'template', or as a set of instructions to follow to deploy a running container.
  • Stack – A stack is all the necessary configuration for running a set of services as Docker containers in a coordinated way, managed as a single entity, plus default deployment directives. Think of it as a multi-container application. Stacks themselves are neither containers nor images running in containers, but rather high-level configuration objects that you can create, deploy, and manage using Oracle Container Cloud Service. For example, a stack might be one or more WildFly containers and a Couchbase container. Likewise, a cluster of database or application nodes can be built as a stack.
  • Deployment – A deployment comprises a service or stack in which Docker containers are managed, deployed, and scaled according to a set of orchestration rules that you've defined. A single deployment can result in the creation of one or many Docker containers, across one or many hosts in a resource pool.
  • Resource Pool – Resource pools are a way to organize hosts and combine them into isolated groups of compute resources. Resource pools enable you to manage your Docker environment more effectively by deploying services and stacks efficiently across multiple hosts. Three resource pools are defined out of the box:
    Oracle Cloud Default Resource Pool

The rest of the terms, like Containers, Images, and Hosts, are pretty straightforward.

Run Couchbase in Oracle Container Cloud Service
  • Click on Services, New Service
  • Oracle Container Service only supports Compose v2. So a simple Compose file definition can be used for the service definition:
    version: "2"
    services:
      db:
        image: arungupta/couchbase
        ports:
          - 8091:8091
          - 8092:8092
          - 8093:8093
          - 11210:11210

    The image arungupta/couchbase is built from github.com/arun-gupta/docker-images/tree/master/couchbase. It uses the Couchbase REST API to pre-configure the Couchbase server (a rough sketch of those calls follows this list). The common Couchbase networking ports for application development are also exposed.

    In the YAML tab, use the Compose definition from above:
    Docker Container Oracle Cloud
    Alternatively, you can use the builder or docker run command as well. In our case, use the Compose definition and then specify the Service Description.

  • Click on Save to save the service definition. The updated list now includes the Couchbase service:
    Oracle Cloud Couchbase Service
  • Click on Deploy to deploy the container:
    Oracle Cloud Deploy Couchbase
  • Take the defaults and click on Deploy to start the deployment.
  • The Docker image is downloaded and the container is started. The screen is refreshed to show Deployments:
    Oracle Cloud Deployments Couchbase
    A single instance of the container is now up and running. Other details like resource pool, hostname, and uptime are also displayed.
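
The pre-configuration the image performs is roughly the following kind of sequence against the Couchbase REST API (a sketch using Python's requests library; the exact script the image runs lives in the GitHub repo mentioned above, and the quotas and bucket settings here are illustrative assumptions):

import requests

BASE = "http://127.0.0.1:8091"

# Size the data and index services
requests.post(BASE + "/pools/default", data={"memoryQuota": 512, "indexMemoryQuota": 256})

# Enable the data, query, and index services on this node
requests.post(BASE + "/node/controller/setupServices", data={"services": "kv,n1ql,index"})

# Set the administrator credentials (used for all later calls)
requests.post(BASE + "/settings/web",
              data={"port": "SAME", "username": "Administrator", "password": "password"})

# Create a bucket for applications to use
requests.post(BASE + "/pools/default/buckets", auth=("Administrator", "password"),
              data={"name": "default", "bucketType": "couchbase", "ramQuotaMB": 256,
                    "authType": "sasl", "saslPassword": ""})
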
Details about Couchbase Container in Oracle Cloud

Let's get some details about the Couchbase container in Oracle Cloud:

  • Click on the container name shown in the Container Name column to see more details about the container:
    Oracle Cloud Containers Couchbase Details
    This is the typical output that you would get from the docker inspect command.
  • Click on View Logs to see the container logs:
    Oracle Cloud Containers Couchbase Logs
    This is equivalent to the docker container logs command.
    These logs are generated from when Couchbase REST API is configuring the server.
  • Click on Hosts to see the complete list of hosts:
    Oracle Cloud Container Couchbase Hosts
  • A single instance of the container is running. Select the host that is running the container to see more details:
    Oracle Cloud Containers Couchbase Host Details
    Note the public_ip of the host. This IP address will be used to access the Couchbase Web Console later. Another key part to note here is that this host is running Docker 1.10.3. That is the case with the other hosts as well, as expected.
Access Couchbase

Now, let's access the Couchbase Web Console. In our case, this is available at 129.152.159.64:8091. This shows the main login screen:

Oracle Cloud Couchbase Web Console

Use Administrator as the username and password as the password, then click on Sign In to see the main screen of the console:

Oracle Cloud Couchbase Web Console Main Screen

Click on Server Nodes to see that data, index and query services are running:

Oracle Cloud Couchbase Web Console Server Nodes

Pretty cool, eh!

A future blog post will show how to create a Couchbase cluster, run a simple application against this cluster and other fun stuff.

Use any of the Couchbase Starter Kits to get started with Couchbase.

Want to learn more about running Couchbase in containers?

The post Deploy Docker Container to Oracle Container Cloud Service appeared first on The Couchbase Blog.

Categories: Architecture, Database

SQL Server next version CTP 1.3 now available

Microsoft is excited to announce a new preview for the next version of SQL Server (SQL Server v.Next). Community Technology Preview (CTP) 1.3 is available on both Windows and Linux. In this preview, we added several feature enhancements to High Availability and Disaster Recovery (HADR), including the ability to run Always On Availability Groups on Linux. You can try the preview in your choice of development and test environments now: www.sqlserveronlinux.com.

Key CTP 1.3 enhancement: Always On Availability Groups on Linux

In SQL Server v.Next, we continue to add new enhancements for greater availability and higher uptime. A key design principle has been to provide customers with the same HA and DR solutions on all platforms supported by SQL Server. On Windows, Always On depends on Windows Server Failover Clustering (WSFC). On Linux, you can now create Always On Availability Groups, which integrate with Linux-based cluster resource managers to enable automatic monitoring, failure detection and automatic failover during unplanned outages. We started with the popular clustering technology, Pacemaker.

In addition, Availability Groups can now work across Windows and Linux as part of the same Distributed Availability Group. This configuration can accomplish cross-platform migrations without downtime. To learn more, you can read our blog titled “SQL Server on Linux: Mission Critical HADR with Always On Availability Groups”.

Other Enhancements

SQL Server v.Next CTP 1.3 also includes these additional feature enhancements:

  • Full text search is now available for all supported Linux distributions.
  • Resumable online index rebuilds enables users to recover more easily from interruption of index builds, or split an index build across maintenance windows.
  • Temporal Tables Retention Policy support enables customers to more easily manage the amount of historical data retained by temporal tables.
  • Indirect checkpoint performance improvements. Indirect checkpoint is the recommended configuration for large databases and for SQL Server 2016, and now it will be even more performant in SQL Server v.Next.
  • Minimum Replica Commit Availability Groups setting enables users to set the minimum number of replicas that are required to commit a transaction before committing on the primary.
  • For SQL Server v.Next technical preview running on Windows Server, encoding hints in SQL Server Analysis Services is an advanced feature to help optimize refresh times with no impact on query performance.

For additional detail on CTP 1.3, please visit What’s New in SQL Server v.Next, Release Notes and Linux documentation.

Get SQL Server v.Next CTP 1.3 today!

Try the preview of the next release of SQL Server today! Get started with the preview of SQL Server with our developer tutorials that show you how to install and use SQL Server v.Next on macOS, Docker, Windows and Linux and quickly build an app in a programming language of your choice.

Have questions? Join the discussion of SQL Server v.Next at MSDN. If you run into an issue or would like to make a suggestion, you can let us know through Connect. We look forward to hearing from you!

Categories: Database

SQL Server on Linux: Mission-critical HADR with Always On Availability Groups

This post was authored by Mihaela Blendea, Senior Program Manager, SQL Server

In keeping with our goal to enable the same High Availability and Disaster Recovery solutions on all platforms supported by SQL Server, today Microsoft is excited to announce the preview of Always On Availability Groups for Linux in SQL Server v.Next Community Technology Preview (CTP) 1.3. This technology adds to the HADR options available for SQL Server on Linux, having previously enabled shared disk failover cluster instance capabilities.

First released with SQL Server 2012 and enhanced in the 2014 and 2016 releases, Always On Availability Groups is SQL Server’s flagship solution for HADR. It provides High Availability for groups of databases on top of direct attached storage, supporting multiple active secondary replicas for integrated HA/DR, automatic failure detection, fast transparent failover, and read load balancing. This broad set of capabilities is enabling customers to meet the strictest availability SLA requirements for their mission-critical workloads.

Here is an overview of the scenarios that Always On Availability Groups are enabling for SQL Server v.Next:

Run mission-critical application using SQL Server running on Linux

Always On Availability Groups make it easy for your applications to meet rigorous business continuity requirements. This feature is now available on all Linux OS distributions SQL Server v.Next supports: Red Hat Enterprise Linux, Ubuntu and SUSE Linux Enterprise Server. Also, all capabilities that make Availability Groups a flexible, integrated and efficient HADR solution are available on Linux as well:

  • Multidatabase failover – an availability group supports a failover environment for a set of user databases, known as availability databases.
  • Fast failure detection and failover – as a resource in a highly available cluster, an availability group benefits from built-in cluster intelligence for immediate failover detection and failover action.
  • Transparent failover using availability group listener – enables clients to use a single connection string to primary or secondary databases that does not change in case of failover.
  • Multiple sync/async secondary replicas – an availability group supports up to eight secondary replicas. The availability mode determines whether the primary replica waits (synchronous replica) or not (asynchronous replica) to commit transactions on a database until a given secondary replica has written the transaction log records to disk.
  • Manual/automatic failover with no data loss – failover to a synchronized secondary replica can be triggered automatically by the cluster or on demand by the database administrator.
  • Active secondary replicas available for read/backup workloads – one or more secondary replicas can be configured to support read-only access to secondary databases and/or to permit backups on secondary databases.
  • Automatic seeding – SQL Server automatically creates the secondary replicas for every database in the availability group.
  • Read-only routing – SQL Server routes incoming connections to an availability group listener to a secondary replica that is configured to allow read-only workloads.
  • Database level health monitoring and failover trigger – enhanced database-level monitoring and diagnostics.
  • Disaster Recovery configurations – with Distributed Availability Groups or multisubnet availability group setup.

Here is an illustration of a HADR configuration that an enterprise building a mission-critical application using SQL Server running on Linux can use to achieve: application-level protection (two synchronized secondary replicas), compliance with business continuity regulations (DR replica on remote site) as well as enhance performance (offload reporting and backup workloads to active secondary replicas):


Fig. 1 Always On Availability Groups as an Integrated and Flexible HADR Solution on Linux

On Windows, Always On depends on Windows Server Failover Cluster (WSFC) for distributed metadata storage, failure detection and failover orchestration. On Linux, we are enabling Availability Groups to integrate natively with your choice of clustering technology. For example, in preview today SQL Server v.Next integrates with Pacemaker, a popular Linux clustering technology. Users can add a previously configured SQL Server Availability Group as a resource to a Pacemaker cluster and all the orchestration regarding monitoring, failure detection and failover is taken care of. To achieve this, customers will use the SQL Server Resource Agent for Pacemaker available with the mssql-server-ha package, that is installed alongside mssql-server.

Workload load balancing for increased scale and performance

Previously, users had to set up a cluster to load balance read workloads for their application using readable secondary replicas. Configuring and operating a cluster implied a lot of manageability overhead, if HA was not the goal.

Users can now create a group of replicated databases and leverage the fastest replication technology for SQL Server to offload secondary read-only workloads from the primary replica. If the goal is to conserve resources for mission-critical workloads running on the primary, users can now use read-only routing or directly connect to readable secondary replicas, without depending on integration with any clustering technology. These new capabilities are available for SQL Server running on both Windows and Linux platforms.


Fig. 2 Group of Read-Only Replicated Databases to Load Balance Read-Only Workloads

Note this is not a high-availability setup, as there is no “fabric” to monitor and coordinate failure detection and automatic failover. For users who need HADR capabilities, we recommend they use a cluster manager (WSFC on Windows or Pacemaker on Linux).

Seamless cross-platform migration

By setting up a cross-platform Distributed Availability Group, users can do a live migration of their SQL Server workloads from Windows to Linux or vice versa. We do not recommend running in this configuration in a steady state as there is no cluster manager for cross-platform orchestration, but it is the fastest solution for a cross-platform migration with minimum downtime.


Fig. 3 Cross-Platform Live Migration Using Distributed Availability Groups

Please visit our reference documentation on business continuity for SQL Server on Linux for more specifics on how integration with Pacemaker clustering is achieved in all supported OS flavors and end-to-end functional samples.

Today’s announcement marks the first preview of new Always On Availability Groups capabilities: Linux platform support for HADR as well as new scenarios like creating a cluster-independent group of replicated databases for offloading read-only traffic. Availability Groups are available on all platforms and OS versions that SQL Server v.Next is running on. In upcoming releases, we are going to enhance these capabilities by providing high-availability solutions for containerized environments as well as tooling support for an integrated experience. Stay tuned!

Get started

You can get started with many of these capabilities today:

Learn more
Categories: Database

SDK Features ‚Äď New For Couchbase 4.6

NorthScale Blog - Fri, 02/17/2017 - 08:02

Along with this week’s Couchbase Server 4.6 release, we have a super packed release with several new SDK features to help you streamline development. From efficiently managed Data Structures to the latest support for .NET Core, it is time to update to the latest libraries! We have also released significant updates to our Big Data connectors for Spark and Kafka.

Data Structures

By bringing Native Collection bindings to the Couchbase SDK, it is now even easier to map your document data into structures your language understands. All the languages support it through simple functions, and .NET and Java have extra special support using their Collections Frameworks. Structures include List, Map, Set, and Queue – each with specific functions for add/remove, push/pop, and more.

They are built to be as efficient as possible as well. Behind the scenes, the SDK uses our network-friendly sub-document operations, keeping traffic to a minimum while making atomic updates to documents on the server – all while you simply update the collections in your code.

No extra upserts or pulling down the whole document just to modify part of an array.  This is a great way to reduce the amount of document handling you need to do in your application.

.NET Core Integration

Microsoft's push to cross-platform development via .NET Core is extremely important for our community, so we wanted to make sure you could get .NET Core support for Couchbase as soon as possible. All .NET applications will benefit from moving to this latest platform – especially those wanting cross-operating-system support straight out of the box.

For example, write apps on Windows, deploy on OS X and Linux without having to change your code.  

As usual we push all our .NET libraries to NuGet to make it as simple as possible to integrate Couchbase into your application.

There are way more improvements in the latest .NET SDK release – read about them in the release notes.

Kafka 3.x Updates

Couchbase integration with Kafka has taken a major leap forward.  The 3.x updates bring support for both Sink and Source connector options, allowing you to read from and write to Couchbase using Kafka.  You can also easily process Couchbase events using Kafka Streams technology.

To help simplify development and deployment there is now Kafka Connect support – plug and play without having to write custom connectors between your Buckets and Topics.  This is especially easy via integration with Confluent Control Center – providing many powerful features, including real time monitoring, through a web UI.

Other features worth checking out include Dynamic Topology for rebalance and failover and much more.

Spark 2.x Updates

As with Kafka, our Spark connector has had many significant improvements recently. The latest improvements include support for Spark 2.0 and related features. We have even implemented some of the latest leading-edge improvements, including Structured Streaming (both source and sink!). Dynamic Topology is now supported to help with failover and rebalance needs in an easy manner.

Other Language Updates

There are many other updates across the Couchbase SDK this month – check out the latest changes in each of them below.  Now is the time to upgrade!

Release notes: .NET – Java – Node.js – Go – PHP – Python – C

You can keep informed of these releases by following the projects on GitHub, but a better way is to sign up for our Community Newsletter to keep informed of new releases, blogs, and community training events that show off the latest new features.

The post SDK Features – New For Couchbase 4.6 appeared first on The Couchbase Blog.

Categories: Architecture, Database

Data Structures: Native Collections New in Couchbase 4.6

NorthScale Blog - Fri, 02/17/2017 - 06:40

Data Structures in Couchbase 4.6 are our newest time-saving SDK feature. They allow your client applications to easily map array-based JSON data into language-specific structures.

Leveraging native collections support in Couchbase will save you time and hassle:

  • Easily map JSON arrays into language specific structures
  • Couchbase Server manages the document efficiently – automatically using sub-document calls
  • You choose the data structure type you need and start coding

Support for Data Structures is available in all our languages: Java, .NET, Node.js, Go, PHP, Python, and C. This includes powerful Java and .NET implementations via their Collections Frameworks, and all other languages have a wide range of functional options.

This post shows how to get started using Data Structures, with specific examples in Java (using the Map type) and Python (using List and Queue types).  Video and reference links follow below.

Couchbase Data Structure Types

Four specific kinds of structures have been added to the Couchbase client libraries: Map, List, Set, and Queue. They are all stored as JSON in the database but presented as native types to your client application.

  • List – an array that stores values in order
  • Map – also known as a dictionary – stores values by key
  • Set – a variant of list that only retains unique combination of values
  • Queue – a variant of a list that offers push and pop operations to add/remove items from the queue in a first-in-first-out (FIFO) manner
Java Collections Examples – Map & List

The Java and .NET APIs have the tightest native Collections interfaces.  This short example edits a user profile document as a Map and adds or updates the email contact information.

As the Map gets updated, so does the Document in the background – no manual saving or upserting is required!

See many more beautiful Couchbase .NET Data Structures examples in Matthew Grove’s blog post.

Map<String, String> userInfo = new CouchbaseMap<String>("user:mnunberg", bucket); 
userInfo.put("email", "mark.nunberg@couchbase.com");

Similarly the List is accessible through the CouchbaseArrayList and easily appended to.

List<String> ll = new CouchbaseArrayList<String>("user:mnunberg_list", bucket); 
ll.add("msg1"); 
ll.add("msg2");

Python Data Structures Examples – Queue

Here is a simple message Queue in Python, including a dictionary of timestamp, sender and some content.  Populate the queue using push to put new messages into it and then use pop to retrieve the first or next entry in the queue, while also removing it from the queue.

All this is done automatically behind the scenes when you use these functions.  No additional calls to the server are required to save the changed Queue.

>>> cb.queue_push("messages::tyler", {'timestamp': 1485389293, 'from':'user::mark', 'content':'Dear Tyler'}, create=True) 
>>> cb.queue_push("messages::tyler", {'timestamp': 1486390293, 'from':'user::jody', 'content':'Dear John...'}) 
>>> cb.queue_pop("messages::tyler").value 

{u'content': u'Dear Tyler', u'timestamp': 1485389293, u'from': u'user::mark'}

Python Data Structures Examples – List

The following shows a simplified Python example using the List type.  In each case a new document is also created at the same time that it is populated with values.  See the Couchbase Python documentation for examples of the other types.

In an IoT use case you may have sensors recording specific timestamped activities and related data values.  Here, a sensor has its own document and a vehicle ID and timestamp are recorded when detected by the sensor.

>>> cb.list_append("garage1", ['vehicle::1A', '2017-01-24 08:02:00'], create=True) 
>>> cb.list_append("garage1", ['vehicle::2A', '2017-01-24 10:21:00']) 
>>> cb.list_append("garage1", ['vehicle::1A', '2017-01-25 17:16:00'])

The resulting document is an array with each entry holding two values in an array.

[ [ "vehicle::1A", "2017-01-24 08:02:00" ],
  [ "vehicle::2A", "2017-01-24 10:21:00" ],
  [ "vehicle::1A", "2017-01-25 17:16:00" ] ]

Retrieving the values into a Python list is done easily. Just grab the document and it's instantly available to iterate over.

>>> garage1 = cb.get('garage1') 
>>> for rec in garage1.value: print rec 

[u'vehicle::1A', u'2017-01-24 08:02:00'] 
[u'vehicle::2A', u'2017-01-24 10:21:00'] 
[u'vehicle::1A', u'2017-01-25 17:16:00']
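
The Set variant follows the same pattern. A quick sketch (assuming the Python SDK's set_add and set_size helpers, and that adding a duplicate value simply leaves the set unchanged) might track the distinct vehicles seen by the sensor:

>>> cb.set_add("garage1_vehicles", "vehicle::1A", create=True)
>>> cb.set_add("garage1_vehicles", "vehicle::2A")
>>> cb.set_add("garage1_vehicles", "vehicle::1A")  # duplicate value, the set is unchanged
>>> cb.set_size("garage1_vehicles")
2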

Next Step

As you can see, the syntax is easy and predictable. Offloading management of these structures to Couchbase Server simplifies a lot of the communication required to manage dynamic documents. In no time you can be using Couchbase 4.6 as a data structure server for your applications.

 

The post Data Structures: Native Collections New in Couchbase 4.6 appeared first on The Couchbase Blog.

Categories: Architecture, Database

Introducing Couchbase .NET 2.4.0 ‚Äď .NET Core GA

NorthScale Blog - Fri, 02/17/2017 - 01:38

This release is the official GA release for .NET Core support in the Couchbase .NET SDK! .NET Core is the latest incarnation of the .NET framework, and it's described as “.NET Core is a blazing fast, lightweight and modular platform for creating web applications and services that run on Windows, Linux and Mac”.

Wait a minute…read that again: “.NET Core is a blazing fast, lightweight and modular platform for creating web applications and services that run on Windows, Linux and Mac.” Microsoft .NET applications running on OS X and Linux? What kind of bizarro world are we living in? It's the “New” Microsoft for sure!

In this blog post, I'll go over what is in the 2.4.0 release, changes to packaging (NuGet), and what version of .NET the SDK supports. We'll also demonstrate some of the new features such as Datastructures.

What’s in this release?

2.4.0 is a large release with over 30 commits. When you consider that we released 3 Developer Previews leading up to 2.4.0, there are actually many, many more commits leading up to this release over the last 6 months. Here is an overview of some of the more impressive features – you can see all of the commits in the “Release Notes” section below:

.NET Core Support

Of course the most significant feature of 2.4.0 is .NET Core support, which, as the opening paragraph notes, means you can now develop on Mac OS or Windows and deploy to Linux (or vice-versa, but the tooling is a bit immature still). This is great stuff and a major change for the traditional Windows developer.

If you’re unaware of .NET Core, you can read up more about it over on the .NET Core website. One cool thing about it is that it’s open source (Apache 2.0) and source is all available on Github.

The Couchbase SDK specifically supports netstandard1.5 or greater. We tested the SDK using 1.0.0-preview2-1-003177 of the Command Line Tools.

Packaging changes

Just like the three developer previews, the NuGet package contains binaries both for the .NET Full Framework (targeting .NET 4.5 or greater) and for .NET Core (targeting .NET Core 1.1). Depending on the target project you are including the dependency for, the correct binaries will be used.

So, if your Visual Studio project is a .NET Full Framework application greater than or equal to 4.5, you’ll get the binaries for the full framework version of .NET. Likewise, if your application is a .NET Core application, then the .NET Core version of the binaries will be used. There should be nothing you have to do to enable this.

The older .NET 4.5 version of the packages will no longer be released; 2.3.11 is the last supported release of the 2.3.X series.

MS Logging for Core

For .NET Core we decided to change from using Common.Logging to MS Logging, mainly because no 3rd-party logging frameworks (log4net, for example) have stable support for .NET Core at this time.

Additionally, by moving from Common.Logging to MS Logging we have removed one more 3rd-party dependency – which is always nice. Not that Common.Logging wasn't sufficient, but it makes more sense to use a dependency from Microsoft.

Here is an example of configuring the 2.4.0 client targeting .NET Core and using NLog:

First add the dependencies to the project.json:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true,
    "copyToOutput": {
      "include": [ "config.json", "nlog.config" ]
    }
  },

  "dependencies": {
    "CouchbaseNetClient": "2.4.0-dp6",
    "NLog.Extensions.Logging": "1.0.0-rtm-beta1",
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.1"
    },
    "Microsoft.Extensions.Logging.Debug": "1.1.0",
    "Microsoft.Extensions.Logging": "1.1.0"
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Then, add a nlog.config file to your project with the following contents:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Debug"
      internalLogFile="c:\temp\internal-nlog.txt">

  <!-- define various log targets -->
  <targets>
    <!-- write logs to file -->
    <target xsi:type="File" name="allfile" fileName="c:\temp\nlog-all-${shortdate}.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />
  </rules>
</nlog>

Finally, add the code to configure the Couchbase SDK for logging:

using Couchbase;
using Couchbase.Logging;
using Microsoft.Extensions.Logging;
using NLog.Extensions.Logging;

namespace ConsoleApp2
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var factory = new LoggerFactory();
            factory.AddDebug();
            factory.AddNLog();
            factory.ConfigureNLog("nlog.config");

            //configure logging on the couchbase client
            var config = new ClientConfiguration
            {
                LoggerFactory = factory
            };

            var cluster = new Cluster(config);
            //use the couchbase client
        }
    }
}

Note that the project.json has a copyToOutput.include value for nlog.config. This is required so the tooling will copy that file to the output directory when built.

For the .NET 4.5 Full Framework binaries, the dependency on Common.Logging remains, and any existing logging configuration should work as it always has.

Datastructures

Datastructures are a new way of working with Couchbase documents as if they were common Computer Science data structures such as lists, queues, dictionaries, or sets. There are two implementations in the SDK: one as a series of methods on CouchbaseBucket which provide functionality for common data structure operations, and another as implementations of the interfaces within System.Collections.Generic. Here is a description of each Datastructure class found in the SDK:

  • CouchbaseDictionary<TKey, TValue>: Represents a collection of keys and values stored within a Couchbase Document.
  • CouchbaseList<T>: Represents a collection of objects, stored in Couchbase Server, that can be individually accessed by index.
  • CouchbaseQueue<T>: Provides a persistent Couchbase data structure with FIFO behavior.
  • CouchbaseSet<T>: Provides a Couchbase persisted set, which is a collection of objects with no duplicates.

All of these classes are found in the Couchbase.Collections namespace. Here is an example of using a CouchbaseQueue<T>:

var queue = new CouchbaseQueue<Poco>(_bucket, "somekey");
queue.Enqueue(new Poco { Name = "pcoco1" });
queue.Enqueue(new Poco { Name = "pcoco2" });
queue.Enqueue(new Poco { Name = "pcoco3" });

var item = queue.Dequeue();
Assert.AreEqual("pcoco1", item.Name);

Multiplexing IO

The Couchbase SDK has used connection pooling in the past to allow high throughput and scale, at the cost of latency and resource utilization. In SDK 2.2.4 we introduced a better IO model called Multiplexing IO, or MUX-IO, which the client could be configured to use (the default was pooled connections).

In 2.4.0 we are making MUX-IO the default IO model and making connection pooling optional. What this means to you is that some connection pooling properties in your configuration may still be used by the SDK. For example:

  • PoolConfiguration.MaxSize is still used but should be a relatively small value – e.g. 5-10
  • PoolConfiguration.MinSize should be 0 or 1

To disable MUX-IO, it's simply a matter of setting ClientConfiguration.UseConnectionPooling to true (the default is false) to use connection pooling:

var clientConfig = new ClientConfiguration{
    UseConnectionPooling = true
 };
var cluster = new Cluster(clientConfig);
 
//open buckets and use the client

Streaming N1QL and Views

Streaming N1QL and Views are a performance optimization in certain cases where the amount of data retrieved is large. To understand why, let’s consider how non-streaming queries work:

  1. A request is dispatched to the server.
  2. The server does its processing and returns the results as a stream after processing the entire response.
  3. The client buffers the entire stream and then de-serializes the stream into a collection of type “T”, where T is the POCO that each result is mapped to.
  4. The client returns the list to the application within its IResult.

What can go wrong here? Think about very large results and the fact that memory resources are finite: eventually you will encounter an OutOfMemoryException! There are other side effects as well, related to Garbage Collection.

With streaming clients the process is as follows:

  1. A request is dispatched to the server.
  2. The server does its processing and returns the results as a stream as soon as the response headers are available.
  3. The client partially reads the headers and meta-data and then pauses until iteration occurs.
  4. When the application starts iterating over the IResult, each item is read one at a time without storing it in an underlying collection.

The big benefit here is that the working set of memory will not grow as the collection grows and is internally re-sized by .NET. Instead, you have a fixed working set of memory, and GC can occur as soon as a read object is discarded.

To use streaming N1QL and views, all you do is call the UseStreaming() method and pass in true to stream:

var request = new QueryRequest("SELECT * FROM `travel-sample` LIMIT 100;").UseStreaming(true);
using (var result = _bucket.Query<dynamic>(request))
{
    Console.WriteLine(result);
}

Passing in false will mean that the entire response is buffered and processed before returning.

N1QL Query Cancellation

This feature allows long running N1QL queries to be canceled before they complete using task cancellation tokens. For example:

var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMilliseconds(5));

var result = await _bucket.QueryAsync<dynamic>(queryRequest, cancellationTokenSource.Token);
//do something with the result

This commit was via a community contribution from Brant Burnett of CenteredgeSoftware.com!

Important TLS/SSL Note on Linux

There is one issue on Linux that you may come across if you are using SSL: a PlatformNotSupportedException will be thrown if you have a version of libcurl installed on the server < 7.30.0. The work-around is to simply upgrade your libcurl installation on Linux to something equal to or greater than 7.30.0. You can read more about this on the Jira ticket: NCBC-1296.

 

The post Introducing Couchbase .NET 2.4.0 – .NET Core GA appeared first on The Couchbase Blog.

Categories: Architecture, Database

Couchbase Server 4.6 and macOS Sierra

NorthScale Blog - Thu, 02/16/2017 - 18:59

I am pleased to announce that the latest version of Couchbase Server (4.6) is now compatible with macOS Sierra! From the Couchbase downloads page, choose either Couchbase Server 4.6 Enterprise Edition or Couchbase Server 4.6 Community Edition depending on your needs.

macOS Sierra

If you’re unfamiliar with some of the things that Couchbase can do, check out an article I wrote recently on some of the Couchbase Server basics called, Couchbase and the Document-Oriented NoSQL Database. For help using Couchbase, check out the Developer Portal.

The post Couchbase Server 4.6 and macOS Sierra appeared first on The Couchbase Blog.

Categories: Architecture, Database

Couchbase Server 4.6 Supports Windows 10 Anniversary Update

NorthScale Blog - Thu, 02/16/2017 - 15:27

Back in August 2016, when the Windows 10 Anniversary Update was rolling out, I blogged that Couchbase Server was not working correctly on it. That is no longer true!

Short version: Couchbase Server 4.6 now supports Windows 10 Anniversary Update. Go download and try it out today.

The longer story is that this issue was addressed in the 4.5.1 release. The fix was somewhat experimental, and the anniversary update was still in the process of being rolled out. So there were two releases of Couchbase Server 4.5.1 for Windows:

  • Normal windows release (works with Windows 10, Windows Server, etc but not Anniversary Update)
  • Windows 10 Anniversary Edition Developer Preview (DP) release

Furthermore, Couchbase Server 4.6 has had a Developer Preview release of its own for a while, and that release also works with the anniversary update.

But now everything is official.

  • Couchbase Server 4.6 has been released
  • Couchbase Server 4.6 officially supports Windows 10 Anniversary Update

Go download Couchbase Server 4.6 now.

Got questions? Got comments? Check out our documentation on the Couchbase Developer Portal, post a question on the Couchbase Forums, leave a comment here, or ping me on Twitter.

The post Couchbase Server 4.6 Supports Windows 10 Anniversary Update appeared first on The Couchbase Blog.

Categories: Architecture, Database

Announcing Couchbase Server 4.6 ‚Äď What‚Äôs New and Improved

NorthScale Blog - Thu, 02/16/2017 - 15:00

Couchbase delivers the Couchbase Data Platform that powers Web, Mobile, and IoT applications for digital businesses. With our newest release, Couchbase Server 4.6 provides the availability, scalability, performance, and security that enterprises require for their mission-critical applications.

What’s New and Improved Query

The new string, date, array, and JSON object functions that have been added to N1QL simplify data transformations and provide richer query expressions. Faster queries in N1QL are the result of several query engine performance enhancements across many types of operations, including joins and index scans.

Check out documentation for String functions,  Date functions, Array functions and Object functions.

Replication

Cross datacenter replication (XDCR) with timestamp-based conflict resolution makes it easier for applications to implement a Last Write Wins (LWW) document conflict management policy across multiple Couchbase clusters. The per-document timestamp combines the server logical and physical clocks together, forming a hybrid logical clock and timestamp, which enables easy identification of consistent document snapshots across distributed Couchbase clusters.

Check out documentation for Timestamp-based conflict resolution.

Security

Adding support for Pluggable Authentication Modules (PAM) simplifies centralized password and policy management across servers. It also enables use of existing password management services for a Couchbase cluster (for example, Linux /etc/shadow). The new server secret management feature provides improved enterprise security compliance and a more security-hardened Couchbase Server.

Check out documentation for Pluggable Authentication Modules and Secret management.

Tools [Developer Preview]

It is now easier than ever to move data in and out of Couchbase Server using the new flexible import and export tools. cbimport imports data from a CSV file or a JSON document. cbexport exports data as a JSON document.

Check out documentation for Cbimport and Cbexport.

Data Access

Adding direct support for lists, maps, sets, and queues in the sub-document API using the new data structure SDK feature further simplifies application development. The new data structures work seamlessly with the same underlying data representation, allowing developers in N1QL, Java, .NET, and other languages to access the same data across different programming languages and interfaces. Adding .NET Core support enables Microsoft application developers to easily develop and integrate their applications with Couchbase Server.

Check out documentation for Data structures and the .NET Core blog.

Search [Developer Preview 2]

Search adds support for MossStore, the new default kv store mechanism for full text indexes in FTS. MossStore is part of Moss (“Memory-oriented sorted segments”), a simple, fast, persistable, ordered key value collection implemented as a pure Golang library. You can now create custom index mappings that use the document key to determine the type; with this enhancement, it's easier to support the common data modeling style in which the document type is indicated by a portion of the key. This release also lets you sort search results by any field in the document, as long as that field is also indexed. In earlier releases, search results were always returned in order of descending relevance score.

Check out documentation for Index Type Mapping By Keys and Sorting Query Results.

Here are some resources to get you started –

The post Announcing Couchbase Server 4.6 – What’s New and Improved appeared first on The Couchbase Blog.

Categories: Architecture, Database

DB2 V12 Features Supporting Large Databases

Database Journal News - Thu, 02/16/2017 - 09:01

Big data applications were once limited to hybrid hardware/software platforms. Now, recent advances are allowing applications like these to be integrated with and federated into operational systems. In particular, IBM's DB2 for z/OS Version 12 delivers new features and functions that allow the DBAs to design, define and implement very large databases and business intelligence query platforms that fulfill some big data expectations.

Categories: Database