
Create a Continuous Deployment Pipeline with Node.js and Jenkins

NorthScale Blog - 10 hours 5 min ago

Previously I had written about using Jenkins for continuous deployment of Java applications, inspired by a keynote demonstration that I had developed for Couchbase Connect 2016. I understand that Java isn’t the only popular development technology that exists right now. Node.js is a very popular technology and a perfect candidate to be plugged into a continuous deployment pipeline using Jenkins.

We’re going to see how to continuously deploy a Node.js application with Jenkins based on changes made to a GitHub repository.

So let’s figure out the plan here. We’re going to be using an already existing Node.js repository that I had uploaded to GitHub a while back. When changes are made to this repository, Jenkins will build the application and deploy or run the application. Because of the nature of Node.js, the build process will consist of making sure the NPM modules are present.

The Requirements

There are a few software requirements that must be met in order to be successful with this guide.  They are as follows:

  • Node.js
  • Java (required to run Jenkins)
  • Jenkins
  • Couchbase Server

Since this is a Node.js pipeline, of course we’ll need it installed. However, since Jenkins is a Java application, we’ll also need Java installed. My sample application does use Couchbase, but that won’t be the focus of this guide. However, if you’re using the same application I am, you’ll need Couchbase Server installed.

All software listed should reside on the same host.  In a production environment you will probably want them dispersed across multiple machines.

Installing and Configuring Couchbase Server as the NoSQL Database

At this point you should have already downloaded Couchbase Server. After installing and configuring it, you’ll need to create a Bucket called restful-sample, and that Bucket should have a primary index.

For instructions on configuring Couchbase and getting this Bucket created, check out a previous tutorial I wrote on the subject.  It is actually the tutorial that went with creating this Couchbase, Express Framework, Angular, and Node.js (CEAN) stack application.

With Couchbase ready to go, we can focus on configuring Jenkins and creating our workflow.

Configuring Jenkins with the Necessary Plugins

You should have already downloaded Jenkins by now. If you haven’t, go ahead and obtain the WAR file from the Jenkins website.

To start Jenkins, execute the following command from your Command Prompt or Terminal:

java -jar jenkins.war --httpPort=8080

This will make Jenkins accessible from a web browser at http://localhost:8080. Upon first launch, you’ll be placed in a configuration wizard.

Jenkins Configuration Part 1

The first screen in this configuration wizard will ask you for the password that Jenkins generates.  Find it in the location presented on the screen.

The second screen will ask you which plugins you’d like to install.

Jenkins Configuration Part 2

For now, we’re going to install the suggested plugins. We’ll be installing extra plugins later.

The third screen will ask us to create our first administrative user. Technically, the generated password you’ve been using belongs to an administrative user, but you may want to create a new one.

Jenkins Configuration Part 3

After you create a user, Jenkins is ready to go. However, we are going to need another plugin, and which one can vary depending on how we wish to build and deploy the Node.js application.

From the main Jenkins screen, choose to Manage Jenkins to see a list of administration options.

Manage Jenkins

What we care about is managing the available plugins.  After choosing Manage Plugins we want to search for and install a plugin by the name of Post-Build Script.

Install Jenkins Post-Build Script Plugin

This plugin allows us to execute shell commands or scripts after the build stage has completed. Since in this example we’ll be building and deploying to the same host, we can run everything locally via shell commands. In a production environment you might want to use the SSH plugin to migrate the code to a remote server and run it there.

With the plugins available, let’s create our continuous deployment workflow for Node.js in Jenkins.

Creating a Jenkins Continuous Deployment Workflow for Node.js

Just to reiterate, our goal here is to create a workflow that will pull a project from GitHub, build it by installing all the dependencies, and deploy it by running it on a server, in this case our local machine.

Start by creating a new item, otherwise known as a new job or workflow.

Jenkins Node.js Freestyle Project

We’re going to be creating a Freestyle Project, but you can give it any name you want. There are three things that need to be done on the next screen.

The source of our workspace will come from GitHub.  In your own project it can come from elsewhere, but for this one we need to define our source control information.

Jenkins Node.js Source Control

The GitHub project is one that I had previously created and written about, as mentioned before. The project can be found at:

https://github.com/couchbaselabs/restful-angularjs-nodejs

Now in a production environment you’ll probably want to set up GitHub hooks to trigger the job process, but since this is all on localhost, GitHub won’t allow it. Instead we’ll be triggering the job manually.

Jenkins Node.js Build Step

After configuring the source control section we’ll need to configure the build step. For Node.js, building only consists of installing dependencies, but you could easily have unit testing and other testing in this step as well. In my previous Java example, the build step had a little more to it. In this Node.js example we have the following:

npm install

Finally we get to define what happens after the project is built.

Jenkins Node.js Post Build Step

In this example we will be deploying the application locally on our machine. That probably won’t be the case in your production scenario.

So you’ll notice in our post-build step we have the following commands:

npm stop
npm start

Before starting the application we are stopping any already running instance of it.  Once stopped we can start the new version.  However, where do these stop and start tasks come from?

"scripts": {
    "start": "forever start app.js",
    "stop": "forever stopall"
}

The above was taken from the GitHub project’s package.json file. Each task starts or stops a forever script for Node.js.

Go ahead and try to run the job by choosing Build Now from the list of options. It should obtain the project, install the dependencies, and make the project available at http://localhost:3000. Just make sure Couchbase Server is running for this project, otherwise you’ll get errors.

Conclusion

You just saw how to use Jenkins to continuously deploy your Node.js applications based on changes that have been made in GitHub. A similar version of this guide was created for Java applications, called Create a Continuous Deployment Pipeline with Jenkins and Java, which is worth reviewing if you’re a Java developer.

If you’re interested in using Jenkins to deploy your Node.js application as Docker containers, check out a previous tutorial that I wrote on the subject.

Want more information on using Node.js with Couchbase NoSQL?  Check out the Couchbase Developer Portal for documentation and examples.

The post Create a Continuous Deployment Pipeline with Node.js and Jenkins appeared first on The Couchbase Blog.

Categories: Architecture, Database

Authorization and Authentication with RBAC (Part 2)

NorthScale Blog - Mon, 04/24/2017 - 19:30

Authorization and authentication are important to Couchbase. In March, I blogged about some of the new Role Based Access Control (RBAC) that we are showing in the Couchbase Server 5.0 Developer Builds. This month, I’d like to go into a little more detail now that the April Couchbase Server 5.0 Developer Build is available (make sure to click the “Developer” tab).

Authentication and authorization

In past versions of Couchbase, buckets were secured by a password. In 5.0, bucket passwords for authorization are gone: you can no longer create a “bucket password” for authorization. Instead, you must create one (or more) users that have varying levels of authorization for that bucket. Notice that there is no “password” field anymore (not even in the “Advanced bucket settings”):

Create a new Couchbase bucket - no password for authorization

So now, you no longer have to hand out a password that gives complete access to a bucket. You can fine-tune bucket authorization, and give out multiple sets of credentials with varying levels of access. This will help you tighten up security, and reduce your exposure.

Note: The administrator user still exists, and has permission to do everything. So I can still run N1QL queries (for instance) on that bucket while logged in with an administrator account. However, this is not the account you should be using from your clients.

Creating an authorized user

To create a new user, you must be logged in as an administrator (or as a user with an Admin role). Go to the “Security” tab, where you’ll see a list of users and be able to add new ones.

Create a new user by clicking “ADD USER”. Enter the information for the user. You may want to create a user for a person (e.g. “Matt”), or you may want to create a user for a service (e.g. “MyAspNetApplication”). Make sure to enter a strong password, and then select the appropriate roles for the user you want to create.

For example, let’s create a user “Matt” that only has access to run SELECT queries on the bucket I just created. In “Roles”, I expand “Query Roles”, then “Query Select”, check the box for “mynewbucket”, and then click “Save” to finalize the user.

Create a new user with authorization to run a select query

Authorization in action

When I log out of the administrator account, and log back in as “Matt”, I can see that the authorization level I have is severely restricted. Only “Dashboard”, “Servers”, “Settings”, and “Query” are visible. If I go to “Query” I can execute SELECT 1;

Execute SELECT query logged in with only Query authorization

If I try something more complex, like SELECT COUNT(1) FROM mynewbucket, I’ll get an error message like:

[
  {
    "code": 13014,
    "msg": "User does not have credentials to access privilege cluster.bucket[mynewbucket].data.docs!read. Add role Data Reader[mynewbucket] to allow the query to run."
  }
]

So, it looks like I have the correct authentication to log in, and I have the correct authorization to execute a SELECT, but I don’t have the correct authorization to actually read the data. I’ll go back in as admin, and add Data Reader authorization.

User now has authorization for two roles

At this point, when I log in as “Matt”, SELECT COUNT(1) FROM mynewbucket; will work. If you are following along, try SELECT * FROM mynewbucket;. You’ll get an error message that no index is available. But if you try to CREATE INDEX, you’ll need yet another permission to do that. You get the idea.

New N1QL functionality

There’s some new N1QL functionality to go along with the new authentication and authorization features.

GRANT and REVOKE ROLE

You can grant and revoke roles with N1QL commands. You need Admin access to do this.

Here’s a quick example of granting SELECT query authorization to a user named “Matt” on a bucket called “mynewbucket”:

GRANT ROLE query_select(mynewbucket) TO Matt;

And likewise, you can REVOKE a role doing something similar:

REVOKE ROLE query_select(mynewbucket) FROM Matt;

Creating users with REST

There is no way (currently) to create users with N1QL, but you can use the REST API to do this. Full documentation is coming later, but here’s how you can create a user with the REST API:

  • PUT to the /settings/rbac/users/builtin/<username> endpoint.

  • Use admin credentials for this endpoint (e.g. Administrator:password with basic auth)

  • The body should contain:

    • roles=<role1,role2,…,roleN>

    • password=<password>

Below is an example. You can use cURL, Postman, Fiddler, or whatever your favorite tool is to make the request.

URL: PUT http://localhost:8091/settings/rbac/users/builtin/restman

Headers: Content-Type: application/x-www-form-urlencoded
Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==

Body: roles=query_select[mynewbucket],query_update[mynewbucket]&password=password

The above assumes that you have an admin user/password of Administrator/password (hence the basic auth token of QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==).
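If you need to generate that Basic auth token yourself, for example from a provisioning script, it is just the Base64 encoding of user:password. A minimal Node.js sketch (assuming the same Administrator/password credentials as above):

```javascript
// Build the HTTP Basic Authorization header value for the admin
// credentials used above. Swap in your own admin user and password.
const credentials = 'Administrator:password';
const token = Buffer.from(credentials, 'utf8').toString('base64');
console.log('Authorization: Basic ' + token);
// → Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
```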

After executing that, you’ll see a new user named “restman” with the two specified permissions.

Create a new user with a REST command

Wait, there’s more!

The RBAC system is far too rich to cover in a single blog post, and full documentation is on its way. In the meantime, here are some details that might help you get started with the preview:

  • You may have noticed the all option in the screenshots above. You can give a user roles on a bucket-by-bucket basis, or you can give permission to all buckets (even buckets that haven’t been created yet).

  • I covered FTS permissions in the previous blog post, but there are permissions that cover just about everything: views, bucket administration, backup, monitoring, DCP, indexes, etc.

  • You can’t create buckets with a password anymore. The equivalent is to instead create a user with the same name as the bucket, and give it authorization to a role called “Bucket Full Access”. This will be useful for upgrading and transitioning purposes.

We still want your feedback!

Stay tuned to the Couchbase Blog for information about what’s coming in the next developer build.

Interested in trying out some of these new features? Download Couchbase Server 5.0 April 2017 Developer Build today!

The 5.0 release is fast approaching, but we still want your feedback!

Bugs: If you find a bug (something that is broken or doesn’t work how you’d expect), please file an issue in our JIRA system at issues.couchbase.com or submit a question on the Couchbase Forums. Or, contact me with a description of the issue. I would be happy to help you or submit the bug for you (my Couchbase handlers let me take selfies on our cartoonishly big couch when I submit good bugs).

Feedback: Let me know what you think. Something you don’t like? Something you really like? Something missing? Now you can give feedback directly from within the Couchbase Web Console. Look for the feedback icon at the bottom right of the screen.

In some cases, it may be tricky to decide if your feedback is a bug or a suggestion. Use your best judgement, or again, feel free to contact me for help. I want to hear from you. The best way to contact me is either Twitter @mgroves or email me matthew.groves@couchbase.com.

The post Authorization and Authentication with RBAC (Part 2) appeared first on The Couchbase Blog.

Categories: Architecture, Database

Data Synchronization Across iOS Devices Using Couchbase Mobile

NorthScale Blog - Mon, 04/24/2017 - 18:30

This post looks at how to get started with data replication/synchronization across iOS devices using Couchbase Mobile. The Couchbase Mobile stack comprises Couchbase Server, Sync Gateway, and the Couchbase Lite embedded NoSQL database. In an earlier post, we discussed how Couchbase Lite can be used as a standalone embedded NoSQL database in iOS apps. This post will walk you through a sample iOS app, in conjunction with a Sync Gateway, that demonstrates the core concepts of Push & Pull Replication, Authentication & Access Control, Channels, and Sync Functions.

While we will be looking at data synchronization in the context of an iOS app in Swift, everything discussed here applies equally to mobile apps developed on any other platform (Android, iOS (ObjC), Xamarin). Deviations will be specified as such.

NOTE: We will be discussing Couchbase Mobile v1.4, which is the current production release. There is a newer Developer Preview version 2.0 of Couchbase Mobile that has a lot of new and exciting features.

Couchbase Mobile

The Couchbase Mobile stack comprises Couchbase Server, Sync Gateway, and the Couchbase Lite embedded NoSQL database. This post will discuss the basics of NoSQL data replication and synchronization using Couchbase Mobile. I’ll assume you’re familiar with developing iOS apps, the basics of Swift, and some basics of NoSQL, and that you have some understanding of Couchbase. If you want to read up more on Couchbase Mobile, you can find lots of resources at the end of this post.

Couchbase Sync Gateway

The Couchbase Sync Gateway is an Internet-facing synchronization mechanism that securely syncs data across devices as well as between devices and the cloud.

It exposes a web interface that provides

  • Data Synchronization across devices and the cloud
  • Access Control
  • Data Validation

You can use any HTTP client to further explore the interface. Check out this post on using Postman for querying the interface.

There are three main concepts related to data replication or synchronization using the Sync Gateway:

Channel

A channel can be viewed as a combination of a tag and a message queue. Every document can be assigned to one or more channels, and those channels determine who can access the document. Users are granted access to one or more channels and can only read documents assigned to those channels. For details, check out the documentation on Channels.
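To make the channel model concrete, here is a toy JavaScript sketch (an illustration with made-up helper names, not Sync Gateway code) of how channel membership gates reads:

```javascript
// Toy model of channel-based access control: each document is assigned
// to channels, each user is granted channels, and a user may read a
// document only if the two sets overlap.
const docs = {
  doc1: { channels: ['_public'] },
  doc2: { channels: ['_jane'] },
};
const grants = { jane: ['_public', '_jane'], joe: ['_public', '_joe'] };

function canRead(user, docId) {
  return grants[user].some((ch) => docs[docId].channels.includes(ch));
}

console.log(canRead('joe', 'doc1')); // true  – doc1 is in _public
console.log(canRead('joe', 'doc2')); // false – doc2 is only in _jane
```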

 

Sync Function

The sync function is a JavaScript function that runs on the Sync Gateway. Every time a new document, revision or deletion is added to a database, the sync function is called. The sync function is responsible for

  • Validating the document
  • Authorizing the change
  • Assigning documents to channels
  • Granting users access to channels

For details, check out the documentation on the Sync Function.

 

Replication

Replication, a.k.a. synchronization, is the process of synchronizing changes between the local database and a remote Sync Gateway. There are two kinds:

  • Push Replication is used to push changes from local to remote database
  • Pull Replication is used to pull changes from remote to local database

For details, check out the documentation on replications.
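Conceptually, push and pull are the same copy operation run in opposite directions. A deliberately simplified sketch (real replication is revision-tree aware and handles conflicts; this only illustrates direction):

```javascript
// Toy replicator: copy any revision the target hasn't seen yet.
// `source` and `target` map document ids to revision ids.
function replicate(source, target) {
  for (const [id, rev] of Object.entries(source)) {
    if (target[id] !== rev) target[id] = rev;
  }
}

const local = { doc1: '2-abc' };  // stand-in for the Couchbase Lite store
const remote = { doc2: '1-def' }; // stand-in for the Sync Gateway store

replicate(local, remote); // push: local → remote
replicate(remote, local); // pull: remote → local

console.log(remote); // both stores now hold doc1 and doc2
console.log(local);
```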

 

Installation of Couchbase Sync Gateway

Follow the installation guide to install the Sync Gateway.

Launch your Sync Gateway with the following config file. The exact location of the config file will depend on the platform; please refer to the install guide for details.

Sync Gateway Config File

{
  "log": ["*"],
  "CORS": {
     "Origin":["*"]
  },
  "databases": {
    "demo": {
      "server": "walrus:",
      "bucket": "default",
      "users": { 
        "GUEST": { "disabled": true, "admin_channels": ["*"] } ,
        "joe": {"password":"password" ,"disabled": false, "admin_channels":["_public","_joe"]} ,
        "jane":{"password":"password" ,"disabled": false, "admin_channels":["_public","_jane"]}
      },
      "unsupported": {
        "user_views": {
          "enabled":true
        }
      },
    
  
  "sync": 
  `
      
      function (doc, oldDoc){
     

        // Check if doc is being deleted
        if (doc._deleted == undefined) {
          // Validate current version has relevant keys
          validateDocument(doc);
        }
        else {
          // Validate  old document has relevant keys
          validateDocument(oldDoc);
        }

        var docOwner = (doc._deleted == undefined) ? doc.owner : oldDoc.owner;
    

        var publicChannel = "_public";

        var privateChannel = "_"+docOwner;

        // Grant user read access to public channels and user's own channel
        access(docOwner,[publicChannel,privateChannel]);


        // Check if this was a doc update (as opposed to a doc create or delete)
        if (doc._deleted == undefined && oldDoc != null && oldDoc._deleted == undefined) {

            if (doc.tag != oldDoc.tag) {
                 throw({forbidden: "Cannot change tag of document"});
         
            }
        }


        // Check if new/updated document is tagged as "public" 
        var docTag =  (doc._deleted == undefined) ? doc.tag : oldDoc.tag;
    
        if (doc._deleted == undefined) {
          if (docTag == "public") {
           
            // All documents tagged public go into the "public" channel, which is open to all
            channel(publicChannel);
         
        }
        else {

            // Ensure that the owner of document is the user making the request
            requireUser(docOwner);

            // All non-public tagged docs go into a user-specific channel
            channel(privateChannel);

         }
       }
       else {
          channel(doc.channels);
       }


        function validateDocument (doc) {
           // Basic validation of document
          if (!doc.tag ) {
            // Every doc must include a tag
            throw({forbidden: "Invalid document type: Tag not provided" + doc.tag});
          }

           if (!doc.owner) {
            // Every doc must include a owner
            throw({forbidden: "Invalid document type: Owner not provided" + doc.owner});
          
          }
        }
      }
  

`
    }
  }
}

Here are some key points to note in the configuration file:

  • Line 8: The “walrus:” value for “server” indicates that the Sync Gateway should persist data in memory and is not backed by a Couchbase Server.
  • Line 11: Guest user access is disabled.
  • Lines 12-13: There are two users, “jane” and “joe”, configured in the system. Both users have access to a “_public” channel, and each has access to their own private channel.
  • Lines 22-100: A simple sync function that does the following:
    1. Lines 29-36: Document validation to ensure that the document contains user-defined “tag” and “owner” properties
      1. The “tag” property is used to specify whether the document is publicly available to any user or private to a user
      2. The “owner” property is used to specify the user that owns the document
    2. Line 46: Gives users access to the “_public” channel and a private channel (identified using the owner of the document)
    3. Lines 51-56: If it’s a document update, verifies that the “tag” property is unchanged across revisions
    4. Line 66: Assigns all documents with a “public” tag to the “_public” channel
    5. Line 72: Assigns all documents with a tag other than “public” to the private channel
      1. Line 75: For private channel documents, first verifies that the document’s owner is the one making the request
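
The validation rules above can be exercised outside the gateway. Below is a trimmed, standalone JavaScript version of the document checks (mirroring the throw({forbidden: …}) convention of the sync function, but leaving out the channel logic):

```javascript
// Standalone version of the sync function's validation rules: every
// document must carry "tag" and "owner", and "tag" may not change
// between revisions.
function validateDocument(doc) {
  if (!doc.tag) throw { forbidden: 'Invalid document type: Tag not provided' };
  if (!doc.owner) throw { forbidden: 'Invalid document type: Owner not provided' };
}

function validateUpdate(doc, oldDoc) {
  validateDocument(doc);
  if (oldDoc && doc.tag !== oldDoc.tag) {
    throw { forbidden: 'Cannot change tag of document' };
  }
}

validateUpdate({ tag: 'public', owner: 'jane' }, null); // passes
try {
  // Attempting to flip a document from "public" to "private" is rejected.
  validateUpdate({ tag: 'private', owner: 'jane' }, { tag: 'public', owner: 'jane' });
} catch (e) {
  console.log(e.forbidden); // Cannot change tag of document
}
```
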
Couchbase Lite

Couchbase Lite is an embedded NoSQL database that runs on devices. It can be used in several deployment modes. The Getting Started with Couchbase Lite post discusses the standalone deployment mode. Couchbase Lite can also be used in conjunction with a remote Sync Gateway, which allows it to sync data across devices. This post discusses the deployment mode using a Sync Gateway.

There are many options to integrate the Couchbase Lite framework into your iOS app. Check out our Couchbase Mobile Getting Started Guide for the various integration options.

Native API

Couchbase Lite exposes a native API for iOS, Android, and Windows that allows apps to easily interface with the Couchbase platform. As an app developer, you do not have to worry about the internals of the Couchbase Lite embedded database; you can instead focus on building your awesome app. The native API allows you to interact with the Couchbase Lite framework just as you would interact with other platform frameworks/subsystems. Again, we will be discussing Couchbase Mobile v1.4 in this blog post. You can get a full listing of the APIs on our Couchbase Developer site.

Demo iOS App

Please download the demo Xcode project from this GitHub repo and switch to the “syncsupport” branch. We will use this app as an example in the rest of the blog. This app uses CocoaPods to integrate the Couchbase Lite framework.

git clone git@github.com:couchbaselabs/couchbase-lite-ios-standalone-sampleapp.git
git checkout syncsupport

 

Synchronization of Documents Across Users

  1. Build and launch the app. You should be presented with a login alert.
  2. Enter user “jane” and password “password”. This user was configured in the Sync Gateway config file.
  3. Add the first document by tapping the “+” button in the top right-hand corner.
    1. Give the document a name and a one-line description.
    2. Use the tag “private”.
    3. Behind the scenes, the Push Replicator pushes the document to the Sync Gateway, where it is processed by the Sync Function. Based on the tag, the Sync Function assigns the document to the user’s private channel.
  4. Add a second document by tapping the “+” button in the top right-hand corner.
    1. Give the document a name and a one-line description.
    2. Use the tag “public”.
    3. Behind the scenes, the Push Replicator pushes the document to the Sync Gateway, where it is processed by the Sync Function. Based on the public tag, the Sync Function assigns the document to the public channel.
  5. Now log off “jane”. You will be presented with the login alert again.
  6. Enter user “joe” and password “password”. This user was also configured in the Sync Gateway config file.
  7. The public document that was created by Jane will be listed.
    1. Behind the scenes, the Pull Replicator pulls all the documents from Joe’s private channel and the public channel. The public document that was created by Jane is pulled. However, since Joe does not have access to Jane’s private channel, the private document created by Jane is not pulled.

To verify the state of things on the Sync Gateway, you can query the Admin REST interface using Postman or any HTTP client.

This is the cURL request to the Sync Gateway:

curl -X GET \
 'http://localhost:4985/demo/_all_docs?access=false&channels=false&include_docs=true' \
 -H 'accept: application/json' \
 -H 'cache-control: no-cache' \
 -H 'content-type: application/json'

The response from the Sync Gateway shows the two documents assigned to the public channel and to Jane’s private channel, respectively:

{
  "rows": [
    {
      "key": "-6gCouN6jj0ScYgpMD7Qj1a",
      "id": "-6gCouN6jj0ScYgpMD7Qj1a",
      "value": {
        "rev": "1-dfa6d453a1515ee3dd64012ccaf53046",
        "channels": [
          "_jane"
        ]
      },
      "doc": {
        "_id": "-6gCouN6jj0ScYgpMD7Qj1a",
        "_rev": "1-dfa6d453a1515ee3dd64012ccaf53046",
        "name": "doc101",
        "overview": "This is a private doc from Jane",
        "owner": "jane",
        "tag": "private"
      }
    },
    {
      "key": "-A2wR44pAFCdu1Yufx14_1S",
      "id": "-A2wR44pAFCdu1Yufx14_1S",
      "value": {
        "rev": "1-1a8cd0ea3b7574cf6f7ba4a10152a466",
        "channels": [
          "_public"
        ]
      },
      "doc": {
        "_id": "-A2wR44pAFCdu1Yufx14_1S",
        "_rev": "1-1a8cd0ea3b7574cf6f7ba4a10152a466",
        "name": "doc102",
        "overview": "This is a public doc shared by Jane",
        "owner": "jane",
        "tag": "public"
      }
    }
  ],
  "total_rows": 2,
  "update_seq": 5
}
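
A response like this is easy to post-process. For instance, a few lines of JavaScript (operating on a trimmed copy of the payload above) can group document ids by channel:

```javascript
// Group _all_docs rows by the channel each revision was assigned to.
// `response` mirrors the Sync Gateway payload shown above, trimmed to
// the fields we need.
const response = {
  rows: [
    { id: '-6gCouN6jj0ScYgpMD7Qj1a', value: { channels: ['_jane'] } },
    { id: '-A2wR44pAFCdu1Yufx14_1S', value: { channels: ['_public'] } },
  ],
};

const byChannel = {};
for (const row of response.rows) {
  for (const ch of row.value.channels) {
    (byChannel[ch] = byChannel[ch] || []).push(row.id);
  }
}

console.log(byChannel);
// one private doc under _jane, one shared doc under _public
```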

 

Exploring the Code

Now, let’s examine the relevant code snippets of the iOS demo app.

Opening/Creating a per-user Database

Open the DocListTableViewController.swift file and locate the openDatabaseForUser function.

 do {
       // 1: Set Database Options
       let options = CBLDatabaseOptions()
       options.storageType  = kCBLSQLiteStorage
       options.create = true
            
       // 2: Create a DB for logged in user if it does not exist else return handle to existing one
       self.db  = try cbManager.openDatabaseNamed(user.lowercased(), with: options)
       self.showAlertWithTitle(NSLocalizedString("Success!", comment: ""), message: NSLocalizedString("Database \(user) was opened succesfully at path \(CBLManager.defaultDirectory())", comment: ""))
            
       // 3. Start replication with remote Sync Gateway
       startDatabaseReplicationForUser(user, password: password)
       return true
        }
        catch  {
            // handle error    
        }

  1. Specify the options to associate with the database. Explore the other options on the CBLDatabaseOptions class.
  2. Create a database with the name of the current user. This way, every user of the app will have their own local copy of the database. If a database with that name exists, a handle to the existing database is returned; otherwise a new one is created. Database names must be lowercase. By default, the database will be created in the default path (/Library/Application Support). You can specify a different directory when you instantiate the CBLManager class.
  3. Start the database replication process for the given user credentials. We will discuss the replication code in detail in the following sections.
Fetching Documents

Open the DocListTableViewController.swift file and locate the getAllDocumentForUserDatabase function.

// 1. Create Query to fetch all documents. You can set a number of properties on the query object
liveQuery = self.db?.createAllDocumentsQuery().asLive()
            
guard let liveQuery = liveQuery else {
       return
}
            
// 2: You can optionally set a number of properties on the query object.
// Explore other properties on the query object
liveQuery.limit = UInt(UINT32_MAX) // All documents
            
//   query.postFilter =
            
//3. Start observing for changes to the database
self.addLiveQueryObserverAndStartObserving()
            
            
// 4: Run the query to fetch documents asynchronously
liveQuery.runAsync({ (enumerator, error) in
        switch error {
            case nil:
            // 5: The "enumerator" is of type CBLQueryEnumerator and is an enumerator for the results
            self.docsEnumerator = enumerator
                    
                    
            default:
            self.showAlertWithTitle(NSLocalizedString("Data Fetch Error!", comment: ""), message: error.localizedDescription)
                }
            })            

  1. Get a handle to the database with the specified name.
  2. Create a query object. This query is used to fetch all documents. The Sync Function on the Sync Gateway ensures that documents are pulled only from the channels that are accessible to the user. You can create a regular query object or a “live” query object. The “live” query object is of type CBLLiveQuery, which automatically refreshes every time the database changes in a way that affects the query results. The query has a number of properties that can be tweaked in order to customize the results. Try modifying the properties and observing the effect on the results.
  3. You will have to explicitly add an observer to the Live Query object to be notified of changes to the database. We will discuss this more in the section on “Observing Local & Remote Synchronized Changes to Documents”. Don’t forget to remove the observer and stop observing changes when you no longer need it!
  4. Execute the query asynchronously. You can also do it synchronously if you prefer, but it’s recommended to do it async if the data sets are large.

Once the query executes successfully, you get a CBLQueryEnumerator object. The query enumerator allows you to enumerate the results, and it lends itself very well as a data source for the table view that displays them.

Observing Local & Remote Synchronized Changes to Documents 

Open the DocListTableViewController.swift file and locate the addLiveQueryObserverAndStartObserving function.

Changes to the database could be as a result of the user’s actions on the local¬†device or could be a result of changes synchronized from other devices.

// 1. iOS specific: add observer to the live query object
liveQuery.addObserver(self, forKeyPath: "rows", options: NSKeyValueObservingOptions.new, context: nil)

// 2. Start observing changes
liveQuery.start()

  1. In order to be notified of changes to the database that affect the query results, add an observer to the live query object. Here we leverage iOS's Key-Value Observing (KVO) pattern: add a KVO observer to the live query object to start observing changes to its "rows" property. This is handled through the appropriate event-handler APIs on other platforms, such as the addChangeListener function on Android/Java.
  2. Start observing changes.

Whenever there is a change to the database that affects the "rows" property of the LiveQuery object, your app will be notified. When you receive the change notification, you can update your UI, which in this case means reloading the table view.

if keyPath == "rows" {
    self.docsEnumerator = self.liveQuery?.rows
    tableView.reloadData()
}

 

Authentication of Replication Requests

Open DocListTableViewController.swift file and locate startDatabaseReplicationForUser function.

All Replication requests must be authenticated. In this app, we use HTTP Basic Authentication.

let auth = CBLAuthenticator.basicAuthenticator(withName: user, password: password)

There are several Authenticator types, namely Basic, Facebook, OAuth1, Persona, and SSL/TLS certificate.

Pull Replication

Open DocListTableViewController.swift file and locate startPullReplicationWithAuthenticator function.

// 1: Create a pull replication to start pulling from the remote source
let pullRepl = db?.createPullReplication(URL(string: kDbName, relativeTo: URL.init(string: kRemoteSyncUrl))!)

// 2: Set the authenticator for the pull replication
pullRepl?.authenticator = auth

// 3: Continuously look for changes
pullRepl?.continuous = true

// Optionally, set channels from which to pull
// pullRepl?.channels = [...]

// 4: Start the pull replicator
pullRepl?.start()

  1. Create a pull replicator to pull changes from the remote Sync Gateway. The kRemoteSyncUrl is the URL of the remote database endpoint on the Sync Gateway.
  2. Associate the authenticator with the pull replication. Optionally, you can set the channels from which documents should be pulled.
  3. Setting replication to "continuous" allows change updates to be pulled indefinitely unless replication is explicitly stopped or the database is closed.
  4. Start the pull replication.
Push Replication

Open DocListTableViewController.swift file and locate startPushReplicationWithAuthenticator function.

// 1: Create a push replication to start pushing to the remote source
let pushRepl = db?.createPushReplication(URL(string: kDbName, relativeTo: URL.init(string: kRemoteSyncUrl))!)

// 2: Set the authenticator for the push replication
pushRepl?.authenticator = auth

// 3: Continuously push changes
pushRepl?.continuous = true

// 4: Start the push replicator
pushRepl?.start()

  1. Create a push replicator to push changes to the remote Sync Gateway. The kRemoteSyncUrl is the URL of the remote database endpoint on the Sync Gateway.
  2. Associate the authenticator with the push replication.
  3. Setting replication to "continuous" allows change updates to be pushed indefinitely unless replication is explicitly stopped or the database is closed.
  4. Start the push replication.
Monitoring the Status of the Replication

Open the DBListTableViewController.swift file and locate addRemoteDatabaseChangesObserverAndStartObserving function.

// 1. iOS specific: add an observer to the Notification Center to observe replicator changes
NotificationCenter.default.addObserver(forName: NSNotification.Name.cblReplicationChange, object: db, queue: nil) {
            [unowned self] (notification) in
          
  // Handle changes to the replicator status - Such as displaying progress
  // indicator when status is .running 
}

 

You can monitor the status of the replication by adding an observer to the iOS Notification Center to be notified of cblReplicationChange notifications. You could use the notification handler, for instance, to display appropriate progress indicators to the user. This is handled through the appropriate event-handler APIs on other platforms, such as the addChangeListener function on Android/Java.

What Next?

We would love to hear from you, so if you have questions or feedback, feel free to reach out to me on Twitter @rajagp or by email at priya.rajagopal@couchbase.com. If you would like to enhance the demo app, please submit a pull request to the GitHub repo.

The Couchbase Mobile Dev Forums is another great place to get your mobile-related questions answered. Check out the developer portal for details on the Sync Gateway and Couchbase Lite. Everything discussed here is in the context of Couchbase Mobile 1.4. There are a lot of new and exciting changes coming in Couchbase Mobile 2.0. Be sure to check out the Developer Preview version 2.0 of Couchbase Mobile.

The post Data Synchronization Across iOS Devices Using Couchbase Mobile appeared first on The Couchbase Blog.

Categories: Architecture, Database

Testing your Sync Gateway functions with synctos

NorthScale Blog - Mon, 04/24/2017 - 11:28

Joel Andrews is a polyglot developer living on the rainy west coast of Canada. He fills a variety of roles at Kashoo including backend developer, database admin, DevOps guy, product owner and occasionally web and Android frontend developer. He has a deep and abiding love for Couchbase (especially Couchbase Mobile) that makes him completely unbiased when discussing the pros and cons of any data storage solution.

In my previous blog post, I introduced synctos, a handy open source tool that we built and use at Kashoo to ease the process of creating comprehensive sync functions for Couchbase Sync Gateway. Near the end of that post I alluded to the fact that synctos includes a built-in test-helper module that helps you to write tests that validate your document definitions. It's always a good idea to test your code/configuration for bugs, and your synctos document definitions are no different.

In this post I will walk you through what it takes to get started writing your own specifications/test cases. Before continuing, I suggest reading the introductory post, if you haven’t already, to ensure you have a general understanding of what synctos is all about and how it works.

First, you'll need to install Node.js to use synctos. Once installed, you should create an empty project directory with a new file called "package.json":

{
  "name": "synctos-test-examples",
  "devDependencies": {
    "expect.js": "^0.3.1",
    "mocha": "^3.2.0",
    "simple-mock": "^0.7.3",
    "synctos": "1.x"
  },
  "scripts": {
    "test": "./generate-sync-function.sh && node_modules/.bin/mocha"
  }
}

This file tells the Node.js package manager (npm) which dependencies synctos and your test cases will need: expect.js for test assertions, mocha for running your tests, and simple-mock for mocking/stubbing functions from the Sync Gateway sync function API. It also specifies the "test" command that will execute your tests with mocha.

Next, run the following command from the root of your project directory to download the packages it needs to its local “node_modules” directory:

npm install

The project will need some document definitions, so create “my-example-doc-definitions.js” in the project’s root directory:

{
  exampleDoc: {
    typeFilter: simpleTypeFilter,
    channels: function(doc, oldDoc) {
      return {
        write: [ 'write-' + doc._id ]
      };
    },
    propertyValidators: {
      foo: {
        type: 'string',
        required: true,
        regexPattern: /^[a-z]{3}$/
      }
    }
  }
}

As you can see, this is a very simple document definition for demonstration purposes. Your own document definitions will undoubtedly be larger and more complex, but the same principles apply. The file defines a single document property (a required string called “foo” whose value must satisfy the specified regular expression), a simple type filter that determines the document’s type based on the contents of the implicit “type” property (i.e., a document’s “type” property must be “exampleDoc” to match this document type), and document channels that are constructed dynamically from the document ID.
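To make the behavior concrete, the rules this definition encodes can be sketched as plain JavaScript. The helper names below (matchesType, validate, channelsFor) are illustrative only; synctos generates the real sync function for you:

```javascript
// Plain-JavaScript sketch of what the example document definition enforces.
// These helpers are illustrative; synctos generates the actual sync function.

// Mirrors simpleTypeFilter: the doc's implicit "type" property must equal
// the document type's name.
function matchesType(doc) {
  return doc.type === 'exampleDoc';
}

// Mirrors propertyValidators: "foo" is a required string matching /^[a-z]{3}$/.
function validate(doc) {
  var errors = [];
  if (typeof doc.foo !== 'string') {
    errors.push('required property "foo" is missing or not a string');
  } else if (!/^[a-z]{3}$/.test(doc.foo)) {
    errors.push('property "foo" must match the pattern /^[a-z]{3}$/');
  }
  return errors;
}

// Mirrors the channels function: the write channel is derived from the doc ID.
function channelsFor(doc) {
  return { write: [ 'write-' + doc._id ] };
}
```

A document such as { _id: 'abc', type: 'exampleDoc', foo: 'bar' } passes both checks and is assigned the write channel 'write-abc'.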

Now create a new file called “generate-sync-function.sh” in the root directory of your project:

#!/bin/sh -e

# Determine the current script's directory, so it can execute commands from the root of the project no matter where it was run from
projectDir="$(dirname "$0")"

# This is where the generated sync function will be created
outputDir="$projectDir/build"

# This is where the synctos package was downloaded by npm
synctosDir="$projectDir/node_modules/synctos"

# Ensure the build directory exists
mkdir -p "$outputDir"

# Generate the sync function from the document definitions file
"$synctosDir"/make-sync-function "$projectDir/my-example-doc-definitions.js" "$outputDir/my-example-sync-function.js"

This file will be used to generate the sync function in the project’s “build” directory as “my-example-sync-function.js”. Make sure “generate-sync-function.sh” is executable by running:

chmod a+x generate-sync-function.sh

At this point, you have everything you need to generate a sync function from the document definitions file:

./generate-sync-function.sh

If you look in the "build" directory, you will find a fully-formed Sync Gateway sync function file called "my-example-sync-function.js". If you felt so inclined, you could insert the sync function's contents into a Sync Gateway configuration file now. When doing so, remember to surround the sync function with backticks/backquotes (`), since it is more than one line long.
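For reference, a Sync Gateway configuration file embeds the generated sync function roughly like this (the database name, server address, and bucket below are placeholder values; note the backticks, which Sync Gateway's configuration format allows around multi-line strings):

```
{
  "databases": {
    "example-db": {
      "server": "http://localhost:8091",
      "bucket": "example-bucket",
      "sync": `function(doc, oldDoc) {
        // ...paste the contents of build/my-example-sync-function.js here...
      }`
    }
  }
}
```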

Now it’s time to validate that sync function! Create a directory called “test” in the root of the project and add a file called “my-example-spec.js”:

var testHelper = require('../node_modules/synctos/etc/test-helper.js');
var errorFormatter = testHelper.validationErrorFormatter;

describe('my example document definitions', function() {
  // Test cases go here!
});

This is the skeleton of the specification file. The first two lines of the file import the synctos test-helper module and the error message formatter, which will greatly ease the process of writing test cases. The “describe” block will encompass all of the code that we add in later steps.

Next, add the following snippet inside the “describe” block of the specification file:

beforeEach(function() {
  testHelper.init('build/my-example-sync-function.js');
});

This block ensures that the test-helper module is re-initialized at the start of each (i.e., before each) test case with the contents of the generated sync function.

Below the “beforeEach” block and still inside the “describe” block, add the following test case:

it('should consider the document valid when all constraints are met', function() {
  var doc = {
    _id: 'my-document-id',
    type: 'exampleDoc',
    foo: 'bar'
  };

  testHelper.verifyDocumentCreated(doc, [ 'write-' + doc._id ]);
});

Now we're getting somewhere. Here we've defined the document that we'd like to test, and we're asserting that the document can be created because it meets the criteria specified by the document definition. The second parameter of the "verifyDocumentCreated" function expects a complete list of the document channels that are accepted for the write operation, which allows you to verify that the document definition's channel assignment logic is correct.

How about a document that is invalid? Add another test case:

it('should consider a value of foo that is not three letters invalid', function() {
  var doc = {
    _id: 'my-document-id',
    type: 'exampleDoc',
    foo: 'invalid'
  };

  testHelper.verifyDocumentNotCreated(
    doc,
    doc.type,
    [ errorFormatter.regexPatternItemViolation('foo', /^[a-z]{3}$/) ],
    [ 'write-' + doc._id ]);
});

Since the document’s “foo” property does not match the regular expression that was specified in the document definition, we expect that this document will be rejected. Some notes on the arguments to the “verifyDocumentNotCreated” function:

  1. This is the document under test.
  2. This is the expected document type name.
  3. A complete list of all errors that are expected due to the failure. Note that the “errorFormatter” exposes formatter functions for all supported error types.
  4. A complete list of the expected document channels that are accepted for the write operation. As in the previous test case, this helps to verify that correct channels are assigned to the document during the operation.

Now that there are some test cases, you can run the test suite by executing the following from the project root:

npm test

You’ll find that both test cases ran and passed (indicated by a green check mark next to each)! If ever a test case fails, mocha (the test runner tool) will generate a detailed error message that should help you to figure out where to find the problem.

So, what's next? There is plenty more that the test-helper module can do to help you write your specifications. Your next stop should be the test-helper module's documentation to learn what other options are available; notably, you'll find that you can also verify your sync function's behaviour when a document is replaced or deleted (handy if your documents or their properties are meant to be immutable). The validation-error-message-formatter's documentation should also be a big help in verifying errors that are returned when a document revision is rejected. And finally, you'll find the complete source code for these examples on GitHub.

Happy testing!

The post Testing your Sync Gateway functions with synctos appeared first on The Couchbase Blog.

Categories: Architecture, Database

Docker and Vaadin Meet Couchbase – Part 2

NorthScale Blog - Fri, 04/21/2017 - 12:52

Ratnopam Chakrabarti is a software developer currently working for Ericsson Inc. He has been focused on IoT, machine-to-machine technologies, connected cars, and smart city domains for quite a while. He loves learning new technologies and putting them to work. When he’s not working, he enjoys spending time with his 3-year-old son.

Introduction

Welcome to the part two of the series where I describe how to develop and run a Couchbase powered, fully functional Spring Boot web application using the Docker toolset. In part one of the series, I demonstrated how to run two Docker containers to run a functional application with a presentable UI. The two Docker containers that we were running are:

  1. A Couchbase container with preconfigured settings
  2. An application container talking to the Couchbase container (Run in step 1)

While this method is useful, it isn't fully automated: there is no orchestration, so you have to run two different docker run commands to bring up the entire setup.

Is there a way to build and run the application container which also triggers running of the Couchbase container? Of course there’s a way.

Enter Docker Compose

Using Docker Compose, you can orchestrate the running of multi-container environments, which is exactly what we need for our use case. We need to run the Couchbase container first, and then the application container should run and talk to the Couchbase container.

Here’s the docker-compose.yml file to achieve this:

version: "2"
services:
  app:
    build: .
    ports:
      - 8080:8080
    environment:
      - BUCKET_NAME=books
      - HOST=192.168.99.100
    depends_on:
      - db
  db:
    image: chakrar27/couchbase:books
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210

Our app "depends_on" the db service, which is the Couchbase container. In other words, the Couchbase container starts first and then the app container starts running. There's one potential issue here: the "depends_on" keyword doesn't guarantee that the Couchbase container has finished configuring itself and started running. All it ensures is that the container is started first; it doesn't check whether the container is actually running or ready to accept requests from an application. In order to ensure that the Couchbase container is actually running and that all the pre-configuration steps, such as setting up the query and index services and the bucket, are completed, we need to do a check from the application container.

Here's the Dockerfile of the app container. It invokes a script which, in turn, checks whether the bucket "books" has been set up. The script loops until the bucket is set up and then starts the application.

https://github.com/ratchakr/bookstoreapp/blob/master/Dockerfile-v1

The script can be seen at https://github.com/ratchakr/bookstoreapp/blob/master/run_app.sh

The script does the following things:

  1. It uses the REST endpoint supported by Couchbase for querying the bucket.
  2. curl is used to call the REST endpoint; installation of curl is covered in the Dockerfile of the application.
  3. The script parses the JSON response of the REST call using a tool called jq.
  4. If the bucket is set up, it runs the app container; otherwise, it waits for the bucket to be set up first.

It's worth mentioning that more checks, such as verifying whether the index service and the query service are set up properly, can be added to the shell script to make it more robust. One word of caution: confirm your particular use case and requirements before following the docker-compose approach; there's no sure-fire way to determine that the Couchbase db container is fully up and running and ready to serve requests from the client application. Some approaches that might work are as follows:

  1. If you have a preconfigured bucket, you can test whether the bucket exists.
  2. Check whether the indexes are in place.
  3. If you know the record count in a bucket (say, for a .csv file that was imported into the bucket at the time of initial data load), you can check whether the count matches the number of records in the .csv file.

For our use case, the first approach works nicely.
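The readiness decision that run_app.sh makes with curl and jq can be sketched as a small JavaScript function: take the JSON returned by Couchbase's /pools/default/buckets REST endpoint and check whether the expected bucket appears in it. This is a simplified sketch; the real response carries many more fields per bucket, and the HTTP fetch and retry loop are omitted:

```javascript
// Decide whether a Couchbase bucket is ready, given the JSON body returned
// by the REST endpoint GET /pools/default/buckets.
// Simplified sketch: the real response contains many more fields per bucket.
function isBucketReady(responseBody, bucketName) {
  var buckets;
  try {
    buckets = JSON.parse(responseBody);
  } catch (e) {
    return false; // the server is not answering with valid JSON yet
  }
  if (!Array.isArray(buckets)) {
    return false; // unexpected shape; treat as not ready
  }
  // Ready once a bucket with the expected name shows up in the list
  return buckets.some(function(bucket) {
    return bucket.name === bucketName;
  });
}
```

A wrapper would poll this in a loop, as the shell script does, and start the application container only once it returns true.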
Build and Run

Now that we have our docker-compose file and Dockerfile, we can build the application image by using the simple docker-compose up command.

Here’s the output snippet from the Docker console:

$ docker-compose up

Creating network "bookstoreapp_default" with the default driver

Pulling db (chakrar27/couchbase:books)...

books: Pulling from chakrar27/couchbase

Digest: sha256:4bc356a1f2b5b3d7ee3daf10cd5c55480ab831a0a147b07f5b14bea3de909fd9

Status: Downloaded newer image for chakrar27/couchbase:books

Building app

Step 1/8 : FROM frolvlad/alpine-oraclejdk8:full

full: Pulling from frolvlad/alpine-oraclejdk8

Digest: sha256:a344745faa77a9aa5229f26bc4f5c596d13bcfc8fcac051a701b104a469aff1f

Status: Downloaded newer image for frolvlad/alpine-oraclejdk8:full

---> 5f7037acb78d

Step 2/8 : VOLUME /tmp

---> Running in 7d18e0b90bfd

---> 6a43ccb712dc

Removing intermediate container 7d18e0b90bfd

Step 3/8 : ADD target/bookstore-1.0.0-SNAPSHOT.jar app.jar

---> a3b4bf7745e0

Removing intermediate container 0404f1d094d3

Step 4/8 : RUN sh -c 'touch /app.jar'

---> Running in 64d1c82a0694

---> 1ec5a68cafa9

Removing intermediate container 64d1c82a0694

Step 5/8 : RUN apk update && apk add curl

---> Running in 1f912e8341bd

fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz

fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz

v3.5.2-16-g53ad101cf8 [http://dl-cdn.alpinelinux.org/alpine/v3.5/main]

v3.5.2-14-gd7ba0e189f [http://dl-cdn.alpinelinux.org/alpine/v3.5/community]

OK: 7961 distinct packages available

(1/4) Installing ca-certificates (20161130-r1)

(2/4) Installing libssh2 (1.7.0-r2)

(3/4) Installing libcurl (7.52.1-r2)

(4/4) Installing curl (7.52.1-r2)

Executing busybox-1.25.1-r0.trigger

Executing ca-certificates-20161130-r1.trigger

Executing glibc-bin-2.25-r0.trigger

OK: 12 MiB in 18 packages

---> 8f99863af926

Removing intermediate container 1f912e8341bd

Step 6/8 : ADD run_app.sh .

---> cedb8d545070

Removing intermediate container 8af5ac3ab0a0

Step 7/8 : RUN chmod +x run_app.sh

---> Running in 74a141de2f52

---> 77ffd7425bea

Removing intermediate container 74a141de2f52

Step 8/8 : CMD sh run_app.sh

---> Running in 6f81c8ebaa37

---> 56a3659005ef

Removing intermediate container 6f81c8ebaa37

Successfully built 56a3659005ef

Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.

Creating bookstoreapp_db_1

Creating bookstoreapp_app_1

Attaching to bookstoreapp_db_1, bookstoreapp_app_1

db_1   | docker host ip =  192.168.99.100

db_1   | sleeping...

app_1  | Starting application run script...........

app_1  | couchbase is running on 192.168.99.100

app_1  | bucket to check is books

db_1   | < Date: Fri, 24 Mar 2017 06:53:00 GMT

db_1   | < Content-Length: 0

db_1   | < Cache-Control: no-cache

db_1   | <

100    55    0     0  100    55      0    827 --:--:-- --:--:-- --:--:--   833

db_1   | * Connection #0 to host 127.0.0.1 left intact

db_1   | bucket set up done

app_1  | response from cb

app_1  | ************************************************

app_1  | ************************************************

app_1  | response from cb books

app_1  | ************************************************

app_1  | ************************************************

app_1  | bucket is now ready bucket name books

app_1  | Run application container now

app_1  | ************************************************

app_1  |

app_1  |   .   ____          _            __ _ _

app_1  |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

app_1  | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

app_1  |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )

app_1  |   '  |____| .__|_| |_|_| |_\__, | / / / /

app_1  |  =========|_|==============|___/=/_/_/_/

app_1  |  :: Spring Boot ::        (v1.4.2.RELEASE)

app_1  |

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=06bad9c4-85fc-4c0b-83a7-ad21b2fdd405, title=The Immortal Irishman, author=Timothy Egan, isbn=ISBN444, category=History]

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=328eaf44-edff-43c6-9f55-62d7e095256d, title=The Kite Runner, author=Khaled Hosseini, isbn=ISBN663, category=Fiction]

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=56882f5a-d466-457f-82c1-1c3bca0c6d75, title=Breaking Blue, author=Timothy Egan, isbn=ISBN777, category=Thriller]

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=845a2fe8-cbbf-4780-b216-41abf86d7d61, title=History of Mankind, author=Gabriel Garcia, isbn=ISBN123, category=History]

app_1  | 2017-03-24 06:53:59.840  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=9d2833c3-e005-4c4f-98f9-75b69bbb7bf5, title=The Night Gardener, author=Eric Fan, isbn=ISBN333, category=Kids Books]

app_1  | 2017-03-24 06:53:59.840  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=5756bf4d-551c-429e-8bc3-2339dc065ff8, title=Grit: The Power of Passion and Perseverance, author=Angela Duckworth, isbn=ISBN555, category=Business]

app_1  | 2017-03-24 06:53:59.840  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=e8e34f30-6fdf-4ca7-9cef-e06f504f8778, title=War and Turpentine, author=Stefan Hertmans, isbn=ISBN222, category=Fiction]

app_1  | 2017-03-24 06:54:00.234  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Books by Timothy Egan = Book [id=06bad9c4-85fc-4c0b-83a7-ad21b2fdd405, title=The Immortal Irishman, author=Timothy Egan, isbn=ISBN444, category=History]

app_1  | 2017-03-24 06:54:00.238  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Books by Timothy Egan = Book [id=56882f5a-d466-457f-82c1-1c3bca0c6d75, title=Breaking Blue, author=Timothy Egan, isbn=ISBN777, category=Thriller]

app_1  | 2017-03-24 06:54:00.346  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Starting with title 'The' = Book [id=06bad9c4-85fc-4c0b-83a7-ad21b2fdd405, title=The Immortal Irishman, author=Timothy Egan, isbn=ISBN444, category=History]

app_1  | 2017-03-24 06:54:00.349  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Starting with title 'The' = Book [id=328eaf44-edff-43c6-9f55-62d7e095256d, title=The Kite Runner, author=Khaled Hosseini, isbn=ISBN663, category=Fiction]

app_1  | 2017-03-24 06:54:00.349  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Starting with title 'The' = Book [id=9d2833c3-e005-4c4f-98f9-75b69bbb7bf5, title=The Night Gardener, author=Eric Fan, isbn=ISBN333, category=Kids Books]

app_1  | 2017-03-24 06:54:00.443  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book in Fiction = Book [id=328eaf44-edff-43c6-9f55-62d7e095256d, title=The Kite Runner, author=Khaled Hosseini, isbn=ISBN663, category=Fiction]

app_1  | 2017-03-24 06:54:00.453  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book in Fiction = Book [id=e8e34f30-6fdf-4ca7-9cef-e06f504f8778, title=War and Turpentine, author=Stefan Hertmans, isbn=ISBN222, category=Fiction]

app_1  | 2017-03-24 06:54:02.745  INFO 31 --- [nio-8080-exec-1] o.v.spring.servlet.Vaadin4SpringServlet  : Could not find a SystemMessagesProvider in the application context, using default

app_1  | 2017-03-24 06:54:02.753  INFO 31 --- [nio-8080-exec-1] o.v.spring.servlet.Vaadin4SpringServlet  : Custom Vaadin4Spring servlet initialization completed

app_1  | 2017-03-24 06:54:02.864  INFO 31 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring FrameworkServlet 'dispatcherServlet'

app_1  | 2017-03-24 06:54:02.865  INFO 31 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization started

At this point our application is up and running with a single docker-compose orchestration command.

Type 192.168.99.100:8080 into the browser; you should see the application's home screen.


Docker Compose is a nice way to orchestrate multi-container Docker environments. Its commands closely mirror the "docker" command set. For instance, to see a list of running containers, you simply type docker-compose ps, which gives you:

$ docker-compose ps

Name                     Command               State                                                                                Ports

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

bookstoreapp_app_1   /bin/sh -c sh run_app.sh         Up      0.0.0.0:8080->8080/tcp

bookstoreapp_db_1    /entrypoint.sh /opt/couchb ...   Up      11207/tcp, 0.0.0.0:11210->11210/tcp, 11211/tcp, 18091/tcp, 18092/tcp, 18093/tcp, 0.0.0.0:8091->8091/tcp, 0.0.0.0:8092->8092/tcp, 0.0.0.0:8093->8093/tcp, 0.0.0.0:8094->8094/tcp

The names of the containers are shown in the first column.

If you need to stop or tear down your orchestrated environment, you can do that with the docker-compose down command. A sample run produces:

$ docker-compose down

Stopping bookstoreapp_app_1 ... done

Stopping bookstoreapp_db_1 ... done

Removing bookstoreapp_app_1 ... done

Removing bookstoreapp_db_1 ... done

Removing network bookstoreapp_default

Now, if you do a docker-compose ps, it shows that no container is currently running.

$ docker-compose ps

Name   Command   State   Ports

---------------------------------------------------------------

You can also use Docker Compose for an automated test environment where you fire up your Docker containers, run the tests, then tear down the complete infrastructure, all with Compose. For a detailed overview of Docker Compose, please visit the official website.

This post is part of the Couchbase Community Writing Program

The post Docker and Vaadin Meet Couchbase – Part 2 appeared first on The Couchbase Blog.

Categories: Architecture, Database

NDP Episode #17: Marten for .NET Developers

NorthScale Blog - Thu, 04/20/2017 - 16:07

I am pleased to announce that the latest episode of The NoSQL Database Podcast has been published to all the popular podcasting networks. In this episode I'm joined by Jeremy Miller and Matt Groves, and we talk about Marten and where it fits into the .NET development spectrum.

Jeremy Miller is the author of Marten, a wrapper for PostgreSQL that turns it into a document-style NoSQL database. Since I don't know a thing about .NET, I have my co-host Matt Groves on the show to help me out.

This episode, titled Marten for .NET Developers, can be found on all the major podcast networks, including, but not limited to, iTunes and Pocket Casts. If you'd like to listen to it outside of an app, it can be heard below.

http://traffic.libsyn.com/nosql/TNDP_-_Episode_17_-_Marten_for_DotNet_Developers.mp3

If you have any questions for anyone on the show, feel free to drop them a message on Twitter. If you're interested in learning more about Marten, check out the official website.

If you’re interested in learning about Couchbase as a NoSQL solution, check out the Couchbase Developer Portal for more information on using it with .NET.

The post NDP Episode #17: Marten for .NET Developers appeared first on The Couchbase Blog.

Categories: Architecture, Database

Announcing Couchbase Analytics Developer Preview 2

NorthScale Blog - Wed, 04/19/2017 - 02:14

I am very pleased to announce Couchbase Analytics Developer Preview 2. Couchbase Analytics allows you to analyze data in its natural form without defining a rigid schema and removes the need for expensive data preparation and transformation. Couchbase Analytics complements your existing investments in Analytics and significantly reduces time to insight.

Major features

The salient features of Couchbase Analytics are:

  • Rich query language: Couchbase Analytics supports SQL++, a next-generation declarative query language which has much in common with SQL. SQL++ also includes extensions for the nested, schema-optional or even schema-less world of modern NoSQL systems.
  • Common data model: Couchbase Analytics natively supports the same rich, flexible-schema document data model used in Couchbase Server, rather than forcing your data into a relational model.
  • Workload isolation: Analytical queries are run on dedicated nodes that can handle complex, resource-intensive queries without impacting the query latency and throughput of operational workloads.
  • High data freshness: Couchbase Analytics uses DCP, a fast memory-to-memory protocol that Couchbase Server nodes use to synchronize data among themselves. Consequently, analytics run on data that is synchronized in almost real-time without the overhead of data ingestion and transformation.

Developer Preview 2 focuses on ease of use and enhances query support to include:

  • Configurable parallelism: The system can execute each request using multiple cores on multiple machines. A user can manually specify the maximum execution parallelism for a request to scale it up or down.
  • Query cancellation: We heard you! Couchbase Analytics supports cancellation of an ongoing query. We also added a “cancel” button on the Analytics workbench.
  • Simplified cluster installation: You have the option to deploy the Couchbase Analytics cluster in your data center or on EC2. More details available here.

SDK Support

The SDKs for Java, .NET, Node.js, PHP, and Python now support Couchbase Analytics. Here is an example of running a Couchbase Analytics query using the Java SDK.

Java SDK (2.4.3 or later)

By default, Analytics support is off; set the “analyticsEnabled” system property to enable it.

System.setProperty("com.couchbase.analyticsEnabled", "true");

AnalyticsQueryResult result = bucket.query(AnalyticsQuery.simple(
    "SELECT bw.name AS brewer, " +
    "(SELECT br.name, br.abv FROM beers br WHERE br.brewery_id = meta(bw).id) AS beers " +
    "FROM breweries bw " +
    "ORDER BY bw.name " +
    "LIMIT 5;"));

for (AnalyticsQueryRow row : result) {
    System.out.println(row.toString());
}

Go ahead and give it a try by downloading the binary and starting with the tutorial.

We invite you to join us at Couchbase Connect NYC for more on Couchbase Analytics.

The post Announcing Couchbase Analytics Developer Preview 2 appeared first on The Couchbase Blog.

Categories: Architecture, Database

Use Docker to Deploy a Containerized Java with Couchbase Web Application

NorthScale Blog - Tue, 04/18/2017 - 13:00

Not too long ago I wrote about containerizing a Node.js RESTful API and Couchbase Server to demonstrate how easy it is to deploy web applications in a quick and reliable fashion.  In that guide we created a simple API, built a Docker image from it, deployed it as a container, and deployed Couchbase as a container.  However, I understand that not everyone is familiar with Node.js.

Here we’re going to build a simple Java RESTful API using Spring Boot, create a Docker image from it, and deploy it as a container with Couchbase. This will create a familiar environment for Java developers.

This tutorial requires that you have Docker installed and configured on your machine. With Docker we’ll be creating custom Docker images and deploying them as containers.

Create a Custom Docker Image for Couchbase Server

Let’s start by creating a custom Docker image for Couchbase Server. While an official Couchbase image exists, it isn’t automatically provisioned when deployed. Our custom image will provision itself automatically when deployed as a container.

Somewhere on your computer create a directory with a Dockerfile file and configure.sh file in it.  The Dockerfile file will be the blueprint for our image and the configure.sh file will be the provisioning script that is run when the container is deployed.

Open the configure.sh file and include the following:

#!/bin/bash
set -m

/entrypoint.sh couchbase-server &

sleep 15

curl -v -X POST http://127.0.0.1:8091/pools/default -d memoryQuota=512 -d indexMemoryQuota=512

curl -v http://127.0.0.1:8091/node/controller/setupServices -d services=kv%2Cn1ql%2Cindex

curl -v http://127.0.0.1:8091/settings/web -d port=8091 -d username=$COUCHBASE_ADMINISTRATOR_USERNAME -d password=$COUCHBASE_ADMINISTRATOR_PASSWORD

curl -i -u $COUCHBASE_ADMINISTRATOR_USERNAME:$COUCHBASE_ADMINISTRATOR_PASSWORD -X POST http://127.0.0.1:8091/settings/indexes -d 'storageMode=memory_optimized'

curl -v -u $COUCHBASE_ADMINISTRATOR_USERNAME:$COUCHBASE_ADMINISTRATOR_PASSWORD -X POST http://127.0.0.1:8091/pools/default/buckets -d name=$COUCHBASE_BUCKET -d bucketType=couchbase -d ramQuotaMB=128 -d authType=sasl -d saslPassword=$COUCHBASE_BUCKET_PASSWORD

sleep 15

curl -v http://127.0.0.1:8093/query/service -d "statement=CREATE PRIMARY INDEX ON \`$COUCHBASE_BUCKET\`"

fg 1

Couchbase can be configured through HTTP after being deployed. Our configuration script will specify instance resources, administrative credentials, a Bucket, and a primary index. You’ll notice that a variety of variables are used, such as $COUCHBASE_ADMINISTRATOR_USERNAME and $COUCHBASE_BUCKET. These can be passed in at runtime, preventing us from having to hard-code sensitive information.
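The same idea in miniature: read settings from the environment with fallbacks so nothing sensitive is hard-coded. A generic Python sketch (not the script above; the helper name and defaults are illustrative):

```python
import os

def provisioning_settings(env=os.environ):
    """Collect provisioning settings from the environment, with defaults.

    Real deployments pass these variables in at `docker run` / Compose time
    instead of baking them into the image.
    """
    return {
        "username": env.get("COUCHBASE_ADMINISTRATOR_USERNAME", "Administrator"),
        "password": env.get("COUCHBASE_ADMINISTRATOR_PASSWORD", "password"),
        "bucket": env.get("COUCHBASE_BUCKET", "default"),
    }

# Only the bucket is overridden; the rest fall back to defaults.
settings = provisioning_settings({"COUCHBASE_BUCKET": "restful-sample"})
```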

More information on provisioning a Couchbase container via HTTP can be seen in a previous article that I wrote on the topic.

With the provisioning script complete, we have to finish the Dockerfile file.  Open it and include the following:

FROM couchbase

COPY configure.sh /opt/couchbase

# make sure the provisioning script is executable inside the image
RUN chmod +x /opt/couchbase/configure.sh

CMD ["/opt/couchbase/configure.sh"]

The custom Docker image will use the official Couchbase image as its base, copy our provisioning script in during the build process, and execute the script at runtime.

To build the custom image for Couchbase, execute the following:

docker build -t couchbase-custom /path/to/directory/with/dockerfile

In the above command couchbase-custom is the image name and it is built from the path that contains the Dockerfile file.

Developing a Spring Boot RESTful API with Java

Before we can containerize our Java application we have to build it.  Because we are using Spring Boot, we need to download a starter project.  This can easily be done from the Spring Initializr website.

Spring Boot Initializr

For this project I’m using com.couchbase as my group and docker as my artifact. I also prefer Gradle, so I’m using that instead of Maven.

Extract the downloaded project, and open the project’s src/main/resources/application.properties file. In this file include the following:

couchbase_host=couchbase
couchbase_bucket=default
couchbase_bucket_password=

In the above we are assuming our host instance is called couchbase and it has a passwordless Bucket called default.  If you were testing locally, the host would probably be localhost instead.  In any case, all these properties are going to be defined at container runtime through environment variables.

Now open the project’s src/main/java/com/couchbase/DockerApplication.java file. Here we’re going to load our properties and define our endpoints. Open this file and include the following Java code:

package com.couchbase;

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.query.*;
import com.couchbase.client.java.query.consistency.ScanConsistency;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.document.JsonDocument;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.*;
import org.springframework.context.annotation.*;
import org.springframework.http.*;
import org.springframework.web.bind.annotation.*;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
import java.util.*;
import java.util.concurrent.TimeUnit;

@SpringBootApplication
@RestController
@RequestMapping("/")
public class DockerApplication {

    @Value("${couchbase_host}")
    private String hostname;

    @Value("${couchbase_bucket}")
    private String bucket;

    @Value("${couchbase_bucket_password}")
    private String password;

    public @Bean
    Cluster cluster() {
        return CouchbaseCluster.create(hostname);
    }

    public @Bean
    Bucket bucket() {
        return cluster().openBucket(bucket, password);
    }

    @RequestMapping(value="/", method= RequestMethod.GET)
    public String root() {
        return "Try visiting the `/get` or `/save` endpoints";
    }


    @RequestMapping(value="/get", method= RequestMethod.GET)
    public Object get() {
        String query = "SELECT `" + bucket().name() + "`.* FROM `" + bucket().name() + "`";
        return bucket().async().query(N1qlQuery.simple(query, N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS)))
                .flatMap(AsyncN1qlQueryResult::rows)
                .map(result -> result.value().toMap())
                .toList()
                .timeout(10, TimeUnit.SECONDS)
                .toBlocking()
                .single();
    }

    @RequestMapping(value="/save", method=RequestMethod.POST)
    public Object save(@RequestBody String json) {
        JsonObject jsonData = JsonObject.fromJson(json);
        JsonDocument document = JsonDocument.create(UUID.randomUUID().toString(), jsonData);
        bucket().insert(document);
        return new ResponseEntity<String>(json, HttpStatus.OK);
    }

	public static void main(String[] args) {
		SpringApplication.run(DockerApplication.class, args);
	}
}

Not too much is happening in the above; much of it is boilerplate code and import statements. Because the goal of this article isn’t to teach Java with Couchbase, I won’t explain each part of the code. Just know that it has three endpoints: a root endpoint, one that gets all documents in the Bucket, and one that saves new documents to Couchbase.

If you’re using Gradle like I am, you need to change the build.gradle file. It needs to have a task created and dependencies added. Your build.gradle file should look something like this:

buildscript {
	ext {
		springBootVersion = '1.5.2.RELEASE'
	}
	repositories {
		mavenCentral()
	}
	dependencies {
		classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
	}
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'org.springframework.boot'

version = '0.0.1-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
	mavenCentral()
}


dependencies {
    compile('org.springframework.boot:spring-boot-starter-web')
	compile('org.springframework:spring-tx')
	compile('org.springframework.security:spring-security-core')
	compile('com.couchbase.client:java-client')
	testCompile('org.springframework.boot:spring-boot-starter-test')
}

task(run, dependsOn: 'classes', type: JavaExec) {
    main = 'com.couchbase.DockerApplication'
    classpath = sourceSets.main.runtimeClasspath
}

To build the application, execute the following:

gradle build -x test

Now you’ll have a JAR file to be used in our Docker image.

Build a Custom Docker Image for the Spring Boot Application

Building a custom image will require that we have a Dockerfile file in place.  At the base of your Java project, add a Dockerfile file and include the following:

FROM openjdk:8

# the JAR name comes from the project name and version; adjust to match your build output
COPY ./build/libs/docker-0.0.1-SNAPSHOT.jar spring-boot.jar

CMD java -jar spring-boot.jar

In the above we’re using the official OpenJDK image as our base and we’re copying our JAR into the image at build time. At deployment, the JAR is executed.

To build this image, execute the following:

docker build -t spring-boot-custom /path/to/directory/with/dockerfile

The above command should look familiar. We’re creating a spring-boot-custom image using the blueprint found in the directory of our Dockerfile file.

For more information on creating custom Docker images, you can visit a previous article I wrote called, Build a Custom Docker Image for Your Containerized Web Application.

Deploying the Couchbase and the Spring Boot Images as Containers

There are a few options when it comes to deploying our images. We can use a Compose file or deploy them as vanilla containers. I find Compose to be the cleaner approach, so we’ll use that.

Somewhere on your computer create a docker-compose.yml file and include the following:

version: '2'

services:
    couchbase:
        image: couchbase-custom
        ports:
            - 8091:8091
            - 8092:8092
            - 8093:8093
        environment:
            - COUCHBASE_ADMINISTRATOR_USERNAME=Administrator
            - COUCHBASE_ADMINISTRATOR_PASSWORD=password
            - COUCHBASE_BUCKET=default
            - COUCHBASE_BUCKET_PASSWORD=

    spring-boot:
        image: spring-boot-custom
        ports:
            - 8080:8080
        environment:
            - COUCHBASE_HOST=couchbase
            - COUCHBASE_BUCKET=default
            - COUCHBASE_BUCKET_PASSWORD=
        restart: always

In the above file we are defining the custom images that we built and we are doing port mapping to the host machine.  What is particularly interesting is the environment options.  These match the variables that we have in our application.properties and configure.sh files.

To deploy our containers with Compose, execute the following:

docker-compose run -d --service-ports --name couchbase couchbase
docker-compose run -d --service-ports --name spring-boot spring-boot

A note about the above commands: Couchbase does not deploy instantly. You’ll need to wait until it has completely launched before you deploy the Java application. After both applications are launched, check them out by navigating to the Java application in your web browser.
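One way to automate that wait, sketched below in Python (a hypothetical helper, not part of the post’s tooling), is to poll Couchbase’s REST port until it answers before launching the second container:

```python
import time
import urllib.request
import urllib.error

def wait_for_http(url, timeout=120, interval=2):
    """Poll `url` until it answers with HTTP 200, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; try again shortly
        time.sleep(interval)
    return False

# e.g. wait_for_http("http://localhost:8091/ui/index.html") before deploying spring-boot
```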

Conclusion

You just saw how to create custom Docker images for a Spring Boot application and Couchbase Server. After deploying each as a container, they are able to communicate with each other, which is incredibly convenient for maintenance.

If you’re interested in seeing this done with Node.js, check out the previous article I wrote on the topic. If you’re interested in learning more about the Java SDK for Couchbase, check out the Couchbase Developer Portal.

The post Use Docker to Deploy a Containerized Java with Couchbase Web Application appeared first on The Couchbase Blog.

Categories: Architecture, Database

Managing REST APIs with Swagger (video)

NorthScale Blog - Tue, 04/18/2017 - 00:48

Couchbase incorporated Swagger into our documentation a few months ago. “Swagger” refers to an ecosystem of tools and other resources for managing REST APIs.

Core to Swagger is the Swagger specification. (The group behind Swagger donated the spec to the OpenAPI Initiative. The original site, swagger.io remains the main site for tools and such.)

Once you have the API spec defined for your endpoints, you get several valuable capabilities. My two favorites are the “live” embeddable documentation and the client libraries. Take a look at this short video for a demonstration of some of the features of Swagger.

For an example of using a Swagger JavaScript client, take a look at this blog post: https://blog.couchbase.com/node-js-swagger-monitor-document-changes-couchbase-mobile/

You can find the Swagger specs for the Sync Gateway public API here, and the admin API here.

Here are the Sync Gateway configurations I refer to in the video. The first has the CORS configuration to allow access from swagger.io.

{
  "log": ["*"],
  "CORS": {
     "Origin":["*"],
     "Headers": ["Content-Type"]
  },
  "databases": {
    "db": {
      "server": "walrus:",
      "users": { "GUEST": { "disabled": false, "admin_channels": ["*"] } },
      "allow_empty_password": true
    }
  }
}

The second is for trying out calls through our live documentation.

{
   "log": [
      "*"
   ],
   "SSLCert": "cert.pem",
   "SSLKey": "privkey.pem",
   "CORS": {
      "Origin": ["*"],
      "Headers": ["Content-Type"]
   },
   "databases": {
      "db": {
         "server": "walrus:",
         "users": {
            "GUEST": {
               "disabled":false,
               "admin_channels": [
                  "*"
               ]
            }
         }
      }
   }
}

(Note: There’s currently a known issue with making the calls from the Couchbase documentation. Couchbase now requires access via HTTPS, in keeping with security best practices, so the documentation also redirects using HTTPS. That means you have to set up Sync Gateway to use SSL. See the documentation here for more information.)

Postscript

Download Couchbase and Sync Gateway here. See our documentation for how to add Couchbase Lite to a project.

Check out more resources on our developer portal and follow us on Twitter @CouchbaseDev.

You can post questions on our forums. And we actively participate on Stack Overflow.

Hit me up on Twitter with any questions, comments, topics you’d like to see, etc. @HodGreeley

The post Managing REST APIs with Swagger (video) appeared first on The Couchbase Blog.

Categories: Architecture, Database

Querying Couchbase Sync Gateway using Postman

NorthScale Blog - Mon, 04/17/2017 - 09:00

This post discusses a convenient way to query, explore, and test the REST API exposed by the Couchbase Mobile Sync Gateway using the Postman Chrome Developer tool. Sync Gateway exposes REST, batch, and stream interfaces that allow clients to interact with it over the Internet.

NOTE: We will be discussing Couchbase Mobile v1.4, which is the current production release. There is also a newer Developer Preview, version 2.0, of Couchbase Mobile.

Background

Couchbase Sync Gateway is part of the Couchbase Mobile stack. It is an Internet-facing synchronization mechanism that securely syncs data across devices as well as between devices and the cloud. Sync Gateway listens for requests on two ports: the Admin port (defaults to 4985) and the Public port (defaults to 4984). In production deployments, the Admin port is typically blocked from access over the Internet.

Installation of Couchbase Sync Gateway

Please follow the instructions in the blog post to install the Sync Gateway in your Mac OS development environment. See the downloads site for all the available packages, and the full installation guide for complete details. To install on Linux distributions other than the supported ones, see this post.

Installation of Postman

Postman is a Chrome Developer tool that can be downloaded for free from the Chrome browser web store.

Using Postman to Query the Couchbase Sync Gateway
  • Get the Postman collections

The Postman collection files and environment definition for the Admin and Public interfaces of the Sync Gateway are available for download from this GitHub repo.

git clone git@github.com:couchbaselabs/SyncGateway-Postman-Collection.git

There should be three files:

Sync-Gateway-Admin.postman_collection 

This is the Postman collection corresponding to the Admin interface of the Sync Gateway.

Sync-Gateway-Public.postman_collection

This is the Postman collection corresponding to the Public interface of the Sync Gateway.

Sync-Gateway-Environment.postman_environment

This is the environment definitions file that defines the variables used by the Admin and Public collections.

Launch the Postman App

  • Import the collections

Follow the steps in the video below to import the collections that were downloaded in the previous step.

Importing Sync Gateway Postman collections

  • Import the environment file

Follow the steps in the video below to import the environment definition corresponding to the Postman collections

Importing the Environment Definition for Admin and Public Interfaces

  • Set the appropriate environment

Follow the steps in the video below to set the environment to the one that you just imported. Update the values of the variables to suit your environment. Make sure that the adminurl variable points to the Sync Gateway Admin port and the publicurl variable points to the Sync Gateway Public port. These default to http://localhost:4985 and http://localhost:4984, respectively.

Setting the Postman Environment

  • That’s it! Run your queries

The following is a demonstration of running a request on the Admin Interface

Querying the Admin Interface of Sync Gateway

The following is a demonstration of running a request on the Public Interface. Make sure the authorization header is set appropriately for the authentication mechanism in use.

Querying Sync Gateway Public Interface

Querying the Public Interface of Sync Gateway

CLI Option

If you are interested in executing the Postman collections from the command line interface (for instance, to integrate this into your Continuous Integration process), check out Newman, a CLI runner for Postman.

For example, the command below will run the Sync-Gateway-Admin.postman_collection with the Sync-Gateway-Environment file.

If your Sync Gateway is running with SSL enabled and is using a self-signed certificate, the “-k” option will suppress validation of the cert (“insecure” SSL connection). This is not recommended in production environments.

newman run -k Sync-Gateway-Admin.postman_collection --environment Sync-Gateway-Environment.postman_environment --bail --delay-request 300

Next Steps

The Postman collections provide an easy way to query, explore, and test the REST interface exposed by the Sync Gateway. If you see an issue in the collection definitions or would like to enhance them, please submit a pull request to the GitHub repo.

If you have further questions, feel free to reach out to me at Twitter @rajagp or email me priya.rajagopal@couchbase.com.

The Couchbase Mobile Dev Forums is another great place to get your mobile-related questions answered.

Also, check out the Couchbase Sync Gateway API definitions for details on the web interface.

The post Querying Couchbase Sync Gateway using Postman appeared first on The Couchbase Blog.

Categories: Architecture, Database

Couchbase 5.0 April 2017 Developer Build Features & Enhancements

NorthScale Blog - Thu, 04/13/2017 - 20:01

April showers bring May flowers, but until then it’s time for the April 2017 developer build.

The April 2017 Developer Build has a ton of bug fixes and feature enhancements, and we are one step closer to the stable release of Couchbase Server 5.0.

You can get the April 2017 developer build from the Couchbase downloads page in the developer tab.

April 2017 Developer Build

New platforms in April 2017 Developer Build

Based on the feedback we have received, here are some additional platforms that we are introducing starting with the April 2017 Developer Build. We hope you try out the April 2017 Developer Build on these platforms and give us your feedback!

  • Oracle Linux 7
  • Ubuntu 16.04
  • Windows Server 2016

Note that Ubuntu 12 is EOL this month, so future Couchbase releases will likely not support Ubuntu 12.

Bugs

Thank you for your feedback and for helping us identify and fix bugs in Couchbase Server 5.0. Here is a list of the critical and major bugs that were fixed in the April 2017 Developer Build.

Issue #

Description

MB-23102

[Ephemeral]: Fix the potential regression due to extra memory usage for sequential links in Stored Value

MB-23562

Ephemeral buckets: item count goes to -1

MB-23664

XDCR between ephemeral buckets gets stuck

MB-23055

[FTS] RBAC: Unable to create alias on an index created by the same user on sasl bucket

MB-23139

[FTS] moss compaction unit test assumes one particular segment impl

MB-23349

[FTS] up to 10x performance degradation when using large “query size” (limit) setting

MB-22870

[FTS] ephemeral: Docs not getting indexed when multiple indexes are present on an ephemeral bucket

MB-22871

[FTS] ephemeral: No docs indexed to a memory-only-index from ephemeral bucket

MB-23561

[FTS] moss store files are not deleted after compaction

MB-23674

[FTS] race detected in cbft

MB-21785

[FTS] UI: stats: the “items remaining” graph shows wrong data when KV mutations are in progress

MB-21783

[FTS] UI: stats: The “queries/sec” graph shows wrong data in FTS multi-node cluster setup

MB-21645

[FTS] slow query log conflicts with requirement to not log user data

MB-23227

couchbase server install fails on centos 6 machines

MB-23579

Windows Docker Container : service-stop.bat shuts down the container

MB-23517

Set → Remove → Set sequence of KV operations using the same key blocks the client connection

MB-23429

Auth failure with mem client for LDAP user

MB-23269

Memcached crashes when trying to write an audit event to a file with wrong permissions

MB-22691

ability to upsert the xattr key with any names depends on the keys already set in xattrs

MB-23347

Very low rate of insert operations due to lock contention

MB-23479

Query-select – can query any bucket

MB-23197

Addition of new node fails due to “Join completion call failed. Failed to start ns_server cluster processes back.”

MB-22759

N1QL insert/delete/update operations incorrectly authorized

MB-23758

Eliminate GO_DEFAULT_VERSION

MB-23372

Gap in covering array indexes

MB-23222

YCSB workload e with wrong n1ql syntax 100% memory is consumed

MB-23203

Index join chooses the wrong index and doesn’t choose the right index consistently.

MB-23186

Index Collation checks can’t use the API setting.

MB-23057

Scan is covered avoid IntersectScan

MB-23361

[N1QL] test_order_by_alias_aggr_fn functional test is failing

MB-23236

[N1QL RBAC]Incorrect message displayed when indexer storage mode is not set

MB-23179

[IndexAPI2] cbq engine panics with create index desc

MB-23049

cbq-engine constantly re-validates empty credentials

MB-23277

[n1ql rbac] The builtin user is getting updated when specifying a new role on a different bucket

MB-23245

[N1QL][CURL] Occasionally setting the connect-timeout option results in a panic

MB-23165

[N1QL] test_indexcountscan fails

MB-23132

CURL : Remove max-redirs option

MB-23101

Restrict the limit pushdown on IntersectScan(s)

MB-23219

investigate query logging if there is a parser crash.

MB-23134

CURL – Disable all other protocols except HTTP/HTTPS

MB-22994

[N1QL]Query with predicates on 3 different fields with OR clause does not use UnionScan

MB-23610

[N1QL][Monitoring] Cannot delete/filter system:completed_requests by node

MB-23723

request_plus range queries with LIMIT are slow

MB-23716

N1QL: test_array_index_regexp_covering fails,query with regular expression times out on centos and windows,panic seen in query.log

MB-21971

Expose meta().cas and meta().expiration to N1QL

MB-22874

jdbc-json driver throws nullpointer exception with YCSB workload

MB-23106

panic found in indexer

MB-22920

Intermittent failure: “Index scan timed out”

MB-22879

Throughput of composite queries with TOKENS() dropped from ~24K to ~4K queries/sec

MB-23729

Initial indexing of 200M items increased from 6 minutes to 1 hour

MB-23657

Q2, Q3, and YCSB Workload E throughput dropped to 400 queries/sec

MB-22982

Tree form query output fails to print in query workbench

MB-23311

cbbackupmgr crashes with “fatal error: concurrent map read and map write”

MB-23490

Changing the password of user from a particular session should invalidate other sessions

MB-23280

[UI]Authentication Source is not selected for a ldap user

MB-23016

[FTS UI] Rebalance progress indicator doesn’t show granular level progress

MB-23437

[UI] Not able to set Index Storage settings on the Index Node at setup time

MB-23085

View Engine not detecting meta.id if doc has xattrs

MB-23423

Memcached connection closed for no apparent reason after a couple minutes

MB-22997

Fix deadlock issue in when closing upr stream

MB-23228

Avoid frequent replication restart when node is removed from target cluster

MB-23728

Remote cluster ref cannot rotate on target nodes when target is elastic search

We still want your feedback!

Stay tuned to the Couchbase Blog for information about what’s coming in the next developer build.

Interested in trying out some of these new features? Download Couchbase Server 5.0 April 2017 Developer Build today!

The 5.0 release is fast approaching, but we still want your feedback!

Bugs: If you find a bug (something that is broken or doesn’t work how you’d expect), please file an issue in our JIRA system at issues.couchbase.com or submit a question on the Couchbase Forums. Or, contact me with a description of the issue. I would be happy to help you or submit the bug for you (my Couchbase handlers let me take selfies on our cartoonishly big couch when I submit good bugs).

Feedback: Let me know what you think. Something you don’t like? Something you really like? Something missing? Now you can give feedback directly from within the Couchbase Web Console. Look for the feedback icon at the bottom right of the screen.

In some cases, it may be tricky to decide if your feedback is a bug or a suggestion. Use your best judgement, or again, feel free to contact me for help. I want to hear from you. The best way to contact me is either Twitter @mgroves or email me matthew.groves@couchbase.com.

The post Couchbase 5.0 April 2017 Developer Build Features & Enhancements appeared first on The Couchbase Blog.

Categories: Architecture, Database

SQL to JSON Data Modeling with Hackolade

NorthScale Blog - Thu, 04/13/2017 - 19:15

SQL to JSON data modeling is something I touched on in the first part of my “Moving from SQL Server to Couchbase” series. Since that blog post, some new tooling has come to my attention from Hackolade, who have recently added first-class Couchbase support to their tool.

In this post, I’m going to review the very simple modeling exercise I did by hand, and show how IntegrIT’s Hackolade can help.

I’m using the same SQL schema that I used in the previous blog post series; you can find it on GitHub (in the SQLServerDataAccess/Scripts folder).

Review: SQL to JSON data modeling

First, let’s review: the main way to represent relations in a relational database is via a key/foreign-key relationship between tables.

When looking at modeling in JSON, there are two main ways to represent relationships:

  • Referential – Concepts are given their own documents, but reference other document(s) using document keys.

  • Denormalization – Instead of splitting data between documents using keys, group the concepts into a single document.

I started with a relational model of shopping carts and social media users.

Relational model of SQL before moving to JSON

In my example, I said that a Shopping Cart – to – Shopping Cart Items relationship in a relational database would probably be better represented in JSON by a single Shopping Cart document (which contains Items). This is the “denormalization” path. Then, I suggested that a Social Media User – to – Social Media User Update relationship would be best represented in JSON with a referential relationship: updates live in their own documents, separate from the user.
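To make the two shapes concrete, here is a small Python sketch of both models (the field names and values are illustrative, not taken from the original schema):

```python
# Denormalized: the shopping cart and its items live in one JSON document.
shopping_cart = {
    "type": "shoppingcart",
    "user": "jdoe",
    "items": [
        {"name": "widget", "price": 9.99, "quantity": 2},
        {"name": "gadget", "price": 19.99, "quantity": 1},
    ],
}

# Referential: each social media update is its own document and points back
# to its user through a document key.
user = {"type": "user", "id": "user::jdoe", "name": "Jane Doe"}
update = {
    "type": "update",
    "userId": "user::jdoe",  # reference to the user document's key
    "body": "Hello from FriendBook!",
}

# Reading a cart takes a single fetch; reading a user's updates follows the key.
assert update["userId"] == user["id"]
```

The cart's items are fetched in one read with the cart itself, while updates can grow without bound, which is why they get their own documents.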

This was an entirely manual process. For that simple example, it was not difficult. But with larger models, it would be helpful to have some tooling to assist in the SQL to JSON data modeling. It won’t be completely automatic: there’s still some art to it, but the tooling can do a lot of the work for us.

Starting with a SQL Server DDL

This next part assumes you’ve already run the SQL scripts to create the 5 tables: ShoppingCartItems, ShoppingCart, FriendBookUsers, FriendBookUpdates, and FriendBookUsersFriends. (Feel free to try this on your own databases, of course).

The first step is to create a DDL script of your schema. You can do this with SQL Server Management Studio.

First, right click on the database you want. Then, go to “Tasks” then “Generate Scripts”. Next, you will see a wizard. You can pretty much just click “Next” on each step, but if you’ve never done this before you may want to read the instructions of each step so you understand what’s going on.

Generate DDL script from SQL Management Studio

Finally, you will have a SQL file generated at the path you specified.

This will be a text file with a series of CREATE and ALTER statements in it (at least). Here’s a brief excerpt of what I created (you can find the full version on Github).

CREATE TABLE [dbo].[FriendBookUpdates](
	[Id] [uniqueidentifier] NOT NULL,
	[PostedDate] [datetime] NOT NULL,
	[Body] [nvarchar](256) NOT NULL,
	[UserId] [uniqueidentifier] NOT NULL,
 CONSTRAINT [PK_FriendBookUpdates] PRIMARY KEY CLUSTERED
(
	[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

-- etc...

By the way, this should also work with SQL Azure databases.

Note: Hackolade works with other types of DDLs too, not just SQL Server, but also Oracle and MySQL.

Enter Hackolade

This next part assumes that you have downloaded and installed Hackolade. This feature is only available on the Professional edition of Hackolade, but there is a 30-day free trial available.

Once you have a DDL file created, you can open Hackolade.

In Hackolade, you will be creating/editing models that correspond to JSON models: Couchbase (of course) as well as DynamoDB and MongoDB. For this example, I’m going to create a new Couchbase model.

Create a new Couchbase model in Hackolade

At this point, you have a brand new model that contains a “New Bucket”. You can use Hackolade as a designing tool to visually represent the kinds of documents you are going to put in the bucket, the relationships to other documents, and so on.

We already have a relational model and a SQL Server DDL file, so let’s see what Hackolade can do with it.

Reverse engineer SQL to JSON data modeling

In Hackolade, go to Tools → Reverse Engineer → Data Definition Language file. You will be prompted to select a database type and a DDL file location. I’ll select “MS SQL Server” and the “script.sql” file from earlier. Finally, I’ll hit “Ok” to let Hackolade do its magic.

SQL to JSON data modeling reverse engineering with Hackolade

Hackolade will process the 5 tables into 5 different kinds of documents. So, what you end up with is very much like a literal translation.

SQL to JSON data modeling reverse engineering with Hackolade result

This diagram gives you a view of your model. But now you can think of it as a canvas to construct your ultimate JSON model. Hackolade gives you some tools to help.

Denormalization

For instance, Hackolade can make suggestions about denormalization when doing SQL to JSON data modeling. Go to Tools → Suggest denormalization. You’ll see a list of document kinds in “Table selection”. Try selecting “shoppingcart” and “shoppingcartitems”. Then, in the “Parameters” section, choose “Array in parent”.

Suggest denormalization in Hackolade

After you do this, you will see that the diagram looks different. Now, the items are embedded into an array in shoppingcart, and there are dashed lines going to shoppingcartitems. At this point, we can remove shoppingcartitems from the model (in some cases you may want to leave it, that’s why Hackolade doesn’t remove it automatically when doing SQL to JSON data modeling).

Remove excess table in Hackolade

Notice that there are other options here too:

  • Embedding Array in parent – This is what was demonstrated above.

  • Embedding Sub-document in child – If you want to model the opposite way (e.g. store the shopping cart within the shopping cart item).

  • Embedding Both – Both array in parent and sub-document approach.

  • Two-way referencing – Represent a many-to-many relationship. In relational tables, this is typically done with a “junction table” or “mapping table”.

Also note cascading. This is to prevent circular referencing where there can be a parent, child, grandchild, and so on. You select how far you want to cascade.
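For example, the many-to-many FriendBookUsersFriends junction table could collapse into two-way references, where each user document carries an array of friend document keys (a sketch with illustrative field names):

```json
{
  "type": "user",
  "name": "Matt",
  "friends": ["user::456", "user::789"]
}
```

The documents for user::456 and user::789 would each contain a friends array referencing this document's key in return, which is what makes the referencing "two-way."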

More cleanup

There are a couple of other things that I can do to clean up this model.

  • Add a ‘type’ field. In Couchbase, we might need to distinguish shoppingcart documents from other documents. One way to do this is to add a “discriminator” field, usually called ‘type’ (but you can call it whatever you like). I can give it a “default” value in Hackolade of “shoppingcart”.

  • Remove the ‘id’ field from the embedded array. The SQL table needed this field for a foreign key relationship. Since it’s all embedded into a single document, we no longer need this field.

  • Change the array name to ‘items’. Again, since a shopping cart is now consolidated into a single document, we don’t need to call it ‘shoppingcartitems’. Just ‘items’ will do fine.

Clean up JSON data model in Hackolade
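After those cleanup steps, a shoppingcart document might look something like this (the values are illustrative):

```json
{
  "type": "shoppingcart",
  "dateCreated": "2017-04-13T10:00:00",
  "items": [
    { "name": "Couchbase coffee mug", "price": 12.99, "quantity": 1 },
    { "name": "Couchbase t-shirt", "price": 19.99, "quantity": 2 }
  ]
}
```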

Output

A model like this can be a living document that your team works on. Hackolade models are themselves stored as JSON documents. You can share with team members, check them into source control, and so on.

You can also use Hackolade to generate static documentation about the model. This documentation can then be used to guide the development and architecture of your application.

Go to File → Generate Documentation → HTML/PDF. You can choose what components to include in your documentation.

Summary

Hackolade is a NoSQL modeling tool created by the IntegrIT company. It’s useful not only in building models from scratch, but also in reverse engineering for SQL to JSON data modeling. There are many other features about Hackolade that I didn’t cover in this post. I encourage you to download a free trial of Hackolade today. You can also find Hackolade on Twitter @hackolade.

If you have questions about Couchbase Server, please ask away in the Couchbase Forums. Also check out the Couchbase Developer Portal for more information on Couchbase for developers. Always feel free to contact me on Twitter @mgroves.

The post SQL to JSON Data Modeling with Hackolade appeared first on The Couchbase Blog.

Categories: Architecture, Database

Announcing Couchbase Server 4.5.1 CE

NorthScale Blog - Thu, 04/13/2017 - 16:29

We at Couchbase are committed to the continued growth of our Couchbase community and we believe that our community edition is a good way for developers in the community to get to know Couchbase.

Today, we are glad to announce that Couchbase Community Edition 4.5.1 is generally available. This release further fortifies the previously released 4.5 Community Edition series with the top bug fixes to improve product stability.

Start exploring Couchbase Server Community Edition 4.5.1 today by downloading the software or reviewing the release notes.

Need an Enterprise-Grade Solution?

If you are managing business-critical use-cases in a production environment, you may consider using Couchbase Server Enterprise Edition. You can learn more about Couchbase Server Enterprise Edition here.

The post Announcing Couchbase Server 4.5.1 CE appeared first on The Couchbase Blog.

Categories: Architecture, Database

Create a Continuous Deployment Pipeline with Jenkins and Java

NorthScale Blog - Thu, 04/13/2017 - 16:04

Lately I’ve been working a lot with Jenkins for continuous deployment of one of my applications. In case you haven’t seen it, the keynote demonstration given at Couchbase Connect 2016 used Jenkins to build and redeploy the Java backend and Angular frontend every time a change was detected on GitHub. This is the application I helped build.

So how did I leverage Jenkins to make this possible? We’re going to see how to create a continuous deployment pipeline for a Java application which includes building and deploying to a server.

To be clear, I won’t be explaining how to use the Couchbase Connect 2016 application, which I’m calling GitTalent, with Jenkins as it is a bit more complicated. We’re going to start slow to get a better understanding.

The Requirements

There are a few requirements that need to be met in order to be successful with this guide.  They can be found below:

In this example, Jenkins, the JDK, and Couchbase Server will all reside on the same machine.  This means that Jenkins will pull code from GitHub, build it using the JDK, and deploy it locally rather than to some remote server.  That said, some of the automation is removed from this example: GitHub webhooks, which let Jenkins build automatically when commits are found, require a machine that is not localhost.  The guide is still effective without them.

While Couchbase is a requirement, it is not the focus of this guide. It needs to be present for the sample application that we’ll be pulling off GitHub.

Configuring Jenkins with the Required Plugins and Dependencies

At this point you should have at least downloaded Jenkins. We’re going to walk through setting it up because it can be a little confusing for a first time user.

You can run Jenkins by executing the following from the Command Prompt or Terminal:

java -jar jenkins.war --httpPort=8080

The above command will make Jenkins accessible at http://localhost:8080 in your web browser. Upon first launch you’ll be guided through a wizard for configuration.

Jenkins Configuration Part 1

As part of the first configuration step, you’ll need to obtain a generated value to be used as the super admin password.

After providing the password as per the instructions on the screen, you’ll be asked about the installation of Jenkins plugins.

Jenkins Configuration Part 2

We’re going to start by installing the suggested plugins and then install extras later. It may take some time, but after the suggested plugins have been installed you’ll be prompted to set up your first administrative user account.

Jenkins Configuration Part 3

You can choose to create an administrative user, or continue to use the generated password when working with Jenkins.

After creating an account, Jenkins is ready to be used.  Before we create our first workflow, or job, we need to install an extra plugin.

Manage Jenkins

You’ll want to choose Manage Jenkins, which will bring us to a list of management sections, one of which is a section for managing plugins.

Choose to Manage Plugins and do a search for the Post-Build Script plugin.

Install Jenkins Post-Build Script Plugin

This plugin will allow us to execute scripts on the Jenkins host after the build has completed without errors.  We need this so we can deploy our Java JAR after it has been packaged.

Jenkins as a whole has been configured. While it is ready for us to create jobs, we’re first going to get Couchbase ready to go.

Preparing Couchbase with a Bucket for Document Storage

Couchbase isn’t the highlight of this example, but since we’re using one of my old projects, it was a requirement per the project.

If you haven’t already downloaded Couchbase, do so now and walk through the configuration. For help with configuring Couchbase, check out a previous article I wrote.

What is important is the Bucket that we’ll be using. Make sure to create a Bucket titled restful-sample with at least a primary N1QL index.
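If you prefer to create the primary index from the cbq shell or the Query Workbench rather than clicking through the web UI, a single N1QL statement does it:

```sql
CREATE PRIMARY INDEX ON `restful-sample`;
```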

If you need help with any of this, a full write-up of our project can be found here. Of course, that write-up doesn’t include continuous deployment with Jenkins.

Now we can focus on our pipeline.

Creating a Job for Building and Deploying an Application Developed with Java

With Jenkins and Couchbase ready to go we can focus on creating a job that will control our pipeline.  To reiterate our plan, we will be pulling from GitHub, packaging a JAR in the build process, and deploying that JAR during the post-build (deployment) process.

Go ahead and create a new job in Jenkins. The first thing you’ll be asked for is a project name as well as the type of project.

Jenkins Freestyle Java Project

Give the job a unique name and also make sure to choose Freestyle Project from the list. When configuring this project, our first concern is the GitHub repository that we’ll be using.

Jenkins Java Project Source Control

If your repository is not public, don’t worry as you can add credentials, but for this example my project is public.

I’m using the project found here:

https://github.com/couchbaselabs/restful-angularjs-java

Feel free to use your own project if you’d like. With the source control figured out, let’s move on to the Build step.

Jenkins Build Java Project

Here we can enter whatever shell commands we want in order to build the application that was pulled from Git.

We only want to generate a JAR file in this example, which can be done by adding the following command:

mvn clean package

This leaves us with a JAR file in a new target directory that is only relevant to Jenkins at this time.  The final step is to kill any already running instance of the application and run it again.

Since we installed the Post-Build Script plugin, we can define what happens with our build.

Jenkins Post-Build Java Actions

In this example we aren’t deploying it to another machine, but we could. Instead we’re just going to run it on the local machine.

Choose to execute a shell script and include the following:

ps | grep java-fullstack | awk '{print $1}' | xargs kill -9 || true
env SERVER.PORT=8081 nohup java -jar ./target/java-fullstack-1.0-SNAPSHOT.jar &

The above is cheating a little, but it’s necessary because we’re running everything locally on the same machine.

The first command will search for any running process on my Mac that looks like our Java application. Once found it will kill the process. Even if it doesn’t find the process, the command will still succeed because of the || true at the end. This prevents the job from failing.
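You can verify the || true behavior in isolation; this sketch uses a process name that should not exist, so grep finds nothing:

```shell
# grep matches nothing, so nothing is killed; '|| true' still forces a zero exit code
ps | grep some-nonexistent-process | awk '{print $1}' | xargs kill -9 2>/dev/null || true
echo $?   # prints 0
```

Without the || true, a failed grep or kill would report a non-zero status and Jenkins would mark the build as failed.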

After the process is stopped, we run the built JAR in the background.

Keep in mind that your script may be a bit different if you’re running locally on Linux or Windows. In a production scenario, you’ll probably use the SSH plugin for Jenkins and push the JAR to some server and restart the controlling daemon.

Go ahead and try to run the job using the Build Now button. If you don’t want to initiate it manually, consider adding a hook or a cron timer in the configuration area that we had just explored.

If everything goes well, you’ll have your Java application accessible at http://localhost:8081 and it will communicate with Couchbase Server.

Conclusion

You just saw how to configure Jenkins to do continuous deployment on a Java application that communicates with Couchbase Server.  This Jenkins configuration will pull from GitHub, build a JAR, and deploy it.  While it should be a little more polished in a production scenario, it is a good way to get started.

Want to use Jenkins for continuous deployment of microservices bundled into Docker containers?  Check out a previous article that I wrote titled, Continuous Deployment of Web Application Containers with Jenkins and Docker.

Want more information on using Java with Couchbase?  Check out the Couchbase Developer Portal for examples and documentation.

I encourage you to check out the Couchbase Connect 2016 keynote demonstration if you haven’t already.

The post Create a Continuous Deployment Pipeline with Jenkins and Java appeared first on The Couchbase Blog.

Categories: Architecture, Database

Docker and Vaadin Meet Couchbase – Part 1

NorthScale Blog - Thu, 04/13/2017 - 13:04

Ratnopam Chakrabarti is a software developer currently working for Ericsson Inc. He has been focused on IoT, machine-to-machine technologies, connected cars, and smart city domains for quite a while. He loves learning new technologies and putting them to work. When he’s not working, he enjoys spending time with his 3-year-old son.

Ratnopam Chakrabarti

Introduction

Running Couchbase as a Docker container is fairly easy. Simply inherit from the latest, official Couchbase image and add your customized behavior according to your requirement. In this post, I am going to show how you can fire up a web application using Spring Boot, Vaadin, and of course Couchbase (as backend) – all using Docker.

This is part one of a two-part series where I am going to describe ways to run a fully featured web application powered by Couchbase as the NoSQL backend using Docker toolsets. In this post, I will describe the steps to set up and configure a Couchbase environment using Docker; I will also mention ways to Dockerize the web application (in this case, it’s a Spring Boot application with Vaadin) and talk to the Couchbase backend for the CRUD operations.

Prerequisites

Docker needs to be set up and working. Please refer to https://docs.docker.com/engine/installation/ for details of the installation. If you are on macOS or Windows 10, you can go for native Docker packages. If you are on an earlier version of Windows (7 or 8) like me, then you can use Docker Toolbox, which comes with Docker Machine.

The Application

Ours is a simple CRUD application for maintaining a bookstore. Users of the application can add books by entering information such as title and/or author, and can then view the list of books, edit some information, and even delete the books. The app is built on Spring Boot. The backend is powered by Couchbase 4.6, and for the front-end I have used Vaadin 7 since it has pretty neat integration with the Spring Boot framework.

The main steps to build this app are listed below:

  • Run and configure Couchbase 4.6, including setting up the bucket and services using Docker.
  • Build the application using Spring Boot, Vaadin, and Couchbase.
  • Dockerize and run the application.
Run Couchbase 4.6 using Docker

Check your Docker host IP. You can use:

docker-machine ip default

to find out the Docker host IP address. You can also check the environment variables by running:

printenv | grep -i docker_host

which would show something like this:

DOCKER_HOST=tcp://192.168.99.100:2376

The next task is to write the Dockerfile to run and configure Couchbase. For our application to talk to the Couchbase backend, we need to set up a bucket named “books” and also enable the index and query services on the Couchbase node. The Dockerfile for all of this can be found here.

The Dockerfile uses a configuration script to set up the Couchbase node. Couchbase offers REST endpoints that can easily enable services such as query (N1QL) and index. One can also set up buckets using these REST APIs. The configuration script can be downloaded from here.
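As a rough sketch of what such a configuration script does (these are standard Couchbase REST endpoints, matching the ones visible in the startup output later in this post, but the parameter values here are illustrative assumptions rather than a copy of the linked script):

```shell
# Set the cluster memory quotas (values are illustrative)
curl -X POST http://127.0.0.1:8091/pools/default \
  -d memoryQuota=300 -d indexMemoryQuota=300

# Enable the data, query, and index services on the node
curl -X POST http://127.0.0.1:8091/node/controller/setupServices \
  -d 'services=kv,n1ql,index'

# Set the administrator credentials
curl -X POST http://127.0.0.1:8091/settings/web \
  -d port=8091 -d username=Administrator -d password=password

# Create the "books" bucket
curl -u Administrator:password -X POST http://127.0.0.1:8091/pools/default/buckets \
  -d name=books -d ramQuotaMB=100
```

These commands must run against a live Couchbase node, which is why the Dockerfile wires the script into the container's startup.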

Let’s try to build and run the Couchbase image now.

Go to the directory where the Dockerfile is.

Build the image ->

docker build -t chakrar27/couchbase:books .

Replace chakrar27 with your image prefix or Docker Hub ID.

Once the image is built, verify by doing

$ docker images

 

REPOSITORY                   TAG                    IMAGE ID            CREATED             SIZE

chakrar27/couchbase          books               93e7ba199eef        1 hour ago         581 MB

couchbase                    latest              337dab68d2d1        9 days ago          581 MB

Run the image by typing

docker run -p 8091-8093:8091-8093 -p 8094:8094 -p 11210:11210 chakrar27/couchbase:books

Sample output:

Starting Couchbase Server -- Web UI available at http://<ip>:8091 and logs available in /opt/couchbase/var/lib/couchbase/logs

Start configuring env by calling REST endpoints

Note: Unnecessary use of -X or --request, POST is already inferred.

*   Trying 192.168.99.100...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)

> POST /pools/default HTTP/1.1

> Host: 127.0.0.1:8091

> User-Agent: curl/7.49.1-DEV

> Accept: */*

> Content-Length: 55

> Content-Type: application/x-www-form-urlencoded

>

} [55 bytes data]

* upload completely sent off: 55 out of 55 bytes

< HTTP/1.1 200 OK

< Server: Couchbase Server

< Pragma: no-cache

< Date: Fri, 24 Mar 2017 03:20:51 GMT

< Content-Length: 0

< Cache-Control: no-cache

<

100    55    0     0  100    55      0   2966 --:--:-- --:--:-- --:--:--  3666

* Connection #0 to host 127.0.0.1 left intact

*   Trying 127.0.0.1...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)

> POST /node/controller/setupServices HTTP/1.1

> Host: 127.0.0.1:8091

> User-Agent: curl/7.49.1-DEV

> Accept: */*

> Content-Length: 32

> Content-Type: application/x-www-form-urlencoded

>

} [32 bytes data]

* upload completely sent off: 32 out of 32 bytes

< HTTP/1.1 200 OK

< Server: Couchbase Server

< Pragma: no-cache

< Date: Fri, 24 Mar 2017 03:20:56 GMT

< Content-Length: 0

< Cache-Control: no-cache

<

100    32    0     0  100    32      0   3389 --:--:-- --:--:-- --:--:--  4000

* Connection #0 to host 127.0.0.1 left intact

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100   180  100   152  100    28   8068   1486 --:--:-- --:--:-- --:--:--  8444

HTTP/1.1 200 OK

Server: Couchbase Server

Pragma: no-cache

Date: Fri, 24 Mar 2017 03:21:01 GMT

Content-Type: application/json

Content-Length: 152

Cache-Control: no-cache

{"storageMode":"memory_optimized","indexerThreads":0,"memorySnapshotInterval":200,"stableSnapshotInterval":5000,"maxRollbackPoints":5,"logLevel":"info"}Note: Unnecessary use of -X or --request, POST is already inferred.

*   Trying 127.0.0.1...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)

> POST /settings/web HTTP/1.1

> Host: 127.0.0.1:8091

> User-Agent: curl/7.49.1-DEV

> Accept: */*

> Content-Length: 50

> Content-Type: application/x-www-form-urlencoded

>

} [50 bytes data]

* upload completely sent off: 50 out of 50 bytes

< HTTP/1.1 200 OK

< Server: Couchbase Server

< Pragma: no-cache

< Date: Fri, 24 Mar 2017 03:21:01 GMT

< Content-Type: application/json

< Content-Length: 44

< Cache-Control: no-cache

<

{ [44 bytes data]

100    94  100    44  100    50   1554   1765 --:--:-- --:--:-- --:--:--  2380

* Connection #0 to host 127.0.0.1 left intact

{"newBaseUri":"http://127.0.0.1:8091/"}bucket set up start

bucket name =  books

Note: Unnecessary use of -X or --request, POST is already inferred.

*   Trying 127.0.0.1...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)

* Server auth using Basic with user 'Administrator'

> POST /pools/default/buckets HTTP/1.1

> Host: 127.0.0.1:8091

> Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==

> User-Agent: curl/7.49.1-DEV

> Accept: */*

> Content-Length: 55

> Content-Type: application/x-www-form-urlencoded

>

} [55 bytes data]

* upload completely sent off: 55 out of 55 bytes

< HTTP/1.1 202 Accepted

< Server: Couchbase Server

< Pragma: no-cache

< Location: /pools/default/buckets/books

< Date: Fri, 24 Mar 2017 03:21:01 GMT

< Content-Length: 0

< Cache-Control: no-cache

<

100    55    0     0  100    55      0    748 --:--:-- --:--:-- --:--:--   820

* Connection #0 to host 127.0.0.1 left intact

bucket set up done

/entrypoint.sh couchbase-server

Verify the configuration by typing http://192.168.99.100:8091 into your favorite browser.

Configuration

Type “Administrator” as Username and “password” in the Password field and click Sign In.

Check the settings of the Couchbase node and verify that they match the configure.sh script we used above.

Couchbase Setting Cluster Ram Quota

The bucket “books”.

Data bucket settings

At this point our back-end Couchbase infrastructure is up and running. We now need to build an application that uses this backend to do something functional.

Build the application using Spring Boot, Vaadin, and Couchbase

Go to start.spring.io and add Couchbase as a dependency. This would place spring-data-couchbase libraries in the application classpath. Since Couchbase is considered a first-class citizen of the Spring Boot ecosystem, we can make use of the Spring Boot auto-configuration feature to access the Couchbase bucket at runtime.

Also, add Vaadin as a dependency in the project. We are going to use it for building the UI layer.

The project object model (pom) file can be found here.

We create a Couchbase repository like this:

@ViewIndexed(designDoc = "book")

@N1qlPrimaryIndexed

@N1qlSecondaryIndexed(indexName = "bookSecondaryIndex")

public interface BookStoreRepository extends CouchbasePagingAndSortingRepository<Book, Long> {

        List<Book> findAll();        

        List<Book> findByAuthor(String author);

        List<Book> findByTitleStartsWithIgnoreCase(String title);

        List<Book> findByCategory(String category);

}

The annotations ensure that a View named “book” will be supplied at runtime to support view-based queries. A primary index will be created to support N1QL queries. In addition, a secondary index will also be provided.

The methods have been defined to return List<Book>. We don’t have to provide any implementation since that is already provided behind the scenes by spring-data-couchbase.

We need to define the entity, which in our case is Book. We annotate it with @Document.

@Document

public class Book {

        @Id

        private String id = UUID.randomUUID().toString();

        private String title;

        private String author;

        private String isbn;

        private String category;

}

To enable auto-configuration, use application.properties or application.yml file as shown below:

spring.couchbase.bootstrap-hosts=127.0.0.1

spring.couchbase.bucket.name=books

spring.couchbase.bucket.password=

spring.data.couchbase.auto-index=true

One thing to note here is that when the application container runs, it will need to connect to the Couchbase container and set up the auto-configuration. The property spring.couchbase.bootstrap-hosts lists the IP address of the Couchbase node. Here, I have specified 127.0.0.1, which is not going to work since at runtime the app container will not find the Couchbase container running at that IP. So we need to pass an environment variable (env variable) when running the Docker image of the application.

In order to pass an env variable as mentioned above, we need to write the Dockerfile of the application such that the value of the spring.couchbase.bootstrap-hosts property can be passed as an env variable. Here’s the Dockerfile of the app:

FROM frolvlad/alpine-oraclejdk8:full

VOLUME /tmp

ADD target/bookstore-1.0.0-SNAPSHOT.jar app.jar

RUN sh -c 'touch /app.jar'

CMD java -Dspring.couchbase.bootstrap-hosts=$HOSTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar

As you can see, we are basically overriding the value of the spring.couchbase.bootstrap-hosts property defined in the application.properties file by the env variable HOSTS.

This is pretty much all we have to do to wire Spring Boot with Couchbase.

UI (U and I)

For the UI, we make use of the spring-vaadin integration. I am using version 7.7.3 of Vaadin, vaadin-spring version 1.1.0, and “Viritin,” a useful Vaadin add-on. To install Viritin, add the following dependency:

<dependency>

        <groupId>org.vaadin</groupId>

        <artifactId>viritin</artifactId>

        <version>1.57</version>

</dependency>

Annotate the UI class as

@SpringUI

@Theme("valo")

public class BookstoreUI extends UI {

//////

}

And then hook the repository methods with the UI elements.

A bean that implements the CommandLineRunner interface is used to prepopulate the database with some initial values.

For full source code, refer to this link.

Dockerize the application

Using Maven, it’s very easy to Dockerize an application using Spotify’s docker-maven-plugin. Please check the plugin section of the pom.xml file.

Alternatively, you can build using the Docker command line:

docker build -t chakrar27/books:standalone .

Then run the image. Note that we need to pass the value of the HOSTS variable that our app container is going to look for when it tries to connect to the Couchbase container. The run command would look like:

docker run -p 8080:8080 -e HOSTS=192.168.99.100 chakrar27/books:standalone

Once the application is started, navigate to http://192.168.99.100:8080/

The following page shows up:

pasted image 0 2

An entry can be edited and saved.

pasted image 0 1

There’s also a neat filtering feature provided by the N1QL query running underneath.

pasted image 0 3

Users can also add a new book and delete an existing record. All the CRUD (Create/Read/Update/Delete) features of this simple application are powered by Couchbase N1QL queries, which we enabled by creating the “BookStoreRepository,” which, in turn, extends “CouchbasePagingAndSortingRepository.”

 

This post is part of the Couchbase Community Writing Program

The post Docker and Vaadin Meet Couchbase – Part 1 appeared first on The Couchbase Blog.

Categories: Architecture, Database

Perform Various N1QL Queries without Indexes in Couchbase Server

NorthScale Blog - Tue, 04/11/2017 - 16:03

As you probably already know, you’re able to query Couchbase NoSQL documents using a SQL dialect called N1QL. This is made possible through indexes that you create on documents in your Couchbase Buckets. However, what if I told you that not every N1QL query requires an index to first exist? After talking with my colleague, Justin Michaels, he showed me an awesome trick to perform bulk operations in N1QL without indexes. This was news to me because I always thought you needed at least one index to exist, but hey, you learn something new every day.

We’re going to see how to run a few N1QL queries on a Couchbase Bucket that has no indexes, not even a primary index.

Before we jump into some sample scenarios, you might be wondering how it is possible to run queries without an index. This is possible by making use of the USE KEYS operator to target specific documents by their key, which exists in the document’s meta information.

Take the following document for example:

{
    "type": "person",
    "firstname": "Nic",
    "lastname": "Raboy",
    "social_media": [
        {
            "website": "https://www.thepolyglotdeveloper.com"
        }
    ]
}

Above we have a simple document that represents a particular person. Let’s say the above document has nraboy as the id value. To make things interesting, let’s create another document.

Assume the following has mraboy as the id value:

{
    "type": "person",
    "firstname": "Maria",
    "lastname": "Raboy",
    "social_media": [
        {
            "website": "https://www.mraboy.com"
        }
    ]
}

So if we wanted to query either of these two documents with the USE KEYS operator in N1QL, we could compose a query that looks like the following:

SELECT * 
FROM example 
USE KEYS ["nraboy", "mraboy"];

If you look at the EXPLAIN of the above query you’ll notice that no index was used in the query. The above type of query would be useful if you knew the keys that you wanted to obtain and wanted incredibly fast performance similar to how it was done in a previous article I wrote titled, Getting Multiple Documents by Key in a Single Operation with Node.js.

Let’s make things a bit more complicated. What if we wanted to query with a relationship on one or more of the document properties?

Let’s create another document with¬†couchbase as the document id:

{
    "type": "company",
    "name": "Couchbase Inc",
    "address": {
      "city": "Mountain View",
      "state": "CA"
    }
}

The above document represents a company. As you probably guessed, we’re going to query for the company information of each person. To make this possible, let’s change the nraboy document to look like the following:

{
    "type": "person",
    "firstname": "Nic",
    "lastname": "Raboy",
    "social_media": [
        {
            "website": "https://www.thepolyglotdeveloper.com"
        }
    ],
    "company": "couchbase"
}

Notice we’ve added a property with the key to our other document.  We won’t add any company information to the mraboy document.

Take the following query that has a multiple document relationship, but no indexes created:

SELECT
    p.firstname,
    p.lastname,
    (SELECT c.* FROM example c USE KEYS p.company)[0] AS company
FROM example p 
USE KEYS ["nraboy", "mraboy"];

Notice that the above query has a subquery that also uses the USE KEYS operator.  Not bad, right?  Try using other operators like UNNEST to flatten the array data found in the social_media property.
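The keyed subquery above amounts to one more direct lookup per parent document. A minimal sketch of that join shape (plain JavaScript, hypothetical in-memory bucket, not the SDK):

```javascript
const bucket = {
  couchbase: { type: "company", name: "Couchbase Inc" },
  nraboy: { type: "person", firstname: "Nic", lastname: "Raboy", company: "couchbase" },
  mraboy: { type: "person", firstname: "Maria", lastname: "Raboy" },
};

// Mirrors: SELECT p.firstname, p.lastname,
//          (SELECT c.* FROM example c USE KEYS p.company)[0] AS company
function personWithCompany(bucket, key) {
  const p = bucket[key];
  return {
    firstname: p.firstname,
    lastname: p.lastname,
    // A missing company property yields no subquery row, hence no company.
    company: p.company !== undefined ? bucket[p.company] : undefined,
  };
}
```

As in the N1QL result, mraboy comes back without a company because its document stores no key to follow.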

Conclusion

You just saw how to write N1QL queries in Couchbase that use no index.  By using the USE KEYS operator we can do bulk operations based on key, like I demonstrated in the articles, Getting Multiple Documents by Key in a Single Operation with Node.js and Using Golang to get Multiple Couchbase Documents by Key in a Single Operation.  A huge thanks to Justin Michaels from Couchbase for helping me with this.

To learn more about N1QL and Couchbase, check out the Couchbase Developer Portal for more information.

The post Perform Various N1QL Queries without Indexes in Couchbase Server appeared first on The Couchbase Blog.

Categories: Architecture, Database

Getting Started with Couchbase Lite in your iOS App : Part1

NorthScale Blog - Mon, 04/10/2017 - 18:15

This post looks at how you can get started with Couchbase Lite in your iOS App. Couchbase Lite is an embedded JSON database that can work standalone, in a P2P network, or with a Sync Gateway as a remote endpoint. While we will be looking at the framework in the context of an iOS App in Swift, everything that’s discussed here applies equally to mobile apps developed on any other platform (Android, iOS (ObjC), Xamarin). Deviations will be specified as such. Stay tuned for related posts for the other platforms!

NOTE: We will be discussing Couchbase Mobile v1.4, which is the current production release. There is a newer Developer Preview version 2.0 of Couchbase Mobile. We will dive into that in a future post.

Background

The Couchbase Mobile Stack comprises the Couchbase Server, Couchbase Sync Gateway and Couchbase Lite embedded Database. You can learn more about the server in the Getting started with Couchbase Server guide and the Sync Gateway in the Getting Started with Couchbase Sync Gateway guide.

I’ll assume you’re familiar with developing iOS Apps and basics of Swift. If you want to read up on NoSQL databases or Couchbase, you can find lots of resources on the Couchbase site.

Couchbase is open-source. Everything I’ll use here is free to try out.

Couchbase Lite

Couchbase Lite can be used in several deployment modes.

  • Option 1: It can be used as a standalone cross-platform embedded database on a device
  • Option 2: It can be used in conjunction with a remote Sync Gateway that would allow it to sync data across devices. This case can be extended to include the full Couchbase stack with the Couchbase Server. From the perspective of Couchbase Lite on the device, it should not really matter if there is a Couchbase Server or not since Couchbase Lite will interface with the remote Sync Gateway.
  • Option 3: It can be used in a P2P mode

We will focus on Option 1 here.

Native API

Couchbase Lite exposes a native API for iOS, Android and Windows that allows apps to easily interface with the Couchbase platform. As an App Developer, you do not have to worry about the internals of the Couchbase Lite embedded database, but can instead focus on building your awesome app. The native API allows you to interact with the Couchbase Lite framework just as you would interact with other platform frameworks/subsystems. Again, we will be discussing Couchbase Mobile v1.4 in this blog post. You can get a full listing of the APIs on our Couchbase Developer site.

Integration

There are several options for integrating the Couchbase Lite framework into your iOS App. The simplest is probably to use a dependency manager like CocoaPods or Carthage, but if you prefer, you can manually include the framework in your app project. Check out our Couchbase Mobile Getting Started Guide for the various integration options.

Note that in the case of a Swift app, after importing the framework, you will have to create a Bridging Header (if your app doesn’t already have one) and import the following files:

#import <CouchbaseLite/CouchbaseLite.h>
#import <CouchbaseLiteListener/CBLListener.h>


Demo App

Please download the Demo Xcode project from this GitHub repo. We will use this app as an example in the rest of the blog.

git clone git@github.com:couchbaselabs/couchbase-lite-ios-starterapp.git


This app uses CocoaPods to integrate the Couchbase Lite framework and is intended to familiarize you with the basics of using the framework. Once downloaded, build and run the app. Play around with it. You can use this app as a starting point and extend it to test the other APIs.

Couchbase Lite Demo

Couchbase Lite Standalone iOS App Demo


Basic Workflow

Creating a Local Database

Open the DBMainMenuViewController.swift file and locate the createDBWithName function.

This will create a database with specified name in the default path (/Library/Application Support). You can specify a different directory when you instantiate the CBLManager class.

do {
    // 1: Set database options
    let options = CBLDatabaseOptions()
    options.storageType = kCBLSQLiteStorage
    options.create = true

    // 2: Create the DB if it does not exist, else return a handle to the existing one
    try cbManager.openDatabaseNamed(name.lowercased(), with: options)
}
catch {
    // handle error
}

  1. Create a CBLDatabaseOptions object to associate with the database. For instance, you can set the encryption key to use with your database via the encryptionKey property. Explore the other options on the CBLDatabaseOptions class.
  2. Database names must be lowercase. The sample app automatically lowercases the names. If successful, a new local database is created if it does not exist; if it exists, a handle to the existing database is returned.
Listing the Databases

This is very straightforward. Open the DBListTableViewController.swift file. The allDatabaseNames property on CBLManager lists the databases that were created.

Adding a New Document to a Database

Open the DocListTableViewController.swift file and locate the createDocWithName function.

do {
    // 1: Create a document with a unique id
    let doc = self.db?.createDocument()

    // 2: Construct the user properties object
    let userProps = [DocumentUserProperties.name.rawValue:name,DocumentUserProperties.overview.rawValue:overview]

    // 3: Add a new revision with the specified user properties
    let _ = try doc?.putProperties(userProps)
}
catch {
    // handle error
}

  1. As a result of this call, a document is created with a unique id
  2. You can specify a set of user properties as <key:value> pairs. Alternatively, you can use a CBLDocumentModel object to specify your application data; CBLDocumentModel is only available on the iOS platform. We will use <key:value> properties in our example

This creates a new revision of the document with the specified user properties.

Listing Documents in Database

Open the DocListTableViewController.swift file and locate the getAllDocumentForDatabase function.

do {
    guard let dbName = dbName else {
        return
    }
    // 1. Get a handle to the DB with the specified name
    self.db = try cbManager.existingDatabaseNamed(dbName)

    // 2. Create a query to fetch all documents
    liveQuery = self.db?.createAllDocumentsQuery().asLive()

    guard let liveQuery = liveQuery else {
        return
    }

    // 3: You can optionally set a number of properties on the query object.
    // Explore the other properties on the query object
    liveQuery.limit = UInt(UINT32_MAX) // All documents

    // query.postFilter =

    // 4: Start observing for changes to the database
    self.addLiveQueryObserverAndStartObserving()

    // 5: Run the query to fetch documents asynchronously
    liveQuery.runAsync({ (enumerator, error) in
        switch error {
        case nil:
            // 6: The "enumerator" is of type CBLQueryEnumerator and
            // is an enumerator for the results
            self.docsEnumerator = enumerator
        default:
            break
        }
    })
}
catch {
    // handle error
}

  1. Get a handle to the database with the specified name
  2. Create a CBLQuery object. This query is used to fetch all documents. You can create a regular query object or a “live” query object. The “live” query object is of type CBLLiveQuery, which automatically refreshes every time the database changes in a way that affects the query results
  3. The query object has a number of properties that can be tweaked in order to customize the results. Try modifying the properties and seeing the effect on the results
  4. You will have to explicitly add an observer to the live query object to be notified of changes to the database. We will discuss this more in the section on “Observing Changes to Documents in Database”. Don’t forget to remove the observer and stop observing changes when you no longer need it!
  5. Execute the query asynchronously. You can also do it synchronously if you prefer, but async is recommended if the data sets are large.
  6. Once the query executes successfully, you get a CBLQueryEnumerator object. The query enumerator allows you to enumerate the results. It lends itself very well as a data source for the table view that displays the results
Editing an Existing Document

Open the DocListTableViewController.swift file and locate the updateDocWithName function.

do {
    // 1: Get the document associated with the row
    let doc = self.docAtIndex(index)

    // 2: Construct a user properties object with the updated values
    var userProps = [DocumentUserProperties.name.rawValue:name,DocumentUserProperties.overview.rawValue:overview]

    // 3: If a previous revision of the document exists, make sure to specify it.
    // Since it's an update, it should exist!
    if let revId = doc?.currentRevisionID {
        userProps["_rev"] = revId
    }

    // 4: Add a new revision with the specified user properties
    let savedRev = try doc?.putProperties(userProps)
}
catch {
    // handle error
}

fileprivate func docAtIndex(_ index:Int) -> CBLDocument? {
    // 1. Get the CBLQueryRow object at the specified index
    let queryRow = self.docsEnumerator?.row(at: UInt(index))

    // 2: Get the document associated with the row
    let doc = queryRow?.document
    return doc
}

  1. Get a handle to the document that needs to be edited. The CBLQueryEnumerator can be queried to fetch a handle to the document at the selected index
  2. Update the user properties as <key:value> pairs. Alternatively, you can use a CBLDocumentModel object to specify your application data; this is only available on iOS. We will use <key:value> properties in our example.
  3. Updates to the document require a revision id to explicitly indicate the revision of the document that needs to be updated. This is specified using the “_rev” key and is required for conflict resolution. You can find more details here. This creates a new revision of the document with the specified user properties
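The “_rev” requirement is a form of optimistic concurrency control: an update must name the revision it was based on, or it is rejected as a conflict. A framework-independent sketch of that idea (plain JavaScript over a hypothetical store; not the Couchbase Lite API):

```javascript
// Hypothetical revision-checked store illustrating why updates carry "_rev".
function putProperties(store, id, props) {
  const current = store[id];
  // Updating an existing document requires naming its current revision.
  if (current && props._rev !== current._rev) {
    throw new Error("conflict: document changed since this revision was read");
  }
  const generation = current ? parseInt(current._rev, 10) + 1 : 1;
  store[id] = { ...props, _rev: `${generation}-rev` };
  return store[id];
}
```

A stale writer still holding revision 1 after someone else saved revision 2 gets a conflict instead of silently clobbering the newer data.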
Deleting an Existing Document

Open the DocListTableViewController.swift file and locate the deleteDocAtIndex function.

do {
    // 1: Get the document associated with the row
    let doc = self.docAtIndex(index)

    // 2: Delete the document
    try doc?.delete()
}
catch {
    // Handle error
}

  1. Get a handle to the document that needs to be deleted. The CBLQueryEnumerator can be queried to fetch a handle to the document at the selected index
  2. Delete the document. This deletes all revisions of the document
Observing Changes to Documents in Database

Open the DocListTableViewController.swift file and locate the addLiveQueryObserverAndStartObserving function.

// 1. iOS specific: add an observer to the live query object
liveQuery.addObserver(self, forKeyPath: "rows", options: NSKeyValueObservingOptions.new, context: nil)

// 2. Start observing changes
liveQuery.start()

  1. In order to be notified of changes to the database that affect the query results, add an observer to the live query object. This is a case where the Swift/Obj-C versions differ from other mobile platforms. If you are developing on other platforms, you can call the addChangeListener API on the live query object. However, in Couchbase Lite 1.4, this API is unsupported on the iOS platforms, and we will instead leverage iOS’s Key-Value Observing (KVO) pattern to be notified of changes. Add a KVO observer to the live query object to start observing changes to the “rows” property on the live query object
  2. Start observing changes

Whenever there is a change to the database that affects the “rows” property of the live query object, your app will be notified. When you receive the change notification, you can update your UI, which in this case means reloading the table view.

override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
        if keyPath == "rows" {
            self.docsEnumerator = self.liveQuery?.rows
            tableView.reloadData()
        }
    }

Deleting a Database

Open the DBListTableViewController.swift file and locate the deleteDatabaseAtIndex function.

do {
    // 1. Get a handle to the database if it exists
    let db = try cbManager.existingDatabaseNamed(dbToDelete)

    // 2. Delete the database
    try db.delete()

    // 3. Update local bookkeeping
    self.dbNames?.remove(at: indexPath.row)

    // 4. Update the UI
    tableView.deleteRows(at: [indexPath], with: .automatic)
}
catch {
    // handle error
}

Deletion of a database is handled through a simple delete() call on the database.

Conclusion

As you can see, it is pretty straightforward to integrate a standalone version of Couchbase Lite into your new or existing iOS App. You can download the sample app discussed in this post from the GitHub repo and try exploring the various interfaces.  If you have further questions, feel free to reach out to me on Twitter @rajagp or email me at priya.rajagopal@couchbase.com.

The Couchbase Mobile Dev Forums is another great place to get your mobile-related questions answered. Also, check out the Couchbase Developer Portal for more on Couchbase Mobile.


The post Getting Started with Couchbase Lite in your iOS App : Part1 appeared first on The Couchbase Blog.


Ahoy-Hoy Couchbase! This is Priya Rajagopal…

NorthScale Blog - Mon, 04/10/2017 - 16:15

I just started as a Developer Advocate, focusing on Mobile at Couchbase, so let me use the opportunity to introduce myself. I am based in Ann Arbor, Michigan, ranked as one of the top college towns in the US.

I’ve been professionally developing and architecting software for a very long time now. I spent the first decade or so of my career in research, working on future technologies that would eventually make their way into products. During that time, I’ve had the opportunity to work on some very cool projects around x86 Platform Security, Detecting & Thwarting Network Security Attacks, Platform Management, Virtualization, IPTV, Social TV, and Targeted Advertising, among others.  For a period in my career, I was also an ETSI TISPAN standards delegate, where I co-authored the IPTV architectural specifications.

I am a co-inventor on almost 2 dozen patents spanning various technologies.

Over the past several years, I have been focused on Mobile Development and iOS is my platform of choice. Before Couchbase, I was the Director of Mobile Development at a startup, managing the SDLC of mobile apps. So it’s been quite a journey, from building low level embedded device software with a command line interface to mobile apps with a pretty UI!

While I’ve enjoyed building software, I’ve equally enjoyed the community side of things – blogging, speaking, organizing user groups, mentoring. So I’m really excited about my new role at Couchbase – I think it’s the best of both worlds! I haven’t really worked extensively on database technologies, so I’m going to be learning a lot of new stuff. I believe that Couchbase Server and Couchbase Mobile are poised to have a significant impact in the Mobile/IoT space, where there is an explosion of massively scalable, responsive applications. I can’t wait to help spread the word.

On the personal side, I enjoy spending time with my family and watching movies. And of course, if it’s Fall, it’s college football- M Go Blue!

Michigan Football

Scene at a Saturday football game at the Big House, Ann Arbor


You can reach me on the Interwebs at

Twitter: @rajagp

Email: Priya.rajagopal <at> couchbase.com

The post Ahoy-Hoy Couchbase! This is Priya Rajagopal… appeared first on The Couchbase Blog.


Our Commitment to Performance

NorthScale Blog - Fri, 04/07/2017 - 15:10

A few months ago I went to see The Human League when they came to Manchester. They were Britain’s Best Breakthrough Act in 1982 – don’t tell me my finger isn’t on the pulse. I’m not a superfan, I just like a few of their songs. I’m certainly not hipster enough to talk about their “earlier work” and “rare b-sides.” In fact, the songs I like are mostly the usual crowd-pleasers. The concert was heaving, and for someone who doesn’t see live music all that often it was LOUD. As they were going through their repertoire I happily belted out the ones I knew. But the honest truth is, for the songs I’m not that familiar with, there’s a part of me thinking, “what about the classics?” They closed with “Don’t You Want Me” and the crowd went bananas, and this particular passing fan went home delighted. The increased volume level that accompanied the encore told me I wasn’t the only one holding out for the favourites. I’m sure this is a quandary all bands face when they’re launching a new album. How much do they focus on the new material versus reinforcing what made them popular in the first place?

The 5 years we have had have been such good times…

At Couchbase we’re locked in the studio working hard on our fifth album, Couchbase 5.0 – previews are available now. If you’re relatively new to Couchbase, here’s a quick recap of our back catalogue. Our first edition (“Simple, Fast, Elastic”) was a pure key-value store. It was scalable, but my word it was fast. The product was built with a memory-centric architecture, written in C and based on the popular memcached project. This debut earned us a lot of notoriety and had some great success. We followed that up in 2.0 with a fully fledged document store, including indexing and cross datacenter replication (XDCR) capabilities. For our “difficult third album” we completely rewrote our internal plumbing to make it more fault tolerant and lay the groundwork for the next set of features. Our 4.0 release went in a significant new direction, and one that really changed the game. It introduced N1QL, a SQL-like language familiar to the masses, to query and manipulate JSON documents. Along with this came a powerful secondary indexer and an integrated mobile solution complete with cloud synchronisation. Couchbase Server was no longer the preserve of the alternative stations but getting serious airplay in the mainstream.

The Couchbase discography

5.0 will build on this even further: production full-text search, role-based access control for documents, new bucket capabilities, a brand new UI, and an analytics service preview. It’s going to be another huge progression for us, and the product is barely recognisable from version 1. But often I speak with customers and they inquire (possibly concerned), “what about the classics?” What of that high-performance key-value store that made people stand up and take notice of Couchbase in the first place? Given all of these additions, it might have been all too easy for us to forgo some of the early qualities that brought us success. In fact, the opposite is true.

Don’t forget it’s me who put you where you are now…

At Couchbase we are obsessed with performance. We always have been and always will be. This obsession is a voice in our heads at every turn, reminding us what brought us here. The Key-Value (KV) Engine development team takes enormous pride in ensuring that each release is an improvement over the last. For example, take a look at how previous releases stack up for one particular key-value throughput metric:

Couchbase Server Performance Comparison

Maximum throughput 50:50 Read/Write workload, 20 million * 256 byte items, 1 replica, 2 node cluster using cbc-pillowfight

This graph is worth a second look: our current edition, 4.x, is more than twice as fast as its 3.x predecessor, and 3.1.6 was no slouch. In a follow-on post I’ll go into the technical detail of how these improvements were achieved (tl;dr hard-fought profiling, analysis, use of efficient multithreaded C++ data structures, and cache line awareness).

You’d better change it back or we will both be sorry

Maintaining performance takes a lot of effort. Couchbase Server is relied upon for the smooth running of thousands of mission-critical applications in all manner of industries. Many developers are familiar with the dilemma, “I can make this fast, but I’m not sure how safe it is.” It’s not that functionality always wins the debate over performance – at Couchbase there is never a debate to be had. We always design and implement features with an eye on how to make them fast, but functionality always comes first. The culture of improving performance starts with holding a high bar against performance regressions. We have an independent team that bombards every single build with rigorous performance tests and scrutinises the results. Take a look – the list of metrics is endless. And from time to time, when we introduce performance regressions, they let us know all about them.

Couchbase Performance Measurement

Each build comes under high scrutiny

We have been relentless in the pursuit of new and advanced features and will continue to be. I’m hugely excited about what’s coming in 5.0 and beyond. Interesting new use cases will be opened up by these features, and we’re sure to get a new army of first-time downloaders checking us out because of the brand new content. We are fast becoming the de facto standard for building systems of engagement. But rest assured, no matter what features we develop, for those of you who just want the old classics, we’ll continue to deliver the things that matter most to some of our core customers: scalability, high availability, and consistently low-latency, high-throughput operations.

The post Our Commitment to Performance appeared first on The Couchbase Blog.


Building Applications with Node.js, Angular, and Couchbase (video)

NorthScale Blog - Fri, 04/07/2017 - 02:05

Couchbase holds regular Meetups in our Mountain View office and elsewhere.

During our February 2nd, 2017 Meetup, Nic Raboy gave a hands-on demonstration of building an application using Node.js, Angular, and Couchbase.

Watch the video here and check out the code on GitHub at https://github.com/couchbaselabs/full-stack-node-example

Postscript

Download and try Couchbase here.

You can find more resources on our developer portal and follow us on Twitter @CouchbaseDev.

You can post questions on our forums. And we actively participate on Stack Overflow.

Hit me up on Twitter with any questions, comments, topics you’d like to see, etc. @HodGreeley

Follow Nic Raboy on Twitter: @nraboy


The post Building Applications with Node.js, Angular, and Couchbase (video) appeared first on The Couchbase Blog.
