Oracle Helps Transform Lead Generation with LinkedIn Matched Audiences

Oracle Database News - 8 hours 12 min ago
Press Release: Oracle Helps Transform Lead Generation with LinkedIn Matched Audiences
Product integration between Oracle Marketing Cloud and LinkedIn Campaign Manager enables marketers to generate high-quality leads at scale, increase conversion rates and accelerate sales.

Modern Customer Experience 2017, Las Vegas —Apr 25, 2017

To help marketers generate high-quality leads at scale and maximize the ROI of lead generation campaigns, Oracle today announced a new product integration between Oracle Marketing Cloud and LinkedIn Campaign Manager - a new targeting capability within LinkedIn’s recently announced product, Matched Audiences. This new product integration enables Oracle Marketing Cloud customers to seamlessly integrate data from more than 467 million LinkedIn users into existing marketing campaigns to reach and engage their ideal buyers on the world’s largest professional social network.

Marketers need to be able to deliver personalized experiences at scale in order to increase conversion rates and accelerate sales. With the new product integration between Oracle Eloqua, part of the Oracle Marketing Cloud, and LinkedIn Campaign Manager, marketers now have the power to seamlessly nurture leads using LinkedIn Matched Audiences. The product integration enables marketers to personalize and orchestrate campaigns across multiple channels including email, websites and digital ads in order to engage the right audience at the right time in the buyer’s journey.

“At LinkedIn, we strive to equip B2B marketers with the tools and insights that they need to reach the audiences that matter most to their business,” said Russ Glass, vice president of product, LinkedIn Marketing Solutions. “Matched Audiences gets us closer to that goal by enabling us to give marketers custom ways to combine LinkedIn’s powerful professional data with their own first-party data. Our product integration with Oracle was key to helping make that happen for Oracle Eloqua customers.”

This integration also empowers marketers to enhance their Account Based Marketing (ABM) strategies. Marketers can leverage the powerful insights delivered by the new integration to help convert unknown prospects into known buyers, retarget buyers with relevant digital ads, and enrich buyer profiles and optimize digital ad spend on LinkedIn. This enables marketers to increase conversion rates and accelerate sales.

“We are focused on empowering marketers with the data they need to inform, measure and maximize the impact of marketing campaigns,” said Laura Ipsen, general manager and senior vice president, Oracle Marketing Cloud. “LinkedIn is the world’s largest professional network and by enabling marketers to seamlessly integrate data from more than 467 million LinkedIn users we are able to provide powerful customer insights that can transform lead generation campaigns. The new product integration between Oracle Eloqua and LinkedIn Campaign Manager can ultimately help marketers enhance the customer experience and increase revenues.” 

Oracle Marketing Cloud is part of Oracle CX Cloud Suite. Oracle CX Cloud Suite empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results. The Oracle CX Cloud Suite includes Oracle Commerce Cloud, Oracle Marketing Cloud, Oracle Sales Cloud and Oracle Service Cloud.

Contact Info

Simon Jones
Public Relations for Oracle
+1.415.856.5155
sjones@blancandotus.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 



Blue Microphones Turns Up the Volume with NetSuite

Oracle Database News - 8 hours 12 min ago
Press Release: Blue Microphones Turns Up the Volume with NetSuite
Microphone Maker Gains Vital Business Agility for Innovation and Growth

SUITEWORLD 2017, LAS VEGAS, Nev.—Apr 25, 2017

Oracle NetSuite Global Business Unit (GBU), the industry’s leading provider of cloud financials / ERP, HR and omnichannel commerce software suites, today announced that Blue Microphones (www.bluemic.com), a leading designer and producer of premium microphones and sound recording equipment for audio professionals, musicians and consumers, has gained vital business agility with NetSuite’s unified cloud business management system. NetSuite allows Blue Microphones to focus on product innovation, scale into new markets, and challenge larger rivals in the competitive recording industry. To ensure agility for innovation and growth, the company replaced QuickBooks Enterprise with NetSuite to manage its end-to-end business operations including financials, fixed assets, demand planning, inventory management, bill of materials, work orders and assemblies, warehouse management, CRM, HR, and multi-currency transactions in the Canadian dollar and Euro. After a rapid three-month implementation, the company went live on NetSuite in July 2014.

Headquartered in Westlake, Calif., Blue Microphones was founded in 1995 by an American jazz musician and a recording engineer who designed high-quality studio microphones and pioneered the digital USB microphone – microphones that plug directly into a computer. Beginning in 2004, the company saw significant growth thanks to its Snowball, a USB mic built for use with Apple’s GarageBand recording software, and continued to develop a range of USB mics made vastly successful by the explosion of user-generated content. Since then, Blue Microphones has continued to design and manufacture a full range of studio microphones, the world’s #1 USB mic line, and recently launched a lineup of premium headphones.

With 30 to 35 percent annual growth, the company quickly realized it needed new functionality its existing system couldn’t provide, including: multicurrency transactions; anywhere, anytime access to business data across the globe; integration with marketing and engineering systems for better collaboration; and integration with its shipping software.

“Replacing our entry-level accounting system with a unified cloud-based business management suite has made a huge difference in our business and has us well positioned for our next stage of growth,” said Bart Thielen, CFO and COO of Blue Microphones. “With NetSuite, we’re able to scale the business very quickly and our visibility is tremendously improved. We now have strong financial controls and a solid infrastructure with great agility to respond rapidly to changing market conditions.”

Leveraging the NetSuite SuiteCloud Development Platform, Blue Microphones was able to customize the NetSuite system to meet its business needs and industry-specific requirements. The company also integrated Pacejet Enterprise Shipping Software with NetSuite, allowing it to better manage millions of freight quotes and shipments with reduced costs and improved efficiency. A bidirectional Electronic Data Interchange (EDI) interface, powered by the SuiteCloud Development Platform, connects Blue Microphones to 15 different partners, including large retailers like Best Buy, Apple and Amazon; it saves the company 30 to 40 hours a week of manual order entry and invoicing and enables near-100 percent accuracy.

Blue Microphones’ IT modernization efforts have also attracted the notice of others. Most recently, the company was awarded the prestigious 2016 Manufacturing Leadership (ML) Award in the “Enterprise Leadership" category from Frost and Sullivan’s ML Council. The awards are given to companies and individuals that have undertaken breakthrough projects in manufacturing as determined by an expert panel of judges. Blue Microphones was recognized for its "IT Modernization Project,” which allowed the company to rapidly scale, while giving it the agility to adapt to changing market conditions.

As a result of its implementation of NetSuite, Blue Microphones has realized multiple benefits, including:

  • Improved IT and operational efficiency. As a pure cloud system, NetSuite spares Blue Microphones from the hassles of managing on-premise software and the hardware required to support it, including patches, upgrades and security. As a result of the efficiencies gained by converting to NetSuite, Blue Microphones has avoided hiring two additional full-time employees, an annual savings of $80,000.
  • Improved inventory management. Third-party logistics partners in Amsterdam and Hong Kong are now able to access NetSuite through a portal to fulfill orders, improving accuracy and efficiency. Previously, Blue Microphones would scan documents and email them over as PDFs, a labor-intensive, error-prone process. Additionally, the company has been able to set up a virtual warehouse for its marketing group, which sequesters key products for product reviews and key influencers like artists and producers.
  • Product improvements. Blue Microphones is now capturing customer feedback in the system, which can be shared directly with engineering and overseas manufacturers to improve the products.
  • Reduced manual entry. Workflow and scripting have saved up to 15 hours a month of manual entry for the sales department.
  • A flexible and powerful development platform. NetSuite’s SuiteCloud Development Platform provides flexibility for Blue Microphones to tailor the system and integrate with other third-party solutions to meet its unique business needs and industry-specific requirements.
  • Improved asset tracking. A new fixed assets interface saves significant time at both monthly and year-end audits. During the course of a year, the company estimates it is saving about 50 hours total.
About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


PODS Turns to NetSuite SuiteSuccess to Fuel Its Global Operation

Oracle Database News - 8 hours 27 min ago
Press Release: PODS Turns to NetSuite SuiteSuccess to Fuel Its Global Operation
SuiteSuccess Engineers Customer Success and Readies PODS for Rapid Global Expansion

LAS VEGAS, Nev. —Apr 25, 2017

Oracle NetSuite Global Business Unit (GBU), the industry's leading provider of cloud financials / ERP, HR and omnichannel commerce software suites, today announced that PODS Enterprises, LLC, the leading provider of portable container-based moving and storage solutions, chose NetSuite SuiteSuccess to support its business expansion. PODS will be using NetSuite to manage mission-critical business processes including financials, inventory management, multi-subsidiary management for 16 subsidiaries and multi-currency management for the U.S., Canadian and Australian dollars. SuiteSuccess is the culmination of a multi-year transformation effort to combine the NetSuite unified suite, 20 years of industry leading practices, a new customer engagement model, and business optimization methods into a unified, industry cloud solution. SuiteSuccess is NetSuite’s new purpose-built, unified cloud solution tailored for each industry. 

“I was amazed how closely NetSuite worked with us to ensure we had good process alignment with the fewest possible gaps,” said Bill Tingle, CIO of PODS. “We need to get a system up and running quickly to meet our strategic objectives and SuiteSuccess seems to fit the bill perfectly.”

PODS transformed the moving business when it invented the concept of portable storage in 1998 and now counts a network of more than 170,000 PODS storage containers and 230 PODS Storage Centers in North America alone. In 2016, leadership reevaluated the business and IT strategy and determined the need to move onto a new software platform to support the company’s growth and development plans moving forward. After a rigorous evaluation process, PODS selected NetSuite OneWorld for its flexible platform, strong partner ecosystem and its commitment to customer success.

“Before we had even made the decision to purchase NetSuite, they had a detailed engagement plan with very specific steps on how we would implement the solution,” said Tingle. “We felt comfortable knowing we wouldn’t be starting from scratch and would have a robust solution with NetSuite’s years of experience baked into the product.”

PODS is using SuiteSuccess to ensure a successful deployment of NetSuite OneWorld across its 16 subsidiaries and a complex franchise model with 230 PODS Storage Centers. NetSuite OneWorld will give PODS a flexible, scalable system for growth, with support for 190 currencies, 20 languages, automated tax compliance in more than 100 countries, and transactions in more than 200 countries.

SuiteSuccess was engineered to solve unique industry challenges that historically have limited a company’s ability to grow, scale and adapt to change. Most ERP vendors have tried to solve the industry solution problem with templates, rapid implementation methodologies, and custom code. NetSuite took a holistic approach to the problem and productized domain knowledge, leading practices, KPIs, and an agile approach to product adoption. The benefits are faster time to value, increased business efficiency, flexibility, and greater customer success.

For more information about SuiteSuccess, please visit:  http://www.netsuite.com/portal/services/suitesuccess.shtml

Other expected features and benefits include:

  • Rapid deployment. Backed by SuiteSuccess, PODS expects to launch NetSuite OneWorld in less than three months, a huge difference from its experience with the previous system that took nearly four years.
  • A powerful development platform. The SuiteCloud development platform provides unprecedented flexibility that enables businesses to tailor the system to meet their unique requirements and industry-specific needs.
  • A robust partner ecosystem. NetSuite’s wide array of partner solutions will allow PODS to extend NetSuite to meet its current and future business needs. PODS plans to leverage NetSuite’s comprehensive partner ecosystem for solutions in warehouse management, Electronic Data Interchange (EDI) integration and financial planning.
  • Centralized order and inventory management. NetSuite OneWorld can provide real-time inventory visibility across the business for better forecasting and optimization of more than 170,000 PODS containers moving throughout North America.
  • Built-in business intelligence. With NetSuite OneWorld, PODS will have real-time insights into key business performance indicators for a unified view of the organization and a single version of truth.
  • A highly scalable system for growth. NetSuite’s scalable infrastructure will enable PODS to easily expand to support growing business volumes.

Bill Tingle is sharing PODS’ story at SuiteWorld 2017, the number one cloud ERP event of the year, held this year in Las Vegas from April 24-27. Watch all keynotes live here.

About PODS Enterprises, LLC
PODS® is a leader in the moving and storage industry providing both residential and commercial services in 46 U.S. states, Canada, Australia and the UK. Founded in 1998, PODS pioneered the portable moving and storage industry, which now serves many customers’ increasingly active and mobile lifestyles. To date, the PODS network has completed more than 700,000 long-distance moves, exceeded 3 million deliveries and has more than 170,000 PODS containers in service.

About Oracle NetSuite Global Business Unit  
Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world's first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.  

Follow Oracle NetSuite Global Business Unit's Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle
Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Smartsheet Realizes the Power of Unified Billing and Revenue Recognition with NetSuite

Oracle Database News - 8 hours 42 min ago
Press Release: Smartsheet Realizes the Power of Unified Billing and Revenue Recognition with NetSuite
Enterprise Collaboration Software Leader Can Easily Adapt to Market Changes, New Business Models and New Revenue Recognition Standards

SUITEWORLD 2017, LAS VEGAS, Nev.—Apr 25, 2017

Oracle NetSuite Global Business Unit (GBU), the industry's leading provider of cloud financials / ERP, HR and omnichannel commerce software suites, today announced that Smartsheet, a leading provider of SaaS-based solutions for managing and automating collaborative work, has implemented NetSuite OneWorld to manage rapid growth. Smartsheet is leveraging NetSuite OneWorld to manage mission-critical business processes including financials, billing, revenue recognition, analytics and multi-currency transactions in the euro, yen, pound, and U.S., Australian and Canadian dollars. With NetSuite OneWorld, Smartsheet has a platform that is ready for what’s next and provides the foundation to scale the business. As Smartsheet has evolved its business model, NetSuite’s SuiteBilling software is helping the company manage recurring revenue processes from order to billing and revenue recognition with complete control and auditability for its software subscription business.

“Using a hairball of multiple point products, whether on-premise or in the cloud, is always a bad idea. But doing it in your financial system is downright reckless,” said Jason Maynard, SVP of Strategy and Marketing, NetSuite. “Smartsheet is an innovator in collaboration software and knew how to leverage NetSuite innovation to fuel their growth.”

Founded 11 years ago in Bellevue, Wash., Smartsheet began selling collaboration software for individuals and small teams but has grown rapidly as it began selling its product into large enterprises, including 50 percent of the Fortune 500, to help solve large, complex work management and project automation challenges. It has achieved more than 70 percent year-over-year revenue growth for the fifth consecutive year and currently serves more than 68,000 companies and 10 million users across 190 countries. Smartsheet’s prior accounting system couldn’t handle that growth, requiring extensive manual workarounds that were cumbersome and inefficient, including the physical entry of every invoice. Smartsheet knew that it needed a new platform that could manage all its critical business processes in one system while also providing the flexibility to scale rapidly with the company’s growth. After a rigorous evaluation of several applications, Smartsheet chose NetSuite OneWorld for its scalability, revenue recognition and international capabilities.

A key factor in Smartsheet’s decision was SuiteBilling, the industry’s first unified cloud-based order-to-billing-to-revenue-recognition solution. SuiteBilling enables businesses to adopt any business model, from product-based, time- and services-based, to usage- and subscription-based, or any combination of these, without limit. In the future, Smartsheet plans to manage subscriptions for its enterprise customers while providing a self-service model for its smaller customers, allowing them to sign up for a trial, pay with a credit card and establish a recurring billing account, all from a single system.

“We needed a solution that could support significant growth in our primary segments—enterprise customers as well as SMBs where we leverage a self-service model—each of which has its own unique set of complexities,” said Mark Mader, Smartsheet CEO. “NetSuite was the only solution that offered us the ability to automate both the back office and customer-facing aspects across both these important segments of our business.”

After a four-month implementation, Smartsheet deployed NetSuite OneWorld, which supports 190 currencies, 20 languages, automated tax calculation and reporting in more than 100 countries, and support for customer transactions in more than 190 countries. NetSuite OneWorld supports Smartsheet’s growth by providing:

  • Scalability for growth. NetSuite OneWorld’s single cloud solution allows Smartsheet to quickly and easily add functionality as the business evolves and to add offices as it expands, with anywhere, anytime access.
  • Single source of truth. NetSuite’s unified billing and advanced revenue management system synchronizes complex processes from order to billing to revenue recognition.
  • Real-time visibility. NetSuite OneWorld’s unified platform gives Smartsheet visibility across its operations with a single, unified financial system of record, providing users with real-time information right at their fingertips.
  • Improved efficiency. NetSuite has automated numerous processes previously done manually, including processing of thousands of renewals at a time, saving significant time in order entry.
  • Robust customization and integration. NetSuite's SuiteCloud Developer Network (SDN) provides a platform for Smartsheet to customize the system to its specific needs and adapt as its business evolves. Customized business rule logic improves data quality, while a planned integration with Salesforce.com will fully automate the quote-to-cash process for sales-floor-led deals.


Mark Mader will be sharing Smartsheet’s story at SuiteWorld 2017, the number one cloud ERP event of the year, held this year in Las Vegas from April 24-27. Watch him and all other SuiteWorld 2017 keynotes live here.

About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Ring Dials NetSuite to Fuel Growth

Oracle Database News - 8 hours 57 min ago
Press Release: Ring Dials NetSuite to Fuel Growth
Outdoor Home Security Innovator Leverages NetSuite to Scale Global Business Operations

SUITEWORLD 2017, LAS VEGAS, Nev.—Apr 25, 2017

Oracle NetSuite Global Business Unit (GBU), the industry's leading provider of cloud financials / ERP, HR and omnichannel commerce software suites, today announced that Ring, a leader in outdoor home security, has implemented NetSuite to fuel its rapid growth, which took it from startup to 1,000 team members in less than three years. Ring is using NetSuite for financial management and reporting, with plans to significantly extend the NetSuite platform to other areas of the business as it continues to extend its business model beyond doorbells to services and subscriptions. With NetSuite, Ring has gained efficiencies and real-time visibility on a unified platform that is ready for what’s next. NetSuite enables Ring to grow, scale and adapt its business model as Ring’s business expands and evolves.

With a mission to reduce crime in neighborhoods, Ring has revolutionized outdoor home security by building smart doorbells and outdoor cameras designed to proactively monitor your home. Initially founded as DoorBot, Ring got a significant boost when CEO and Chief Inventor Jamie Siminoff appeared on ABC’s “Shark Tank” with the smart doorbell in September 2013. It has since expanded its product line and is building a “Ring of Security” around your home with data services and digital neighborhood watches. Ring has received $209 million in investment, reached over 1,000,000 users, and expanded its retail footprint to over 15,000 stores globally. Initially, Ring relied on an outsourced accounting firm for basic general ledger and journal entries to manage its financial processes, with reporting done in Excel and online spreadsheets. Understanding that the company’s growth trajectory demanded a more comprehensive solution, Ring evaluated several software packages on the market, including Microsoft Dynamics, before selecting NetSuite.

“When you grow as fast as we’ve grown, one of the biggest challenges is breaking older systems and having to move to new things,” said Siminoff. “NetSuite gives us a platform that allows us to grow from zero to infinity. It takes away those concerns and means resources can be directed to sales and business development.”

“We are honored to partner with a mission-driven business like Ring as they transform communities and help make neighborhoods safer,” said Jason Maynard, SVP of Strategy and Marketing, Oracle NetSuite GBU. “Empowering entrepreneurial companies to scale and achieve their vision is NetSuite’s purpose.”

While still a young and fast-growing business, Ring has complex needs including a B2C website and B2B relationships with retailers like Costco, Best Buy and Home Depot. Ring is confident that the advanced revenue recognition features in NetSuite can allow it to sell, bill and recognize its revenue as it continues to transform its business and meet the needs of the modern customer. With NetSuite, Ring has streamlined financial processes and gained operational efficiency with a roadmap for future growth.

Since implementing NetSuite, Ring has gained the following benefits:

  • A flexible and agile platform. The NetSuite SuiteCloud Platform enables the company to easily customize NetSuite to meet its current and future business requirements and to integrate with other third-party solutions.
  • Real-time visibility. One single, unified financial system of record and financial reporting across the entire organization has created efficiencies by automating cumbersome manual processes.
  • Significant savings in IT costs and complexity. NetSuite's proven, secure cloud solution eliminates the hassles of managing, maintaining and upgrading business applications and offers significant time and cost savings.
  • Anytime, anywhere access. For a company built around accessing a video doorbell from a smartphone, NetSuite’s mobile capabilities enable staff to access the system anywhere there is an internet connection.
  • Scalability for growth. With plans to extend the NetSuite platform to adopt advanced revenue recognition, CRM and ecommerce, plus inventory management for its warehouse and fulfillment for its retail partners, Ring can continue to grow and adapt its business on a single platform.

Jamie Siminoff will be sharing Ring’s story at SuiteWorld 2017, the number one cloud ERP event of the year, held this year in Las Vegas from April 24-27. Watch him and all SuiteWorld 2017 keynotes live here.

About Ring

Ring's mission is to reduce crime in neighborhoods and empower consumers by creating a Ring of Security around homes and communities with its suite of smart home security products: Ring Video Doorbell, Ring Video Doorbell Pro (HomeKit-enabled), Ring Stick Up Cam and the new Ring Floodlight Cam. With these security devices, Ring has created the neighborhood watch for the digital age and continues to keep homes around the world safe. With Ring, you’re always home. For more information, visit www.ring.com.

About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


NetSuite Announces SuiteSuccess: The First Unified Industry Cloud Solution

Oracle Database News - 9 hours 12 min ago
Press Release: NetSuite Announces SuiteSuccess: The First Unified Industry Cloud Solution
SuiteSuccess Transforms the Way NetSuite Builds, Sells, Delivers and Supports Industry Solutions
SuiteSuccess Empowers Customers to Be Ready for What’s Next

SUITEWORLD 2017, LAS VEGAS, Nev.—Apr 25, 2017

Oracle NetSuite Global Business Unit, the world’s leading provider of cloud-based financials / ERP, HR and omnichannel commerce software suites, today announced SuiteSuccess, the first unified industry cloud solution. NetSuite has delivered 12 SuiteSuccess editions for the following eight industries:

  • Advertising, media, publishing
  • Financial technology
  • Manufacturing
  • Nonprofit
  • Retail
  • Service-based businesses
  • Software/internet
  • Wholesale distribution

SuiteSuccess is the culmination of a multi-year transformation effort to combine the NetSuite unified suite, 20 years of industry leading practices, a new customer engagement model, and business optimization methods into a unified, industry cloud solution. SuiteSuccess was engineered to solve unique industry challenges that historically have limited a company’s ability to grow, scale and adapt to change. Most ERP vendors have tried to solve the industry solution problem with templates, rapid implementation methodologies, and custom code. NetSuite took a holistic approach to the problem and productized domain knowledge, leading practices, KPIs, and an agile approach to product adoption. The benefits are faster time to value, increased business efficiency, flexibility, and greater customer success.

Within each industry offering, NetSuite has built unique micro-vertical solutions to address specific market needs. SuiteSuccess has over 300 customers and is available now, expanding globally into more industries and to current NetSuite customers over the course of this year.

“SuiteSuccess is what is next for NetSuite and our customers,” said Jim McGeever, Executive Vice President, Oracle NetSuite Global Business Unit. “When we started SuiteSuccess, we had high expectations, but the results our customers have achieved have gone way beyond our wildest dreams.”

“I was amazed, not only with how fast the project went, but how closely NetSuite worked with us,” said Bill Tingle, CIO of PODS. “We needed to get a system up and running quickly to keep up with our growth and SuiteSuccess fit the bill perfectly.”

The four key pillars of SuiteSuccess are:

  • BUILD. A complete suite to support the modern business including ERP, CRM, PSA, omnichannel commerce, HR, and Business Intelligence (BI), built on the NetSuite cloud platform and continually updated with leading-edge capabilities and technologies to support all eight industries.
  • ENGAGE. Leading practices for each industry and role, including workflows, KPIs, reports, dashboards and metrics, with the flexibility to personalize on the NetSuite platform from the initial sales contact to ongoing support. With these leading practices, value is added at each stage of the engagement.
  • CONSUME. An intelligent, staged approach via NetSuite’s industry ‘stairway’ allows companies to consume capabilities based on their business needs. The re-imagined consumption model drives faster time to value, better ROI and greater user adoption. Companies can now go from zero to cloud in 100 days.
  • OPTIMIZE. Customers benefit from continuous engagement, updated leading practices, new feature releases, value-added SuiteCloud partners, and movement up the stairway. Customers are also always on the latest release.

DirectScale, a software solution designed to seamlessly meet the needs of the direct and social sales industry, implemented NetSuite OneWorld in November of 2016 with SuiteSuccess.

“NetSuite had customers in our industry that were dealing with the same issues as us. Setting up contracts, revenue recognition rules, revenue schedules. Choosing NetSuite, it was a match made in heaven,” said Ansen Hatch, Corporate Controller, DirectScale. “We know that NetSuite can grow with us as quickly as we can grow.”

Sourcingpartner, a leading provider of complete end-to-end sourcing solutions from product concept to after care support, implemented NetSuite in July 2016 with SuiteSuccess.

“Everything went much smoother than our Microsoft implementation. I’ve had nothing but good comments from everyone involved and I’m so glad we switched,” said Richard Gardner, Director for Customer Service, Sourcingpartner.

Precision Disposables, a wholesale distributor of high quality, cost efficient, medical products, implemented NetSuite in January 2017 with SuiteSuccess.

“I have 15 plus years of experience in all the major ERP platforms, including SAP, and I’ve been through six major implementations. This was by far the best,” said Bruce Capagli, COO, Precision Disposables.

To learn more about SuiteSuccess, please visit www.netsuite.com/suitesuccess.

About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


NetSuite Unveils SuitePeople, the Most Unified and Flexible Cloud Core HR Offering

Oracle Database News - 9 hours 27 min ago
Press Release: NetSuite Unveils SuitePeople, the Most Unified and Flexible Cloud Core HR Offering
Addition of Core HR Unifies Financial, Customer, Product, Services and People Data in NetSuite

SUITEWORLD 2017, LAS VEGAS, Nev.—Apr 25, 2017

Oracle NetSuite Global Business Unit (GBU), the industry’s leading provider of cloud financials / ERP, HR and omnichannel commerce software suites, today announced SuitePeople, a new core human resources (HR) offering natively built on NetSuite’s unified cloud suite. NetSuite is the first and only cloud ERP suite to give businesses a single platform to manage mission critical business processes across ERP, Customer Relationship Management (CRM), Professional Services Automation (PSA), omnichannel commerce and now HR.

Traditionally, businesses have had to manage core HR processes in separate modules, disparate systems, or worse, in spreadsheets. Growing businesses can’t afford the cost, time and risk associated with manual workarounds, complex integrations or siloed data. Unlike legacy HRIS solutions, SuitePeople makes people information actionable throughout the organization from a single suite.

“Since its inception, NetSuite has believed that the best way to run a business is with a unified suite. Today, we are incredibly excited to complete that vision with the launch of SuitePeople,” said Jim McGeever, Executive Vice President of the Oracle NetSuite Global Business Unit. “NetSuite has always delivered applications designed to run a complete business not just a department. And that same philosophy was built into the core of SuitePeople, the most unified flexible core HR offering ever available.”

“Having HR data available in NetSuite has transformed how we manage, train and retain our most valuable asset—our people,” said Noah Finn, Managing Partner at Finn Partners, a SuitePeople customer. “We’ve already seen significant business benefits and have only just scratched the surface of what we can do with all of our information in one central repository.”

“Successful organizations seek employee engagement solutions to ensure that their people are empowered and motivated. These solutions must enable everyone from the shop floor to the top floor,” said R “Ray” Wang, Principal Analyst and CEO of Constellation Research, Inc.

SuitePeople weaves people data throughout the suite, giving businesses complete control over their core HR processes. SuitePeople enables employees to request time off, access employee directories and organizational charts, monitor upcoming vacation schedules and new hires, and publicly recognize good work. It also empowers managers and HR professionals to streamline employee information, new hires, onboarding, promotions and compensation changes, all from a single suite.

SuitePeople is expected to provide:

  • Core HR Capabilities. Native organization design, job and position management, workflows and compliance management, all powered by effective-dated employee master data, providing HR with the systems it needs to run a best-in-class operation.
  • HR Analytics. With pre-built reports and dashboards focused on key people metrics and compliance, including a new Chief People Officer dashboard, SuitePeople gives employees the data they need, right at their fingertips.
  • Employee Engagement. Kudos allow all employees to recognize co-workers who have helped drive the business forward, vital in today’s world of distributed workforces.
  • HR compliance. Built-in human resources reports, searches and notifications are paired with tailored compliance features to ensure regulatory requirements are easily met and filing deadlines aren’t missed.
  • Unified access. With NetSuite’s unified data model, people information can be seamlessly analyzed from a single application suite. From HR to finance, services to the shop floor and the warehouse, the suite wins.
  • Global reach. With NetSuite OneWorld supporting 190 currencies, 20 languages, automated tax calculation and reporting in more than 100 countries, and customer transactions in more than 200 countries, fast-growing businesses know they can expand abroad and manage a global workforce.
  • Flexibility. Built on the flexible SuiteCloud platform, SuitePeople allows customers and partners to configure workflows and forms to meet their unique needs without worrying about upgrades.
  • People security. Sophisticated role-based security features allow executives, managers, supervisors and employees to view information appropriate to their operational and management roles, supporting their teams while safeguarding sensitive data.
  • Unmatched ease of use. Built within NetSuite’s intuitive user interface, SuitePeople gives employees a familiar experience that speeds adoption and training.

“Systems are the backbone of your business, but the heartbeat is your people,” said Joseph Fung, VP of Product Development at NetSuite. “With SuitePeople, businesses can achieve a more engaged workforce, improved operational efficiency, and timely strategic decision making. And best of all, because it is part of the unified suite, you can now manage and engage a global workforce better than ever.”

About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


NetSuite Announces Massive Global Expansion Initiatives

Oracle Database News - 9 hours 42 min ago
Press Release: NetSuite Announces Massive Global Expansion Initiatives
Launching More Data Centers, More Field Offices, More Development Centers, More International Product Functionality and a Broader Partner Network

SUITEWORLD 2017, LAS VEGAS, Nev.—Apr 25, 2017

Oracle NetSuite Global Business Unit, the industry's leading provider of cloud financials / ERP, HR and omnichannel commerce software suites, today announced a massive expansion plan to accelerate its international growth. NetSuite customers will benefit from Oracle’s vast global scale and resources. The expansion initiatives will enable Oracle NetSuite Global Business Unit to launch more data centers, more field offices and more development centers globally, which will help to bring the leading cloud ERP suite to more organizations around the world.

“Leveraging Oracle’s global scale, we are able to massively accelerate NetSuite’s vision of bringing a single unified suite to companies all over the world,” said Jim McGeever, Executive Vice President of Oracle NetSuite Global Business Unit. “Oracle’s technology infrastructure and global reach enables us to help ensure customer success no matter where they are located in the world.”

“Oracle’s increased investment in all areas of the NetSuite product and operations offers more opportunities to customers, particularly growing international businesses like PageGroup,” said Mark Hearn, Finance Director of recruitment company PageGroup. “As we continue our global roll-out of NetSuite OneWorld, I am reassured by the even greater capabilities and resources behind the product. A commitment to strong and sustained investment in OneWorld functionality will enable international companies like us to continue to grow with NetSuite in the future.”

Adding Oracle’s global resources to NetSuite’s existing global footprint provides rapid entry and expansion into new markets across three key areas:

  • Data Centers. Oracle NetSuite Global Business Unit plans to more than double its data center footprint from five data centers globally to 11. NetSuite currently operates five data centers, three in North America, one in Amsterdam, Netherlands and one in Dublin, Ireland. NetSuite expects to add a fourth North American data center in Chicago. As part of the global expansion plans, NetSuite will leverage existing Oracle data centers in Europe and Asia. In Europe, NetSuite is scheduled to open a data center in Frankfurt, Germany to remedy the lack of modern cloud computing offerings in the country. In Asia Pacific, NetSuite plans to initially launch facilities in Australia and Singapore, followed by Japan and China. The addition of Oracle data centers to NetSuite’s operations will provide even greater security, redundancy, performance and scalability for new and existing customers across the globe.
  • Field offices. NetSuite expects to double its global presence, expanding from offices in 10 countries to 23 spread across the globe. The addition of Oracle’s field offices significantly increases NetSuite’s ability to meet the rising demand for cloud ERP around the world. NetSuite is establishing a new presence in Argentina, Brazil, Colombia, Chile, Mexico, France, Germany, Sweden, Dubai, China, India, Malaysia and New Zealand. In addition, NetSuite is expanding headcount in existing field offices by over 50 percent to provide better resources for customer demand.
  • Development centers. Oracle NetSuite Global Business Unit is leveraging existing Oracle development centers across India, China and Japan. The development centers will be able to accelerate the development of international, regional and local features and functionality within NetSuite OneWorld.
About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Get to cloud faster with a new database migration service private preview

 


Last week at the Microsoft Data Amp online event, Microsoft announced a private preview for a new database migration service that will streamline the process of migrating on-premises databases to Azure. The service simplifies the migration of existing on-premises SQL Server, Oracle, and MySQL databases to Azure, whether your target database is Azure SQL Database or Microsoft SQL Server in an Azure virtual machine.

The automated workflow with assessment reporting guides you through the necessary changes before performing the migration. When you are ready, the service will migrate the database to SQL Server in Azure or Azure SQL Database. For an opportunity to access this service, please sign up for the limited preview.

At the same time, if you’re interested in a managed database service but need instance-level features enabling greater security, extensibility, and compatibility with SQL Server, consider signing up for the Azure SQL Database private preview as well. This new private preview of our SQL Server-based database-as-a-service can help you move hundreds of apps to the cloud with minimal changes.

You can sign up for one or both previews at aka.ms/sqldatabase-migrationpreview.

For more information about all the announcements made at Data Amp, get the full scoop in this Data Amp blog. You can also watch videos from the event and other on-demand content at the Data Amp website.


Create a Continuous Deployment Pipeline with Node.js and Jenkins

NorthScale Blog - 10 hours 37 min ago

Previously I had written about using Jenkins for continuous deployment of Java applications, inspired by a keynote demonstration that I had developed for Couchbase Connect 2016.  I understand that Java isn’t the only popular development technology that exists right now.  Node.js is a very popular technology and a perfect candidate to be plugged into a continuous deployment pipeline using Jenkins.

We’re going to see how to continuously deploy a Node.js application with Jenkins based on changes made to a GitHub repository.

So let’s figure out the plan here.  We’re going to be using an already existing Node.js repository that I had uploaded to GitHub a while back.  When changes are made to this repository, Jenkins will build the application and deploy or run the application.  Because of the nature of Node.js, the build process will consist of making sure the NPM modules are present.

The Requirements

There are a few software requirements that must be met in order to be successful with this guide. They are as follows:

  • Node.js
  • Java (Jenkins is a Java application)
  • Jenkins
  • Couchbase Server

Since this is a Node.js pipeline, of course we’ll need Node.js installed. However, since Jenkins is a Java application, we’ll also need Java installed. My sample application does use Couchbase, but that won’t be the focus of this guide. However, if you’re using the same application I am, you’ll need Couchbase Server installed.

All software listed should reside on the same host.  In a production environment you will probably want them dispersed across multiple machines.

Installing and Configuring Couchbase Server as the NoSQL Database

At this point you should have already downloaded Couchbase Server. After installing and configuring it, you’ll need to create a Bucket called restful-sample, and that Bucket should have a primary index.

For instructions on configuring Couchbase and getting this Bucket created, check out a previous tutorial I wrote on the subject.  It is actually the tutorial that went with creating this Couchbase, Express Framework, Angular, and Node.js (CEAN) stack application.
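If you’d rather script those steps than click through the administrative dashboard, something like the following should work against a default local install; the Administrator / password credentials here are placeholders for whatever you chose during setup:

couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket restful-sample --bucket-type couchbase --bucket-ramsize 100

Then, from the cbq shell or the Query Workbench, give the Bucket its primary index:

CREATE PRIMARY INDEX ON `restful-sample`;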

With Couchbase ready to go, we can focus on configuring Jenkins and creating our workflow.

Configuring Jenkins with the Necessary Plugins

You should have already downloaded Jenkins by now. If you haven’t, go ahead and obtain the WAR file from the Jenkins website.

To start Jenkins, execute the following command from your Command Prompt or Terminal:

java -jar jenkins.war --httpPort=8080

This will make Jenkins accessible from a web browser at http://localhost:8080.  Upon first launch, you’ll be placed in a configuration wizard.

Jenkins Configuration Part 1

The first screen in this configuration wizard will ask you for the password that Jenkins generates.  Find it in the location presented on the screen.

The second screen will ask you which plugins you’d like to install.

Jenkins Configuration Part 2

For now, we’re going to install the suggested plugins.  We’ll be installing extra plugins later.

The third screen will ask us to create our first administrative user. Technically, the generated password you’ve been using belongs to an administrative user, but you may want to create a new one.

Jenkins Configuration Part 3

After you create a user, Jenkins is ready to go. However, we are going to need one more plugin, and which one can vary depending on how we wish to build and deploy the Node.js application.

From the main Jenkins screen, choose to Manage Jenkins to see a list of administration options.

Manage Jenkins

What we care about is managing the available plugins.  After choosing Manage Plugins we want to search for and install a plugin by the name of Post-Build Script.

Install Jenkins Post-Build Script Plugin

This plugin allows us to execute shell commands or scripts after the build stage has completed.  Because in this example we’ll be building and deploying to the same host, we can run everything locally via shell commands.  In a production environment you might want to use the SSH plugin to migrate the code to a remote server and run it there.

With the plugins available, let’s create our continuous deployment workflow for Node.js in Jenkins.

Creating a Jenkins Continuous Deployment Workflow for Node.js

Just to reiterate, our goal here is to create a workflow that will pull a project from GitHub, build it by installing all the dependencies, and deploy it by running it on a server, in this case our local machine.

Start by creating a new item, otherwise known as a new job or workflow.

Jenkins Node.js Freestyle Project

We’re going to be creating a Freestyle Project, but you can give it any name you want.  There are three things that need to be done on the next screen.

The source of our workspace will come from GitHub.  In your own project it can come from elsewhere, but for this one we need to define our source control information.

Jenkins Node.js Source Control

The GitHub project is one that I had previously created and written about, as mentioned before.  The project can be found at:

https://github.com/couchbaselabs/restful-angularjs-nodejs

Now in a production environment you’ll probably want to set up GitHub hooks to trigger the job process, but since this is all on localhost, GitHub won’t allow it.  Instead we’ll be triggering the job manually.

Jenkins Node.js Build Step

After configuring the source control section we’ll need to configure the build step.  For Node.js, building only consists of installing dependencies, but you could easily have unit testing and other testing in this step as well.  In my previous Java example, the build step had a little more to it.  In this Node.js example we have the following:

npm install
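
If the project also defined a test script in its package.json (hypothetical for this repository, which doesn’t ship one), the build step could just as easily run the test suite after installing dependencies:

npm install
npm test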

Finally we get to define what happens after the project is built.

Jenkins Node.js Post Build Step

In this example we will be deploying the application locally on our machine.  That probably won’t be the case in your production scenario.

So you’ll notice in our post-build step we have the following commands:

npm stop
npm start

Before starting the application we are stopping any already running instance of it.  Once stopped we can start the new version.  However, where do these stop and start tasks come from?

"scripts": {
    "start": "forever start app.js",
    "stop": "forever stopall"
}

The above was taken from the GitHub project’s package.json file.  Each task starts and stops a forever script for Node.js.
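
Note that both scripts assume the forever module is available on the machine running the job.  If it isn’t (an assumption about your environment, not a step from the original project), you’d install it globally first:

npm install -g forever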

Go ahead and try to run the job choosing Build Now from the list of options.  It should obtain the project, install the dependencies, and make the project available at http://localhost:3000.  Just make sure Couchbase Server is running for this project, otherwise you’ll get errors.

Conclusion

You just saw how to use Jenkins to continuously deploy your Node.js applications based on changes that have been made in GitHub.  A similar version of this guide was created for Java applications, called Create a Continuous Deployment Pipeline with Jenkins and Java, which is worth reviewing if you’re a Java developer.

If you’re interested in using Jenkins to deploy your Node.js application as Docker containers, check out a previous tutorial that I wrote on the subject.

Want more information on using Node.js with Couchbase NoSQL?  Check out the Couchbase Developer Portal for documentation and examples.

The post Create a Continuous Deployment Pipeline with Node.js and Jenkins appeared first on The Couchbase Blog.

Categories: Architecture, Database

Authorization and Authentication with RBAC (Part 2)

NorthScale Blog - Mon, 04/24/2017 - 19:30

Authorization and authentication are important to Couchbase. In March, I blogged about some of the new Role Based Access Control (RBAC) that we are showing in the Couchbase Server 5.0 Developer Builds. This month, I’d like to go into a little more detail now that the April Couchbase Server 5.0 Developer Build is available (make sure to click the “Developer” tab).

Authentication and authorization

In past versions of Couchbase, buckets were secured by a password. In 5.0, bucket passwords for authorization are gone. You can no longer create a “bucket password” for authorization. Instead, you must create one (or more) users that have varying levels of authorization for that bucket. Notice that there is no “password” field anymore (not even in the “Advanced bucket settings”):

Create a new Couchbase bucket - no password for authorization

So now, you no longer have to hand out a password that gives complete access to a bucket. You can fine-tune bucket authorization, and give out multiple sets of credentials with varying levels of access. This will help you tighten up security, and reduce your exposure.

Note: The administrator user still exists, and has permission to do everything. So I can still run N1QL queries (for instance) on that bucket while logged in as an administrator account. However, this is not the account you should be using from your clients.

Creating an authorized user

To create a new user, you must be logged in as an administrator (or as a user that has an Admin role). Go to the “Security” tab, and you’ll be able to see a list of users, and be able to add new ones.

Create a new user by clicking “ADD USER”. Enter the information for the user. You may want to create a user for a person (e.g. “Matt”), or you may want to create a user for a service (e.g. “MyAspNetApplication”). Make sure to enter a strong password, and then select the appropriate roles for the user you want to create.

For example, let’s create a user “Matt” that only has access to run SELECT queries on the bucket I just created. In “Roles”, I expand “Query Roles”, then “Query Select”, and check the box for “mynewbucket”, and then “Save” to finalize the user.

Create a new user with authorization to run a select query

Authorization in action

When I log out of the administrator account, and log back in as “Matt”, I can see that the authorization level I have is severely restricted. Only “Dashboard”, “Servers”, “Settings”, and “Query” are visible. If I go to “Query” I can execute SELECT 1;

Execute SELECT query logged in with only Query authorization

If I try something more complex, like SELECT COUNT(1) FROM mynewbucket, I’ll get an error message like:

[
  {
    "code": 13014,
    "msg": "User does not have credentials to access privilege cluster.bucket[mynewbucket].data.docs!read. Add role Data Reader[mynewbucket] to allow the query to run."
  }
]

So, it looks like I have the correct authentication to log in, and I have the correct authorization to execute a SELECT, but I don’t have the correct authorization to actually read the data. I’ll go back in as admin, and add Data Reader authorization.

User now has authorization for two roles

At this point, when I log in as “Matt”, SELECT COUNT(1) FROM mynewbucket; will work. If you are following along, try SELECT * FROM mynewbucket;. You’ll get an error message that no index is available. But if you try to CREATE INDEX, you’ll need yet another permission to do that. You get the idea.

New N1QL functionality

There’s some new N1QL functionality to go along with the new authentication and authorization features.

GRANT and REVOKE ROLE

You can grant and revoke roles with N1QL commands. You need Admin access to do this.

Here’s a quick example of granting SELECT query authorization to a user named “Matt” on a bucket called “mynewbucket”:

GRANT ROLE query_select(mynewbucket) TO Matt;

And likewise, you can REVOKE a role doing something similar:

REVOKE ROLE query_select(mynewbucket) FROM Matt;

Creating users with REST

There is no way (currently) to create users with N1QL, but you can use the REST API to do this. Full documentation is coming later, but here’s how you can create a user with the REST API:

  • PUT to the /settings/rbac/users/builtin/<username> endpoint.

  • Use admin credentials for this endpoint (e.g. Administrator:password with basic auth)

  • The body should contain:

    • roles=<role1,role2,…​,roleN>

    • password=<password>

Below is an example. You can use cURL, Postman, Fiddler, or whatever your favorite tool is to make the request.

URL: PUT http://localhost:8091/settings/rbac/users/builtin/restman

Headers: Content-Type: application/x-www-form-urlencoded
Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==

Body: roles=query_select[mynewbucket],query_update[mynewbucket]&password=password

The above assumes that you have an admin user/password of Administrator/password (hence the basic auth token of QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==).
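
Put together as a single cURL command, the request above would look something like this (a sketch using the same assumed Administrator/password credentials):

curl -X PUT http://localhost:8091/settings/rbac/users/builtin/restman \
  -u Administrator:password \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'roles=query_select[mynewbucket],query_update[mynewbucket]&password=password'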

After executing that, you’ll see a new user named “restman” with the two specified permissions.

Create a new user with a REST command

Wait, there’s more!

The RBAC system is far too rich to cover in a single blog post, and full documentation is on its way. In the meantime, here are some details that might help you get started with the preview:

  • You may have noticed the all option in the screenshots above. You can give a user roles on a bucket-by-bucket basis, or you can give permission to all buckets (even buckets that haven’t been created yet).

  • I covered FTS permissions in the previous blog post, but there are permissions that cover just about everything: views, bucket administration, backup, monitoring, DCP, indexes, etc.

  • You can’t create buckets with a password anymore. The equivalent is to instead create a user with the same name as the bucket, and give it authorization to a role called “Bucket Full Access”. This will be useful for upgrading and transitioning purposes.

We still want your feedback!

Stay tuned to the Couchbase Blog for information about what’s coming in the next developer build.

Interested in trying out some of these new features? Download Couchbase Server 5.0 April 2017 Developer Build today!

The 5.0 release is fast approaching, but we still want your feedback!

Bugs: If you find a bug (something that is broken or doesn’t work how you’d expect), please file an issue in our JIRA system at issues.couchbase.com or submit a question on the Couchbase Forums. Or, contact me with a description of the issue. I would be happy to help you or submit the bug for you (my Couchbase handlers let me take selfies on our cartoonishly big couch when I submit good bugs).

Feedback: Let me know what you think. Something you don’t like? Something you really like? Something missing? Now you can give feedback directly from within the Couchbase Web Console. Look for the feedback icon at the bottom right of the screen.

In some cases, it may be tricky to decide if your feedback is a bug or a suggestion. Use your best judgement, or again, feel free to contact me for help. I want to hear from you. The best way to contact me is either Twitter @mgroves or email me matthew.groves@couchbase.com.

The post Authorization and Authentication with RBAC (Part 2) appeared first on The Couchbase Blog.

Categories: Architecture, Database

Data Synchronization Across iOS Devices Using Couchbase Mobile

NorthScale Blog - Mon, 04/24/2017 - 18:30

This post looks at how you get started with data replication/synchronization across iOS devices using Couchbase Mobile. The Couchbase Mobile stack comprises Couchbase Server, Sync Gateway and the Couchbase Lite embedded NoSQL database. In an earlier post, we discussed how Couchbase Lite can be used as a standalone embedded NoSQL database in iOS apps. This post will walk you through a sample iOS app, used in conjunction with a Sync Gateway, that demonstrates the core concepts of Push & Pull Replication, Authentication & Access Control, Channels and Sync Functions.

While we will be looking at data synchronization in the context of an iOS app in Swift, everything that’s discussed here applies equally to mobile apps developed on any other platform (Android, iOS (ObjC), Xamarin). Deviations will be specified as such.

NOTE:  We will be discussing Couchbase Mobile v1.4 which is the current production release. There is a newer Developer Preview version 2.0 of Couchbase Mobile that has a lot of new and exciting features.

Couchbase Mobile

The Couchbase Mobile Stack comprises the Couchbase Server, Sync Gateway and Couchbase Lite embedded NoSQL Database. This post will discuss the basics of NoSQL data replication and synchronization using Couchbase Mobile. I’ll assume you’re familiar with developing iOS Apps, basics of Swift, some basics of NoSQL and have some understanding of Couchbase. If you want to read up more on Couchbase Mobile, you can find lots of resources at the end of this post.

Couchbase Sync Gateway

The Couchbase Sync Gateway is an Internet-facing synchronization mechanism that securely syncs data across devices as well as between devices and the cloud.

It exposes a web interface that provides

  • Data Synchronization across devices and the cloud
  • Access Control
  • Data Validation

You can use any HTTP client to further  explore the interface. Check out this post on using Postman for querying the interface.

There are three main concepts related to data replication or synchronization using the Sync Gateway –

Channel

A channel can be viewed as a combination of a tag and a message queue. Every document can be assigned to one or more channels, and those channels specify who can access the document. Users are granted access to one or more channels and can only read documents assigned to those channels. For details, check out the documentation on Channels.

 

Sync Function

The sync function is a JavaScript function that runs on the Sync Gateway. Every time a new document, revision or deletion is added to a database, the sync function is called. The sync function is responsible for:

  • Validating the document
  • Authorizing the change
  • Assigning the document to channels
  • Granting users access to channels

For details, check out the documentation on the Sync Function.
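
As a minimal sketch of these four responsibilities (hypothetical, and much simpler than the sync function used later in this post):

function (doc, oldDoc) {
  // Validate the document
  if (!doc.type) {
    throw({forbidden: "Document must have a type"});
  }

  // Authorize the change: only the owner may write this document
  requireUser(doc.owner);

  // Assign the document to a channel derived from its owner
  channel("channel-" + doc.owner);

  // Grant the owner access to that channel
  access(doc.owner, "channel-" + doc.owner);
}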

 

Replication

Replication, a.k.a. synchronization, is the process of synchronizing changes between a local database and a remote Sync Gateway. There are two kinds –

  • Push Replication is used to push changes from the local to the remote database
  • Pull Replication is used to pull changes from the remote to the local database

For details, check out the documentation on replications.

 

Installation of Couchbase Sync Gateway

Follow the installation guide to install the Sync Gateway.

Launch your Sync Gateway with the following config file. The exact location of the config file will depend on the platform; please refer to the install guide for details.

Sync Gateway Config File

{
  "log": ["*"],
  "CORS": {
     "Origin":["*"]
  },
  "databases": {
    "demo": {
      "server": "walrus:",
      "bucket": "default",
      "users": { 
        "GUEST": { "disabled": true, "admin_channels": ["*"] } ,
        "joe": {"password":"password" ,"disabled": false, "admin_channels":["_public","_joe"]} ,
        "jane":{"password":"password" ,"disabled": false, "admin_channels":["_public","_jane"]}
      },
      "unsupported": {
        "user_views": {
          "enabled":true
        }
      },
      "sync": `
      function (doc, oldDoc) {
        // Check if doc is being deleted
        if (doc._deleted == undefined) {
          // Validate current version has relevant keys
          validateDocument(doc);
        }
        else {
          // Validate  old document has relevant keys
          validateDocument(oldDoc);
        }

        var docOwner = (doc._deleted == undefined) ? doc.owner : oldDoc.owner;
    

        var publicChannel = "_public";

        var privateChannel = "_"+docOwner;

        // Grant user read access to public channels and user's own channel
        access(docOwner,[publicChannel,privateChannel]);


        // Check if this was a doc update (as opposed to a doc create or delete)
        if (doc._deleted == undefined && oldDoc != null && oldDoc._deleted == undefined) {
            if (doc.tag != oldDoc.tag) {
                throw({forbidden: "Cannot change tag of document"});
            }
        }


        // Check if new/updated document is tagged as "public" 
        var docTag =  (doc._deleted == undefined) ? doc.tag : oldDoc.tag;
    
        if (doc._deleted == undefined) {
          if (docTag == "public") {
            // All documents tagged public go into the "public" channel, which is open to all
            channel(publicChannel);
          }
          else {
            // Ensure that the owner of the document is the user making the request
            requireUser(docOwner);

            // All non-public tagged docs go into a user-specific channel
            channel(privateChannel);
          }
        }
        else {
          channel(doc.channels);
        }


        function validateDocument (doc) {
          // Basic validation of document
          if (!doc.tag) {
            // Every doc must include a tag
            throw({forbidden: "Invalid document type: Tag not provided " + doc.tag});
          }

          if (!doc.owner) {
            // Every doc must include an owner
            throw({forbidden: "Invalid document type: Owner not provided " + doc.owner});
          }
        }
      }
      `
    }
  }
}

Here are some key points to note in the configuration file:

  • Line 8: The “walrus:” value for “server”  indicates that the Sync Gateway should persist data in-memory and is not backed by a Couchbase server.
  • Line 11: Guest user access is disabled
  • Line 12-13: There are two users, “Jane” and “Joe” configured in the system. Both users have access to a “_public” channel and each has access to their own private channel.
  • Line 22-100: A simple sync function that does the following
    1. Line 29-36: Document validation to ensure that the document contains user-defined “tag” and “owner” properties
      1. The “tag” property is used to specify whether the document is publicly available to any user or private to a single user
      2. The “owner” property identifies the user that owns the document
    2. Line 46: Gives the user access to the “_public” channel and a private channel (identified using the owner of the document)
    3. Line 51-56: If it’s a document update, verify that the “tag” property is unchanged across revisions
    4. Line 66: Assign all documents with the “public” tag to the “_public” channel
    5. Line 72: Assign all documents with a tag other than “public” to the private channel
      1. Line 75: For private channel documents, first verify that the document’s owner is the one making the request

Couchbase Lite

Couchbase Lite is an embedded NoSQL database that runs on devices. It can be used in several deployment modes. The Getting Started with Couchbase Lite post discusses the standalone deployment mode. Couchbase Lite can also be used in conjunction with a remote Sync Gateway, which allows it to sync data across devices. This post discusses the deployment mode using a Sync Gateway.

There are many options to integrate Couchbase Lite framework into your iOS App. Check out our Couchbase Mobile Getting Started Guide for the various integration options.

Native API

Couchbase Lite exposes a native API for iOS, Android and Windows that allows apps to easily interface with the Couchbase platform. As an app developer, you do not have to worry about the internals of the Couchbase Lite embedded database; you can instead focus on building your awesome app. The native API allows you to interact with the Couchbase Lite framework just as you would interact with other platform frameworks/subsystems. Again, we will be discussing Couchbase Mobile v1.4 in this blog post. You can get a full listing of the APIs on our Couchbase Developer site.

Demo iOS App

Please download the demo Xcode project from this GitHub repo and switch to the “syncsupport” branch. We will use this app as an example in the rest of the blog. This app uses CocoaPods to integrate the Couchbase Lite framework.

git clone git@github.com:couchbaselabs/couchbase-lite-ios-standalone-sampleapp.git
git checkout syncsupport

 

Synchronization of Documents Across Users

  1. Build and launch the app. You should be presented with a Login alert.
  2. Enter user “jane” and password of “password”. This user was configured in the Sync Gateway config file.
  3. Add the first document by tapping on the “+” button on the top right hand corner.
    1. Give a name to the document and a one line description.
    2. Use tag “private”.
    3. Behind the scenes, the Push Replicator pushes the document to the Sync Gateway, where it is processed by the Sync Function. Based on the tag, the Sync Function assigns the document to the user’s private channel.
  4. Add a second document by tapping on the “+” button on the top right hand corner.
    1. Give a name to the document and a one line description.
    2. Use tag “public”.
    3. Behind the scenes, the Push Replicator pushes the document to the Sync Gateway, where it is processed by the Sync Function. Based on the public tag, the Sync Function assigns the document to the public channel.
  5. Now “log off” Jane. You will be presented with the Login alert again.
  6. Enter user “joe” and password of “password”. This user was also configured in the Sync Gateway config file.
  7. The public document that was created by Jane will be listed.
    1. Behind the scenes, the Pull Replicator pulls all the documents from Joe’s private channel and the public channel. The public document that was created by Jane is pulled. However, since Joe does not have access to Jane’s private channel, the private document created by Jane is not pulled.

To verify the state of things on the Sync Gateway, you can query the Admin REST interface using Postman or any HTTP client.

This is the cURL request to the Sync Gateway:

curl -X GET \
 'http://localhost:4985/demo/_all_docs?access=false&channels=false&include_docs=true' \
 -H 'accept: application/json' \
 -H 'cache-control: no-cache' \
 -H 'content-type: application/json'

The Response from the Sync Gateway shows the two documents assigned to the public and Jane’s private channel respectively

{
  "rows": [
    {
      "key": "-6gCouN6jj0ScYgpMD7Qj1a",
      "id": "-6gCouN6jj0ScYgpMD7Qj1a",
      "value": {
        "rev": "1-dfa6d453a1515ee3dd64012ccaf53046",
        "channels": [
          "_jane"
        ]
      },
      "doc": {
        "_id": "-6gCouN6jj0ScYgpMD7Qj1a",
        "_rev": "1-dfa6d453a1515ee3dd64012ccaf53046",
        "name": "doc101",
        "overview": "This is a private doc from Jane",
        "owner": "jane",
        "tag": "private"
      }
    },
    {
      "key": "-A2wR44pAFCdu1Yufx14_1S",
      "id": "-A2wR44pAFCdu1Yufx14_1S",
      "value": {
        "rev": "1-1a8cd0ea3b7574cf6f7ba4a10152a466",
        "channels": [
          "_public"
        ]
      },
      "doc": {
        "_id": "-A2wR44pAFCdu1Yufx14_1S",
        "_rev": "1-1a8cd0ea3b7574cf6f7ba4a10152a466",
        "name": "doc102",
        "overview": "This is a public doc shared by Jane",
        "owner": "jane",
        "tag": "public"
      }
    }
  ],
  "total_rows": 2,
  "update_seq": 5
}

 

Exploring the Code

Now, let’s examine relevant code snippets of the iOS demo app.

Opening/Creating a per-user Database

Open the DocListTableViewController.swift file and locate the openDatabaseForUser function.

do {
    // 1: Set Database Options
    let options = CBLDatabaseOptions()
    options.storageType = kCBLSQLiteStorage
    options.create = true

    // 2: Create a DB for the logged-in user if it does not exist, else return a handle to the existing one
    self.db = try cbManager.openDatabaseNamed(user.lowercased(), with: options)
    self.showAlertWithTitle(NSLocalizedString("Success!", comment: ""), message: NSLocalizedString("Database \(user) was opened succesfully at path \(CBLManager.defaultDirectory())", comment: ""))

    // 3. Start replication with remote Sync Gateway
    startDatabaseReplicationForUser(user, password: password)
    return true
}
catch {
    // handle error
}

  1. Specify the options to associate with the database. Explore the other options on CBLDatabaseOptions class.
  2. Create a database with the name of the current user. This way, every user of the app has their own local copy of the database. If a database with that name already exists, a handle to the existing database is returned; otherwise a new one is created. Database names must be lowercase. By default, the database is created in the default path (/Library/Application Support). You can specify a different directory when you instantiate the CBLManager class.
  3. Start the Database Replication process for the given user credentials. We will discuss the replication code in detail in the following sections.

Fetching Documents

Open the DocListTableViewController.swift file and locate getAllDocumentForUserDatabase  function.

do {
    // 1. Create Query to fetch all documents. You can set a number of properties on the query object
    liveQuery = self.db?.createAllDocumentsQuery().asLive()

    guard let liveQuery = liveQuery else {
        return
    }

    // 2: You can optionally set a number of properties on the query object.
    // Explore other properties on the query object
    liveQuery.limit = UInt(UINT32_MAX) // All documents

    //   query.postFilter =

    // 3. Start observing for changes to the database
    self.addLiveQueryObserverAndStartObserving()

    // 4: Run the query to fetch documents asynchronously
    liveQuery.runAsync({ (enumerator, error) in
        switch error {
        case nil:
            // 5: The "enumerator" is of type CBLQueryEnumerator and is an enumerator for the results
            self.docsEnumerator = enumerator
        default:
            self.showAlertWithTitle(NSLocalizedString("Data Fetch Error!", comment: ""), message: error.localizedDescription)
        }
    })
}
catch {
    // handle error
}

  1. Get a handle to the database with the specified name.
  2. Create a query object. This query is used to fetch all documents. The Sync Function on the Sync Gateway will ensure that documents are pulled from only the channels that are accessible to the user. You can create a regular query object or a “live” query object. The “live” query object is of type CBLLiveQuery, which automatically refreshes every time the database changes in a way that affects the query results. The query has a number of properties that can be tweaked in order to customize the results. Try modifying the properties and seeing the effect on results.
  3. You will have to explicitly add an observer to the Live Query object to be notified of changes to the database. We will discuss this more in the section on “Observing Local & Remote Synchronized Changes to Documents”. Don’t forget to remove the observer and stop observing changes when you no longer need it!
  4. Execute the query asynchronously. You can also do it synchronously if you prefer, but it’s recommended to run asynchronously if the data sets are large.

Once the query executes successfully, you get a CBLQueryEnumerator object. The query enumerator allows you to enumerate the results. It lends itself very well as a data source for the Table View that displays the results.

Observing Local & Remote Synchronized Changes to Documents 

Open the DocListTableViewController.swift file and locate the addLiveQueryObserverAndStartObserving function.

Changes to the database could be as a result of the user’s actions on the local device or could be a result of changes synchronized from other devices.

// 1. iOS Specific. Add observer to the Live Query object
liveQuery.addObserver(self, forKeyPath: "rows", options: NSKeyValueObservingOptions.new, context: nil)

// 2. Start observing changes
liveQuery.start()

  1. In order to be notified of changes to the database that affect the query results, add an observer to the Live Query object. Here we leverage iOS’s Key-Value Observing (KVO) pattern: add a KVO observer to the Live Query object to start observing changes to its “rows” property. This is handled through appropriate event handler APIs on other platforms, such as the addChangeListener function on Android/Java.
  2. Start observing changes.

Whenever there is a change to the database that affects the “rows” property of the LiveQuery object, your app will be notified of changes. When you receive the notification of change, you can update your UI, which in this case would be reloading the tableview.

if keyPath == "rows" {
    self.docsEnumerator = self.liveQuery?.rows
    tableView.reloadData()
}
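
For context, this snippet typically lives inside the standard KVO callback on the view controller; a minimal sketch (the surrounding method is assumed, not copied from the demo app) might look like:

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey : Any]?,
                           context: UnsafeMutableRawPointer?) {
    if keyPath == "rows" {
        // Refresh the backing enumerator and redraw the table with the new results
        self.docsEnumerator = self.liveQuery?.rows
        tableView.reloadData()
    }
}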

 

Authentication of Replication Requests

Open DocListTableViewController.swift file and locate startDatabaseReplicationForUser function.

All Replication requests must be authenticated. In this app, we use HTTP Basic Authentication.

let auth = CBLAuthenticator.basicAuthenticator(withName: user, password: password)

There are several Authenticator types, namely Basic, Facebook, OAuth 1, Persona, and SSL/TLS certificate.

Pull Replication

Open DocListTableViewController.swift file and locate startPullReplicationWithAuthenticator function.

// 1: Create a Pull replication to start pulling from remote source
let pullRepl = db?.createPullReplication(URL(string: kDbName, relativeTo: URL.init(string: kRemoteSyncUrl))!)

// 2. Set Authenticator for pull replication
pullRepl?.authenticator = auth

// 3. Continuously look for changes
pullRepl?.continuous = true

// Optionally, set channels from which to pull
// pullRepl?.channels = [...]

// 4. Start the pull replicator
pullRepl?.start()

  1. Create a Pull Replicator to pull changes from the remote Sync Gateway. The kRemoteSyncUrl is the URL of the remote database endpoint on the Sync Gateway.
  2. Associate the Authenticator with the Pull Replication. Optionally, one can set the channels from which documents should be pulled.
  3. Setting replication to “continuous” will allow change updates to be pulled indefinitely unless explicitly stopped or the database is closed.
  4. Start the Pull Replication.

Push Replication

Open DocListTableViewController.swift file and locate startPushReplicationWithAuthenticator function.

// 1: Create a push replication to start pushing to remote source
let pushRepl = db?.createPushReplication(URL(string: kDbName, relativeTo: URL.init(string: kRemoteSyncUrl))!)

// 2. Set Authenticator for push replication
pushRepl?.authenticator = auth

// 3. Continuously push changes
pushRepl?.continuous = true

// 4. Start the push replicator
pushRepl?.start()

  1. Create a Push Replicator to push changes to the remote Sync Gateway. The kRemoteSyncUrl is the URL of the remote database endpoint on the Sync Gateway.
  2. Associate the Authenticator with the Push Replication.
  3. Setting replication to “continuous” will allow change updates to be pushed indefinitely unless explicitly stopped or the database is closed.
  4. Start the Push Replication.

Monitoring the Status of the Replication

Open the DBListTableViewController.swift file and locate addRemoteDatabaseChangesObserverAndStartObserving function.

// 1. iOS Specific. Add observer to the Notification Center to observe replicator changes
NotificationCenter.default.addObserver(forName: NSNotification.Name.cblReplicationChange, object: db, queue: nil) {
    [unowned self] (notification) in

    // Handle changes to the replicator status - such as displaying a progress
    // indicator when the status is .active
}

 

You can monitor the status of the replication by adding an observer to the iOS Notification Center to be notified of cblReplicationChange notifications. You could use the notification handler, for instance, to display appropriate progress indicators to the user. This is handled through appropriate event handler APIs on other platforms, such as the addChangeListener function on Android/Java.
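
For instance, the body of that closure might look something like this (a sketch assuming CBL 1.4’s CBLReplication API, not code taken from the demo app):

if let replication = notification.object as? CBLReplication {
    if replication.status == .active {
        // e.g. show a progress indicator while changes are in flight
        print("Replicating \(replication.completedChangesCount) of \(replication.changesCount) changes")
    } else {
        // e.g. hide the indicator when idle, offline or stopped
        print("Replication status changed: \(replication.status.rawValue)")
    }
}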

What Next?

We would love to hear from you. So if you have questions or feedback, feel free to reach out to me at Twitter @rajagp or email me priya.rajagopal@couchbase.com. If you would like to enhance the demo app, please submit a pull request to the Github Repo.

The Couchbase Mobile Dev Forums is another great place to get your mobile-related questions answered. Check out the development portal for details on the Sync Gateway and Couchbase Lite. Everything that was discussed here is in the context of Couchbase Mobile 1.4. There are a lot of new and exciting changes coming up in Couchbase Mobile 2.0. Be sure to check out the Developer Preview version 2.0 of Couchbase Mobile.

The post Data Synchronization Across iOS Devices Using Couchbase Mobile appeared first on The Couchbase Blog.

Categories: Architecture, Database

Testing your Sync Gateway functions with synctos

NorthScale Blog - Mon, 04/24/2017 - 11:28

Joel Andrews is a polyglot developer living on the rainy west coast of Canada. He fills a variety of roles at Kashoo including backend developer, database admin, DevOps guy, product owner and occasionally web and Android frontend developer. He has a deep and abiding love for Couchbase (especially Couchbase Mobile) that makes him completely unbiased when discussing the pros and cons of any data storage solution.

In my previous blog post, I introduced synctos, a handy open source tool that we built and use at Kashoo to ease the process of creating comprehensive sync functions for Couchbase Sync Gateway. Near the end of that post I alluded to the fact that synctos includes a built-in test-helper module that helps you to write tests that validate your document definitions. It’s always a good idea to test your code/configuration for bugs, and your synctos document definitions are no different.

In this post I will walk you through what it takes to get started writing your own specifications/test cases. Before continuing, I suggest reading the introductory post, if you haven’t already, to ensure you have a general understanding of what synctos is all about and how it works.

First, you’ll need to install Node.js to use synctos. Once installed, you should create an empty project directory with a new file called “package.json”:

{
  "name": "synctos-test-examples",
  "devDependencies": {
    "expect.js": "^0.3.1",
    "mocha": "^3.2.0",
    "simple-mock": "^0.7.3",
    "synctos": "1.x"
  },
  "scripts": {
    "test": "./generate-sync-function.sh && node_modules/.bin/mocha"
  }
}

This file tells the Node.js package manager (npm) which dependencies synctos and your test cases will need: expect.js for test assertions, mocha for running your tests, and simple-mock for mocking/stubbing functions from the Sync Gateway sync function API. It also specifies the “test” command that will execute your tests with mocha.

Next, run the following command from the root of your project directory to download the packages it needs to its local “node_modules” directory:

npm install

The project will need some document definitions, so create “my-example-doc-definitions.js” in the project’s root directory:

{
  exampleDoc: {
    typeFilter: simpleTypeFilter,
    channels: function(doc, oldDoc) {
      return {
        write: [ 'write-' + doc._id ]
      };
    },
    propertyValidators: {
      foo: {
        type: 'string',
        required: true,
        regexPattern: /^[a-z]{3}$/
      }
    }
  }
}

As you can see, this is a very simple document definition for demonstration purposes. Your own document definitions will undoubtedly be larger and more complex, but the same principles apply. The file defines a single document property (a required string called “foo” whose value must satisfy the specified regular expression), a simple type filter that determines the document’s type based on the contents of the implicit “type” property (i.e., a document’s “type” property must be “exampleDoc” to match this document type), and document channels that are constructed dynamically from the document ID.

Now create a new file called “generate-sync-function.sh” in the root directory of your project:

#!/bin/sh -e

# Determine the current script's directory, so it can execute commands from the root of the project no matter where it was run from
projectDir="$(dirname "$0")"

# This is where the generated sync function will be created
outputDir="$projectDir/build"

# This is where the synctos package was downloaded by npm
synctosDir="$projectDir/node_modules/synctos"

# Ensure the build directory exists
mkdir -p "$outputDir"

# Generate the sync function from the document definitions file
"$synctosDir"/make-sync-function "$projectDir/my-example-doc-definitions.js" "$outputDir/my-example-sync-function.js"

This file will be used to generate the sync function in the project’s “build” directory as “my-example-sync-function.js”. Make sure “generate-sync-function.sh” is executable by running:

chmod a+x generate-sync-function.sh

At this point, you have everything you need to generate a sync function from the document definitions file:

./generate-sync-function.sh

If you look in the “build” directory, you will find a fully-formed Sync Gateway sync function file called “my-example-sync-function.js”. If you felt so inclined, you could insert the sync function’s contents into a Sync Gateway configuration file now. When doing so, remember to surround the sync function with backticks/backquotes (`), since it is more than one line long.
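
For example, the relevant fragment of such a config file might look like the following (a sketch; the database name and the walrus bucket are placeholders, and the elided line stands in for the generated sync function’s contents):

{
  "databases": {
    "mydb": {
      "server": "walrus:",
      "sync": `
        ...paste the contents of build/my-example-sync-function.js here...
      `
    }
  }
}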

Now it’s time to validate that sync function! Create a directory called “test” in the root of the project and add a file called “my-example-spec.js”:

var testHelper = require('../node_modules/synctos/etc/test-helper.js');
var errorFormatter = testHelper.validationErrorFormatter;

describe('my example document definitions', function() {
  // Test cases go here!
});

This is the skeleton of the specification file. The first two lines of the file import the synctos test-helper module and the error message formatter, which will greatly ease the process of writing test cases. The “describe” block will encompass all of the code that we add in later steps.

Next, add the following snippet inside the “describe” block of the specification file:

beforeEach(function() {
  testHelper.init('build/my-example-sync-function.js');
});

This block ensures that the test-helper module is re-initialized at the start of each (i.e., before each) test case with the contents of the generated sync function.

Below the “beforeEach” block and still inside the “describe” block, add the following test case:

it('should consider the document valid when all constraints are met', function() {
  var doc = {
    _id: 'my-document-id',
    type: 'exampleDoc',
    foo: 'bar'
  };

  testHelper.verifyDocumentCreated(doc, [ 'write-' + doc._id ]);
});

Now we’re getting somewhere. Here we’ve defined the document that we’d like to test and we’re asserting that the document can be created because it meets the criteria specified by the document definition. The second parameter of the “verifyDocumentCreated” function expects a complete list of the document channels that are accepted for the write operation, which allows you to verify that the document definition’s channel assignment logic is correct.

How about a document that is invalid? Add another test case:

it('should consider a value of foo that is not three letters invalid', function() {
  var doc = {
    _id: 'my-document-id',
    type: 'exampleDoc',
    foo: 'invalid'
  };

  testHelper.verifyDocumentNotCreated(
    doc,
    doc.type,
    [ errorFormatter.regexPatternItemViolation('foo', /^[a-z]{3}$/) ],
    [ 'write-' + doc._id ]);
});

Since the document’s “foo” property does not match the regular expression that was specified in the document definition, we expect that this document will be rejected. Some notes on the arguments to the “verifyDocumentNotCreated” function:

  1. This is the document under test.
  2. This is the expected document type name.
  3. A complete list of all errors that are expected due to the failure. Note that the “errorFormatter” exposes formatter functions for all supported error types.
  4. A complete list of the expected document channels that are accepted for the write operation. As in the previous test case, this helps to verify that correct channels are assigned to the document during the operation.

Now that there are some test cases, you can run the test suite by executing the following from the project root:

npm test

You’ll find that both test cases ran and passed (indicated by a green check mark next to each)! If ever a test case fails, mocha (the test runner tool) will generate a detailed error message that should help you to figure out where to find the problem.

So, what’s next? There is plenty more that the test-helper module can do to help you write your specifications. Your next stop should be the test-helper module’s documentation to learn what other options are available; notably, you’ll find that you can also verify your sync function’s behaviour when a document is replaced or deleted (handy if your documents or their properties are meant to be immutable). The validation-error-message-formatter’s documentation should also be a big help in verifying errors that are returned when a document revision is rejected. And finally, you’ll find the complete source code for these examples on GitHub.

Happy testing!

The post Testing your Sync Gateway functions with synctos appeared first on The Couchbase Blog.

Categories: Architecture, Database

Using the optimizer_index_cost_adj Parameter in Oracle

Database Journal News - Mon, 04/24/2017 - 08:01

When all else fails it may be necessary to tell Oracle that index access is the way to go.  Read on to see what parameter is used, how to set it, and what surprises you may find after you do.

Categories: Database

Docker and Vaadin Meet Couchbase – Part 2

NorthScale Blog - Fri, 04/21/2017 - 12:52

Ratnopam Chakrabarti is a software developer currently working for Ericsson Inc. He has been focused on IoT, machine-to-machine technologies, connected cars, and smart city domains for quite a while. He loves learning new technologies and putting them to work. When he’s not working, he enjoys spending time with his 3-year-old son.

Introduction

Welcome to the part two of the series where I describe how to develop and run a Couchbase powered, fully functional Spring Boot web application using the Docker toolset. In part one of the series, I demonstrated how to run two Docker containers to run a functional application with a presentable UI. The two Docker containers that we were running are:

  1. A Couchbase container with preconfigured settings
  2. An application container talking to the Couchbase container (Run in step 1)

While this method is useful, it’s not fully automated – meaning the automated orchestration is not there. You have to run two different Docker run commands to run the entire setup.

Is there a way to build and run the application container which also triggers running of the Couchbase container? Of course there’s a way.

Enter Docker Compose

Using Docker Compose, you can orchestrate the running of multi-container environments, which is exactly what we need for our use case. We need to run the Couchbase container first, and then the application container should run and talk to the Couchbase container.

Here’s the docker-compose.yml file to achieve this:

version: "2"

services:

  app:

    build: .

    ports:

      - 8080:8080

    environment:

      - BUCKET_NAME=books

      - HOST=192.168.99.100

    depends_on:

      - db

  db:

    image: chakrar27/couchbase:books

    ports:

      - 8091:8091

      - 8092:8092

      - 8093:8093

      - 8094:8094

      - 11210:11210

Our app “depends_on” the db image, which is the Couchbase container. In other words, the Couchbase container starts first and then the app container starts running. There’s one potential issue here: the “depends_on” keyword doesn’t guarantee that the Couchbase container has finished its configuration and is running. All it ensures is that the container is started first; it doesn’t check whether the container is actually running or ready to accept requests from an application. In order to ensure that the Couchbase container is actually running, and that all the pre-configuration steps, such as setting up the query and index services and the bucket, are completed, we need to do a check from the application container.

Here’s the Dockerfile of the app container, which invokes a script that, in turn, checks whether the bucket “books” has been set up already or not. It loops until the bucket is set up and only then starts the application.

https://github.com/ratchakr/bookstoreapp/blob/master/Dockerfile-v1
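
Judging from the docker-compose build output shown later in this post (Steps 1/8 through 8/8), that Dockerfile looks roughly like the following reconstruction:

FROM frolvlad/alpine-oraclejdk8:full
VOLUME /tmp
ADD target/bookstore-1.0.0-SNAPSHOT.jar app.jar
RUN sh -c 'touch /app.jar'
RUN apk update && apk add curl
ADD run_app.sh .
RUN chmod +x run_app.sh
CMD sh run_app.sh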

The script can be seen at https://github.com/ratchakr/bookstoreapp/blob/master/run_app.sh

The script does the following things:

  • It uses the REST endpoint supported by Couchbase for querying the bucket.
  • cURL is used to call the REST endpoint; installation of curl is covered in the Dockerfile of the application.
  • The script parses the JSON response of the REST call using a tool called jq.
  • If the bucket is set up, it runs the app; otherwise it waits for the bucket to be set up first.
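
Pieced together from that description, a rough sketch of such a check might look like this (hypothetical; the real run_app.sh is in the repository linked above, and the Administrator/password credentials are an assumption):

#!/bin/sh
# HOST and BUCKET_NAME come from the environment section of docker-compose.yml
until curl -s -u Administrator:password "http://$HOST:8091/pools/default/buckets/$BUCKET_NAME" \
    | jq -e ".name == \"$BUCKET_NAME\"" > /dev/null 2>&1; do
  echo "Waiting for bucket $BUCKET_NAME to be ready..."
  sleep 3
done

echo "Run application container now"
java -jar /app.jar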

It’s worth mentioning that more checks, such as verifying if the index service and the query service are set up properly or not, can be added in the shell script to make it more robust. One word of caution is to confirm your particular use case and requirement before following the docker-compose approach; there’s not a sure-fire way to determine if the Couchbase db container is fully up and running and ready to serve requests from the client application. Some of the approaches that might work are as follows:

  1. If you have a preconfigured bucket, you can test if the bucket exists
  2. Check if the indexes are in place
  3. If you know the record count in a bucket (say, for a .csv file that was imported into the bucket at the time of initial data load), you can check whether the count matches the number of records in the .csv file.

For our use case, the first approach works nicely.

Build and Run

Now that we have our docker-compose file and Dockerfile, we can build the application image by using the simple docker-compose up command.

Here’s the output snippet from the Docker console:

$ docker-compose up

Creating network "bookstoreapp_default" with the default driver

Pulling db (chakrar27/couchbase:books)...

books: Pulling from chakrar27/couchbase

Digest: sha256:4bc356a1f2b5b3d7ee3daf10cd5c55480ab831a0a147b07f5b14bea3de909fd9

Status: Downloaded newer image for chakrar27/couchbase:books

Building app

Step 1/8 : FROM frolvlad/alpine-oraclejdk8:full

full: Pulling from frolvlad/alpine-oraclejdk8

Digest: sha256:a344745faa77a9aa5229f26bc4f5c596d13bcfc8fcac051a701b104a469aff1f

Status: Downloaded newer image for frolvlad/alpine-oraclejdk8:full

---> 5f7037acb78d

Step 2/8 : VOLUME /tmp

---> Running in 7d18e0b90bfd

---> 6a43ccb712dc

Removing intermediate container 7d18e0b90bfd

Step 3/8 : ADD target/bookstore-1.0.0-SNAPSHOT.jar app.jar

---> a3b4bf7745e0

Removing intermediate container 0404f1d094d3

Step 4/8 : RUN sh -c 'touch /app.jar'

---> Running in 64d1c82a0694

---> 1ec5a68cafa9

Removing intermediate container 64d1c82a0694

Step 5/8 : RUN apk update && apk add curl

---> Running in 1f912e8341bd

fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz

fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz

v3.5.2-16-g53ad101cf8 [http://dl-cdn.alpinelinux.org/alpine/v3.5/main]

v3.5.2-14-gd7ba0e189f [http://dl-cdn.alpinelinux.org/alpine/v3.5/community]

OK: 7961 distinct packages available

(1/4) Installing ca-certificates (20161130-r1)

(2/4) Installing libssh2 (1.7.0-r2)

(3/4) Installing libcurl (7.52.1-r2)

(4/4) Installing curl (7.52.1-r2)

Executing busybox-1.25.1-r0.trigger

Executing ca-certificates-20161130-r1.trigger

Executing glibc-bin-2.25-r0.trigger

OK: 12 MiB in 18 packages

---> 8f99863af926

Removing intermediate container 1f912e8341bd

Step 6/8 : ADD run_app.sh .

---> cedb8d545070

Removing intermediate container 8af5ac3ab0a0

Step 7/8 : RUN chmod +x run_app.sh

---> Running in 74a141de2f52

---> 77ffd7425bea

Removing intermediate container 74a141de2f52

Step 8/8 : CMD sh run_app.sh

---> Running in 6f81c8ebaa37

---> 56a3659005ef

Removing intermediate container 6f81c8ebaa37

Successfully built 56a3659005ef

Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.

Creating bookstoreapp_db_1

Creating bookstoreapp_app_1

Attaching to bookstoreapp_db_1, bookstoreapp_app_1

db_1   | docker host ip =  192.168.99.100

db_1   | sleeping...

app_1  | Starting application run script...........

app_1  | couchbase is running on 192.168.99.100

app_1  | bucket to check is books

db_1   | < Date: Fri, 24 Mar 2017 06:53:00 GMT

db_1   | < Content-Length: 0

db_1   | < Cache-Control: no-cache

db_1   | <

100    55    0     0  100    55      0    827 --:--:-- --:--:-- --:--:--   833

db_1   | * Connection #0 to host 127.0.0.1 left intact

db_1   | bucket set up done

app_1  | response from cb

app_1  | ************************************************

app_1  | ************************************************

app_1  | response from cb books

app_1  | ************************************************

app_1  | ************************************************

app_1  | bucket is now ready bucket name books

app_1  | Run application container now

app_1  | ************************************************

app_1  |

app_1  |   .   ____          _            __ _ _

app_1  |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

app_1  | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

app_1  |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )

app_1  |   '  |____| .__|_| |_|_| |_\__, | / / / /

app_1  |  =========|_|==============|___/=/_/_/_/

app_1  |  :: Spring Boot ::        (v1.4.2.RELEASE)

app_1  |

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=06bad9c4-85fc-4c0b-83a7-ad21b2fdd405, title=The Immortal Irishman, author=Timothy Egan, isbn=ISBN444, category=History]

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=328eaf44-edff-43c6-9f55-62d7e095256d, title=The Kite Runner, author=Khaled Hosseini, isbn=ISBN663, category=Fiction]

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=56882f5a-d466-457f-82c1-1c3bca0c6d75, title=Breaking Blue, author=Timothy Egan, isbn=ISBN777, category=Thriller]

app_1  | 2017-03-24 06:53:59.839  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=845a2fe8-cbbf-4780-b216-41abf86d7d61, title=History of Mankind, author=Gabriel Garcia, isbn=ISBN123, category=History]

app_1  | 2017-03-24 06:53:59.840  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=9d2833c3-e005-4c4f-98f9-75b69bbb7bf5, title=The Night Gardener, author=Eric Fan, isbn=ISBN333, category=Kids Books]

app_1  | 2017-03-24 06:53:59.840  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=5756bf4d-551c-429e-8bc3-2339dc065ff8, title=Grit: The Power of Passion and Perseverance, author=Angela Duckworth, isbn=ISBN555, category=Business]

app_1  | 2017-03-24 06:53:59.840  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Details = Book [id=e8e34f30-6fdf-4ca7-9cef-e06f504f8778, title=War and Turpentine, author=Stefan Hertmans, isbn=ISBN222, category=Fiction]

app_1  | 2017-03-24 06:54:00.234  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Books by Timothy Egan = Book [id=06bad9c4-85fc-4c0b-83a7-ad21b2fdd405, title=The Immortal Irishman, author=Timothy Egan, isbn=ISBN444, category=History]

app_1  | 2017-03-24 06:54:00.238  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Books by Timothy Egan = Book [id=56882f5a-d466-457f-82c1-1c3bca0c6d75, title=Breaking Blue, author=Timothy Egan, isbn=ISBN777, category=Thriller]

app_1  | 2017-03-24 06:54:00.346  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Starting with title 'The' = Book [id=06bad9c4-85fc-4c0b-83a7-ad21b2fdd405, title=The Immortal Irishman, author=Timothy Egan, isbn=ISBN444, category=History]

app_1  | 2017-03-24 06:54:00.349  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Starting with title 'The' = Book [id=328eaf44-edff-43c6-9f55-62d7e095256d, title=The Kite Runner, author=Khaled Hosseini, isbn=ISBN663, category=Fiction]

app_1  | 2017-03-24 06:54:00.349  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book Starting with title 'The' = Book [id=9d2833c3-e005-4c4f-98f9-75b69bbb7bf5, title=The Night Gardener, author=Eric Fan, isbn=ISBN333, category=Kids Books]

app_1  | 2017-03-24 06:54:00.443  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book in Fiction = Book [id=328eaf44-edff-43c6-9f55-62d7e095256d, title=The Kite Runner, author=Khaled Hosseini, isbn=ISBN663, category=Fiction]

app_1  | 2017-03-24 06:54:00.453  INFO 31 --- [           main] c.chakrar.sample.books.BookStoreRunner   : Book in Fiction = Book [id=e8e34f30-6fdf-4ca7-9cef-e06f504f8778, title=War and Turpentine, author=Stefan Hertmans, isbn=ISBN222, category=Fiction]

app_1  | 2017-03-24 06:54:02.745  INFO 31 --- [nio-8080-exec-1] o.v.spring.servlet.Vaadin4SpringServlet  : Could not find a SystemMessagesProvider in the application context, using default

app_1  | 2017-03-24 06:54:02.753  INFO 31 --- [nio-8080-exec-1] o.v.spring.servlet.Vaadin4SpringServlet  : Custom Vaadin4Spring servlet initialization completed

app_1  | 2017-03-24 06:54:02.864  INFO 31 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring FrameworkServlet 'dispatcherServlet'

app_1  | 2017-03-24 06:54:02.865  INFO 31 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization started

At this point our application is up and running with a single docker-compose orchestration command.

Type 192.168.99.100:8080 into the browser; you should see the following screen:


Docker Compose is a nice way to orchestrate multi-container Docker environments. Its command set closely mirrors the regular “docker” commands. For instance, to see a list of running containers, you simply type:

docker-compose ps

which would give you:

$ docker-compose ps

Name                     Command               State                                                                                Ports

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

bookstoreapp_app_1   /bin/sh -c sh run_app.sh         Up      0.0.0.0:8080->8080/tcp

bookstoreapp_db_1    /entrypoint.sh /opt/couchb ...   Up      11207/tcp, 0.0.0.0:11210->11210/tcp, 11211/tcp, 18091/tcp, 18092/tcp, 18093/tcp, 0.0.0.0:8091->8091/tcp, 0.0.0.0:8092->8092/tcp, 0.0.0.0:8093->8093/tcp, 0.0.0.0:8094->8094/tcp

The names of the containers, bookstoreapp_app_1 and bookstoreapp_db_1, are shown in the first column.

If you need to stop or tear down your orchestrated environment with Docker Compose, you can do that with the docker-compose down command as shown below:

A sample run produces:

$ docker-compose down

Stopping bookstoreapp_app_1 ... done

Stopping bookstoreapp_db_1 ... done

Removing bookstoreapp_app_1 ... done

Removing bookstoreapp_db_1 ... done

Removing network bookstoreapp_default

Now, if you do a docker-compose ps, it shows that no container is currently running.

$ docker-compose ps

Name   Command   State   Ports

---------------------------------------------------------------

You can also use Docker Compose for an automated test environment: fire up your containers, run the tests, then tear down the complete infrastructure, all with Compose. For a detailed overview of Docker Compose, please visit the official website.

This post is part of the Couchbase Community Writing Program

The post Docker and Vaadin Meet Couchbase – Part 2 appeared first on The Couchbase Blog.

Categories: Architecture, Database

Getting Started with Azure SQL Data Warehouse - Part 4

Database Journal News - Thu, 04/20/2017 - 17:52

Azure SQL Data Warehouse is a new enterprise-class, elastic petabyte-scale, data warehouse service. Join Arshad Ali as he discusses round-robin and distributed tables, and how to create them. He also discusses how partitioning works in SQL Data Warehouse and looks at the impact of choosing the right distribution key. As a bonus Arshad shows you how to leverage PolyBase to quickly and easily import or export data from SQL Data Warehouse.

Categories: Database

Graph Data Processing with SQL Server 2017

SQL Server is trusted by many customers for enterprise-grade, mission-critical workloads that store and process large volumes of data. Technologies like in-memory OLTP and columnstore have also helped our customers to improve application performance many times over. But when it comes to hierarchical data with complex relationships or data that share multiple relationships, users might find themselves struggling with a good schema design to represent all the entities and relationships, and writing optimal queries to analyze complex data and relationships between the tables. SQL Server uses foreign keys and joins to handle relationships between entities or tables. Foreign keys only represent one-to-many relationships and hence, to model many-to-many relationships, a common approach is to introduce a table that holds such relationships. For example, Student and Course in a school share a many-to-many relationship; a Student takes multiple Courses and a Course is taken by multiple Students. To represent this kind of relationship one can create an “Attends” table to hold information about all the Courses a Student is taking. The “Attends” table can then store some extra information like the dates when a given Student took this Course, etc.
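As a rough illustration of the junction-table approach (the table and column definitions here are invented for this sketch, not taken from the article):

CREATE TABLE Student (StudentId INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Course  (CourseId  INT PRIMARY KEY, Title NVARCHAR(100));

-- Junction table holding the many-to-many relationship, plus extra attributes.
CREATE TABLE Attends
(
    StudentId INT NOT NULL REFERENCES Student(StudentId),
    CourseId  INT NOT NULL REFERENCES Course(CourseId),
    StartDate DATE,
    PRIMARY KEY (StudentId, CourseId)
);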

Over time, applications tend to evolve and get more complex. For example, a Student can start “Volunteering” in a Course or start “Mentoring” other Students, adding new types of relationships to the database. With the junction-table approach, it is not always easy to modify existing tables to accommodate evolving relationships. Analyzing data connected by foreign keys or multiple junction tables means writing queries with joins across many tables, and this is no trivial task. The queries can quickly get complex, resulting in convoluted execution plans and degraded query performance over time.

We live in an era of big data and connected information; people, machines, devices, businesses across the continents are connected to each other more than ever before. Analyzing connected information is becoming critical for businesses to achieve operational agility. Users are finding it easier to model data and complex relationships with the help of graph databases. Native graph databases have risen in popularity, being used for social networks, transportation networks, logistics, and much more. Graph database scenarios can easily be found across several business disciplines, including supply chain management, computer or telecommunication networks, detecting fraud attacks, and recommendation engines.

At Microsoft, we believe that there should be no need for our customers to turn to a new system just to meet their new or evolving graph database requirements. SQL Server is already trusted by millions of customers for mission-critical workloads, and with graph extensions in SQL Server 2017, customers get the best of both relational and graph databases in a single product, including the ability to query across all data using a single platform. Users can also benefit from other cutting-edge technologies already available in SQL Server, such as columnstore indexes, advanced analytics using SQL Server R Services, high availability, and more.

Graph extensions available in SQL Server 2017

A graph schema or database in SQL Server is a collection of node and edge tables. A node represents an entity—for example, a person or an organization—and an edge represents a relationship between the two nodes it connects. Figure 1 shows the architecture of a graph database in SQL Server.


Figure 1: SQL graph database architecture

Create graph objects

With the help of T-SQL extensions to DDL, users can create node or edge tables. Both nodes and edges can have properties associated with them. Users can model many-to-many relationships using edge tables, and a single edge type can connect multiple types of nodes with each other, in contrast to foreign keys in relational tables. Figure 2 shows how node and edge tables are stored internally in the database. Since nodes and edges are stored as tables, most of the operations supported on regular tables are available on node and edge tables, too.


Figure 2: Person Node and Friends Edge table.

The CREATE TABLE syntax guide shows the supported syntax for creation of node and edge tables.
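As a minimal sketch of that DDL, modeled on the Person and Friends tables from Figure 2 (the column definitions are illustrative):

CREATE TABLE Person
(
    ID   INTEGER PRIMARY KEY,
    Name NVARCHAR(100)
) AS NODE;

-- An edge table can carry its own properties, or none at all.
CREATE TABLE Friends
(
    StartDate DATE
) AS EDGE;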

Query language extensions

To help search a pattern or traverse through the graph, a new MATCH clause is introduced that uses ASCII-art syntax for pattern matching and navigation. For example, consider the Person and Friends node tables shown in Figure 2; the following query will return friends of “John”:

SELECT Person2.Name
FROM Person Person1, Friends, Person Person2
WHERE MATCH(Person1-(Friends)->Person2)
AND Person1.Name = 'John';

The MATCH clause takes a search pattern as input. The pattern traverses the graph from one node to another via an edge; edge names appear inside the parentheses and node names appear at the ends of the arrow. Please refer to the MATCH syntax guide for more ways in which MATCH can be used.

Fully integrated in SQL Server engine

Graph extensions are fully integrated into the SQL Server engine. Node and edge tables are just new kinds of tables in the database; the same storage engine, metadata, and query processor are used to store and query graph data. All security and compliance features are also supported, and other cutting-edge technologies like columnstore indexes, machine learning with R Services, and high availability can be combined with graph capabilities. Since graphs are fully integrated into the engine, users can query across their relational and graph data in a single system.

Tooling and ecosystem

Users benefit from the existing tools and ecosystem that SQL Server offers. Tools like backup and restore, import and export, BCP, and SSMS “just work” out of the box.

FAQs

How can I ingest unstructured data?

Since we are storing data in tables, users must know the schema at the time of creation. Users can always add new types of nodes or edges to their schema. But if they want to modify an existing node or edge table, they can use ALTER TABLE to add or delete attributes. If you expect any unknown attributes in your schema, you could either use sparse columns or create a column to hold JSON strings and use that as a placeholder for unknown attributes.
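For instance, a node table could carry a JSON catch-all column for attributes that are not known up front (a hypothetical sketch, not syntax from the article):

CREATE TABLE Device
(
    ID         INTEGER PRIMARY KEY,
    Name       NVARCHAR(100),
    ExtraProps NVARCHAR(MAX)  -- JSON string holding attributes unknown at design time
) AS NODE;

-- Later, pull ad hoc attributes out of the JSON placeholder:
SELECT Name, JSON_VALUE(ExtraProps, '$.firmwareVersion') AS FirmwareVersion
FROM Device;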

Do you maintain an adjacency list for faster lookups?

No. We are not maintaining an adjacency list on every node; instead, we are storing edge data in tables. Because SQL Server is a relational database, storing data in the form of tables was the more natural choice for us. In a native directed graph database built on adjacency lists, you can only traverse an edge in one direction; to traverse in the reverse direction, you must maintain an adjacency list at the remote node too. Also, with adjacency lists, a large query that spans your graph essentially always performs a nested loop lookup: for every node, find all the edges, from there find all the connected nodes and edges, and so on.

Storing edge data in a separate table allows us to benefit from the query optimizer, which can pick the optimal join strategy for large queries. Depending on the complexity of query and data statistics, the optimizer can pick a nested loop join, hash join, or other join strategies — as opposed to always using nested loop join, as in the case of an adjacency list. Each edge table has two implicit columns, $from_id and $to_id, which store information about the nodes that it connects. For OLTP scenarios, we recommend that users create indexes on these columns ($from_id, $to_id) for faster lookups in the direction of the edge. If your application needs to perform traversals in reverse direction of an edge, you can create an index on ($to_id, $from_id).
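For example, assuming the Friends edge table from Figure 2, the recommended indexes might look like this sketch:

-- Speeds up traversals in the direction of the edge.
CREATE INDEX ix_friends_forward ON Friends ($from_id, $to_id);

-- Add this one if your application also traverses edges in reverse.
CREATE INDEX ix_friends_reverse ON Friends ($to_id, $from_id);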

Is the new MATCH syntax supported on relational tables?

No. The MATCH clause works only on graph node and edge tables.

Can I alter an existing table into a node or edge table?

No. In the first release, ALTER TABLE to convert an existing relational table into a node or edge table is not supported. Users can create a node table and use INSERT INTO … SELECT FROM to populate data into the node table. To populate an edge table from an existing table, proper $from_id and $to_id values must be obtained from the node tables.
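A hedged sketch of that pattern, again using the Figure 2 tables (OldPerson is a hypothetical existing relational table):

-- Copy rows from an existing relational table into the node table.
INSERT INTO Person (ID, Name)
SELECT ID, Name FROM OldPerson;

-- Populate the edge table by looking up $node_id values from the node table.
INSERT INTO Friends ($from_id, $to_id)
SELECT p1.$node_id, p2.$node_id
FROM Person p1, Person p2
WHERE p1.Name = 'John' AND p2.Name = 'Mary';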

What are some table operations that are not supported on node or edge tables?

In the first release, node or edge tables cannot be created as memory-optimized, system-versioned, or temporary tables. Stretching a node or edge table, or creating one as an external table (PolyBase), is also not supported in this release.

How do I find a node connected to me, an arbitrary number of hops away, in my graph?

The ability to recurse through a combination of nodes and edges an arbitrary number of times is called transitive closure; for example, finding all the people connected to me through three levels of indirection, or finding the reporting chain for a given employee in an organization. Transitive closure is not supported in the first release. A recursive CTE or a T-SQL loop may be used to work around these types of queries, as sketched below.
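A rough, hypothetical sketch of the recursive CTE workaround, bounded to three hops over the Figure 2 tables (the pseudo-column joins are illustrative and may need adjusting; this is not syntax from the article):

WITH Connected (PersonId, Name, Hops) AS
(
    SELECT p.$node_id, p.Name, 0
    FROM Person p
    WHERE p.Name = 'John'
    UNION ALL
    SELECT p2.$node_id, p2.Name, c.Hops + 1
    FROM Connected c
    JOIN Friends f ON f.$from_id = c.PersonId
    JOIN Person p2 ON p2.$node_id = f.$to_id
    WHERE c.Hops < 3  -- the hop limit also keeps cycles from recursing forever
)
SELECT DISTINCT Name FROM Connected WHERE Hops > 0;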

How do I find ANY node connected to me in my graph?

The ability to find any type of node connected to a given node in a graph is called polymorphism. SQL graph does not support polymorphism in the first release. A possible workaround is to write queries with a UNION clause over a known set of node and edge types, though this is practical only for a small number of such types; see the sketch below.
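A hypothetical sketch over two known relationship types (the Likes and Restaurant tables are invented for illustration):

SELECT p2.Name AS ConnectedTo
FROM Person p1, Friends f, Person p2
WHERE MATCH(p1-(f)->p2) AND p1.Name = 'John'
UNION ALL
SELECT r.Name
FROM Person p1, Likes l, Restaurant r
WHERE MATCH(p1-(l)->r) AND p1.Name = 'John';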

Are there special graph analytics functions introduced?

Some graph databases provide dedicated graph analytical functions like “shortest path” or “page rank.” SQL Graph does not provide any such functions in this release. Again, T-SQL loops and temp tables may be used to write a workaround for these scenarios.

Thank you for reading this post! We are excited to announce the first version of graph extensions to SQL Server. To learn more, see this article on Graph processing with SQL Server 2017. Stay tuned for more blog posts and updates on SQL graph database!

Try SQL Server 2017

Get started with the preview of SQL Server 2017 on macOS, Docker, Windows, and Linux using these links:

Categories: Database

Resumable Online Index Rebuild is in public preview for SQL Server 2017 CTP 2.0

We are delighted to announce that Resumable Online Index Rebuild is now available for public preview in the SQL Server 2017 CTP 2.0 release. With this feature, you can resume a paused index rebuild operation from the point where it was paused rather than having to restart the operation at the beginning. In addition, this feature rebuilds indexes using only a small amount of log space. You can use the new feature in the following scenarios (a syntax sketch follows the list):

  • Resume an index rebuild operation after an index rebuild failure, such as after a database failover or after running out of disk space. There is no need to restart the operation from the beginning. This can save a significant amount of time when rebuilding indexes for large tables.
  • Pause an ongoing index rebuild operation and resume it later. For example, you may need to temporarily free up system resources to execute a high priority task or you may have a single maintenance window that is too short to complete the operation for a large index. Instead of aborting the index rebuild process, you can pause the index rebuild operation and resume it later without losing prior progress.
  • Rebuild large indexes without using a lot of log space or holding open a long-running transaction that blocks other maintenance activities. This helps log truncation and avoids out-of-log errors that are possible with long-running index rebuild operations.
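A minimal sketch of the new options (the index and table names are invented here):

-- Start an online rebuild that can be paused and resumed; pause automatically after 60 minutes.
ALTER INDEX ix_orders_date ON dbo.Orders
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the operation to free up resources, then pick it up again later.
ALTER INDEX ix_orders_date ON dbo.Orders PAUSE;
ALTER INDEX ix_orders_date ON dbo.Orders RESUME;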

Read the articles: The following articles provide detailed and updated information about this feature:

Public preview information: For public preview communication on this topic, please contact the ResumableIDXPreview@microsoft.com alias.

To try SQL Server 2017: Get started with the preview of SQL Server 2017 on macOS, Docker, Windows, and Linux.

Categories: Database

NDP Episode #17: Marten for .NET Developers

NorthScale Blog - Thu, 04/20/2017 - 16:07

I am pleased to announce that the latest episode of The NoSQL Database Podcast has been published to all the popular podcasting networks.  In this episode I’m joined by Jeremy Miller and Matt Groves where we talk about Marten and where it fits into the .NET development spectrum.

Jeremy Miller is the author of Marten, a wrapper that turns PostgreSQL into a document-style NoSQL database. Since I don’t know a thing about .NET, I have my co-host Matt Groves on the show to help me out.

This episode, titled Marten for .NET Developers, can be found on all the major podcast networks, including, but not limited to, iTunes and Pocket Casts. If you’d like to listen to it outside of an app, it can be heard below.

http://traffic.libsyn.com/nosql/TNDP_-_Episode_17_-_Marten_for_DotNet_Developers.mp3

If you have any questions for anyone on the show, feel free to drop them a message on Twitter.  If you’re interested in learning more about Marten, check out the official website.

If you’re interested in learning about Couchbase as a NoSQL solution, check out the Couchbase Developer Portal for more information on using it with .NET.

The post NDP Episode #17: Marten for .NET Developers appeared first on The Couchbase Blog.

Categories: Architecture, Database

Python in SQL Server 2017: enhanced in-database machine learning

We are excited to share the preview release of in-database analytics and machine learning with Python in SQL Server. Python is one of the most popular languages for data science and has a rich ecosystem of powerful libraries.

Starting with the CTP 2.0 release of SQL Server 2017, you can now bring Python-based intelligence to your data in SQL Server.

The addition of Python builds on the foundation laid for R Services in SQL Server 2016 and extends that mechanism to include Python support for in-database analytics and machine learning. We are renaming R Services to Machine Learning Services, and R and Python are two options under this feature.

The Python integration in SQL Server provides several advantages:

  • Elimination of data movement: You no longer need to move data from the database to your Python application or model. Instead, you can build Python applications in the database. This eliminates barriers of security, compliance, governance, integrity, and a host of similar issues related to moving vast amounts of data around. This new capability brings Python to the data and runs code inside secure SQL Server using the proven extensibility mechanism built in SQL Server 2016.
  • Easy deployment: Once you have the Python model ready, deploying it in production is as easy as embedding it in a T-SQL script; any SQL client application can then take advantage of Python-based models and intelligence with a simple stored procedure call (see the sketch after this list).
  • Enterprise-grade performance and scale: You can use SQL Server’s advanced capabilities like in-memory tables and columnstore indexes together with the high-performance, scalable APIs in the RevoScalePy package, which is modeled after the RevoScaleR package in SQL Server R Services. Combining these with the latest innovations in the open source Python world allows you to bring unparalleled selection, performance, and scale to your SQL Python applications.
  • Rich extensibility: You can install and run any of the latest open source Python packages in SQL Server to build deep learning and AI applications on huge amounts of data in SQL Server. Installing a Python package in SQL Server is as simple as installing a Python package on your local machine.
  • Wide availability at no additional costs: Python integration is available in all editions of SQL Server 2017, including the Express edition.
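As a rough illustration of the deployment model described above, a T-SQL call with embedded Python might look like this sketch (sp_execute_external_script is the extensibility entry point introduced for R Services; the input query and table name are invented):

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
# InputDataSet arrives as a pandas DataFrame; whatever is assigned
# to OutputDataSet is returned to the SQL client as a result set.
OutputDataSet = InputDataSet.head(10)
',
    @input_data_1 = N'SELECT CustomerId, TotalSpend FROM dbo.CustomerStats';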

Data scientists, application developers, and database administrators can all benefit from this new capability.

  • Data scientists can build models using the full datasets on the SQL Server instead of moving data to your IDE or being forced to work with samples of data. Working from your Python IDE, you can execute Python code that runs in SQL Server on the data in SQL Server and get the results in your IDE. You are no longer dependent on application developers to deploy your models for production use, which often involves translating models and scripts to a different application language. These models can be deployed to production easily by embedding them in T-SQL stored procedures. You can use any open source Python package for machine learning in SQL Server. The usage pattern is identical to the now popular SQL Server R Services.
  • Application developers can take advantage of Python-based models by simply making a stored procedure call that has Python script embedded in it. You don’t need a deep understanding of the inner workings of the Python models, or have to translate it to a line of business language in close coordination with data scientists to consume it. You can even leverage both R and Python models in the same application—they are both stored procedure calls.
  • Database administrators can enable Python-based applications and set up policies to govern how Python runtime behaves on SQL Server. You can manage, secure, and govern the Python runtime to control how the critical system resources on the database machine are used. Security is ensured by mechanisms like process isolation, limited system privileges for Python jobs, and firewall rules for network access.

The standard open source CPython interpreter (version 3.5) and some Python packages commonly used for data science are downloaded and installed during SQL Server setup if you choose the Python option in the feature tree.

Currently, a subset of packages from the popular Anaconda distribution is included along with Microsoft’s RevoScalePy package. The set of packages available for download will evolve as we move toward general availability of this feature. Users can easily install any additional open source Python package, including the modern deep learning packages like Cognitive Toolkit and TensorFlow to run in SQL Server. Taking advantage of these packages, you can build and deploy GPU-powered deep learning database applications.

Currently, Python support is in “preview” state for SQL Server 2017 on Windows only.

We are very excited about the possibilities this integration opens up for building intelligent database applications. Please watch the Python-based machine learning in SQL Server presentation and the Joseph Sirosh keynote from the Microsoft Data Amp 2017 event for demos and additional information. We encourage you to install SQL Server 2017 and to share your feedback with us as we work toward general availability of this technology.

Thank you!

Sumit Kumar, Senior Program Manager, SQL Server Machine Learning Services

Nagesh Pabbisetty, Director of Program Management, Microsoft R Server and Machine Learning

Categories: Database