
Methods & Tools


Database

Progressive Insurance data performance grows by a factor of four, fueling business growth and online experience

At the Accelerate your Insights event last week, Quentin Clark described how SQL Server 2014 is now part of a platform with built-in in-memory technology across all data workloads. In particular, this release adds In-Memory Online Transaction Processing, delivering breakthrough throughput and latency for applications.

One of the early adopters of this technology is Progressive Insurance, a company that has long made customer service a competitive strength. Central to the customer service experience is the company’s policy-serving web app. As it updated the app, Progressive planned to add its Special Lines business, which insures motorcycles, recreational vehicles, boats, and even Segway electric scooters. However, Progressive needed to know that the additional workloads wouldn’t put a damper on the customer experience.

Progressive was interested in the In-Memory OLTP capability, which hosts online transaction processing (OLTP) tables and databases in a server’s working memory. The company tested In-Memory OLTP even before SQL Server 2014 became commercially available. Modifying the policy-serving app for the test was relatively straightforward, according to Craig Lanford, IT Manager at Progressive.

The company converted eight stored procedures to natively compiled procedures, using already-documented code. In those tests, In-Memory OLTP boosted the processing rate from 5,000 transactions per second to 21,000, a 320 percent increase.
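For readers curious what that kind of modification looks like, here is a minimal, hypothetical sketch of SQL Server 2014’s In-Memory OLTP DDL, driven from Python via pyodbc. The table, procedure, server, and database names are invented for illustration and are not Progressive’s actual code; the target database must already have a MEMORY_OPTIMIZED_DATA filegroup.

```python
import pyodbc

# Hypothetical connection details; adjust driver/server/database to taste.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 11 for SQL Server};SERVER=myserver;"
    "DATABASE=SessionDb;Trusted_Connection=yes",
    autocommit=True)
cur = conn.cursor()

# A memory-optimized table for session state. SQL Server 2014 requires a
# BIN2 collation on string index keys, a NONCLUSTERED HASH primary key,
# and rows that fit in 8,060 bytes (no LOB columns in this release).
cur.execute("""
CREATE TABLE dbo.SessionState (
    SessionId NVARCHAR(64) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   VARBINARY(2000) NOT NULL,
    ExpiresAt DATETIME2       NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
""")

# A natively compiled stored procedure: the T-SQL is compiled to machine
# code, which is where much of the throughput and latency win comes from.
cur.execute("""
CREATE PROCEDURE dbo.usp_GetSession @SessionId NVARCHAR(64)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                      LANGUAGE = N'us_english')
    SELECT SessionId, Payload, ExpiresAt
    FROM dbo.SessionState
    WHERE SessionId = @SessionId;
END
""")

# Calling the procedure is unchanged from the client's point of view.
row = cur.execute("EXEC dbo.usp_GetSession @SessionId = ?", "abc123").fetchone()
```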

Lanford and his colleagues were delighted that the session-state database ran four times as fast with SQL Server 2014. “Our IT leadership team gave us the numbers we had to meet to support the increased database workload, and we far exceeded those numbers using Microsoft In-Memory OLTP,” he added. The company will use the throughput gain to support the addition of its Special Lines business to its policy-serving app and session-state database. With SQL Server 2014, Progressive can run a single, larger database reliably and avoid the cost of multiple databases.

You can read more about how Progressive is using SQL Server 2014 here.


Categories: Database

Customers using Microsoft technologies to accelerate their insights

At yesterday’s Accelerate your insights event in San Francisco, we heard from CEO Satya Nadella, COO Kevin Turner and CVP Quentin Clark about how building a data culture in your company is critical to success. By combining data-driven DNA with the right analytics tools, anyone can transform data into action.

Many companies, and many of our customers, are already experiencing the power of data - taking advantage of the fastest performance for their critical apps, and revealing insights from all their data, big and small.

Since SQL Server 2014 was released to manufacturing in April, we’ve seen many stories featuring the new technical innovations in the product. In-memory transaction processing (In-Memory OLTP) speeds up an already very fast engine, typically delivering improvements of up to 30x. Korean entertainment giant CJ E&M is using In-Memory OLTP to attract more customers for its games by holding online giveaway events for digital accessories, such as character costumes and decorations, soon after each game is released. When it ran tests in an actual operational environment for one of its most popular games, SQL Server 2014 delivered 35-times-faster performance than the 2012 version in both batch requests per second and I/O throughput.

SQL Server 2014 also enhances data warehouse storage and query performance. NASDAQ OMX is using the In-Memory Columnstore for a system that handles billions of transactions per day, multiple petabytes of online data, and single tables with quintillions of records of business transactions. It has seen storage reduced by 50% and some query times cut from days to minutes.
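As a rough illustration of the feature behind numbers like NASDAQ OMX’s: SQL Server 2014 made the clustered columnstore index updatable, storing a table column-wise with heavy compression. The sketch below uses an invented dbo.Trades table and DSN; it is not NASDAQ’s system.

```python
import pyodbc

conn = pyodbc.connect("DSN=warehouse", autocommit=True)  # hypothetical DSN
cur = conn.cursor()

# Convert a large fact table to clustered columnstore storage. The index
# replaces the row store entirely; the column-wise compression is where
# storage reductions like the 50% cited above come from.
cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_Trades ON dbo.Trades")

# Aggregations now scan only the columns they touch, in batch mode.
for symbol, trades, shares in cur.execute("""
    SELECT Symbol, COUNT_BIG(*) AS Trades, SUM(Quantity) AS Shares
    FROM dbo.Trades
    GROUP BY Symbol"""):
    print(symbol, trades, shares)
```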

Lufthansa Systems is using the hybrid features of SQL Server 2014 to anticipate customer needs for high-availability and disaster-recovery solutions. Its pilot combining Microsoft SQL Server 2014 and Windows Azure has led to faster and fuller data recovery, reduced costs, and the potential for a vastly increased focus on customer service and solutions, compared with the company’s current solutions.

Growth in data volumes brings both challenges and opportunities. For executives and researchers at Oslo University Hospital, easy access to data is what matters. Using Power BI for Office 365, they can analyze data in hours rather than months, collaborate with colleagues around the country, and avoid traditional BI costs. For Virginia Tech, the data deluge challenges researchers in the life sciences, where new types of unstructured data from gene-sequencing machines generate petabytes of data. They are using the power of the cloud with Microsoft Azure HDInsight not only to analyze data faster, but to analyze it more intelligently, work that may one day contribute to cures for cancer. The Royal Bank of Scotland needed to handle multiple terabytes of data and an unprecedented level of query complexity more efficiently, which led it to the Analytics Platform System (formerly Parallel Data Warehouse). As a result, it gained near-real-time insight into customers’ business needs and emerging economic trends, cut a typical four-hour query to less than 15 seconds, and simplified deployment.

Whether you’ve already built a data culture in your organization, or if you’re new to exploring how you can turn insights into action, try the latest enhancements to these various technologies: SQL Server 2014, Power BI for Office 365, Microsoft Azure HDInsight, and the Microsoft Analytics Platform System.

Categories: Database

SQL Server 2014 and HP Set Two World Records for Data Warehousing, Leading in Both Performance and Price/Performance

Yesterday we talked about how we are delivering real-time performance to customers in every part of the platform. I’m excited to announce another example, delivered in conjunction with one of our partners. Microsoft and Hewlett-Packard broke two world records in the TPC-H 10-terabyte and 3-terabyte non-clustered benchmarks for data warehousing performance and price/performance. In each case SQL Server broke the record previously held by Oracle/SPARC on both performance and price/performance [1], by significant margins.

10TB: Running on an HP ProLiant DL580 Gen8 server with SQL Server 2014 Enterprise Edition and Windows Server 2012 R2 Standard Edition, the configuration achieved a world-record non-clustered performance of 404,005 queries per hour (QphH), topping the previous Oracle/SPARC record of 377,594 QphH [1]. The SQL Server configuration also shattered the price/performance metric at $2.34 per QphH ($/QphH), beating Oracle’s $4.65/QphH [1].

3TB: Running on an HP ProLiant DL580 Gen8 server with SQL Server 2014 Enterprise Edition and Windows Server 2012 R2 Standard Edition, the configuration achieved a world-record non-clustered performance of 461,837 queries per hour (QphH), topping the previous Oracle/SPARC record of 409,721 QphH [1]. The SQL Server configuration also shattered the price/performance metric at $2.04 per QphH ($/QphH), beating Oracle’s $3.94/QphH [1].

Breaking the world records for both performance and price/performance validates that SQL Server 2014 delivers leading in-memory performance at exceptional value. It also confirms SQL Server’s leadership in data warehousing.

The TPC Benchmark H (TPC-H) is an industry-standard decision support benchmark that consists of a suite of business-oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. The benchmark models decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions. The performance metric is the TPC-H Composite Query-per-Hour rating (QphH), and the price/performance metric is the total system price divided by that rating ($/QphH). More information can be found at http://www.tpc.org/tpch/results/tpch_perf_results.asp?resulttype=noncluster
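To make the metrics concrete, a quick back-of-the-envelope in Python: QphH is the geometric mean of the benchmark’s Power and Throughput results, and the price/performance figure is total system price divided by QphH. The Power/Throughput split below is invented for illustration; only the $/QphH arithmetic uses numbers quoted in this post.

```python
from math import sqrt

def composite_qphh(power, throughput):
    # TPC-H composite rating: QphH@Size = sqrt(QppH@Size * QthH@Size)
    return sqrt(power * throughput)

# Hypothetical Power/Throughput split that lands near the published 10TB composite:
print(composite_qphh(420_000, 388_000))   # ~403,683 QphH

# Implied total system price from the published 10TB figures:
print(404_005 * 2.34)                     # ~$945,372 at $2.34/QphH
```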

Eron Kelly,
General Manager
SQL Server

 

For more information:

 

[1] As of April 15, 2014.

SQL Server 2014 HP 10TB TPC-H Result: http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=114041502&layout=

SQL Server 2014 HP 3TB TPC-H Result: http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=114041501&layout=
Oracle 10TB TPC-H Result: http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=113112501&layout=

Oracle 3TB TPC-H Result: http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=113060701&layout=

Categories: Database

The data platform for a new era

Earlier today, Microsoft hosted a customer event in San Francisco where I joined CEO Satya Nadella and COO Kevin Turner to share our perspective on the role of data in business. Satya outlined his vision of a platform built for an era of ambient intelligence. He also stressed the importance of a “data culture” that encourages curiosity, action and experimentation – one that is supported by technology solutions that put data within reach of everyone and every organization. 

Kevin shared how customers like Beth Israel Deaconess Medical Center, Condé Nast, Edgenet, KUKA systems, NASDAQ, telent, Virginia Tech and Xerox are putting Microsoft’s platform to work and driving real business results. He highlighted an IDC study on the tremendous opportunity for organizations to realize an additional $1.6 trillion dividend over the next four years by taking a comprehensive approach to data. According to the research, businesses that pull together multiple data sources, use new types of analytics tools and push insights to more people across their organizations at the right time, stand to dramatically increase their top-line revenues, cut costs and improve productivity. 

A platform centered on people, data and analytics
In my keynote, I talked about the platform required to achieve the data culture and realize the returns on the data dividend – a platform for data, analytics and people. 

People asking questions about data are the starting point; Power BI for Office 365 and Excel’s business intelligence features help get them there. Data is key – data from all kinds of sources, including SQL Server, Azure, and the world’s data made accessible from Excel. Analytics brings order and sets up insights from broad data – analytics from SQL Server and Power BI for Office 365, and Azure HDInsight for running Hadoop in the cloud.

A platform that solves for people, data, and analytics accelerates with in-memory. We built the platform this way because customers increasingly need technology that scales with big data and accelerates their insights at the speed of modern business.

Having in-memory across the whole data platform creates speed that is revolutionary on its own, and with SQL Server we built it into the product that customers already know and have widely deployed. At the event we celebrated the launch of SQL Server 2014. With this version we now have in-memory capabilities across all data workloads delivering breakthrough performance for applications in throughput and latency. Our relational database in SQL Server has been handling data warehouse workloads in the terabytes to petabyte scale using in-memory columnar data management. With the release of SQL Server 2014, we have added in-memory Online Transaction Processing. In-memory technology has been allowing users to manipulate millions of records at the speed of thought, and scaling analytics solutions to billions of records in SQL Server Analysis Services. 

The platform for people, data and analytics needs to be where the data and the people are. Our on-premises and cloud solutions provide endpoints for a continuum that reflects the realities of how businesses manage data and experiences – making hybrid a part of every customer’s capability. Today we announced that our Analytics Platform System is generally available – this is the evolution of the Parallel Data Warehouse product, which now supports querying across the traditional relational data warehouse and data stored in a Hadoop region – either in the appliance or in a separate Hadoop cluster. SQL Server integrates seamlessly with VMs in Azure to provide secondaries for high availability and disaster recovery. The data people access in the business intelligence experience comes through Excel from their own data and partner data – and Power BI provides accessibility to wherever the data resides.

The platform for people, data and analytics needs to have full reach. The natural-language query feature, Q&A, in Power BI for Office 365 is significant in that it provides data insights to anyone who is curious enough to ask a question. We have changed who is able to reach insights by not demanding that everyone learn the vernacular of schemas and chart types. With SQL Server, the most widely deployed database on the planet, many people already have the skills to take advantage of all the capabilities of the platform. And with a billion people who know how to use Excel, people have the skills to get engaged with the data.

Looking forward, we will be very busy. Satya mentioned some work we are doing in the Machine Learning space, and today we also announced a preview of Intelligent Systems Service – just a couple of the things we are working on to deliver a platform for the era of ambient intelligence. The Machine Learning work originates in what it takes to run services at Microsoft like Bing. We had to transform ML from a deep vertical domain into an engineering capability, and in doing so learned what it would take to democratize ML for our customers. Stay tuned.

The Internet of Things (IoT) space is very clearly one of the most important trends in data today. Not only do we envision the data from IoT solutions being well served by the data platform, but we need to ensure the end-to-end solution can be realized by any customer. To that end, Intelligent Systems Service (ISS) is an Internet of Things offering built on Azure, which makes it easier to securely connect, manage, capture and transform machine-generated data regardless of the operating system platform.

It takes a data platform built for the era of ambient intelligence with data, analytics and people to let companies get the most value from their data and realize a data culture. I believe Microsoft is uniquely positioned to provide this platform – through the speed of in-memory, our cloud and our reach. Built on the world’s most widely-deployed database, connected to the cloud through Azure, delivering insights to billions through Office and understanding the world through our new IoT service – it is truly a data platform for a new era. When you put it all together only Microsoft is bringing that comprehensive a platform and that much value to our customers.

 

Quentin Clark
Corporate Vice President
Data Platform Group

Categories: Database

Tune in tomorrow and accelerate your insights

Tomorrow’s the day! Tune in to hear from Microsoft CEO Satya Nadella, COO Kevin Turner, and Data Platform Group CVP Quentin Clark about Microsoft’s approach to data, and how the latest advancements in technology can help you transform data into action.

Who should watch?

Join us tomorrow morning at 10AM PDT if you like data or want to learn more about it. If you store it, manage it, explore it, slice and dice it, analyze it, visualize it, present it, or make decisions based on it. If you’re architecting data solutions or deciding on the best data technology for your business. If you’re a DBA, business analyst, data scientist, or even just a data geek on the side, join the live stream.

What will I hear about?

Data infrastructure. Data tools. And ultimately, the power of data. From finding the connections that could cure cancer, to predicting the success of advertising campaigns, data can do incredible things. Join us online and get inspired. You’ll see how your peers are putting their data, big and small, to work.

From a product perspective, we’ll celebrate the latest advancements in SQL Server 2014, Power BI for Office 365, SQL Server Parallel Data Warehouse, and Microsoft Azure HDInsight. And ultimately, we’ll explore how these offerings can help you organize, analyze, and make sense of your data – no matter the size, type, or location.

Where do I sign up?

Mark your calendar now or RSVP on Facebook so you’re ready to go tomorrow. When streaming goes live, you can join us here for all the action live from San Francisco.

When do things get started?

Tomorrow, April 15, at 10AM PDT. Be there.

See you tomorrow!

Categories: Database

Actian and Attivio OEM Agreement Accelerates Big Data Business Value by Integrating Big Content

Actian Corporation (Ingres) Press Releases - Wed, 11/13/2013 - 14:03
Attivio AIE® integration into the Actian ParAccel Big Data Analytics Platform delivers unmatched performance and insights at a fraction of big-stack vendor cost

Newton, MA and Redwood City, CA – November 13, 2013 – Attivio, creator of the award-winning Active Intelligence Engine (AIE®) and Actian, a leader in next-generation big data analytics, today announced a strategic OEM agreement. The agreement gives Actian customers the power to integrate and correlate human-generated, unstructured content sources with their structured data analytics to dramatically increase the return on their information assets.

Actian’s ParAccel Big Data Analytics Platform accelerates analytic performance and enables rapid deployment of highly targeted analytics for critical initiatives including customer segmentation, fraud prevention, risk avoidance and digital marketing optimization. The addition of Attivio AIE’s ability to include Big Content sources – social media, blog posts, email, research documents, SEC filings, insurance reports, legal depositions, open-ended survey responses and more – completes the Big Data analytics picture to greatly improve its impact and value.

With its patented query-time JOIN technology, Attivio AIE delivers unrivaled access to information from internal and external human-created information, so customers can integrate, correlate, access and analyze all of their assets, regardless of format. As a result, business users gain new, real-time business insights that help build revenue, cut costs, increase competitiveness and manage risk — insights that might otherwise go undiscovered.

“Big Content has become a vital piece in the Big Data puzzle,” said David Schubmehl, Research Director, IDC. “The majority of enterprise information created today is human-generated, but legacy systems have traditionally required processing structured data and unstructured content separately. The addition of Attivio AIE to Actian ParAccel provides an extremely cost-effective option that delivers impressive performance and value.”

“Through this alliance with Attivio, our customers now have access to all internal and external sources of unstructured content, solidifying our leadership position in the Big Data space,” said Fred Gallagher, VP Strategic Business for Actian. “With Attivio, we are giving customers more options and making this technology accessible to companies who recognize the value of innovation that is simply not available from legacy big stack vendors who typically impose expensive, long-term contracts, inflexible technology and long deployment times.”

“Actian ParAccel delivers high-performance analytics on massive amounts of data,” said Sid Probstein, CTO at Attivio. “Being able to access and analyze outside sources from social media to any and all types of human-created content provides a complete and detailed view of customers and their requirements. It advances the entire field of BI and analytics.”

For more information about the Actian alliance and the Attivio Partner Network, please contact partners@attivio.com.

About Attivio

Attivio’s unified information access platform, the Active Intelligence Engine® (AIE®), redefines the business impact of our customers’ information assets, so they can quickly seize opportunities, solve critical challenges and fulfill their strategic vision.

Attivio integrates and correlates disparate silos of structured data and unstructured content in ways never before possible. Offering both intuitive search capabilities and the power of SQL, AIE seamlessly integrates with existing BI and big data tools to reveal insight that matters, through the access method that best suits each user’s technical skills and priorities. Please visit us at www.attivio.com.

Attivio and Active Intelligence Engine are trademarks of Attivio, Inc. All other names are trademarks of their respective companies.

About Actian: Take Action on Big Data

Actian powers the action-driven enterprise in the Age of Data, delivering rapid time to analytic value with a modular approach that connects data assets behind the scenes, enables unconstrained analytics and scales almost without limits. The Actian DataCloud Integration and ParAccel Big Data Analytics Platforms help organizations leverage data assets for competitive advantage, delivering highly parallelized software that fully exploits modern chip, core and cluster infrastructures. Its scalable solutions deliver powerful business results for industry innovators. Actian serves tens of thousands of users worldwide and is headquartered in Redwood City, California. Stay connected with Actian Corporation on Facebook, Twitter and LinkedIn.

Actian, DataCloud, ParAccel, Action Apps, Ingres, RushAnalytics, Versant and Vectorwise are trademarks of Actian Corporation and its subsidiaries. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Categories: Database, Vendor

Actian to Present at the NASSCOM Product Conclave 2013

Actian Corporation (Ingres) Press Releases - Mon, 10/28/2013 - 19:00

Redwood City, Calif. – October 28, 2013 – Steve Shine, CEO and President of Actian Corporation (“Actian”), a next-generation leader in big data analytics, will present during the NASSCOM® Product Conclave 2013 – The Community Meet-up of Doers Who Share! event, held Oct. 28-30 at Vivanta by Taj – Yeshwantpur in Bangalore, India.

During the Opportunities in Big Data session, which takes place Oct. 30 at 2:00-2:30 p.m. IST in the Aura room, Shine will discuss the various opportunities presented by big data, as well as new areas of innovation for startups trying to effect meaningful competitive differentiation through analytics on big data.

Actian has a strong presence in India, helping companies like Future Group, Mahindra Comviva and ManageEngine, a division of Zoho Corporation, harness the power of big data analytics to predict business trends, increase customer retention and gain a competitive advantage.

For more information on the NASSCOM Product Conclave 2013, please visit http://productconclave.nasscom.in/.

About Actian: Take Action on Big Data

Actian powers the action-driven enterprise in the Age of Data, delivering rapid time to analytic value with a modular approach that connects data assets behind the scenes, enables unconstrained analytics and scales almost without limits. The Actian DataCloud Integration and ParAccel Big Data Analytics Platforms help organizations leverage data assets for competitive advantage, delivering highly parallelized software that fully exploits modern chip, core and cluster infrastructures. Its scalable solutions deliver powerful business results for industry innovators. Actian serves tens of thousands of users worldwide and is headquartered in Redwood City, California. Stay connected with Actian Corporation on Facebook, Twitter and LinkedIn.

Actian, DataCloud, ParAccel, Action Apps, Ingres, RushAnalytics, Versant and Vectorwise are trademarks of Actian Corporation and its subsidiaries. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Categories: Database, Vendor

Actian Teams with Hortonworks to Deliver Reference Architecture that Addresses Most Vexing Hadoop Challenges for Enterprises

Actian Corporation (Ingres) Press Releases - Fri, 10/25/2013 - 03:51

Sand Hill research pinpoints Hadoop adoption hurdles at each phase of maturity cycle

Redwood City, Calif. – October 24, 2013 – Actian Corporation (“Actian”), a next-generation leader in big data analytics, has further integrated its ParAccel Big Data Analytics Platform with the Hortonworks Data Platform to address the top Hadoop adoption hurdles: data extraction challenges, design-time friction and poor run-time performance.

The deep integration of Actian’s ParAccel Big Data Analytics Platform on the Hortonworks Data Platform enables high-performance analytics without constraints, simplifies the delivery of data services for entire ecosystems of users, and significantly lowers the total cost of ownership for the modern data platform. By utilizing the new Hortonworks/Actian reference architecture, users can experience quicker time to analytic value by:

  • Easily moving data to and from Hadoop: Users can easily extract data from Hadoop and enrich it seamlessly from within the ParAccel Platform.
  • Eliminating design-time friction: ParAccel Dataflow’s easy-to-use drag-and-drop development capabilities negate the need for complex MapReduce coding or parallel programming.
  • Improving performance: Actian is Hortonworks Data Platform 2.0 and YARN certified, giving users the ability to leverage ParAccel Dataflow for the enhanced performance of pipeline parallelism alongside their existing MapReduce jobs.

Sand Hill Group recently surveyed Hadoop users, primarily data architects and CIOs, to assess their level of maturity in Hadoop adoption and the top challenges they face. For 46.7 percent of respondents, the top challenge was knowledge and experience with the Hadoop platform, while 20.7 percent cited availability of Hadoop and Big Data skills and 6.7 percent struggled with the amount of technology development and engineering required to implement a Hadoop-based solution. Actian’s and Hortonworks’ joint reference architecture tackles these key challenges head-on.

“Hadoop is the clear winner for enterprises that seek to drive transformational, actionable business insights from their big data,” said Mike Hoskins, chief technology officer of Actian Corporation. “For all its power, Hadoop still presents significant barriers to enterprise-scale adoption. The combined Actian-Hortonworks solution overcomes challenges around MapReduce skills shortages, design-time friction and run-time performance to elevate Hadoop to true enterprise readiness. Actian has many global clients delivering groundbreaking big data projects. For nearly all of these, Hadoop is a central building block. The collaboration between Hortonworks and Actian provides a huge boost for performance, productivity and access to rich big data functionality, which helps increase adoption of Hadoop. For our mutual customers it boils down to faster time to value – in the end, that’s what really counts.”

“We have collaborated closely with Actian on the reference architecture for the ParAccel Big Data Analytics Platform and the Hortonworks Data Platform,” said Ari Zilka, chief technology officer of Hortonworks. “When used together, these products allow users to rapidly and effectively deploy high-performance Hadoop applications for scenarios ranging from archiving, ETL and operational applications to advanced analytics, including predictive analytics. By working together we are excited to help forward-thinking architects and CIOs successfully design and implement incredibly high-performing, hard-hitting big data applications with proven best practices, robust and secure architecture, and unmatched technology capabilities.”

“Design and implementation of next-generation big data integration processes can be a daunting leap for BI professionals who are used to classic ETL technology,” said Steve Miller, president and co-founder of OpenBI. “Actian’s approach – providing a visual data programming technology that executes natively in cluster – will increase productivity while opening up big data analytics programming to a much broader set of professionals.”

“Our research findings offer compelling insights into friction points on the march to harnessing Hadoop to unlock big data analytics value for the enterprise,” says M.R. Rangaswami, co-founder and managing director of Sand Hill Group. "With more than 35 percent of respondents indicating that the results of their Hadoop/Big Data initiatives were less than they expected, the market is ripe for more easy-to-use, easy-to-implement solutions on Hadoop such as Actian's analytics and ETL offerings.”

Hoskins and Zilka will jointly present at Strata + Hadoop World 2013 on Tuesday, October 29, 2013 at 1:45 p.m. EDT. In addition, both Actian and Hortonworks will provide hands-on demonstrations of the reference architecture at their Strata booths, #108 and #306, respectively.

About Actian: Take Action on Big Data
Actian powers the action-driven enterprise in the Age of Data, delivering rapid time to analytic value with a modular approach that connects data assets behind the scenes, enables unconstrained analytics and scales almost without limits. The Actian DataCloud integration and ParAccel Big Data Analytics Platforms help organizations leverage data assets for competitive advantage, delivering highly parallelized software that fully exploits modern chip, core and cluster infrastructures. Actian serves tens of thousands of users worldwide and is headquartered in Redwood City, Calif. with offices in Austin, New York, London, Paris, Frankfurt, Amsterdam, New Delhi and Melbourne. Stay connected with Actian Corporation on Facebook, Twitter and LinkedIn.

Actian, Actian DataCloud, ParAccel, Action Apps, Ingres, Versant and Vectorwise are trademarks of Actian Corporation and its subsidiaries. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

# # #

Categories: Database, Vendor

Actian Appoints New CMO and SVP of Business Development to Accelerate the Transformation of the Big Data Marketplace

Actian Corporation (Ingres) Press Releases - Thu, 10/24/2013 - 19:00

Ashish Gupta, Former Microsoft and Vidyo executive, is poised to redefine the data analytics marketplace by creating new opportunities not addressed by aging stack players

Redwood City, Calif. – October 24, 2013 – Actian Corporation (“Actian”), a next-generation leader in big data analytics and cloud integration, has appointed Ashish Gupta as chief marketing officer and senior vice president of business development.

“We have very systematically evolved our business so that we have a very powerful engine for the market’s transition to the Age of Data,” said Steve Shine, chief executive officer of Actian Corporation. “Ashish Gupta brings the go-to-market experience to ensure that we build our base in big data analytics and cloud integration. More importantly, he has the strategic vision to work with the team to entirely redefine these markets as we leapfrog legacy stack players.”

Gupta brings a passion and track record for challenging established companies saddled with legacy approaches, often caught in the Innovator’s Dilemma, by exploiting disruptive technologies and creative marketing strategies to rapidly scale the new entrant’s business and open new markets not addressed by incumbents. He joins Actian from Vidyo, a video conferencing provider, where he served as chief marketing officer and senior vice president of corporate development. Under his leadership, Vidyo experienced significant growth, formed key strategic partnerships and built a leading brand in the industry – challenging the leaders of a 30-year-old industry. Prior to Vidyo, Gupta held various executive roles within the Microsoft Office Division where he led marketing, sales and business development teams for Microsoft’s Unified Communications Group, competing with established PBX vendors to create a leadership position for Microsoft Lync in the communications and collaboration market. Gupta has also served in executive roles at Alcatel/Genesys Telecommunications, Deloitte Consulting, Covad and Hewlett-Packard.

“Severe market disruption happens when innovative technologies materially shift an industry’s fulcrum to create new business approaches and markets,” said Gupta. “Actian’s portfolio has proven to be such a force in the industry by enabling next-generation analytical solutions and invisible integration for thousands of customers that use data to create sustainable competitive advantage. I welcome the opportunity to contribute to the company’s fast growth and market leadership.”

About Actian: Take Action on Big Data

Actian powers the action-driven enterprise in the Age of Data, delivering rapid time to analytic value with a modular approach that connects data assets behind the scenes, enables unconstrained analytics and scales almost without limits. The Actian DataCloud Integration and ParAccel Big Data Analytics Platforms help organizations leverage data assets for competitive advantage, delivering highly parallelized software that fully exploits modern chip, core and cluster infrastructures. Its scalable solutions deliver powerful business results for industry innovators. Actian serves tens of thousands of users worldwide and is headquartered in Redwood City, California. Stay connected with Actian Corporation on Facebook, Twitter and LinkedIn.

Actian, DataCloud, ParAccel, Action Apps, Ingres, RushAnalytics, Versant and Vectorwise are trademarks of Actian Corporation and its subsidiaries. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Categories: Database, Vendor

Actian to Present at the 10th Annual East Coast Technology Growth Conference

Actian Corporation (Ingres) Press Releases - Tue, 10/15/2013 - 02:41

Redwood City, Calif. – October 14, 2013 – John Santaferraro, vice president of solutions and product marketing at Actian Corporation (“Actian”), a next-generation leader in big data analytics, will present during the AGC Partners 10th Annual East Coast Technology Growth Conference at the Westin Copley Place in Boston on Tuesday, October 15, 2013.

During the event, Santaferraro will participate in a panel focused on Big Data Analytics moderated by Jeff Kelly, Principal Research Coordinator of the Wikibon Project. Santaferraro and fellow panelists Kathy McGroddy, VP of Business Development of IBM; Louis Brun, Senior VP of Product Strategy of Guavus; Jit Saxena, Chairman of Netezza and Anthony Deighton, CTO of QlikView will discuss why traditional data management and business analytics tools and technologies are straining under the weight of big data. They will shed light on challenges organizations face when deriving value from their data and share new approaches in helping enterprises gain actionable insights through next-generation big data analytics.

Visit actian.com to learn more about Actian’s next-generation approach of enabling organizations to predict the future, prescribe the next best action, prevent damage to their business and discover hidden risks and opportunities through use of the ParAccel Big Data Analytics Platform.

About Actian: Take Action on Big Data

Actian powers the action-driven enterprise in the Age of Data, delivering rapid time to analytic value with a modular approach that connects data assets behind the scenes, enables unconstrained analytics and scales almost without limits. The Actian DataCloud Integration and ParAccel Big Data Analytics Platforms help organizations leverage data assets for competitive advantage, delivering highly parallelized software that fully exploits modern chip, core and cluster infrastructures. Its scalable solutions deliver powerful business results for industry innovators. Actian serves tens of thousands of users worldwide and is headquartered in Redwood City, California. Stay connected with Actian Corporation on Facebook, Twitter and LinkedIn.

Actian, DataCloud, ParAccel, Action Apps, Ingres, RushAnalytics, Versant and Vectorwise are trademarks of Actian Corporation and its subsidiaries. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Categories: Database, Vendor

Actian Integrates ParAccel Big Data Analytics Platform to Speed Time to Analytic Value

Actian Corporation (Ingres) Press Releases - Fri, 10/11/2013 - 21:14

Platform combines ParAccel Big Data Analytics Platform’s high-performance analytical capabilities with Hadoop’s data management capabilities to democratize big data analytics

Redwood City, Calif. – October 11, 2013 – Actian Corporation (“Actian”), a next-generation leader in big data analytics, announced the release of new ParAccel Dataflow Operators that integrate Hadoop with the recently announced ParAccel Big Data Analytics Platform, further unifying its technology offerings. The new operators tie together the data management capabilities of Hadoop with the high-performance analytic processing of the ParAccel Big Data Analytics Platform, enabling organizations to achieve better price/performance and quicker time to analytic value.

The ParAccel Big Data Analytics Platform provides a single, visual interface to load, prepare, enrich and analyze data natively on various Apache Hadoop distributions. Users can now process data directly on Hadoop and move the data to the ParAccel Big Data Analytics Platform’s high-performance analytics engine without any MapReduce or SQL programming, rapidly increasing the rate at which business value is derived from the analytics process. Business users and analysts alike can leverage the extreme performance of the ParAccel Big Data Analytics Platform for their predictive analytics initiatives.

“In the Age of Data, organizations need next-generation technologies that rapidly turn big-data investments into value-creation engines. To achieve this, different data platforms must work together to meet the mixed workload demands that result from the convergence of emerging data and new analytics,” said Steve Shine, chief executive officer of Actian Corporation. “Actian has quickly integrated its leading-edge technology assets to create a cooperative analytic processing environment that accelerates the speed at which data flows from data management to high-value analytics.”

The ParAccel Dataflow Operators enable data to persist within Actian’s massively parallel analytic processing engine or the single-node version used for accelerated business intelligence or analytic applications. When used in conjunction with Hadoop’s superior data management capabilities, the platform gives business users the ability to conduct analytics leveraging best-of-breed point applications for enterprise-class deployments.

“Platforms that allow workloads to gravitate to the engine best-suited to handle specific kinds of processing will consistently lower the cost of computing,” said Shawn Rogers, vice president of Enterprise Management Associates. “Hadoop is optimal for preparing and enriching data, as well as search and discovery analytics. Analytic platforms are best-suited for running high-performance, lower latency analytics at any scale. Actian’s next-generation, shared processing approach is consistent with the way companies should architect systems to leverage both technologies.”

This technology integration comes only months removed from Actian’s announcement of leveraging recently acquired companies ParAccel, Pervasive Software and Versant Corporation to deliver the ParAccel Big Data Analytics and Actian DataCloud Platforms. “You are seeing the new Actian in action,” said Shine. “We are a big data innovator with the agility of a small Silicon Valley startup and the strength and staying power of a global stalwart.”

About Actian: Take Action on Big Data

Actian powers the action-driven enterprise in the Age of Data, delivering rapid time to analytic value with a modular approach that connects data assets behind the scenes, enables unconstrained analytics and scales almost without limits. The Actian DataCloud Integration and ParAccel Big Data Analytics Platforms help organizations leverage data assets for competitive advantage, delivering highly parallelized software that fully exploits modern chip, core and cluster infrastructures. Its scalable solutions deliver powerful business results for industry innovators. Actian serves tens of thousands of users worldwide and is headquartered in Redwood City, California. Stay connected with Actian Corporation on Facebook, Twitter and LinkedIn.

Actian, DataCloud, ParAccel, Action Apps, Ingres, RushAnalytics, Versant and Vectorwise are trademarks of Actian Corporation and its subsidiaries. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Categories: Database, Vendor

Four short links: 20 December 2011 - Maximum MySQL, Digital News, Unbiased Mining, and Congressional Clue

O'Reilly News: MySQL - Tue, 12/20/2011 - 12:00
How Twitter Stores 250M Tweets a Day Using MySQL (High Scalability) -- notes from a talk at the MySQL conference on how Twitter built a high-volume MySQL store. How The Atlantic Got Profitable With Digital First (Mashable) -- Lauf says his team has focused on putting together premium advertising experiences that span print, digital, events and (increasingly) mobile. Data...
Categories: Database

A Great Day for Membase

NorthScale Blog - Tue, 10/05/2010 - 01:14

Whenever I talk about Membase with candidates, employees, or friends, I feel more and more excited about what we are building and how it is going to impact the industry. Each discussion validates my belief that what we do *is* unique and a game changer.

Just today, we had two important “wins,” one from a prospect who evaluated our technology against other NoSQL databases and chose Membase. I can’t talk much about it yet, but this is an amazing win. The second is the fact that IDC chose us as an innovative company to watch. Great day!

Every morning when I look at my calendar, I find myself looking forward to several things. At the top of the list are the meetings in which I am going to discuss Membase technology, meet smart people, and demo a data management solution they can get excited about. I also look forward to the end of each day, to see what improvements are in the latest build that make it even better for Membase users. It’s fun being in a position where people are hungry to learn about what you do and how you do it.

After five years in a big company I now remember how much I love being in a startup: to be able to move quickly, change direction fast when needed, develop features in days that in other environments would take weeks or even months, wear multiple hats, and most importantly, be close to the customer. I like building meaningful systems that solve real problems. This company is an amazing place to be and it is getting better every day.

And, by the way, we are *always* looking for great people to join us. If you’re one of them, just shoot me an email.

Categories: Architecture, Database

Membase Recognized by IDC

NorthScale Blog - Mon, 10/04/2010 - 17:52

Winning awards is always fun. Over the years, companies I’ve been part of have won their fair share. But not all awards are created equal. Some definitely carry more weight than others, and I put the IDC Innovative Company to Watch award in this category. The fact that IDC does extensive research on the markets they address, talks regularly to a broad set of vendors and customers, and has a rigorous process for award selection all brings great credibility to the award. The award certainly has great meaning for us and I suspect this is also true for organizations who are thinking through what database to use for their next project.

As a small company it’s always a challenge to get the word out about your products and this is particularly true when you’re in a space like NoSQL where there are lots of competing technologies. Membase wasn’t one of the first NoSQL products in the market, so it’s encouraging that our innovative work and early customer success is being recognized so quickly. We’re very proud that while IDC could have given the award to any of the many NoSQL contenders, they chose to give it to us.

While we are thrilled to be recognized as a company to watch, it is even more gratifying that IDC understands the strategic importance of this new category of databases for enterprise customers and the significant near-term opportunity (tens of millions of dollars) it represents for companies like ours. IDC notes that it is seeing “an ‘intensifying trend’ for application development to move to the Web, creating the need for back-end architectures that demand extreme speed and scale elasticity while maintaining high levels of reliability.” I can second that. We’ve seen a marked increase in interest and uptake of our software – we just hit a run-rate of 30,000 downloads a month – and judging by this heightened demand in the marketplace, customers with interactive web applications are clearly looking for alternatives to complement their relational database solutions.

We’re also excited about the range of customer interest. Yes, our customers include many Web 2.0 type companies such as social gaming and ad targeting platforms among others – but many enterprise customers are now recognizing the need for non-relational solutions on the back end. A recent InformationWeek survey indicated that 44% of IT staff in the enterprise had not yet heard of NoSQL databases. But that means that 56% have heard of NoSQL – and in fact, if our interactions with customers are any indication, many of those already have pilots underway. From financial services to retailers to media companies, we’re seeing a growing number of inquiries and engagements in the enterprise and expect those numbers to increase as the value of NoSQL becomes more widely understood among those in mainstream IT.

I’d love to hear from you, especially if you are involved with web-based application development for an enterprise. What are your plans for exploring this emerging class of data management solutions optimized to support interactive web applications?

Categories: Architecture, Database

Membase Server Beta 4 is here, with memcached buckets!

NorthScale Blog - Thu, 09/23/2010 - 14:36

We NorthScalers have been hard at work and are proud to release Membase Server Beta 4, our final Beta release ahead of our general availability release.

Go and grab it here!

In addition to support for 64-bit Windows, we think you’ll be particularly excited by a major new feature in the release: memcached buckets!

Introducing Memcached Buckets
You can now create buckets in your Membase Server cluster that behave exactly like memcached, which means you can use Membase Server as a drop-in replacement for your existing memcached setup. In a single cluster, you can now share resources between memcached buckets and membase buckets.

Let’s look at the differences between memcached and membase bucket types:

Fundamentally, membase buckets are designed as permanent data stores. Once you put a key-value (KV) pair into a membase bucket, it will remain there until you remove it (or its time-to-live expires). In a membase bucket, data is written to disk, so your store can grow, constrained only by the available disk space. In addition, membase buckets offer replication; further, they use vbuckets to allow data to be moved between nodes as the cluster topology changes.

Memcached buckets, on the other hand, follow memcached semantics: they are fundamentally designed as caches, not permanent data stores. As the bucket runs out of available memory, items are evicted based on a least-recently-used (LRU) policy. As a result, your application needs to cope with the expected behavior, namely that an item stored in the cache may not be available at some later point. If that’s not the behavior you want, consider membase buckets as an alternative.

And, just like memcached, memcached buckets do not persist data to disk, and there is no replication between nodes. When you add a node, keys that now map to the new node are not transferred from the old node, so there will be cache misses and the KV pairs will need to be set again on the new node – again, this is the normal, expected behavior of a memcached setup.
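A minimal sketch of the cache-miss handling just described, using the stock python-memcached client against a memcached bucket; since the bucket is a drop-in replacement, the client needs no changes. The host name and the load_user_from_db helper are hypothetical.

```python
import memcache

# Point the standard client at a Membase node serving a memcached bucket.
mc = memcache.Client(["membase-node1:11211"])

def get_user(user_id, load_user_from_db):
    key = "user:%s" % user_id
    user = mc.get(key)
    if user is None:
        # The item may have been evicted under LRU pressure, lost to a
        # topology change, or simply never cached; fall back to the
        # system of record and re-populate.
        user = load_user_from_db(user_id)
        mc.set(key, user, time=300)  # re-cache with a 5-minute TTL
    return user
```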

Memory quota allocation for memcached buckets is identical to that of the current NorthScale Memcached Server product. A fixed amount of memory per node is allocated for use by the memcached bucket, so adding or removing nodes will change the size of the memcached bucket. This is different from membase buckets, where the quota stays unchanged as the number of nodes changes, but we chose to keep memcached bucket behavior consistent with what current memcached users are accustomed to.

The new setup wizard lets you configure the default bucket when creating a new cluster, so you can start with a memcached bucket as the only bucket, and expand from there as your needs dictate.

A Quick Word on Disk Quotas
Based on our own experience and feedback from users, we took a hard second look at our disk quota system for membase buckets. Ultimately, we decided to remove that option. We believe this change brings the product more in line with typical database behavior: we now only return errors and run out of disk space when you actually run out of disk space :) Of course we are still showing disk usage per bucket and across the cluster, so that you can keep an eye on overall resource usage.

Enjoy Beta 4 and let us know what you think!

Categories: Architecture, Database

Membase and RightScale: Elastic Data Scaling in the Cloud

NorthScale Blog - Tue, 09/14/2010 - 13:47

I am very excited that Membase ServerTemplates are now up and running on the RightScale Cloud Management Platform (see today’s announcement). RightScale customers now have easy access to a leading NoSQL database for the first time, and Membase customers can rest easy that when they’re ready to deploy their applications in the cloud they can take advantage of the leading cloud management platform in the industry.

For those who may not be familiar with RightScale ServerTemplates, they’re really cool. They provide a kind of blueprint for what a server should do in the cloud. They let users deploy preconfigured, cloud-ready servers that know how to operate in the cloud: how to obtain an IP address, how to submit monitoring data, and how to work with other servers in a cloud deployment. In our case, the Membase ServerTemplate sets everything up so you can easily use RightScale to deploy, provision, and manage a Membase database server running in Amazon (AWS), Rackspace, GoGrid, Eucalyptus or any other cloud service that RightScale supports.

Establishing a close relationship with RightScale was an easy decision. Many of our customers, including Zynga, already use RightScale extensively and prodded us to integrate our products with theirs. We’ve been working closely with RightScale for the past couple of months and I think you’ll like what we’ve put together.

Since social gaming turned out to be one of the biggest initial adopters of Membase, RightScale, and Amazon (AWS), we’ve collectively decided to host a webinar about how leading social gaming companies are using our products to build successful businesses (register now). If you’re in the social gaming business you won’t want to miss this, but even if you’re not, you can still learn a lot about deploying your web application in the cloud.

So, check out our new partnership with RightScale and let me know what you think. And let us know what other hot cloud companies we should work with next. Our goal is to make Membase easily accessible no matter how and where you choose to develop and deploy your application. Tell us how we can make your life easier!

Categories: Architecture, Database

Membase and Open Source 4.0

NorthScale Blog - Fri, 09/03/2010 - 16:33

I read Matt Aslett’s (The 451) post on the golden age of open source with interest. In it he describes how we’ve arrived at the fourth stage of open source, which is, “in short: a return to a focus on collaboration and community, as well as commercial interests.”

What we’re doing with membase.org definitely falls in line with this description although with a slightly different twist. NorthScale saw the need for a simple, fast, and elastic NoSQL database that we felt wasn’t being met by existing technologies. When it became clear that many prominent companies shared this view and were committed to an open source solution, NorthScale stepped in to shepherd the development of a broad community around the membase.org project. Consistent with Matt Aslett’s description of open source 4.0, the result is a project with an “emphasis on collaboration and community rather than control.” While NorthScale has contributed the bulk of the code to the project, our customers Zynga and NHN are co-sponsors of the project who have a strong commitment to its success. This blurring of the line between vendor and customer – the collaboration between two seemingly opposite sides of a transaction – has long set open source apart from the large proprietary vendors who want nothing more than a lock on their customers.

Traditionally, the primary attraction to open source, and what enabled it to make inroads in the enterprise, has been cost. This is the “cheaper Oracle than Oracle” model where the technology is not necessarily solving any new problems in the market, but provides a cheaper open source version of something enterprises are already paying for.

However, when I talk to enterprise companies, lowering costs no longer cuts it as a sole driver for open source technology adoption. On the other hand, if we engage our customers around a very real and painful problem they’re dealing with – in our case, the mismatch between relational databases and the needs of interactive web applications – and demonstrate how we’re solving this with innovative new technology, then we can have a discussion.

In a nutshell, the fourth stage of open source is much more than just a return to community and collaboration – it’s about putting open source front and center as an engine of innovation. We’re seeing an emergence of open source projects that solve a new problem and create a new solution that eases this pain point. The source code just happens to be open because it’s what we have all come to expect. This is particularly true of infrastructure software going forward, where it’s expected that some component, if not all of it, is available as open source.

We believe open source 4.0 is characterized in part by projects that solve new problems with innovative solutions and use a highly collaborative model. We encourage the participation of both “corporate sponsors” and passionate individuals who are willing to contribute to the membase roadmap and strengthen the community.

Categories: Architecture, Database

Implementing Membase Clients

NorthScale Blog - Tue, 08/31/2010 - 20:46

Recently, Attila Kiskó, author of the Enyim .NET memcached client – the best .NET memcached client around – has been enhancing his library to speak directly to membase data nodes. Membase already supports all existing memcached client libraries and protocols via a high-performance proxy, but there’s a “direct path” that client libraries can use for even better performance. Along the way, we ended up with a quick guide on the membase.org wiki on how to create your own native or “smart” membase client library, so anybody else with their own favorite programming language can do the same.

http://wiki.membase.org/bin/view/Main/ClientImplementationGuide

The easiest approach is to start with your favorite memcached client library (one that speaks the memcached binary protocol) and proceed from there. The fun part is handling Rebalance operations so that the cluster stays seamlessly elastic without data loss, but who doesn’t like fun challenges like these?
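For a flavor of what a “smart” client does, here is a rough Python sketch of the key-to-vbucket-to-server mapping. The CRC32-based hash follows the libvbucket convention as best I understand it, and the two-node map is invented; treat the details as illustrative, not normative.

```python
import zlib

NUM_VBUCKETS = 1024
servers = ["10.0.0.1:11210", "10.0.0.2:11210"]          # hypothetical nodes
# index = vbucket id, value = [master server index, replica server index]
vbucket_map = [[i % 2, (i + 1) % 2] for i in range(NUM_VBUCKETS)]

def vbucket_id(key):
    # Hash the key with CRC32, keep 15 bits, fold into the vbucket space.
    crc = zlib.crc32(key.encode("utf-8")) & 0xFFFFFFFF
    return ((crc >> 16) & 0x7FFF) % NUM_VBUCKETS

def server_for(key):
    # A real client sends the vbucket id with each binary-protocol request;
    # if a node answers NOT_MY_VBUCKET (e.g. mid-rebalance), the client
    # refreshes the map from the cluster and retries against the new owner.
    return servers[vbucket_map[vbucket_id(key)][0]]

print(server_for("user:1234"))
```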

Categories: Architecture, Database

NorthScale Membase Server Beta 3 is Here!

NorthScale Blog - Mon, 08/30/2010 - 14:14

I am excited to announce that NorthScale Membase Server 1.6 Beta3 is now available and ready for download.

This beta release adds a lot of new functionality and reflects most of what you’ll find in the final product. Highlights include:

  • Windows support
  • Multi-tenancy – allows multiple buckets on a single cluster including bucket quotas
  • “Cluster Overview” as a new monitoring dashboard
  • And lots of small improvements and bug fixes, of course!

Let’s take a look at these features in a bit more detail:

Windows support is by far one of the most frequently requested features, and we are very pleased to offer it with this beta release. Beta3 provides 32-bit Windows support, with 64-bit support on the way. (Note: the 32-bit binary runs just fine on 64-bit Windows but is subject to 32-bit memory limits.) The Windows version provides the same feature set as our Linux version.

Multi-tenancy is the mechanism for creating multiple buckets on one membase cluster. Each bucket represents a separate namespace, but more importantly it also provides a resource control mechanism on a per-bucket basis, allowing buckets to have different behavior. For example, if you have some data you consider very important, you may want to create a bucket with a replica count of 3; for other, less crucial data, a replica count of 0 might make sense. This way you can decide how to divide the cluster resources to accommodate different requirements for different applications or data types. No more one size fits all!

Bucket quotas are worth a bit more explanation. Each time you create a cluster, you set a fixed amount of memory that each server node will contribute to the total cluster memory available to buckets. Once set, this value is inherited by any server joining the cluster and cannot be changed. Hence, the total memory available for membase use increases by this amount with each server added to the cluster.

Similarly, each bucket defines a memory quota that sets the amount of memory it can use out of the cluster total memory. This quota does not change as you add servers to your cluster, but you can manually edit this on the “Manage Bucket” screen.

In addition to the memory quota, there is also a disk quota associated with each bucket. In contrast to the memory quota, there is no fixed limit on the disk space that each server brings to the cluster; all free disk space on the assigned storage path may be used. It is up to the sysadmin to make sure each node provides sufficient space to accommodate the data written (you can track free disk space in the new Cluster Overview dashboard). Disk quotas are not yet enforced in Beta3, but you can already use them to monitor your bucket’s usage against the quota.
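A worked example of the quota arithmetic, with made-up numbers: the per-node contribution is fixed at cluster creation, so total capacity grows with node count, while each bucket’s quota stays wherever you set it.

```python
per_node_quota_mb = 1600        # fixed at cluster creation, inherited by joiners
nodes = 4
cluster_total_mb = per_node_quota_mb * nodes      # 6400 MB available to buckets

session_bucket_mb = 2000        # bucket quota: independent of node count
spare_mb = cluster_total_mb - session_bucket_mb   # 4400 MB left for other buckets

nodes += 1                      # add a fifth server...
cluster_total_mb = per_node_quota_mb * nodes      # ...capacity rises to 8000 MB
# session_bucket_mb is unchanged; only an edit on the Manage Bucket screen moves it.
```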

The Cluster Overview provides a single cluster overview dashboard, showing you the most crucial stats of your cluster in one place.

You get a single page to keep track of the memory and disk usage of all your buckets, as well as how many operations your cluster is performing. The “disk fetches per second” stat serves as a potential issue indicator: if you are seeing a lot of disk reads, the working set for at least one of your buckets no longer fits into RAM alone. Disk reads have much higher latency than memory reads, so should this happen you can use the Data Bucket monitor section to drill down and understand which bucket is encountering the issue. If you need to take action, you can increase the bucket memory quota in the Manage Data Bucket section. Issue resolved!

As you can see, we packed a lot of great new features into Beta3. But there is still more to come. You might be able to guess from the new bucket-creation dialog that we have another bucket type in store, which will make multi-tenancy even more exciting – but for more on that you’ll have to check back later.

Enjoy Beta3 and let us know how you are getting on with the new features!

Categories: Architecture, Database

Meetup with Membase at VMworld – Win an iPad

NorthScale Blog - Thu, 08/26/2010 - 23:36

We are looking forward to a great week next week at VMworld 2010 in San Francisco. It looks like it’s shaping up to be a great conference.

See a Membase demo.
If you’re at the show, be sure to come by NorthScale’s booth (#640) for a Membase demo. Membase is an elastic key-value database that stores data behind interactive web applications far more efficiently and cost-effectively than a relational database can. We’d love to show you how this highly available, cloud-friendly data layer expands and rebalances dynamically as application needs change. Just talk to anyone in the booth wearing a t-shirt with the Membase mascot.

Wear a t-shirt, win an iPad!
And speaking of t-shirts, pick up your own yellow membase t-shirt and wear it around the show floor to win a chance for an iPad. We’ll be giving away iPads at our booth on Tuesday (8/31) and Wednesday (9/1) at 5:30pm, so stop by during the day to get your t-shirt and find out more about how to win.

Can’t make the show?
Test drive Membase anyway.

If you won’t be in San Francisco next week, you can still take Membase for a spin by downloading it here. There are also a number of webinars available for getting started, and a very active user forum as well.

Hope to see you at the show!

Categories: Architecture, Database