

Dell Doubles Application Speeds, Processes Transactions 9X Faster with In-Memory OLTP

As a global IT leader, Dell manufactures some of the world’s most innovative hardware and software solutions. It also manages one of the most successful e-commerce sites. In 2013, the company facilitated billions in online sales. On a typical day, 10,000 people are browsing at the same time. During peak online shopping periods, the number of concurrent shoppers can increase 100 times, to as many as one million people.

To help facilitate fast, frustration-free shopping despite traffic spikes, Dell has distributed the website’s online transaction processing (OLTP) load between 2,000 virtual machines, which include 27 mission-critical databases that run on Microsoft SQL Server 2012 Enterprise software and the Windows Server 2012 operating system. These databases, along with hundreds of web applications, are supported by Dell PowerEdge servers, Dell Compellent storage, and Dell Networking switches.

When Dell learned about SQL Server 2014 and its in-memory capabilities, the company immediately signed up to be an early adopter. Not only are memory-optimized tables in SQL Server 2014 lock-free—making it possible for numerous applications to simultaneously access and write to the same database rows—but also the solution is based on the technologies that IT staff already know how to use.

Initially, engineers set up the database tables to be fully durable, meaning that changes are written to the transaction log before a transaction commits. However, developers can also configure the tables to use delayed durability, which means that log writes are batched and deferred slightly to minimize any impact on performance, at the cost of possibly losing the most recent transactions in a failure.
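As a rough illustration of the two options (the table, column, and database names here are hypothetical sketches, not Dell's actual schema), a memory-optimized table is declared fully durable with SCHEMA_AND_DATA, while delayed durability is opted into at the database level:

```sql
-- Hypothetical sketch: a fully durable memory-optimized table.
-- In SQL Server 2014, string index keys require a BIN2 collation.
CREATE TABLE dbo.SessionState
(
    SessionId  NVARCHAR(64) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload    VARBINARY(4000) NULL,
    LastAccess DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Delayed durability is permitted per database, then honored per commit:
ALTER DATABASE SalesDb SET DELAYED_DURABILITY = ALLOWED;
```

SCHEMA_ONLY, by contrast, would persist only the table definition across restarts, trading durability for speed.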

By gaining the option to store tables in memory, Dell is achieving unprecedented OLTP speeds. “The performance increase we realize with In-Memory OLTP in SQL Server 2014 is astounding!” says Scott Hilleque, Design Architect at Dell. “After just a few hours of work, groups sped database performance by as much as nine times. And all aspects of our In-Memory OLTP experience have been seamless for our staff because it is so easy to adopt, and its implementation produces zero friction for architects, developers, database administrators, and operations staff.”

Although Dell is in the very early stages of adopting SQL Server 2014, IT workers are excited by the impact of In-Memory OLTP. The more the IT team can speed database performance, the faster web applications can get the information that they need to deliver a responsive and customized browsing experience for customers. Reinaldo Kibel, Database Strategist at Dell, summarizes: “In-Memory OLTP in SQL Server 2014 really signifies a new mindset in database development because with it, we no longer have to deal with the performance hits caused by database locks—and this is just one of the amazing benefits of this solution.”

You can read the full case study and watch the accompanying video on the Microsoft site.

Also, check out the website to learn more about SQL Server 2014 and start a free trial today.  

Categories: Database

Architecture of the Microsoft Analytics Platform System

In today’s world of interconnected devices and broad access to more and more data, the ability to glean ambient insight from the variety of data sources has been made quite hard by the variety and speed with which data is being delivered. Think about it for a minute: your servers continue to provide interesting data about the operations happening in your business, but now you also have data coming from the temperature sensors in the A/C units, the power supplies, and the networking equipment in the data center, which can be combined to show that spikes in temperature and traffic have a dramatic effect on the life of a server. This type of contextual data is growing to include larger and more detailed insights into the operations and management of your business. Looking to the future, Pew Research has released a report that predicts 50 billion connected devices by 2025 – five devices for every person expected to be alive. With data coming from sources ranging from manufacturing equipment to jet airliners, from mobile phones to your scale, and to things we haven’t even imagined yet, the question becomes: how do you take advantage of all of these data sources to provide insight into the current and future trends in your business?

In April 2014, Microsoft announced the Analytics Platform System (APS) as Microsoft’s “Big Data in a Box” solution for addressing this question. APS is an appliance solution with hardware and software that is purpose-built and pre-integrated to address the overwhelming variety of data while providing customers the opportunity to access this vast trove of data. The primary goal of APS is to enable the loading and querying of terabytes and even petabytes of data in a performant way, using a Massively Parallel Processing version of Microsoft SQL Server (SQL Server PDW) and Microsoft’s Hadoop distribution, HDInsight, which is based on the Hortonworks Data Platform.

Basic Design

An APS solution consists of three basic components:

  1. The hardware – the servers, storage, networking and racks.
  2. The fabric – the base software layer for operations within the appliance.
  3. The workloads – the individual workload types offering structured and unstructured data warehousing.

The Hardware

Utilizing commodity servers, storage, drives, and networking devices from our three hardware partners (Dell, HP, and Quanta), Microsoft is able to offer a high-performance scale-out data warehouse solution that can grow to very large data sets while providing redundancy of each component to ensure high availability. Starting with standard servers and JBOD (Just a Bunch Of Disks) storage arrays, APS can grow from a simple two-node-plus-storage configuration to 60 nodes. At scale, that means a warehouse with 720 cores, 14 TB of RAM, 6 PB of raw storage, and ultra-high-speed networking over Ethernet and InfiniBand, while offering the lowest price per terabyte of any data warehouse appliance on the market (Value Prism Consulting).


The Fabric

The fabric layer is built using technologies from the Microsoft portfolio that enable rock-solid reliability, management, and monitoring without having to learn anything new. Starting with Microsoft Windows Server 2012, the appliance builds a solid foundation for each workload by providing a virtual environment based on Hyper-V that also offers high availability via Failover Clustering, all managed by Active Directory. Combining this base technology with Cluster Shared Volumes (CSV) and Windows Storage Spaces, the appliance is able to offer a large and expandable base fabric for each of the workloads while reducing the cost of the appliance by not requiring specialized or proprietary hardware. Each of the components offers full redundancy to ensure high availability in failure cases.


The Workloads

Building upon the fabric layer, the current release of APS offers two distinct workload types – structured data through SQL Server Parallel Data Warehouse (PDW) and unstructured data through HDInsight (Hadoop). These workloads can be mixed within a single appliance, giving customers the flexibility to tailor the appliance to the needs of their business.

SQL Server Parallel Data Warehouse is a massively parallel processing, shared-nothing scale-out solution for Microsoft SQL Server that eliminates the need to ‘forklift’ additional very large and very expensive hardware into your datacenter to grow as the volume of data exhaust into your warehouse increases. Instead of having to expand from a large multi-processor and connected storage system to a massive multi-processor and SAN-based solution, PDW uses the commodity hardware model with distributed execution to scale out to a wide footprint. This scale-out model of execution has proven to be a very effective and economical way to grow your workload.

HDInsight is Microsoft’s offering of Hadoop for Windows, based on the Hortonworks Data Platform from Hortonworks. See the HDInsight portal for details on this technology. HDInsight is now offered as a workload on APS to allow for on-premises Hadoop that is optimized for data warehouse workloads. By offering HDInsight as a workload on the appliance, the burden of defining, constructing, and managing a Hadoop cluster has been minimized. And by using PolyBase, Microsoft’s SQL Server-to-HDFS bridge technology, customers can not only manage and monitor Hadoop through tools they are familiar with, but can for the first time use Active Directory to manage security for the data stored within Hadoop – offering the same ease of user management offered in SQL Server.

Massively-Parallel Processing (MPP) in SQL Server

Now that we’ve laid the groundwork for APS, let’s dive into how we load and process data at such high performance and scale. The PDW region of APS is a scale-out version of SQL Server that enables parallel query execution to occur across multiple nodes simultaneously. The effect is the ability to break what appears to be a very large operation into tasks that can be managed at a smaller scale. For example, a query against 100 billion rows in a SQL Server SMP environment would require the processing of all of the data in a single execution space. With MPP, the work is spread across many nodes, dividing the problem into smaller, more manageable tasks. In a four-node appliance (see the picture below), each node is only asked to process roughly 25 billion rows – a much quicker task.

To accomplish such a feat, APS relies on a couple of key components to manage and move data within the appliance – a table distribution model and the Data Movement Service (DMS).

The first is the table distribution model that allows for a table to be either replicated to all nodes (used for smaller tables such as language, countries, etc.) or to be distributed across the nodes (such as a large fact table for sales orders or web clicks). By replicating small tables to each node, the appliance is able to perform join operations very quickly on a single node without having to pull all of the data to the control node for processing. By distributing large tables across the appliance, each node can process and return a smaller set of data returning only the relevant data to the control node for aggregation.

To create a table in APS that is distributed across the appliance, the user simply needs to specify the key on which the table is distributed:

CREATE TABLE [dbo].[Orders]
(
  [OrderId] ...
)
WITH (DISTRIBUTION = HASH([OrderId]));

This allows the appliance to split the data and place incoming rows onto the appropriate node in the appliance.
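The replicated case follows the same pattern. A minimal sketch (the table name and columns are illustrative assumptions, not from the article) of a small dimension table copied in full to every node so joins can run locally:

```sql
-- Hypothetical small dimension table, replicated to each node
-- so joins against it need no data movement.
CREATE TABLE [dbo].[Countries]
(
  [CountryCode] CHAR(2)       NOT NULL,
  [CountryName] NVARCHAR(100) NOT NULL
)
WITH (DISTRIBUTION = REPLICATE);
```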

The second component is the Data Movement Service (DMS), which manages the routing of data within the appliance. DMS works in partnership with the SQL Server query processor (which creates the execution plan) to distribute the execution plan to each node. DMS then aggregates the results back to the control node of the appliance, which can perform any final processing before returning the results to the caller. DMS is essentially the traffic cop within APS that enables queries to be executed and data to be moved within the appliance across 2-60 nodes.


With the introduction of clustered columnstore indexes (CCI) in SQL Server, APS is able to take advantage of the performance gains to better process and store data within the appliance. In typical data warehouse workloads, we commonly see very wide table designs that eliminate the need to join tables at scale (to improve performance). The use of clustered columnstore indexes allows SQL Server to store data in columnar format rather than row format. This approach enables queries that don’t utilize all of the columns of a table to retrieve the data from memory or disk more efficiently for processing – increasing performance.
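Combining the two ideas above, a distributed fact table can be stored in columnar form at creation time. This is a sketch only; the table and key names are assumptions for illustration:

```sql
-- Hypothetical wide fact table: hash-distributed across the nodes
-- and stored as a clustered columnstore index.
CREATE TABLE [dbo].[FactSales]
(
  [OrderId] BIGINT         NOT NULL,
  [Amount]  DECIMAL(18, 2) NOT NULL
)
WITH
(
  DISTRIBUTION = HASH([OrderId]),
  CLUSTERED COLUMNSTORE INDEX
);
```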

By combining CCI tables with parallel processing and the fast processing power and storage systems of the appliance, customers are able to improve overall query performance and data compression quite significantly versus a traditional single-server data warehouse. Oftentimes, this means reductions in query execution times from many hours to a few minutes or even seconds. The net result is that companies are able to take advantage of the exhaust of structured or non-structured data in real or near-real time to empower better business decisions.

To learn more about the Microsoft Analytics Platform System, please visit the Microsoft website.

Categories: Database

Oracle Introduces Latest Release of Oracle Tuxedo 12c

Oracle Database News - Tue, 05/20/2014 - 13:05
Enhancements Provide Increased Reliability, Availability, Performance and Scalability for Conventional and Cloud Deployments of Enterprise Applications.
Categories: Database, Vendor

Oracle Introduces Latest Release of Oracle Tuxedo ART 12c

Oracle Database News - Tue, 05/20/2014 - 13:00
Enhancements to Premier Mainframe Application Rehosting Platform Help Speed Up Migration Projects, Deliver Higher Performance and Simplify Adoption.
Categories: Database, Vendor

New Rules for Database Design in the Cloud

Database Journal News - Mon, 05/19/2014 - 17:33

With a database in the cloud, the challenges of remotely hosted storage must be addressed.

Categories: Database

Transparent Data Encryption (TDE) in SQL Server

Database Journal News - Mon, 05/19/2014 - 08:01

There are several ways to implement encryption in SQL Server; Arshad Ali focuses on Transparent Data Encryption (TDE), which was introduced in SQL Server 2008 and is available in later releases.
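At a high level, TDE is enabled through a short chain of keys; the following sketch shows the usual sequence, where the database name, certificate name, and password are placeholders, not values from the article:

```sql
-- Hypothetical TDE setup sketch; all names are placeholders.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE SalesDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE SalesDb SET ENCRYPTION ON;
-- Back up the certificate and its private key: without them,
-- the encrypted database cannot be restored on another server.
```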

Categories: Database

Microsoft SQL Server 2014 Already Powering Production Workloads

Database Journal News - Sun, 05/18/2014 - 16:20

Early bird special? Microsoft's newest database delivers some massive performance and efficiency gains for early customers.

Categories: Database

PostgreSQL 9.4 beta1 on Debian/Ubuntu

PostgreSQL News - Fri, 05/16/2014 - 01:00

Yesterday saw the first beta release of the new PostgreSQL version 9.4. Along with the sources, we uploaded binary packages to Debian experimental, so there are now packages ready to be tested on Debian wheezy, squeeze, testing/unstable, and Ubuntu trusty, saucy, precise, and lucid.

If you are using one of the release distributions of Debian or Ubuntu, add this to your /etc/apt/sources.list.d/pgdg.list to have 9.4 available:

deb http://apt.postgresql.org/pub/repos/apt/ codename-pgdg main 9.4

(replacing codename with your distribution's codename, such as wheezy or trusty)

On Debian jessie and sid, install the packages from experimental.

Happy testing!

Categories: Database, Open Source

Do We Still Need Database Design in the Era of Big Data?

Database Journal News - Thu, 05/15/2014 - 08:01

Many big data application implementations seem to begin with an existing data warehouse, one or more new high-volume data streams, and some specialized hardware and software. The data storage issue is often accommodated by installing a proprietary hardware appliance that can store huge amounts of data while providing extremely fast data access. In these cases, do we really need to worry about database design?

Categories: Database

MySQL May Newsletter - Register and Save for MySQL Central @ OpenWorld 2014

MySQL AB - Thu, 05/15/2014 - 01:59
Welcome to the MySQL Newsletter for May 2014. To bring more consistency and visibility to the technology- and solution-focused programs at Oracle OpenWorld, MySQL Connect is renamed MySQL Central @ OpenWorld. Registration is now open—you can get early bird savings with US$500 off the onsite price until July 18. Register now and take advantage of the exclusive opportunity to hear directly from Oracle's MySQL engineers and learn from your fellow MySQL users in the community.
Categories: Database, Vendor

PostgreSQL 9.4 Beta 1 Released

PostgreSQL News - Thu, 05/15/2014 - 01:00

The PostgreSQL Global Development Group announced that the first beta release of PostgreSQL 9.4, the latest version of the world's leading open source database, is available today. This beta contains previews of all of the features which will be available in version 9.4, and is ready for testing by the worldwide PostgreSQL community. Please download, test, and report what you find.

Major Features

The new major features available for testing in this beta include:

  • JSONB: 9.4 includes the new JSONB "binary JSON" type. This new storage format for document data is higher-performance, and comes with indexing, functions and operators for manipulating JSON data.
  • Replication: The new Data Change Streaming API allows decoding and transformation of the replication stream. This lays the foundation for new replication tools that support high-speed and more flexible replication and scale-out solutions.
  • Materialized Views with "Refresh Concurrently", which permit fast-response background summary reports for complex data.
  • ALTER SYSTEM SET, which enables modifications to postgresql.conf from the SQL command line and from remote clients, easing administration tasks.
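A few of the features above can be sketched in a handful of statements; the table, view, and index names here are illustrative assumptions:

```sql
-- JSONB with a GIN index for fast containment queries:
CREATE TABLE events (payload jsonb);
CREATE INDEX events_payload_idx ON events USING gin (payload);
SELECT * FROM events WHERE payload @> '{"type": "login"}';

-- A materialized view refreshed without blocking readers
-- (REFRESH ... CONCURRENTLY requires a unique index on the view):
CREATE MATERIALIZED VIEW daily_counts AS
    SELECT payload->>'type' AS type, count(*) AS n FROM events GROUP BY 1;
CREATE UNIQUE INDEX daily_counts_type_idx ON daily_counts (type);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_counts;

-- ALTER SYSTEM SET edits postgresql.auto.conf from SQL:
ALTER SYSTEM SET work_mem = '64MB';
```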

These features expand the capabilities of PostgreSQL, and introduce new syntax, APIs, and management interfaces.

Additional Features

There are many other features in the 9.4 beta, all of which need testing by you:

  • Dynamic Background Workers
  • Replication Slots
  • Write Scalability improvements
  • Aggregate performance improvements
  • Reductions in WAL volume
  • GIN indexes up to 50% smaller and faster
  • Updatable security barrier views
  • New array manipulation and table functions
  • Time-delayed standbys
  • MVCC system catalog updates
  • Decrease lock level for some ALTER TABLE commands
  • Backup throttling

There have also been many internal changes in the inner workings of the Write Ahead Log (WAL), GIN indexes, replication, aggregation, and management of the system catalogs. This means we need you to help us find any new bugs that we may have introduced in these areas before the full release of 9.4.

For a full listing of the features in version 9.4 Beta, please see the release notes. Additional descriptions and notes on the new features are available on the 9.4 Features Wiki Page.

Test 9.4 Beta 1 Now

We depend on our community to help test the next version in order to guarantee that it is high-performance and bug-free. Please download PostgreSQL 9.4 Beta 1 and try it with your workloads and applications as soon as you can, and give feedback to the PostgreSQL developers. Features and APIs in Beta 1 will not change substantially before the final release, so it is now safe to start building applications against the new features. More information on how to test and report issues is available online.

Get PostgreSQL 9.4 Beta 1, including binaries and installers for Windows, Linux, and Mac, from our download page.

Full documentation of the new version is available online, and also installs with PostgreSQL.

Categories: Database, Open Source

SQL Server 2014 is Customer Tested!

At Microsoft, we have an important program in place to work closely with our customers to ensure high-quality, real-world testing of Microsoft SQL Server before it hits the market for general availability. Internally, we call this the Technology Adoption Program (TAP). It works like this: an exclusive list of customers are invited to collaborate with us very early in the development lifecycle, and together, we figure out which features they benefit the most from testing and which workload (or scenario) they will use. They test the upgrade process, and then exploit the new feature(s), as applicable. Many of these customers end up moving their test workloads into their production environments up to six months prior to the release of the final version. The program obviously benefits Microsoft because no matter how well we test the product, it is real customer workloads that determine release quality. Our select customers benefit because they are assured that their workloads work well on the upcoming release, and they have the opportunity to work closely with the SQL Server engineering team.

Microsoft SQL Server 2014 is now generally available, and we believe you will enjoy this release for its exciting features: In-Memory OLTP; AlwaysOn enhancements, including new hybrid capabilities; columnstore enhancements; cardinality estimation improvements; and much more. I also believe you will be happy with my favorite feature of all, and that is “reliability.” For an overview of the new features in SQL Server 2014, see the general release announcement.

To give you a better feel for this pre-release customer validation program, I will describe a few examples of customer workloads tested against SQL Server 2014 prior to the release of the product for general availability.

The first customer example is the world’s largest regulated online gaming company. Hundreds of thousands of people visit this company’s website every day, placing more than a million bets on a range of sports, casino games, and poker. SQL Server 2014 enables this customer to scale its applications to 250k requests per second, a 16x increase from the 16k requests per second on a previous version of SQL Server, using the same hardware. In fact, due to performance gains, they were able to reduce the number of servers running SQL Server from eighteen to one, simplifying the overall data infrastructure significantly. The transaction workload is session state of the online user, which not only has to manage tens of thousands of customers, it needs to respond quickly and be available at all times to ensure high customer satisfaction. The session state, written in ASP.NET, uses heavily accessed SQL Server tables that are now defined as “memory-optimized,” which is part of one of the new exciting capabilities of SQL Server 2014, In-Memory OLTP. The performance gain significantly improves the user’s experience and enables a simpler data infrastructure. No application logic changes were required in order to get this significant performance bump. This customer’s experience with SQL Server 2014 performance and reliability was so good, they went into production more than a year before we released the product.

The second customer example is a leading global provider of financial trading services, exchange technology, and market insight. Every year, the customer adds more than 500 terabytes of uncompressed data to its archives and has to perform analytics against this high volume of data. As you can imagine, this high volume of data not only costs a lot to store on disk, it can take a long time to query and maintain. To give you a sense of scale of this customer’s data volume, let me give you a few examples: one of the financial systems processes up to a billion transactions in a single trading day; a different system can process up to a million transactions per second; the data currently collected is nearly two petabytes of historical data. The cost savings on storage of 500+ terabytes of data, now compressed by ~8x using SQL Server 2014 in-memory columnstore for data warehousing indexes, provides an easy justification to upgrade, especially now that the in-memory columnstore is updatable. Significantly faster query execution is achieved due to the reduction in IO, another benefit of the updatable columnstore indexes and compressed data. This customer deployed SQL Server 2014 in a production environment for several months prior to general availability of the product.

My third example is a customer that provides data services to manufacturing and retail companies; the data services enable such companies to better market and sell more product. The closer this data services company can get to providing real-time data services, the more customers their partners can reach and the better customer satisfaction their partners can provide, when using the service. Before SQL Server 2014, the data services company designed their application utilizing cache and other techniques to ensure data (e.g., a product catalog) was readily available for customers. In this scenario, processing speed is important, and even more important than speed is data quality or “freshness,” so if the database can provide faster access to data persisted in the database rather than a copy in a cache, this ensures the data is more accurate and relevant. SQL Server 2014 In-Memory OLTP technology enables them to eliminate the application-tier cache and to scale reads and writes within the database. Data load performance improved 7x–11x. The In-Memory OLTP technology, by eliminating locking/latching, removed any lock contention that they might have previously experienced on read/write options to the database. The performance gains were so compelling, this company went into production with SQL Server 2014 four months prior to general release.

The Technology Adoption Program (TAP) is a great way to help all of us ensure that the final product has a proven high-quality track record when released. These three customers—and as many as a hundred others—have partnered with the SQL Server engineering team to ensure that SQL Server 2014 is well tested and high quality—maybe you can sleep a little better at night knowing you are NOT the first.

We are excited by the release of SQL Server 2014; check it out here.


Mark Souza
General Manager
Microsoft Azure Customer Advisory Team

Categories: Database

Postgres-XL Released: Scale-out PostgreSQL Cluster

PostgreSQL News - Wed, 05/14/2014 - 01:00

TransLattice announces the availability of the release candidate for Postgres-XL, an open source, clustered, parallel SQL database designed for both OLTP and Big Data analytics.

The key capabilities of Postgres-XL include OLTP write scalability, massive parallel processing (MPP), cluster-wide ACID (atomicity, consistency, isolation, durability) properties and multi-tenant security.

Open access to Postgres-XL further extends the open source PostgreSQL ecosystem and offers additional choices to the community.

Postgres-XL is based on StormDB, the commercial product acquired by TransLattice in 2013. TransLattice will continue to actively contribute to Postgres-XL.

More information, including the full press release of the announcement, is available on the Postgres-XL website.

Categories: Database, Open Source

pg_catcheck released

PostgreSQL News - Tue, 05/13/2014 - 01:00

EnterpriseDB is pleased to announce the initial release of pg_catcheck, a catalog integrity checker for PostgreSQL and Postgres Plus Advanced Server.

pg_catcheck is a simple tool for diagnosing system catalog corruption, released under the same license as PostgreSQL itself. pg_catcheck reports logical errors in system catalogs, such as a value in pg_class.relnamespace that is not present in pg_namespace.oid. It is intended to make it easy to determine the nature and extent of system catalog corruption so that you (or your PostgreSQL support provider) can take whatever recovery actions you deem appropriate, such as repairing enough of the damage to take a successful pg_dump of the damaged cluster. pg_catcheck has been tested on Linux and Windows, is expected to work on other operating systems, and supports server versions 8.4 and newer.
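By way of illustration, one of the many cross-references a tool like this automates could be written by hand as an SQL query (this is a sketch of the idea, not pg_catcheck's actual implementation):

```sql
-- Find pg_class rows whose relnamespace points at a
-- nonexistent pg_namespace entry:
SELECT c.oid, c.relname, c.relnamespace
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.oid IS NULL;
```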

For more details, see the source code and README here.

You can also join the project mailing list.

Categories: Database, Open Source

Importing XML, CSV, Text, and MS Excel Files into MySQL

Database Journal News - Mon, 05/12/2014 - 08:01

For MySQL administrators who would rather not write and maintain their own import code, there are tools that can markedly simplify the importing of data from various sources. Rob Gravelle demonstrates how to use the Navicat Database Admin Tool to acquire data from XML, .csv, .txt, and Excel files.

Categories: Database

Microsoft adds forecasting capabilities to Power BI for O365

The PASS Business Analytics Conference – the event where big data meets business analytics – kicked off today in San Jose. Microsoft Technical Fellow Amir Netz and Microsoft Partner Director Kamal Hathi delivered the opening keynote, where they highlighted our customer momentum, showcased business analytics capabilities including a new feature update to Power BI for Office 365, and spoke more broadly on what it takes to build a data culture.

To realize the greatest value from their data, businesses need familiar tools that empower all their employees to make decisions informed by data. By delivering powerful analytics capabilities in Excel and deploying business intelligence solutions in the cloud through Office 365, we are reducing the barriers for companies to analyze, share, and gain insight from data. Our customers have been responding to this approach through rapid adoption of our business analytics solutions – millions of users are utilizing our BI capabilities in Excel, and thousands of companies have activated Power BI for Office 365 tenants.

One example of how our customers are using our business analytics tools is MediaCom, a global advertising agency which is using our technology to optimize performance and “spend” across their media campaigns utilizing data from third party vendors. With Power BI for Office 365, the company now has a unified dashboard for real-time data analysis, can share reports, and can ask natural-language questions that instantly return answers in the form of charts and graphs. MediaCom now anticipates analyses in days versus weeks and productivity gains that can add millions of dollars in value per campaign.

One of the reasons we’re experiencing strong customer adoption is because of our increased pace of delivery and regular service updates. Earlier this week we released updates for the Power Query add-in for Excel and today we are announcing the availability of forecasting capabilities in Power BI for Office 365. With forecasting users can predict their data series forward in interactive charts and reports. With these new Power BI capabilities, users can explore the forecasted results, adjust for seasonality and outliers, view result ranges at different confidence levels, and hindcast to view how the model would have predicted recent results.  

In the keynote we also discussed how we will continue to innovate to enable better user experiences through touch-optimized capabilities for data exploration. We are also working with our customers to make their existing on-premises investments “cloud-ready”, including the ability for customers to run their SQL Server Reporting Services and SQL Server Analysis Services reports and cubes in the cloud against on-premises data. For cross-platform mobile access across all devices we will add new features to make HTML5 the default experience for Power View.

To learn more about the new forecasting capabilities in Power BI for O365, go here. If you’re attending the PASS Business Analytics Conference this week, be sure to stop by the Microsoft booth to see our impressive Power BI demos and attend some of the exciting sessions we’re presenting at the event. 

Categories: Database

Modifying a Primary Key Index in Oracle 11.2.0.x

Database Journal News - Thu, 05/08/2014 - 08:01

Some business decisions may need to be redone, like making a non-unique primary key index unique. In Oracle 12c it's a simple task, but in Oracle 11.2 and earlier it's a bit more involved, though still possible. Read on to see how this can be done.

Categories: Database

Measure the impact of DB2 with BLU Acceleration using IBM InfoSphere Optim Workload Replay

IBM - DB2 and Informix Articles - Thu, 05/08/2014 - 05:00
In this article, learn to use IBM InfoSphere Workload Replay to validate the performance improvement of InfoSphere Optim Query Workload Tuner (OQWT) driven implementation of DB2 with BLU Acceleration on your production databases. The validation is done by measuring the actual runtime change of production workloads that are replayed in an isolated pre-production environment.
Categories: Database

Introducing the AzureCAT PPI Theater at PASS BA

The AzureCAT (Customer Advisory Team) is returning to the world of PASS and joining all of you data-lovin’ folks at the PASS BA conference this week in sunny San Jose!  For those of you who aren’t familiar with AzureCAT, we are a Microsoft organization in the Cloud and Enterprise division that spends 100% of our time engaging with customers to make the most complex scenarios in the Azure and SQL space work like a charm.

This week at PASS BA, you’ll see us hanging out at the Microsoft booth, attending some of the great sessions and you’ll also find us at our own CAT PPI theater on the tradeshow floor.

Below are some bios of the AzureCATs who will be there, along with our planned talks and schedules.  Those of you who know AzureCATs know that’s the least of what we’ll cover.  We’ll also be around for your questions and for impromptu sessions as interest arises.

Come on by our PPI Theater and say hi.


 Olivier Matrat

Hi, I’m Olivier and I am a data professional with more than 18 years of experience in technical, customer-facing, and management capacities at organizations of all sizes; I’m talking start-ups to multinationals. I lead a team of AzureCAT experts helping customers, partners, and the broader community be successful in their Big Data analytics projects on the Azure platform. I’m a Founding Partner member of the PASS Board of Directors, so I have PASS in my blood.  I’m also French and incidentally own the best French bakery in Redmond.  If you aren’t interested in analytics, ask me how to make a great croissant! Looking forward to talking with all of you about social sentiment analytics in my “Tapping the tweets – Social sentiment analytics at Internet scale in Azure” talk.

 Murshed Zaman

Hello!  I’m Murshed, a Senior Program Manager in AzureCAT.  I spend my time helping customers working with SQL Server Parallel Data Warehouse, ColumnStore, Hadoop, Hive and IaaS. Over the last 12 years, I’ve specialized in telecommunications, retail, web analytics and supply chain management, and for over 7 years I’ve worked with Massively Parallel Processing (MPP). Right now my main areas of focus are design, architecture and distributed SQL plans. This year at PASS BA I’ll be sharing my thoughts on Big Data and Big Compute in “Connecting the Dots – Risk Simulation with Big Data, Big Compute and PDW”.  Looking forward to meeting you there!


 Chuck Heinzelman

I’m Chuck.  I am a Senior Program Manager with the Microsoft Azure Customer Advisory Team, and I have been a member of the PASS community since 2000.  My primary focus is on cloud-based analytics, and I’ve also dabbled in matters related to hardware, OS configuration and even application development.  Like a certain snowman from a recent hit animated movie, I’ve been known to like warm hugs – as well as non-fat white chocolate mochas.  Feel free to bring one or both to my Cloud Applications Without Telemetry?  Surely You Can’t Be Serious! or BI in Windows Azure Virtual Machines: From Creation to User Access talks.

 John Sirmon 

Hi, I’m John Sirmon.  I’m a Senior Program Manager on the AzureCAT team. I’ve been working with SQL Server for over 10 years and I’m loving the BI space.  In my 9-5 life I specialize in Analysis Services performance tuning, Reporting Services, SharePoint integration, troubleshooting Kerberos Authentication and PowerPivot for SharePoint.  In my spare time I am the lead singer/guitarist of a local Rock Band in Charlotte, NC. 

 Chantel Morin

I am a member of the Microsoft Azure Customer Advisory Team (AzureCAT). For the past 4 years I’ve been the assistant to Mark Souza, our General Manager. In the last year I’ve shifted my focus more towards my passion for community and events. I’m also ramping up to assist with customer onboarding into Azure TAP programs. I have the best team and manager in all the land and when I’m not enjoying work for pay I like to travel to music festivals, ride ATVs and spend time with my two pitbulls, Max and Tucker.   You can find me at the Microsoft Information Desk during the event.

Sessions at the CAT PPI Theater

Connecting the Dots – Risk Simulation with Big Data, Big Compute and PDW 
Thursday May 8th at 12:20pm, Friday May 9th at 9:20am

Microsoft Azure offers you a platform that allows you to migrate your big compute and big data needs to the cloud, while Parallel Data Warehouse (PDW) can be used on-premises as a query engine for data that you store both on-premises and in Microsoft Azure Storage.  Through the use of Microsoft Azure HPC clusters, HDInsight clusters and PDW, we’ll discuss risk simulations and data aggregations which include hybrid on-premises/cloud scenarios, and demonstrate using these technologies over data generated during the session.

Tapping the tweets – Social sentiment analytics at Internet scale in Azure
Thursday May 8th at 12:50pm, Friday May 9th at 12:50pm

Twitter and other social media channels have become an integral part of many organizations’ marketing strategies. Microsoft Azure provides a ubiquitous platform to acquire, monitor, process, store and analyze those all-important brand loyalty and CSAT signals. Through the use of a mix of first and third party tools as well as open source solutions, we will illustrate how to infer actionable insights from the ambient social noise at scale.
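Inferring sentiment from a social stream starts with scoring individual messages; production pipelines use trained models or third-party services, but the core idea can be shown with a toy lexicon-based scorer (the word lists and sample tweets below are invented for illustration, not part of any session content):

```python
# Hypothetical sentiment lexicons; a real system would use a trained
# model or a sentiment analysis service rather than fixed word lists.
POSITIVE = {"love", "great", "awesome", "fast", "reliable"}
NEGATIVE = {"hate", "slow", "broken", "awful", "buggy"}

def sentiment(tweet: str) -> int:
    """Score a tweet: +1 for each positive word, -1 for each negative word."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "I love how fast the new dashboard is",
    "the sync feature is slow and broken",
]
scores = [sentiment(t) for t in tweets]
```

At internet scale, the same per-message scoring step would run inside a distributed pipeline (for example over an HDInsight cluster), with the aggregated scores feeding the dashboards and alerts the session describes.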

Cloud Applications Without Telemetry?  Surely You Can’t Be Serious!
Thursday May 8th at 9:20am, Friday May 9th at 12:20pm

Analytics isn’t limited to line of business data – your applications can (and should) generate high quality data that can be used to determine things like:

  • Am I meeting my application SLAs?
  • What is my general customer experience like?
  • Do I need to scale up or down based on demand?

In the traditional on-premises world, you probably didn’t spend a lot of time thinking about application monitoring and telemetry – you were in full control of the entire environment.  If things weren’t right, whether from a connectivity or a performance perspective, you could easily look at the systems to see what was going on.

Fast-forward to the cloud-based world.  You are now running on servers that you don’t control, using services that are shared with other consumers, and you don’t necessarily have access to all of the data you are used to having.  That is why you need to add telemetry to your applications and services.

The AzureCAT has published a framework for gathering telemetry that is based on many customer engagements.  We’ll spend time talking about what data to gather, how to gather it, and how to consume it once you have it.
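The core idea behind application telemetry, emitting structured events you can later aggregate to answer the SLA and customer-experience questions above, can be sketched roughly as follows (the field names and SLA threshold are illustrative assumptions, not the schema of the published AzureCAT framework):

```python
import json
import time

SLA_MS = 500  # assumed latency target; tune per application

def record_request(operation, start, end, success):
    """Emit one structured telemetry event as a JSON line.
    In production this would go to a log pipeline or a table store
    for later aggregation, not to stdout."""
    latency_ms = (end - start) * 1000
    event = {
        "operation": operation,
        "latency_ms": round(latency_ms, 1),
        "success": success,
        "within_sla": latency_ms <= SLA_MS,
        "timestamp": end,
    }
    print(json.dumps(event))
    return event

start = time.time()
# ... do the actual request work here ...
evt = record_request("checkout", start, time.time(), success=True)
```

Aggregating events like these over time is what lets you answer whether you are meeting SLAs and whether demand justifies scaling up or down.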

BI in Windows Azure Virtual Machines: From Creation to User Access (room 230A)
Conference Breakout: Thursday May 8th at 4pm

Running BI workloads in Windows Azure Virtual Machines can present a whole new world of challenges. While the tools are largely the same between the IaaS and on-premises implementations, your solutions for authentication and authorization could be significantly different in the cloud. 

We’ll start out by talking about how to use the standard gallery images to run BI workloads in IaaS, and then discuss building custom scaled-out BI infrastructures in Azure Virtual Machines. From there, we will dive into the different authentication and authorization options you might want to take advantage of – options that will work both in the cloud and on-premises, but are especially useful in a cloud-based environment.

And potentially as a special treat …

Details to come but there’s a good chance you’ll see John Sirmon from the AzureCAT team at the theater.  This man LITERALLY wrestles alligators as well as analytics.  (No alligators will be harmed in the making of this PASS BA talk)

Categories: Database

More Microsoft SQL on Azure Update Details Emerge

Database Journal News - Tue, 05/06/2014 - 22:25

Microsoft releases more details on its upgraded cloud-based database offering as it gears up for an official launch.

Categories: Database