TIBCO DataSynapse GridServer Bursts Into Azure


On May 9th, TIBCO announced its collaboration with Microsoft to bring large-scale compute workloads to Azure, so I wanted to add some weight to this announcement and share what we have learned.

Over the last 12 months, Microsoft and TIBCO have been engaged with a number of Financial Services customers evaluating TIBCO DataSynapse GridServer in Azure. All of these PoCs involved deploying new Windows or Linux HPC clusters into Azure, or extending existing ones, and evaluating performance. Some of these are moving into Production and some are not, but there were some recurring themes that I would like to share, so this blog includes some best practices covering networking, VM images, and storage.

Within the Financial Services sector, most customers require a secure connection into Azure before they can proceed with any testing. Whether this is through a VPN or Microsoft’s ExpressRoute, discussing the benefits of private versus public peering upfront is an important step to ensure there are no limitations down the road. This is worth emphasizing when considering your IP address ranges: a small range may be suitable for a pilot, but it will limit your ability to move into Production if the scale required is significant.
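To make the address-range point concrete, here is a hypothetical Azure CLI sketch; the resource group, names, and ranges are illustrative placeholders, not details from any of the engagements described:

```shell
# Hypothetical sketch: create a virtual network with a deliberately large
# address space so a pilot can grow into a production-scale grid.
# Resource group, names, and prefixes are placeholders.
az network vnet create \
  --resource-group hpc-poc-rg \
  --name gridserver-vnet \
  --address-prefix 10.10.0.0/16 \
  --subnet-name compute-subnet \
  --subnet-prefix 10.10.0.0/20
```

A /20 subnet like this leaves room for roughly 4,000 compute nodes, whereas a /24 would cap the subnet at about 250 usable addresses (Azure reserves five per subnet), which may be fine for a PoC but not for a production grid.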

It is also important to consider the use of custom images versus Azure Marketplace images and why you might choose one over the other. We found that in most of our engagements, the customer chose to use the opportunity to migrate away from the custom image and adopt the latest Azure Marketplace image. This still allows the customer to apply its own security patches and inject the approved version of DataSynapse through a VM extension or script, while also enabling a faster spin-up time. While a custom image does appease the security and compliance departments, it does require you to configure your architecture slightly differently.
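As a sketch of the extension approach (all names and the script URL below are assumptions for illustration, not details from these engagements), the Custom Script Extension for Linux can run an approved install script on every instance of a scale set:

```shell
# Hypothetical sketch: inject a DataSynapse install script into a scale set
# via the Custom Script Extension (Linux). Names and the script URL are
# placeholders; your security team would host the approved script internally.
az vmss extension set \
  --resource-group hpc-poc-rg \
  --vmss-name gridserver-vmss \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris": ["https://example.com/install-datasynapse.sh"], "commandToExecute": "bash install-datasynapse.sh"}'
```

Because the extension runs on each instance as it is provisioned, the Marketplace image stays untouched and the approved software version is applied consistently at scale-out time.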

We recommend the use of Virtual Machine Scale Sets (VMSS), as this allows the customer to create hundreds of identical VMs in minutes and is perfect for HPC requirements. At no additional cost, VMSS provides easy deployment and management options that allow you to manage VMs as a group. With Managed Disks, currently in preview, this also extends to custom images.
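A minimal sketch of creating such a scale set from a Marketplace image follows; the resource names, VM size, and instance count are placeholder assumptions, not figures from the PoCs:

```shell
# Hypothetical sketch: create a 100-instance scale set from a Marketplace
# image in one command. All names, sizes, and counts are placeholders.
az vmss create \
  --resource-group hpc-poc-rg \
  --name gridserver-vmss \
  --image UbuntuLTS \
  --vm-sku Standard_DS3_v2 \
  --instance-count 100 \
  --upgrade-policy-mode automatic \
  --admin-username azureuser \
  --generate-ssh-keys
```

Scaling the grid up or down afterwards is then a single `az vmss scale --new-capacity N` call against the same scale set.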

If your application is I/O intensive, we also recommend Premium Storage for consistently high performance and low latency. This was not necessary in all cases, but with the ability to provision a persistent disk and configure its size and performance characteristics, the benefits often outweighed the costs.
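For I/O-intensive nodes, a Premium Storage data disk can be attached to every instance in the scale set. This is a hypothetical sketch with placeholder names and sizes; note that the VM size must support Premium Storage (e.g. the DS-series mentioned below):

```shell
# Hypothetical sketch: attach a 512 GB Premium SSD data disk to each
# instance of an existing scale set. Names and sizes are placeholders.
az vmss disk attach \
  --resource-group hpc-poc-rg \
  --vmss-name gridserver-vmss \
  --size-gb 512 \
  --sku Premium_LRS
```

The disk size determines its performance tier, which is how you "configure its size and performance characteristics" as described above.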

As soon as a secure connection is established and the DataSynapse packages have been copied across, the customer is ready to run its jobs. Across a number of PoCs, we found a range of performance depending on the VM sizes and core counts used, but in almost all cases, the D-Series, and particularly Dv2, excelled. There were two consistent messages coming back. Firstly, the homogeneous nature of Azure lends itself to consistent and predictable performance, which is important for the business and its service level agreements. Secondly, Azure is quick. This often resulted in significantly fewer cores being used compared to the on-premises grid, yet it still outperformed it by some margin.

We’re excited to see customers start to adopt DataSynapse GridServer for their critical compute workloads and take advantage of the scale and elasticity of the Azure cloud. The partnership between TIBCO and Microsoft continues to grow and we are looking forward to the next steps in this development. For further information on financial services and grid computing in Azure, please visit this page.

Matthew Thomson is a Senior Program Manager for the Microsoft Azure Big Compute team and is responsible for engineering and business strategy for high performance computing solutions running on the Microsoft Azure Cloud within the Financial Services sector. Over the past ten years, Matthew has been working on developing, managing and winning complex product, services and solutions engagements within Financial Services. He has taken his experience from the field into Program Management with the Azure Big Compute engineering team in Seattle where he is now based. Working closely with Partners and customers, Matthew is focused on advancing the HPC services and solutions provided within Azure for both hybrid and cloud scenarios.