February 11, 2019

FPGA H/W IPs and acceleration

Filed under: Expertise — admin @ 12:02 pm

For a general software application developer, working with heterogeneous devices can be very intimidating. It requires not only an understanding of device internals but also knowing how to synthesize your application onto the target hardware. The application needs to be written in a way that is amenable to hardware synthesis: optimized software will not always result in optimized hardware. At BigZetta, we have a team of hardware experts who know how to take an application and convert it to optimal underlying hardware. Our experts understand FPGAs, High Level Synthesis and backend tools in intimate detail. Whether your application needs to run on the cloud or on locally installed hardware, we have the expertise to build and deploy for both choices.

HW/SW co-design

Filed under: Expertise — admin @ 12:02 pm

Modern data centers provide a choice of heterogeneous hardware to pick from (with varying cost-speed trade-offs). However, most existing applications have not been designed to benefit from this heterogeneity, and there is a widespread lack of knowledge on how to build and deploy for heterogeneous compute resources. At BigZetta, we have developed in-depth expertise in taking an application and modifying it to work on heterogeneous hardware in a fast, robust and scalable way. Designing such an application requires not only knowledge of heterogeneous devices but also careful partitioning and mapping of the system to the right resources.

Big Data tools

Filed under: Expertise — admin @ 12:01 pm

We understand what it takes to solve a problem involving terabytes of data optimally. Careful design of clusters and setting of tuning parameters are essential to extract the maximum performance from a big data application. Our expertise with these tools enables us to help our customers deploy and run popular big data tools in an optimal and cost-effective way. Whether it is setting the right configuration for your query engine (such as Hive) or enabling LLAP functionality for BI queries, we can optimize query performance across a wide range of inputs. The choice of the right backend (MR, Tez, Spark, etc.) can dramatically impact query performance, and a sub-optimal choice can lead to degraded performance and increased operational costs.
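As an illustration, the backend and LLAP choices mentioned above can be switched at the Hive session level with a few settings. These are standard Hive properties shown as common examples only; the right values depend on your Hive version and workload, not on any BigZetta-specific tuning:

```sql
-- Choose the execution backend for this session (mr, tez, or spark).
SET hive.execution.engine=tez;

-- Enable LLAP execution for low-latency BI queries.
SET hive.llap.execution.mode=all;

-- Example tuning knob: memory (in MB) for Tez containers; workload-dependent.
SET hive.tez.container.size=4096;
```

The same properties can be set cluster-wide in hive-site.xml when a configuration has been validated across representative queries.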

Abhishek Ranjan

Filed under: About — admin @ 10:47 am

Abhishek Ranjan is Co-founder of BigZetta Systems, with close to 20 years of experience in software product development. He is utilizing his skills (technical, sales, management, etc.) to build a high-tech company focused on products for accelerating big data computations in modern data centers with heterogeneous compute resources. A strong engineering professional with an MS in Computer Engineering from Northwestern University and a Bachelor’s in Electrical Engineering from the Indian Institute of Technology, Kanpur, he has held major leadership positions at large companies (Xilinx Inc., Mentor Graphics, Siemens) as well as successful start-ups (Hier Design Inc., Calypto Design Systems). He has been instrumental in managing successful acquisitions of start-ups and the assimilation of their products into big companies. He has great expertise in assembling high-quality teams and delivering best-in-class technologies, has authored dozens of research papers in the field of Design Automation, and holds multiple US patents in the same domain.

LinkedIn profile

February 7, 2019

Hardware IPs

Filed under: Product — admin @ 7:12 am

The design of a hardware IP is critical to the acceleration an application can achieve. At BigZetta, we have a library of FPGA-accelerated IPs for sorting, merging, compression/decompression, etc., which can be plugged into an existing application. These IPs can be integrated into C/C++ applications as well as Java applications (through a JNI layer). Our carefully designed sort/merge/codec algorithms can handle any amount of data and perform computations 10-30 times faster than a CPU. The IPs are extensible to handle any data size and complexity. They have been integrated into the most popular big data tools, and their performance has been evaluated on machines running on-prem and on the cloud. The speed (and cost savings) are independent of the choice of machine, as the IPs provide similar speed-ups across a variety of machines and FPGAs.
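As a sketch of how such an IP might be wired into a Java application through a JNI layer: the hypothetical wrapper below declares a native entry point and falls back to a CPU sort when the accelerator library is absent. Every name here (FpgaSort, bzsort, fpgaSort) is an illustrative assumption, not BigZetta’s actual API:

```java
// Hypothetical JNI wrapper for an FPGA sort IP. The class name, the
// library name ("bzsort") and the native signature are assumptions
// made for illustration only.
public class FpgaSort {

    private static boolean nativeAvailable = false;

    static {
        try {
            System.loadLibrary("bzsort"); // hypothetical native library
            nativeAvailable = true;
        } catch (UnsatisfiedLinkError e) {
            // Accelerator library not installed; use the CPU fallback.
        }
    }

    // Hypothetical native entry point backed by the FPGA accelerator.
    private static native long[] fpgaSort(long[] keys);

    /** Sorts keys on the FPGA when available, otherwise on the CPU. */
    public static long[] sort(long[] keys) {
        if (nativeAvailable) {
            return fpgaSort(keys);
        }
        long[] out = keys.clone();
        java.util.Arrays.sort(out); // CPU fallback path
        return out;
    }
}
```

The fallback pattern keeps the application runnable on machines without the FPGA board, which also makes the integration easy to test in plain JVM environments.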

Home

Filed under: Home — admin @ 7:03 am

Speed and cost are the top-most concerns for any application running in data centers (on-prem or cloud). With CPU speeds reaching their limits, there is a pressing need to look for alternatives to accelerate big-data computations. The strong emergence of GPUs and FPGAs gives application developers choices for off-loading runtime-intensive operations. Unfortunately, most existing applications and tools were designed around CPUs. This limits the speed-ups an application can achieve through selective off-loading: file, disk and network I/O restrict and cut the runtime gains. The need is to redesign tools and applications to optimally partition workloads and data communication among heterogeneous devices (CPUs, GPUs, FPGAs), taking I/O bottlenecks into consideration from the start. BigZetta has identified this problem and is re-designing popular big-data tools to benefit from the heterogeneous choices available in modern data centers.

Query Accelerator

Filed under: Product — admin @ 6:52 am

Almost all big data jobs these days are launched as queries in one of the higher-level query languages (Hive, Pig, Impala, Presto, etc.). The turn-around times of these queries are critical to data analysts across all domains. Unfortunately, throwing more CPUs at the problem simply does not scale beyond a point. BigZetta is harnessing the power of FPGAs to re-design the most popular query processing tools so that the heavy computational load is performed by hardware accelerators. Using dedicated hardware to run the kernels of queries can provide orders-of-magnitude faster turn-around times, thereby enabling the use of big data tools for low-latency analytical processing jobs.

Hadoop Accelerator

Filed under: Product — admin @ 6:12 am

Built to remove runtime bottlenecks in Hadoop’s Map-Reduce framework, this patent-pending product can provide up to 10x improvements in the runtimes of Map-Reduce jobs, which translates to about 80% TCO savings on AWS. For any job requiring large volumes of data transfer, the sort/merge/codec phases of Map-Reduce can limit the speed-ups Hadoop can provide; beyond a point, adding more nodes to the cluster does not provide any more scaling. BigZetta’s Hadoop Accelerator offloads these runtime bottlenecks to FPGAs (installed on the cloud or on-prem), thereby cutting down the turnaround time of an MR job significantly.
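As a back-of-the-envelope illustration of how a 10x runtime improvement can line up with roughly 80% TCO savings: if an FPGA-equipped instance is assumed to cost about twice the hourly rate of a comparable CPU-only instance (an assumed ratio for illustration, not a quoted AWS price), then

```latex
\frac{\text{accelerated cost}}{\text{baseline cost}}
  = \underbrace{2}_{\text{price ratio}}
    \times \underbrace{\tfrac{1}{10}}_{\text{runtime ratio}}
  = 0.2
```

i.e. the accelerated job costs about 20% of the baseline, an approximately 80% saving. Actual savings depend on the real instance pricing and the achieved speed-up for a given workload.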

Read Xilinx’s blog …              Try on AWS …

