• Hadoop Accelerator

    Built to remove runtime bottlenecks in Hadoop’s Map-Reduce framework, this patent-pending product can provide up to 10x improvements in the runtimes of Map-Reduce jobs, which translates to roughly 80% TCO savings on AWS. For any job that transfers large volumes of data, the sort/merge/codec phases of Map-Reduce can limit the speed-ups Hadoop can provide; beyond a point, adding more nodes to the cluster yields no further scaling. BigZetta’s Hadoop Accelerator offloads these runtime bottlenecks to FPGAs (installed in the cloud or on-prem), thereby cutting the turnaround time of an MR job significantly.

    Read Xilinx’s blog …              Try on AWS …

  • Query Accelerator

    Almost all big data jobs these days are launched as queries in one of the higher-level abstraction query languages (Hive, Pig, Impala, Presto, etc.). The turn-around times of these queries are critical to data analysts across all domains. Unfortunately, throwing more CPUs at the problem simply does not scale beyond a point. BigZetta is harnessing the power of FPGAs to re-design the most popular query-processing tools so that the heavy computation is performed by hardware accelerators. Using dedicated hardware to run query kernels can provide orders-of-magnitude faster turn-around times, thereby enabling the use of big data tools for low-latency analytical-processing jobs.

  • Hardware IPs

    The design of a hardware IP is critical to the acceleration an application can achieve. At BigZetta, we have a library of FPGA-accelerated IPs for sorting, merging, compression/decompression, etc., which can be plugged into an existing application. These IPs can be integrated into C/C++ as well as Java applications (through a JNI layer). Our carefully designed sort/merge/codec algorithms can handle any amount of data and perform computations 10-30 times faster than a CPU. The IPs are extensible to handle any data size and complexity. They have been integrated into the most popular big data tools, and their performance has been evaluated on machines running on-prem and in the cloud. The speed-ups (and cost savings) are largely independent of the choice of machine, as the IPs deliver similar gains across a variety of machines and FPGAs.
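As a rough illustration of the JNI integration path mentioned above, a Java application could wrap an FPGA sort IP behind a thin class like the one below. All names here (the `FpgaSorter` class, the `fpgaSort` native method, and the `bzsort` library) are hypothetical placeholders, not BigZetta’s actual API; the sketch also falls back to a CPU sort when the native library is absent.

```java
import java.util.Arrays;

// Hypothetical JNI wrapper for an FPGA-accelerated sort IP.
// Class, method, and library names are illustrative only.
public class FpgaSorter {
    private static boolean nativeAvailable;

    static {
        try {
            // Hypothetical native library exposing the FPGA sort IP.
            System.loadLibrary("bzsort");
            nativeAvailable = true;
        } catch (UnsatisfiedLinkError e) {
            // No accelerator library on this machine: use the CPU path.
            nativeAvailable = false;
        }
    }

    // Implemented on the C/C++ side; hands the buffer to the FPGA sort IP.
    private static native void fpgaSort(long[] data);

    /** Sorts in place, using the FPGA IP when its library is loaded. */
    public static void sort(long[] data) {
        if (nativeAvailable) {
            fpgaSort(data);
        } else {
            Arrays.sort(data); // software fallback
        }
    }
}
```

On the native side, the matching C/C++ entry point would follow the standard JNI naming convention (`Java_FpgaSorter_fpgaSort`), copy or pin the array, and stream it through the accelerator; the Java caller is unchanged whether the work runs on an FPGA or a CPU.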