Speed and cost are the primary concerns for any application running in a datacenter, whether on-premises or in the cloud. With CPU clock speeds approaching their limits, there is a pressing need for alternatives to accelerate big-data computations. The emergence of GPUs and FPGAs gives application developers options for off-loading compute-intensive operations. Unfortunately, most existing applications and tools were designed around CPUs, which limits the speed-ups achievable through selective off-loading: file, disk, and network I/O erode the runtime gains. Tools and applications need to be redesigned to optimally partition workloads and data communication among heterogeneous devices (CPUs, GPUs, FPGAs), taking I/O bottlenecks into account from the start. BigZetta has identified this problem and is redesigning popular big-data tools to take advantage of the heterogeneous hardware available in modern datacenters.