Apache Spark


14 Mar 2017

Making data useful and ubiquitous

Data warehouses have evolved over the years. With Hadoop reaching maturity and Spark serving as a powerful engine for a variety of workloads, we are now at a point where we can truly democratize the consumption of data to power insights. Savvy, data-driven companies combine the power of automated data analysis with human insight. In order to get everyone in an organization to leverage the data, data...

Read More


08 Mar 2017

SNAP – SparklineData Nextgen Analytics Platform

Many of you who have followed us over the past year or two know that we have been heads down making life easier for those who struggle with the challenges of ad-hoc analysis on modern data lakes. We have seen the frustrations of Tableau users, and in the words of one of those users, "We have a Ferrari in Tableau but using it...

Read More


14 Feb 2017

Fast aggregations/metrics on Spark with Tableau

Ad-hoc queries with sub-second response times are critical for enterprises. Vast amounts of data exist in Hadoop or AWS data lakes, and consuming this data in a scalable, fast manner with existing BI tools like Tableau is a challenge. Transactions at the lowest grain (hourly, daily, etc.) are stored in fact tables. In order to achieve an acceptable level of performance, companies resort to writing extracts or summary tables...

Read More
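As a rough illustration of the summary-table workaround described in the excerpt above, here is a minimal Spark sketch that rolls an hourly-grain fact table up to daily grain. The table path, column names, and S3 locations are hypothetical, not taken from the post.

```scala
// Hypothetical sketch: pre-aggregating an hourly-grain fact table into a daily
// summary table, the kind of extract teams often hand-build for BI performance.
// Paths and column names (event_ts, ad_id, impressions, revenue) are assumptions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailySummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-summary-sketch")
      .getOrCreate()

    // Hourly-grain fact table stored as Parquet in the data lake (assumed path).
    val facts = spark.read.parquet("s3a://my-bucket/warehouse/events")

    // Roll transactions up from hourly to daily grain per dimension key.
    val daily = facts
      .groupBy(to_date(col("event_ts")).as("event_date"), col("ad_id"))
      .agg(
        sum("impressions").as("impressions"),
        sum("revenue").as("revenue")
      )

    // Persist the summary so a BI tool like Tableau can query the smaller table.
    daily.write
      .mode("overwrite")
      .parquet("s3a://my-bucket/warehouse/events_daily_summary")

    spark.stop()
  }
}
```

BI tools then point at the much smaller summary table rather than the raw fact table, which is the trade-off the post goes on to discuss.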


07 Feb 2017

Advanced Tableau on Spark/Hadoop

Most benchmarks of data warehouse optimizations and SQL engines stop with simple examples. The real world uses business intelligence tools, where the use cases are not the single-user, single-SQL scenarios of a simulated benchmark. Modern BI on Big Data should satisfy three key requirements. It should be able to respond interactively, in seconds, as a user drills down into data in Hadoop/Spark. While BI is not about retrieving...

Read More


10 Jul 2016

Terabyte scale Data Lake analytics on S3, Hadoop with Spark

In our recent work with customers, there is one constant: the need to make sense of terabytes of fact and time-series data that lands in the data lake (physically S3 or HDFS). Here is a typical process before we get engaged. The first step in this process is organizing data in the data lake. A typical fact table for our customers, such as events of all advertising exposures...

Read More
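To make the "organizing data in the data lake" step concrete, here is a minimal, assumed sketch of landing advertising-exposure events as date-partitioned Parquet on S3 with Spark. The paths and column names are illustrative only and not from the original post.

```scala
// Hypothetical sketch: organizing a raw advertising-exposure fact table in the
// data lake, partitioned by event date so downstream queries can prune partitions.
// Paths and column names are assumptions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrganizeDataLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("organize-datalake-sketch")
      .getOrCreate()

    // Raw exposure events landed in S3 (or HDFS) as JSON.
    val raw = spark.read.json("s3a://my-bucket/landing/ad_exposures")

    // Derive a date column and write columnar, date-partitioned Parquet so that
    // time-range queries only scan the partitions they need.
    raw
      .withColumn("event_date", to_date(col("event_ts")))
      .write
      .mode("append")
      .partitionBy("event_date")
      .parquet("s3a://my-bucket/datalake/fact_ad_exposures")

    spark.stop()
  }
}
```

Partitioning by event date lets Spark skip irrelevant partitions for time-range queries, which is usually the first win on terabyte-scale fact tables.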



Page 4 of 5