Necessity and Sufficiency Within and Without Spark
James Gorlick
Senior Software Engineer


Tuesday, April 19, 2016
11:15 AM - 12:00 PM

Level:  Intermediate

Consumers, developers, technology decision makers, and technology advocates are prone to announce via white papers, proposals, and Twitter that "TechnologyX is dead; long live TechnologyY!" A firehose of such disruptive positions was posted as Spark emerged, leaving MapReduce in the dust and developers and their data scrambling to migrate. By applying sufficient effort and rigor to understand the solution space within and without Spark (or sourcing that rigor to specialists in the field), this talk aims to help technologists select the necessary and sufficient technology for the use case at hand, focusing development efforts on solution-specific applications that remain reusable even after the next technology disruption.

This session will cover the following:

  • Spark's place in the NoSQL analytics market
  • How Spark performs for use cases that Hadoop was designed to solve
  • Use cases that Spark was specifically designed to solve
  • Interoperability with other Business Intelligence tools
  • Pushdown and other means of NoSQL vendor support for Spark
  • Design and coding for smooth migration from Hadoop to Spark
  • Walkthrough of example time-series analysis using Spark and NoSQL

James Gorlick is a Senior Software Engineer at Basho. He is an application developer with lead and project management experience in a wide variety of business applications. He has a particular interest in straight-through processing, removing data duplication, and increasing information availability throughout the organization in a secure, consistent manner. His latest work is on the Basho Data Platform, where he leads the coordination of Redis and Riak KV.
