Don’t use an elephant for your garden work


While learning the new Tez engine and query vectorization concepts in Hadoop 2.0, I came to know that query vectorization is claimed to deliver a 3x speedup and to consume less CPU time on an actual Hadoop cluster. The Hortonworks tutorial uses sample sensor data in a CSV file that is imported into Hive, and a sample query is then used to demonstrate the performance.

The intention of this post is neither to explain the Tez engine and query vectorization nor Hive queries. Let us first get familiar with the problem I worked on before getting to the purpose of this post. :)

One sample CSV file called ‘HVAC.csv’ contains 8,000 records of temperature information for different buildings on different days. Part of the file content:

Date,Time,TargetTemp,ActualTemp,System,SystemAge,BuildingID
6/1/13,0:00:01,66,58,13,20,4
6/2/13,1:00:01,69,68,3,20,17
6/3/13,2:00:01,70,73,17,20,18
6/4/13,3:00:01,67,63,2,23,15
6/5/13,4:00:01,68,74,16,9,3

In Hive, the following configurations switch the execution engine (from the default MapReduce to Tez) and confirm that query vectorization is enabled.

hive> set hive.execution.engine=mr;
hive> set hive.execution.engine=tez;
hive> set hive.vectorized.execution.enabled;
      hive.vectorized.execution.enabled=true

I executed the following query in my sandbox; surprisingly, it took 48 seconds for a ‘group by’ and ‘count’ on 8,000 records, as shown below:

select date, count(buildingid) from hvac_orc group by date;

This query groups the sensor data by date and counts the number of buildings for each date. It produces 30 rows, as shown below:

Status: Finished successfully
OK
6/1/13  267
6/10/13 267
6/11/13 267
...
Time taken: 48.261 seconds, Fetched: 30 row(s)

Then I planned to write a simple program without the whole MapReduce castle, since it is just 8,000 records. I created an F# script that reads the CSV (note that I did not use any CSV type provider) and uses the Deedle exploratory data library (LINQ could also do the job). I achieved the same result, as shown below.

module ft

#I "..\packages\Deedle.1.0.0"
#load "Deedle.fsx"
open System
open System.IO
open System.Globalization
open System.Diagnostics
open Deedle

type hvac = { Date : DateTime; BuildingID : int }

let execute =
  let stopwatch = Stopwatch.StartNew()

  let enus = new CultureInfo("en-US")
  let fs = new StreamReader("..\ml\SensorFiles\HVAC.csv")
  // Split into lines; RemoveEmptyEntries drops the empty strings that splitting
  // on '\r' and '\n' individually would otherwise leave between lines.
  let lines = fs.ReadToEnd()
              |> (fun s -> s.Split("\r\n".ToCharArray(), StringSplitOptions.RemoveEmptyEntries))

  // Skip the header row, parse each record, and load it into a Deedle frame.
  let ohvac = lines.[1..]
              |> Array.map (fun s -> s.Split(",".ToCharArray()))
              |> Array.map (fun s -> { Date = DateTime.Parse(s.[0], enus); BuildingID = int(s.[6]) })
              |> Frame.ofRecords

  // Group the rows by date and count the BuildingID values per group --
  // the same 'group by'/'count' that the Hive query performs.
  let result = ohvac.GroupRowsBy("Date")
               |> Frame.getNumericCols
               |> Series.mapValues (Stats.levelCount fst)
               |> Frame.ofColumns

  stopwatch.Stop()
  (stopwatch.ElapsedMilliseconds, result)

In FSI:

> #load "finalTouch.fsx";;
> open ft;;
> ft.execute;;
val it : int64 * Deedle.Frame =
(83L,
BuildingID
01-06-2013 12:00:00 AM -> 267
02-06-2013 12:00:00 AM -> 267
03-06-2013 12:00:00 AM -> 267
04-06-2013 12:00:00 AM -> 267
...

The whole operation completed within 83 milliseconds. You may argue that I am comparing apples with oranges. No! My intention is to understand when MapReduce is the savior. The moral of the above exercise: be cautious and analyze well before moving your data processing mechanisms into MapReduce clusters.

Elephants are very effective at labor requiring hard slogging and heavy lifting. Not for your garden!! :)

Note that the sample CSV file from Hortonworks is clearly for training purposes. This blog post just takes it as an example of the maximum data a small or medium-sized application might generate over a period. The above script may not scale and will not perform well at volumes much beyond the above numbers. Hence, this is not an anti-MapReduce proposal.

Unpacking Apache Storm on a developer box


Evaluating and learning Apache Storm had been a long-pending task. The Storm infrastructure needs Nimbus and ZooKeeper.

My intention is to install Storm on my regular single Ubuntu box instead of in any cluster environment or VMs; the reason is that Apache Storm is essentially just a jar file.

ZooKeeper was already installed on my machine in single-node mode. I already had the other prerequisites: Java 6 or greater and Python 2.6.6 or greater.

I simply extracted apache-storm-0.9.1-incubating.tar.gz into my app directory. We need to play around with two of its directories:

apache-storm-0.9.1
|__ bin
|__ conf

Update the following settings in conf/storm.yaml.

storm.zookeeper.servers:
    - "127.0.0.1"
storm.zookeeper.port: 9191
storm.local.dir: "/mnt/storm"
nimbus.host: "127.0.0.1"
supervisor.slots.ports:
   - 6700
   - 6701
   - 6702
   - 6703

Open four terminal windows, switch to the super user, and run the following commands, one per terminal, from the bin folder.

./storm dev-zookeeper
./storm nimbus
./storm supervisor
./storm ui

The last command serves the Storm UI portal at localhost:8080; open it in a browser.
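To push something through the setup, you can submit one of the storm-starter example topologies from the same bin folder (assuming you have built the storm-starter jar yourself; the jar file name below is illustrative):

./storm jar storm-starter-0.9.1-jar-with-dependencies.jar storm.starter.WordCountTopology wordcount

The submitted topology should then show up in the Storm UI.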

Book Review – Heroku: Up and Running by Neil & Richard – O’Reilly


 


Neil is a Ruby developer who has deployed many applications on Heroku.  His book “Heroku: Up and Running” is at the early-release stage; most of the content is ready, hence this review.

In keeping with the simplicity of the Heroku ecosystem, this book has only 100 pages across 8 chapters.

As a cloud developer, I found the first chapter, “What is Heroku”, a bit boring.  The second chapter, “How Heroku Works”, is pretty straightforward, simple, and nicely written; a conceptual/flow diagram is the only thing missing, so the reader has to get through a full load of text.  The third chapter explains performance and scalability with “dynos” and the Postgres database.  It is worth reading.

Chapter 4 covers Heroku regions – is it really worth that much space?

Chapter 5 is fully dedicated to Postgres, which is helpful for the relevant audience.  It is a good resource.

Chapter 6 starts with deployment best practices, followed by a “HOWTO”.  It is well written.

Chapters 7 and 8 are really much needed for anyone who wants to deploy their app on Heroku.  Neil has put in good effort on these two chapters to differentiate them strongly from the Heroku manual. :)

The book would be even better if it provided some simple end-to-end samples rather than just syntax examples.

This book gives me mixed feelings, but I recommend it for chapters 6, 7, and 8.

You can buy this book at http://shop.oreilly.com/product/0636920027409.do

 

Book Review – See What I Mean – O’Reilly


For techies, this book might not be the regular lunch.

It has been a long-standing wish of mine to express my learning in mind-friendly ways like comics and videos.  A couple of years ago, I used Pixton.com and wrote a blog post http://udooz.net/blog/2010/12/wcf-sts-federation-claims/.  After that, I simply left that direction due to a lack of comic plots.  It was quite a surprise when I saw this book at O’Reilly, which teaches you how to draw comics yourself with paper and pencil.

The good part is that this book does not bore you with comprehensive text; instead it uses nice comics.

“Properties of Comics” explains the comic formats and the four properties of comics, followed by a chapter that explains face theory (!) and its properties.  “Writing the Story” is one of my favorite chapters, with a crash course in script writing.

The last five chapters explain layout, drawing and refining, and application in the real world.

I recommend this book to seasoned technical bloggers and presenters.  Convey your thoughts in a more (mind-)friendly way.

Buy this book at http://shop.oreilly.com/product/9781933820279.do.

Adopting Event Sourcing in SaaS using Windows Azure


People with an enterprise application development background and a strong attachment to the relational world feel ill at ease when event sourcing is suggested, and there are some reasons for that.  This blog post points out the candidate places in SaaS development on Windows Azure where event sourcing will be useful.  Before that, let us understand what event sourcing is.

Event Sourcing

Let us take an order management SaaS system as an example.

Assume that Order is the main entity (in the DDD world, this is further specialized as an “aggregate root”, or simply an “aggregate”).  Whenever a request to make an order enters this system through the service layer, the lifecycle of an Order instance begins.  It starts with OrderQuoted and ends with OrderShipped/OrderReturned.
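To make the lifecycle concrete, here is a minimal F# sketch of these domain events as a discriminated union (the event names come from this post; the payload fields are assumptions for illustration):

open System

// Order lifecycle modeled as a discriminated union of domain events.
// Payload fields are illustrative assumptions, not a prescribed schema.
type OrderItem = { ProductId : string; Quantity : int }

type OrderEvent =
  | OrderQuoted      of occurredOn : DateTime
  | OrderBooked      of items : OrderItem list * occurredOn : DateTime
  | OrderItemChanged of items : OrderItem list * occurredOn : DateTime
  | OrderShipped     of occurredOn : DateTime
  | OrderDelivered   of occurredOn : DateTime
  | OrderReturned    of reason : string * occurredOn : DateTime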

In the typical relational world, we would persist the order instance as:

PKey  Order ID       Status        ModifiedBy  ModifiedOn
1     OD30513080515  OrderBooked   Sheik       2013-06-16 12:35 PM
2     OD20616150941  OrderShipped  John        2013-05-22 10:00 AM
..    ..             ..            ..          ..

If the order OD30513080515 is delivered, then we simply update record #1 as:

1     OD30513080515  OrderDelivered  Milton      2013-06-18 02:10 PM

The event sourcing approach enforces persisting the domain object using an immutable schema.  In this case, the data store will look like the following, with the delivery appended as a new row (with a new DbId) rather than updating row 1:

DbId  Order ID       Status          ModifiedBy  ModifiedOn
1     OD30513080515  OrderBooked     Sheik       2013-06-16 12:35 PM
2     OD20616150941  OrderShipped    John        2013-05-22 10:00 AM
..    ..             ..              ..          ..
n     OD30513080515  OrderDelivered  Milton      2013-06-18 02:10 PM

You are now under the impression that event sourcing is nothing but an audit log, and that if this approach is taken in the mainstream database we will end up with underperforming queries and unnecessary database size.  Let us understand the benefits of event sourcing before discussing these concerns:

  • Business sometimes needs to track the changes, along with relevant information, that happened to an entity during its lifecycle.  For example, if the system allows the customer to add or remove items in the order before it ships, “OrderItemChanged” plays an important role in recalculating pricing by tracking back through the previous “OrderItemChanged” events.
  • With the immutable persistence model, we get a fault-tolerance mechanism: at any point in time we can reconstruct the whole system, or a single entity up to a particular point, by replaying the events that happened on it (see the sketch after this list).
  • Data analytics
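That replay is, at its core, just a left fold over an entity’s ordered event stream.  Continuing the F# sketch above (OrderState and the status strings are illustrative assumptions):

// Current state of an order, rebuilt purely from its events.
type OrderState = { Items : OrderItem list; Status : string }

// Applying one event to the current state is a pure function...
let apply (state : OrderState) (event : OrderEvent) =
  match event with
  | OrderQuoted _               -> { state with Status = "Quoted" }
  | OrderBooked (items, _)      -> { Items = items; Status = "Booked" }
  | OrderItemChanged (items, _) -> { state with Items = items }
  | OrderShipped _              -> { state with Status = "Shipped" }
  | OrderDelivered _            -> { state with Status = "Delivered" }
  | OrderReturned _             -> { state with Status = "Returned" }

// ...so rebuilding an entity at any point in time is a fold over its events.
let replay (events : OrderEvent list) =
  events |> List.fold apply { Items = []; Status = "New" }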

The above points keep using the term “event”.  A business system is nothing but commands (technically Create, Update, and Delete operations) performed on business entities, and events are raised as the yield of these operations.  For example, making an order in the above SaaS system will create an OrderBooked event with the following facts:

{
  "name" : "orderBooked",
  "entity" : "Order",
  "occurredOn" : "2013-06-16 12:35PM",
  "orderDetail" : {
    "orderId" : "OD30513080515",
    "orderItems" : [{ "productId" : "PR1234", "quantity" : 1 }]
  }
}

In the distributed domain-driven design approach, the above domain event is published by the Order aggregate; the service layer receives the event and publishes it to the event handler directly or via an event publisher.  One of the main subscribers could be an event store subscriber that persists the event into the event store.  The event can also be published to an enterprise service bus so that it can be subscribed to and handled by a wide variety of other subscribers.  The schema for an event store will most likely look like the following:
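As a minimal sketch, such a schema could be represented by a record like this (the field names are illustrative):

open System

// One row per event, append-only; Payload carries the serialized event body.
type StoredEvent =
  { EventId     : Guid       // every event is identifiable
    AggregateId : string     // e.g. the order id
    Name        : string     // e.g. "orderBooked"
    OccurredOn  : DateTime
    Payload     : string }   // the serialized facts, e.g. the JSON above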

The various implementations of event sourcing use different terminologies and slightly different schemas.  For example, some mainstream event sourcing implementations store the whole aggregate object itself on every change.

Hence, event sourcing has the following characteristics:

  • Every event should state a fact about what happened, and it should be atomic
  • The data should be “immutable”
  • Every event should be “identifiable”

In the SaaS World

By this time, you understand that event sourcing is not “one size fits all”, particularly in the enterprise world.  Based on the SaaS system and the organization’s ecosystem, you can suggest different methodologies:

  • Use the event store as the mainstream data store, and use query-friendly view stores such as document or column-oriented databases to handle all queries from client systems.  This is essentially the CQRS approach (see the projection sketch after this list).
  • In enterprises where you feel relational is the right candidate for the mainstream database, use the event store as a replacement for the audit log, if the system and regulations permit.  This helps address the use cases where tracking past events is a business requirement.
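As a rough sketch of that read side, continuing the F# example above: each write to the event store also updates a denormalized view row that client queries hit, so reads never touch the event store.  The OrderView type and its fields are assumptions for illustration.

// A denormalized, query-friendly view row, e.g. kept in a document database.
type OrderView = { OrderId : string; Status : string; ItemCount : int }

// Project one order's event stream into its view row by replaying it.
let project (orderId : string) (events : OrderEvent list) =
  let state = replay events
  { OrderId = orderId
    Status = state.Status
    ItemCount = state.Items |> List.sumBy (fun i -> i.Quantity) }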

Right storage mechanism in Windows Azure

When you are building applications on Windows Azure, you have three official storage options as of now.  Let us see them as a whole:

1. Blob Storage – Pros: flexible and simple to implement the above-mentioned schema.  Cons: largely none.
2. Table Storage – Pros: read friendly.  Cons: unfriendly for writes when you take a serialization approach for the event body other than simple JSON.
3. Windows Azure SQL – Pros: based on your relational schema, this could be both read and write friendly.  Cons: lacks scalability.
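As a rough sketch of the blob option (using the Microsoft.WindowsAzure.Storage client library; the container name and blob naming convention are assumptions), each event can be written as its own blob under its aggregate and never overwritten:

open System
open System.IO
open System.Text
open Microsoft.WindowsAzure.Storage

// Append one serialized event as its own blob. Blobs are never updated,
// which matches the "immutable" characteristic listed above.
let persistEvent (connectionString : string) (aggregateId : string)
                 (occurredOn : DateTime) (eventJson : string) =
  let client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient()
  let container = client.GetContainerReference("events")
  container.CreateIfNotExists() |> ignore
  // e.g. "OD30513080515/20130616123500-<guid>.json" keeps events ordered per aggregate
  let name = sprintf "%s/%s-%s.json" aggregateId
                     (occurredOn.ToString("yyyyMMddHHmmss"))
                     (Guid.NewGuid().ToString("N"))
  let blob = container.GetBlockBlobReference(name)
  use stream = new MemoryStream(Encoding.UTF8.GetBytes(eventJson))
  blob.UploadFromStream(stream)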

Cost

Summary

Event sourcing is more than just an audit log, and it can be well adopted into a SaaS system.  You should take the right approach on how to use it in your system.  Windows Azure blob storage is one of the nicer options as of now, since there is no native document or column-oriented database support in Windows Azure.

A few event sourcing frameworks in .NET:

https://github.com/NEventStore/NEventStore

https://github.com/elliotritchie/NES
