animesh kumar

Running water never grows stale. Keep flowing!

Posts Tagged ‘Java’

JUnit Primer



I recently gave a presentation on JUnit and thought it might be useful to other people as well. So, I’m sharing the PPT here.


Written by Animesh

August 2, 2010 at 2:20 pm

Posted in Technology


Discovering Java Annotations



DZone link of this article: http://www.dzone.com/links/r/discovering_java_annotations.html

Annotations are proving to be widely popular these days. Many annotation-based frameworks (Spring, JPA and Tapestry, to name a few) are seeing the light of day, and even small-scale projects use annotation-based meta-programming for better separation of concerns. Annotations are wonderful, and they are here to stay.

However, in order to use them, we must be able to locate the classes annotated with the annotations that concern us. How do you do that?

First, you find the resources where annotated classes might live. You can look into the "java.class.path" system property (also known as CLASSPATH), or use a ClassLoader or ServletContext to build a list of resources. Then you scan through each resource and check for the annotations that concern you.
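
For instance, a minimal sketch (plain JDK, nothing Annovention-specific) of turning the "java.class.path" property into a list of resource URLs might look like this:

import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class ClasspathResources {

    /** Splits the "java.class.path" property into URLs (jars and directories). */
    public static List<URL> findClasspathUrls() throws Exception {
        List<URL> urls = new ArrayList<URL>();
        String classpath = System.getProperty("java.class.path");
        for (String entry : classpath.split(File.pathSeparator)) {
            urls.add(new File(entry).toURI().toURL());
        }
        return urls;
    }

    public static void main(String[] args) throws Exception {
        for (URL url : findClasspathUrls()) {
            System.out.println(url);
        }
    }
}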

The easiest way to scan a resource is to load it through a ClassLoader and use the Java Reflection API to look for a specific annotation. However, this approach will only find annotations that are visible at runtime, and loading every resource into memory will unnecessarily consume Java heap. Alternatively, you can use byte-processing libraries like ASM or Javassist. These libraries work on the class bytes directly, so they can see even annotations that are not retained at runtime, and since they don't load your resources into memory as classes, they have a low footprint.
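
To make the bytecode approach concrete, here is a hedged sketch of reading class-level annotations from a class file with Javassist, without ever loading the class (essentially what Annovention automates for you; the class-file path in main() is just a placeholder):

import java.io.DataInputStream;
import java.io.FileInputStream;

import javassist.bytecode.AnnotationsAttribute;
import javassist.bytecode.ClassFile;
import javassist.bytecode.annotation.Annotation;

public class BytecodeAnnotationScanner {

    /** Prints the runtime-visible and runtime-invisible class-level annotations of a class file. */
    public static void scan(String classFilePath) throws Exception {
        DataInputStream in = new DataInputStream(new FileInputStream(classFilePath));
        try {
            ClassFile classFile = new ClassFile(in);
            print(classFile, (AnnotationsAttribute) classFile.getAttribute(AnnotationsAttribute.visibleTag));
            print(classFile, (AnnotationsAttribute) classFile.getAttribute(AnnotationsAttribute.invisibleTag));
        } finally {
            in.close();
        }
    }

    private static void print(ClassFile classFile, AnnotationsAttribute attribute) {
        if (attribute == null) {
            return; // no annotations with this retention
        }
        for (Annotation annotation : attribute.getAnnotations()) {
            System.out.println(classFile.getName() + " -> @" + annotation.getTypeName());
        }
    }

    public static void main(String[] args) throws Exception {
        scan("target/classes/com/example/MyEntity.class"); // placeholder path
    }
}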

Well, to help you here, we have written a small library, Annovention, using Javassist. The idea behind Annovention is to help you quickly locate annotated classes, fields or methods. Once you have the pertinent classes, you can run your domain logic or do whatever you fancy.

Download Annovention here: http://code.google.com/p/annovention/

How does it work?

Annovention works on a subscribe-and-listen pattern.

  1. You create annotation discovery listeners.
    1. Annovention supports Class, Field and Method level annotation discovery, and each of these listeners must implement the ClassAnnotationDiscoveryListener, FieldAnnotationDiscoveryListener or MethodAnnotationDiscoveryListener interface respectively.
    2. Each listener has a method supportedAnnotations() that returns an Array of Annotation names. For example, to listen to @Entity and @EntityListeners annotations, the array will be:
      new String[] {Entity.class.getName(), EntityListeners.class.getName()}
      
    3. Each listener has a discovered() method that is called each time a relevant annotation is located; it receives the class name (plus the field or method name, where applicable) and the discovered annotation name.
    4. Please note that your listeners receive only the names of classes, fields and methods. You must use Java Reflection to materialize them. Remember Class.forName()? (See the small example after this list.)
  2. Now, there is a Discoverer class that does all the discoveries. It needs resources and listeners. Annovention comes with a ClasspathDiscoverer implementation, which uses the "java.class.path" system property to build an array of resources to scan. You need to register your listeners with the Discoverer in order to get notified.
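
As a small illustration of the reflection point above, here is a hedged sketch of turning discovered names back into reflective objects; the class, field and method names below are purely hypothetical:

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class DiscoveredNameResolver {

    public static void main(String[] args) throws Exception {
        // Names as they might arrive in a discovered() callback (hypothetical values).
        String className = "com.example.Author";
        String fieldName = "emailAddress";
        String methodName = "onSave";

        // Materialize the class, field and method via reflection.
        Class<?> clazz = Class.forName(className);
        Field field = clazz.getDeclaredField(fieldName);
        Method method = clazz.getDeclaredMethod(methodName);

        System.out.println(clazz.getName() + " / " + field.getName() + " / " + method.getName());
    }
}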

Sample Use

public class SampleAnnotationDiscoverer {

	public static void main (String args []) {
		// Get a classpath discoverer instance
		Discoverer discoverer = new ClasspathDiscoverer();

		// Register class annotation listener
		discoverer.addAnnotationListener(new MyClassAnnotationListener());
		// Register field annotation listener
		discoverer.addAnnotationListener(new MyFieldAnnotationListener());
		// Register method annotation listener
		discoverer.addAnnotationListener(new MyMethodAnnotationListener());

		// Fire it
		discoverer.discover();
	}

	/** Dummy ClassAnnotation listener */
	static class MyClassAnnotationListener implements ClassAnnotationDiscoveryListener {
		private static Log log =
			LogFactory.getLog(MyClassAnnotationListener.class);

		@Override
		public void discovered(String clazz, String annotation) {
			log.info("Discovered Class(" + clazz + ") " +
					"with Annotation(" + annotation + ")");
		}

		@Override
		public String[] supportedAnnotations() {
			// Listens for @Entity and @EntityListeners annotations.
			return new String[] {
					Entity.class.getName(),
					EntityListeners.class.getName()};
		}
	}

	/** Dummy FieldAnnotation listener */
	static class MyFieldAnnotationListener implements FieldAnnotationDiscoveryListener {
		private static Log log =
			LogFactory.getLog(MyFieldAnnotationListener.class);

		@Override
		public void discovered(String clazz, String field, String annotation) {
			log.info("Discovered Field(" + clazz + "." + field + ") " +
					"with Annotation(" + annotation + ")");
		}

		@Override
		public String[] supportedAnnotations() {
			// Listens for @Id and @Column annotations.
			return new String[] {
					Id.class.getName(),
					Column.class.getName()};
		}
	}

	/** Dummy MethodAnnotation listener */
	static class MyMethodAnnotationListener implements MethodAnnotationDiscoveryListener {

		private static Log log =
			LogFactory.getLog(MyMethodAnnotationListener.class);

		@Override
		public void discovered(String clazz, String method, String annotation) {
			log.info("Discovered Method(" + clazz + "." + method + ") " +
					"with Annotation(" + annotation + ")");
		}

		@Override
		public String[] supportedAnnotations() {
			// Listens for @PrePersist, @PreRemove and @PostPersist annotations.
			return new String[] {
					PrePersist.class.getName(),
					PostPersist.class.getName(),
					PreRemove.class.getName()};
		}
	}
}

How do you extend it?

You just have to play with Discoverer and Filter classes. The general flow is:

  1. The Discoverer invokes the findResources() method, which returns an array of URLs, and then
  2. it iterates through each URL and applies the filter, and
  3. then scans the resource for annotations, and
  4. if a relevant annotation is found, the corresponding listeners are notified.

You can write your own Filter implementation or use the FilterImpl class that comes with Annovention. Then extend the Discoverer class, implement the findResources() method in whatever way you see fit, and just invoke the discover() method. Easy?
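
For example, if you wanted findResources() to scan a single directory instead of the whole classpath, a standalone helper like the one below could gather the URLs for you. The directory path is a placeholder, and how you plug this into your Discoverer subclass depends on Annovention's exact findResources() signature, so treat it as a sketch only:

import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class DirectoryResourceCollector {

    /** Recursively collects the URLs of all .class files under the given directory. */
    public static URL[] collectClassFileUrls(File directory) throws Exception {
        List<URL> urls = new ArrayList<URL>();
        collect(directory, urls);
        return urls.toArray(new URL[urls.size()]);
    }

    private static void collect(File file, List<URL> urls) throws Exception {
        if (file.isDirectory()) {
            for (File child : file.listFiles()) {
                collect(child, urls);
            }
        } else if (file.getName().endsWith(".class")) {
            urls.add(file.toURI().toURL());
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder directory; return this array from your own findResources().
        for (URL url : collectClassFileUrls(new File("target/classes"))) {
            System.out.println(url);
        }
    }
}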

Written by Animesh

July 26, 2010 at 6:08 pm

Posted in Technology


Kundera: knight in shining armor!



The idea behind Kundera is to make working with Cassandra drop-dead simple, and fun. Kundera does not reinvent the wheel by building yet another client library; rather, it leverages existing libraries and builds, on top of them, a wrap-around API that lets developers do away with unnecessary boilerplate code and write neater, cleaner code that reduces complexity and improves quality. And above all, it improves productivity.

Download Kundera here: http://code.google.com/p/kundera/

Note: Kundera is now JPA 1.0 compatible, and there are some ensuing changes. You should read about it here: https://anismiles.wordpress.com/2010/07/14/kundera-now-jpa-1-0-compatible/

Objectives:

  • To completely remove unnecessary details, such as Column lists, SuperColumn lists, byte arrays, Data encoding etc.
  • To be able to work directly with Domain models just with the help of annotations
  • To eliminate “code plumbing”, so as to keep the flow of data processing clear and obvious
  • To completely separate out Cassandra and its concerns from application-level logic, for robust application development
  • To include the latest Cassandra developments without breaking anything, anywhere in the business layer

Cassandra Data Models

At the very basic level, Cassandra has the Column and the SuperColumn to hold your data. A Column is a tuple with a name, a value and a timestamp, while a SuperColumn is a Column of Columns. Columns are stored in a ColumnFamily, and SuperColumns in a SuperColumnFamily. The most important thing to note is that Cassandra is not your old relational database; it is a flat system. No joins, no foreign keys, nothing. Everything you store here is 100% de-normalized.

Read more details here: https://anismiles.wordpress.com/2010/05/18/cassandra-data-model/

Using Kundera

Kundera defines a range of annotations to describe your entity objects. Kundera is now JPA 1.0 compatible: it builds its own annotations on top of the JPA annotations to suit its needs. Here are the basic rules:

General Rules

  • Entity classes must have a default no-argument constructor.
  • Entity classes must be annotated with @Entity (the earlier @CassandraEntity annotation has been dropped in favor of JPA's @Entity)
  • Entity classes for ColumnFamily must be annotated with @ColumnFamily(“column-family-name”)
  • Entity classes for SuperColumnFamily must be annotated with @SuperColumnFamily(“super-column-family-name”)
  • Each entity must have a field annotated with @Id
    • The @Id field must be of String type. (Since you can define sorting strategies in Cassandra's storage-conf file, keeping @Id as a String makes life simpler, as you will see later.)
    • There must be one and only one @Id per entity.

Note: Kundera works only at the field (property) level for now, so all method-level annotations are ignored. Idea: keep life simple. 🙂

ColumnFamily Rules

  1. You must define the name of the column family in @ColumnFamily, like @ColumnFamily("Authors"). Kundera will link this entity class with the "Authors" column family.
  2. Entities annotated with @ColumnFamily are scanned for properties annotated with @Column.
  3. Each such field will qualify to become a Cassandra Column with
    1. Name: name of the property.
    2. Value: value of the property
  4. By default, the name of the column will be the name of the property. However, if you fancy changing the name, you can override it like @Column(name="fancy-name"):
    @Column (name="email")          // override column-name
    String emailAddress;
    
  5. Properties of type Integer, String, Long and Date are supported natively; everything else will be serialized before being saved and de-serialized when read. Serialization has some inherent limitations; that is why Kundera discourages you from using custom objects as Cassandra Column properties. However, you are free to do as you want. Just read up on the serialization tweaks before insanity reigns over you. 😉
  6. Kundera also supports Collection and Map properties. However, there are a few things you must take care of:
    • You must initialize any Collection or Map properties, like
      List<String> list = new ArrayList<String>();
      Set<String> set = new HashSet<String>();
      Map<String, String> map = new HashMap<String, String>();
      
    • Type parameters follow the same rule, described in #5.
    • If you don’t explicitly define the type parameter, elements will be serialized/de-serialized before saving and retrieving.
    • There is no guarantee that the Collection element order will be maintained.
    • Collection and Map properties will each create as many columns as the number of elements they hold.
    • Collection will break into Columns  like,
      1. Name~0: Element at index 0
      2. Name~1: Element at index 1 and so on.

      Name follows rule #4.

    • Map will break into Columns like,
      1. Name~key1: Element at key1
      2. Name~key2: Element at key2 and so on.
    • Again, name follows rule #4.

SuperColumnFamily Rules

  1. You must define the name of the super column family in @SuperColumnFamily, like @SuperColumnFamily("Posts"). Kundera will link this entity class with the "Posts" super column family.
  2. Entities annotated with @SuperColumnFamily are scanned for properties carrying 2 annotations:
    1. @Column and
    2. @SuperColumn
  3. Only properties annotated with both are picked up, and each such property qualifies to become a Column that falls under a SuperColumn.
  4. You can define the name of the column like you did for ColumnFamily.
  5. However, you must define the name of the SuperColumn a particular Column falls under, like @SuperColumn(column = "super-column-name"):
    @Column
    @SuperColumn(column = "post")  // column 'title' will fall under super-column 'post'
    String title;
    
  6. The rest is the same as above.

Up and running in 5 minutes

Let’s learn by example. We will create a simple Blog application. We will have Posts, Tags and Authors.

Cassandra data model for “Authors” might be like,

ColumnFamily: Authors = {
    "Eric Long":{		// row 1
        "email":{
            name:"email",
            value:"eric (at) long.com"
        },
        "country":{
            name:"country",
            value:"United Kingdom"
        },
        "registeredSince":{
            name:"registeredSince",
            value:"01/01/2002"
        }
    },
    ...
}

And data model for “Posts” might be like,

SuperColumnFamily: Posts = {
	"cats-are-funny-animals":{		// row 1
		"post" :{		// super-column
			"title":{
				"Cats are funny animals"
			},
			"body":{
				"Bla bla bla… long story…"
			},
			"author":{
				"Ronald Mathies"
			},
			"created":{
				"01/02/2010"
			}
		},
		"tags" :{
			"0":{
				"cats"
			},
			"1":{
				"animals"
			}
		}
	},
	// row 2
}

Create a new Cassandra Keyspace: “Blog”

<Keyspace Name="Blog">
<!-- family definitions -->

<!-- Necessary for Cassandra -->
<ReplicaPlacementStrategy>org.apache.cassandra.locator.RackUnawareStrategy</ReplicaPlacementStrategy>
<ReplicationFactor>1</ReplicationFactor>
<EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
</Keyspace>

Create 2 column families: SuperColumnFamily for “Posts” and ColumnFamily for “Authors”

<Keyspace Name="Blog">
<!-- family definitions -->
<ColumnFamily CompareWith="UTF8Type" Name="Authors"/>
<ColumnFamily ColumnType="Super" CompareWith="UTF8Type" CompareSubcolumnsWith="UTF8Type" Name="Posts"/>

<!-- Necessary for Cassandra -->
<ReplicaPlacementStrategy>org.apache.cassandra.locator.RackUnawareStrategy</ReplicaPlacementStrategy>
<ReplicationFactor>1</ReplicationFactor>
<EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
</Keyspace>

Create entity classes

Author.java

@Entity			// makes it an entity class
@ColumnFamily ("Authors")	// assign ColumnFamily type and name
public class Author {

    @Id						// row identifier
    String username;

    @Column (name="email")	// override column-name
    String emailAddress;

    @Column
    String country;

    @Column (name="registeredSince")
    Date registered;

    String name;

    public Author () {		// must have a default constructor
    }

    ... // getters/setters etc.
}

Post.java

@Entity					// makes it an entity class
@SuperColumnFamily("Posts")			// assign column-family type and name
public class Post {

	@Id								// row identifier
	String permalink;

	@Column
	@SuperColumn(column = "post")	// column 'title' will be stored under super-column 'post'
	String title;

	@Column
	@SuperColumn(column = "post")
	String body;

	@Column
	@SuperColumn(column = "post")
	String author;

	@Column
	@SuperColumn(column = "post")
	Date created;

	@Column
	@SuperColumn(column = "tags")	// column 'tag' will be stored under super-column 'tags'
	List<String> tags = new ArrayList<String>();

	public Post () {		// must have a default constructor
	}

       ... // getters/setters etc.
}

Note the annotations and match them against the rules described above. Also note how the "tags" property has been initialized; this is very important, because Kundera uses Java Reflection to read and populate the entity classes. Anyway, once we have the entity classes in place…

Instantiate EntityManager

Kundera now works as a JPA provider, and here is how you can instantiate an EntityManager: https://anismiles.wordpress.com/2010/07/14/kundera-now-jpa-1-0-compatible/#entity-manager

EntityManager manager = new EntityManagerImpl();
manager.setClient(new PelopsClient());
manager.getClient().setKeySpace("Blog");

And that's about it. You are ready to rock-and-roll like a football. Sorry, I just got swayed by the FIFA fever. 😉

Supported Operations

Kundera supports JPA EntityManager based operations, along with JPA queries. Read more here: https://anismiles.wordpress.com/2010/07/14/kundera-now-jpa-1-0-compatible/#entity-operations


Save entities

Post post = ... // new post object
try {
    manager.save(post);
} catch (IllegalEntityException e) {
    e.printStackTrace();
} catch (EntityNotFoundException e) {
    e.printStackTrace();
}

If the entity is already saved in the Cassandra database, it will be updated; otherwise a new entity will be saved.

Load entity

try {
    Post post = manager.load(Post.class, key); // key is the identifier, in our case "permalink"
} catch (IllegalEntityException e) {
    e.printStackTrace();
} catch (EntityNotFoundException e) {
    e.printStackTrace();
}

Load multiple entities

try {
    List posts = manager.load(Post.class, key1, key2, key3...); // keys are the identifiers, "permalink"
} catch (IllegalEntityException e) {
    e.printStackTrace();
} catch (EntityNotFoundException e) {
    e.printStackTrace();
}

Delete entity

try {
    manager.delete(Post.class, key); // key is the identifier, "permalink"
} catch (IllegalEntityException e) {
    e.printStackTrace();
} catch (EntityNotFoundException e) {
    e.printStackTrace();
}


Wow! Was it fun? Was it easy? I'm sure it was. Keep an eye on Kundera; we will be rolling out more features sooner than you imagine, like:

  1. Transaction support
  2. More fine-grained methods for better control
  3. Lazy-Loading/Selective-Loading of entity properties and many more.

Written by Animesh

June 30, 2010 at 7:12 pm

Posted in Technology


ZooKeeper – Primer (contd.)



>> continued from here.

Watches

The power of ZooKeeper comes from Watches. Watches allow clients to get notified when a znode changes in some way. Watches are set by operations and are triggered by ZooKeeper when something changes. For example, a watch can be placed on a znode, and it will be triggered when the znode's data changes or the znode itself gets deleted.

The catch here is that Watches are triggered only ONCE. This might look pretty restrictive at first, but it helps keep ZooKeeper simple, and if our client insists on more notifications, it can always re-register the watch.
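
For example, here is a minimal sketch of that re-registration idiom (the znode path is just a placeholder):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SelfRenewingWatcher implements Watcher {

    private final ZooKeeper zooKeeper;
    private final String path;

    public SelfRenewingWatcher(ZooKeeper zooKeeper, String path) throws Exception {
        this.zooKeeper = zooKeeper;
        this.path = path;
        zooKeeper.exists(path, this); // set the initial watch
    }

    @Override
    public void process(WatchedEvent event) {
        System.out.println("Got event: " + event.getType() + " on " + event.getPath());
        try {
            // A watch fires only once, so re-register to keep receiving notifications.
            zooKeeper.exists(path, this);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}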

There are 9 basic operations in ZooKeeper.

  • create (write): Creates a znode (the parent must already exist)
  • delete (write): Deletes a znode (it must not have any children)
  • exists (read): Tests whether a znode exists and retrieves its metadata
  • getACL / setACL (read / write): Gets or sets the ACL for a znode
  • getChildren (read): Gets the list of children of a znode
  • getData / setData (read / write): Gets or sets the data associated with a znode
  • sync: Synchronizes a client's view of a znode

The rule is: Watches are set by read operations, and triggered by write operations. Isn't that intuitive?

As stated in the previous post, znodes maintain version numbers for data changes, ACL changes, and timestamps, to allow cache validations and coordinated updates. Every time you want to change a znode's data or its ACL information, you need to provide the correct version, and after a successful operation the version number gets incremented. You can relate this to Hibernate's optimistic locking, where every row carries a version to resolve concurrent-modification conflicts. Anyway, we are talking about Watches here.

Read operations like exists, getChildren and getData set Watches, and these Watches are triggered by write operations like create, delete and setData. An important point to note is that ACL operations do not trigger or register any Watches, though they do bump version numbers. When a Watch is triggered, a watch event is generated and passed to the Watcher, which can do whatever it wishes with it. Let us now find out when and how the various watch events are triggered.

  1. A Watch set with the exists operation gets triggered when the znode is created, deleted, or its data is updated.
  2. A Watch set with getData gets triggered when the znode is deleted or its data is updated.
  3. A Watch set with getChildren gets triggered when a child is added or removed, or when the znode itself gets deleted.


ZooKeeper Watches operate on two layers. You can specify a default Watcher while instantiating the ZooKeeper object, and it will be notified about ZooKeeper's state changes. The same Watcher also gets notified of znode changes if you haven't specified an explicit Watcher during a read operation.

Let’s now try to connect to ZooKeeper.

import java.io.IOException;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.ZooKeeper.States;

public class ZkConnector {

    // ZooKeeper Object
    ZooKeeper zooKeeper;

    // To block any operation until ZooKeeper is connected. It's initialized
    // with count 1, that is, ZooKeeper connect state.
    java.util.concurrent.CountDownLatch connectedSignal = new java.util.concurrent.CountDownLatch(1);

    /**
     * Connects to ZooKeeper servers specified by hosts.
     *
     * @param hosts
     * @throws IOException
     * @throws InterruptedException
     */
    public void connect(String hosts) throws IOException, InterruptedException {
	zooKeeper = new ZooKeeper(
                hosts, // ZooKeeper service hosts
                5000,  // Session timeout in milliseconds
		// Anonymous Watcher Object
		new Watcher() {
        	    @Override
        	    public void process(WatchedEvent event) {
        		// release lock if ZooKeeper is connected.
        		if (event.getState() == KeeperState.SyncConnected) {
        		    connectedSignal.countDown();
        		}
        	    }
        	}
	);
	connectedSignal.await();
    }

    /**
     * Closes connection with ZooKeeper
     *
     * @throws InterruptedException
     */
    public void close() throws InterruptedException {
	zooKeeper.close();
    }

    /**
     * @return the zooKeeper
     */
    public ZooKeeper getZooKeeper() {
        // Verify ZooKeeper's validity
        if (null == zooKeeper || !zooKeeper.getState().equals(States.CONNECTED)){
	    throw new IllegalStateException ("ZooKeeper is not connected.");
        }
        return zooKeeper;
    }

}

The above class connects to the ZooKeeper service. When the ZooKeeper instance is created (in the connect() method), it starts a thread to connect to the service and returns immediately. However, since the constructor only accepts a Watcher to be notified about ZooKeeper state changes, we must wait for the connection to be established before running any operation on the ZooKeeper object.

In this example, I have used the CountDownLatch class, which blocks the thread after the ZooKeeper constructor has returned and holds it until its count is reduced to zero. When the client's state changes to connected, our anonymous Watcher receives a call to its process() method with a WatchedEvent object; it verifies the client's state and reduces the CountDownLatch counter by one. And our object is ready to use.

This diagram captures ZooKeeper’s state transitions:

Okay. Since we are connected to ZooKeeper service, let’s try to do something meaningful.

Let's say we have two processes, pA and pB. Process pA picks up a chunk of data and performs some operations on it, while process pB waits for pA to finish and then sends an email notifying about the data changes.

Simple, huh? Sure, it could be solved using Java's concurrent package, but we will do it using ZooKeeper for obvious gains like scalability. Here are the steps:

  1. Define a znode say, /game_is_over
    final String myPath = "/game_is_over";
    
  2. Get ZooKeeper object
    ZkConnector zkc = new ZkConnector();
    zkc.connect("localhost");
    ZooKeeper zk = zkc.getZooKeeper();
    
  3. pB registers a Watch with ZooKeeper service with exists operation. This Watch will receive a call once the znode becomes available.
    zk.exists(myPath, new Watcher() {		// Anonymous Watcher
    	@Override
    	public void process(WatchedEvent event) {
    	   // check for event type NodeCreated
       	   boolean isNodeCreated = event.getType().equals(EventType.NodeCreated);
    	   // verify if this is the defined znode
    	   boolean isMyPath = event.getPath().equals(myPath);
    
    	   if (isNodeCreated && isMyPath) {
    		// TODO: send an email or whatever
    	   }
    	}
    });
    
  4. pA, after finishing its job, creates the znode, effectively alerting pB to start working.
    zk.create(
    	myPath, 		// Path of znode
    	null,			// Data not needed.
    	Ids.OPEN_ACL_UNSAFE, 	// ACL, set to Completely Open.
    	CreateMode.PERSISTENT	// Znode type, set to Persistent.
    );
    

That was easy. As soon as pA finishes its job, it creates a znode (ideally it should be an ephemeral znode) on which pB has already registered a Watch; the Watch gets triggered immediately, notifying pB to do its job.

With similar models, you can implement various distributed data structures, locks, barriers etc. on top of ZooKeeper. I will write a few more posts on this, but for now you can refer to ZooKeeper's recipes.
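
For instance, here is a hedged sketch of a very naive exclusive lock built on an ephemeral znode. It suffers from the "herd effect" and is only meant to illustrate the create/exists/watch interplay; the lock path is a placeholder:

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class NaiveLock {

    private final ZooKeeper zooKeeper;
    private final String lockPath;

    public NaiveLock(ZooKeeper zooKeeper, String lockPath) {
        this.zooKeeper = zooKeeper;
        this.lockPath = lockPath; // e.g. "/my-lock" (placeholder)
    }

    /** Blocks until this client manages to create the ephemeral lock znode. */
    public void lock() throws Exception {
        while (true) {
            try {
                // Ephemeral: the znode disappears if this client dies, releasing the lock.
                zooKeeper.create(lockPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return; // we own the lock
            } catch (KeeperException.NodeExistsException e) {
                // Someone else holds the lock; wait for the znode to go away, then retry.
                final CountDownLatch released = new CountDownLatch(1);
                Watcher watcher = new Watcher() {
                    @Override
                    public void process(WatchedEvent event) {
                        released.countDown();
                    }
                };
                if (zooKeeper.exists(lockPath, watcher) != null) {
                    released.await();
                }
            }
        }
    }

    /** Releases the lock by deleting the znode (any version). */
    public void unlock() throws Exception {
        zooKeeper.delete(lockPath, -1);
    }
}
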
So, that is the ZooKeeper start-up primer; it should get you kick-started immediately. However, there are still some fundamentals left to cover, like sessions, ACLs and consistency models. Keep checking this space; I will write more on these in the near future.

Written by Animesh

June 13, 2010 at 4:10 pm

Posted in Technology


Lucandra – an inside story!



Lucene works with

  1. Index,
  2. Document,
  3. Field and
  4. Term.

An index contains a sequence of documents. A document is a sequence of fields. A field is a named sequence of terms. A term is a string that represents a word from text; it is the unit of search. It is composed of two elements: the text of the word, as a string, and the name of the field that the text occurred in, an interned string. Note that terms may represent more than words from text fields; they can also be things like dates, email addresses, URLs, etc.

Lucene's index is an inverted index. In a normal index, you look up a document to find out what fields and terms it contains. In an inverted index, you look up a term to find all the documents it appears in. It is an upside-down view of the world, but it makes searching blazingly fast.
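
To make the terminology concrete, here is a tiny, self-contained example in plain Lucene 3.0 (not Lucandra) that indexes one document and then searches for a single term; the field names and text are made up:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class PlainLuceneExample {

    public static void main(String[] args) throws Exception {
        RAMDirectory directory = new RAMDirectory();

        // Index one document with two fields.
        IndexWriter writer = new IndexWriter(directory,
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.add(new Field("title", "Cats are funny animals", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(new Field("body", "Bla bla bla... long story...", Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();

        // Search: the term (field "title", text "cats") is the unit of search.
        IndexSearcher searcher = new IndexSearcher(directory);
        TopDocs hits = searcher.search(new TermQuery(new Term("title", "cats")), 10);
        System.out.println("Matching documents: " + hits.totalHits);
        searcher.close();
    }
}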

Read More: http://lucene.apache.org/java/3_0_1/fileformats.html

On a very high level, you can think of Lucene indexes as 2 buckets:

  1. Bucket-1 keeps all the Terms (with additional info like term frequency, position etc.) and knows which documents contain these terms.
  2. Bucket-2 stores all leftover field info, majorly non-indexed info.

How does Lucandra do it?

Lucandra needs 2 column families, one for each bucket described above.

  1. Family-1 to store Term info. We call it “TermInfo”
  2. Family-2 to store leftover info. We call it “Documents”

The “TermInfo” family is a SuperColumnFamily. Each term gets stored in a separate row identified by a TermKey (“index_name/field/term”), and the row stores SuperColumns containing Columns of various term information like term frequency, position, offset, norms etc. This is how it looks:

"TermInfo" => {
    TermKey1:{                                        // Row 1
        docId:{
            name:docId,
            value:{
                Frequencies:{
                    name: Frequencies,
                    value: Byte[] of List[Number]
                },
                Position:{
                    name: Position,
                    value: Byte[] of List[Number]
                },
                Offsets:{
                    name: Offsets,
                    value: Byte[] of List[Number]
                },
                Norms:{
                    name: Norms,
                    value: Byte[] of List[Number]
                }
            }
        }
    },
    TermKey2 => {                                    // Row 2
    }
    ...
}

The “Documents” family is a StandardColumnFamily. Each document gets stored in a separate row identified by a DocId (“index_name/document_id”), and the row stores Columns of the various storable fields. It looks like this:

"Documents" => {
        DocId1: {                        // Row 1
            field1:{
                name: field1,
                value: binary storable content
            },
            field2{
                name: field2,
                value: binary storable content
            }
        },
        DocId2: {                        // Row 2
            field1:{
                name: field1,
                value: binary storable content
            },
        ...
        },
        ...
    }

The Lucandra Cassandra Keyspace looks like this:

<Keyspace Name="Lucandra">
    <ColumnFamily Name="TermInfo"
        CompareWith="BytesType"
        ColumnType="Super"
        CompareSubcolumnsWith="BytesType"
        KeysCached="10%" />
    <ColumnFamily Name="Documents"
        CompareWith="BytesType"
        KeysCached="10%" />

    <ReplicaPlacementStrategy>
        org.apache.cassandra.locator.RackUnawareStrategy
    </ReplicaPlacementStrategy>
    <ReplicationFactor>1</ReplicationFactor>
    <EndPointSnitch>
        org.apache.cassandra.locator.EndPointSnitch
    </EndPointSnitch>
</Keyspace>

Lucene has many powerful features, like wildcard queries, result sorting, range queries etc. For Lucandra to support these features, you must configure Cassandra with the OrderPreservingPartitioner, i.e. OPP.

Cassandra comes with the RandomPartitioner (RP) by default, but:

  1. RP does NOT support Range Slices, and
  2. If you scan through your keys, they will NOT come in order.

If you still insist on using RP, you might encounter exceptions like the one below, and you might need to go into the Lucandra source and amend the range-query sections.

java.lang.RuntimeException: InvalidRequestException(why:start key's md5 sorts after
end key's md5.this is not allowed; you probably should not specify end key at all,
under RandomPartitioner)
    at lucandra.LucandraTermEnum.loadTerms(LucandraTermEnum.java:217)
    at lucandra.LucandraTermEnum.skipTo(LucandraTermEnum.java:88)
    at lucandra.IndexReader.docFreq(IndexReader.java:163)
    at org.apache.lucene.search.IndexSearcher.docFreq(IndexSearcher.java:138)

This is what you need to change in Cassandra config:

<Partitioner>org.apache.cassandra.dht.OrderPreservingPartitioner</Partitioner>

Benefits

  1. Since you can pull ranges of keys and groups of columns in Cassandra, you can really tune the performance of reads and minimize network IO for each query.
  2. Since writes are indexed in Cassandra, and Cassandra replicates itself, you don’t need to worry about optimizing the indexes or reopening the index to see new writes. With Lucene you need to take care of optimizing your indexes from time to time, and you need to re-instantiate your Searcher object to see new writes.
  3. So, with Cassandra underlying Lucene, you get a real-time distributed search engine.

Caveats

As we discussed in the earlier post, you can extend Lucene either by implementing your own Directory class, or by writing your own IndexReader and IndexWriter classes. Lucandra takes the latter approach, and it makes much more sense.

Read here: Apache Lucene and Cassandra

The benefits Lucandra gets come from Cassandra's amazing capability to store and scale key-value pairs. The Directory class works in close proximity with IndexReader and IndexWriter to store and read indexes from some storage (filesystem and/or database). It generally receives huge chunks of sequential bytes, not key-value pairs, which would be difficult to store in Cassandra, and even if stored, would not make optimum use of Cassandra.

Anyhow, given that Lucene is not very object-oriented and almost never uses interfaces, using Lucandra's IndexWriter and IndexReader seamlessly with your legacy code will NOT be possible.

Lucandra's IndexReader extends org.apache.lucene.index.IndexReader, which makes this class fit for your legacy code. You just need to instantiate it, and then you can pass it around to your native code without much thought:

IndexReader indexReader = new IndexReader(INDEX_NAME, cassandraClient);
// Notice that the constructor is different.
IndexSearcher indexSearcher = new IndexSearcher(indexReader);

But mind you, Lucandra’s IndexReader will NOT help you walk through the indexed documents. Who needs it anyway? 😉

However, Lucandra's IndexWriter is an independent class; it doesn't extend or relate to org.apache.lucene.index.IndexWriter in any way. That makes it impossible to use this class in your legacy code without refactoring. But, to ease your pain, it does implement methods with the same signatures as the native ones, e.g. addDocument, deleteDocuments etc. If that makes you a little happy. 🙂
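
Purely as an illustration of those mirrored signatures, usage might look like the sketch below. The constructor form is an assumption that mirrors the Lucandra IndexReader shown above, so verify it against the Lucandra sources:

// Assumption: lucandra.IndexWriter is constructed like the IndexReader above,
// i.e. with an index name and a Cassandra client.
IndexWriter indexWriter = new IndexWriter(INDEX_NAME, cassandraClient);

Document doc = new Document();
doc.add(new Field("title", "Cats are funny animals", Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("body", "Bla bla bla... long story...", Field.Store.YES, Field.Index.ANALYZED));

// Same signature as native Lucene's addDocument(Document, Analyzer).
indexWriter.addDocument(doc, new StandardAnalyzer(Version.LUCENE_30));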

Also, Lucandra attempts to re-implement related logic inside its IndexWriter, for example the logic to invoke the analyzer to fetch terms, calculate term frequencies, offsets etc. This too makes Lucandra a bit awkward for future portability: whenever Lucene introduces something new, or changes its logic in any way, Lucandra will need to re-implement it. For example, Lucene recently introduced Payloads, which add weights to specific terms, much like spans; it works by extending the Similarity class with additional logic. Lucandra doesn't support it, and to do so, Lucandra would need to amend its code.

In short, the way Lucandra is implemented makes it difficult to inherently pick up future Lucene enhancements, but, God forbid, there is no other way around it. Wish Lucene had a better structure!

Anyways, right now, Lucandra supports:

  1. Real-Time indexing
  2. Zero optimization
  3. Search
  4. Sort
  5. Range Queries
  6. Delete
  7. Wildcards and other Lucene magic
  8. Faceting/Highlighting

Apart from this, the way Lucandra uses Cassandra can also lead to some scalability issues with large data sets. You can find some clues here:
http://ria101.wordpress.com/2010/02/22/cassandra-randompartitioner-vs-orderpreservingpartitioner/

Performance

Lucandra claims that it's slower than Lucene: indexing is ~10% slower, and so is reading. However, I found it much better and faster than Lucene. I wrote comparative tests to index 15K documents and search over the index. I ran the tests on my Dell Latitude D520 with 3GB RAM, and Lucandra (a single Cassandra node) was ~35% faster than Lucene during indexing, and ~20% faster for search. Maybe I should try with a bigger set of data.

Is Lucandra production ready?

There is a Twitter search app, http://sparse.ly, which is built on Lucandra. This service uses Lucandra exclusively, without any relational or other sort of database. Given the depth and breadth of Twitter data, and that sparse.ly is pretty popular and stable, Lucandra does seem to be production ready.

🙂 But maybe you should read the Caveats once more and see if you are okay with them.

Written by Animesh

May 27, 2010 at 8:03 am

Posted in Technology


Connecting to Cassandra – 1


Cassandra uses the Apache Thrift framework as its client API. Apache Thrift is a remote procedure call framework for "scalable cross-language services development". You define data types and service interfaces in a Thrift definition file, from which the compiler generates code in your chosen languages. Effectively, it combines a software stack with a code generation engine to build services that work efficiently and seamlessly across a number of languages.

Apache Thrift, though a state-of-the-art engineering feat, is not the best choice for a client API, especially for Cassandra.

  1. Cassandra supports multiple nodes, and you can connect to any node at any time. This is an amazing thing, because if a node goes down, a client can connect to any other available node without pulling the system down. Alas! Apache Thrift doesn't support this inherently; you need to make your client aware of node failures and write a strategy to pick the next alive node (a naive sketch of such a strategy follows this list).
  2. Thrift doesn't support connection pooling. So either you connect to the server every time, or you keep a connection alive for a longer period. Or, perhaps, write a connection pool engine yourself. Sad!
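
To illustrate the first point, here is a hedged sketch of a naive "try the next node" strategy. The import paths assume the Cassandra 0.6-era Thrift bindings used elsewhere in this post, and the host list is a placeholder:

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class NaiveFailoverConnector {

    /** Tries each host in turn and returns a client for the first node that accepts a connection. */
    public static Cassandra.Client connectToAny(String[] hosts, int port) {
        for (String host : hosts) {
            try {
                TTransport transport = new TSocket(host, port);
                TProtocol protocol = new TBinaryProtocol(transport);
                Cassandra.Client client = new Cassandra.Client(protocol);
                transport.open(); // throws if this node is down
                return client;
            } catch (Exception e) {
                System.err.println("Node " + host + " is unavailable, trying the next one: " + e.getMessage());
            }
        }
        throw new IllegalStateException("No Cassandra node reachable");
    }

    public static void main(String[] args) {
        // Placeholder host list; Cassandra's default Thrift port is 9160.
        Cassandra.Client client = connectToAny(new String[] {"node1", "node2", "localhost"}, 9160);
        System.out.println("Connected: " + client);
    }
}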

There are a few clients available which make these things easier for you. They are like wrappers over Thrift that save you from a lot of nuisance. Anyhow, since even those clients work on top of Thrift, it makes sense to learn Thrift and make our foundation strong.

Let’s first create a dummy Keyspace for ourselves:

<Keyspace Name="AddressBook">
<ColumnFamily CompareWith="UTF8Type" Name="Users" />

<!-- Necessary for Cassandra -->
<ReplicaPlacementStrategy>org.apache.cassandra.locator.RackUnawareStrategy
</ReplicaPlacementStrategy>
<ReplicationFactor>1</ReplicationFactor>
<EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
</Keyspace>

We created a new Keyspace "AddressBook" which has a ColumnFamily "Users" with a "UTF8Type" sorting policy.

Connect to Cassandra Server:

private TTransport transport = null;
private Cassandra.Client client = null;

public Cassandra.Client connect(String host, int port) {
    try {
        transport = new TSocket(host, port);
        TProtocol protocol = new TBinaryProtocol(transport);
        client = new Cassandra.Client(protocol); // assign the field instead of shadowing it
        transport.open();
        return client;
    } catch (TTransportException e) {
        e.printStackTrace();
    }
    return null;
}

The above code is pretty fundamental:

  1. Opens up a Socket at the given host and port.
  2. Defines a protocol, in this case, it’s binary.
  3. And instantiates the client object.
  4. Returns client object for further operations.

Note: Cassandra uses “9160” as its default port.

Disconnect from Cassandra Server:

public void disconnect() {
    try {
        if (null != transport) {
            transport.flush();
            transport.close();
        }
    } catch (TTransportException e) {
        e.printStackTrace();
    }
}

To close the connection in a decent way, you should invoke flush() to take care of any data that might still be sitting in the transport buffer.

Store a data object:

Let’s say, our User object is something like below:

public class User {
    // unique
    private String username;
    private String email;
    private String phone;
    private String zip;

    // getter and setter here.
}

To model one User in Cassandra, we need 3 columns to store email, phone and zip, and the row key will be the username. Right? Let's create a list to hold these columns.

List<ColumnOrSuperColumn> columns = new ArrayList<ColumnOrSuperColumn>();

The list contains ColumnOrSuperColumn objects. Cassandra gives us this aggregate object, which can contain either a Column or a SuperColumn. You wonder why? Because Apache Thrift doesn't support inheritance. Anyway, now we will create the columns and store them in this list.

// generate a timestamp.
long timestamp = new Date().getTime();
ColumnOrSuperColumn c = null;

// add email
c = new ColumnOrSuperColumn();
c.setColumn(new Column("email".getBytes("utf-8"), user.getEmail().getBytes("utf-8"), timestamp));
columns.add(c);

// add phone
c = new ColumnOrSuperColumn();
c.setColumn(new Column("phone".getBytes("utf-8"), user.getPhone().getBytes("utf-8"), timestamp));
columns.add(c);

// add zip
c = new ColumnOrSuperColumn();
c.setColumn(new Column("zip".getBytes("utf-8"), user.getZip().getBytes("utf-8"), timestamp));
columns.add(c);

Okay, so we have the list of columns populated. Now we need a Map which will hold the rows, that is, the lists of columns. The key to this map is the name of the ColumnFamily.


Map<String, List<ColumnOrSuperColumn>> data = new HashMap<String, List<ColumnOrSuperColumn>>();
data.put("Users", columns); // “Users” is our ColumnFamily Name.

Great. We have everything in place. Now we will use client.batch_insert to store everything at once. This will create a row in the ColumnFamily identified by the given key.


client.batch_insert( "AddressBook",          // Keyspace
                      user.getUsername(),    // Row identifier key
                      data,                  // Map which contains the list of columns.
                      ConsistencyLevel.ANY   // Consistency level. Explained below.
);

The ConsistencyLevel parameter is used for both read and write operations to determine when the request made by the client is successful. ConsistencyLevel.ANY means that a write is successful once it has been written to at least one node. Read the Cassandra wiki for detailed information.

In the next blog, we will see how to delete and update a record in Cassandra.

Written by Animesh

May 24, 2010 at 10:42 am

Posted in Technology


Apache Lucene and Cassandra



I am trying to find ways to extend and scale Lucene to use various modern data storage mechanisms, like Cassandra, SimpleDB etc. Why? Agreed, Lucene is wonderful: blazingly high-performance, with features like incremental indexing and all. But managing and scaling its storage, reads, writes and index optimizations sucks big time. Though we have Solr, JBoss' Infinispan, Berkeley's DbDirectory etc., the approach they have adopted is very conventional and does not leverage any of the latest developments in non-relational, highly scalable and available data stores like Cassandra, CouchDB etc.

And then I came across Lucandra: an attempt to use Cassandra as the underlying data storage mechanism for Lucene. Doesn't the name (Lucene + Cassandra) say so? 🙂

Why Cassandra?

  1. Well, Cassandra is one of the most popular and widely used “NoSql” systems.
  2. Flexible: Cassandra is a scalable and easy to administer column-oriented data store. Read and write throughput both increase linearly as new machines are added, with no downtime or interruption to applications.
  3. Decentralized: Cassandra does not rely on a global file system, but uses decentralized peer to peer “Gossip”, and so, it has no single point of failure, and introducing new nodes to the cluster is dead simple.
  4. Fault-Tolerant: Cassandra also has built-in multi-master write, replication, rack awareness, and can handle dead nodes gracefully.
  5. Highly Available: Writes and reads offer a tunable ConsistencyLevel, all the way from “writes never fail” to “block for all replicas to be readable,” with the quorum level in the middle.
  6. And Cassandra has a thriving community and is in production at Facebook, Digg, Twitter etc.

Cool. The idea sounds awesome. But wait: before we look into how Lucandra actually implements it, let's figure out the possible ways of implementing it. We need to understand the Lucene stack first, and where and how it can be extended.

Lucene Stack

There are 3 elementary components: IndexReader, IndexWriter and Directory. IndexWriter writes the inverted indexes of a document to disk with the help of a Directory implementation. IndexReader reads from the indexes using the same Directory.

But, there is a catch. Lucene is not very well designed and its APIs are closed.

  1. Very poor OO design. There are classes and packages, but almost no design pattern usage.
  2. Almost no use of interfaces. Query, HitCollector etc. are all subclasses of an abstract class, so:
    1. You'll have to constantly cast your custom query objects to Query in order to use them in native Lucene calls.
    2. It's a pain to apply AOP and auto-proxying.
  3. Some classes which should have been inner classes are not, and anonymous classes are used for complex operations where you would typically need to override their behavior.

There are many more. The point is that Lucene is designed in such a way that you will upset your code purity no matter how you go about it.

Read more:
http://www.jroller.com/melix/entry/why_lucene_isn_t_that
http://lucene.grantingersoll.com/2008/03/28/why-lucene-isnt-that-good-javalobby/

Anyhow, to extend Lucene, there are 2 approaches:

  1. Either write a custom Directory implementation, or
  2. write custom IndexReader and IndexWriter classes.

Incorporating Cassandra by writing a custom Directory

This involves extending the abstract Directory class. There are many examples to consult, like the Lucene JDBC Directory, Berkeley's DbDirectory etc.

Incorporating Cassandra by writing custom IndexReader and IndexWriter

This is a cruder approach: writing custom IndexReader and IndexWriter classes. Note again that native Lucene's reader/writer classes don't implement any interfaces, so it will be difficult to plug our custom reader/writer classes into existing code. Well, that's what you get. Another thing is that the native IndexReader/IndexWriter classes perform a lot of additional logic beyond just indexing and reading: they use analyzers to analyze the supplied document, calculate terms, term frequencies and so on, to name a few. We need to make sure we don't miss any of these, lest Lucene not do what we expect it to do.

Lucandra follows this approach: it has written custom IndexWriter and IndexReader classes. I am going to explore it more, and come back with what I find there.

Read it here: Lucandra – an inside story!

Trivia

Do you know where the name Lucene comes from? Lucene is Doug Cutting's wife's middle name, and her maternal grandmother's first name. Lucene is a common Armenian first name.

And what about Cassandra? In Greek mythology the name Cassandra means “Inflaming Men with Love”, or an unheeded prophetess. She is a figure both of the epic tradition and of tragedy. Remember the movie Troy? The movie was not exactly what Homer wrote; it was polluted to create more appealing cinematic drama. Read here: http://ancienthistory.about.com/cs/grecoromanmyth1/a/troymoviereview.htm

Written by Animesh

May 19, 2010 at 7:26 am

Posted in Technology
