Friday, December 7, 2012

My Cassandra Templates

So I thought I would share some of my abstract adapter code for, eh, interacting with the Hector client, which in turn talks to Cassandra.

The idea is that it adapts any model/entity POJO class into a key and columns that fit the column family corresponding to the model. This happens through the marshal and unmarshal methods, which are specific to each implementor of this trait. Curious names, marshal and unmarshal, but I just don't want to call them map and unmap or pack and unpack because those terms mean different things in Scala. Marshal transforms the model into, really, a Scala tuple of the key and a list of (column name, value) tuples; the persist methods in this adapter know how to work with that structure. Unmarshal transforms query results (in the form of ColumnFamilyResult) into the model, and it is used by the query methods.

As you can see, the adapter also functions as a high-level DAO, in that its user shouldn't need to know how to work with Hector/Cassandra.

Fine, I know the interaction between this trait and its subclasses gets roundabout, but that's just how it is for now; I don't want to spend too much time refining it yet.

Furthermore, there is an automatic Id field finder method (getKeyFieldOpt). It does some reflection work against the model to find which field is the Id (row key). In my case Id is a case class (as in var id = Id(9999)), and I use some implicit defs so that retrieving the id transforms it automatically into its underlying value whenever relevant. But you get the gist.
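
For illustration, a minimal sketch of the shape I mean (this is an assumption of how such an Id wrapper could look, not the actual metadata class):

case class Id[K](value: K)

object Id {
 // lets code that expects the raw key accept an Id transparently
 implicit def idToValue[K](id: Id[K]): K = id.value
}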

This is a work in progress. At the moment the generic queries only fetch the entire row of a column family and unmarshal its data into a model, but I'll be working on generic slices and ranges soon.

So here's the base trait:

import java.lang.reflect.Field
import scala.collection.JavaConversions._
import id.pronimo.watchlet.exception.NoExistingRowException
import id.pronimo.watchlet.exception.WrongModelException
import javax.validation.ConstraintViolation
import javax.validation.ConstraintViolationException
import javax.validation.Validation
import javax.validation.ValidationException
import me.prettyprint.cassandra.serializers.AsciiSerializer
import me.prettyprint.cassandra.serializers.StringSerializer
import me.prettyprint.cassandra.serializers.UUIDSerializer
import me.prettyprint.cassandra.serializers.CompositeSerializer
import me.prettyprint.cassandra.serializers.DateSerializer
import me.prettyprint.cassandra.service.template.ColumnFamilyResult
import me.prettyprint.cassandra.service.template.ColumnFamilyTemplate
import me.prettyprint.cassandra.service.template.ColumnFamilyUpdater
import me.prettyprint.hector.api.exceptions.HectorException
import me.prettyprint.hector.api.factory.HFactory
import me.prettyprint.hector.api.mutation.Mutator
import me.prettyprint.hector.api.Keyspace
import id.pronimo.watchlet.model.metadata.Id
import me.prettyprint.cassandra.serializers.SerializerTypeInferer
import me.prettyprint.hector.api.Serializer

trait CommonStandardCFAdapter[R <: Serializable, K, N] {
 @transient val (us, ss, as, ds, cs) = (UUIDSerializer.get, StringSerializer.get, AsciiSerializer.get, DateSerializer.get, CompositeSerializer.get)
 @transient val EMPTY_BYTE_ARRAY = Array.empty[Byte]
 @transient val validator = Validation.buildDefaultValidatorFactory.getValidator
 
 def fs[F](x:F) = { SerializerTypeInferer.getSerializer[F](x) }
 val hcolumn = (n:N, v:Any) => {
  HFactory.createColumn(n, v, getKeyspace.createClock, getNameSerializer, fs(v))
 }

 /* CONCRETE METHODS [BEGIN] */
 /**
  * Add a new record, or override existing data.
  *
  *  @tparam   R the record type defined per implementation
  */
 @throws(classOf[HectorException])
 @throws(classOf[ValidationException])
 def write(record: R, deleteFirst: Boolean) {
  validate(record)
  if (deleteFirst) remove(record)
  val thriftTemplate = this.getThriftTemplate
  val (k, c) = marshal(record)
  val updater = thriftTemplate.createUpdater(k)
  c.iterator.foreach {
   case (n, v) =>
    updater.setColumn(hcolumn(n, v))
  }
  thriftTemplate.update(updater)
 }

 /**
  * Update a record by specifying intended column to add/update.
  *
  * Will throw an exception when record with specified key does not exist.
  *
  *  @note  Use with caution, this method allows adding a column not defined in the record type.
  *  @param    key of the record
  *  @param    colName the column name
  *  @param    value the column value
  *  @tparam   K the key type defined per implementation
  *  @tparam   V the value type
  */
 @throws(classOf[HectorException])
 @throws(classOf[ValidationException])
 def update[V](key: K, colName: N, value: V) {
  val thriftTemplate = this.getThriftTemplate

  if (false == thriftTemplate.isColumnsExist(key)) {
   throw new NoExistingRowException(key, "update")
  }
  val updater = thriftTemplate.createUpdater(key)

  val column = HFactory.createColumn(colName, value, getKeyspace.createClock)
  updater.setColumn(column)
  thriftTemplate.update(updater)
 }

 /**
  * Update a record by specifying multiple intended columns to add/update.
  *
  * Will throw an exception when record with specified key does not exist.
  *
  *  @note  Use with caution, this method allows adding columns not defined in the record type/column family metadata.
  *  @param    key of the record
  *  @param    map of column name/column value
  *  @tparam   K the key type defined per implementation
  */
 @throws(classOf[HectorException])
 @throws(classOf[ValidationException])
 def update(key: K, map: Map[N, Any]) {
  val thriftTemplate = this.getThriftTemplate

  if (false == thriftTemplate.isColumnsExist(key)) {
   throw new NoExistingRowException(key, "update")
  }

  val updater = {
   thriftTemplate.createUpdater(key)
  }
  map.iterator.foreach {
   case (n, v) =>
    val column = HFactory.createColumn(n, v, getKeyspace.createClock)
    updater.setColumn(column)
  }
  thriftTemplate.update(updater)
 }

 /**
  * Add multiple records in one batch, or overwrite existing rows with the same keys.
  *
  * All records are validated up front; a validation failure cancels the whole batch.
  *
  *  @param    records the records to persist
  *  @param    deleteFirst whether to delete each existing row before writing
  *  @tparam   R the record type defined per implementation
  */
 @throws(classOf[HectorException])
 @throws(classOf[ValidationException])
 def writeBatch(records: List[R], deleteFirst: Boolean) {
  val cf = this.getThriftTemplate.getColumnFamily
  val thriftTemplate = this.getThriftTemplate

  records.iterator.foreach { it => validate(it) }

  val mutator = thriftTemplate.createMutator

  records.iterator.foreach { it =>
   if (deleteFirst) remove(it)
   marshal(it) match {
    case (k, c) => c.iterator.foreach { tuple =>
     mutator.addInsertion(k, cf, HFactory.createColumn(tuple._1, tuple._2, getKeyspace.createClock))
    }
   }
  }
  thriftTemplate.executeBatch(mutator)
 }

 /**
  * Read record by key, returning Option
  *
  *  @param    key of the record
  *  @tparam   K the key type defined per implementation
  *  @tparam   R the record type defined per implementation
  */
 @throws(classOf[HectorException])
 def read(key: K): Option[R] = {
  val thriftTemplate = this.getThriftTemplate
  val result = thriftTemplate.queryColumns(key)
  unmarshal(result)
 }

 /**
  * Remove entire row by key
  *
  *  @param    key of the record
  *  @tparam   K the key type defined per implementation
  */
 @throws(classOf[HectorException])
 def remove(key: K) {
  this.getThriftTemplate().deleteRow(key)
 }

 /**
  * Remove entire row using the record's id
  *
  *  @tparam   R the record type defined per implementation
  */
 @throws(classOf[HectorException])
 def remove(record: R) {
  remove(getKey(record))
 }

 /**
  * Get key field option (thru reflection)
  */
 protected def getKeyFieldOpt[E <: R: Manifest] = implicitly[Manifest[E]].erasure.getDeclaredFields.find(_.getType.equals(classOf[Id[K]]))
 
 /**
  * Get N serializer (thru reflection)
  */
 protected def getNSerializer[E <: N: Manifest]():Serializer[N] = SerializerTypeInferer.getSerializer[N](implicitly[Manifest[E]].erasure)

 /**
  * Get key for record (thru reflection)
  */
 def getKey[E <: R: Manifest](record: R): K = {
  try { getKeyField.get(record).asInstanceOf[Id[K]]}
  catch {
   case e =>
    throw new WrongModelException(implicitly[Manifest[E]].erasure, "Key type specified in template does not match actual key type of record.", e)
  }
 }

 /**
  * Validate record using JSR-303
  *
  *  @param    record
  *  @tparam   R the record type defined per implementation
  */
 @throws(classOf[ValidationException])
 def validate(record: R) {
  val violations = validator.validate(record)

  if (false == violations.isEmpty()) {
   val exceptionMessage = violations.iterator
    .map(it => it.getPropertyPath.toString + " " + it.getMessage)
    .mkString(", ")

   throw new ConstraintViolationException(exceptionMessage, violations.asInstanceOf[java.util.Set[ConstraintViolation[_]]])
  }
 }
 /* CONCRETE METHODS [END] */

 /* ABSTRACT METHODS [BEGIN] */
 /**
  * Transform record into tuples with format (key, List[(columnName, columnValue)])
  */
 def marshal(record: R): (K, List[(N, Any)])

 /**
  * Transform ColumnFamilyResult into record
  */
 def unmarshal(columns: ColumnFamilyResult[K, N]): Option[R]

 /**
  * Get underlying Thrift template used by this spec template
  */
 def getThriftTemplate(): ColumnFamilyTemplate[K, N]
 
 /**
  * Get column name serializer used by this spec template
  * This became mandatory for creating composite columns using the Hector client.
  */
 def getNameSerializer():Serializer[N]

 def getKeyField: Field

 def getKeyspace: Keyspace
 /* ABSTRACT METHODS [END] */
}

And a sample implementation:
PS: Don't mind the CDI annotations. And the transient annotations are there just so it plays along well with CDI.

@Named( "ItemRepository" )
@ApplicationScoped
class ItemCFAdapter extends CommonStandardCFAdapter[Item, UUID, String] with Serializable {
 @transient val keyField = {
  getKeyFieldOpt[Item] match {
   case Some( field ) =>
    field.setAccessible( true )
    field
   case None =>
    throw new WrongModelException( classOf[Item], "Cannot find field with Id metadata." )
  }
 }

 @transient val nameSerializer = getNSerializer[String]

 @Inject
 @transient
 var keyspace: Keyspace = _

 var thriftTemplate: ThriftColumnFamilyTemplate[UUID, String] = _

 @PostConstruct
 def initialise() {
  thriftTemplate = new ThriftColumnFamilyTemplate( keyspace, CF.CF_NAME, us, as )
 }

 override def marshal( record: Item ): ( UUID, List[( String, Any )] ) = {
  ( getKey( record ), List(
   ( "label", record.label ),
   ( "url", record.url ) ) )
 }

 override def unmarshal( columns: ColumnFamilyResult[UUID, String] ): Option[Item] = {
  val l = columns.getString( "label" )
  val u = columns.getString( "url" )
  ( Option( l ), Option( u ) ) match {
   case ( Some( _ ), Some( _ ) ) =>
    Option( new Item {
     id = columns.getKey
     label = l
     url = u
    } )
   case _ =>
    None
  }
 }

 override def getThriftTemplate() = thriftTemplate

 override def getKeyField() = keyField

 override def getNameSerializer() = nameSerializer

 override def getKeyspace() = keyspace

}
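
To round things off, a hedged sketch of how calling code might drive this adapter (assuming `items` is a CDI-injected ItemCFAdapter and Item exposes the id/label/url vars used in unmarshal above):

import java.util.UUID
import id.pronimo.watchlet.model.metadata.Id

object AdapterUsageSketch {
 // `items` would normally be @Inject-ed rather than passed around by hand.
 def demo(items: ItemCFAdapter) {
  val key = UUID.randomUUID
  val item = new Item { id = Id(key); label = "An item"; url = "http://example.com" }
  items.write(item, deleteFirst = false)       // validate, marshal, write the columns
  val loaded: Option[Item] = items.read(key)   // queryColumns + unmarshal
  items.update(key, "label", "A renamed item") // single-column update
 }
}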

Thursday, November 29, 2012

Back to JSP

Alright, so JSP is old and perhaps no longer endorsed by Oracle (there's no section on JSP in the Java EE 6 tutorial).  But it, I find, remains a sane, mature, and better-supported choice of technology for generating web content.  Many other templating engines suffer from a lack of continued support and development (due to lower usage), crippled integration with other technologies (i.e. frameworks), or unsatisfactory tooling.

Most of the rants on JSP bordering on hate that I see have nothing to do with the JSP technology itself.  The problems they describe arise from anti-patterns: spaghetti code, lack of separation of concerns, or simply an outdated version of the JSP spec.

What would be cool for the next JSP spec update is a way to supply partial XML directly from the controller, like in the Lift web framework.  With CDI, yeah.  Something like:

@Produces
@XmlNode
@Named("calendar")
def getCalendar: XmlNode = {
  <section>
    <h1>Calendar</h1>
    <p>Monday to Sunday</p>
  </section>
}

Saturday, November 10, 2012

JAX-RS ftw

I never realised this before, partly because the Java EE 6 documentation doesn't make it obvious, but apparently one can use JAX-RS to create web page controllers.  It seems to be implementation-specific, however.  JBoss Resteasy, with Seam Resteasy, has built-in support for Freemarker and Velocity templates as well as undocumented support for JSP.  Jersey, on the other hand, afaik, only supports JSP.  It's pretty snazzy considering the features JAX-RS has and its deep integration with the Java EE stack--all those REST methods, CDI scopes, streamlined authentication and authorization, etc.
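
For instance, a hedged sketch of such a controller using Jersey 1.x's Viewable (the resource path, template name, and model are made up):

import javax.ws.rs.{GET, Path, Produces}
import com.sun.jersey.api.view.Viewable

@Path("/items")
class ItemsController {
 // Jersey's MVC support resolves "/items" to a JSP template and hands it the model object.
 @GET
 @Produces(Array("text/html"))
 def list(): Viewable = new Viewable("/items", List("one", "two"))
}
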
Bottom line, Java EE 6 hasn't abandoned stateless web!  Or something like that.

Sunday, October 28, 2012

My Little Take on Backend Development

Update 29-10-2012:
I found this blog post to be a more comprehensive defense of Java EE 6.
Original post:
This comes from someone who only just recently read about JTA and JTS. Take this with a grain of salt, so to speak.

I have some experience using the Spring Framework, mainly Spring MVC, from about two years ago, and some again just recently (as indicated in my previous post).  Back then it was first a multifinancing application, then a message broadcasting engine, sort of.  Neither of which I created from the ground up; I was merely adding features and so on.  I couldn't compare it against Java EE 6 at that time, for lack of understanding of their internals and so on.

The message broadcasting engine had to be improved so that it could support more load.  There were topics such as clustering, high availability, parallelism, concurrency, etc.  I think it was then that I came across JMS and some asynchronous processing features provided by EJB 3.  That was to be my gateway to Java EE 6.  Afterwards it was a haphazard series of choices to learn the actor model, CSP, Scala, Erlang, Haskell, NoSQL, etc.

So I wonder what Spring Framework's winning points were again.  Their documentation is among the best.  The framework itself is rock solid, for what it is.  What it is, really, is a little confusing, though.  It appears to be sort of an alternative to the Java EE stack, yet it heavily relies on it.  It manages object lifecycles.  Java EE does that as well, so two things end up managing your object lifecycles.  Obviously you can run the Spring Framework on a standard servlet container, but everywhere I look it is recommended that you use, say, the JTA implementation provided by the container when available.  If you used CMT alongside Spring, it would be unavoidable that both containers manage the object lifecycles.  And how would one share a bean across deployments in Spring?  Even if you could, say with Spring remoting, object lifecycle management would be more complicated because it would involve communication between two unrelated containers.  It looks to me like the EJB specification is superior in this regard, with its local and remote beans concept and tighter integration with JTA and the rest of the Java EE specification.

Anyway, honestly I'm not a big fan of dependency injection, Spring or CDI.  It's a little too much magic for me.  Language-level singleton object construction, as available in Scala, and programmatic lookup and vending are easier to work with.

Friday, October 26, 2012

Integrating Spring Framework, Jetty, and JBoss Narayana

Update 29-10-2012:
This does not work. I posted this only after testing rollback in a transaction, and it worked. However, I couldn't get it to work as Hibernate's transaction manager alongside Spring; e.g. update operations miss the transaction scope. I tried many setup variations. If someone manages to get this working, it'd be great if you shared. For the time being I'm falling back to CMT provided by JBoss AS.
Original post:
I had to browse through quite a few websites and test some adaptations to eventually get this to work. Shame on me, it's not all that complex in the end.
Oh, and my syntax highlighter does this XML tag uppercase thing that is so uncalled for.
My stack goes as following:
  • Spring Framework 3.1.2.RELEASE
  • JBoss Narayana 4.17.1.Final
  • Apache Derby (network) DB 10.9.1.0
  • Jetty 8.1.7.v20120910
This version of the Spring Framework doesn't even need persistence.xml to be present, so configuring and linking the data source, persistence unit, and transaction manager in a single Spring context XML is possible and makes more sense.  The JBoss Narayana JTA guide shows that, to incorporate JDBC connections into its transactions, you should:
  1. Use the com.arjuna.ats.jdbc.TransactionalDriver that comes with the Narayana library
  2. Wrap a javax.sql.XADataSource by creating an implementation class of com.arjuna.ats.internal.jdbc.DynamicClass and bind the data source to a JNDI (programmatically).
    The TransactionalDriver will use the DynamicClass as its data source provider.
I stumbled at the binding step.  Admittedly I'm not quite sure how it works on Jetty.  I managed to make it work by, firstly, binding the XADataSource to JNDI using Jetty's context configuration.  In jetty-env.xml the entry looks something like this (the exact data source class and server settings depend on your Derby setup):

<New id="dbxa" class="org.eclipse.jetty.plus.jndi.Resource">
 <Arg>jdbc/dbxa</Arg>
 <Arg>
  <New class="org.apache.derby.jdbc.ClientXADataSource40">
   <Set name="databaseName">mydb</Set>
   <Set name="user">username</Set>
   <Set name="password">password</Set>
   <!-- add serverName/portNumber here if Derby isn't on localhost:1527 -->
  </New>
 </Arg>
</New>

Secondly, I let my implementation of DynamicClass perform a JNDI lookup in its concrete getDataSource method, cast the result to XADataSource, and return it. Simple as that. Why did I even bother struggling with Spring JNDI binding and JMX and whatever else. Here's what it looks like:
public XADataSource getDataSource(String dbName, boolean create)
  throws SQLException {
 try {
  InitialContext ic = new InitialContext();
  ClientXADataSource40 xa = (ClientXADataSource40) ic
    .lookup("jdbc/dbxa");
  return xa;
 } catch (NamingException e) {
  throw new SQLException(null, e);
 }
}
Afterwards, I bind things together in the Spring context XML.
Since I already defined the credentials in the Jetty context configuration, I don't need to repeat them here. The driver properties bean boils down to something like this (the key is the value of TransactionalDriver.dynamicClass; check the constant in your Narayana version):

<bean id="driverProperties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
 <property name="properties">
  <props>
   <!-- points the TransactionalDriver at the DynamicClass implementation -->
   <prop key="DYNAMIC_CLASS">my.implementation.of.DynamicClass</prop>
  </props>
 </property>
</bean>

Next is the data source bean. I have to prepend the JNDI name with jdbc:arjuna:; that's just how the TransactionalDriver works.

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
 <property name="driverClassName" value="com.arjuna.ats.jdbc.TransactionalDriver"/>
 <!-- the JNDI name bound in jetty-env.xml, with the jdbc:arjuna: prefix -->
 <property name="url" value="jdbc:arjuna:jdbc/dbxa"/>
 <property name="connectionProperties" ref="driverProperties"/>
</bean>

And then the EntityManagerFactory bean, which also replaces persistence.xml. I happen to use Hibernate, so that's HibernateJpaVendorAdapter for me. Roughly:

<bean id="entityManagerFactory"
  class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
 <property name="dataSource" ref="dataSource"/>
 <!-- the entity package name here is just an example -->
 <property name="packagesToScan" value="my.application.model"/>
 <property name="jpaVendorAdapter">
  <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
 </property>
</bean>

I also need to define the Narayana transaction manager and user transaction implementation beans, as well as the Spring transaction manager bean itself. Something like:

<!-- Narayana's local JTA implementations -->
<bean id="narayanaTransactionManager"
  class="com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionManagerImple"/>
<bean id="narayanaUserTransaction"
  class="com.arjuna.ats.internal.jta.transaction.arjunacore.UserTransactionImple"/>

<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
 <property name="transactionManager" ref="narayanaTransactionManager"/>
 <property name="userTransaction" ref="narayanaUserTransaction"/>
</bean>

And lastly if you like annotations:

<tx:annotation-driven transaction-manager="transactionManager"/>
As a bonus, you can also set where Narayana will store its objects, the default timeout, etc. by exposing its environment beans as factory beans. Something along these lines (property names follow the CoordinatorEnvironmentBean / ObjectStoreEnvironmentBean setters; adjust to your Narayana version):

<bean id="coordinatorEnvironmentBean" class="com.arjuna.ats.arjuna.common.arjPropertyManager"
  factory-method="getCoordinatorEnvironmentBean">
 <property name="defaultTimeout" value="60"/>
</bean>
<bean id="objectStoreEnvironmentBean" class="com.arjuna.ats.arjuna.common.arjPropertyManager"
  factory-method="getObjectStoreEnvironmentBean">
 <property name="objectStoreDir" value=".\arjuna"/>
 <property name="localOSRoot" value=".\arjuna"/>
</bean>

That's all. You can add a logger for com.arjuna.ats and set it to trace or fine to see whether it works. I don't suppose I need to show an example of how the Transactional annotation is used, right? And there are org.springframework.transaction.support.TransactionTemplate and org.springframework.transaction.interceptor.TransactionAspectSupport for manual, programmatic transaction control.
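
A minimal sketch of the TransactionTemplate route (the wiring and names are assumptions, not part of the original setup):

import org.springframework.transaction.PlatformTransactionManager
import org.springframework.transaction.TransactionStatus
import org.springframework.transaction.support.{TransactionCallbackWithoutResult, TransactionTemplate}

object TxSketch {
 // Runs `work` inside a transaction managed by the given transaction manager
 // (here that would be the JtaTransactionManager bean defined above).
 def inTransaction(txManager: PlatformTransactionManager)(work: => Unit) {
  new TransactionTemplate(txManager).execute(new TransactionCallbackWithoutResult {
   def doInTransactionWithoutResult(status: TransactionStatus) { work }
  })
 }
}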

Saturday, October 20, 2012

Lift Web Framework is Easy

It's ubiquitously said that the Lift web framework has a steep learning curve.  Hard to believe when

<div class="lift:surround?with=default;at=content" id="real_content">
 <h1>Welcome to your project!</h1>
 <lift:Hello.world />  
</div>
object Hello extends DispatchSnippet {
 val dispatch: DispatchIt = {
  case name => render(name) _
 }
 def render(name: String)(ignore: NodeSeq): NodeSeq = Text("Hello, " + name)
}
LiftRules.snippetDispatch.append {
 case "Hello" => id.openfx.openshop.snippet.Hello
}
simply wraps the page with the template 'default.html' and looks up the Hello.world snippet.

Everything in Lift is easy and transparent, I daresay. Most things can be reasoned about as you write code, instead of having to read a specification. I can't quite describe it precisely, but I'll give it a shot anyway. Because Scala allows mixing in traits, there is, CMIIW, no need for hidden container-managed proxy code. Let's compare this with CDI as used in the JSF framework. To cater to different scopes, a CDI bean can be annotated with @RequestScoped, @SessionScoped, @ConversationScoped, etc. What does it look like when constructed? Honestly I don't know. I only know how it should work thanks to the CDI spec's documentation. But I think the container has to create a specific proxy for each bean depending on its scope, and inject its proxy dependencies and so on. The proxy might also be different when you add @Stateful to, say, a @ConversationScoped bean.

On the other hand, in Lift, it's much simpler. Snippets are normal methods registered via the LiftRules object during boot. Request and session states are just plain objects extending either RequestVar or SessionVar, used directly by snippets. The separation between controller logic and session/request data is clear here. You know what's happening to your stuff by checking out what's inside what you're extending.
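
For instance, a session-scoped value is nothing more than this (a hedged sketch; the name and type are made up):

import net.liftweb.common.{Box, Empty, Full}
import net.liftweb.http.SessionVar

// A hypothetical piece of session state: just an object extending SessionVar with a default.
object currentUserName extends SessionVar[Box[String]](Empty)

// A snippet can then read or update it directly, e.g.
//   currentUserName.set(Full("alice"))
//   currentUserName.is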

Friday, September 7, 2012

Here you don't know what's there

Quite a shift: having worked mainly at open source-centric companies before, I'm now at one that endorses proprietary enterprise platforms.

I have found that companies of the latter kind don't recognize open source efforts and startup solutions.  It is a mystery (to them, at least) how Twitter has managed so far with obscure languages like Ruby and Scala.  It must be a joke to claim Google used commodity hardware for their servers.  When you have a USD 30K DBMS, you have no reason to consider NoSQL toys.

Whereas at my previous workplaces we would probably be rolling our eyes when offered, say, a USD 30K DBMS, wondering why we would ever need it when we ran everything on PostgreSQL just fine.  Yahoo! did it with PostgreSQL, why couldn't we, right?  We could bring in Hadoop and Cassandra for specific tasks if they fit better than PostgreSQL would.  If everything were still slow, it would be the development team's fault, of course.  Couldn't have it any other way because, otherwise, why hire us at all.

But of course, experience will vary depending on how one makes use of either technology.  Many proprietary products do excel at what they do.  And it is only natural that people spend money on these works.   I'd think the distinction is that at one (extreme) end you see lots of money and at the other (extreme) end you feel a great deal of passion (and both are perhaps equally reliable catalysts of creativity).  This isn't a rant on merits of either practice--not that I have authority on that topic to begin with!

What this is about is how easily we turn a blind eye to the other side when we've been delving in just one side of the equation for long.

Tuesday, August 21, 2012

Around the Web

Some time ago I checked out different approaches to web application development (in my free time).  With Python I explored the Pyramid framework as well as Django.  With Scala I did Lift.  And lastly I explored JSF using Groovy as a replacement for Java.

Actually I was quite overwhelmed with Python.  It's not difficult to write anything in Python.  However, writing good code in Python takes considerably more effort and a more thorough understanding of the language.  I stumbled across a lot of metaprogramming then.  I find it grossly difficult to feel secure because, with metaprogramming alongside dynamic typing, most of your application's survival depends on the condition of your gestalt.

Scala lifts much of that mental burden off me I should think.  I like Lift.  I could do more in Lift in the future.  :)

Groovy is, as I probably wrote in my previous post, somewhat in the middle when it comes to typing--a degree of static as well as dynamic typing.  It saves a lot of time and code lines.  In a way it intuitively guides me to write better code.  There can be a lot of metaprogramming in Groovy too, but it's not a show stopper.  You don't necessarily have to understand it to write good code.  Though it's pretty easy (compared to Python's, I think).  It would be much better if Java EE and related stuff were tailored to fit Groovy more.  And better IDE support.  And a way to filter exception stack traces (Seam Catch maybe?).

Wednesday, July 11, 2012

Huh, OpenJPA?

The purpose of this post is primarily to help add hype around OpenJPA (there is a lack of hype, from the community and from Apache/OpenJPA alike).

Contrary to the plain vanilla JPA implementation vibe the project site gives off, OpenJPA might actually have its own distinctive virtue.   A popular phrase among Haskellers perhaps, "do one thing, and do it well."

Configuration might be a little daunting at first because it takes a rather different approach from the one Hibernate and EclipseLink share.  But that aside it has worked well for me (after about two weeks' use for development purposes), and perhaps most importantly its documentation is awesome.  The content feels well thought out.  It comes in the usual online help and PDF formats.  Haven't you ever stumbled across duplicate or outdated content somewhere in EclipseLink's wiki-style 'Documentation Center'?

On that note, Hibernate actually comes with similar documentation formats as well.  The difference is that Hibernate's documentation feels more like a long tutorial (a JBoss thing perhaps?) while OpenJPA's (an Apache thing perhaps?) is more formal.

Anyway one nice feature I've recently found is a native UUID generator assignable via the usual GeneratedValue annotation.  Excerpt from the documentation:

OpenJPA also offers additional generator strategies for non-numeric fields, which you can access by setting strategy to AUTO (the default), and setting the generator string to:
  • uuid-string: OpenJPA will generate a 128-bit type 1 UUID unique within the network, represented as a 16-character string. For more information on UUIDs, see the IETF UUID draft specification at: http://www.ics.uci.edu/~ejw/authoring/uuid-guid/
  • uuid-hex: Same as uuid-string, but represents the type 1 UUID as a 32-character hexadecimal string.
  • uuid-type4-string: OpenJPA will generate a 128-bit type 4 pseudo-random UUID, represented as a 16-character string. For more information on UUIDs, see the IETF UUID draft specification at: http://www.ics.uci.edu/~ejw/authoring/uuid-guid/
  • uuid-type4-hex: Same as uuid-type4-string , but represents the type 4 UUID as a 32-character hexadecimal string.
These string constants are defined in org.apache.openjpa.persistence.Generator.
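
Wired onto an entity, that would look roughly like this (a hedged sketch; the entity and field names are made up, and the id maps to a plain string column):

import javax.persistence.{Entity, GeneratedValue, Id}

// Hypothetical entity using OpenJPA's type 4 UUID generator with the default AUTO strategy.
@Entity
class Note {
 @Id
 @GeneratedValue(generator = "uuid-type4-hex")
 var id: String = _

 var body: String = _
}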

So there goes my preliminary hype around OpenJPA.

Tuesday, June 19, 2012

What's a programmer to do in Indonesia?

One question that often comes up during an interview: "What do you expect from working with company X?"  Considering that the question is usually asked toward the end of the interview, I would think it's only casual chat.  But I'd know better.  So, what do you expect from working with company X?

Sometimes I would blurt out (especially after a rough interview) and say something like, "All of it--good salary, nice environment, awesome colleagues, successful products or services."  Or would you rather hear someone say, "I love programming so I would like to have a lot of challenges coming my way.  I get a kick out of torturing my soul day in and day out."

Basically what I'm trying to say is that, no, the economic situation in Indonesia won't allow a regular programmer to ever become successful enough to live well.  I'm talking about the small salaries, yes.  I'm also talking about high property prices, poorly distributed IT companies across Indonesia (they all flock to central Jakarta), and bad traffic.  So the rhetorical question is why anyone would want to pursue this kind of career in Indonesia at all.

Here's a loose calculation.  With perhaps up to 10 years of experience, one might land a decent position in management--one out of, what, 50 people?  Suppose one is a highly accomplished senior programmer then.  The standard salary for a senior programmer here is, I think, currently at most 15 million rupiahs, with a standard threshold of 30% and a few lucky ones reaching over 25 million rupiahs.  The question is how long it will take from the moment one starts receiving a salary decent enough to save (say 10 million, at age 27) until he owns a decent house.  Assuming one is lucky and steady, saving 30% of salary with a standard 10% annual increase (loosely accounting for inflation and the GDP trend), a house worth 600 million rupiahs will take about 11 years to afford--which is doubtful because once you reach age 30 you become much less marketable in the programming field (in Jakarta).
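
To make the arithmetic explicit, a quick sketch (the numbers are just the assumptions above):

// Rough check of the "about 11 years" figure.
val housePrice  = 600000000.0               // rupiahs
val annualRaise = 0.10                      // yearly salary increase
var yearlySavings = 10000000.0 * 12 * 0.30  // 30% of a 10-million-per-month salary
var saved = 0.0
var years = 0
while (saved < housePrice) {
 saved += yearlySavings
 yearlySavings *= (1 + annualRaise)
 years += 1
}
println(years) // 11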

It's the fault of frameworks, I'd say.  There are so many frameworks to build your application upon that companies can hire only average fresh graduates and still have it up and running.  Hardly any company needs you to be a guru at any of it.  Hardly any company nowadays (in Indonesia) will require you to be adept at lexical analysis, Z notation, serialization mechanisms, or even kernel scheduling.  Chances are that, going forward, more and more experts will find it very hard to land a job befitting their expertise.

So I would instead say, "I expect to leave--have someone to remind me to leave even, to leave the company the moment I save up enough to start my own business."

Wednesday, May 30, 2012

Resources Publishing for WTP Maven Projects

I never realised this before, but apparently these Eclipse builders work really well at avoiding collisions with each other.
It was a nightmare having to 'clean package jboss-as:deploy' the entire EAR again and again even when the change was only in an xhtml file--no recompiling needed!
The EAR project retains its skeleton structure even without the Maven EAR plugin (which is to be expected really, duh), which means you can deploy, undeploy, and redeploy part or all of the EAR into your web container via the usual Add and Remove menu.  What a pleasure!

Saturday, April 7, 2012

Lift as Eclipse WTP Project

Some of us have never used Maven, let alone SBT.  We're used to Eclipse's default build, test, deploy, and reload features.  Why should starting up a Lift web project require knowledge of either?  So here goes:
  1. Make sure you have Eclipse with WTP (comes with the default Java EE package), Scala IDE, and IvyDE installed.  You can opt not to use IvyDE for library management by adding jars manually.  For that you can go to MvnRepository to see the required dependencies.  A basic Lift project usually relies on either lift-mapper or lift-jpa as a single parent dependency.
    One thing to note here is that the Scala compiler generates version-specific bytecode.  This generally means that if you use Scala 2.9.1, you have to compile your project against a version of Lift that's compiled against Scala 2.9.1, and all the Lift dependencies must also be compiled against that version of Scala.  Lift packages carry the Scala version as a suffix in their name, so if you use Scala 2.9.1 you would pick lift-mapper_2.9.1 as the basis of your dependency resolution.
  2. In the IDE create a new Dynamic Web Project.
    Because Lift works with any servlet container that implements the Servlet 2.5 specs, chances are your favourite container will work just fine too.  Personally I would recommend using the latest release of JBoss AS.  It's quite as blazing fast as advertised and perhaps you might want to use some bleeding edge Java EE specs like JPA, CMT, and EJB to help with your software architecture.
  3. To add Scala nature, open file .project and do the following changes:
    1. Add a buildCommand with name org.scala-ide.sdt.core.scalabuilder.
    2. Add a nature with value org.scala-ide.sdt.core.scalanature.
  4. In project→Properties→Java Build Path, add Scala libraries.  Also add it to project→Properties→Deployment Assembly.
  5. Add Lift packages and their dependencies, although...
  6. If you have IvyDE installed:
    1. You can skip this if the default ibiblio Maven repository works OK for you.  It's really slow for me most of the time, so I tend to use Antelink's instead.  To do this, create a file named ivysettings.xml (it can be anywhere in your project) containing:
      <ivysettings>
         <settings defaultResolver='antelink'/>
         <resolvers>
            <ibiblio name="antelink" root="http://maven.antelink.com/content/repositories/central/" m2compatible="true"/>
         </resolvers>
      </ivysettings>
    2. Create an Ivy file using the wizard and add this:
      <dependencies>
         <dependency org="net.liftweb" name="lift-jpa_2.9.1" rev="2.4">
            <exclude org="org.scala-lang" matcher="glob" name="scala*" />
         </dependency>
      </dependencies>
    3. In project→properties→Java Build Path→Libraries click Add Library and select IvyDE Managed Dependencies.  Set the Ivy File, check Enable project specific settings, and set the Ivy settings path.  Also add this Ivy dependency to project→Properties→Deployment Assembly.  When you're done, Ivy will probably take a while to resolve your dependencies.  You can see its progress by displaying the Ivy Console.

  7. A Lift web project generally has this kind of folder structure:
    src
    |_main
      |_resources
      |_scala
      |_webapp
        |_images
        |_static
        |_templates-hidden
        |_WEB-INF
    |_test
      |_scala

    To do this, you will need to remove the default WebContent directory and move everything inside it into src/main/webapp.  You will also need to, in project→Properties→Java Build Path→Source, edit accordingly to point to src/main/resources as well as src/main/scala.
  8. Inside src/main/webapp/WEB-INF you need a web.xml.  This is usually generated automatically upon project creation.  Add a filter with name LiftFilter and class net.liftweb.http.LiftFilter and a filter-mapping with name LiftFilter and url-pattern /*.  You can change the name and url-pattern according to your needs.
  9. The LiftFilter by default looks for a class named Boot inside the package bootstrap.liftweb (src/main/scala/bootstrap/liftweb/Boot.scala) and executes the boot method (def boot() {...}).
    But you can also have this class anywhere else in your classpath.  To do that, your Boot class needs to extend net.liftweb.http.Bootable (a skeletal Boot class is sketched at the end of this post) and you add the following entry to your web.xml's LiftFilter:
    <init-param>
      <param-name>bootloader</param-name>
      <param-value>path.to.your.Boot</param-value>
    </init-param>
  10. Setting up logging.
    1. I have only ever used Logback so this is the logging backend that we'll use here.  I would advise against using IvyDE to manage your Logback dependency because by default Logback depends on a lot of other libraries that you might not even use in your Lift web project (such as Groovy).  You can instead manually download Logback and add logback-access, logback-core, and logback-classic to your build path.
    2. Now you want to set up Lift's default logging backend.  First add a logback.xml in src/main/resources.  Logback configuration doesn't have a schema, so don't bother looking for one; the Logback manual has an example of the most basic configuration.  Next open your Boot class and add the following inside the boot method:
      LiftRules.getResource("/logback.xml") match {
        case Full(url) => Logger.setup = Full(Logback.withFile(url))
        case _ => println("Logback configuration not found.")
      }
  11. Next you can continue by setting up class resolution and site map and creating your first page by utilising templates and binding.  A Lift tutorial book I would totally recommend is Lift in Action by Timothy Perrett.
  12. Lastly you can build, test, and deploy your web application the way you normally do it in Eclipse.  Managing and configuring your web server and deployment are probably much easier using their default Eclipse adapter plugin compared to using Maven.
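
Going back to step 9, a skeletal Boot class might look like this (a minimal sketch; the package it registers and the single rule shown are placeholders):

package bootstrap.liftweb

import net.liftweb.http.{Bootable, LiftRules}

class Boot extends Bootable {
 def boot() {
  // tell Lift where to look up snippets, views, and comet actors
  LiftRules.addToPackages("code")
  // SiteMap, snippet registrations, and the Logback setup from step 10 go here too
 }
}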