Sunday, February 28, 2010

Dependency Injection as Function Currying

Dependency Injection is one of the techniques that I use regularly when I am programming in Java. It's a nice way of keeping an application decoupled from concrete implementations and localizing object creation logic within specific bootstrapping modules. Be it in the form of Spring XML or Guice Modules, the idea is to keep things configurable so that specific components of your application can choose to work with specific implementations of an abstraction.

It so happens that these days I have started looking at things a bit differently. I have been programming more in Scala and Clojure, and the functional paradigms that they encourage and espouse have started manifesting in the way I think about programming. In this post I will look at dependency injection on a different note. At the end of it maybe we will see that this is yet another instance of a pattern melding into the idiomatic use of a powerful language.

In one of my projects I have a class whose constructor has some of its parameters injected and the others manually provided by the application. Guice has a nice extension that does this for you - AssistedInject. It writes the boilerplate stuff by generating an implementation of the factory. You just need to annotate the implementation class' constructor and the fields that aren't known to the injector. Here's an example from the Guice page ..

public class RealPayment implements Payment {
  public RealPayment(
        CreditService creditService,  // injected
        AuthService authService,      // injected
        @Assisted Date startDate,     // caller to provide
        @Assisted Money amount) {     // caller to provide
    //..
  }
}

Then in the Guice module we bind a Provider<Factory> ..

    bind(PaymentFactory.class).toProvider(
        FactoryProvider.newFactory(PaymentFactory.class, RealPayment.class));

The FactoryProvider maps the create() method's parameters to the corresponding @Assisted parameters in the implementation class' constructor. For the other constructor arguments, it asks the regular Injector to provide values.

So the basic issue that AssistedInject solves is to finalize (close) some of the parameters at the module level to be provided by the injector, while keeping the abstraction open for the rest to be provided by the caller.

On a functional note this sounds a lot like currying .. The best rationale for currying is to allow for partial application of functions, which does the same thing as above in offering a flexible means of keeping parts of your abstraction open for later pluggability.
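To make the analogy concrete before we get to the real abstraction, here's a minimal sketch of partial application (the function and names are illustrative, not part of the Guice example) ..

```scala
// a two-argument function; we fix the first argument up front
// and leave the second open for the caller
def payment(service: String, amount: Int): String =
  service + " charged " + amount

// partially applied: the service is closed, the amount stays open
val viaPayPal: Int => String = payment("PayPal", _: Int)

println(viaPayPal(100))  // PayPal charged 100
```

The placeholder `_: Int` is all it takes to get back a new function object with only the open parameters.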

Consider the above abstraction modeled as a case class in Scala ..

import java.util.Date

trait CreditService
trait AuthService

case class RealPayment(creditService: CreditService,
                       authService: AuthService,
                       startDate: Date,
                       amount: Int)

One of the features of a Scala case class is that it generates a companion object automatically along with an apply method that enables you to invoke the class constructor as a function object ..

val rp = RealPayment( //..

is in fact syntactic sugar for RealPayment.apply( //.., which gets called implicitly. But you know all that .. right ?
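For a plain class you would write that companion apply by hand; a tiny sketch of what the compiler generates (Point is an illustrative name) ..

```scala
class Point(val x: Int, val y: Int)

// the companion object's apply lets us drop the `new`
object Point {
  def apply(x: Int, y: Int): Point = new Point(x, y)
}

val p = Point(3, 4)  // sugar for Point.apply(3, 4)
println(p.x)         // 3
```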

Now for a particular module, say I would like to finalize on PayPal as the CreditService implementation, so that users don't have to pass this parameter repeatedly - just like the injector of your favorite dependency injection provider does. I can do this in a functional way and pass a partially applied function to all users of the module ..

scala> case class PayPal(provider: String) extends CreditService
defined class PayPal

scala> val paypalPayment = RealPayment(PayPal("bar"), _: AuthService, _: Date, _: Int)
paypalPayment: (AuthService, java.util.Date, Int) => RealPayment = <function>

Note how the Scala interpreter now treats paypalPayment as a function from (AuthService, java.util.Date, Int) => RealPayment. The underscore acts as a placeholder that helps Scala create a new function object with only those parameters. In our case the new function takes only the three parameters for which we used the placeholder syntax. From your application's point of view it means that we have closed the abstraction partially by finalizing the provider for the CreditService implementation and left the rest of it open. Isn't this precisely what the Guice injector was doing above, injecting some of the objects at module startup ?

Within the module I can now invoke paypalPayment with only the 3 parameters that are still open ..

scala> case class DefaultAuth(provider: String) extends AuthService
defined class DefaultAuth

scala> paypalPayment(DefaultAuth("foo"), java.util.Calendar.getInstance.getTime, 10000)
res0: RealPayment = RealPayment(PayPal(bar),DefaultAuth(foo),Sun Feb 28 15:22:01 IST 2010,10000)

Now suppose for some modules I would like to close the abstraction for the AuthService as well, in addition to freezing PayPal as the CreditService. One alternative would be to define another abstraction like paypalPayment through partial application of RealPayment, where we close both parameters. A better option would be to reuse the paypalPayment abstraction and use explicit function currying. Like ..

scala> val paypalPaymentCurried = Function.curried(paypalPayment)
paypalPaymentCurried: (AuthService) => (java.util.Date) => (Int) => RealPayment = <function>

and closing it partially using the DefaultAuth implementation ..

scala> val paypalPaymentWithDefaultAuth = paypalPaymentCurried(DefaultAuth("foo"))
paypalPaymentWithDefaultAuth: (java.util.Date) => (Int) => RealPayment = <function>

The rest of the module can now treat this as an abstraction that uses PayPal for CreditService and DefaultAuth for AuthService. Like Guice we can have hierarchies of modules that inject these settings and publish a more specialized abstraction to downstream clients.
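Putting the whole session together as a self-contained sketch. One note: on recent Scala versions Function.curried is gone; the equivalent is the curried method on the function value itself, which is what I use below ..

```scala
import java.util.Date

trait CreditService
trait AuthService
case class PayPal(provider: String) extends CreditService
case class DefaultAuth(provider: String) extends AuthService

case class RealPayment(creditService: CreditService,
                       authService: AuthService,
                       startDate: Date,
                       amount: Int)

// close the CreditService through partial application
val paypalPayment = RealPayment(PayPal("bar"), _: AuthService, _: Date, _: Int)

// curry, then close the AuthService as well
val paypalPaymentWithDefaultAuth = paypalPayment.curried(DefaultAuth("foo"))

val today = new Date
val payment = paypalPaymentWithDefaultAuth(today)(10000)
println(payment.amount)  // 10000
```

Each application closes one more parameter, exactly the module hierarchy described above.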

Monday, February 22, 2010

DSL : Grow your syntax on top of a clean semantic model

A DSL primarily has two components - a semantic model that abstracts the underlying domain and a linguistic abstraction on top that speaks the dialect of the user. The semantic model is the model of the domain where you can apply all the principles of DDD that Eric Evans espouses. And the linguistic abstraction is a thin veneer on top of the underlying model. The better abstracted your model is, the easier the construction of the layer on top of it will be. Here's a general architecture of a DSL engineering stack :-

It's interesting to observe that the two components of the stack evolve somewhat orthogonally.

The Semantic Model evolves Bottom Up

The semantic model usually evolves in a bottom up fashion - larger abstractions are formed from smaller abstractions using principles of composition. It can be through composition of traits or objects or it can be through composition of functions as well. How beautiful your compositions can be depends a lot on the language you use. But it's important that the semantic model also speaks the language of the domain.

Here's an example code snippet from my upcoming book DSLs In Action that models the business rules for a trading DSL. When you do a trade on a stock exchange you get charged a list of tax and fee components depending on the market where you execute the trade. The following snippet models a business rule using Scala that finds out the list of applicable tax/fee heads for a trade ..

class TaxFeeRulesImpl extends TaxFeeRules {
  override def forTrade(trade: Trade): List[TaxFee] = {
    (forHKG orElse forSGP orElse forAll)(trade.market)
  }

  val forHKG: PartialFunction[Market, List[TaxFee]] = {
    case HKG =>
      // in real life these can come from a database
      List(TradeTax, Commission, Surcharge)
  }

  val forSGP: PartialFunction[Market, List[TaxFee]] = {
    case SGP =>
      List(TradeTax, Commission, Surcharge, VAT)
  }

  val forAll: PartialFunction[Market, List[TaxFee]] = {
    case _ => List(TradeTax, Commission)
  }
}

The method forTrade clearly expresses the business rule, which reads almost as expressive as the English version ..

"Get the Hong Kong specific list for trades executed on the Hong Kong market OR Get the Singapore specific list for trades executed on the Singapore market OR Get the most generic list valid for all other markets"

Note how Scala PartialFunctions can be chained together with orElse to give the above model an expressive yet succinct syntax.
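The chaining itself is easy to try out in isolation. Here's a runnable distillation where the market and tax/fee types are reduced to case objects and strings for illustration - the names mirror the snippet above but the types are simplified ..

```scala
sealed trait Market
case object HKG extends Market
case object SGP extends Market
case object TKY extends Market

val forHKG: PartialFunction[Market, List[String]] = {
  case HKG => List("TradeTax", "Commission", "Surcharge")
}
val forSGP: PartialFunction[Market, List[String]] = {
  case SGP => List("TradeTax", "Commission", "Surcharge", "VAT")
}
val forAll: PartialFunction[Market, List[String]] = {
  case _ => List("TradeTax", "Commission")
}

// orElse tries each partial function in turn until one is defined
val forTrade = forHKG orElse forSGP orElse forAll

println(forTrade(SGP))  // List(TradeTax, Commission, Surcharge, VAT)
println(forTrade(TKY))  // List(TradeTax, Commission)
```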

The Language Interface evolves Top Down

Here you start with the domain user. What dialect does he use on the trading desk ? And then you try to build an interpreter around that which uses the services that the semantic model publishes. I call this thin layer of abstraction a DSL Facade that sits between your DSL script and the underlying domain model and acts as the glue.

It also depends a lot on the host language as to how you would like to implement the facade. With a language like Lisp, macros can come in very handy in designing an interpreter layer for the facade. And with macros you do bottom up programming, bending the host language to speak your dialect.

When you are developing an external DSL, the EBNF rules that you specify act as the DSL Facade for growing your syntax. Within the rules you can use foreign code embedding to interact with your semantic model.

In summary, when you design a DSL, the semantic model is as important as the dialect that it speaks. Having a well designed semantic model is an exercise in designing well-engineered abstractions. And as I mention in my book, the four qualities of good abstractions are minimalism, distillation, extensibility and composability.

Sunday, February 14, 2010

Why I don't like ActiveRecord for Domain Model Persistence

When it comes to rich domain modeling, I am not a big fan of the ActiveRecord model. The biggest problem that it entails is invasiveness - the persistence model invades my domain model. And this was also my first reaction to the Lift-CouchDB integration module that was released recently.

The moment I say ..

class Person extends CouchRecord[Person] {
  //..
}

my domain model becomes tied to the persistence concerns.

When I started scouchdb, my very first thought was to make it non-invasive. The Scala objects must be pure and must remain pure and completely oblivious of the underlying persistence model. In the age of polyglot persistence there is every possibility that you may need to persist a domain model across multiple storage engines. I may be using a JPA backed relational store as the enterprise online database, which gets synchronized with some offline processing that comes from a CouchDB backend. I need the domain model persistence in both my Oracle and my CouchDB engines. The ActiveRecord pattern is difficult to scale to such requirements.

Here's what I implemented in scouchdb ..

// Scala abstraction : pure
case class ItemPrice(store: String, item: String, price: Number)

// specification of the db server running
val couch = Couch("")
val item_db = Db("item_db")

// create the database
couch(item_db create)

// create the Scala object : a pure domain object
val s = ItemPrice("Best Buy", "mac book pro", 3000)

// create a document for the database with an id
val doc = Doc(item_db, "best_buy")

// add
couch(doc add s)

// query by id to get the id and revision of the document
val id_rev = couch(item_db by_id "best_buy")

// query by id to get back the object
// returns a tuple3 of (id, rev, object)
val sh = couch(item_db by_id("best_buy", classOf[ItemPrice]))

// got back the original object
sh._3.item should equal(s.item)
sh._3.price should equal(s.price)

It's a full cycle session of interaction with the CouchDB persistence engine without any intrusion into the domain abstraction. I am free to use ItemPrice domain abstraction for a relational storage as well.

CouchDB offers a model of persistence where the objects that we store should be close to the granularity of domain abstractions. I should be able to store the entire Aggregate Root of my model components directly as JSON. ActiveRecord model offers a lower level of abstraction and makes you think more in terms of persistence of the individual entities. The thought process is so relational that you ultimately end up with a relational model both in terms of persistence and domain. With CouchDB you need to think in terms of documents and views and NOT in terms of relations and tables. I blogged on this same subject some time back.

The philosophy that I adopted in scouchdb was to decouple the domain entities from the persistence layer. You hand over a pure Scala object to the driver, it will extract a JSON model from it and write it to CouchDB. I use sjson for this serialization. sjson works totally based on reflection and can transparently serialize and deserialize Scala objects that you hand over to it. From this point of view, the three aspects of managing domain abstractions, JSON serialization and persistence into CouchDB are totally orthogonal. I think this is difficult to get with an ActiveRecord based model.
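This is not sjson itself, but the underlying idea can be sketched. Since every Scala case class implements Product, a generic serializer can walk its fields without the domain class knowing anything about JSON. The writer below is deliberately naive (strings and numbers only, no escaping) and uses productElementName, which needs Scala 2.13+ ..

```scala
case class ItemPrice(store: String, item: String, price: Int)

// a naive JSON writer that works on any case class purely
// through the Product interface - the domain abstraction
// stays oblivious of persistence and serialization
def toJson(p: Product): String =
  (0 until p.productArity).map { i =>
    val value = p.productElement(i) match {
      case s: String => "\"" + s + "\""
      case other     => other.toString
    }
    "\"" + p.productElementName(i) + "\":" + value
  }.mkString("{", ",", "}")

println(toJson(ItemPrice("Best Buy", "mac book pro", 3000)))
// {"store":"Best Buy","item":"mac book pro","price":3000}
```

The three concerns - domain abstraction, JSON serialization and persistence - stay orthogonal, exactly the decoupling described above.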

Sunday, February 07, 2010

Scala Self-Type Annotations for Constrained Orthogonality

I talked about orthogonality in design in one of my earlier posts. We had a class Address in Scala and we saw how we can combine it with other orthogonal concerns without polluting the core abstraction. We could do this because Scala offers a host of capabilities to compose smaller abstractions and build larger wholes out of them. A language is orthogonal when it allows such composition without overlap in functionality between the composing feature sets.

The design of Scala offers many orthogonal features - I showed some of them in my earlier post. The power of mixins for adding orthogonal features ..

val a = new Address(..) with LabelMaker {
  override def toLabel = {
    //..
  }
}
and the power of Scala views with implicits ..

object Address {
  implicit def AddressToLabelMaker(addr: Address) = new LabelMaker {
    def toLabel = //..
  }
}
Here Address and LabelMaker are completely unrelated and offer truly orthogonal capabilities when mixed in. However there can be cases where the mixins themselves are not completely orthogonal to the core abstractions, but are really optional extensions to them. In fact the mixins implement some functionality that may depend on the core abstraction as well. Let's see yet another feature of the Scala type system that makes this modeling wholesome.

Consider the following abstraction for a security trade that takes place in a stock exchange ..

// details elided for clarity
case class Trade(refNo: String, account: String,
                 instrument: String, quantity: Int,
                 unitPrice: Int) {
  // principal value of the trade
  def principal = quantity * unitPrice
}
For every trade executed on the exchange we need to have a set of tax and fees associated with it. The exact set of tax and fees depend on a number of factors like type of trade, instruments traded, the exchange where it takes place etc. Let's have a couple of tax/fee traits that model this behavior ..

trait Tax {
  def calculateTax = //..
}

trait Commission {
  def calculateCommission = //..
}

In the above definitions both methods calculateTax and calculateCommission depend on the trade being executed. One option is to keep them abstract in the above traits and provide their implementations after mixing in with Trade ..

val t = new Trade(..) with Tax with Commission {
  // implementations
  def calculateTax = principal * 0.2
  def calculateCommission = principal * 0.15
}

I did it at the instance level. You can very well use this idiom at the class level and define ..

class RichTrade extends Trade with Tax with Commission {
  //..
}

However the above composition does not clearly bring out the fact that the domain rules mandate that the abstractions Tax and Commission should be constrained to be used with the Trade abstraction only.

Scala offers one way of making this knowledge explicit at the type level .. using self-type annotations ..

trait Tax { this: Trade =>
  // refers to principal of trade
  def calculateTax = principal * 0.2
}

trait Commission { this: Trade =>
  // refers to principal of trade
  def calculateCommission = principal * 0.15
}

The traits are still decoupled. But using Scala's self type annotations you make it explicit that Tax and Commission are meant to be used *only* by mixing them with Trade.

val t = new Trade(..) with Tax with Commission

Can I call this constraining the orthogonality of abstractions ? Tax and Commission provide orthogonal attributes to Trade optionally and publish their constraints explicitly in their definitions. It's not much of a difference from the earlier implementations. But I prefer to use this style to make abstractions closer to what the domain speaks.
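The whole idiom, as a self-contained runnable sketch (the trade values are illustrative) ..

```scala
case class Trade(refNo: String, account: String,
                 instrument: String, quantity: Int,
                 unitPrice: Int) {
  def principal = quantity * unitPrice
}

// the self-type says: this trait can only be mixed into a Trade
trait Tax { this: Trade =>
  def calculateTax = principal * 0.2
}

trait Commission { this: Trade =>
  def calculateCommission = principal * 0.15
}

val t = new Trade("r-1", "a-1", "IBM", 100, 12) with Tax with Commission
println(t.calculateTax)         // 240.0
println(t.calculateCommission)  // 180.0
```

Try mixing Tax into anything that isn't a Trade and the compiler will reject it - the constraint is published right in the trait's definition.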