Friday, November 25, 2011

Building a Scala Restful web service

Given how little comprehensive documentation I've seen out there, I figured I'd document some of my trials and tribulations learning Scala and working in the Scala world.  I'll add the very large disclaimer here that, as I'm just learning things, I'm by no means a Scala expert, so take nothing here as perfect, or even good.  Feel free to admonish my techniques in the comments if you wish; I'll try to post corrections where I can.

As many who know me know, I play EVE Online.  It's a big MMO which has a significant API and database dump available for players to work with.  I've been working on and off over the years on tools to work with the EVE universe through the data and the API.  I'm going to use this platform as my first experiment with Scala.

I have the Programming in Scala book; I bought it some time ago and it languished on my shelf for a long time.  I finally picked it up and started working with it, but I found it severely wanting.  When I start a new language like this, I want working examples that do stuff.  I want to see how language concepts work in practice, not in examples so clinical they're more or less worthless.  To that end, I started writing code based on what I had read and what I could google.

I've picked the O/R Broker database API and Jersey to run my Rest service.  I remember reading about a Rest server that Twitter used somewhere, but I can't seem to find it now, so I'm going with something else that's pretty well known.

O/R Broker is a bit verbose, but I like how it uses real SQL and case classes to achieve a pretty effective ORMish style system.

What follows is a simple example of serving up two services: a solar system information service, and a route service that shows a path between two solar systems through jump gates.  I'm guessing if you're reading this, you can figure out the database structure easily enough, and I'll leave acquiring the EVE database as an exercise for the reader if you really want to do it (though I'll be happy to answer questions on that if anyone cares).

Starting with the SQL and moving upward: with ORBroker you create plain text files containing the SQL, map them with Token objects, then use those to make read calls.  I'm using Maven as my build tool, so directory naming follows Maven conventions for the most part; I think I started with a Scala archetype, but I don't fully recall.

First, I'm designing the SQL to return the values I want to use from the appropriate tables.  To start with, I'm going to retrieve only basic information about a solar system; following the token-to-filename convention, this lives in something like src/main/sql/orbroker/selectSolarSystem.sql:

select
  a.solar_system_id,
  a.solar_system,
  a.x,
  a.y,
  a.z,
  a.security
from
  solar_system a
where
  a.solar_system = :solarSystemName


I defined the model class for our SolarSystem object.  It's created as a case class since that's what ORBroker expects, and case classes bring a number of benefits, including public members, plus many others that, to be honest, I don't fully understand yet.

src/main/scala/com/plexq/eve/model/SolarSystem.scala:

case class SolarSystem (id: Option[Long],
                        name: String,
                        x: Double,
                        y: Double,
                        z: Double,
                        security: Double)

The next piece is mapping the SQL and model classes to extractor objects.  These objects are responsible for turning SQL results into Scala objects.  In the first service we have only a simple type, so we extend the RowExtractor base class:

src/main/scala/com/plexq/eve/db/SolarSystemExtractor.scala:

object SolarSystemExtractor extends RowExtractor[SolarSystem] {
  def extract(row: Row) = {
    new SolarSystem(
      row.bigInt("solar_system_id"),
      row.string("solar_system").get,
      row.decimal("x").get.doubleValue(),
      row.decimal("y").get.doubleValue(),
      row.decimal("z").get.doubleValue(),
      row.decimal("security").get.doubleValue()
    )
  }
}

The row data is mapped into a constructor call for the model object, but because a SQL query can legitimately return null for a column, the type of a row field is an Option type.  Option in Scala is a type used to distinguish explicitly between absent values and actual values; among other things, it makes null handling type safe.  The get method on an Option retrieves the actual value, assuming there is one; calling get on None throws a NoSuchElementException.
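To make the Option behavior concrete, here's a standalone sketch, separate from the service code:

```scala
// Some wraps a value; None marks absence.
val present: Option[String] = Some("Jita")
val absent: Option[String] = None

println(present.get)               // Jita
println(present.getOrElse("n/a"))  // Jita
println(absent.getOrElse("n/a"))   // n/a

// Calling get on None throws a NoSuchElementException:
try {
  absent.get
} catch {
  case e: NoSuchElementException => println("get on None threw " + e)
}
```

getOrElse is usually the safer choice when a sensible default exists, since it never throws.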

Now we want to perform the database operation in the context of a Rest call, which using Jersey is pretty easy:

@Path("/solarSystem")
class SolarSystemService(@QueryParam("name") name: String) {
  @Produces(Array("text/plain"))
  @GET
  def getSystem = {
    val broker = DatabaseContainer.getBroker
    broker.readOnly() { session =>
      session.selectOne(Tokens.selectSolarSystem, "solarSystemName"->name) match {
        case Some(p) => {
            "Solar System:" + p.name + "\n" +
            "Security:" + p.security +"\n" +
            "x:" + p.x + "\n" +
            "y:" + p.y + "\n" +
            "z:" + p.z + "\n"
        }
        case _ => "Failed to find solar system "+name
      }
    }
  }
}

I'm just sending back plain text for the time being so that I can easily check whether the output is correct.  I've found that producing XML or JSON is surprisingly tricky in Scala so far; the mechanisms I've tried didn't work out of the box, or as designed/described.

We also need to create the object to contain our database information, modify to your local environment as usual:

object DatabaseContainer {
  val ds = new PGSimpleDataSource();
  ds.setServerName("localhost")
  ds.setUser("eve")
  ds.setPassword("xxxx")
  ds.setDatabaseName("eve")
  val builder = new BrokerBuilder(ds)
  FileSystemRegistrant(new java.io.File("src/main/sql/orbroker")).register(builder)
  builder.verify(Tokens.idSet)
  val broker = builder.build()

  def getBroker = broker
}

The final piece of the puzzle is a web.xml that initializes the Jersey servlet.  I'm using Jetty as a container here because I can't be arsed to get Glassfish sorted out, and I like Jetty:

src/main/webapp/WEB-INF/web.xml:
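The file itself didn't make it into the post, so here's a minimal sketch of what a Jersey 1.x web.xml for this setup typically looks like.  The servlet class and package-scanning property are the standard Jersey 1.x ones; the package name com.plexq.eve is my assumption based on the source paths above:

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <servlet>
    <servlet-name>Jersey</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <init-param>
      <!-- Tell Jersey which packages to scan for @Path resources -->
      <param-name>com.sun.jersey.config.property.packages</param-name>
      <param-value>com.plexq.eve</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>Jersey</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```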

We can now run this web service and pull back information on a solar system!

I'm going to throw some stuff up here about getting JSON out of this whole thing, though with my current knowledge it seems a subject that is unclear at best, at least if you want to do it concisely with Jersey.  Lift seems overly complex for what I want, so I want to do it outside of that framework.  There is some support in Jackson for Scala, but it doesn't seem to work quite right, at least the way I have it configured.

To get it going, I've changed web.xml, SolarSystem.scala and SolarSystemService.scala.  I added the POJO option to the Jersey container:
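That snippet didn't make it into the post either; the "POJO option" is, I believe, Jersey's POJOMappingFeature, enabled with an extra init-param on the servlet in web.xml, something like:

```xml
<init-param>
  <param-name>com.sun.jersey.api.json.POJOMappingFeature</param-name>
  <param-value>true</param-value>
</init-param>
```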

I created a Jackson mapper in my Service object and updated the output to use that instead of my String concatenation:

@Path("/solarSystem")
class SolarSystemService(@QueryParam("name") name: String) {
  @Produces(Array(MediaType.APPLICATION_JSON))
  @GET
  def getSystem = {
    val mapper = new ObjectMapper()
    mapper.registerModule(DefaultScalaModule)

    val broker = DatabaseContainer.getBroker
    broker.readOnly() { session =>
      session.selectOne(Tokens.selectSolarSystem, "solarSystemName"->name) match {
        case Some(p) => {
          mapper.writeValueAsString(p)
        }
        case _ => "Failed to find solar system "+name
      }
    }
  }
}

And for the most perplexing part, I updated the SolarSystem object to define explicit getters.  This is a bit odd because Jackson is supposed to cope with public fields, but it's not working for me.  I've read some things about version incompatibilities, so maybe that's it, but I'm going with what I have so far:

case class SolarSystem (id: Option[Long],
                        name: String,
                        x: Double,
                        y: Double,
                        z: Double,
                        security: Double) {
  def getName = name
  def getId = id
  def getSecurity = security
  def getX = x
  def getY = y
  def getZ = z
}

Now when I run the service and ask for information on Jita, I get the following:

{"id":1358,"name":"Jita","x":-1.29064861734878E17,"y":6.07553069099636E16,"z":-1.1746922706009E17,"security":0.945913116664839}

Much easier to digest.

In the next service, I want to build a route between two solar systems, kind of a travel plan.  To do this I'm going to need to retrieve a list of systems that a given system leads to, a list of destinations.  Constructing the SQL for this is a little more interesting, but not particularly challenging:

src/main/sql/orbroker/selectDestinations.sql:

select
  a.solar_system_id,
  a.solar_system,
  a.x,
  a.y,
  a.z,
  a.security,
  d.solar_system_id as destination_id,
  d.solar_system as destination_name
from
  solar_system a,
  stargate b,
  stargate c,
  solar_system d
where
  b.solar_system_id=a.solar_system_id
  and b.destination_id=c.stargate_id
  and c.solar_system_id=d.solar_system_id
  and a.solar_system = :solarSystemName

As you can see above, we're performing a join, so we need two of the extractor types provided by ORBroker: a RowExtractor and a JoinExtractor.  The information we are retrieving here is a one-to-many relationship between a solar system and its destinations.  A RowExtractor is responsible for the most frequent output information: the data from the join that is unique on each row and represents the child objects, which in this case is the destination solar systems.  The extractor we already have for SolarSystem is fine for that.  The low frequency information, the parent object, is the source solar system, so it is extracted using the JoinExtractor.  The JoinExtractor needs to know which field identifies the parent record so that it can separate the columns that belong to the parent from those that belong to the children; the identity column is provided by overriding the 'key' property.  All rows that share this identity are assumed to form a single parent object, and the varying entries within that set are mapped as children of that parent.

src/main/scala/com/plexq/eve/db/SolarSystemDestinationExtractor.scala:

object SolarSystemDestinationExtractor extends JoinExtractor[SolarSystemDestination] {
  val key = Set("solar_system_id")

  def extract(row: Row, join: Join) = {
    new SolarSystemDestination(
      new SolarSystem(
        row.bigInt("solar_system_id"),
        row.string("solar_system").get,
        row.decimal("x").get.doubleValue(),
        row.decimal("y").get.doubleValue(),
        row.decimal("z").get.doubleValue(),
        row.decimal("security").get.doubleValue()
      ),
      join.extractSeq(SolarSystemExtractor, Map("solar_system_id"->"destination_id", "solar_system" -> "destination_name"))
    )
  }
}


Given the data structure and the constructors above, we can define the new model class, SolarSystemDestination:

src/main/scala/com/plexq/eve/model/SolarSystemDestination.scala:

case class SolarSystemDestination(solarSystem : SolarSystem,
                                  destination : IndexedSeq[SolarSystem])

Now we have enough code to store the result of a SQL query that retrieves information about solar system destinations.  We need some code to read it and turn it into a route.  I'm building a simple tree structure here that contains a route-in-progress:

src/main/scala/com/plexq/eve/map/RouteTree.scala:

class RouteTree(solarSystem: SolarSystem) {
  var nodes : List[RouteTree] = List[RouteTree]()

  def contains(v: SolarSystem) : Boolean = (v.name == solarSystem.name) || nodes.exists {_.contains(v)}

  def leaves() : List[RouteTree] = {
    nodes.length match {
      case 0 => List(this)
      case _ => nodes.flatMap {x=>x.leaves()}
    }
  }

  def getSolarSystem : SolarSystem = solarSystem
  def setNodes(n : List[RouteTree]) : RouteTree = {
    nodes = n
    this
  }

  def path(end: SolarSystem) : List[SolarSystem] = {
    if (contains(end)) {
      nodes.length match {
        case 0 => List(solarSystem)
        // Follow the one branch that contains the destination
        case _ => solarSystem :: nodes.find {_.contains(end)}.get.path(end)
      }
    }
    else List()
  }

  def count : Int = (nodes.length/:nodes)(_+_.count)
}

and a builder object to construct a route:

class RouteBuilder {
  def buildRoute(broker: Broker, route: RouteTree, end: SolarSystem) : List[SolarSystem] = {
    val s = route.count

    route.leaves().foreach { x : RouteTree =>
      x.setNodes(SolarSystemDataService.getSolarSystemDestinations(broker, x.getSolarSystem.name).filterNot {
        route.contains(_)
      }.map {
        new RouteTree(_)
      }.toList)
    }

    /* Bug out if the list didn't get any bigger */
    if (route.count == s) {
      return List()
    }

    route.contains(end) match {
      case false => buildRoute(broker, route, end)
      case _ => {
        route.path(end)
      }
    }
  }
}
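Stripped of the database plumbing, the expansion loop above is doing a breadth-first search.  Here's a self-contained sketch of the same idea over a hard-coded adjacency map; the system names are real, but this tiny graph is my own simplification:

```scala
// Jump-gate connections as a simple adjacency map.
val gates: Map[String, List[String]] = Map(
  "Jita"      -> List("Perimeter"),
  "Perimeter" -> List("Jita", "Urlen"),
  "Urlen"     -> List("Perimeter", "Sirppala"),
  "Sirppala"  -> List("Urlen")
)

// Expand the frontier level by level, remembering the path to each system.
def route(start: String, end: String): List[String] = {
  def bfs(frontier: List[List[String]], seen: Set[String]): List[String] =
    frontier match {
      case Nil => List()                    // frontier exhausted: no route
      case path :: rest =>
        if (path.head == end) path.reverse  // paths are built head-first
        else {
          val next = gates.getOrElse(path.head, List())
            .filterNot(seen.contains)
            .map(_ :: path)
          bfs(rest ::: next, seen ++ next.map(_.head))
        }
    }
  bfs(List(List(start)), Set(start))
}

println(route("Jita", "Sirppala"))  // List(Jita, Perimeter, Urlen, Sirppala)
```

Because the frontier is expanded a whole level at a time before recursing deeper, the first path found is also a shortest one in jumps, which is the same property the tree-growing recursion above relies on.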

Now we can create our service class:

@Path("/route")
class RouteService(@QueryParam("start") start: String,
                   @QueryParam("end") end: String
                  ) {
    @Produces(Array(MediaType.APPLICATION_JSON))
    @GET
    def getRoute = {
      val mapper = new ObjectMapper()
      mapper.registerModule(DefaultScalaModule)

      val broker = DatabaseContainer.getBroker
      val error : scala.collection.mutable.ListBuffer[String] = ListBuffer()

      val routeTree = broker.readOnly() { session =>
        session.selectOne(Tokens.selectSolarSystem, "solarSystemName"->start) match {
          case Some(p) => new RouteTree(p)
          case None => {
            error+=("Failed to find Start System "+start)
            null
          }
        }
      }

      val endSystem : SolarSystem = broker.readOnly() { session =>
        session.selectOne(Tokens.selectSolarSystem, "solarSystemName"->end) match {
          case Some(p) => p
          case None => {
            error+=("Failed to find End System "+end)
            null
          }
        }
      }

      error.length match {
        case 0 => {
          mapper.writeValueAsString(new RouteBuilder().buildRoute(broker, routeTree, endSystem))
        }
        case _ => {
          mapper.writeValueAsString(error.toList)
        }
      }
    }
}

In our service class we check to make sure the start and end systems exist; I'm thinking there has to be a better way to do it than this, but it works.  The main difference here is that the mapper now registers the DefaultScalaModule.  This is provided by the jackson-scala dependency, and copes with Scala classes that the default Java bindings don't, like List objects, which is what we get back in this case.  Now when we ask for the route from Jita to Amarr, we get back a nice JSON list:


[{"id":1358,"name":"Jita","x":-1.29064861734878E17,"y":6.07553069099636E16,"z":-1.1746922706009E17,"security":0.945913116664839},
{"id":1360,"name":"Perimeter","x":-1.29064861734878E17,"y":6.07553069099636E16,"z":-1.1746922706009E17,"security":0.945913116664839},
{"id":1355,"name":"Urlen","x":-1.43265233088943008E17,"y":6.4923714928938896E16,"z":-1.04178623206742E17,"security":0.953123230586721},
{"id":4028,"name":"Sirppala","x":-1.39376796022883008E17,"y":7.1476647043998E16,"z":-9.9524016578104608E16,"security":0.959995210823471},
{"id":4025,"name":"Inaro","x":-1.37550934148756E17,"y":7.8077592063385904E16,"z":-8.6193987480218304E16,"security":0.88322702239899},
{"id":4026,"name":"Kaaputenen","x":-1.3575371976577E17,"y":7.79504770996252E16,"z":-8.2362867465824608E16,"security":0.836977572149063},
{"id":4750,"name":"Niarja","x":-1.38143247136544992E17,"y":6.6032260761458E16,"z":-7.5317306241481296E16,"security":0.779168558516838},
{"id":4749,"name":"Madirmilire","x":-1.84441638429595008E17,"y":4.9352410074477104E16,"z":2.47548529253837E16,"security":0.541991606217488},
{"id":4736,"name":"Ashab","x":-1.86411855995097984E17,"y":5.1383654517254496E16,"z":2.95630167990767E16,"security":0.603228207163472},
{"id":3412,"name":"Amarr","x":-1.95782999035935008E17,"y":5.4527294158362E16,"z":5.51292598732268E16,"security":0.909459990985168}]



After this, I start to descend into madness around a filter mechanism to allow routes only through high-sec etc., and a service for capital jump planning, but that's another story!


I'm not a Scala expert by far; this is pretty much my first Scala app, so suggestions and comments are welcome.  I hope this was useful.  I can post Maven deps if that's useful, but largely it's just Scala, Jackson, ORBroker and Jersey; I grabbed the latest versions of each using mvnrepository.com.

Thursday, November 24, 2011

Functional Programming


I've been working in the US as a programmer for over a decade, and during that time, I've worked with a huge variety of engineers and hackers alike.

If you asked most people what the difference was, I think many would have a hard time quantifying it well.  My mind goes back to my first semester at Southampton University and our functional programming course, which was taught in SML.  The simplest problem, how to reverse a list, was a challenge in functional programming that many couldn't wrap their heads around at first.  The idea of calling a function recursively to perform what in an imperative language would be a loop was quite foreign, even to those of us who had done quite a significant amount of programming in our youth.

Here's an example of a solution in Scala (yes, I know there's a built in call for this, it's an example okay?):

def myReverse(x: List[String]): List[String] = (List[String]() /: x)((acc, y) => y :: acc)
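The same fold can also be written generically, using the alphabetic foldLeft name instead of the /: symbol; a sketch:

```scala
// Reverse by folding left: prepend each element to the accumulator,
// so the last element folded ends up at the front.
def myReverse[T](xs: List[T]): List[T] =
  xs.foldLeft(List[T]())((acc, y) => y :: acc)

println(myReverse(List("a", "b", "c")))  // List(c, b, a)
```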


Some folks might write something like this in Java:

public <T> List<T> reverse(List<T> input) {
  List<T> l = new ArrayList<T>();
  for (int a = 0; a<input.size(); a++) {
    l.add(input.get(input.size()-a-1));
  }
  return l;
}

Or maybe even:

public <T> void reverse(List<T> input) {
  for (int a = 0; a<(input.size()/2); a++) {
    T x = input.get(a);
    input.set(a,input.get(input.size()-a-1));
    input.set(input.size()-a-1, x);
  }
}

Modify the list in place, what a good idea, eh?  Sometimes, but mostly not so much, and many can't tell you why.  (And yes, there are slightly more elegant ways in pure Java, such as Collections.reverse, but few use them.)  As we add complexity to this problem, the code gets increasingly worse.

As we add other features that are really great if you know how to use them, simple things like list comprehensions and similar, the code starts getting more obscure looking, but better.

And therein lies the problem.  As we write more "efficient" code, it requires more and more skill to manipulate.  At what point do you draw the line and decide you've gone from good code to obscure code?

I have a few simple rules I write code by, and the first one is "put shit where it goes".  It's a simple expression for avoiding tangling and scattering, a concept I learned about much later.  Some mechanisms fundamentally break this rule.  One such mechanism is Aspect Oriented Programming.  It is a very, very powerful tool, and in the right hands, amazing.  The problem is that it is probably the very definition of obscure.  Without a good toolset, you'd have no idea that the code even existed or was being executed against one of your classes, particularly if it didn't show up in a stack trace because the problem was being precipitated as a knock-on effect.  It could take a long time, possibly forever, to track down a problem.

Functional programming is less so, and I think keeps to the "put shit where it goes" rule much better.  With things like list comprehensions and closures, you can write shorter code that goes directly where it should.  Having more collections immutable makes things more reliable as you're less likely to get a modification vs copy problem.  I'm not sure of the performance cost of doing it this way though, but I get the feeling that of all performance problems in software development today, copy versus modify-in-place is not the worst of them by far.  I think it makes code more readable and clearer.  I think it can significantly reduce development time as well, at least if you don't strive to write "perfect" code.
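On the copy-versus-modify cost: for Scala's immutable lists at least, the common prepend case doesn't copy at all, because the new list shares its tail with the old one.  A quick sketch:

```scala
// Prepending to an immutable list shares structure rather than copying.
val original = List(1, 2, 3)
val extended = 0 :: original

println(extended)                   // List(0, 1, 2, 3)
println(original)                   // List(1, 2, 3) -- unchanged
println(extended.tail eq original)  // true: the tail IS the original list
```

So the "copy" is a single cons cell, and the original stays valid for any other code still holding it, which is exactly the reliability win described above.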

Thursday, November 17, 2011

Scala FTW!

I've been working a bit with Groovy and Grails lately, and I'm finding them pretty awesome.  I figured I'd take it to the next level, and dive into Scala.

Scala has more powerful functional features than Groovy, but it seems to have kind of spotty framework support.  This turns out not to be such a bad thing.  The folks using Scala seem to care about making good solutions to things; looking at GORM in Grails, it works fine, but it's really slow and suffers the same problems as Hibernate.  Database access in Scala isn't so locked down yet.  There are a few systems out there, so I picked one that looked interesting, ORBroker, and ran with it.

My goal here was really just to get something working that could read data from a database and do something with it.

As per usual, I do testing with my EVE database, so I decided to start with something that could read a solar system and list its destinations.  Not too bad once I figured out how the ORBroker mapping worked, which took a bit of doing.  The documentation is a bit sparse, but with a bit of general ORM experience and a few years of development under my belt, I got it going.

I'm gonna put some code in here, then talk a bit more about it, and some of the fun with ORBroker:

object App extends Application {
  val ds = new PGSimpleDataSource();
  ...
  val builder = new BrokerBuilder(ds)
  FileSystemRegistrant(new java.io.File("src/main/sql/orbroker")).register(builder)
  val broker = builder.build()

  val selectDestinations = Token('selectDestinations, SolarSystemDestinationExtractor)

  val jitaDestinations = broker.readOnly() { session =>
    session.selectOne(selectDestinations, "solarSystemName"->"Jita")
  }

  def lf(session: QuerySession, y: List[SolarSystem]): List[SolarSystem] = {
    if (y.isEmpty)
      List()
    else {
      session.selectOne(selectDestinations, "solarSystemName"->y.head.name) match {
        case Some(p) => lf(session, y.tail).filterNot {p.destination.toList.contains _} ::: p.destination.toList
        case None => List()
      }
    }
  }

  broker.readOnly() { session =>
    lf(session, jitaDestinations match {
      case Some(p) => p.destination.toList
      case None => List()
    }).foreach(y => println(y.name))
  }
}

The code uses a SQL query defined in selectDestinations.sql to pull back a list of destination solar systems given a solar system.  This isn't quite as easy as it might first appear, because in my incarnation of the EVE database solar systems aren't joined directly, but via a Stargate relation.  Because ORBroker doesn't do ORM (it maps queries to objects via the extractors), this is much saner than it would be with an ORM, even if it is a bit more verbose.

We grab a list of destinations from the initial query, which returns a list of SolarSystem objects.  Then we send that list through the lf closure to retrieve the destinations for those destinations.  We are getting the list of systems reachable by making two jumps from the start point - which rather typically is Jita.  The cool thing that becomes easy with a functional language is the list construction piece.

It's all very nice to implement list iteration by recursion, it's first year computer science stuff in the UK, but I still love it, and, because of list comprehensions and filters, we can go one better with only a tiny bit more code; we can filter the list and eliminate systems that have already been added, or more accurately that are going to be added.  Now we're talking something vaguely useful.

I haven't tried a ternary operator in Scala; as far as I can tell there isn't one, because if/else is itself an expression and serves the same purpose.  If it wasn't for the case situation, the closure would be one line long.

This is one place where I greatly prefer Groovy's solution to the problem of null results.  Maybe I'm missing something Scala can do here, because I only just cracked the book yesterday, but so far this is kinda annoying.  To separate null returns from valid returns, we have to disambiguate an Option value.  There seem to be a couple of ways to do this, but some of them aren't appropriate here, and well, this worked.  The match/case pairs match on the Option and allow us to do something in both the empty and non-empty situations without an ugly if operator.  Matching can do a whole lot more, so for this situation it seems a bit overkill; the Groovy suffix ? does as much with less fuss.
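For what it's worth, Option also supports map and getOrElse, which can collapse this kind of match into something closer to Groovy's ?. brevity.  A standalone sketch (the lookup here is a stand-in for selectOne returning an Option):

```scala
// A possibly-absent lookup, standing in for a selectOne call.
def lookup(name: String): Option[String] =
  Map("Jita" -> "The Forge").get(name)

// The match form, as used in the post:
val viaMatch = lookup("Jita") match {
  case Some(region) => "Region: " + region
  case None         => "unknown system"
}

// The same result via map/getOrElse, in one expression:
val viaMap  = lookup("Jita").map("Region: " + _).getOrElse("unknown system")
val missing = lookup("Nowhere").map("Region: " + _).getOrElse("unknown system")

println(viaMatch)  // Region: The Forge
println(viaMap)    // Region: The Forge
println(missing)   // unknown system
```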

We filter the return so we only concatenate elements into the result list that don't already exist in the destination list, and because we apply it recursively, it always filters against the latest concatenation, and we magically get the right result.  List comprehensions FTW.

The interesting _ parameter kind of feels a bit like Perl's $_, so I kinda have to hate it.  Honestly though, it beats Groovy's it parameter which is just kinda lame.

The ORBroker is supposed to be able to build Token objects through the Tokens thingie, but I haven't got it to work yet.

Tuesday, November 8, 2011

Hooking up an SSD to an older Mac without tearing it apart

I'm posting about my attempts to hook up an SSD to my 2009 iMac, which doesn't have a thunderbolt port.

It does however have a firewire 800 port.  In theory, I should be able to get a pretty good transfer rate over this, but we'll see!

I got my shiny SSD in the mail a few days ago: a Corsair Force 120GB drive, which has pretty good numbers in the AnandTech review.  I bought an external drive chassis on Amazon that will hold two drives in RAID 0, RAID 1 or JBOD.  The drive enclosure product blurb claims transfer speeds up to 200MB/sec, which should exceed this SSD's write rate, but we'll see.  I hadn't researched the signaling speed of firewire 800, so I didn't know if that was even physically possible.  (Update: DUH, it's called Firewire 800 because it's 800Mbit/sec.)



The first problem was that the external enclosure only had mounting rails for a 3.5" internal drive, and the SSD is 2.5".  Not a big setback; adapter kits are pretty easy to come by, and I found a pretty nifty one on Amazon that wasn't too expensive (picture to come when I get it).

Until that arrives, I have hooked up the SSD through two other methods that are a bit of a hack.  I have a plethora of external hard drives that sit on a shelf next to me.  Most are Western Digital MyBooks; a couple aren't.  I got a Seagate Freeagent a while ago as it was the first 3TB drive available, and I was desperately in need of a bigger drive for my backups.  The thing crapped out when I tried to restore, and only some quality time with the Mac drive utility tool allowed me to access it and get my data off!  The good part is that there is an adapter for firewire 800 available for the enclosure.  Over the years I've also bought a couple of Western Digital drives that have Firewire ports.

The first thing to do was to pull apart the enclosures and remove the electronics that have the firewire interface.  The Western Digital required some heavy convincing with a screwdriver to come apart, but I got the guts out well enough.  The Seagate Freeagent was very easy.  The base on the model I have detaches, and I initially thought I would have to pull the enclosure apart to reveal the adapter from the base to SATA, but it turns out that the base's adapter is itself just a SATA port! (So much for paying attention.)  You can take the base and plug the drive in without further ado.

The Seagate Freeagent base from the side.  You can see the Firewire 800 port and the power connector

The Seagate Freeagent base from the back.  You can see the additional firewire port and a USB port.  The main connector is visible here, and if you're paying attention (unlike me), you can see it's a standard SATA style connector.

This is the Western Digital MyBook enclosure.  The enclosure comes apart by forcing the back loose, then sliding the middle bit out.  The middle bit didn't slide out without some serious convincing with a screwdriver to break the sides loose.  I'm not sure if they were bonded or not, but they did give a reassuring pop sound when they came loose!

The back of the Western Digital enclosure.  This enclosure has holes where the two Firewire 800 ports are, a hole for an eSATA connection and a USB port.  Many of these enclosures only have a USB port, and I had purchased this with the knowledge that I wanted a firewire port available.  It turned out to be a bit of a non-issue in the long-run, but it sure came in handy for this!

The Seagate Freeagent enclosure.  The enclosure comes apart from one side only, which is the side opposite the connector.  The casing pops apart, though in doing so I snapped a few of the clips on the inside.  Don't bet on being able to put either of these back together afterwards!

The bottom of the Seagate enclosure.  You can see here the hole where the drive port is exposed.  There is no adapter that lives here!  The base plugs directly into the drive with clips for the main part of the enclosure itself.

This is the SSD docked onto the Seagate base.  It's a bit precarious in this position, but it makes a good picture!  The drive docks directly onto the base, and the connection isn't great because the base is expecting a gap between the bottom of the drive and the base where the enclosure housing would normally go.


Here is the df output from the drive.  As you can see, it's showing about 116GB usable and 12.6GB used (give or take).

This is the guts of the Western Digital enclosure.  You can see the SATA adapter on the left.  You can plug in the drive to this, and it almost rests on the rest of the case.  Slightly less precarious, but also less fast!!

The Seagate Freeagent base.


I did some benchmarks with no other apps running on my iMac, and the results were surprising.

The Seagate base did pretty well, clocking in at around 60-70MB/sec for both read and write, with seeks in the 10,000 range.  Not too bad: about the same throughput as a regular disk running at full speed, with much better seek performance.  The Western Digital interface was a bit disappointing, clocking in at only 30MB/sec or so.  The interface has a firewire 800 port, but I suspect it's not running at full speed.

Once I had the drive back on the Seagate base, I formatted it and mounted a PostgreSQL tablespace on it.  I've created a fairly large database there, about 12GB, so it gives a fair amount of data to play with.

More to come...