I primarily do backend processing at work, often long-running jobs that don't naturally have a user interface. As such, I've been finding ways to provide quick and convenient interaction with these processes.

One simple, obvious way is logging, but I found that text-based statement logging doesn't give me quite the level of detail and structure I want and ultimately need to be productive. When I was playing with RDF more heavily early last year, I started to format my log messages as N3 pseudo-statements:

** 2006-01-25 12:34:56 INFO Thread-0 Processor: [ object "foo"; count 12; disposition "thumbs-up" ]
** 2006-01-25 12:34:56 INFO Thread-0 Processor: [ object "bar"; count 7; disposition "thumbs-down" ]
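Producing that bracketed form is just string formatting. A hypothetical helper along these lines (a sketch, not my actual logging code) turns a flat object into such a pseudo-statement:

```javascript
// Hypothetical helper: format a flat object as an N3-style
// pseudo-statement for embedding in an ordinary log message.
function n3ish(obj) {
  var parts = [];
  for (var key in obj) {
    var value = obj[key];
    // Quote strings; leave numbers and booleans bare, as in N3 literals.
    parts.push(key + " " + (typeof value === "string" ? '"' + value + '"' : value));
  }
  return "[ " + parts.join("; ") + " ]";
}

console.log(n3ish({ object: "foo", count: 12, disposition: "thumbs-up" }));
// → [ object "foo"; count 12; disposition "thumbs-up" ]
```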

Of course, the "real" form of this would be to have the entire logging pipeline emit actual N3 statements, including the logging-provided (meta-)data:

[ a log:Entry ;
  log:ts "2006-01-25 12:34:56" ;
  log:lvl log:INFO ;
  log:thread "Thread-0" ;
  log:class "Processor" ;
  log:msg [ :object "foo" ; :count 12 ; :disposition "thumbs-up" ] ] .

N3's great because of its simple syntax for representing structured data, but I've recently started using JSON instead, which has more straightforward syntax and semantics for loose, non-RDF data representation. If you're unfamiliar with JSON, it is a subset of JavaScript for representing dictionaries, lists, and the basic data types (integers, floats, booleans, and strings). An example would be:

{"favorite int": 42, "floating value": 42.01, "this is an example": true, "a list of things": [42, 42.01, true, {"foo": "bar", "baz": "quux"}]}
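Because the text is (essentially) JavaScript source, a string like the one above can be turned back into a live object directly. A small sketch:

```javascript
// The JSON text from above, held as a JavaScript string.
var text = '{"favorite int": 42, "floating value": 42.01, ' +
           '"this is an example": true, ' +
           '"a list of things": [42, 42.01, true, {"foo": "bar", "baz": "quux"}]}';

// eval can parse it; the parentheses force the braces to be
// read as an object literal rather than a block statement.
var data = eval("(" + text + ")");

console.log(data["favorite int"]);            // → 42
console.log(data["a list of things"][3].baz); // → quux
```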

For example, I've been developing a content parser, and running it against a fixed set of test cases to assess the impact of changes I make. Each test result consists of 4 parts:

  1. the test-case label (the input filename)
  2. the expected-and-produced pairs
  3. the expected-but-missed cases
  4. the unexpected-but-produced cases

Each test-case and result is on the order of 10 "things", so they're well suited to manual review. The test run outputs a set of log statements like:

** 2006-01-25 12:34:56 INFO Thread-0 Testing: {"case":"/path/to/test/case/file.name","expectedProduced":[[«expected object», «produced object»], «...»], "expectedMissed":[ «...» ], "unexpectedProduced":[ «...» ]}
** 2006-01-25 12:34:57 INFO Thread-0 Testing: {"case":"/path/to/test/case/file.name","expectedProduced":[[{"foo":"bar"}, {"foo":"bar"}], «...»], "expectedMissed":[{"foo":"baz"}], "unexpectedProduced":[{"foo":"quux"}]}
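Once a result line's JSON payload has been parsed, boiling it down to counts takes only a few lines. A sketch (the field names follow the log format above; the `summarize` helper and its scoring are illustrative, not part of the actual test harness):

```javascript
// Summarize one parsed test result into simple counts.
// Field names follow the log format above; the rest is illustrative.
function summarize(result) {
  return {
    "case": result["case"],
    matched: result.expectedProduced.length,
    missed: result.expectedMissed.length,
    spurious: result.unexpectedProduced.length
  };
}

var result = {
  "case": "/path/to/test/case/file.name",
  expectedProduced: [[{ foo: "bar" }, { foo: "bar" }]],
  expectedMissed: [{ foo: "baz" }],
  unexpectedProduced: [{ foo: "quux" }]
};

console.log(summarize(result));
```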

The process of logging general objects is made much easier by a reflection-based utility class I wrote that will JSON-stringify arbitrary objects thrown at it, but that's for another post...
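The utility itself is for another post, but the idea, transposed to JavaScript (where property enumeration plays the role of reflection), is roughly:

```javascript
// Sketch of a generic stringifier: walk an arbitrary object graph and
// emit JSON, using property enumeration in place of reflection.
// (Names and structure are mine, not the actual utility class.)
function jsonify(value) {
  if (value === null || typeof value === "number" || typeof value === "boolean") {
    return String(value);
  }
  if (typeof value === "string") {
    return '"' + value.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"';
  }
  if (Object.prototype.toString.call(value) === "[object Array]") {
    return "[" + value.map(jsonify).join(",") + "]";
  }
  var parts = [];
  for (var key in value) {            // enumerate properties "reflectively"
    parts.push(jsonify(key) + ":" + jsonify(value[key]));
  }
  return "{" + parts.join(",") + "}";
}

console.log(jsonify({ object: "foo", count: 12, ok: true }));
// → {"object":"foo","count":12,"ok":true}
```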

A nice feature of JSON – being a subset of JavaScript – is that it is trivially parsed in JavaScript with a call to eval. As such, it's simple to write an HTML page that accepts pasted JSON and renders it as nested tables. Since such a page accepts and evaluates arbitrary JavaScript, I won't link to a hosted copy, but instead present the (mostly) short page here:
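The core of such a page looks roughly like this (a reconstruction from the description above, not the original source; the `render` function name and the surrounding textarea/div markup are assumed):

```javascript
// Core of the viewer: turn a parsed JSON value into nested-table HTML.
// (A reconstruction; names and markup details are assumed.)
function render(value) {
  if (value instanceof Array) {
    var rows = value.map(function (item) {
      return "<tr><td>" + render(item) + "</td></tr>";
    });
    return "<table border='1'>" + rows.join("") + "</table>";
  }
  if (value !== null && typeof value === "object") {
    var pairs = [];
    for (var key in value) {
      pairs.push("<tr><th>" + key + "</th><td>" + render(value[key]) + "</td></tr>");
    }
    return "<table border='1'>" + pairs.join("") + "</table>";
  }
  return String(value);  // leaf: number, string, boolean, or null
}

// In the page itself, the glue is a textarea, a button, and a target div:
//   div.innerHTML = render(eval("(" + textarea.value + ")"));

console.log(render({ "case": "file.name", ok: true, scores: [1, 2] }));
```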



This can be copied and pasted into a local HTML file; it runs entirely locally. Alternatively, uncompress this compressed version.