Weeknotes: Data mining, XML and bibliographies

It seems to have been a week of frantic completion and refactoring.

The first half was spent frantically converting HTML pages into PDFs using VeryPDF’s HTMLtools server product. All in all, the manual is very helpful and the test server could be set up quickly. It might have helped at the other end if I’d remembered to break the file up for printing, but that turned out to be a ten-minute job to put back into production. The next task is to transfer it from the test server onto the production one, but that’ll need to wait for networking to tweak it a little.

I spent some time refactoring the call recordings archive. For some reason the archiving solution that I hacked up in November decided to start failing in March after it was changed. Despite being put back to its original state, it never quite got back to working as it did. I’d been trying to tweak it on and off but never found the time to complete it, so I finally made the time on Friday afternoon to look at it properly. I’d been thinking about item-based filtering after reading the first chapter of Toby Segaran’s Programming Collective Intelligence. (On the back of this, I think I’ll be buying his Beautiful Data at some point.) Although this is not really an intelligent program as such, the techniques have shown some real promise in the hurried tests. Using a Redis datastore, the percentage of found recordings is way up. Fingers crossed for Monday morning, when I can see how the scripts ran over the weekend. I also spent some time simplifying the matching algorithm so that I didn’t have to account for so many edge cases when dealing with time.
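The matching boils down to scoring candidate recordings by how similar their attributes are to the call record, rather than demanding exact equality. A minimal sketch of the idea, assuming the phpredis extension; the field names, weights and Redis key layout here are all hypothetical:

```php
<?php
// Minimal sketch of item-based matching for call recordings. Field
// names, weights and the Redis key layout are hypothetical; assumes
// the phpredis extension and a Redis server on localhost.

function similarity(array $call, array $recording) {
    $score = 0.0;
    // Exact attribute matches carry most of the weight.
    if ($call['extension'] === $recording['extension']) { $score += 0.5; }
    if ($call['direction'] === $recording['direction']) { $score += 0.2; }
    // Treat time as a proximity rather than an exact match, which
    // removes the edge cases around clock drift between systems.
    $drift = abs($call['start'] - $recording['start']); // seconds
    $score += 0.3 * max(0, 1 - $drift / 300);           // tolerate ~5 min
    return $score;
}

function bestMatch(array $call, array $recordings) {
    $best = null;
    $bestScore = 0.6; // below this, record no match at all
    foreach ($recordings as $rec) {
        $s = similarity($call, $rec);
        if ($s > $bestScore) {
            $bestScore = $s;
            $best = $rec;
        }
    }
    return $best;
}

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$call = array('extension' => '2041', 'direction' => 'in', 'start' => 1273477500);
$recordings = array(); // ... loaded from the recorder's index ...

$match = bestMatch($call, $recordings);
if ($match) {
    // Store the link so the archive scripts can pick it up later.
    $redis->hSet('call:' . $call['start'] . ':' . $call['extension'],
                 'recording', $match['file']);
}
```

Scoring time as a proximity is what let me drop most of the special-casing: one tolerance window replaces a pile of per-system exceptions.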

It seems that we are approaching some sort of real-time status update system at work. I’ve sort of been arguing for this for a while to remove the bottlenecks of having each system dependent on another one. One of our suppliers is sending us XML data, so I’ve been playing with XPath 1.0 to extract the relevant values (XPath 2.0 apparently isn’t directly supported by PHP; there might be a way of passing the data to Java, but that adds unnecessary overhead). Anyhow, the core is running but I still need to fully test it and add in security.
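For reference, XPath 1.0 in PHP goes through DOMXPath. A quick sketch; the feed structure and namespace here are invented for illustration:

```php
<?php
// Sketch of extracting values from a supplier's XML feed with XPath 1.0.
// The element names and namespace URI are invented for illustration.

$xml = <<<XML
<statuses xmlns="http://supplier.example.org/ns">
  <status>
    <orderRef>AB-1234</orderRef>
    <state>dispatched</state>
    <updated>2010-05-14T09:30:00Z</updated>
  </status>
</statuses>
XML;

$doc = new DOMDocument();
$doc->loadXML($xml);

$xpath = new DOMXPath($doc);
// XPath 1.0 has no default-namespace handling, so bind a prefix explicitly.
$xpath->registerNamespace('s', 'http://supplier.example.org/ns');

foreach ($xpath->query('//s:status') as $status) {
    $ref   = $xpath->evaluate('string(s:orderRef)', $status);
    $state = $xpath->evaluate('string(s:state)', $status);
    echo "$ref => $state\n";
}
```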

I’ve also been asked to design and implement a queueing system for the main internal server. I’ve run up a quick high-level overview but the detail still needs to be worked on. I’m pushing it back to June so that I can clear the decks of the older projects that are still on the board.
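The shape I have in mind is nothing fancier than producers pushing jobs onto a list and a worker blocking on the other end. A sketch using Redis again, since it’s already in the mix; the key name and job payload are hypothetical, and this is a design doodle rather than the implementation:

```php
<?php
// High-level sketch of the queue idea using a Redis list: producers
// push jobs on one end, a worker blocks on the other. The key name
// and payload shape are invented; assumes the phpredis extension.

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Producer: enqueue a job as a JSON payload.
$redis->lPush('queue:jobs', json_encode(array(
    'type' => 'export',
    'file' => '/srv/exports/orders.xml',
)));

// Worker loop: brPop blocks until a job arrives (timeout in seconds).
while (true) {
    $item = $redis->brPop(array('queue:jobs'), 30);
    if (empty($item)) {
        continue; // timed out, loop round again
    }
    $job = json_decode($item[1], true);
    // ... process the job ...
}
```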

I had a chat with Jonathan Gray, a sound guy who does far too much, about digital humanities ideas. We’ve agreed to keep in closer contact with each other about the area and to encourage each other into actually doing stuff (I have half a Moleskine of ideas – time for more code, less talk then). He proposed the Bibliographica idea in January and the team wrote a blog entry for the Open Knowledge Foundation blog. It is an idea that I’m looking forward to playing with and trying to embed data from. (http://bibliographica.org/)

One of the things that I’ve been thinking about, though, is that increasingly when we do research, we store web pages, blog entries and so on. Whilst there is a way of recording these in a footnote (the ‘http://example.org, accessed on <insert date>’ type of thing), there does not appear to be a way of building a local archive of them with the relevant metadata for later retrieval. I don’t know about anybody else, but I’ve got a fair few pages dotted around my hard drive for projects and I’d like a way of storing these properly and being able to integrate them into bibliographies or research notes. I know that there is the WARC format to play with (the Library of Congress page and the WARC Tools Google Code project), so I need to make time to do that.
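As a rough idea of what such an archive entry might look like: a WARC record is just a block of named headers, a blank line, and the captured content. A toy sketch of writing one; real archiving should go through proper WARC tooling, the helper here is hypothetical, and the record ID is only a placeholder where a real UUID belongs:

```php
<?php
// Toy sketch of writing a single WARC 'resource' record for a saved
// page, so the capture carries its own metadata. The helper is
// hypothetical; real archiving should use proper WARC tooling.

function writeWarcRecord($path, $targetUri, $content) {
    $headers = array(
        'WARC/1.0',
        'WARC-Type: resource',
        'WARC-Target-URI: ' . $targetUri,
        'WARC-Date: ' . gmdate('Y-m-d\TH:i:s\Z'),
        'WARC-Record-ID: <urn:uuid:' . uniqid() . '>', // placeholder, not a real UUID
        'Content-Type: text/html',
        'Content-Length: ' . strlen($content),
    );
    // A record is the header block, a blank line, the content block,
    // then two blank lines before the next record.
    $record = implode("\r\n", $headers) . "\r\n\r\n" . $content . "\r\n\r\n";
    file_put_contents($path, $record, FILE_APPEND);
}

writeWarcRecord('research.warc', 'http://example.org/article',
                file_get_contents('saved-page.html'));
```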

I had a mini-hack on the Open Correspondence project last Sunday intending to update a couple of pages and got a little more done than that. The database needs rebuilding but the PURL reference (http://purl.org/letter) now points to the schema. It is so close that I can’t wait to actually start hacking the data. Time to do the last little bits: tidy up the parser, use the Weaving History API to embed a timeline, and start using Jena, ARC and Chris Gutteridge’s Graphite library, which worked out of the box (though I haven’t used it for much yet).
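‘Out of the box’ really is the right phrase for Graphite; loading and reading a resource is about this much code. A sketch, assuming Graphite’s documented load/resource calls, with the letter URI and properties made up for illustration:

```php
<?php
// Sketch of reading letter data with Graphite (which sits on top of
// ARC). The letter URI and the properties queried are made up here.

require_once 'arc/ARC2.php';
require_once 'Graphite.php';

$graph = new Graphite();
$graph->ns('letter', 'http://purl.org/letter/');

// Hypothetical resource URI for a single letter.
$uri = 'http://www.opencorrespondence.org/letter/example';
$graph->load($uri);

$letter = $graph->resource($uri);
echo 'Label: ' . $letter->label() . "\n";
// get() returns the first value of the given property, if any.
echo 'Created: ' . $letter->get('dct:created') . "\n";
```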

Goals for this week are to finish the Open Correspondence bits, update the Trac instance with the various to-dos, write a blog post for the Open Knowledge Foundation on Open Correspondence, and do some major testing at work on the various XML exports and imports. I should be just about caught up then. With any luck…
