Jan. 18, 2012
This will catch you up on the activities of the DPLA dev core since Jan. 3, when our formation was announced.
- The core team has formed
- We have begun weekly meetings
- We are engaging two consultants: Nick Caramello of Pod Consulting to help with the architecture of the project and of the platform, and MacKenzie Smith who has broad knowledge of prior efforts and current practices.
- We have consulted with the heads of the DPLA Tech Workstream (Martin K and Chris F) from the beginning and are working with them closely.
- We are scheduled to meet with David Smith from UMass about his work on clustering based on full-text analysis.
- We are scheduling a meeting with Karen Coyle about her metadata project and to see if she can help with our linked data strategy.
- Sebastian Hammer, an expert on federated search, is spending a morning with us at the end of January, on the recommendation of Martin Kalfatovic.
- We have met with Ben Schmidt about his one-gram text analysis
- We have created the initial set of communication modes to engage the wider community, including a wiki (http://dp.la/dev/wiki), a barebones home page (http://dp.la/dev), a blog (http://dp.la/dev/blog), an email list (in addition to the dev core's internal list), and a Twitter account (@dpladev).
- We have drafted a message to go out from the Tech Workstream, inviting that community to collaborate.
- The wiki has a fairly rich set of materials on it, including initial documentation of the API, an overview intended for a non-technical audience, the list of communication vehicles, a provisional and inadequate road map for the weekly builds, a first sketch of a system architecture, and more.
- We have met internally to work out the set of materials that will constitute a technical specification, along with what else we need in order to move forward and engage the community of developers. Nick Caramello is working on a "scope of work" document that will be an important precursor to completing the tech spec.
- We have developed a proposed strategy for dealing with arbitrarily complex collection metadata.
- We have committed to an aggressive agile development schedule, with weekly builds available to the public. We hope with each build to also make available new data sets, new API extensions, and new applications that use the platform's API.
- The first build should occur this week, depending on how quickly we can acquire and integrate a non-library, non-book collection.
- We have a first pass at a budget up through April 2012, with information about what might be required to take the budget up to April 2013.
- We have developed an initial list of plausible apps to run on top of the platform, and have started working out the resources required to get them built.
- We have begun taking in metadata, and talking to potential contributors, including:
  - The San Francisco Public Library's complete catalog and its circulation data.
  - SFPL is sending us the metadata from their photo archive.
  - We also have metadata for the University of Illinois at Urbana-Champaign's image collection. They are deciding whether they'll give us permission to make it public.
  - We are talking with the State of California about its heritage collection.
  - The Smithsonian has agreed to give us the Biodiversity collection metadata, and they're working on getting us the National Portrait Gallery collection metadata.