In case it helps, I'll try to summarise what we've done in this area.

Currently our webarchive-discovery indexing tool parses the WARC and then passes the payload to Tika:
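In outline, the flow is: read each WARC record's headers, pull out the payload block, and hand it to Tika for parsing. A minimal, hypothetical sketch of that loop (stdlib only; a plain callback stands in for the Tika call, and gzip/chunking details are ignored):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class WarcPayloadReader {

    /**
     * Reads one WARC record: the version line, the named headers (up to
     * the blank line), then Content-Length bytes of payload. The payload
     * is handed to the callback, which stands in for Tika here.
     * Returns false at end of stream.
     */
    public static boolean readRecord(DataInputStream in,
                                     BiConsumer<Map<String, String>, byte[]> payloadHandler)
            throws IOException {
        String versionLine = readLine(in);             // e.g. "WARC/1.0"
        if (versionLine == null) return false;         // end of stream
        Map<String, String> headers = new HashMap<>();
        String line;
        while ((line = readLine(in)) != null && !line.isEmpty()) {
            int colon = line.indexOf(':');
            headers.put(line.substring(0, colon).trim(),
                        line.substring(colon + 1).trim());
        }
        int length = Integer.parseInt(headers.get("Content-Length"));
        byte[] payload = new byte[length];
        in.readFully(payload);                         // the record block
        payloadHandler.accept(headers, payload);       // real code: pass to Tika
        return true;
    }

    // Reads a CRLF- (or LF-) terminated line; returns null at EOF.
    private static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') sb.append((char) b);
        }
        return (b == -1 && sb.length() == 0) ? null : sb.toString();
    }
}
```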

This works fine, but along the way we've also experimented with adding WARC parsing to Tika directly. The code is an extremely messy proof-of-concept, but I've pushed it here so you can see how it works:

The parser itself is fairly straightforward:

but it did require a few changes elsewhere...

1. Needed to teach Tika to spot ARC/WARC:
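For reference, the detection change amounts to adding magic for the two formats to Tika's mime-type registry — something along these lines (illustrative only; the exact type names and globs in the PoC may differ). WARC files start with a "WARC/" version line, and ARC files start with "filedesc://":

```xml
<mime-type type="application/warc">
  <_comment>WARC, ISO 28500 Web ARChive format</_comment>
  <magic priority="50">
    <match value="WARC/" type="string" offset="0"/>
  </magic>
  <glob pattern="*.warc"/>
</mime-type>

<mime-type type="application/x-internet-archive">
  <_comment>ARC, legacy Internet Archive format</_comment>
  <magic priority="50">
    <match value="filedesc://" type="string" offset="0"/>
  </magic>
  <glob pattern="*.arc"/>
</mime-type>
```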

2. Added webarchive-commons as a dependency:
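i.e. something like the following in the tika-parsers pom.xml (the version shown is illustrative — use whatever the PoC actually pins):

```xml
<dependency>
  <groupId>org.netpreserve.commons</groupId>
  <artifactId>webarchive-commons</artifactId>
  <!-- illustrative version; match the PoC -->
  <version>1.1.8</version>
</dependency>
```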

3. Enabled concatenated-member gunzip in order to parse WARC.GZ:
(given this was explicitly disabled before, this may be contentious?)
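The issue here is that a .warc.gz is a series of independently gzipped records concatenated together, so a decompressor that stops at the first member's trailer only ever sees the first record. (In Commons Compress terms, this is the decompressConcatenated flag on GzipCompressorInputStream.) A stdlib sketch of the behaviour WARC.GZ needs, using java.util.zip.GZIPInputStream, which does read on through concatenated members:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ConcatenatedGzip {

    // Gzips data as one complete member (its own header and trailer).
    public static byte[] gzipMember(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Decompresses a stream of one or more concatenated gzip members.
    // java.util.zip.GZIPInputStream carries on past each member's trailer,
    // which is exactly what parsing a .warc.gz requires.
    public static byte[] gunzipAll(byte[] gzipped) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toByteArray();
    }
}
```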

There are a couple of bigger issues that would need resolving too.

Firstly, the WARC format is not a file archive but primarily an HTTP request/response archive. There are 8 different record types (see the WARC specification for details) that may or may not be of interest. The HTTP request and the response get separate records, and of course the response might be a 303 or a 404, not just a 200. One fairly widely used strategy is simply to ignore anything that is not a 200 response, but that does discard quite a lot of information.
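For what it's worth, that 200s-only strategy boils down to a check like this (a hypothetical stand-alone version — in the PoC, webarchive-commons does the header parsing for real): keep a record only if its WARC-Type is "response" and the HTTP status line in its payload reports 200:

```java
public class WarcRecordFilter {

    /**
     * The widely used "200s only" strategy: index a record only if it is
     * a WARC response record whose HTTP status line reports 200.
     *
     * @param warcType   value of the record's WARC-Type header
     * @param statusLine first line of the record's payload,
     *                   e.g. "HTTP/1.1 200 OK"
     */
    public static boolean shouldIndex(String warcType, String statusLine) {
        if (!"response".equalsIgnoreCase(warcType)) {
            return false;            // request/metadata/revisit/... records
        }
        // Status line: HTTP-version SP status-code SP reason-phrase
        String[] parts = statusLine.split(" ", 3);
        return parts.length >= 2 && "200".equals(parts[1]);
    }
}
```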

Secondly, I'm not sure how many layers of embedded documents are appropriate. According to the spec, I would argue that these are the layers:

- archive.warc.gz (a series of block-concatenated gzip records)
- archive.warc.gz/record.warc (an individual WARC record)
- archive.warc.gz/record.warc/http.response (the message/http in its entirety)
- archive.warc.gz/record.warc/http.response/entity.body (the actual resource)

This is probably overkill (and gets worse if it's a gzipped HTTP response!). We could just use:

- archive.warc.gz (a series of block-concatenated gzip records)
- archive.warc.gz/record.warc (the parsed entity.body, with all relevant info from WARC and HTTP headers attached as metadata)

Collapsing the layers down does make it less clear where some of the metadata is coming from, but it's probably worth it.
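Concretely, the collapsed version would parse just the entity body and attach everything else as namespaced metadata. A minimal sketch of that flattening, assuming hypothetical "warc:" and "http:" prefixes (one possible convention, not anything the spec mandates) to record where each field came from:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenedWarcMetadata {

    /**
     * Collapses WARC and HTTP headers into a single metadata map for the
     * entity body, prefixing each key with its source. The "warc:" and
     * "http:" prefixes are hypothetical, not an agreed convention.
     */
    public static Map<String, String> flatten(Map<String, String> warcHeaders,
                                              Map<String, String> httpHeaders) {
        Map<String, String> metadata = new LinkedHashMap<>();
        warcHeaders.forEach((k, v) -> metadata.put("warc:" + k, v));
        httpHeaders.forEach((k, v) -> metadata.put("http:" + k, v));
        return metadata;
    }
}
```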

One final note - I've not put the test WARC files in that repo yet as I need to create some new ones from an Apache 2 source.

I hope this is useful.

Dr Andrew N. Jackson
Web Archiving Technical Lead
01937 546602

-----Original Message-----
From: Nick Burch [mailto:[EMAIL PROTECTED]]
Sent: 10 July 2017 19:45
Subject: Re: Adding a WARC parser to Tika

On Mon, 10 Jul 2017, Allison, Timothy B. wrote:
> Sorry, I can't tell if this is tongue-in-cheek...

No, I do think we should add a WARC parser to Tika Parsers.

Once done, I'd suggest we figure out a way for Tika Batch to run over a collection of WARC files just as it does for directories, to make it easier to run over crawl collections without having to unpack them first!
