nutch.buddy@... 2012-05-08, 05:31
Markus Jelsma 2012-05-08, 05:43
Re: Is it possible to control the segment size?
nutch.buddy@... 2012-05-08, 05:52
Yeah, I meant an unexpected failure that crashed the job, like an OOM.
Regarding -topN, the Nutch tutorial says:
"-topN N determines the maximum number of pages that will be retrieved at
each level up to the depth."
Does it mean that when the limit is reached, no more URLs on this level will
be added to the fetch list, or, in other words, does it mean that Nutch will
not fetch all the URLs?
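For what it's worth, the -topN cut-off can be pictured as a score-sorted truncation: URLs that don't make the cut are not lost, they simply stay in the CrawlDb and remain candidates for later generate rounds. A minimal Python sketch of that behaviour (illustrative only; the URLs and scores below are made up, and the real generator operates on CrawlDb entries, not an in-memory dict):

```python
# Toy model of a -topN style cut-off (illustrative; not Nutch code).

def generate(crawldb, top_n):
    """Return (fetch_list, remaining): the top_n highest-scored URLs,
    and the URLs left behind for a later generate run."""
    ranked = sorted(crawldb.items(), key=lambda kv: kv[1], reverse=True)
    fetch_list = [url for url, _score in ranked[:top_n]]
    remaining = dict(ranked[top_n:])
    return fetch_list, remaining

# Hypothetical scores:
db = {"http://a/": 1.5, "http://b/": 0.7, "http://c/": 2.0, "http://d/": 0.1}
batch, left = generate(db, 2)
# batch -> ["http://c/", "http://a/"]; b and d wait for the next cycle
```

So a small -topN does not mean URLs are dropped for good; it caps how many go into any one segment, which also bounds how much work is lost if that segment's fetch fails.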
Markus Jelsma-2 wrote
> On Mon, 7 May 2012 22:31:43 -0700 (PDT), "nutch.buddy@"
> <nutch.buddy@> wrote:
>> In a previous discussion about handling of failures in Nutch, it was
>> mentioned that a broken segment cannot be fixed and its URLs should
>> be re-crawled.
>> Thus, it seems that there should be a way to control segment size, so
>> one can limit the risk of having to re-crawl a huge amount of URLs if
>> one of them fails.
> If one what fails? It's not as if one URL failing makes the whole segment
> fail. A segment fails when the fetcher unexpectedly dies and the task is
> not successfully retried by Hadoop.
>> Any existing way in nutch to do this?
> Sure, the -topN parameter of the generator tool.
> Markus Jelsma - CTO - Openindex
> 050-8536600 / 06-50258350
View this message in context: http://lucene.472066.n3.nabble.com/Is-it-possible-to-control-the-segment-size-tp3970452p3970478.html
Sent from the Nutch - User mailing list archive at Nabble.com.
Markus Jelsma 2012-05-08, 06:02
nutch.buddy@... 2012-05-08, 06:12