Hi,

I am working with a deeply nested directory structure and large log files in Perl,
and I am trying to generate results from the index data:

my $hits = $searcher->hits(
    query      => ['title'],
    num_wanted => -1,
);

while ( my $hit = $hits->next ) {
    # profiling shows 28777 calls to Lucy::Search::Hits::next
    # per-hit work here - I have already profiled and optimized this part
}

Since the hit count is large, the work inside this loop consumes a significant
amount of time, and I need to improve its performance.

Can this loop be run in parallel, or is there any other optimization for the
case where the hits number in the thousands?

I tried using Parallel::ForkManager here, but it significantly increased the
running time instead of reducing it.
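For reference, the direction I am now considering is to split the result set
into chunks by offset and fork once per chunk rather than once per hit, so the
fork overhead is paid only a few times. This is only a sketch under my own
assumptions; $index_path, $query_string, the worker count, and process_hit()
are placeholders for whatever the real code does. Does this look like a
reasonable way to use Lucy?

    use strict;
    use warnings;
    use Parallel::ForkManager;
    use Lucy::Search::IndexSearcher;

    # Placeholders -- not my real paths or query.
    my $index_path   = '/path/to/index';
    my $query_string = 'title';
    my $num_workers  = 4;

    # Get the total hit count once in the parent.
    my $searcher = Lucy::Search::IndexSearcher->new( index => $index_path );
    my $total    = $searcher->hits( query => $query_string )->total_hits;
    my $chunk    = int( $total / $num_workers ) + 1;

    my $pm = Parallel::ForkManager->new($num_workers);
    for my $worker ( 0 .. $num_workers - 1 ) {
        $pm->start and next;    # parent keeps looping; child falls through

        # Open a fresh searcher in the child instead of sharing the
        # parent's handle across the fork.
        my $child_searcher =
            Lucy::Search::IndexSearcher->new( index => $index_path );
        my $hits = $child_searcher->hits(
            query      => $query_string,
            offset     => $worker * $chunk,
            num_wanted => $chunk,
        );
        while ( my $hit = $hits->next ) {
            process_hit($hit);
        }
        $pm->finish;
    }
    $pm->wait_all_children;

    sub process_hit {
        my ($hit) = @_;
        # the per-hit work from my current loop would go here
    }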

I am out of ideas at this point; any help would be much appreciated, as I am
badly stuck.

Regards
Rohit Singh
