Re: solr taking too long to update a document
Tomás Fernández Löbbe 2012-02-02, 13:49
The problem is that in order to make the changes visible to the user you
have to issue a commit. If you commit with every user change (I assume you
may have concurrent users) you may end up with many commits per second.
That's too much for Solr: each commit flushes a new segment, reopens an
index searcher (and warms it), and may also trigger background merges.
Also, if you are updating documents on the slave server (and committing),
the next time replication occurs the slave may need to download the
complete index from the master (instead of just the changed segments), and
in that case the replication process will need to reload the core (instead
of just committing). This makes replication much slower.
I don't think this is a good approach for your case. You could take a look
at the NRT (soft commit) features or at the real-time get; those are on
trunk and don't get along very well with master-slave architectures.
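For reference, here is a rough sketch of what the soft-commit configuration
looks like in solrconfig.xml on trunk. The element names (autoSoftCommit,
openSearcher) and the interval values are taken from the trunk example
config and are an assumption for illustration only; none of this exists in
3.5:

```xml
<!-- Sketch for a trunk/4.x solrconfig.xml; NOT available in Solr 3.5. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flush segments to disk periodically,
       but do not open a new searcher (cheap, durable). -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: make new documents visible to searchers
       every few seconds without the cost of a hard commit. -->
  <autoSoftCommit>
    <maxTime>3000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

With a soft commit every 3 seconds, updates would become visible within
roughly the 3-5 second window you mention, without issuing a hard commit
per user action. But again, this is trunk-only and does not fit well with
a master-slave replication setup.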
On Thu, Feb 2, 2012 at 10:30 AM, Carlos Alberto Schneider <
[EMAIL PROTECTED]> wrote:
> Good morning everyone,
> I'm working on a project using solr 3.5, one master and two slaves.
> We run a grails app, and it has an update function.
> When the user clicks the button, we search for the message to be updated,
> clone it using SolrJ, delete the old message and save the new one.
> We do this update both on master and slave.
> The grails app uses the slave solr to get the data that
> will be displayed to the users.
> Problem: it's taking too long until the users can see the updated
> message, around 30 seconds.
> So if the users reload the page, it looks like an error.
> If we do this change only on the master, it takes even more time because
> we must wait for the sync (every 3 seconds).
> We have 37 GB (30 million docs) of data.
> If we could get 3 to 5 seconds, it would be acceptable.
> Any ideas?
> Carlos Alberto Schneider
> Informant -(47) 38010919 - 9904-5517