Re: solr keeps dying every few hours.
Fuad Efendi 2011-08-18, 02:53
EC2 7.5GB (large CPU instance, $0.68/hour) sucks. Unpredictably, there are
errors such as:
User time: 0 seconds
Kernel time: 0 seconds
Real time: 600 seconds
How can "clock time" be higher to such an extent? Only if _another_ user used
the 600 seconds of CPU: _virtualization_.
My client has had constant problems. We are moving to dedicated hardware
(25 times cheaper on average; Amazon sells 1 TB of EBS for $100/month,
plus additional costs for I/O).
> I have a large ec2 instance (7.5 GB RAM); it dies every few hours with
> out-of-memory (heap) errors. I started upping the minimum memory required;
> currently I use -Xms3072M.
A "Large CPU" instance is virtualized and its behaviour is unpredictable.
Choose a "cluster" instance with an explicit Intel Xeon CPU (instead of
"CPU units") and compare behaviour; $1.60/hour. Please share results.
Tokenizer Inc., Canada
Data Mining, Search Engines
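[Editor's note: the -Xms3072M flag quoted in this thread sets only the heap
floor, and Markus's reply below asks about the missing -Xmx ceiling. A minimal
sketch of a launch line that pins both, assuming a stock Jetty-based Solr of
that era (the start.jar path and dump location are illustrative, not from the
thread):]

```shell
# Sketch only: set both the heap floor and ceiling explicitly.
# Equal -Xms/-Xmx avoids heap-resize pauses; the dump flag captures
# evidence the next time an OutOfMemoryError hits.
HEAP_OPTS="-Xms3072m -Xmx3072m"
DEBUG_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
# echo shows the assembled command; drop the echo to actually launch.
echo java $HEAP_OPTS $DEBUG_OPTS -jar start.jar
```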
On 11-08-17 5:56 PM, "Jason Toy" <[EMAIL PROTECTED]> wrote:
>I've only set the minimum memory and have not set the maximum memory. I've
>done more investigation and I see that I have 100+ dynamic fields for my
>documents, not the 10 fields I quoted earlier. I also sort against those
>dynamic fields often; I'm reading that this potentially uses a lot of
>memory. Could this be the cause of my problems, and if so, what options do
>I have to deal with this?
>On Wed, Aug 17, 2011 at 2:46 PM, Markus Jelsma
>> Keep in mind that a commit warms up another searcher and potentially
>> increases RAM consumption in the background due to cache warming queries
>> being run (newSearcher event). Also, where is your Xmx switch? I don't know
>> how the JVM will behave if you set Xms > Xmx.
>> 65m docs is quite a lot, but it should run fine with a 3GB heap allocation.
>> It's good practice to use a master for indexing without any caches and
>> slaves to serve queries; when you exceed a certain amount of documents, it will bite.
>> > I have a large ec2 instance (7.5 GB RAM); it dies every few hours with
>> > out-of-memory (heap) errors. I started upping the minimum memory required;
>> > currently I use -Xms3072M.
>> > I insert about 50k docs an hour and I currently have about 65 million docs
>> > with about 10 fields each. Is this already too much data for one box? How
>> > do I know when I've reached the limit of this server? I have no idea how
>> > to keep control of this issue. Am I just supposed to keep upping the
>> > RAM used for Solr? How do I know what the accurate amount of RAM I should
>> > be using is? Must I keep adding more memory as the index size grows, or
>> > would I rather the query be a little slower if I can use constant memory
>> > and have the search read from disk?
>- sent from my mobile
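[Editor's note: the 100+ sorted dynamic fields mentioned above are a likely
culprit; in Lucene of this era, sorting on a field populated a FieldCache
array with one entry per document, held for the life of the searcher. A
back-of-envelope sketch with an assumed entry size (the exact width depends
on the field type):]

```python
# Rough FieldCache sizing sketch; doc and field counts from the thread,
# bytes-per-entry is an assumption.
num_docs = 65_000_000        # documents in the index (from the thread)
sorted_fields = 100          # dynamic fields that have been sorted on
bytes_per_entry = 4          # assume int/float per doc; longs/doubles double this

total_bytes = num_docs * sorted_fields * bytes_per_entry
print(f"~{total_bytes / 2**30:.1f} GiB of field caches")  # prints "~24.2 GiB of field caches"
```

Even at 4 bytes per entry this dwarfs a 3 GB heap, which would match the
OOM-every-few-hours pattern as queries gradually sort on more of the dynamic
fields; sorting on string fields costs more still.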