I have some concerns over performance with the change to remove the payload functionality. Let me try to elaborate, and please let me know if I have stated anything incorrectly.

With the current 1.x completion suggester, the entire FST (including payloads) is held on heap (but persisted to disk). By leveraging payloads we were able to achieve extreme performance, since we never had to do a FETCH of the associated docs for field values, parse the JSON, etc. My concern is not allowing clients who want to keep creating indices with completion suggester mappings that leverage payloads for performance reasons. Instead, I believe it would be really nice to default to the new approach, but one could still opt in to using payloads. While payloads can be problematic if abused, many clients are smart about them and/or have small enough suggester FSTs that the extra memory associated with payloads is a non-issue compared to the performance expectation they have.

So while there will be some perf hit, I expect it won't matter in most use cases. We don't have concrete numbers (that I'm aware of) to validate that expectation, and I just want to make sure we don't have surprises and regressions for customers. For the 20% of folks (a made-up number, following the 80/20 rule, of our userbase) that need that blistering performance, it will hurt them, and they won't have an option to choose the old approach. Generally speaking, I tend to follow the "golden rule" that says to "always put the user in control". Hopefully there has been testing or other work that proves this is a non-issue which I missed while reading this issue and the associated PRs; apologies if so.
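To make the concern concrete, here is a minimal sketch of the 1.x-style payload usage the comment refers to: a completion field with payloads enabled, and a document whose suggestion entry carries field values inline. The field names and payload contents are illustrative, not taken from the issue itself.

```python
import json

# Sketch of a 1.x-style completion mapping with payloads enabled
# (field name and payload contents are illustrative assumptions).
mapping = {
    "properties": {
        "title_suggest": {
            "type": "completion",
            "payloads": True  # the option whose removal is being discussed
        }
    }
}

# With payloads, each suggestion entry carries arbitrary field values,
# so serving a suggestion never requires fetching and parsing the
# source document afterwards.
doc = {
    "title_suggest": {
        "input": ["elasticsearch", "elastic search"],
        "payload": {"title": "Elasticsearch", "id": 1},
        "weight": 10
    }
}

print(json.dumps({"mapping": mapping, "doc": doc}))
```

The performance argument in the comment is exactly this: the `payload` above is returned with the suggestion straight out of the FST, so no second round-trip to fetch document fields is needed.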
We're using Elasticsearch for our search use case and have an index that serves both regular queries as well as autocompletion. For autocompletion, I've enabled the completion suggester on it. However, there is growing concern about memory usage as our data increases. Here are a few questions I had in that regard:

1. I am trying to compare the RAM usage of the FST vs the overall RAM usage for a given index. For the FST, it was confirmed in reply to my post here that the "completion" -> "size_in_bytes" metric is a heap metric. However, for the overall RAM usage of the index, I'm not finding any metric in index-stats. The field in the stats response that seems closest to overall memory is "segments" -> "memory_in_bytes", but if I go by that field, 99.39% of RAM is being captured by the FST for our index, which is shockingly high. I know that node-stats gives a direct os -> mem indication, but since we have multiple indices in the cluster, it's hard to isolate measurements for any single index.

2. In case the completion suggester occupies a lot of heap, is there any emergency way to turn off the completion suggester for the entire index/cluster quickly through some API call? I know that the FST is loaded into memory on the first query for completion. In case memory usage goes too high, can we rely on stopping the queries to the suggester to bring the memory usage down, assuming that Elasticsearch will remove the FST from memory? If yes, what would the reaction time be here?

Besides #1 and #2, if you have any idea how to quickly decrease heap usage in an emergency scenario, please let me know.

The completion suggester provides auto-complete/search-as-you-type functionality. This is a navigational feature to guide users to relevant results as they are typing, improving search precision. It is not meant for spell correction or did-you-mean functionality like the term or phrase suggesters. The completions are indexed as a weighted FST (finite state transducer) to provide fast Top N prefix-based searches suitable for serving relevant results as a user types. Document field values can be returned via payload. Completion Suggester V2 is based on LUCENE-6339 and LUCENE-6459, the first iteration of Lucene's new suggest API. The completion fields are indexed in a special way, hence a field mapping has to be defined. The following shows a field mapping for a completion field named title_suggest:
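A field mapping of the kind described above, for a completion field named title_suggest, could look like the following sketch. The `analyzer` and `preserve_separators` values mirror the completion field type's documented defaults; treat the exact settings as assumptions rather than a prescribed configuration.

```python
import json

# Sketch of a field mapping for a completion field named "title_suggest".
# "simple" analyzer and preserve_separators=True are the field type's
# documented defaults, shown explicitly here for illustration.
mapping = {
    "mappings": {
        "properties": {
            "title_suggest": {
                "type": "completion",
                "analyzer": "simple",
                "preserve_separators": True
            }
        }
    }
}

print(json.dumps(mapping, indent=2))
```

This body would be sent when creating the index; because the field type is `completion`, its values are compiled into the weighted FST that serves the prefix lookups described above.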