| content (large_string, lengths 3 to 20.5k) | url (large_string, lengths 54 to 193) | branch (large_string, 4 values) | source (large_string, 42 values) | embeddings (list, length 384) | score (float64, -0.21 to 0.65) |
|---|---|---|---|---|---|
[[analysis-whitespace-analyzer]] === Whitespace Analyzer The `whitespace` analyzer breaks text into terms whenever it encounters a whitespace character. [float] === Definition It consists of: Tokenizer:: * <> [float] === Example output [source,js] --------------------------- POST _analyze { "analyzer": "whitespace", ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.048424214124679565, 0.06254736334085464, ... ] | 0.100885 |
[[analysis-uaxurlemail-tokenizer]] === UAX URL Email Tokenizer The `uax_url_email` tokenizer is like the <> except that it recognises URLs and email addresses as single tokens. [float] === Example output [source,js] --------------------------- POST _analyze { "tokenizer": "uax_url_email", "text": "Email me at john... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/uaxurlemail-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.0552905909717083, 0.09409602731466293, ... ] | 0.071388 |
[[analysis-pathhierarchy-tokenizer-examples]] === Path Hierarchy Tokenizer Examples A common use-case for the `path_hierarchy` tokenizer is filtering results by file paths. If indexing a file path along with the data, the use of the `path_hierarchy` tokenizer to analyze the path allows filtering the results by differ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/pathhierarchy-tokenizer-examples.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.042049188166856766, 0.11150763928890228, ... ] | 0.124033 |
[[analysis-ngram-tokenizer]] === NGram Tokenizer The `ngram` tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits https://en.wikipedia.org/wiki/N-gram[N-grams] of each word of the specified length. N-grams are like a sliding window that moves across the... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.08330246061086655, 0.0022078249603509903, ... ] | 0.210021 |
configuration In this example, we configure the `ngram` tokenizer to treat letters and digits as tokens, and to produce tri-grams (grams of length `3`): [source,js] ---------------------------- PUT my_index { "settings": { "analysis": { "analyzer": { "my_analyzer": { "tokenizer": "my_tokenizer" } }, "tokenizer": { "... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.05853106454014778, 0.03131013363599777, ... ] | 0.156379 |
[[analysis-letter-tokenizer]] === Letter Tokenizer The `letter` tokenizer breaks text into terms whenever it encounters a character which is not a letter. It does a reasonable job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces. [float] === Example ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.06824101507663727, 0.06201956421136856, ... ] | 0.135772 |
[[analysis-keyword-tokenizer]] === Keyword Tokenizer The `keyword` tokenizer is a ``noop'' tokenizer that accepts whatever text it is given and outputs the exact same text as a single term. It can be combined with token filters to normalise output, e.g. lower-casing email addresses. [float] === Example output [source,j... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.026556504890322685, 0.07770223915576935, ... ] | 0.107281 |
[[analysis-classic-tokenizer]] === Classic Tokenizer The `classic` tokenizer is a grammar based tokenizer that is good for English language documents. This tokenizer has heuristics for special treatment of acronyms, company names, email addresses, and internet host names. However, these rules don't always work, and the... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.03479166328907013, 0.07793547213077545, ... ] | 0.120001 |
The above example produces the following terms: [source,text] --------------------------- [ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ] --------------------------- | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
0.0002693895366974175, 0.0010960763320326805, ... ] | 0.053933 |
[[analysis-edgengram-tokenizer]] === Edge NGram Tokenizer The `edge_ngram` tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits https://en.wikipedia.org/wiki/N-gram[N-grams] of each word where the start of the N-gram is anchored to the beginning of the... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.10274534672498703, -0.030787035822868347, ... ] | 0.196046 |
case of the `edge_ngram` tokenizer, the advice is different. It only makes sense to use the `edge_ngram` tokenizer at index time, to ensure that partial words are available for matching in the index. At search time, just search for the terms the user has typed in, for instance: `Quick Fo`. Below is an example of how ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.05956581234931946, 0.05416107177734375, ... ] | 0.133269 |
[[analysis-whitespace-tokenizer]] === Whitespace Tokenizer The `whitespace` tokenizer breaks text into terms whenever it encounters a whitespace character. [float] === Example output [source,js] --------------------------- POST _analyze { "tokenizer": "whitespace", "text": "The 2 QUICK Brown-Foxes jumped over the lazy... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.031342145055532455, 0.06204230710864067, ... ] | 0.084407 |
[[analysis-thai-tokenizer]] === Thai Tokenizer The `thai` tokenizer segments Thai text into words, using the Thai segmentation algorithm included with Java. Text in other languages in general will be treated the same as the <>. WARNING: This tokenizer may not be supported by all JREs. It is known to work with Sun/Oracl... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.009444876573979855, 0.10675905644893646, ... ] | 0.082861 |
[[analysis-lowercase-tokenizer]] === Lowercase Tokenizer The `lowercase` tokenizer, like the <> breaks text into terms whenever it encounters a character which is not a letter, but it also lowercases all terms. It is functionally equivalent to the <> combined with the <>, but is more efficient as it performs both steps... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.049975764006376266, 0.07657940685749054, ... ] | 0.104451 |
[[analysis-simplepatternsplit-tokenizer]] === Simple Pattern Split Tokenizer experimental[This functionality is marked as experimental in Lucene] The `simple_pattern_split` tokenizer uses a regular expression to split the input into terms at pattern matches. The set of regular expression features it supports is more ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.04393194615840912, 0.04149279370903969, ... ] | 0.114985 |
[[analysis-simplepattern-tokenizer]] === Simple Pattern Tokenizer experimental[This functionality is marked as experimental in Lucene] The `simple_pattern` tokenizer uses a regular expression to capture matching text as terms. The set of regular expression features it supports is more limited than the <> tokenizer, bu... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.05267629027366638, 0.050547800958156586, ... ] | 0.12831 |
[[analysis-standard-tokenizer]] === Standard Tokenizer The `standard` tokenizer provides grammar based tokenization (based on the Unicode Text Segmentation algorithm, as specified in http://unicode.org/reports/tr29/[Unicode Standard Annex #29]) and works well for most languages. [float] === Example output [source,js] -... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.031411584466695786, 0.05141865462064743, ... ] | 0.152835 |
[[analysis-pathhierarchy-tokenizer]] === Path Hierarchy Tokenizer The `path_hierarchy` tokenizer takes a hierarchical value like a filesystem path, splits on the path separator, and emits a term for each component in the tree. [float] === Example output [source,js] --------------------------- POST _analyze { "tokeniz... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.009917885065078735, 0.04905540868639946, ... ] | 0.072835 |
[[analysis-pattern-tokenizer]] === Pattern Tokenizer The `pattern` tokenizer uses a regular expression to either split text into terms whenever it matches a word separator, or to capture matching text as terms. The default pattern is `\W+`, which splits text whenever it encounters non-word characters. [WARNING] .Beware... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.0618094839155674, 0.05143130570650101, ... ] | 0.119413 |
} ] } ---------------------------- // TESTRESPONSE ///////////////////// The above example produces the following two terms: [source,text] --------------------------- [ value, value with embedded \" quote ] --------------------------- | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.08102966099977493, 0.07649099081754684, ... ] | 0.124352 |
[[analysis-hunspell-tokenfilter]] === Hunspell Token Filter Basic support for hunspell stemming. Hunspell dictionaries will be picked up from a dedicated hunspell directory on the filesystem (`/hunspell`). Each dictionary is expected to have its own directory named after its associated locale (language). This dictionar... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.04584832489490509, -0.02637827768921852, ... ] | 0.034648 |
[[analysis-stemmer-override-tokenfilter]] === Stemmer Override Token Filter Overrides stemming algorithms, by applying a custom mapping, then protecting these terms from being modified by stemmers. Must be placed before any stemming filters. Rules are separated by `=>` [cols="<,<",options="header",] |==================... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/stemmer-override-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
0.015201888978481293, 0.07322996854782104, ... ] | 0.03056 |
[[analysis-minhash-tokenfilter]] === Minhash Token Filter A token filter of type `min_hash` hashes each token of the token stream and divides the resulting hashes into buckets, keeping the lowest-valued hashes per bucket. It then returns these hashes as tokens. The following are settings that can be set for a `min_ha... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.00003910839950549416, 0.0857943445444107, ... ] | 0.039305 |
[[analysis-keyword-marker-tokenfilter]] === Keyword Marker Token Filter Protects words from being modified by stemmers. Must be placed before any stemming filters. [cols="<,<",options="header",] |======================================================================= |Setting |Description |`keywords` |A list of words t... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
0.008828477934002876, 0.057546716183423996, ... ] | 0.014983 |
[[analysis-flatten-graph-tokenfilter]] === Flatten Graph Token Filter experimental[This functionality is marked as experimental in Lucene] The `flatten_graph` token filter accepts an arbitrary graph token stream, such as that produced by <>, and flattens it into a single linear chain of tokens suitable for indexing. T... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.019455410540103912, 0.01947525516152382, ... ] | 0.080563 |
[[analysis-pattern_replace-tokenfilter]] === Pattern Replace Token Filter The `pattern_replace` token filter makes it easy to handle string replacements based on a regular expression. The regular expression is defined using the `pattern` parameter, and the replacement string can be provided using the `replacement` pa... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.02986469864845276, 0.057682160288095474, ... ] | 0.056673 |
[[analysis-normalization-tokenfilter]] === Normalization Token Filter There are several token filters available which try to normalize special characters of a certain language. [horizontal] Arabic:: http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizer.html[`arabic_norma... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
0.016894925385713577, 0.03063873201608658, ... ] | 0.18617 |
[[analysis-compound-word-tokenfilter]] === Compound Word Token Filters The `hyphenation_decompounder` and `dictionary_decompounder` token filters can decompose compound words found in many Germanic languages into word parts. Both token filters require a dictionary of word parts, which can be provided as: [horizontal] `... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/compound-word-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.011241629719734192, 0.030716905370354652, ... ] | 0.088798 |
[[analysis-synonym-tokenfilter]] === Synonym Token Filter The `synonym` token filter makes it easy to handle synonyms during the analysis process. Synonyms are configured using a configuration file. Here is an example: [source,js] -------------------------------------------------- PUT /test_index { "settings": { "inde... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.03711889684200287, 0.048747990280389786, ... ] | 0.050341 |
[[analysis-fingerprint-tokenfilter]] === Fingerprint Token Filter The `fingerprint` token filter emits a single token which is useful for fingerprinting a body of text, and/or providing a token that can be clustered on. It does this by sorting the tokens, deduplicating and then concatenating them back into a single tok... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/fingerprint-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.03276245668530464, 0.04724197834730148, ... ] | 0.070506 |
[[analysis-snowball-tokenfilter]] === Snowball Token Filter A filter that stems words using a Snowball-generated stemmer. The `language` parameter controls the stemmer with the following available values: `Armenian`, `Basque`, `Catalan`, `Danish`, `Dutch`, `English`, `Finnish`, `French`, `German`, `German2`, `Hungarian... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/snowball-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.025119099766016006, 0.06655868142843246, ... ] | 0.125482 |
[[analysis-asciifolding-tokenfilter]] === ASCII Folding Token Filter A token filter of type `asciifolding` that converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists. Example: [source,js... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/asciifolding-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.043375130742788315, 0.06810806691646576, ... ] | 0.116152 |
[[analysis-keep-types-tokenfilter]] === Keep Types Token Filter A token filter of type `keep_types` that only keeps tokens with a token type contained in a predefined set. [float] === Options [horizontal] types:: a list of types to keep [float] === Settings example You can set it up like: [source,js] -----------------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/keep-types-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.04056117311120033, 0.08842018991708755, ... ] | 0.073875 |
[[analysis-delimited-payload-tokenfilter]] === Delimited Payload Token Filter Named `delimited_payload`. Splits tokens into tokens and payload whenever a delimiter character is found. [WARNING] ============================================ The older name `delimited_payload_filter` is deprecated and should not be used... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/delimited-payload-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.010851388797163963, 0.02076088637113571, ... ] | 0.066492 |
[[analysis-elision-tokenfilter]] === Elision Token Filter A token filter which removes elisions. For example, "l'avion" (the plane) will be tokenized as "avion" (plane). Accepts an `articles` setting, which is a set of stop word articles. For example: [source,js] -------------------------------------------------- PUT /elisio... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/elision-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.007698987610638142, 0.08461792767047882, ... ] | 0.103146 |
[[analysis-keep-words-tokenfilter]] === Keep Words Token Filter A token filter of type `keep` that only keeps tokens with text contained in a predefined set of words. The set of words can be defined in the settings or loaded from a text file containing one word per line. [float] === Options [horizontal] keep_words:: a... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/keep-words-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.059929702430963516, 0.08720630407333374, ... ] | 0.106837 |
[[analysis-word-delimiter-graph-tokenfilter]] === Word Delimiter Graph Token Filter experimental[This functionality is marked as experimental in Lucene] Named `word_delimiter_graph`, it splits words into subwords and performs optional transformations on subword groups. Words are split into subwords with the following... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.022857805714011192, -0.014292032457888126, ... ] | 0.074622 |
[[analysis-lowercase-tokenfilter]] === Lowercase Token Filter A token filter of type `lowercase` that normalizes token text to lower case. Lowercase token filter supports Greek, Irish, and Turkish lowercase token filters through the `language` parameter. Below is a usage example in a custom analyzer [source,js] -------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/lowercase-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.0024386029690504074, 0.08856888860464096, ... ] | 0.101417 |
[[analysis-synonym-graph-tokenfilter]] === Synonym Graph Token Filter beta[] The `synonym_graph` token filter makes it easy to handle synonyms, including multi-word synonyms, correctly during the analysis process. In order to properly handle multi-word synonyms this token filter creates a "graph token stream" during pr... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.027799848467111588, -0.027325404807925224, ... ] | 0.075225 |
[[analysis-word-delimiter-tokenfilter]] === Word Delimiter Token Filter Named `word_delimiter`, it splits words into subwords and performs optional transformations on subword groups. Words are split into subwords with the following rules: * split on intra-word delimiters (by default, all non alpha-numeric characters)... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
-0.045972030609846115, 0.01558596733957529, ... ] | 0.047568 |
[[analysis-stop-tokenfilter]] === Stop Token Filter A token filter of type `stop` that removes stop words from token streams. The following are settings that can be set for a `stop` token filter type: [horizontal] `stopwords`:: A list of stop words to use. Defaults to `\_english\_` stop words. `stopwords\_path`:: A pat... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.103961 |
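The stop-filter excerpt describes removing stop words from a token stream, defaulting to an English stop word list. A hedged Python sketch of the idea (the word list here is only a small illustrative subset, not the exact `\_english\_` set):

```python
# A small illustrative subset of common English stop words (assumption:
# not the exact default list the excerpt refers to).
ENGLISH_STOPWORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
    "that", "the", "their", "then", "there", "these", "they", "this",
    "to", "was", "will", "with",
}

def stop_filter(tokens, stopwords=ENGLISH_STOPWORDS):
    """Drop any token found in the stop word set."""
    return [t for t in tokens if t.lower() not in stopwords]
```

For example, `stop_filter(["the", "quick", "brown", "fox"])` returns `["quick", "brown", "fox"]`.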
[[analysis-cjk-bigram-tokenfilter]] === CJK Bigram Token Filter The `cjk\_bigram` token filter forms bigrams out of the CJK terms that are generated by the <> or the `icu\_tokenizer` (see {plugins}/analysis-icu-tokenizer.html[`analysis-icu` plugin]). By default, when a CJK character has no adjacent characters to form a... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/cjk-bigram-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.167307 |
[[analysis-shingle-tokenfilter]] === Shingle Token Filter A token filter of type `shingle` that constructs shingles (token n-grams) from a token stream. In other words, it creates combinations of tokens as a single token. For example, the sentence "please divide this sentence into shingles" might be tokenized into shin... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.085824 |
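The shingle excerpt defines shingles as token n-grams built from a token stream. A simplified Python sketch (the real filter interleaves unigrams and shingles by position; here they are simply concatenated):

```python
def shingle_filter(tokens, min_shingle_size=2, max_shingle_size=2,
                   output_unigrams=True, token_separator=" "):
    """Build token n-grams ("shingles") from a token stream (a sketch)."""
    out = list(tokens) if output_unigrams else []
    for size in range(min_shingle_size, max_shingle_size + 1):
        for i in range(len(tokens) - size + 1):
            out.append(token_separator.join(tokens[i:i + size]))
    return out
```

With `output_unigrams=False`, the tokens of "please divide this" produce the shingles `["please divide", "divide this"]`, as in the sentence example the excerpt mentions.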
[[analysis-common-grams-tokenfilter]] === Common Grams Token Filter Token filter that generates bigrams for frequently occurring terms. Single terms are still indexed. It can be used as an alternative to the <> when we don't want to completely ignore common terms. For example, the text "the quick brown is a fox" will b... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/common-grams-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.113398 |
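The common-grams excerpt says bigrams are generated for frequently occurring terms while single terms are still indexed. A Python sketch of that behavior: emit every term, plus a joined bigram whenever the term or its successor is a common word (an illustration, not the Lucene implementation):

```python
def common_grams(tokens, common_words, separator="_"):
    """Emit each term, plus a bigram when the term or its successor is a
    frequently occurring ("common") word. Single terms are kept."""
    out = []
    for i, tok in enumerate(tokens):
        out.append(tok)
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if nxt is not None and (tok in common_words or nxt in common_words):
            out.append(tok + separator + nxt)
    return out
```

For the excerpt's example text "the quick brown is a fox" with `{"the", "is", "a"}` as common words, this yields the, the_quick, quick, brown, brown_is, is, is_a, a, a_fox, fox.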
[[analysis-pattern-capture-tokenfilter]] === Pattern Capture Token Filter The `pattern\_capture` token filter, unlike the `pattern` tokenizer, emits a token for every capture group in the regular expression. Patterns are not anchored to the beginning and end of the string, so each pattern can match multiple times, and ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.097216 |
[[analysis-limit-token-count-tokenfilter]] === Limit Token Count Token Filter Limits the number of tokens that are indexed per document and field. [cols="<,<",options="header",] |======================================================================= |Setting |Description |`max\_token\_count` |The maximum number of tok... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/limit-token-count-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.11272 |
[[analysis-stemmer-tokenfilter]] === Stemmer Token Filter // Adds attribute for the 'minimal\_portuguese' stemmer values link. // This link contains ~, which is converted to subscript. // This attribute prevents that substitution. // See https://github.com/asciidoctor/asciidoctor/wiki/How-to-prevent-URLs-containing-for... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.158543 |
[[analysis-keyword-repeat-tokenfilter]] === Keyword Repeat Token Filter The `keyword\_repeat` token filter emits each incoming token twice, once as a keyword and once as a non-keyword, to allow an unstemmed version of a term to be indexed side by side with the stemmed version of the term. Given the nature of this filter ea... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.041253 |
[[analysis-mapping-charfilter]] === Mapping Char Filter The `mapping` character filter accepts a map of keys and values. Whenever it encounters a string of characters that is the same as a key, it replaces them with the value associated with that key. Matching is greedy; the longest pattern matching at a given point wi... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/charfilters/mapping-charfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.048787 |
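The mapping char filter excerpt specifies greedy matching: the longest key matching at a given point wins. A Python sketch of that greedy longest-match replacement over a map of keys and values (an illustration only):

```python
def mapping_char_filter(text, mappings):
    """Replace every occurrence of a key with its mapped value, always
    preferring the longest key that matches at the current position."""
    # Trying longer keys first makes the matching greedy, as described.
    keys = sorted(mappings, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(mappings[k])
                i += len(k)
                break
        else:
            out.append(text[i])
            i += 1
    return "".join(out)
```

With `{"a": "1", "ab": "2"}` the input "ab" becomes "2", not "1b", because the longer key wins.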
[[analysis-pattern-replace-charfilter]] === Pattern Replace Char Filter The `pattern\_replace` character filter uses a regular expression to match characters which should be replaced with the specified replacement string. The replacement string can refer to capture groups in the regular expression. [WARNING] .Beware of... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.055269 |
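The pattern-replace excerpt notes that the replacement string can refer to capture groups in the regular expression. A Python sketch of that contract, translating the `$1`-style group references the Elasticsearch filter uses into Python's backreference syntax (a sketch, not the filter itself):

```python
import re

def pattern_replace_char_filter(text, pattern, replacement):
    """Regex-replace with $1-style capture group references, as the
    pattern_replace char filter accepts them (an illustration)."""
    # Translate "$1" group references into Python's "\1" backreferences.
    py_replacement = re.sub(r"\$(\d+)", r"\\\1", replacement)
    return re.sub(pattern, py_replacement, text)
```

For example, the pattern `(\d+)-(?=\d)` with replacement `$1_` turns "123-456-789" into "123_456_789".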
above is: [source,js] ---------------------------- { "timed\_out": false, "took": $body.took, "\_shards": { "total": 5, "successful": 5, "skipped" : 0, "failed": 0 }, "hits": { "total": 1, "max\_score": 0.2876821, "hits": [ { "\_index": "my\_index", "\_type": "\_doc", "\_id": "1", "\_score": 0.2876821, "\_source": { "t... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.062036 |
[[analysis-htmlstrip-charfilter]] === HTML Strip Char Filter The `html\_strip` character filter strips HTML elements from the text and replaces HTML entities with their decoded value (e.g. replacing `&` with `&`). [float] === Example output [source,js] --------------------------- POST \_analyze { "tokenizer": "keyword"... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/analysis/charfilters/htmlstrip-charfilter.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.054121 |
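The `html_strip` excerpt describes two steps: strip HTML elements and decode HTML entities (e.g. `&amp;` back to `&`). A regex-based Python sketch of those two steps (the real char filter uses a proper HTML parser and also handles block-level elements differently):

```python
import html
import re

def html_strip(text):
    """Remove HTML tags, then decode entities (a simplified sketch)."""
    return html.unescape(re.sub(r"<[^>]+>", "", text))
```

For example, `html_strip("<p>so <b>happy</b> &amp; free</p>")` returns `"so happy & free"`.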
[[pipeline]] == Pipeline Definition A pipeline is a definition of a series of <> that are to be executed in the same order as they are declared. A pipeline consists of two main fields: a `description` and a list of `processors`: [source,js] -------------------------------------------------- { "description" : "...", "pr... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.171047 |
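The pipeline excerpt defines a pipeline as a series of processors executed in the order they are declared. A Python sketch of exactly that contract, with a stand-in for a `set`-style processor (an illustration of the execution model, not the Elasticsearch ingest engine):

```python
def run_pipeline(doc, processors):
    """Execute processors in declaration order; each receives the
    document produced by the previous one."""
    for processor in processors:
        doc = processor(doc)
    return doc

def set_processor(field, value):
    # Hypothetical stand-in for the `set` processor: writes a field.
    def apply(doc):
        out = dict(doc)
        out[field] = value
        return out
    return apply
```

Because execution is ordered, a later processor can overwrite what an earlier one set: `run_pipeline({}, [set_processor("a", 1), set_processor("a", 2)])` yields `{"a": 2}`.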
of the request. You can either specify an existing pipeline to execute against the provided documents, or supply a pipeline definition in the body of the request. Here is the structure of a simulate request with a pipeline definition provided in the body of the request: [source,js] -------------------------------------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.048858 |
metadata fields. [float] [[accessing-source-fields]] === Accessing Fields in the Source Accessing a field in the source is straightforward. You simply refer to fields by their name. For example: [source,js] -------------------------------------------------- { "set": { "field": "my\_field", "value": 582.1 } } ----------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | -0.006155 |
documents into a separate index. To enable this behavior, you can use the `on\_failure` parameter. The `on\_failure` parameter defines a list of processors to be executed immediately following the failed processor. You can specify this parameter at the pipeline level, as well as at the processor level. If a processor s... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.047703 |
identifier of the specific instantiation of a certain processor in a pipeline. The `tag` field does not affect the processor's behavior, but is very useful for bookkeeping and tracing errors to specific processors. See <> to learn more about the `on\_failure` field and error handling in pipelines. The <> can be used to... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.137303 |
is `null`, the processor quietly exits without modifying the document |====== [source,js] -------------------------------------------------- { "convert": { "field" : "foo", "type": "integer" } } -------------------------------------------------- // NOTCONSOLE [[date-processor]] === Date Processor Parses dates from fiel... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.11097 |
"date1", "index\_name\_prefix" : "myindex-", "date\_rounding" : "M" } } ] } -------------------------------------------------- // CONSOLE Using that pipeline for an index request: [source,js] -------------------------------------------------- PUT /myindex/\_doc/1?pipeline=monthlyindex { "date1" : "2016-04-25T12:02:01.7... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.058733 |
Processes elements in an array of unknown length. All processors can operate on elements inside an array, but if all elements of an array need to be processed in the same way, defining a processor for each element becomes cumbersome and tricky because it is likely that the number of elements in an array is unknown. For... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.157506 |
// NOTCONSOLE In this example, if the `remove` processor does fail, then the array elements that have been processed thus far will be updated. Another advanced example can be found in the {plugins}/ingest-attachment-with-arrays.html[attachment processor documentation]. [[grok-processor]] === Grok Processor Extracts str... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.195083 |
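The grok excerpt describes extracting structured fields from a string via named patterns. A tiny Python sketch of the core idea: expand `%{PATTERN:name}` references into named regex groups (the pattern table below is a small hypothetical subset, not the real grok pattern library):

```python
import re

# Hypothetical mini pattern table (a small subset for illustration).
GROK_PATTERNS = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "NUMBER": r"\d+(?:\.\d+)?",
    "URIPATHPARAM": r"\S+",
}

def grok(pattern, text):
    """Expand %{PATTERN:name} into named groups, then match the text."""
    regex = re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: "(?P<%s>%s)" % (m.group(2), GROK_PATTERNS[m.group(1)]),
        pattern,
    )
    m = re.match(regex, text)
    return m.groupdict() if m else None
```

Applied to a log line like "55.3.244.1 GET /index.html 15824 0.043", the pattern `%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}` yields one named field per reference.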
and `field` does not exist or is `null`, the processor quietly exits without modifying the document |====== Here is an example of using the provided patterns to extract out and name structured fields from a string field in a document. [source,js] -------------------------------------------------- { "message": "55.3.244... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.127152 |
"timestamp": "2016-11-08T19:43:03.850+0000" } } } ] } -------------------------------------------------- // TESTRESPONSE[s/2016-11-08T19:43:03.850\+0000/$body.docs.0.doc.\_ingest.timestamp/] In the above response, you can see that the index of the pattern that matched was `"1"`. This is to say that it was the second (i... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.142701 |
-------------------------------------------------- { "string\_source": "{\"foo\": 2000}", "json\_target": { "foo": 2000 } } -------------------------------------------------- // NOTCONSOLE If the following configuration is provided, omitting the optional `target\_field` setting: [source,js] ----------------------------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.007686 |
If `true` and `field` does not exist, the processor quietly exits without modifying the document |====== [source,js] -------------------------------------------------- { "rename": { "field": "foo", "target\_field": "foobar" } } -------------------------------------------------- // NOTCONSOLE [[script-processor]] === Sc... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.145581 |
[[split-options]] .Split Options [options="header"] |====== | Name | Required | Default | Description | `field` | yes | - | The field to split | `separator` | yes | - | A regex which matches the separator, eg `,` or `\s+` | `target\_field` | no | `field` | The field to assign the split value to, by default `field` is u... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.035845 |
dot expand processor would turn this document: [source,js] -------------------------------------------------- { "foo.bar" : "value" } -------------------------------------------------- // NOTCONSOLE into: [source,js] -------------------------------------------------- { "foo" : { "bar" : "value" } } --------------------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/ingest/ingest-node.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.101733 |
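The dot-expand excerpt shows the transformation `{"foo.bar": "value"}` into `{"foo": {"bar": "value"}}`. A Python sketch of that expansion over a flat document (an illustration of the transformation only):

```python
def dot_expand(doc):
    """Expand dotted field names into nested objects."""
    out = {}
    for key, value in doc.items():
        parts = key.split(".")
        node = out
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out
```

For example, `dot_expand({"foo.bar": "value"})` returns `{"foo": {"bar": "value"}}`, matching the before/after documents in the excerpt.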
[[secure-settings]] === Secure Settings Some settings are sensitive, and relying on filesystem permissions to protect their values is not sufficient. For this use case, Elasticsearch provides a keystore and the `elasticsearch-keystore` tool to manage the settings in the keystore. NOTE: All commands here should be run a... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/secure-settings.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | -0.001242 |
[[bootstrap-checks]] == Bootstrap Checks Collectively, we have a lot of experience with users suffering unexpected issues because they have not configured <>. In previous versions of Elasticsearch, misconfiguration of some of these settings were logged as warnings. Understandably, users sometimes miss these log message... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/bootstrap-checks.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.047277 |
not be the case that all of the JVM heap is locked in memory. To pass the heap size check, you must configure the <>. === File descriptor check File descriptors are a Unix construct for tracking open "files". In Unix though, https://en.wikipedia.org/wiki/Everything\_is\_a\_file[everything is a file]. For example, "file... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/bootstrap-checks.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.080469 |
large (exceeding multiple gigabytes). On systems where the max size of files that can be created by the Elasticsearch process is limited, this can lead to failed writes. Therefore, the safest option here is that the max file size is unlimited and that is what the max file size bootstrap check enforces. To pass the max ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/bootstrap-checks.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.096968 |
call filters are incompatible. The `OnError` and `OnOutOfMemoryError` checks prevent Elasticsearch from starting if either of these JVM options are used and system call filters are enabled. This check is always enforced. To pass this check do not enable `OnError` nor `OnOutOfMemoryError`; instead, upgrade to Java 8u92 ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/bootstrap-checks.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.023219 |
[[logging]] === Logging configuration Elasticsearch uses https://logging.apache.org/log4j/2.x/[Log4j 2] for logging. Log4j 2 can be configured using the log4j2.properties file. Elasticsearch exposes three properties, `${sys:es.logs.base\_path}`, `${sys:es.logs.cluster\_name}`, and `${sys:es.logs.node\_name}` (if the no... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/logging-config.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.108216 |
ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the http://... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/logging-config.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.113407 |
[[system-config]] == Important System Configuration Ideally, Elasticsearch should run alone on a server and use all of the resources available to it. In order to do so, you need to configure your operating system to allow the user running Elasticsearch to access more resources than allowed by default. The following set... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/sysconfig.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.04892 |
[[install-elasticsearch]] == Installing Elasticsearch Elasticsearch is provided in the following package formats: [horizontal] `zip`/`tar.gz`:: The `zip` and `tar.gz` packages are suitable for installation on any system and are the easiest choice for getting started with Elasticsearch on most systems. + <> or <> `deb`:... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.119473 |
[[important-settings]] == Important Elasticsearch configuration While Elasticsearch requires very little configuration, there are a number of settings which need to be considered before going into production. The following settings \*must\* be considered before going to production: \* <> \* <> \* <> \* <> \* <> \* <> \... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/important-settings.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.125479 |
[[stopping-elasticsearch]] == Stopping Elasticsearch An orderly shutdown of Elasticsearch ensures that Elasticsearch has a chance to cleanup and close outstanding resources. For example, a node that is shutdown in an orderly fashion will remove itself from the cluster, sync translogs to disk, and perform other related ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/stopping.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.172536 |
[[jvm-options]] === Setting JVM options You should rarely need to change Java Virtual Machine (JVM) options. If you do, the most likely change is setting the <>. The remainder of this document explains in detail how to set JVM options. The preferred method of setting JVM options (including system properties and JVM fla... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/jvm-options.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.011303 |
[[settings]] == Configuring Elasticsearch Elasticsearch ships with good defaults and requires very little configuration. Most settings can be changed on a running cluster using the <> API. The configuration files should contain settings which are node-specific (such as `node.name` and paths), or settings which a node r... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/configuration.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.089452 |
[[deb]] === Install Elasticsearch with Debian Package The Debian package for Elasticsearch can be <> or from our <>. It can be used to install Elasticsearch on any Debian-based system such as Debian and Ubuntu. The latest stable version of Elasticsearch can be found on the link:/downloads/elasticsearch[Download Elastic... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/deb.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.093829 |
for more information. [[deb-layout]] ==== Directory layout of Debian package The Debian package places config files, logs, and the data directory in the appropriate locations for a Debian-based system: [cols="> | conf | Environment variables including heap size, file descriptors. | /etc/default/elasticsearch d| | data ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/deb.asciidoc | 6.2 | elasticsearch-6-2 | [
…] | 0.098143 |
==== Running Elasticsearch with `systemd` To configure Elasticsearch to start automatically when the system boots up, run the following commands: [source,sh] -------------------------------------------------- sudo /bin/systemctl daemon-reload sudo /bin/systemctl enable elasticsearch.service ----------------------------... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/systemd.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
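The systemd row above quotes the `daemon-reload` and `enable` steps from the install docs. As a sketch, the full sequence to register, start, and inspect the service looks like this (the unit name `elasticsearch.service` comes from the row; the `start`/`status` calls are standard `systemctl` usage, not quoted from this excerpt):

```shell
# Reload unit files so systemd picks up the newly installed service
sudo /bin/systemctl daemon-reload
# Start Elasticsearch automatically on every boot
sudo /bin/systemctl enable elasticsearch.service
# Start it now and confirm it came up
sudo systemctl start elasticsearch.service
sudo systemctl status elasticsearch.service
```

These commands require root and a systemd-based distribution; they are shown as a sketch, not run here.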
[[windows]] === Install Elasticsearch with Windows MSI Installer beta[] Elasticsearch can be installed on Windows using the `.msi` package. This can install Elasticsearch as a Windows service or allow it to be run manually using the included `elasticsearch.exe` executable. TIP: Elasticsearch has historically been insta... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
log file for the installation process can be found within the `%TEMP%` directory, with a randomly generated name adhering to the format `MSI.LOG`. The path to a log file can be supplied using the `/l` command line argument ["source","sh",subs="attributes,callouts"] -------------------------------------------- start /wa... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
to download plugins over HTTP. Defaults to `""` `HTTPPROXYPORT`:: The proxy port to use to download plugins over HTTP. Defaults to `80` `XPACKLICENSE`:: When installing X-Pack plugin, the type of license to install, either `Basic` or `Trial`. Defaults to `Basic` `XPACKSECURITYENABLED`:: When installing X-Pack plugin wi... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
jvm.options and elasticsearch.yml configuration files to configure the service after installation. Most changes (like JVM settings) will require a restart of the service in order to take effect. [[upgrade-msi-gui]] ==== Upgrade using the graphical user interface (GUI) The `.msi` package supports upgrading an installed ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
[[zip-windows]] === Install Elasticsearch with `.zip` on Windows Elasticsearch can be installed on Windows using the `.zip` package. This comes with an `elasticsearch-service.bat` command which will set up Elasticsearch to run as a service. TIP: Elasticsearch has historically been installed on Windows using the `.zip` ar... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/zip-windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
to the path to the JDK installation that you want the service to use. If you upgrade the JDK, you are not required to reinstall the service but you must set the value of the system environment variable `JAVA\_HOME` to the path to the new JDK installation. However, upgrading across JVM types (e.g. JRE versus SE) is ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/zip-windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
and directories are, by default, contained within `%ES\_HOME%` -- the directory created when unpacking the archive. This is very convenient because you don't have to create any directories to start using Elasticsearch, and uninstalling Elasticsearch is as easy as removing the `%ES\_HOME%` directory. However, it is advi... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/zip-windows.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
==== Checking that Elasticsearch is running You can test that your Elasticsearch node is running by sending an HTTP request to port `9200` on `localhost`: [source,js] -------------------------------------------- GET / -------------------------------------------- // CONSOLE which should give you a response something lik... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/check-running.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
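The check-running row above describes sending `GET /` to port `9200`. A minimal sketch of extracting the version number from such a response with standard tools — the sample body below is illustrative, not captured from a live node; a real check would first populate `$response` with `curl -s http://localhost:9200/`:

```shell
# Illustrative response body for `GET /` on a 6.2 node (not captured live)
response='{"name":"node-1","cluster_name":"elasticsearch","version":{"number":"6.2.4"}}'

# Pull the version number out of the JSON with sed
version=$(printf '%s' "$response" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
echo "Elasticsearch version: $version"
```

For anything beyond a smoke test, a proper JSON parser (e.g. `jq`) is a better choice than `sed`.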
[horizontal] `JAVA\_HOME`:: Set a custom Java path to be used. `MAX\_OPEN\_FILES`:: Maximum number of open files, defaults to `65536`. `MAX\_LOCKED\_MEMORY`:: Maximum locked memory size. Set to `unlimited` if you use the `bootstrap.memory\_lock` option in elasticsearch.yml. `MAX\_MAP\_COUNT`:: Maximum number of memory ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/sysconfig-file.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
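The environment variables listed in that row (`MAX_OPEN_FILES`, `MAX_LOCKED_MEMORY`, …) live in the package's environment file — `/etc/default/elasticsearch` on Debian. A sketch of such a file, written to a temporary path here so nothing on the system is touched; the values shown are the defaults quoted in the row:

```shell
# Write an illustrative environment file to a temp location
# (a real install would edit /etc/default/elasticsearch as root)
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
# Maximum number of open files (the row above lists a default of 65536)
MAX_OPEN_FILES=65536
# Needed when bootstrap.memory_lock is enabled in elasticsearch.yml
MAX_LOCKED_MEMORY=unlimited
EOF

# Count the settings actually written, then clean up
settings=$(grep -c '=' "$tmpfile")
echo "wrote $settings settings"
rm -f "$tmpfile"
```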
Elasticsearch defaults to using `/etc/elasticsearch` for runtime configuration. The ownership of this directory and all files in this directory are set to `root:elasticsearch` on package installation and the directory has the `setgid` flag set so that any files and subdirectories created under `/etc/elasticsearch` are ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/etc-elasticsearch.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
[[rpm]] === Install Elasticsearch with RPM The RPM for Elasticsearch can be <> or from our <>. It can be used to install Elasticsearch on any RPM-based system such as OpenSuSE, SLES, CentOS, Red Hat, and Oracle Enterprise. NOTE: RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/rpm.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
| /usr/share/elasticsearch/plugins | | repo | Shared file system repository locations. Can hold multiple locations. A file system repository can be placed in to any subdirectory of any directory specified here. d| Not configured | path.repo |======================================================================= includ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/rpm.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
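The `path.repo` setting mentioned in the directory-layout row is plain YAML in `elasticsearch.yml`; it can hold multiple shared file system repository locations. A sketch of such a fragment — both paths are hypothetical, chosen only for illustration:

```shell
# Illustrative elasticsearch.yml fragment registering two shared
# file system repository locations (both paths are hypothetical)
repo_yaml=$(cat <<'EOF'
path.repo:
  - /mnt/backups/es
  - /mnt/long_term_backups/es
EOF
)
printf '%s\n' "$repo_yaml"
```

Any snapshot repository registered later must point at a subdirectory of one of the locations listed here.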
[[zip-targz]] === Install Elasticsearch with `.zip` or `.tar.gz` Elasticsearch is provided as a `.zip` and as a `.tar.gz` package. These packages can be used to install Elasticsearch on any system and are the easiest package format to use when trying out Elasticsearch. The latest stable version of Elasticsearch can be ... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/zip-targz.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
This is very convenient because you don't have to create any directories to start using Elasticsearch, and uninstalling Elasticsearch is as easy as removing the `$ES\_HOME` directory. However, it is advisable to change the default locations of the config directory, the data directory, and the logs directory so that you... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/install/zip-targz.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
[[file-descriptors]] === File Descriptors [NOTE] This is only relevant for Linux and macOS and can be safely ignored if running Elasticsearch on Windows. On Windows the JVM uses an https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API] limited only by available resources. Elasticsearch us... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/sysconfig/file-descriptors.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
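The file-descriptors row above is about raising the open-file limit on Linux and macOS. A minimal sketch of checking the current soft limit against the 65536 threshold the Elasticsearch docs ask for:

```shell
# Report the soft open-file limit for the current shell; Elasticsearch's
# bootstrap checks want at least 65536 on Linux and macOS
current_limit=$(ulimit -n)
echo "soft nofile limit: $current_limit"

# Simple pass/fail against the 65536 threshold ("unlimited" passes)
if [ "$current_limit" = "unlimited" ] || [ "$current_limit" -ge 65536 ]; then
  echo "limit OK for Elasticsearch"
else
  echo "limit too low; raise it, e.g. with: ulimit -n 65536"
fi
```

Note that `ulimit -n` only affects the current shell; a persistent change goes in `/etc/security/limits.conf` or the service's environment file.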
[[setup-configuration-memory]] === Disable swapping Most operating systems try to use as much memory as possible for file system caches and eagerly swap out unused application memory. This can result in parts of the JVM heap or even its executable pages being swapped out to disk. Swapping is very bad for performance, f... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/sysconfig/swap.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
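The swap row above explains why swapped-out heap pages hurt performance. The usual remedies from the same Elasticsearch page can be sketched as follows; all of them need root and are shown only as a sketch, not run here:

```shell
# 1. Disable swap entirely until the next reboot
sudo swapoff -a

# 2. Or reduce the kernel's tendency to swap application memory
sudo sysctl -w vm.swappiness=1

# 3. Or lock the JVM heap in RAM via elasticsearch.yml:
#      bootstrap.memory_lock: true
#    (requires MAX_LOCKED_MEMORY=unlimited in the service environment)
```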
[[vm-max-map-count]] === Virtual memory Elasticsearch uses a <> directory by default to store its indices. The default operating system limits on mmap counts are likely to be too low, which may result in out of memory exceptions. On Linux, you can increase the limits by running the following command as `root`: [source,s... | https://github.com/elastic/elasticsearch/blob/6.2//docs/reference/setup/sysconfig/virtual-memory.asciidoc | 6.2 | elasticsearch-6-2 | [
(384-dimensional embedding vector elided) ] |
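The virtual-memory row above refers to raising the mmap count limit. A sketch of inspecting and raising it — `262144` is the value the Elasticsearch docs recommend; the write requires root and is left commented out:

```shell
# Read the current mmap limit; on non-Linux systems the key is absent,
# so fall back to a note instead of failing
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo "not available on this OS")
echo "vm.max_map_count: $current"

# To raise it (as root) to the recommended value:
#   sysctl -w vm.max_map_count=262144
# and persist it by adding `vm.max_map_count=262144` to /etc/sysctl.conf
```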