Alphabetical List of Options
Jason Stevens
September 07, 2023 12:22 (Updated)

Index

A

addLemmaTokens, Lucene Options, Lucene Options
addReadings, Lucene Options, Lucene Options
alternativeEnglishDisambiguation, Disambiguation, Analyzer Options
alternativeGreekDisambiguation, Disambiguation, Analyzer Options
alternativeSpanishDisambiguation, Disambiguation, Analyzer Options
analysisCacheSize, Analyzers, Analyzer Options
analyze, Analyzer Options
atMentions, Social Media Tokens: Emoji & Emoticons, Hashtags, @Mentions, Email Addresses, URLs, Tokenizer Options

B

breakAtAlphaNumIntraWordPunct, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options

C

cacheSize, Analyzers, Analyzer Options
caseSensitive, Tokenizers, Analyzers, Tokenizer Options, Analyzer Options
compoundComponentSurfaceForms, Compounds, Analyzer Options
consistentLatinSegmentation, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
conversionLevel, CSC Options, Chinese Script Converter Options
customPosTagsUri, Returning Universal Part-of-Speech (POS) Tags, Analyzer Options
customTokenizeContractionRulesUri, Splitting Contractions, Analyzer Options

D

decomposeCompounds, Compounds, Chinese and Japanese Lexical Tokenization, Analyzer Options, Chinese and Japanese Options
deepCompoundDecomposition, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
defaultTokenizationLanguage, Tokenizers, Tokenizer Options
deliverExtendedTags, Analyzers, Analyzer Options
dictionaryDirectory, Initial and Path Options, General Options
disambiguate, Disambiguation, Analyzer Options
disambiguatorType, Hebrew Disambiguator Types, Hebrew Options

E

emailAddresses, Social Media Tokens: Emoji & Emoticons, Hashtags, @Mentions, Email Addresses, URLs, Tokenizer Options
emoticons, Social Media Tokens: Emoji & Emoticons, Hashtags, @Mentions, Email Addresses, URLs, Tokenizer Options

F

favorUserDictionary, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
fragmentBoundaryDelimiters, Structured Text, Tokenizer Options
fragmentBoundaryDetection, Structured Text, Tokenizer Options

G

generateAll, Chinese and Japanese Readings, Chinese and Japanese Options
guessHebrewPrefixes, Hebrew Analyses, Hebrew Options

H

hashtags, Social Media Tokens: Emoji & Emoticons, Hashtags, @Mentions, Email Addresses, URLs, Tokenizer Options

I

identifyContractionComponents, Lucene Options, Lucene Options
ignoreSeparators, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
ignoreStopwords, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
includeHebrewRoots, Hebrew Analyses, Hebrew Options

J

joinKatakanaNextToMiddleDot, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options

L

language, Initial and Path Options, CSC Options, General Options, Chinese Script Converter Options
lemDictionaryPath, Activating User Dictionaries in Lucene, Lucene Options
licensePath, Initial and Path Options, General Options
licenseString, Initial and Path Options, General Options

M

maxTokensForShortLine, Structured Text, Tokenizer Options
minLengthForScriptChange, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
minNonPrimaryScriptRegionLength, Tokenizers, Tokenizer Options
modelDirectory, Initial and Path Options, General Options

N

nfkcNormalize, Tokenizers, Tokenizer Options
normalizationDictionaryPaths, Analyzers, Analyzer Options

P

pos, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options

Q

query, Tokenizers, Analyzers, Tokenizer Options, Analyzer Options

R

readingByCharacter, Chinese and Japanese Readings, Chinese and Japanese Options
readings, Chinese and Japanese Readings, Chinese and Japanese Options
readingType, Chinese and Japanese Readings, Chinese and Japanese Options
replaceTokensWithLemmas, Lucene Options, Lucene Options
rootDirectory, Initial and Path Options, General Options

S

segDictionaryPath, Activating User Dictionaries in Lucene, Lucene Options
segmentNonJapanese, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
separateNumbersFromCounters, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
separatePlaceNameFromSuffix, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options

T

targetLanguage, CSC Options, Chinese Script Converter Options
tokenizeContractions, Splitting Contractions, Analyzer Options
tokenizeForScript, Tokenizers, Tokenizer Options
tokenizerType, Tokenizers, Analyzers, Tokenizer Options

U

universalPosTags, Returning Universal Part-of-Speech (POS) Tags, Analyzer Options
urls, Social Media Tokens: Emoji & Emoticons, Hashtags, @Mentions, Email Addresses, URLs, Tokenizer Options
userDefinedDictionaryPath, Activating User Dictionaries in Lucene, Lucene Options
userDefinedReadingDictionaryPath, Activating User Dictionaries in Lucene, Lucene Options
useVForUDiaeresis, Chinese and Japanese Readings, Chinese and Japanese Options

W

whiteSpaceIsNumberSep, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options
whitespaceTokenization, Chinese and Japanese Lexical Tokenization, Chinese and Japanese Options

Related articles

Analyzers Options
Solr Plugin
Using RBL in Apache Lucene
User Dictionaries