RLI may be used to detect the language of a monolingual document, or the language regions of a document that contains blocks of text in different languages. To use RLI, you must have a Java Runtime Environment, version 11 or 17. RLI is tested with OpenJDK.
For best results, we recommend disabling the language profiles for any languages that you are certain will not appear in your data.
The following code fragments assume the imports listed below. Unless otherwise stated, other classes belong to the com.basistech.rosette.languageidentifier package.
import com.basistech.rosette.dm.AnnotatedText;
import com.basistech.rosette.dm.LanguageDetection;
import com.basistech.util.LanguageCode;
import com.basistech.util.ISO15924;
-
Create a LanguageIdentifierBuilder.
LanguageIdentifierBuilder liBuilder =
new LanguageIdentifierBuilder(new File("pathToRLIRootDirectory"));
If you have not copied your license file to the standard location (pathToRLIRootDirectory/licenses/rlp-license.xml), use the license method to specify a valid license (see the sketch after this procedure).
-
(Optional) Set detection options; see the sketch after this procedure for an example.
-
Create an Annotator for detecting the language of the input text: com.basistech.rosette.languageidentifier.LanguageIdentificationAnnotator implements the com.basistech.rosette.dm.Annotator interface.
LanguageIdentificationAnnotator langAnnotator =
liBuilder.buildSingleLanguageAnnotator();
Note: To create an annotator to find language regions in input text that contains blocks of text in different languages, see Detecting Language Regions.
-
Use langAnnotator to annotate the input text, which returns a com.basistech.rosette.dm.AnnotatedText object.
AnnotatedText results =
langAnnotator.annotate("This is the cereal shot from guns.");
-
The AnnotatedText object provides a method for retrieving a LanguageDetection object for the entire body of the input text.
LanguageDetection langDetect = results.getWholeTextLanguageDetection();
-
The LanguageDetection object contains a list of LanguageDetection.DetectionResult objects in descending order of confidence, so the first result is the best.
List<LanguageDetection.DetectionResult> detectResults =
langDetect.getDetectionResults();
LanguageDetection.DetectionResult bestResult = detectResults.get(0);
LanguageCode lang = bestResult.getLanguage();
ISO15924 script = bestResult.getScript();
String encoding = bestResult.getEncoding();
double confidence = bestResult.getConfidence();
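Putting these steps together, the following fragment is a minimal sketch of the whole flow. The root directory path and xmlLicenseString are placeholders; the license call is only needed if your license is not in the standard location, and the shortStringThreshold call is optional.
// Minimal sketch combining the steps above; path and license string are placeholders.
LanguageIdentificationAnnotator annotator =
    new LanguageIdentifierBuilder(new File("pathToRLIRootDirectory"))
        .license(xmlLicenseString)      // only if the license is not in the standard location
        .shortStringThreshold(10)       // optional: treat inputs of fewer than 10 characters as short strings
        .buildSingleLanguageAnnotator();
AnnotatedText annotated = annotator.annotate("This is the cereal shot from guns.");
LanguageDetection.DetectionResult best =
    annotated.getWholeTextLanguageDetection().getDetectionResults().get(0);
System.out.format("%s %s %s %g%n",
    best.getLanguage().languageName(),
    best.getScript(),
    best.getEncoding(),
    best.getConfidence());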
Processing RawData
The preceding procedure detected the language of a Java string. If you are processing a file whose encoding is not UTF-8, or you do not know the encoding, you can read in the bytes, place them in a com.basistech.rosette.dm.RawData object, and use that object as the input to the annotator. RLI will detect the language, the writing script, and the encoding. For an example, see Sample: IdentifyExample.
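For instance, a minimal sketch of the RawData path might look like the following. The file name is a placeholder, langAnnotator is an annotator built as shown above with the default shortStringThreshold of 0 (RawData input is not supported when short-string detection is on), and the fragment additionally uses com.basistech.rosette.dm.RawData, java.nio.ByteBuffer, java.util.HashMap, and java.util.List.
// Read the raw bytes of a file whose encoding is unknown (placeholder file name).
byte[] bytes = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("unknown-encoding.txt"));
// Wrap the bytes in a RawData object; the second argument is a (here empty) metadata map.
RawData rawInput = new RawData(ByteBuffer.wrap(bytes), new HashMap<String, List<String>>());
AnnotatedText rawResults = langAnnotator.annotate(rawInput);
LanguageDetection.DetectionResult bestRaw =
    rawResults.getWholeTextLanguageDetection().getDetectionResults().get(0);
// For RawData input, RLI reports language, script, and encoding.
System.out.format("%s %s %s%n", bestRaw.getLanguage(), bestRaw.getScript(), bestRaw.getEncoding());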
Detecting the Language of Short Strings
RLI can apply a special procedure for detecting the language of short strings, such as tweets. Use LanguageIdentifierBuilder#shortStringThreshold(int) to set the threshold for short-string language detection. The default is 0, so short-string detection is turned off by default. If you set it to 10, for example, RLI processes any input string containing fewer than 10 characters as a short string.
RLI uses a different rule- and model-based procedure for processing short strings.
-
languageHint and languageWeightAdjustment are the only options that apply.
-
The input to be analyzed must be a Java string; other input formats are not supported for short-string analysis. If short-string analysis is turned on (shortStringThreshold > 0) and the input is not a Java string, RLI throws an UnsupportedOperationException.
-
For short-string language detection, RLI returns an ordered list of one or more LanguageDetection results with language, script, encoding (UTF-16BE), and confidence.
-
For a list of the languages that RLI can detect with short-string analysis, see RLI Languages.
-
If unable to detect a language, short-string detection returns Unknown (xxx) with a confidence of 1.0.
Example: Set the shortStringThreshold to 7 and process 温州市委常委. RLI returns the following:
Language(Script)/Encoding Confidence
Chinese(Hani)/UTF-16BE 0.924089
Japanese(Jpan)/UTF-16BE 0.075911
For the best results with short strings, set the shortStringThreshold to a high number. To determine the best value for the threshold, test your configuration on representative data: the best value may vary by language and by whether you are optimizing for precision or recall, and it also depends on the set of enabled language profiles. We recommend disabling the language profiles for any languages that you are certain will not appear in your data.
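The following fragment sketches the example above in code; the root directory path is a placeholder.
// Build an annotator with a short-string threshold of 7 and process a 6-character string.
LanguageIdentificationAnnotator shortAnnotator =
    new LanguageIdentifierBuilder(new File("pathToRLIRootDirectory"))
        .shortStringThreshold(7)
        .buildSingleLanguageAnnotator();
AnnotatedText shortResults = shortAnnotator.annotate("温州市委常委");
for (LanguageDetection.DetectionResult result
        : shortResults.getWholeTextLanguageDetection().getDetectionResults()) {
    // Expected to print Chinese(Hani)/UTF-16BE and Japanese(Jpan)/UTF-16BE with the confidences shown above.
    System.out.format("%s(%s)/%s %g%n",
        result.getLanguage().languageName(),
        result.getScript(),
        result.getEncoding(),
        result.getConfidence());
}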
Detecting Language Regions
For input with blocks of text in different languages, an Annotator can return a list of LanguageDetection objects, one for each language region.
-
Use the LanguageIdentifierBuilder to build an Annotator for detecting language regions.
Annotator regionAnnotator = new LanguageIdentifierBuilder(new File(rootDirectory))
.buildLanguageRegionAnnotator();
-
Use the Annotator to build an AnnotatedText object containing the language region results.
AnnotatedText lrResults = regionAnnotator.annotate(lrString);
-
The following fragment illustrates how to retrieve each region's attributes including its best (and only) language result.
for (LanguageDetection languageDetection
: lrResults.getLanguageDetectionRegions()) {
System.out.format("Region from %3d to %3d%n",
languageDetection.getStartOffset(),
languageDetection.getEndOffset());
LanguageDetection.DetectionResult result =
languageDetection.getDetectionResults().get(0);
System.out.format("%15s %15s %g%n",
result.getLanguage().languageName(),
result.getScript(),
result.getConfidence());
}
How It Works
The Annotator first detects sentence boundaries and script regions. Each script region is a potential language region. If the evaluation is ambiguous, each sentence within the script region is evaluated. Mid-sentence language boundaries are not detected unless the script changes. If a language region is shorter than the value set for shortStringThreshold, RLI will use short-string detection to process the region.
The following sample application (sample.IdentifyExample) illustrates the basic usage pattern for identifying the language, script, and encoding of a monolingual document, and the language regions of a multilingual document.
To compile and run this sample, you can use the Apache Ant build script that accompanies the sample. The build script assumes that the RLP license, which is required to access RLI, is in RLI-ROOT-DIRECTORY/licenses/rlp-license.xml, so you must copy rlp-license.xml to that directory before you can run the sample. With Ant on your path, navigate in a command shell to rli-je-<version>/samples/rli-je and call the Ant build script:
ant run
The sample program performs four operations:
-
It processes a monolingual text file (example.txt) as raw text (an array of bytes) with standard analysis (shortStringThreshold = 0) and writes five results to output.txt in order of preference. (The best result, i.e., the result with the smallest distance from a built-in profile, is first.) Each result contains language, encoding, script, and confidence.
-
It attempts to process the same text file as raw text with shortStringThreshold = 1, which throws an UnsupportedOperationException, and reports the problem. RLI does not know the length of the text in an array of bytes, so it cannot perform short-string analysis.
-
It converts the array of bytes from a short text file (short.txt) into a string, interpreting the bytes as UTF-8. With shortStringThreshold set to a value greater than the length of the text, it performs short-string analysis and reports the best result for language and confidence. The short-string sample we include is "We'll study Machiavelli", which short-string analysis identifies as English, whereas standard RLI analysis would return Italian as the best match.
-
It processes a multilingual text file (language-region-sample.txt) and reports the best result for each region, with region offsets.
The source: samples/rli-je/src/sample/IdentifyExample.java
/******************************************************************************
** This data and information is proprietary to, and a valuable trade secret
** of, Basis Technology Corp. It is given in confidence by Basis Technology
** and may only be used as permitted under the license agreement under which
** it has been distributed, and in no other way.
**
** Copyright (c) 2016 Basis Technology Corporation All rights reserved.
**
** The technical data and information provided herein are provided with
** `limited rights', and the computer software provided herein is provided
** with `restricted rights' as those terms are defined in DAR and ASPR
** 7-104.9(a).
******************************************************************************/
package sample;
import com.basistech.rosette.dm.AnnotatedText;
import com.basistech.rosette.dm.Annotator;
import com.basistech.rosette.dm.LanguageDetection;
import com.basistech.rosette.dm.RawData;
import com.basistech.rosette.languageidentifier.LanguageIdentificationAnnotator;
import com.basistech.rosette.languageidentifier.LanguageIdentifierBuilder;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.List;
import static java.nio.file.Files.readAllBytes;
import static java.nio.file.Paths.get;
/**
* Creates a LanguageIdentifier instance, prints the results
* of language detection on the bytes from example_text, and the results
* of language region detection on the bytes from language-region-sample.txt.
*/
public final class IdentifyExample {
private IdentifyExample() {
}
private static byte[] getBytes(String fileName) throws IOException {
File inFile = new File(fileName);
byte[] bytes;
try (InputStream in = new FileInputStream(inFile)) {
// Get the size of the file
long length = inFile.length();
// byte array cannot be made from long
if (length > Integer.MAX_VALUE) {
// the file is too large for reading in one go
throw new RuntimeException("File too large for single read");
} else {
bytes = new byte[(int) length];
}
// read in the bytes
in.read(bytes);
}
return bytes;
}
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.err.println("Usage: IdentifyExample RLI-ROOT-DIRECTORY");
System.exit(1);
return;
}
IdentifyExample id = new IdentifyExample();
id.run(args[0]);
}
public void run(String rootDirectory) throws Exception {
// 1. Process example.txt as RawData with standard analysis: shortStringThreshold = 0.
// (shortStringThreshold = 0)
// get the bytes from the example.txt file
byte[] bytes = getBytes("example.txt");
LanguageIdentificationAnnotator annotator = new LanguageIdentifierBuilder(new File(rootDirectory))
.buildSingleLanguageAnnotator();
// Process the monolingual text file (example.txt) as Raw Data; shortStringThreshold is equal to 0,
// so shortString language detection is inactive.
try {
RawData input = new RawData(ByteBuffer.wrap(bytes),
new HashMap<String, List<String>>());
AnnotatedText results = annotator.annotate(input);
// print the language detection results.
System.out.println("1. RawData, shortStringThreshold = 0: Language Detection for example.txt.");
System.out.println("------------------------------------------------------------------------");
System.out.format("%15s %15s %15s %s%n",
"Language",
"Encoding",
"Script",
"Confidence");
for (LanguageDetection.DetectionResult result
: results.getWholeTextLanguageDetection().getDetectionResults()) {
System.out.format("%15s %15s %15s %g%n",
result.getLanguage().languageName(),
result.getEncoding(),
result.getScript(),
result.getConfidence());
}
} catch (RuntimeException e) {
e.printStackTrace();
}
System.out.println();
// 2. Run short string algorithm with RawData.
annotator = new LanguageIdentifierBuilder(new File(rootDirectory)).shortStringThreshold(1)
.buildSingleLanguageAnnotator();
// Process the monolingual text file (example.txt) as RawData, with shortStringThreshold > 0,
// which throws UnsupportedOperationException, because shortString detection does
// not support RawData input (length of string content is unknown).
try {
// print the language detection results.
System.out.println("2. RawData, shortStringThreshold > 0: Language Detection for example.txt.");
System.out.println("------------------------------------------------------------------------");
RawData input2 = new RawData(ByteBuffer.wrap(bytes),
new HashMap<String, List<String>>());
AnnotatedText results2 = annotator.annotate(input2);
for (LanguageDetection.DetectionResult result
: results2.getWholeTextLanguageDetection().getDetectionResults()) {
System.out.format("%15s %15s %15s %g%n",
result.getLanguage().languageName(),
result.getEncoding(),
result.getScript(),
result.getConfidence());
}
} catch (RuntimeException e) {
e.printStackTrace();
}
System.out.println();
// 3. Process the short-string text file (short.txt) as string (Annotated Text)
// and report the best result. Also, illustrate the use of LanguageIdentifierBuilder.license(String)
// when one wants to pass in a license via the API.
bytes = getBytes("short.txt");
try {
// Get the license in the root directory for illustration purposes.
String xmlLicense =
new String(readAllBytes(get(rootDirectory, "licenses", "rlp-license.xml")), StandardCharsets.UTF_8);
String text = new String(bytes, StandardCharsets.UTF_8);
int threshold = text.length() + 1;
annotator =
new LanguageIdentifierBuilder(new File(rootDirectory))
.license(xmlLicense)
.shortStringThreshold(threshold)
.buildSingleLanguageAnnotator();
// Alternatively, use the models bundled in the JAR.
/*
annotator =
new LanguageIdentifierBuilder(xmlLicense)
.useModelsInJar(true)
.shortStringThreshold(threshold)
.buildSingleLanguageAnnotator();
*/
System.out.format("%s%d%s%n",
"3. String from file, shortStringThreshold = ",
threshold,
": the best result with short-string analysis of short.txt.");
System.out.println("------------------------------------------------------------------------");
System.out.format("%15s %15s %15s %s%n",
"Language",
"Encoding",
"Script",
"Confidence");
AnnotatedText shortStringText = annotator.annotate(text);
// Get the best result.
LanguageDetection.DetectionResult result =
shortStringText.getWholeTextLanguageDetection().getDetectionResults().get(0);
System.out.format("%15s %15s %15s %g%n",
result.getLanguage().languageName(),
result.getEncoding(),
result.getScript(),
result.getConfidence());
} catch (RuntimeException e) {
e.printStackTrace();
}
System.out.println();
// 4. Process a multilingual text file (language-region-sample.txt) and report
// the best result for each region with region offsets.
try {
/* now language regions */
System.out.println("4. String Language Regions from language-region-sample.txt.");
System.out.println("------------------------------------------------------------------------");
Annotator regionAnnotator = new LanguageIdentifierBuilder(new File(rootDirectory))
.buildLanguageRegionAnnotator();
byte[] lrData = getBytes("language-region-sample.txt");
String lrString = new String(lrData, StandardCharsets.UTF_8);
AnnotatedText lrResults = regionAnnotator.annotate(lrString);
// print the best language detection result for each language region.
for (LanguageDetection languageDetection
: lrResults.getLanguageDetectionRegions()) {
System.out.format("Region from %3d to %3d%n",
languageDetection.getStartOffset(),
languageDetection.getEndOffset());
LanguageDetection.DetectionResult result =
languageDetection.getDetectionResults().get(0);
System.out.format("%15s %15s %g%n",
result.getLanguage().languageName(),
result.getScript(),
result.getConfidence());
}
} catch (RuntimeException e) {
e.printStackTrace();
}
}
}
[java] 1. RawData, shortStringThreshold = 0: Language Detection for example.txt.
[java] ------------------------------------------------------------------------
[java] Language Encoding Script Confidence
[java] English US-ASCII Latn 0.626407
[java] Romanian US-ASCII Latn 0.132481
[java] Norwegian US-ASCII Latn 0.124547
[java] Estonian US-ASCII Latn 0.121240
[java] Malay, Standard US-ASCII Latn 0.103885
[java]
[java] 2. RawData, shortStringThreshold > 0: Language Detection for example.txt.
[java] ------------------------------------------------------------------------
[java] shortStringThreshold = 1, but using the short string detection algorithm on RawData is not supported.
[java]
[java] 3. String from file, shortStringThreshold = 25: the best result with short-string analysis of short.txt.
[java] ------------------------------------------------------------------------
[java] Language Encoding Script Confidence
[java] English UTF-16BE Latn 0.369665
[java]
[java] 4. String Language Regions from language-region-sample.txt.
[java] ------------------------------------------------------------------------
[java] Region from 0 to 2626
[java] Spanish Latn 0.477605
[java] Region from 2626 to 3690
[java] French Latn 0.618590
[java] Region from 3690 to 7834
[java] German Latn 0.365043
You can change the content of the sample text files and use Ant as described above to rerun the sample.
To remove the sample build that has been placed in the rli-je-<version>/samples/rli-je/build directory, type:
ant clean
For standard language detection, a variety of options can be set before detection is performed. Set the shortStringThreshold option to activate short-string language detection, in which case most other options are ignored.
For details on the options, see the Java API documentation for com.basistech.rosette.languageidentifier.LanguageIdentifierBuilder.
-
languageHint: Provide a language hint.
The language is denoted by a com.basistech.util.LanguageCode constant. The weight is a float from 1 to 99 (the default is 1.0). Standard analysis and short-string analysis use this hint differently.
This option is deprecated; it has been superseded by languageWeightAdjustment.
Standard Analysis. The hint reduces the distance of the input profile from the specified language's profile by the specified weight (treated as a percentage). For example, LanguageCode.DUTCH with the default weight reduces the distance from the Dutch language profile by 1.0%, and LanguageCode.GERMAN with a weight of 50.005 reduces the distance of the input profile from the German language profile by 50.005%. If the language is already known, use a large hint weight to suppress language detection, leaving only encoding and script detection to be performed.
Short-String Analysis. For short-string analysis, the hint weight provides a greater boost to confidence than for standard analysis. Confidence is boosted by a factor of 100/(100 - weight). For example, a 50% French weight boosts French confidence by a factor of 2, and a 75% French weight boosts it by a factor of 4.
-
encodingHint: Provide an encoding hint.
Use com.basistech.rosette.util.EncodingCode to specify an encoding. The weight is a float from 1 to 100 (the default is 3.0). The hint reduces the distance of the input profile from the specified encoding's profile by the specified weight (treated as a percentage). For example, EncodingType.Ascii with a weight of 1.0 reduces the distance from the input profile by 1.0%. A value of 100 forces only those results that match the encoding hint to be considered during detection.
This option is deprecated.
-
koreanDialects: Set whether to support North Korean and South Korean language identification.
If this parameter is true, language classification for North Korean or South Korean is enabled. By default, it is false and Hangul-script text is labeled as Korean only. This option uses the same algorithm whether the text is long or short, and that algorithm is more similar to the short-string classifier; one consequence is that koreanDialects requires the text to be in UTF-8 encoding.
-
minValidChars: Set the minimum number of valid non-whitespace characters required for identification.
If the input data has fewer than the minimum number of valid characters, no detection is performed. The value is an integer of 1 or greater.
-
profileDepth: Set the profile depth.
The profile depth is the maximum number of n-grams used in the input profile. If the depth is 100, the 100 most frequent n-grams are included in the input profile. A small depth improves detection speed but reduces detection accuracy. The value is an integer of 1 or greater.
-
languageWeightAdjustment: Adjust a language weight.
Use this setting to change the weight for a language, for the purpose of detecting another language that is present in the document. The weight parameter in the languageWeightAdjustment method is a non-negative percentage that is applied to the language's weight; for example, 70 reduces the weight to 70% of its default value.
For example, you may be interested in detecting German in a document that is mostly English but also contains some German. In that case, reduce the language weight for English or increase the language weight for German (see the sketch after this list).
-
ambiguityThreshold: Set the ambiguity threshold.
If the distance difference between the input profile and a candidate built-in profile is smaller than ambiguityThreshold/100 × bestProfileDistance, the result is flagged as ambiguous. bestProfileDistance is the distance measure of the best matching profile. If the threshold is 0, all results are flagged as unambiguous; if it is 100, all results are flagged as ambiguous. The value is a float from 0 to 100.
-
invalidityThreshold: Set the invalidity threshold.
If the input profile's distance measure is smaller than invalidityThreshold/100 × maximumProfileDistance, the result is flagged as valid. maximumProfileDistance is the maximum distance any input text can have. If the threshold is 0, all results are flagged as invalid; if it is 100, all results are flagged as valid. The value is a float from 0 to 100.
-
minRegionLength: Set the minimum length for a language region (the amount of text examined in a script region). This option is used only for language region detection.
-
maxRegionLength: Set the maximum region length. This option is used only for language region detection.
-
shortStringThreshold: Set the short-string threshold.
By default, the short-string threshold is 0 and short-string language detection is inactive. To turn it on, set the threshold to a positive integer, such as 6: if an input string contains fewer characters than the threshold, RLI performs short-string language detection.
Note: When RLI is performing short-string language detection, only languageHint and language weight adjustments are used; the other options are ignored.
-
useModelsInJar: Set whether to use the short-string models bundled in the JAR instead of the models in the file system. This option has an effect only when RLI is performing short-string language detection.
-
minNonScriptioContinuaRegionLength: Set the minimum length of a region of non-scriptio-continua characters adjacent to a scriptio continua region. A scriptio continua script is one that does not separate its tokens with spaces. If a non-scriptio-continua region (e.g., Latin) has fewer characters than the minimum and is adjacent to a scriptio continua region (e.g., Han), the former is merged into the latter.
This option is used only for script region detection.
-
uniqueLanguages: Set whether to ignore script and encoding differences in language detection results.
By default, results that share a language but have different scripts or encodings, such as Simplified Chinese and Traditional Chinese, are returned as separate results. If script and encoding differences are ignored, only one result is returned per language.
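As a sketch of the languageWeightAdjustment scenario described above (detecting German in a mostly English document), the fragment below reduces the English weight before building the annotator. This is a sketch only: the LanguageCode.ENGLISH constant and the parameter types assumed for languageWeightAdjustment (a LanguageCode and a float percentage) should be verified against the LanguageIdentifierBuilder Javadoc.
// Sketch only: reduce the English weight to 70% of its default so that German
// text in a mostly English document is easier to detect. Check the Javadoc for
// the exact languageWeightAdjustment signature; the path is a placeholder.
LanguageIdentificationAnnotator tunedAnnotator =
    new LanguageIdentifierBuilder(new File("pathToRLIRootDirectory"))
        .languageWeightAdjustment(LanguageCode.ENGLISH, 70.0f)
        .buildSingleLanguageAnnotator();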
You can run RLI from the command line. To see all the arguments, run
bin/RLICmd -help
To process a file and see the results:
bin/RLICmd -rootDirectory RLI-ROOT-DIRECTORY -in INPUT-FILE [Options]
where RLI-ROOT-DIRECTORY is the path to your RLI root directory. RLI uses this path to find your license file (RLI-ROOT-DIRECTORY/licenses/rlp-license.xml).
INPUT-FILE is the path to the text file you want to process. For short-string analysis, the file must be UTF-8.
Options are the optional arguments you can provide. For example, use -multilingual to detect language regions in a multilingual document. Use -maxShortStringLength n to apply short-string detection to a string with n or fewer characters.
Use the option -lineInput to process each line of an input file separately. For example, a document with three lines returns three sets of results, one for each line. The result sets are separated by ----.
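For example, an illustrative invocation that processes each line of the input file separately (all of the flags used here are described above):
bin/RLICmd -rootDirectory RLI-ROOT-DIRECTORY -in INPUT-FILE -lineInput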
The output format is:
Language(script)/Encoding Confidence
To obtain a list of supported language-script-encoding combinations for standard language detection and for short-string analysis:
bin/RLICmd -rootDirectory RLI-ROOT-DIRECTORY -profileInfo
RLI Performance Optimization Advice
Annotator objects are meant to be used on a per-thread basis; they are not re-entrant and should not be used by more than one thread at once. LanguageIdentifierBuilder objects can be used by multiple threads to construct Annotators at once. However, setting and getting configuration values is not thread-safe and should not occur during Annotator construction. For multithreaded applications, we recommend constructing a pool of Annotators to minimize the overhead of Annotator construction. Because the settings of an Annotator cannot be changed after it is built, if Annotators with multiple configurations are desired, they should be stored separately.
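One simple way to follow this advice is to give each worker thread its own Annotator. The sketch below uses a ThreadLocal over a shared, fully configured builder; a bounded pool of prebuilt Annotators is an equally valid design. The root directory path and inputString are placeholders.
// Sketch: one Annotator per worker thread, built from a shared builder.
// All builder configuration must be finished before the first get() call.
final LanguageIdentifierBuilder sharedBuilder =
    new LanguageIdentifierBuilder(new File("pathToRLIRootDirectory"));
final ThreadLocal<LanguageIdentificationAnnotator> perThreadAnnotator =
    ThreadLocal.withInitial(() -> {
        try {
            return sharedBuilder.buildSingleLanguageAnnotator();
        } catch (Exception e) {
            // rethrow unchecked for this sketch
            throw new RuntimeException(e);
        }
    });

// In each worker thread (inputString is the text to identify):
AnnotatedText annotated = perThreadAnnotator.get().annotate(inputString);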
The minimum amount of Java heap required to run RLI is 150 MB. An -Xmx setting of (50 + 100 × [no. active annotator threads]) MB has been observed to produce optimal performance. If memory is tight, a setting of no less than (20 + 35 × [no. active annotator threads]) MB may still run, although more slowly. This advice applies equally to all languages and includes short-string analysis.
With sufficient available heap space on a machine with 8 cores, RLI's throughput on an 8MB document is around 11 million characters per second for monolingual detection and 4.5 million characters per second for language region detection.
RLI uses the Simple Logging Facade for Java (SLF4J) to log activities; see the SLF4J user manual.
SLF4J is a facade for various logging APIs. Using SLF4J, the developer or an administrator can determine at runtime which of many popular logging systems to use.
In the lib directory, we include an SLF4J API jar (slf4j-api-1.7.5.jar) and an implementation jar (slf4j-log4j12-1.7.5.jar). When you place both of these jars on your classpath, the logging facade is bound to the implementation, and logging is turned on. By default, logging info messages are output to the console (System.err). As the Javadoc for org.apache.log4j.PropertyConfigurator explains, you can use system properties or a properties file to send output to a file and control other logging parameters. You then instantiate a logger and include logging statements in your application.
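For example, a typical SLF4J logger declaration in application code looks like the following (standard SLF4J usage, not specific to RLI; the class name here is just a placeholder):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyRliApplication {
    // Standard SLF4J pattern: one logger per class, resolved by the bound implementation (here log4j).
    private static final Logger LOG = LoggerFactory.getLogger(MyRliApplication.class);

    public void identify(String text) {
        LOG.info("Identifying language for a string of {} characters", text.length());
        // ... build the annotator and call annotate(text) here ...
    }
}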
RLI includes a sample configuration file, etc/log4j.properties, which is also included in the jar file rli-je-shaded-XXversion.jar. To experiment with logging options, you can modify etc/log4j.properties and invoke the command line, bin/RLICmd.
If you want to use SLF4J with a different implementation, put the appropriate adapter and implementation jar files and properties file on your classpath.