lucenenet: One character is missing in class ASCIIFoldingFilter

I think one character in class ASCIIFoldingFilter is missing. Character: Ʀ, Nº: 422, UTF-16: 01A6.

Source code that might need to be added to method FoldToASCII(char[] input, int inputPos, char[] output, int outputPos, int length):

case '\u01A6': // Ʀ  [LATIN LETTER YR]
    output[outputPos++] = 'R';
    break;

Links about this character: https://codepoints.net/U+01A6
https://en.wikipedia.org/wiki/%C6%A6

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 15 (7 by maintainers)

Most upvoted comments

Thanks for the report.

As this is a line-by-line port from Java Lucene 4.8.0 (for the most part), we have faithfully reproduced the ASCIIFoldingFilter in its entirety. While we have admittedly included some patches from later versions of Lucene where they affect usability (for example, Lucene.Net.Analysis.Common all came from 4.8.1), the change you are suggesting isn’t even reflected in the ASCIIFoldingFilter in the latest commit.

If you wish to pursue adding more characters to ASCIIFoldingFilter, I suggest you take it up with the Lucene design team on their dev mailing list.

However, do note this isn’t the only filter included in the box that is capable of folding characters with diacritics down to plain ASCII. Some alternatives:

  1. ICUNormalizer2Filter
  2. ICUFoldingFilter

Note that you can also create a custom folding filter by using an approach similar to the one in the ICUFoldingFilter implementation (ported from Lucene 7.1.0). There is a tool you can port to generate a .nrm binary file from modified versions of these text files. The .nrm file can then be provided to ICU4N.Text.Normalizer2 - more about the data format can be found in the ICU normalization docs. Note that the .nrm file is the same binary format used in C++ and Java.
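As a rough sketch of that last step (assuming ICU4N mirrors ICU4J’s Normalizer2.GetInstance(Stream, name, mode) factory method and that Lucene.Net.Analysis.Icu exposes ICUNormalizer2Filter; the file name custom.nrm and the helper name are made up for illustration):

using System.IO;
using ICU4N.Text;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Icu;

public static class CustomNormalization
{
    // Wraps a token stream with a filter driven by a custom-built .nrm file.
    // "custom.nrm" is a hypothetical file produced by the generator tool mentioned above.
    public static TokenStream AddCustomFolding(TokenStream input)
    {
        using (Stream data = File.OpenRead("custom.nrm"))
        {
            // Assumption: Normalizer2Mode.Compose mirrors Java's Normalizer2.Mode.COMPOSE.
            Normalizer2 normalizer = Normalizer2.GetInstance(data, "custom", Normalizer2Mode.Compose);
            return new ICUNormalizer2Filter(input, normalizer);
        }
    }
}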

Alternatively, if you wish to extend the ASCIIFoldingFilter with your own custom brew of characters, you can simply chain your own filter to ASCIIFoldingFilter as pointed out in this article.

public TokenStream GetTokenStream(string fieldName, TextReader reader)
{
    TokenStream result = new StandardTokenizer(reader);
    result = new StandardFilter(result);
    result = new LowerCaseFilter(result);
    // etc etc ...
    result = new StopFilter(result, yourSetOfStopWords);
    result = new MyCustomFoldingFilter(result);
    result = new ASCIIFoldingFilter(result);
    return result;
}

FYI - there is also another demo showing additional ways to build analyzers here: https://github.com/NightOwl888/LuceneNetDemo

@diegolaz79

Nope, it isn’t valid to use multiple tokenizers in the same Analyzer, as there are strict consuming rules to adhere to.

It would be great to build code analysis components to ensure developers adhere to these tokenizer rules while typing, such as the rule that TokenStream classes must either be sealed or use a sealed IncrementToken() method (contributions welcome). It is not likely we will add any additional code analyzers prior to the 4.8.0 release unless they are contributed by the community, though, as these are not blocking the release. For the time being, the best way to ensure custom analyzers adhere to the rules is to test them with Lucene.Net.TestFramework, which also hits them with multiple threads, random cultures, and random strings of text to ensure they are robust.

I built a demo showing how to set up testing on custom analyzers here: https://github.com/NightOwl888/LuceneNetCustomAnalyzerDemo (as well as showing how the above example fails the tests). The functioning analyzer just uses a WhiteSpaceTokenizer and ICUFoldingFilter. Of course, you may wish to add additional test conditions to ensure your custom analyzer meets your expectations, and then you can experiment with different tokenizers and adding or rearranging filters until you find a solution that meets all of your requirements (as well as plays by Lucene’s rules). And of course, you can then later add more conditions as you discover issues.

using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Icu;
using Lucene.Net.Util;
using System.IO;

namespace LuceneExtensions
{
    public sealed class CustomAnalyzer : Analyzer
    {
        private readonly LuceneVersion matchVersion;

        public CustomAnalyzer(LuceneVersion matchVersion)
        {
            this.matchVersion = matchVersion;
        }

        protected override TokenStreamComponents CreateComponents(string fieldName, TextReader reader)
        {
            // Tokenize...
            Tokenizer tokenizer = new WhitespaceTokenizer(matchVersion, reader);
            TokenStream result = tokenizer;

            // Filter...
            result = new ICUFoldingFilter(result);

            // Return result...
            return new TokenStreamComponents(tokenizer, result);
        }
    }
}

using Lucene.Net.Analysis;
using NUnit.Framework;

namespace LuceneExtensions.Tests
{
    public class TestCustomAnalyzer : BaseTokenStreamTestCase
    {
        [Test]
        public virtual void TestRemoveAccents()
        {
            Analyzer a = new CustomAnalyzer(TEST_VERSION_CURRENT);

            // removal of latin accents (composed)
            AssertAnalyzesTo(a, "résumé", new string[] { "resume" });

            // removal of latin accents (decomposed)
            AssertAnalyzesTo(a, "re\u0301sume\u0301", new string[] { "resume" });

            // removal of latin accents (multi-word)
            AssertAnalyzesTo(a, "Carlos Pírez", new string[] { "carlos", "pirez" });
        }
    }
}

For other ideas about what test conditions you may use, I suggest having a look at Lucene.Net’s extensive analyzer tests including the ICU tests. You may also refer to the tests to see if you can find a similar use case to yours for building queries (although do note that the tests don’t show .NET best practices for disposing objects).

Thanks again! Your suggestions helped me a lot!

I’m currently doing it like this:

IDictionary<string, Analyzer> myAnalyzerPerField = new Dictionary<string, Analyzer>();
myAnalyzerPerField["code"] = new WhitespaceAnalyzer(LuceneVersion.LUCENE_48);
Analyzer finalAnalyzer = new PerFieldAnalyzerWrapper(new CustomAnalyzer(LuceneVersion.LUCENE_48), myAnalyzerPerField);

The WhitespaceAnalyzer did not help with my code format (“M-12-14”, “B-10-39”, etc.), but I will try others that are more suitable.

And I am using the finalAnalyzer for indexing and search.

Thanks! I just removed the LowerCaseFilter and swapped the StandardFilter for the StopFilter, and it’s working fine with casing and diacritics searches. I still need to adjust the stop words to something more suitable for Spanish, but it’s working well like this.

FYI - There is a generic Spanish stop word list that can be accessed through SpanishAnalyzer.DefaultStopSet.
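As a hedged sketch of how that stop set could be dropped into the custom analyzer shown earlier (the class name, the filter order, and whether folding should run before or after stop word removal are all assumptions to verify against your own tests):

using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Es;
using Lucene.Net.Analysis.Icu;
using Lucene.Net.Util;

namespace LuceneExtensions
{
    // Variant of the CustomAnalyzer above that also removes Spanish stop words.
    public sealed class SpanishFoldingAnalyzer : Analyzer
    {
        private readonly LuceneVersion matchVersion;

        public SpanishFoldingAnalyzer(LuceneVersion matchVersion)
        {
            this.matchVersion = matchVersion;
        }

        protected override TokenStreamComponents CreateComponents(string fieldName, TextReader reader)
        {
            Tokenizer tokenizer = new WhitespaceTokenizer(matchVersion, reader);

            // Fold case and diacritics first, then drop stop words.
            // Note: SpanishAnalyzer.DefaultStopSet contains accented, lowercase
            // entries, so folding first means some accented stop words may no
            // longer match; adjust the order or the stop set to suit your data.
            TokenStream result = new ICUFoldingFilter(tokenizer);
            result = new StopFilter(matchVersion, result, SpanishAnalyzer.DefaultStopSet);

            return new TokenStreamComponents(tokenizer, result);
        }
    }
}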

One thing I noticed: there is a field with a format like “M-4-20” or “B-7-68” … new StringField("code", code, Field.Store.YES), but when searching that field with the above analyzer, it can’t find the dashes.

searchTerm = "*" + searchTerm + "*";
Query q = new WildcardQuery(new Term(field, searchTerm));

Is there a way to escape the dash in the term or skip analysis for that field? Thanks!

PerFieldAnalyzerWrapper applies a different analyzer to each field (example). Note you don’t necessarily have to use inline analyzers; you can also simply new up pre-constructed analyzers for each field.

If all of the data in the field can be considered a token, there is a KeywordAnalyzer that can be used to keep the entire field together.
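For example, a minimal sketch building on the PerFieldAnalyzerWrapper snippet above (whether KeywordAnalyzer is the right fit for your “code” field is an assumption worth testing):

using System.Collections.Generic;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Miscellaneous;
using Lucene.Net.Util;

// Treat the entire "code" field (e.g. "M-4-20") as a single token,
// while every other field goes through the custom folding analyzer.
IDictionary<string, Analyzer> analyzerPerField = new Dictionary<string, Analyzer>
{
    ["code"] = new KeywordAnalyzer()
};

Analyzer finalAnalyzer = new PerFieldAnalyzerWrapper(
    new CustomAnalyzer(LuceneVersion.LUCENE_48), analyzerPerField);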

Just out of curiosity, do all of your use cases work without the LowerCaseFilter?

Lowercasing is not the same as case folding (which is what ICUFoldingFilter does):

  • Lowercasing: Converts the entire string from uppercase to lowercase in the invariant culture.
  • Case folding: Folds the case while handling international special cases such as the infamous Turkish uppercase dotted i and the German “ß” (among others).

AssertAnalyzesTo(a, "Fuß", new string[] { "fuss" });  // German
AssertAnalyzesTo(a, "QUİT", new string[] { "quit" }); // Turkish

Case Mapping and Case Folding

While this might not matter for your use case, it is also worth noting that performance will be improved without the LowerCaseFilter.

In addition, search performance and accuracy can be improved by using a StopFilter with a reasonable stop word set that covers your use cases - the only reason I removed it from the demo was that the question was about removing diacritics.

@diegolaz79

My bad. It looks like the example I pulled was from an older version of Lucene. However, “Answer 2” in this link shows an example from 4.9.0, which is similar enough to 4.8.0.

// Accent insensitive analyzer
public class CustomAnalyzer : StopwordAnalyzerBase
{
    public CustomAnalyzer(LuceneVersion matchVersion)
        : base(matchVersion, StopAnalyzer.ENGLISH_STOP_WORDS_SET)
    {
    }

    protected override TokenStreamComponents CreateComponents(string fieldName, TextReader reader)
    {
        Tokenizer tokenizer = new KeywordTokenizer(reader);
        TokenStream result = new StopFilter(m_matchVersion, tokenizer, m_stopwords);
        result = new LowerCaseFilter(m_matchVersion, result);
        result = new CustomFoldingFilter(result);
        result = new StandardFilter(m_matchVersion, result);
        result = new ASCIIFoldingFilter(result);
        return new TokenStreamComponents(tokenizer, result);
    }
}

And of course, the whole idea of the last example is to implement another folding filter named CustomFoldingFilter, similar to ASCIIFoldingFilter, that adds your own folding rules and runs before ASCIIFoldingFilter.
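A minimal sketch of what such a filter might look like, using the single character from this report as its folding rule (the class name matches the example above; the Buffer/Length property names assume the Lucene.NET 4.8 ICharTermAttribute API):

using Lucene.Net.Analysis;
using Lucene.Net.Analysis.TokenAttributes;

namespace LuceneExtensions
{
    // Sealed, per the tokenizer rules mentioned earlier in this thread.
    public sealed class CustomFoldingFilter : TokenFilter
    {
        private readonly ICharTermAttribute termAtt;

        public CustomFoldingFilter(TokenStream input)
            : base(input)
        {
            termAtt = AddAttribute<ICharTermAttribute>();
        }

        public override bool IncrementToken()
        {
            if (!m_input.IncrementToken())
                return false;

            // Fold Ʀ (U+01A6, LATIN LETTER YR) to 'R' in place; ASCIIFoldingFilter
            // further down the chain handles the rest of the folding as usual.
            char[] buffer = termAtt.Buffer;
            int length = termAtt.Length;
            for (int i = 0; i < length; i++)
            {
                if (buffer[i] == '\u01A6')
                {
                    buffer[i] = 'R';
                }
            }
            return true;
        }
    }
}

Because it only rewrites characters in place, it doesn’t change token lengths or offsets; a mapping that expands one character into several would need to resize the term buffer instead.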

Alternatively, use ICUFoldingFilter, which implements UTR #30 (includes accent removal).