A Study of Lucene's Scoring Mechanism

First, we need to study Lucene's score computation formula:

score(q, d) = coord(q, d) × queryNorm(q) × Σ_{t in q} ( tf(t in d) × idf(t)² × boost(t.field in d) × lengthNorm(t.field in d) )

That is, the score is the sum, over each term t in the query q, of the match score between t and document d, adjusted by the weighting factors. The meaning of each factor is given in the table below:

Table 3.5  Factors in the scoring formula

| Scoring factor | Description |
| --- | --- |
| tf(t in d) | Term frequency factor: the frequency with which term t appears in document d. |
| idf(t) | Inverse document frequency, a measure of how "unique" the term is. Terms that appear in many documents get a low idf; rarely occurring terms get a high idf. |
| boost(t.field in d) | Field and document boost, set at indexing time. You can use it to statically weight an individual field or document. |
| lengthNorm(t.field in d) | Normalization value of the field, reflecting the number of terms it contains. It is computed at indexing time and stored in the index norms. Shorter fields (fewer tokens) receive a larger boost from this factor. |
| coord(q, d) | Coordination factor, based on how many of the query terms the document contains. It gives an AND-like boost to documents containing more of the search terms. |
| queryNorm(q) | Normalization value for the query, given the sum of the squared weights of the query terms. |

You can inspect exactly how a document's score is composed via the Searcher.explain(Query query, int doc) method. Example:

```java
// Lucene 2.x-era API (Hits, Field.Index.TOKENIZED)
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

public class ScoreSortTest {
    public final static String INDEX_STORE_PATH = "index";

    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter(INDEX_STORE_PATH, new StandardAnalyzer(), true);
        writer.setUseCompoundFile(false);

        Document doc1 = new Document();
        Document doc2 = new Document();
        Document doc3 = new Document();

        Field f1 = new Field("bookname", "bc bc", Field.Store.YES, Field.Index.TOKENIZED);
        Field f2 = new Field("bookname", "ab bc", Field.Store.YES, Field.Index.TOKENIZED);
        Field f3 = new Field("bookname", "ab bc cd", Field.Store.YES, Field.Index.TOKENIZED);

        doc1.add(f1);
        doc2.add(f2);
        doc3.add(f3);

        writer.addDocument(doc1);
        writer.addDocument(doc2);
        writer.addDocument(doc3);

        writer.close();

        IndexSearcher searcher = new IndexSearcher(INDEX_STORE_PATH);
        TermQuery q = new TermQuery(new Term("bookname", "bc"));
        q.setBoost(2f);
        Hits hits = searcher.search(q);
        for (int i = 0; i < hits.length(); i++) {
            Document doc = hits.doc(i);
            System.out.print(doc.get("bookname") + "\t\t");
            System.out.println(hits.score(i));
            System.out.println(searcher.explain(q, hits.id(i))); // print the score explanation
        }
    }
}
```

Run output:

```
bc bc    0.629606
0.629606 = (MATCH) fieldWeight(bookname:bc in 0), product of:
  1.4142135 = tf(termFreq(bookname:bc)=2)
  0.71231794 = idf(docFreq=3, numDocs=3)
  0.625 = fieldNorm(field=bookname, doc=0)

ab bc    0.4451987
0.4451987 = (MATCH) fieldWeight(bookname:bc in 1), product of:
  1.0 = tf(termFreq(bookname:bc)=1)
  0.71231794 = idf(docFreq=3, numDocs=3)
  0.625 = fieldNorm(field=bookname, doc=1)

ab bc cd    0.35615897
0.35615897 = (MATCH) fieldWeight(bookname:bc in 2), product of:
  1.0 = tf(termFreq(bookname:bc)=1)
  0.71231794 = idf(docFreq=3, numDocs=3)
  0.5 = fieldNorm(field=bookname, doc=2)
```

The relevant source code:

Computing idf

idf, the inverse document frequency, measures how rare a term is across the index. It is computed as follows:

```java
/** Implemented as <code>log(numDocs/(docFreq+1)) + 1</code>. */
@Override
public float idf(long docFreq, long numDocs) {
  return (float)(Math.log(numDocs/(double)(docFreq+1)) + 1.0);
}
```

docFreq is the number of documents retrieved when searching for the given term; in our test docFreq = 3, since all three documents contain "bc". numDocs is the total number of documents in the index, which is also 3 in our test. The quick check below verifies the resulting idf against the explain output.
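A minimal sketch (plain Java, just the formula above applied to the values from our test index) to confirm the number:

```java
public class IdfCheck {
    public static void main(String[] args) {
        long docFreq = 3, numDocs = 3;  // values from the test index above
        // log(numDocs / (docFreq + 1)) + 1
        float idf = (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
        System.out.println(idf);  // 0.71231794, matching idf(docFreq=3, numDocs=3) in the output
    }
}
```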

Computing queryNorm

queryNorm is implemented in the DefaultSimilarity class, as shown below:

```java
/** Implemented as <code>1/sqrt(sumOfSquaredWeights)</code>. */
public float queryNorm(float sumOfSquaredWeights) {
    return (float)(1.0 / Math.sqrt(sumOfSquaredWeights));
}
```

Here, sumOfSquaredWeights is computed by the sumOfSquaredWeights method of the org.apache.lucene.search.TermQuery.TermWeight class:

```java
public float sumOfSquaredWeights() {
    queryWeight = idf * getBoost();     // compute query weight
    return queryWeight * queryWeight;   // square it
}
```

In fact, sumOfSquaredWeights = idf * idf by default, because the default boost in Lucene is 1.0. In our example we set boost = 2, so sumOfSquaredWeights = (idf × 2)² and queryNorm = 1 / (idf × 2). The boost therefore cancels out of the final score, which is why each score printed above is exactly the fieldWeight value from its explanation; the sketch below makes this concrete.
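A minimal sketch (plain Java using the idf value from the explain output, not Lucene API calls) showing the cancellation on the query side:

```java
public class QueryNormCheck {
    public static void main(String[] args) {
        float idf = 0.71231794f;   // from the explain output
        float boost = 2f;          // set via q.setBoost(2f)
        float sumOfSquaredWeights = (idf * boost) * (idf * boost);
        float queryNorm = (float) (1.0 / Math.sqrt(sumOfSquaredWeights)); // = 1 / (idf * boost)
        System.out.println(idf * boost * queryNorm); // ~1.0: boost and idf cancel on the query side
    }
}
```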

Computing fieldWeight

In the explainScore method of org/apache/lucene/search/similarities/TFIDFSimilarity.java we find:

```java
// explain field weight
Explanation fieldExpl = new Explanation();
fieldExpl.setDescription("fieldWeight in "+doc+
                         ", product of:");

Explanation tfExplanation = new Explanation();
tfExplanation.setValue(tf(freq.getValue()));
tfExplanation.setDescription("tf(freq="+freq.getValue()+"), with freq of:");
tfExplanation.addDetail(freq);
fieldExpl.addDetail(tfExplanation);
fieldExpl.addDetail(stats.idf);

Explanation fieldNormExpl = new Explanation();
float fieldNorm = norms != null ? decodeNormValue(norms.get(doc)) : 1.0f;
fieldNormExpl.setValue(fieldNorm);
fieldNormExpl.setDescription("fieldNorm(doc="+doc+")");
fieldExpl.addDetail(fieldNormExpl);

fieldExpl.setValue(tfExplanation.getValue() *
                   stats.idf.getValue() *
                   fieldNormExpl.getValue());

result.addDetail(fieldExpl);
```

The key statement is:

```java
fieldExpl.setValue(tfExplanation.getValue() *
                   stats.idf.getValue() *
                   fieldNormExpl.getValue());
```

Expressed as a formula:

fieldWeight = tf * idf * fieldNorm
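Plugging in the numbers from the first hit's explanation confirms this (a plain-Java check, with all values taken from the output above):

```java
public class FieldWeightCheck {
    public static void main(String[] args) {
        float tf = (float) Math.sqrt(2);  // termFreq("bc") = 2 in doc 0 ("bc bc")
        float idf = 0.71231794f;          // idf(docFreq=3, numDocs=3)
        float fieldNorm = 0.625f;         // decoded norm of the bookname field
        System.out.println(tf * idf * fieldNorm); // ~0.629606, the printed score of "bc bc"
    }
}
```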

The computation of tf and idf was covered above. fieldNorm is fixed at indexing time and is simply read back from the index files here; this method does not compute it directly. With DefaultSimilarity it is in effect the lengthNorm: the longer the field, the smaller the norm. Its computation is in org/apache/lucene/search/similarities/DefaultSimilarity.java:

```java
public float lengthNorm(FieldInvertState state) {
    final int numTerms;
    if (discountOverlaps)
        numTerms = state.getLength() - state.getNumOverlap();
    else
        numTerms = state.getLength();
    return state.getBoost() * ((float) (1.0 / Math.sqrt(numTerms)));
}
```
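Note that lengthNorm for the two-term field "bc bc" is 1/√2 ≈ 0.7071, yet the explain output shows fieldNorm = 0.625. The gap comes from the norm being compressed into a single byte when written to the index, which loses precision on decode. A minimal sketch, assuming Lucene's org.apache.lucene.util.SmallFloat (the helper DefaultSimilarity uses for this encoding) is on the classpath:

```java
import org.apache.lucene.util.SmallFloat;

public class NormEncodingCheck {
    public static void main(String[] args) {
        float twoTerms = (float) (1.0 / Math.sqrt(2));   // lengthNorm of "bc bc"    = 0.70710677
        float threeTerms = (float) (1.0 / Math.sqrt(3)); // lengthNorm of "ab bc cd" = 0.57735026
        // round-trip through the single-byte index encoding
        System.out.println(SmallFloat.byte315ToFloat(SmallFloat.floatToByte315(twoTerms)));   // 0.625
        System.out.println(SmallFloat.byte315ToFloat(SmallFloat.floatToByte315(threeTerms))); // 0.5
    }
}
```

This is why docs 0 and 1 both show fieldNorm = 0.625 while the three-term doc 2 shows 0.5.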

