BitSet.and(java.util.BitSet).
BitSet.andNot(java.util.BitSet).
f matching dates on or after date.
f matching times on or after time.
? and * don't get removed from the search terms.
BooleanQuery.add(Query, BooleanClause.Occur) instead:
IndexWriter.getAnalyzer().
Field.
Field.
scorerQueue.
f matching dates on or before date.
f matching times on or before time.
n bits.
name in Directory d, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String) method.
BooleanQuery.getMaxClauseCount() clauses.
BrazilianAnalyzer.BRAZILIAN_STOP_WORDS).
IndexInput.
IndexOutput.
Filter.bits(org.apache.lucene.index.IndexReader).
CJKAnalyzer.STOP_WORDS.
Filters to be chained.
CzechAnalyzer.CZECH_STOP_WORDS).
bit to zero.
overlap / maxOverlap.
Directory.createOutput(String)
query
Integer.MAX_VALUE.
IndexWriter.DEFAULT_MAX_BUFFERED_DOCS instead
QueryParser.AND_OPERATOR instead
QueryParser.OR_OPERATOR instead
DateTools instead. For existing indices you can continue using this class, as it will not be removed in the near future despite being deprecated.
RangeFilter combined with DateTools.
f matching dates between from and to inclusively.
f matching times between from and to inclusively.
Encoder implementation that does not modify the output
ConjunctionScorer.
DisjunctionScorer.
DisjunctionScorer, using one as the minimum number of matching subscorers.
Document from a File.
DutchAnalyzer.DUTCH_STOP_WORDS).
IndexModifier.deleteDocuments(Term) instead.
IndexModifier.deleteDocument(int) instead.
IndexReader.deleteDocument(int docNum) instead.
IndexReader.deleteDocuments(Term term) instead.
docNum.
docNum.
term.
term.
docNum.
i.
t.
term.
term.
n-th Document in this index.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
o is equal to this.
\.
\.
doc scored against weight.
Searcher.explain(Weight, int) instead.
doc scored against query.
Directory as a directory of files.
Field.Field(String, String, Field.Store, Field.Index) instead
Field.Field(String, String, Field.Store, Field.Index, Field.TermVector) instead
FilterIndexReader contains another IndexReader, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
TermDocs implementations.
TermEnum implementations.
TermPositions implementations.
Highlighter class.
FrenchAnalyzer.FRENCH_STOP_WORDS).
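The FilterIndexReader entry above describes a decorator: the wrapper holds another reader, delegates to it, and overrides only what it changes. A minimal plain-Java sketch of that pattern, under the assumption of a simplified illustrative interface (this is not Lucene's actual FilterIndexReader API):

```java
// Illustrative decorator sketch; the DocReader interface is hypothetical,
// standing in for Lucene's much larger IndexReader API.
interface DocReader {
    String document(int n);
    int maxDoc();
}

class SimpleReader implements DocReader {
    private final String[] docs;
    SimpleReader(String... docs) { this.docs = docs; }
    public String document(int n) { return docs[n]; }
    public int maxDoc() { return docs.length; }
}

// Wraps another DocReader, transforming data on the way through.
class UpperCaseReader implements DocReader {
    private final DocReader in;
    UpperCaseReader(DocReader in) { this.in = in; }
    public String document(int n) { return in.document(n).toUpperCase(); }
    public int maxDoc() { return in.maxDoc(); } // delegated unchanged
}

public class Demo {
    public static void main(String[] args) {
        DocReader r = new UpperCaseReader(new SimpleReader("hello", "world"));
        System.out.println(r.document(0)); // HELLO
    }
}
```

Subclasses override only the methods whose behavior they change; everything else falls through to the wrapped reader.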
minimumSimilarity to term.
FuzzyQuery(term, minimumSimilarity, 0).
FuzzyQuery(term, 0.5f, 0).
reader which share a prefix of length prefixLength with term and which have a fuzzy similarity > minSimilarity.
GERMAN_STOP_WORDS).
GermanStemFilter.GermanStemFilter(org.apache.lucene.analysis.TokenStream, java.util.Set) instead.
true if bit is one and false if it is zero.
field to see if it contains integers, floats or strings, and then calls one of the other methods in this class to get the values.
HtmlDocument object.
field and calls the given SortComparator to get the sort values.
Document from an InputStream.
IndexReader.getFieldNames(IndexReader.FieldOption)
IndexReader.getFieldNames(IndexReader.FieldOption)
MultiFieldQueryParser.getFieldQuery(String, String)
QueryParser.getFieldQuery(String, String)
QueryParser.getFieldQuery(String, String, int)
QueryParser.getFieldQuery(String,String).
PrecedenceQueryParser.getFieldQuery(String,String).
Fields with the given name.
field as floats and returns an array of size reader.maxDoc() of the value each document has in the given field.
field as floats and returns an array of size reader.maxDoc() of the value each document has in the given field.
MultiFieldQueryParser.getFuzzyQuery(String, String, float)
QueryParser.getFuzzyQuery(String, String, float)
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
PrecedenceQueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
IndexReader this searches.
IndexReader.getFieldNames(IndexReader.FieldOption)
IndexReader.getFieldNames(IndexReader.FieldOption)
field as integers and returns an array of size reader.maxDoc() of the value each document has in the given field.
field as integers and returns an array of size reader.maxDoc() of the value each document has in the given field.
QueryParser.getLowercaseExpandedTerms() instead
maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.
QueryParser.getDefaultOperator() instead
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
PrecedenceQueryParser.getWildcardQuery(java.lang.String, java.lang.String)).
MultiFieldQueryParser.getRangeQuery(String, String, String, boolean)
QueryParser.getRangeQuery(String, String, String, boolean)
Searchables this searches.
field and returns an array of them in natural order, along with an array telling which element in the term array each document uses.
field and returns an array of size reader.maxDoc() containing the value each document has in the given field.
SynonymTokenFilter.
HtmlDocument object.
WordlistLoader.getWordSet(File) instead
WordlistLoader.getWordSet(File) instead
WordlistLoader.getWordSet(File) instead
WordlistLoader.getWordSet(File) instead
WordlistLoader.getWordSet(File) instead
WordlistLoader.getWordSet(File) instead
HighFreqTerms class extracts terms and their frequencies out of an existing Lucene index.
Fragmenter, Scorer, Formatter, Encoder and tokenizers.
HitIterator to provide a lazily loaded hit from Hits.
Hits that provides lazy fetching of each document.
HtmlDocument class creates a Lucene Document from an HTML document.
HtmlDocument from a File.
HtmlDocument from an InputStream.
Directory.
path.
path.
d.
IndexInput or BufferedIndexInput instead.
log(numDocs/(docFreq+1)) + 1.
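The formula in the entry above is Lucene's classic inverse document frequency: log(numDocs/(docFreq+1)) + 1, which weights rare terms more heavily than common ones. A stand-alone sketch of the arithmetic (plain Java, not the Similarity API itself):

```java
public class IdfDemo {
    // Classic idf from the entry above: log(numDocs/(docFreq+1)) + 1.
    // A term appearing in few documents (low docFreq) scores higher
    // than a term appearing in nearly every document.
    static float idf(int docFreq, int numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }

    public static void main(String[] args) {
        // In a 1000-document index: a term in 9 docs vs. one in 999 docs.
        System.out.println(idf(9, 1000));   // log(100) + 1 ≈ 5.605
        System.out.println(idf(999, 1000)); // log(1) + 1 = 1.0
    }
}
```

Note the +1 in the denominator guards against division by zero when docFreq is 0, and the +1 added to the log keeps the factor positive even for ubiquitous terms.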
true if the lower endpoint is inclusive
true if the upper endpoint is inclusive
true if an index exists at the specified directory.
true if an index exists at the specified directory.
true if an index exists at the specified directory.
getTerms at which the term with the specified term appears.
indexOf(int) but searches for a number of terms at the same time.
IndexWriter.setInfoStream(java.io.PrintStream) instead
Similarity.coord(int,int) is disabled in scoring for this query instance.
true if the range query is inclusive
true iff the index in the named directory is currently locked.
true iff the index in the named directory is currently locked.
IndexReader.getTermFreqVector(int,String).
Character.isLetter(char).
Character.isWhitespace(char).
Character.isLetter(char).
HitIterator to navigate the Hits.
Field(name, value, Field.Store.YES, Field.Index.UN_TOKENIZED) instead
Field(name, value, Field.Store.YES, Field.Index.UN_TOKENIZED) instead
org.apache.lucene.lockDir or java.io.tmpdir system property
fieldName matching less than or equal to upperTerm.
1/sqrt(numTerms).
a is less relevant than b.
Directory implementation that uses mmap for input.
fieldName matching greater than or equal to lowerTerm.
MultiFieldQueryParser.MultiFieldQueryParser(String[], Analyzer) instead
MultiFieldQueryParser.MultiFieldQueryParser(String[], Analyzer) instead
MultiFieldQueryParser.MultiFieldQueryParser(String[], Analyzer) instead
MultiPhraseQuery.add(Term[]).
Searchables.
Query that matches documents containing a subset of terms provided by a FilteredTermEnum enumeration.
term.
MultipleTermPositions here.
MultipleTermPositions instance.
HtmlDocument on the files specified on the command line.
SimpleAnalyzer.
SimpleAnalyzer.
Lock.
Lock with the specified name.
Lock.
StopFilter.makeStopSet(String[]) instead.
StopFilter.makeStopSet(java.lang.String[], boolean) instead.
BooleanQuery.setMaxClauseCount(int) instead
IndexWriter.setMaxFieldLength(int) instead
IndexWriter.setMaxMergeDocs(int) instead
IndexWriter.setMergeFactor(int) instead
IndexWriter.setMaxBufferedDocs(int) instead
"\\W+"; Divides text at non-letters (Character.isLetter(c))
Fragmenter implementation which does not fragment the text.
Hit instance representing the next hit in Hits.
Character.isLetter(char).
BitSet.or(java.util.BitSet).
IndexOutput or BufferedIndexOutput instead.
Directory.openInput(String)
TokenFilter and Analyzer implementations that use Snowball stemmers.
Searchables.
Reader, that can flexibly separate text into terms via a regular expression Pattern (with behaviour identical to String.split(String)), and that combines the functionality of LetterTokenizer, LowerCaseTokenizer, WhitespaceTokenizer, StopFilter into a single efficient multi-purpose class.
MultiPhraseQuery instead
prefix.
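The analyzer described above combines regex splitting, lowercasing, and stop-word removal in one pass. The same pipeline can be sketched with the JDK alone (the class and method names here are illustrative, not Lucene's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.regex.Pattern;

public class RegexTokenizeDemo {
    // Split on the given pattern (same behaviour as String.split(String)),
    // lowercase each token, and drop stop words -- the combined pipeline
    // the entry above describes.
    static List<String> tokenize(String text, Pattern pattern, Set<String> stopWords) {
        List<String> tokens = new ArrayList<>();
        for (String t : pattern.split(text)) {
            String lower = t.toLowerCase();
            if (!lower.isEmpty() && !stopWords.contains(lower)) {
                tokens.add(lower);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        Pattern nonWord = Pattern.compile("\\W+"); // divide at non-word chars
        Set<String> stop = Set.of("the", "a", "of");
        System.out.println(tokenize("The Quick Fox, a friend of Dogs", nonWord, stop));
        // [quick, fox, friend, dogs]
    }
}
```

Swapping in "\\s+" as the pattern gives whitespace tokenization instead, matching the two predefined patterns listed elsewhere in this index.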
QueryParser.parse(String) instead, but note that it returns a different query for queries where all terms are required: its query expects all terms, no matter in what field they occur, whereas the query built by this (deprecated) method expects all terms in all fields at the same time.
MultiFieldQueryParser.parse(String, String[], BooleanClause.Occur[], Analyzer) instead
MultiFieldQueryParser.parse(String[], String[], BooleanClause.Occur[], Analyzer) instead
QueryParser.parse(String) method instead.
Query.
Query.
BooleanClause.setOccur(BooleanClause.Occur) instead
query.
Scorer implementation which scores text fragments by the number of unique query terms found.
BooleanClause.setQuery(Query) instead
1/sqrt(sumOfSquaredWeights).
Directory implementation.
Directory.
RAMDirectory instance from a different Directory implementation.
RAMDirectory instance from the FSDirectory.
RAMDirectory instance from the FSDirectory.
IndexOutput implementation.
lowerTerm but less than upperTerm.
term.
ReqExclScorer.
ReqOptScorer.
BooleanClause.setOccur(BooleanClause.Occur) instead
Lock.With.doBody() while lock is obtained.
NumberTools.longToString(long)
Similarity that delegates all methods to another.
Fragmenter implementation which breaks text up into same-size fragments with no concerns over spotting sentence boundaries.
Encoder implementation to escape text for HTML output
Formatter implementation to highlight terms with a pre and post tag
StandardTokenizer with StandardFilter, LowerCaseFilter, StopFilter and SnowballFilter.
field then by index order (document number).
field then by index order (document number).
AUTO).
AUTO).
match whose end position is less than or equal to end.
include which have no overlap with spans from exclude.
RegexQuery allowing regular expression queries to be nested within other SpanQuery subclasses.
StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words.
StandardAnalyzer.STOP_WORDS).
StandardTokenizer.
StopFilter.StopFilter(TokenStream, Set) instead
StopFilter.StopFilter(TokenStream, Set) instead
SynExpand.expand(...)).
Searcher.search(Weight, Filter, HitCollector) instead.
Searcher.search(Weight, Filter, int) instead.
Searcher.search(Weight, Filter, int, Sort) instead.
query.
query and filter.
query sorted by sort.
query and filter, sorted by sort.
TermEnum.
bit to one.
b.
GermanStemFilter.setExclusionSet(java.util.Set) instead.
IndexModifier.getMaxFieldLength() is reached will be printed to this.
QueryParser.setLowercaseExpandedTerms(boolean) instead
QueryParser.setDefaultOperator(QueryParser.Operator) instead
RegexCapabilities implementation is used by this instance.
field then by index order (document number).
field possibly in reverse, then by index order (document number).
1 / (distance + 1).
timeToString or dateToString back to a time, represented as a Date object.
NumberTools.longToString(long) back to a long.
timeToString or dateToString back to a time, represented as the number of milliseconds since January 1, 1970, 00:00:00 GMT.
n within its sub-index.
n in the array used to construct this searcher.
TermFreqVector to provide additional information about positions in which each of the terms is found.
t.
Field(name, value, Field.Store.YES, Field.Index.TOKENIZED) instead
Field(name, value, Field.Store.YES, Field.Index.TOKENIZED, storeTermVector) instead
Field(name, value) instead
Field(name, value, storeTermVector) instead
HitCollector implementation that collects the top-scoring documents, returning them as a TopDocs.
HitCollector implementation that collects the top-sorting documents, returning them as a TopFieldDocs.
term.
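The HitCollector/TopDocs entries above describe collecting only the top-scoring documents out of a potentially huge result stream. The standard technique is a bounded min-heap of size k; a stand-alone sketch under that assumption (class and method names are illustrative, not Lucene's API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class TopKCollector {
    // Keep only the k best (doc, score) pairs seen so far.
    // A min-heap of size k lets us evict the weakest hit in O(log k),
    // so memory stays O(k) no matter how many documents match.
    private final int k;
    private final PriorityQueue<float[]> heap; // {doc, score}, weakest score on top

    TopKCollector(int k) {
        this.k = k;
        this.heap = new PriorityQueue<>(k, Comparator.comparingDouble(a -> a[1]));
    }

    void collect(int doc, float score) {
        if (heap.size() < k) {
            heap.add(new float[] {doc, score});
        } else if (score > heap.peek()[1]) {
            heap.poll();                         // evict current weakest hit
            heap.add(new float[] {doc, score});
        }
    }

    // Return the collected hits best-first, like a TopDocs result.
    List<float[]> topDocs() {
        List<float[]> out = new ArrayList<>(heap);
        out.sort((a, b) -> Float.compare(b[1], a[1]));
        return out;
    }
}
```

Each scored document is fed through collect(); hits weaker than the current k-th best are discarded immediately rather than buffered.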
TermDocs enumerator.
term.
TermPositions enumerator.
sqrt(freq).
field as the default field for terms.
StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.
StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.
tokenStream(String, String) and is less efficient than tokenStream(String, String).
Field(name, value, Field.Store.YES, Field.Index.NO) instead
Field(name, value, Field.Store.NO, Field.Index.TOKENIZED) instead
Field(name, value, Field.Store.NO, Field.Index.TOKENIZED, storeTermVector) instead
"\\s+"; Divides text at whitespaces (Character.isWhitespace(c))
WildcardTermEnum.
WordlistLoader instead
name in Directory d, in a format that can be read by the constructor BitVector.BitVector(Directory, String).
BitSet.xor(java.util.BitSet).