This is difficult because the cost of counting is the same as the cost of performing the search. The point of counts is to let us order the patterns in complex searches more efficiently, but if getting a count takes as long as running the search itself, it is of no use.
I am thinking it could make sense to add an LRU cache that stores the count from the last time a pattern was searched. The cached count may become stale, but it would at least give us some data for ordering the patterns in a query.
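
A rough sketch of what that cache could look like, assuming patterns are plain `{ subject, predicate, object }` objects like the ones in the query example below and that variables (e.g. `graph.v('id')`) are not plain strings; the `CountCache` name and its key scheme are only illustrative, not a proposal for the actual API:

```js
class CountCache {
  constructor (max = 1000) {
    this.max = max
    this.map = new Map() // insertion order doubles as recency order
  }

  // Collapse variables to '?' so the key depends only on the fixed terms;
  // assumes variables are non-string values.
  key (pattern) {
    return ['subject', 'predicate', 'object']
      .map(k => (typeof pattern[k] === 'string' ? pattern[k] : '?'))
      .join('|')
  }

  get (pattern) {
    const k = this.key(pattern)
    if (!this.map.has(k)) return undefined
    const count = this.map.get(k)
    this.map.delete(k) // refresh recency
    this.map.set(k, count)
    return count
  }

  // Record the number of matches observed the last time this pattern ran.
  set (pattern, count) {
    const k = this.key(pattern)
    this.map.delete(k)
    this.map.set(k, count)
    if (this.map.size > this.max) {
      // evict the least recently used entry (oldest insertion)
      this.map.delete(this.map.keys().next().value)
    }
  }
}
```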
Think about queries of the type:
```js
graph.search([
  { subject: graph.v('id'), predicate: 'rdf:type', object: 'a:CommonType' }, // matches 1000 things
  { subject: graph.v('id'), predicate: 'a:name', object: '"a unique value"' } // matches 1 thing
])
```

At the moment this search fetches all 1000 matches for the first pattern and checks each one until it finds the single match for the second.
Approximate counts would solve this for many cases.
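
As a sketch of how they could be used (hypothetical `orderByCount` and `countCache` names, not anything that exists today), the patterns could simply be sorted by cached or approximate count before the join runs, so the most selective pattern is resolved first:

```js
// Sort patterns so the one expected to match the fewest triples runs first;
// patterns with no cached count are tried last.
function orderByCount (patterns, countCache) {
  return patterns.slice().sort((a, b) => {
    const ca = countCache.get(a)
    const cb = countCache.get(b)
    if (ca === undefined && cb === undefined) return 0
    if (ca === undefined) return 1
    if (cb === undefined) return -1
    return ca - cb
  })
}

// With ~1000 cached for the rdf:type pattern and 1 for the a:name pattern,
// the unique-value pattern in the example above would be evaluated first.
```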