brief: | i18n.site now supports serverless full-text search.
This article introduces the implementation of a pure front-end full-text search technology, including the construction of an inverted index using IndexedDB, prefix search, word segmentation optimization, and multi-language support.
Compared to existing solutions, i18n.site's pure front-end full-text search is compact and fast, suitable for small to medium-sized websites such as documentation and blogs, and is available offline.
After several weeks of development, i18n.site (a purely static markdown multilingual translation & website-building tool) now supports pure front-end full-text search.

This article shares the technical implementation of i18n.site's pure front-end full-text search. Visit i18n.site to try the search out.

The code is open source: Search kernel / Interactive interface
For small and medium-sized purely static websites such as documentation sites and personal blogs, running a dedicated full-text search backend is too heavy; serverless full-text search is the more common choice.
Serverless full-text search solutions are divided into two main categories:
The first involves third-party search service providers like algolia.com that offer front-end components for full-text search.
Such services require payment based on search volume and are often unavailable to users in mainland China due to compliance issues.
They cannot be used offline or on intranets, and have significant limitations. This article will not elaborate further.
The second category is pure front-end full-text search.
Currently, common pure front-end full-text search tools include lunrjs and ElasticLunr.js (an extension built on top of lunrjs).

lunrjs has two ways of building an index, and both have problems.
Pre-built index files
Because the index includes all the words from the documents, it is large in size. Every time a document is added or modified, a new index file must be loaded. This increases user waiting time and consumes a significant amount of bandwidth.
Loading documents and building indexes on the fly
Building an index is a computationally intensive task, and rebuilding it with each access can cause noticeable delays, leading to a poor user experience.
In addition to lunrjs, there are other full-text search solutions, such as:

- fusejs, which searches by computing the similarity between strings. Its performance is poor, making it unsuitable for full-text search (see "Fuse.js Long query takes over 10 seconds, how to optimize?").
- TinySearch, which uses a Bloom filter for searching and therefore cannot perform prefix searches (e.g., typing goo to match good or google), so it cannot provide autocomplete.
Because of the drawbacks of existing solutions, i18n.site developed a new pure front-end full-text search with the following features:

- Small: the search kernel is only 6.9KB after gzip (for comparison, lunrjs is 25KB).
- The inverted index is stored in IndexedDB, giving low memory usage and fast performance.

The technical implementation of i18n.site is described in detail below.
Word segmentation uses the browser's built-in Intl.Segmenter, which is supported by all mainstream browsers.

The CoffeeScript code for word segmentation is as follows:
```coffee
SEG = new Intl.Segmenter 0, granularity: "word"

seg = (txt) =>
  r = []
  for {segment} from SEG.segment(txt)
    for i from segment.split('.')
      i = i.trim()
      if i and !'|`'.includes(i) and !/\p{P}/u.test(i)
        r.push i
  r

export default seg

export segqy = (q) =>
  seg q.toLocaleLowerCase()
```
Where:

- `/\p{P}/u` is a regular expression matching punctuation marks, including: `! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ { | } ~`.
- The `split('.')` is needed because Firefox's word segmenter does not split on `.`.
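For readers less familiar with CoffeeScript, here is a minimal JavaScript sketch of the same segmentation logic (the CoffeeScript above compiles to essentially this); the punctuation filtering and the extra split on `.` behave the same way:

```javascript
// Minimal JavaScript sketch of the segmentation logic above.
// Uses the built-in Intl.Segmenter, available in all mainstream browsers
// (and in Node.js for trying it out).
const SEG = new Intl.Segmenter(undefined, { granularity: "word" });

function seg(txt) {
  const r = [];
  for (const { segment } of SEG.segment(txt)) {
    // Firefox's segmenter does not split on ".", so split manually
    for (let i of segment.split(".")) {
      i = i.trim();
      // Keep the segment only if it is non-empty and not punctuation
      if (i && !"|`".includes(i) && !/\p{P}/u.test(i)) {
        r.push(i);
      }
    }
  }
  return r;
}

function segqy(q) {
  return seg(q.toLocaleLowerCase());
}
```

For example, `segqy("Hello, World!")` lowercases the query, then drops the comma, the space, and the exclamation mark, leaving only the two words.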
Five object stores are created in IndexedDB:

- word: id - word
- doc: id - document URL - document version number
- docWord: document id - array of word ids
- prefix: prefix - array of word ids
- rindex: word id - document id - array of line numbers

An array of document url / version number ver pairs is passed in, and the doc table is checked for each document. If a document is not present, an inverted index is built for it; at the same time, the inverted indexes of documents that were not passed in are removed.

This allows incremental indexing and reduces the amount of computation.
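The version check behind this incremental indexing can be sketched as a pure function. The helper below is illustrative (the name `diffDocs` and the argument shapes are assumptions, not the actual kernel API): given what the `doc` table already holds and the incoming url/ver list, it decides which documents to (re)index and which stale indexes to delete.

```javascript
// Hypothetical sketch of the incremental-indexing decision.
// `indexed`: records already in the `doc` store, e.g. { url, ver }.
// `incoming`: the [url, ver] pairs passed in on page load.
function diffDocs(indexed, incoming) {
  const have = new Map(indexed.map(d => [d.url, d.ver]));
  const keep = new Set(incoming.map(([url]) => url));
  const toIndex = [];
  for (const [url, ver] of incoming) {
    // Index documents that are new or whose version number changed
    if (have.get(url) !== ver) toIndex.push([url, ver]);
  }
  // Remove inverted indexes for documents that were not passed in
  const toRemove = indexed.filter(d => !keep.has(d.url)).map(d => d.url);
  return { toIndex, toRemove };
}
```

Only the documents in `toIndex` pay the cost of index construction; unchanged documents are skipped entirely.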
In the front-end interface, a progress bar for index loading can be displayed to avoid lag during the initial load. See "Animated Progress Bar, Based on a Single progress + Pure CSS Implementation" English / Chinese.
The project is built on idb, an asynchronous wrapper around IndexedDB.
IndexedDB reads and writes are asynchronous. When creating an index, documents are loaded concurrently to build the index.
To avoid data loss caused by concurrent writes, see the following CoffeeScript code, which adds an ing cache between read and write to intercept competing writes:
```coffee
pusher = =>
  ing = new Map()
  (table, id, val) =>
    id_set = ing.get(id)
    if id_set
      # A write for this id is already in flight: merge the value and return
      id_set.add val
      return
    id_set = new Set([val])
    ing.set id, id_set
    pre = await table.get(id)
    li = pre?.li or []
    loop
      to_add = [...id_set]
      li.push(...to_add)
      await table.put({id, li})
      # Drop what was just written; new values may have arrived during `put`
      for i from to_add
        id_set.delete i
      if not id_set.size
        ing.delete id
        break
    return

rindexPush = pusher()
prefixPush = pusher()
```
The search first segments the keywords entered by the user.
Assuming there are N words after segmentation, results matching all N keywords are returned first, followed by results matching N-1, N-2, ..., 1 keywords.

The results displayed first ensure precision, while the results loaded later (via the "Load More" button) ensure recall.
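This precision-first ordering can be sketched as follows. The helper is illustrative (not the actual kernel code): given, for each query word, the set of document ids containing it, it groups documents by how many of the N words they matched and emits the groups from "all N" down to "1".

```javascript
// Illustrative sketch of the ranking order.
// `wordDocIds`: one array of document ids per query word.
function rankByMatchCount(wordDocIds) {
  // Count, per document, how many of the query words it contains
  const count = new Map();
  for (const ids of wordDocIds) {
    for (const id of ids) count.set(id, (count.get(id) || 0) + 1);
  }
  // Emit groups: all-N matches first, then N-1, ..., down to 1
  const groups = [];
  for (let n = wordDocIds.length; n >= 1; n--) {
    const g = [...count].filter(([, c]) => c === n).map(([id]) => id);
    if (g.length) groups.push({ matched: n, docs: g.sort() });
  }
  return groups;
}
```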
To improve response speed, the search uses a yield generator to load results on demand, returning results after each query of limit entries.

Note that after each yield, a new IndexedDB query transaction must be opened for the next query.
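The on-demand loading can be sketched with a generator. This is a simplified illustration: `fetchBatch` stands in for one IndexedDB query transaction (in the real kernel a fresh transaction is opened after each yield, which is mocked away here).

```javascript
// Simplified sketch of on-demand result loading with a generator.
// `fetchBatch(offset, limit)` stands in for one IndexedDB query transaction.
function* pagedSearch(fetchBatch, limit) {
  let offset = 0;
  while (true) {
    const batch = fetchBatch(offset, limit);
    if (!batch.length) return;   // no more results
    yield batch;                 // hand one page of results to the UI
    offset += batch.length;
  }
}
```

The caller pulls the next page with `.next()` only when needed, e.g. when the "Load More" button is clicked, so no work is done for results the user never asks for.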
To display search results in real time as the user types (for example, showing words such as words and work that begin with wor as soon as wor is entered), the search kernel looks up the last segmented word in the prefix table, finds all words with that prefix, and searches them in turn.
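Expanding the last keyword through the prefix table can be sketched like this. The Maps stand in for the IndexedDB object stores, and the function name and shapes are illustrative:

```javascript
// Illustrative sketch of prefix-based autocomplete lookup.
// `wordId`: Map from full word to word id (stands in for the `word` store).
// `prefixTable`: Map from prefix to ids of all words starting with it
// (stands in for the `prefix` store).
function queryWordIds(wordId, prefixTable, words) {
  const last = words[words.length - 1];
  // All words except the last must match exactly
  const exact = words.slice(0, -1)
    .map(w => wordId.get(w))
    .filter(id => id !== undefined);
  // The last, possibly half-typed, word is expanded via the prefix table
  const expanded = prefixTable.get(last) || [];
  return { exact, expanded };
}
```

Each id in `expanded` is then searched in turn against the rindex store, which is what makes results appear while the user is still typing.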
A debounce function (implemented as follows) is used in the front-end interaction to reduce the frequency of searches triggered by user input and thus the computational load.
```js
export default (wait, func) => {
  var timeout;
  return function(...args) {
    clearTimeout(timeout);
    timeout = setTimeout(func.bind(this, ...args), wait);
  };
};
```
The index tables store only words, not the original text, which reduces storage space.

Highlighting search results requires reloading the original text, and a service worker avoids repeated network requests for it.

Also, because the service worker caches all articles, the entire website, including search, becomes available offline once a search has been performed.
The pure front-end search solution provided by i18n.site is optimized for MarkDown documents. When search results are displayed, the chapter name is shown, and clicking it navigates to that chapter.
A pure front-end implementation of inverted full-text search, with no server required, is well suited to small and medium-sized websites such as documentation sites and personal blogs.

i18n.site's open-source, self-developed pure front-end search is compact and responsive; it addresses the shortcomings of existing pure front-end full-text search solutions and provides a better user experience.