MDEV-28730 Remove internal parser usage from InnoDB fts #4443
Conversation
In addition to the CI failures needing correcting, does this mean …

Great to see the parser going away.
Force-pushed 04ec1e9 to ff6a64d
dr-m
left a comment
Here are some quick initial comments.
dtuple_t *clust_tuple= row_build_row_ref(ROW_COPY_DATA, sec_index,
                                         sec_rec, m_heap);
Why not ROW_COPY_POINTERS? At least add a comment about that.
The use of ROW_COPY_DATA still has not been explained. As far as I understand, clust_tuple is only being used inside this function, so ROW_COPY_POINTERS should work just as well.
Which tables can this be invoked on? Can we add any assertions on the table or index name to document that?
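To make the trade-off behind this review comment concrete, here is a minimal, self-contained C++ sketch (not InnoDB code; all names are illustrative) of the difference between a ROW_COPY_DATA-style copy and a ROW_COPY_POINTERS-style alias: the alias costs nothing to build, but is only valid while the source record buffer stays unchanged and pinned, whereas the copy survives the source.

```cpp
// Illustrative sketch (not InnoDB code) of ROW_COPY_DATA vs
// ROW_COPY_POINTERS semantics. `heap` stands in for the mem_heap_t
// that would own the copied bytes.
#include <cassert>
#include <cstring>
#include <deque>
#include <string>

enum RowBuildMode { COPY_DATA, COPY_POINTERS };

struct FieldRef { const char* data; size_t len; };

// Builds a row reference from a record buffer in the chosen mode.
inline FieldRef build_ref(const char* rec, size_t len, RowBuildMode mode,
                          std::deque<std::string>& heap)
{
  if (mode == COPY_POINTERS)
    return {rec, len};               // alias: no allocation, no copy
  heap.emplace_back(rec, len);       // copy: outlives the source buffer
  return {heap.back().c_str(), len};
}
```

A copied reference still reads the original bytes after the source buffer is reused; the aliasing reference silently follows whatever is now in the buffer, which is why a pointer-based reference is only safe while the underlying page or record is guaranteed stable.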
Force-pushed b672350 to 53f237a
Introduce QueryExecutor to perform direct InnoDB record scans with a callback interface and consistent-read handling. It also handles basic DML operations on the clustered index of the table.
Newly added files: row0query.h & row0query.cc.
The QueryExecutor class has the following APIs:
read(): iterate the clustered index with a RecordCallback
read_by_index(): scan a secondary index and fetch the clustered row
lookup_clustered_record(): resolve the PK from a secondary record
process_record_with_mvcc(): build the record version via the read view and skip deletes
insert_record(): insert a tuple into the table's clustered index
select_for_update(): lock the record that matches search_tuple
update_record(): update the currently selected and X-locked clustered record
delete_record(): delete the clustered record identified by the tuple
delete_all(): delete all clustered records in the table
replace_record(): try an update via select_for_update() + update_record(); if the record is not found, run insert_record()
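The callback-driven scan pattern that read() is described to follow can be sketched in a few lines of self-contained C++. Everything here is illustrative (Record, RecordCallback, scan() are stand-ins, not the actual InnoDB types): the executor walks the index, skips delete-marked records the way an MVCC read would, and lets the callback abort early.

```cpp
// Hypothetical sketch of a callback-driven index scan; names and
// types are illustrative, not the real InnoDB interfaces.
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

struct Record { uint64_t doc_id; bool deleted; };

// Mirrors the RecordCallback idea: return false to stop the scan.
using RecordCallback = std::function<bool(const Record&)>;

// Hands every visible (non-delete-marked) record to the callback,
// returning how many records were delivered.
inline size_t scan(const std::vector<Record>& index, const RecordCallback& cb)
{
  size_t visited = 0;
  for (const Record& r : index)
  {
    if (r.deleted)          // skip delete-marked records, as MVCC reads do
      continue;
    ++visited;
    if (!cb(r))             // the callback can abort the scan early
      break;
  }
  return visited;
}
```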
Add FTSQueryExecutor class as a thin abstraction over QueryExecutor.
This class takes care of open, lock, read, insert and delete operations
for all auxiliary tables INDEX_[1..6] and the common FTS tables
(DELETED, DELETED_CACHE, BEING_DELETED, CONFIG..)
The FTSQueryExecutor class has the following functions:
Auxiliary table functions : insert_aux_record(), delete_aux_record(),
read_aux(), read_aux_all()
FTS common table functions : insert_common_record(), delete_common_record(),
delete_all_common_records(), read_all_common()
FTS CONFIG table functions : insert_config_record(), update_config_record(),
delete_config_record(), read_config_with_lock()
Introduce the CommonTableReader callback to collect doc_id_t values from the fulltext common tables (DELETED, BEING_DELETED, DELETED_CACHE, BEING_DELETED_CACHE). These tables share the same schema structure. Simplified all functions that use these tables: they now use executor.insert_common_record(), delete_common_record() and delete_all_common_records() instead of SQL or query graphs. fts_table_fetch_doc_ids(): changed the signature of the function to pass the table name instead of fts_table_t.
Introduce the ConfigReader callback to extract <key, value> pairs from the fulltext CONFIG common table. This table has a <key, value> schema. Simplified all functions that use the CONFIG table: they now use executor.insert_config_record() and update_config_record() instead of SQL or query graphs.
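The ConfigReader idea above amounts to a per-row callback that accumulates <key, value> pairs from a CONFIG-style scan. A minimal sketch, assuming illustrative stand-in types (ConfigRow, scan_config, and the sample keys are hypothetical, not the real InnoDB interfaces):

```cpp
// Hypothetical sketch of a ConfigReader-style callback collecting
// <key, value> rows from a CONFIG-like table scan.
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

using ConfigRow = std::pair<std::string, std::string>;

struct ConfigReader
{
  std::map<std::string, std::string> values;

  // Invoked once per scanned row; return true to keep scanning.
  bool operator()(const ConfigRow& row)
  {
    values[row.first] = row.second;
    return true;
  }
};

// Stand-in for an executor read over the <key, value> table.
inline void scan_config(const std::vector<ConfigRow>& table, ConfigReader& r)
{
  for (const ConfigRow& row : table)
    if (!r(row))
      break;
}
```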
Introduce AuxCompareMode and AuxRecordReader to scan FTS auxiliary
indexes with compare+process callbacks.
Replace legacy SQL-graph APIs with typed executor-based ones:
- Add fts_index_fetch_nodes(trx, index, word, user_arg,
  FTSRecordProcessor, compare_mode).
- Redefine fts_write_node() to use FTSQueryExecutor and fts_aux_data_t.
  Implement the write path via delete_aux_record() or insert_aux_record().
  Keep lock-wait retry handling and memory limit checks.
Change fts_select_index{,_by_range,_by_hash} return type
from ulint to uint8_t and simplify return flow.
Include fts0exec.h in fts0priv.h and update declarations accordingly.
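The fts_select_index return-type change above (ulint to uint8_t) works because the selector only ever yields a small partition number: one of the auxiliary index tables INDEX_1..INDEX_6. A self-contained sketch (the bucketing rule here is a hypothetical stand-in; the real selectors use per-charset boundaries and hashes):

```cpp
// Illustrative sketch of why uint8_t suffices as the return type of an
// auxiliary-index selector: the result is always a small partition number.
#include <cassert>
#include <cstdint>

constexpr uint8_t FTS_NUM_AUX_INDEX = 6; // INDEX_1..INDEX_6

// Hypothetical selector: bucket a word by its first byte. The real
// fts_select_index{,_by_range,_by_hash} use charset-aware rules.
inline uint8_t select_aux_index(uint8_t first_byte)
{
  return uint8_t(first_byte % FTS_NUM_AUX_INDEX); // always 0..5
}
```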
Refactor fetch/optimize to QueryExecutor and standardize the processor API. Replaced legacy SQL-graph paths with QueryExecutor-based reads/writes:
- fts_query code now uses QueryExecutor::read() and read_by_index() with RecordCallback, updating fts_query_match_document(), fts_query_is_in_proximity_range() and fts_expand_query() to call fts_query_fetch_document() instead of fts_doc_fetch_by_doc_id(), which was removed along with the FTS_FETCH_DOC_BY_DOC_ID_* macros.
- Rewrote fts_optimize_write_word() to delete or insert via FTSQueryExecutor::delete_aux_record()/insert_aux_record() using fts_aux_data_t.
- Removed the fts0sql.cc file.
- Removed commented-out fts functions.
- Removed fts_table_t from fts_query_t and fts_optimize_t.
fts_optimize_table(): assigns thd to the transaction whether it is called via a user thread or the fulltext optimize thread.
Force-pushed 53f237a to edabb01
dr-m
left a comment
Here are some more comments. The error propagation is better now, but I would like to see some more effort to avoid the number of dict_sys.latch acquisitions. This should be tested as well, in a custom benchmark.
Even though we are adding quite a bit of code, I was pleasantly surprised that the size of an x86-64 CMAKE_BUILD_TYPE=RelWithDebInfo executable would increase by only 20 KiB. I believe that removing the InnoDB SQL parser (once some more code has been refactored) would remove more code than that.
trx_t* trx= trx_create();
trx->op_info= "fetching FTS index nodes";
for (;;)
{
  FTSQueryExecutor executor(trx, index, index->table);
  AuxRecordReader reader(words, &total_memory);
  if (word->f_str == nullptr)
    error= executor.read_aux_all((uint8_t) selected, reader);
Can we create executor and reader outside the loop? And test the constant condition !word->f_str outside the loop?
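The hoisting dr-m asks for can be shown with a self-contained sketch (Tracked and retry_hoisted are illustrative stand-ins, not the PR's types): an object whose construction is loop-invariant is built once, outside the retry loop, no matter how many attempts the loop makes.

```cpp
// Minimal sketch of hoisting loop-invariant construction out of a
// retry loop; all names are illustrative.
#include <cassert>

// Counts constructions so we can verify the hoisting.
struct Tracked
{
  static inline int constructions = 0;
  Tracked() { ++constructions; }
};

// Retry loop with the invariant object hoisted: one construction,
// regardless of the number of attempts.
inline int retry_hoisted(int attempts)
{
  Tracked executor;               // built once, reused on each retry
  int tries = 0;
  for (int i = 0; i < attempts; ++i)
    ++tries;                      // stand-in for a read that may retry
  return tries;
}
```

The same reasoning applies to the constant condition !word->f_str: testing it once outside the loop selects the scan variant up front instead of re-evaluating it on every retry.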
if (UNIV_LIKELY(error == DB_SUCCESS ||
                error == DB_RECORD_NOT_FOUND))
{
  fts_sql_commit(trx);
  if (error == DB_RECORD_NOT_FOUND) error = DB_SUCCESS;
What is the reason for committing and re-starting the transaction after each iteration? Is it one transaction per fetched row?
Here, the second if had better be removed. A blind assignment error= DB_SUCCESS should be shorter and incur less overhead. It is basically just zeroing out a register.
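The equivalence dr-m points out is easy to see in isolation: inside a branch already guarded by (err == OK || err == NOT_FOUND), the conditional reset collapses to a blind assignment. A sketch with illustrative names:

```cpp
// Sketch of the review point: under the guard, the conditional
// `if (err == NOT_FOUND) err = OK;` is equivalent to a blind `err = OK;`,
// which is shorter and is basically just zeroing a register.
#include <cassert>

enum Err { OK, NOT_FOUND, FAIL };

inline Err normalize(Err err)
{
  if (err == OK || err == NOT_FOUND)
    err = OK;   // blind assignment: both admissible values map to OK
  return err;
}
```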
  break;
}
else
{
The else after break; is redundant.
fts_sql_rollback(trx);
if (error == DB_LOCK_WAIT_TIMEOUT)
{
  ib::warn() << "Lock wait timeout reading FTS index. Retrying!";
  trx->error_state = DB_SUCCESS;
}
else
{
  ib::error() << "Error occurred while reading FTS index: " << error;
  break;
Which index and table are we reading? Why are we not disclosing the name of the index or the table?
Please, let’s avoid using ib::logger::logger in any new code, and invoke sql_print_error or sql_print_warning directly.
Is this code reachable? How would a lock wait timeout be possible?
Can this ever be a locking read? When and why would it need to be one? After all, as the code stands now, we are committing the transaction (and releasing any locks) after every successful iteration. Hence, there will be no consistency guarantees on the data that we are reading.
"Auxiliary table" in the function comment is inaccurate. Can we be more specific? Is this always reading entries from a partition of an inverted index? Which functions can write these tables? (What are the potential conflicts?)
Do we even need a transaction object here, or would a loop around btr_cur_t suffice?
if (total_memory >= fts_result_cache_limit)
  error= DB_FTS_EXCEED_RESULT_CACHE_LIMIT;
Shouldn’t this condition be checked inside the loop? Shouldn’t an executor member function return this error?
{
  m_mtr.commit();
  return err;
}
This is duplicating some earlier code in the function. goto would be a lesser evil.
      dberr_t err= process_record_with_mvcc(
        table, clust_index, clust_rec, clust_offsets, callback,
        continue_processing);
      if (err != DB_SUCCESS)
      {
        m_mtr.rollback_to_savepoint(savepoint, savepoint + 1);
        return err;
      }
      match_count++;
    }
  }
  m_mtr.rollback_to_savepoint(savepoint, savepoint + 1);
  return clust_err;
Some code duplication could be avoided here:
      clust_err= process_record_with_mvcc(
        table, clust_index, clust_rec, clust_offsets, callback,
        continue_processing);
      if (clust_err == DB_SUCCESS)
        match_count++;
    }
  }
  m_mtr.rollback_to_savepoint(savepoint);
  return clust_err;

In fact, I believe that if btr_pcur_open() returned an error, we might not have anything added to the memo. In that case, the m_mtr.rollback_to_savepoint(savepoint) would be a no-op, and m_mtr.rollback_to_savepoint(savepoint, savepoint + 1) would be incorrect.
    rec_offs *offsets, RecordCallback &callback,
    bool &continue_processing) noexcept
{
  bool is_deleted= rec_get_deleted_flag(rec, dict_table_is_comp(table));
It should be faster to check rec_offs_comp(offsets).
/* Verify this is the exact record we want */
if (!cmp_dtuple_rec(clust_tuple, clust_rec, clust_index, clust_offsets))
{
  dberr_t err= process_record_with_mvcc(
    table, clust_index, clust_rec, clust_offsets, callback,
    continue_processing);
As far as I understand, clust_tuple only contains the PRIMARY KEY. The purpose of an MVCC read should be to find a clustered index record that matches the secondary index record. But, no sec_rec is being passed here.
Description
Remove internal parser/SQL-graph usage and migrate FTS paths to QueryExecutor
Introduced QueryExecutor (row0query.{h,cc}) and FTSQueryExecutor abstractions for
clustered, secondary scans and DML.
Refactored fetch/optimize code to use QueryExecutor::read(), read_by_index()
with RecordCallback, replacing SQL graph flows
Added CommonTableReader and ConfigReader callbacks for common/CONFIG tables
Implemented fts_index_fetch_nodes(trx, index, word, user_arg, FTSRecordProcessor, compare_mode)
and rewrote fts_optimize_write_word() to delete/insert via executor with fts_aux_data_t
Removed fts_doc_fetch_by_doc_id() and FTS_FETCH_DOC_BY_ID_* macros, updating callers to
fts_query_fetch_document()
Tightened fts_select_index{,_by_range,_by_hash} return type to uint8_t.
Removed fts0sql.cc and eliminated fts_table_t from fts_query_t/fts_optimize_t.
Release Notes
Removed the SQL parser usage from the fulltext subsystem.
How can this PR be tested?
For QA purposes, run RQG testing involving the fulltext subsystem.
Basing the PR against the correct MariaDB version: main branch.