mongodb - Detecting concurrent data modification of a document between read and write
I'm interested in a scenario where a document is fetched from the database, some computations are run based on external conditions, one of the document's fields gets updated, and the document gets saved, in a system that might have concurrent threads accessing the DB.
To make it easier to understand, here's a simplistic example. Suppose we have the following document:
{ ... items_average: 1234, last_10_items: [10,2187,2133, ...] ... }
Suppose a new item (x) comes in; 5 things need to be done (a single-threaded sketch of these steps follows the note below):
- Read the document from the DB
- Remove the first (oldest) item in last_10_items
- Add x to the end of the array
- Re-compute the average* and save it in items_average
- Write the document back to the DB

* Note: the average computation is chosen as a simple example; the question should take into account more complex operations based on the data already existing in the document and on the new data (i.e. not solvable with the $inc operator).
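For illustration, here is a minimal single-threaded sketch of these steps, assuming a pymongo collection called stats and a document identified by doc_id (both names are hypothetical):

    from pymongo import MongoClient

    client = MongoClient()          # assumes a local MongoDB instance
    coll = client["mydb"]["stats"]  # hypothetical database/collection names

    def add_item(doc_id, x):
        # Step 1: read the document from the DB
        doc = coll.find_one({"_id": doc_id})

        # Steps 2-3: drop the oldest item, append the new one
        items = doc["last_10_items"][1:] + [x]

        # Step 4: re-compute the derived value (a plain average here, standing
        # in for an arbitrarily more complex computation)
        avg = sum(items) / len(items)

        # Step 5: write the document back -- NOT safe under concurrent access
        coll.update_one(
            {"_id": doc_id},
            {"$set": {"last_10_items": items, "items_average": avg}},
        )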
This is easy to implement in a single-threaded system, but in a concurrent system, if 2 threads follow the above steps, inconsistencies can occur: both update the last_10_items and items_average values without taking each other's concurrent changes into account, so one thread's update overwrites the other's.
So, the question is: how can such a scenario be handled? Is there a way to check, or react upon, the fact that the underlying document has changed between steps 1 and 5? Is there such a thing as WATCH in Redis, or the 'concurrent modification error' in relational DBs?
Thanks.
In a database system, this uses memory inspection and a rollback scheme similar to transactional memory.
Briefly speaking, it monitors the specified shared memory regions using compare-and-swap, load-link/store-conditional, or test-and-set. Therefore, if the memory content has changed during the transaction, it aborts and tries again until there is no conflicting operation on the shared memory.
For example, GCC implements the following:
https://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html
    type __sync_lock_test_and_set (type *ptr, type value, ...)
    type __sync_val_compare_and_swap (type *ptr, type oldval, type newval, ...)
For more info on transactional memory, see http://en.wikipedia.org/wiki/Software_transactional_memory
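To connect this back to the MongoDB question: the same compare-and-swap-then-retry idea can be applied at the document level via optimistic concurrency control. Below is a minimal sketch, assuming the document carries a hypothetical numeric version field; the write is made conditional on the version still being the one that was read, and the caller retries when the condition fails:

    from pymongo import MongoClient

    client = MongoClient()          # assumes a local MongoDB instance
    coll = client["mydb"]["stats"]  # hypothetical database/collection names

    def add_item_optimistic(doc_id, x, max_retries=10):
        for _ in range(max_retries):
            doc = coll.find_one({"_id": doc_id})
            items = doc["last_10_items"][1:] + [x]
            avg = sum(items) / len(items)

            # Conditional write: matches only if the document still carries the
            # version we read -- effectively a document-level compare-and-swap.
            result = coll.update_one(
                {"_id": doc_id, "version": doc["version"]},
                {"$set": {"last_10_items": items, "items_average": avg},
                 "$inc": {"version": 1}},
            )
            if result.modified_count == 1:
                return True   # no concurrent modification detected
            # The document changed between read and write; abort and retry.
        return False

With this pattern, a concurrent modification shows up as a failed conditional write that the caller can detect and retry, which is roughly the role WATCH plays in Redis check-and-set transactions.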