Major Objects

Overview

  • Major Objects
    • Use Major Objects for fast, thread-safe in-memory handling of large amounts of data that must also be persisted
    • We must support complex objects with simple keys, CRUD operations, and fast lookup by multiple keys.
    • Our most useful containers are vector, set (key stored in the object) and map (<key,value> pairs). A set can give us almost every positive feature when used to store the PersistentIDObject class.
    • Use an unordered_set of const pointers to objects derived from PersistentIDObject
    • The default container should index by db_id primary key
    • Always use the db_id for foreign keys
    • Other containers can be created with alternate keys using object members; just define new hash functions.
  • PersistentIDObject
    • Add a dirty flag to all objects, set to true on any change that must be persisted
    • Use an internal in-memory counter to generate the next db_id for a newly created object
    • This means that when creating new objects, there is NO NEED to access db, VERY IMPORTANT!
    • Use delayed-write tactics to write all dirty objects on idle time
  • Memory Model
    • Use a Datastore manager (aka "MemoryModel") to hold sets
    • It can look up objects by any key, and strip away const to return a mutable object. NOTE that the user must not damage the key values!
    • Derive a class from the memory model for persistence; it can use any persistence method (local, remote, sql, nosql, etc.).
    • Make sure that the base MemoryModel class is concrete (not abstract), thread-safe and self-contained; this makes parallel calculations trivial, helps scalability, etc. (a minimal sketch of these pieces follows this list)
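
A minimal sketch of these pieces in C++, assuming hypothetical names: PersistentIDObject, MemoryModel and db_id come from the notes above, while User, the hash functors and every member name are illustrative guesses, not the real Bitpost classes.

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <string>
    #include <unordered_set>

    class PersistentIDObject
    {
    public:
        // New objects take the next id from the in-memory counter - no db access needed.
        PersistentIDObject() : db_id_(next_db_id_++), b_dirty_(true), b_deleted_(false) {}
        // Used when loading an existing object, or to build a key-only "probe" for lookups.
        explicit PersistentIDObject(int64_t db_id) : db_id_(db_id), b_dirty_(false), b_deleted_(false) {}
        virtual ~PersistentIDObject() {}

        int64_t db_id() const   { return db_id_; }
        bool bDirty() const     { return b_dirty_; }
        bool bDeleted() const   { return b_deleted_; }
        void setDirty()         { b_dirty_ = true; }
        void setDeleted()       { b_deleted_ = true; b_dirty_ = true; }

    private:
        static int64_t next_db_id_;     // in practice, seeded from the highest persisted id at startup
        int64_t db_id_;
        bool b_dirty_;
        bool b_deleted_;
    };
    int64_t PersistentIDObject::next_db_id_ = 1;

    // An example Major Object with an alternate key (name_), which must not be
    // changed while the object sits in the by-name index.
    class User : public PersistentIDObject
    {
    public:
        explicit User(const std::string& name) : name_(name) {}
        explicit User(int64_t db_id) : PersistentIDObject(db_id) {}     // key-only probe
        const std::string& name() const { return name_; }
    private:
        std::string name_;
    };

    // Hash/equality pairs: the primary index keys on db_id, the secondary on name.
    struct UserHashById   { std::size_t operator()(const User* p) const { return std::hash<int64_t>()(p->db_id()); } };
    struct UserEqById     { bool operator()(const User* a, const User* b) const { return a->db_id() == b->db_id(); } };
    struct UserHashByName { std::size_t operator()(const User* p) const { return std::hash<std::string>()(p->name()); } };
    struct UserEqByName   { bool operator()(const User* a, const User* b) const { return a->name() == b->name(); } };

    class MemoryModel
    {
    public:
        void addUser(const User* p)
        {
            users_by_id_.insert(p);
            users_by_name_.insert(p);
        }

        // Primary-key lookup; const is stripped so the caller can mutate non-key fields.
        User* findUser(int64_t db_id)
        {
            User probe(db_id);
            auto it = users_by_id_.find(&probe);
            return (it == users_by_id_.end()) ? nullptr : const_cast<User*>(*it);
        }

    private:
        std::unordered_set<const User*, UserHashById, UserEqById>     users_by_id_;
        std::unordered_set<const User*, UserHashByName, UserEqByName> users_by_name_;
    };

An alternate-key index is just another unordered_set over the same pointers with a different hash/equality pair, which is why callers that receive a mutable object back must never touch the key members.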

Delayed delete pattern

           1) to dynamically delete an object: 
               ba.setDeleted();
           2) include deleted status in active check, etc.:
                // NOTE: call the direct function rather than negating the other (!bFunc()), as deleted objects return false for both.
               bool bActive() const        { return b_active_ && !bDeleted();  }
               bool bInactive() const      { return !b_active_ && !bDeleted(); }
           3) all deletion work is done in MemoryModel::saveDirtyObjectsAsNeeded(), see that code
               a) deletion check should happen in delayed write check:
                   if (pau->bDirtyOrDeleted())
                       bNeeded = true;
               b) if bNeeded, always do deletions first, starting with greatest grandparent container, to minimize work
               c) use the erase-remove pattern to remove all deleted items in one loop
                   see code here for reference implementation: BrokerAccount::removeDeletedStockRuns()
                      i) iterate each secondary index and remove the item from it
                      ii) iterate the primary index, and use the lambda of the erase-remove operation to free the memory allocation and remove the db record
                      iii) elements of associative containers can be safely erased directly via their iterators;
                           sequential containers like vector require the erase-remove idiom
                           see the reference implementation, and the sketch after this list, for example code
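
A sketch of the sweep described in c), with hypothetical details: BrokerAccount::removeDeletedStockRuns() is the real reference implementation named above, but StockRun's members, runs_, runs_by_symbol_ and deleteDbRecord() are assumptions made for illustration.

    #include <algorithm>
    #include <map>
    #include <string>
    #include <vector>

    // Simplified stand-in for a PersistentIDObject-derived Major Object.
    struct StockRun
    {
        long        db_id = 0;
        std::string symbol;
        bool        b_deleted = false;
        bool bDeleted() const { return b_deleted; }
    };

    // Stand-in for the persistence layer (e.g., DELETE FROM ... WHERE id = ?).
    void deleteDbRecord(long /*db_id*/) {}

    class BrokerAccount
    {
    public:
        void removeDeletedStockRuns()
        {
            // i) remove deleted items from all secondary indices first; associative
            //    containers let us erase directly through the iterator.
            for (auto it = runs_by_symbol_.begin(); it != runs_by_symbol_.end(); )
            {
                if (it->second->bDeleted())
                    it = runs_by_symbol_.erase(it);     // erase() returns the next valid iterator
                else
                    ++it;
            }

            // ii) sweep the primary index with the erase-remove idiom; the lambda
            //     frees the allocation and removes the db record as each deleted
            //     item is encountered, then erase() drops the leftover slots.
            runs_.erase(
                std::remove_if(runs_.begin(), runs_.end(),
                    [](StockRun* p)
                    {
                        if (!p->bDeleted())
                            return false;
                        deleteDbRecord(p->db_id);   // remove the persisted record
                        delete p;                   // free the in-memory allocation
                        return true;
                    }),
                runs_.end());
        }

    private:
        std::vector<StockRun*>                 runs_;              // primary index, owns the objects
        std::multimap<std::string, StockRun*>  runs_by_symbol_;    // secondary index, non-owning
    };

Order matters here: the secondary indices drop their non-owning pointers first, so the primary-index sweep is the only place that frees memory and touches the db.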