A better Trader
[https://abettertrader.com LIVE SITE]  --  [http://localhost:8080 DEV SITE]


[https://bitpost.com/files/AbT_docs/ Generated Documentation]


== [[A better Trader testing]] ==
 
== Use Case ==
 
    ALL WE ASK OF USERS IS TO WATCH THE NEWS AND...
   
        -----------------------------------------------------
        BE  INTUITIVE  ABOUT  A  STOCK'S  NEXT  DIRECTION
        -----------------------------------------------------
   
    IF A PICK IS LOOKING GOOD:      let it ride; actions: buy; reset the bracket to hold it longer
    IF A PICK IS LOOKING BAD:       delete it - there are more fish in the sea, get OUT
    IF AN OWNED STOCK IS LOOKING GOOD:  let it ride; actions: sell; hold
    IF AN OWNED STOCK IS LOOKING BAD:   let it ride; actions: sell and drop, raise bracket
    REGULARLY ADD AND REMOVE PICKS
   
    THAT'S THE DAILY USAGE PATTERN
    we must do everything else! 
    all that boring analysis should be done for them, unless they really want to obsess
 
== Design ==
 
=== Events ===
 
* Types
** this account's snapshots
** significant highlights (across all accounts and cycles); NOTE: to keep this performant, we only use snapshots (all types) to find these highlights, so snapshots should happen on every major change
** news (TBD)
* Provide more detail for monthly-or-below, and less for above-monthly
* The server consolidates data into all "significant" events for the timeline
* The user can filter events on the client for readability in a small UI, but they are always available for the given timeframe so filter changes can be quickly applied
 
==== Monthly ====
 
===== Realtime events =====
* All user account snapshot events
* "significant" changes to any account or cycle
 
(MORE TODO)
 
===== Daily-batched events =====
(MORE TODO)
 
==== Above-Monthly ====
* Only provide account SELLS.  Easy to query, plenty of data.
(MORE TODO)
 
== Patterns ==
 
=== Basics ===
 
* Load from SQL/nosql tables into [[Major Objects]]
* Use a tight single-threaded message loop with async timers that set booleans when it is time to perform tasks (see the sketch after this list)
* Offload heavy automated analysis to after-hours external analyzers, with the goal of applying yesterday's best fit to today
* Server should provide minimal concise data to client, and client Javascript should do all UI rendering work.
** Traditionally, we would inject const variables into html
** Best practice but more of a refactor: make three separate calls to server to get html, javascript, and data.  The html and javascript become cacheable static files.
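A minimal sketch of that loop pattern, under stated assumptions: the flag names are illustrative, while processQuotes() and saveDirtyObjects() are the real task names used elsewhere on this page.
<pre>
#include <atomic>
#include <chrono>
#include <thread>

void processQuotes();      // see QUOTE PROCESSING below
void saveDirtyObjects();   // see MemoryModel under MODEL below

// Async timers only flip these booleans; the single worker thread checks and
// clears them, so the task code itself never needs locks.
std::atomic<bool> g_b_quotes_due{false};
std::atomic<bool> g_b_save_due{false};

void messageLoop()
{
    for (;;)
    {
        if (g_b_quotes_due.exchange(false)) processQuotes();
        if (g_b_save_due.exchange(false))   saveDirtyObjects();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
</pre>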
 
=== Object model ===
 
JSON schema is used to generate base data classes and database read/write code, to provide the most agile schema refactoring.  Follow these patterns to keep it consistent:
 
* use a constructor with defaults for all parameters:
    // This constructor serves several purposes:
    //  1) standard full-param constructor, efficient for both deserializing and initializing
    //  2) no-param constructor for reflection via quicktype
    //  3) id constructor for loading via id + quicktype fields
    //  4) id constructor for use as key for unordered_set::find()
    BrokerAccount(
        int64_t db_id = PersistentIDObject::DBID_DO_NOT_SAVE,     
        int64_t aar_add_arca_enabled = -1,
        ...)
    :
        // Call base class
        inherited(ba_max_db_id_,db_id),
        // internal members
        ...
    {
        // persistent members
        ...
    }
 
* There are three use cases for new objects:
** objects about to be loaded - use constructor params, and load in a value for db_id_
** new objects that should be made persistent - track a max_db_id in the parent to provide the "next" db_id constructor parameter
** temporary objects - use the default constructor
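A hedged illustration of those three cases against the BrokerAccount constructor above; row_db_id and the ++ba_max_db_id_ idiom are assumptions, not verbatim code:
<pre>
// 1) about to be loaded: pass the id stored in the db row, then load the rest
BrokerAccount ba_loaded(row_db_id);

// 2) new persistent object: the parent tracks a max db id and hands out the next one
BrokerAccount* pba_new = new BrokerAccount(++ba_max_db_id_);

// 3) temporary object (e.g. a key for unordered_set::find()): defaults, never saved
BrokerAccount ba_key;
</pre>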
 
* use addXXXToMemory() to init parent-child relationships
    StockRun& BrokerAccount::addStockRunToMemory(StockRun* psr, bool bInsertNewRank)
        psr->setParent(*this);
        runsByRank_.push_unsorted(psr);
        runs_.insert(psr);
 
    BrokerAccount& AppUser::addBrokerAccountToMemory(BrokerAccount* pba)
        // We need a valid db id.
        assert(pba->db_id_ != PersistentIDObject::DBID_UNSAVED);
        // The caller is responsible for ensuring the account doesn't exist.
        assert(findBrokerAccount(pba->db_id_) == accounts_.end());
        pba->setParent(*this);
        accounts_.insert(pba);
        accountsByStringId_.insert(pba);
        return *pba;
 
* use pointers for parent objects
    set once and only once, on load
    use accessor functions that return a reference to use those objects
    example:
    // EXTERNAL REFERENCES
    // NOTE we are not responsible for these allocations.
    // Access pointers to parents as references.
    // Access nullable pointers directly.
    void setParent(AppUser& au) { pau_ = &au; }
    AppUser& au() { assert(pau_ != nullptr); return *pau_; }
    const AppUser& au() const { assert(pau_ != nullptr); return *pau_; }
 
* squash 1:1 contained members into parent
    StockRun (Cycle):
        flatten these:
            Stock stock_;
            StockPick sp_;
            AutotradedStock as_;
 
* store contained containers/vectors in separate tables, and fill in secondary pass
    StockRun (Cycle):
        BracketEvents sbe_;
 
=== Web UI ===
 
* bootstrap header and footer
* possible subsection navbar (eg Accounts, Admin) that sticks to top with bootstrap header
* forms:
** For the simplest forms, just use inputs and a button, and capture the click in js.  Don't use <form> as it is hard to prevent the default behavior.
** For any substantial multi-field form, use helpers in at.js
* tables: we define tables entirely in JSON and pop them up with bootstrap-table, see AnalysisData getJSON, getJSONColumnNames
* ajax: see at.js
* patch can provide partial JSON to do partial updates (don't touch fields that are not provided); see the sketch after this list
* dates: moment.js - need to convert d3 date functions to moment
* money: accounting.js
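For the patch behavior, a hypothetical server-side sketch, assuming an nlohmann::json-style body; the StockRun members shown are stand-ins:
<pre>
#include <cstdint>
#include <nlohmann/json.hpp>

// Minimal stand-in; the real StockRun has rank/active (see MODEL below).
struct StockRun { int64_t rank_; bool bActive_; };

// Only fields present in the PATCH body are applied; everything else is untouched.
void patchStockRun(StockRun& sr, const nlohmann::json& body)
{
    if (body.contains("rank"))   sr.rank_    = body["rank"].get<int64_t>();
    if (body.contains("active")) sr.bActive_ = body["active"].get<bool>();
    // ...one guarded assignment per patchable field
}
</pre>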
 
== Documentation ==
 
=== Performance Tracking ===
 
We measure three types of performance:
* SR (StockRun): as a stock cycles, it tracks the %gain on each buy-sell; each cycle may involve a different number of shares, but aggregating %change-per-cycle should have value
* BA (BrokerAccount): this is where we can say "at time T1 we had value X; at time T2 we had value Y" and get precise gains.
* AD (AnalysisData): used across many cycles and accounts, and needs aggregation similar to SR.
 
* SR and AD performance should be avg-percentage based as there is no base "value" like with BA
** use an "average gain per buy-sell cycle" => d_avg_pct_gain_[GTT_COUNT] + sells_count_[GTT_COUNT]
 
* cycle stopsells may need closer inspection
** track stopsells; perhaps red-flag at 1 stopsell, then bail on 2?
** cycle stopsell should (eventually?) flag the aps as in critical need of an update via reanalysis of recent history; rerun analysis, then reset the stopsell to zero
* track performance of ad
** nothing to do with need to rerank or reanalyze
** but just so we learn over time what the best metaranges are
** we want to keep working in this area, expanding as it makes sense
** eg separate stocks' volatility by price, volume, market...etc.
 
=== [https://bitpost.com/files/AbT_docs/index.html Doxygen and SchemaSpy diagrams] ===
 
SchemaSpy was used on SQLite relationships to generate a nice [https://bitpost.com/files/AbT_docs/schemaspy/output/sqlite/relationships.html foreign key map].
 
Doxygen shows class diagrams.  Here are some central relationships:
 
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_persistent_object.html PersistentObject] class hierarchy shows all persistent classes
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_at_http_server.html AtHttpServer] has call graphs for functions like [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_at_http_server.html#abf8c8e1e8d1d0ba417470d6298e804cc GetAccount]
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_broker_account.html BrokerAccount] [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_broker_account.html#ae8315bd08d9f69cedaef04ca4d1d1c97 ProcessQuote] call hierarchy
 
=== MODEL ===
 
AtController
  ui_
  memory_model_
  timers
MemoryModel: delayed-write datastore manager; use dirty flags + timed, transactioned saveDirtyObjects() call
  prefs
  sq_
  brokers
  aps_
  users_
TradingModel
  AppUser
    BrokerAccount:
      runs_ (sorted by id)
      runsByRank_
          contains all cycles sorted by rank
          ONLY MANUAL has valid rank values
          but additional sort criteria are used to handle all cycles
          see StockRunsByRank_lessthan()         
      broker (not worthy of its own layer)
      StockRun: rank, active, owned
        StockPick+AutotradedStock: quote processing
        SPASBracketEvent: stores one bracket-change event; includes triggering quote, bActive, buy/sell {quantity, commission, value}
      StockSnapshot: run, symbol, quote, quantity (for account snapshot history)
Analysis data
  StockQuote
    db only has time+symbol+price 
    memory also has p_aad
  AutoAnalysisData
    symbol
    aps_id
    stopsells
    profitable_sells 
  StockRun
    aps
  AutotradeParameterSet
    bracket params
    many analysis vars - move these out to aad   
  BracketEvent
    contains all details about any possible event
    for report and analysis we have a tighter version:
      typedef vector<RunQAB> RunHistory
      RunQAB is just quote+time + optional Bracket ptr
Order lifespan
  Sim and analysis buy: place order, wait for next stock quote, buyWasExecuted()
  Live buy: place order, poll for execution, buyWasExecuted()
 
Stock model
        Stock
          StockQuote& quote_;
            typedef std::pair<const string,StockQuoteDetails*> StockQuote;
              StockQuoteDetails
                double d_quote_;
                time_t timestamp_;
                (+spike logic)
          int64_t n_quantity_;
          StockOrder so_;
            int64_t order_id_;
            ORDER_STATUS status_;
            int64_t quote_db_id_;
 
=== QUOTE PROCESSING ===
 
    AlpacaQuotesWss::startWss() on_message for EVERY T QUOTE WE GET
      AlpacaQuotesWss::processQuote
        quotes_.push_back(q);
    AlpacaInterface::getQuotes()
      if (p_quotes_wss_->drainQuotesQueue(latest_quotes))
        return vetAndProcessQuotes(latest_quotes);
        ---
        patc_->mm().processQuote(
    ^^^ all that does nothing but provide raw quotes, no worries there
    bool MemoryModel::processQuote
      bool bReset = !sqd.bValid() || sqd.resetBracketsAtMarketOpen_;
      if ( bRealtime() ) bProcess = sqd.addToSpikeHistory(dQuote, timestamp);
      if ( bReset || bProcess )
        ---
        pa->processQuote(sq, bReset);
    BrokerAccount::processQuote(
      // skip quote if APS is not ready!
      if (!psr->aps().bIsReady() && au().mm().bRealtime()) continue;
      if (broker().bSimulation() && psr->bBuyOrderPending())
        psr->executeBuyOnQuote();
      else if (broker().bSimulation() && psr->bSellOrderPending())
        psr->executeSellOnQuote();
      else if (psr->processQuoteForBuy(sq))
        vsrToBuy.push_back(psr);
      else if (psr->processQuoteForSell(sq))
        vsrToSell.push_back(psr);
      for (StockRun *psr : vsrToSell)
        sell(*psr);
    for (StockRun *psr : vsrToBuy)
      // Try to buy until we're full!
      if (bMaxOpenOrders() || !buy(*psr))
        psr->resetBracketOnDelayedBuyAsNeeded();
    ---
    next processing is here:
      StockRun::processQuoteForBuy
        if (!bUnowned() || bOrderUnresolved()) return false;
        if (handlePickInstaspike()) log
        else if (bBounceType())
      ---
      bool StockRun::handlePickInstaspike()
        if (
              sq().second->bSpikeFlatUp(aps().spike_protection_percent_)
          || sq().second->bSpikeFlatDown(aps().spike_protection_percent_))
          return true; // Ignore it.
      ---
      bSpikeFlatUp
        return ((quote_ - d_last_quote_) / quote_ > dPct);
      bSpikeFlatDown
        return (abs(d_last_quote_ - d_last_last_quote_) / d_last_quote_ < 0.002) && bSpikeDown(dPct);
      bSpikeDown
        return ((d_last_quote_ - quote_) / d_last_quote_ > dPct);
    ---
    and here:
      StockRun::processQuoteForSell
        if (handleOwnedInstaspike()) // log-and-drop
        ---
        StockRun::handleOwnedInstaspike()
          if (sq().second->bSpikeUpDown) // reset to last-last
          else if (sq().second->bSpikeFlatDown) // ignore it
        ---
        bSpikeUpDown
          return ((d_last_quote_ - d_last_last_quote_) / d_last_quote_ > dPct) && bSpikeDown(dPct) && (abs(quote_ - d_last_quote_) / quote_ < 0.002);
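For reference, a condensed restatement of the spike predicates above (no new logic): quote_ is the incoming quote, d_last_quote_ and d_last_last_quote_ the two before it.
<pre>
#include <cmath>

struct SpikeWindow
{
    double quote_, d_last_quote_, d_last_last_quote_;

    // big drop from the previous quote to this one
    bool bSpikeDown(double dPct) const
    { return (d_last_quote_ - quote_) / d_last_quote_ > dPct; }

    // big jump up from the previous quote to this one
    bool bSpikeFlatUp(double dPct) const
    { return (quote_ - d_last_quote_) / quote_ > dPct; }

    // prior step was flat (< 0.2%), then a hard drop: likely a bad tick, ignore
    bool bSpikeFlatDown(double dPct) const
    { return std::fabs(d_last_quote_ - d_last_last_quote_) / d_last_quote_ < 0.002
          && bSpikeDown(dPct); }

    // up hard, then back down to flat: a one-quote spike, reset to last-last
    bool bSpikeUpDown(double dPct) const
    { return (d_last_quote_ - d_last_last_quote_) / d_last_quote_ > dPct
          && bSpikeDown(dPct)
          && std::fabs(quote_ - d_last_quote_) / quote_ < 0.002; }
};
</pre>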
 
=== DEBUG INTO REST API HANDLERS ===
 
break on server_http.hpp ln~370: if(REGEX_NS::regex_match(request->path, sm_res, regex_method.first)) {
watch regex_method.first.str
watch request->path
 
=== CI ===
 
MASTER SCRIPT: atci
 
We will have a live site, a constantly running CI site, and multiple dev environments.
 
RUN LIVE at bitpost.com:
m@bitpost rs at
m@bitpost # if that doesn't work, start a session: screen -S at
m@bitpost cd ~/development/thedigitalage/AbetterTrader/server-prod
m@bitpost atlive
  ========================================================
    *** LIVE MODE ***
  ========================================================
CTRL-A CTRL-D
 
RUN CI at bitpost.com:
# Keep this running to ensure that changes are dynamically built as they are committed
# It should run at a predictable publicly available url that can be checked regularly
# It runs in TEST but should run in a test mode that has an account assigned to it so it is very much like LIVE
# It runs release build in test mode
CTRL-A CTRL-D
 
RUN DEV anywhere but bitpost:
# Dev has complete control; most common tasks:
#  Code fast with a local CI loop - as soon as a file is changed, CI should restart server in test mode, displaying server log and [https://addons.mozilla.org/en-US/firefox/addon/auto-reload/ refreshing server page]
#      kill server, build, run, refresh browser
#  Turn off CI loop to debug via IDE
#  Stop prod, pull down production database, run LIVE mode in debugger to diagnose production problems
 
 
==== Old notes from `at readme` ====
<pre>
TODO NEEDS WORK!!
 
1) Set up a new dev environment: at setup
This function defines continuous integration steps for our project.
See: https://bitpost.com/wiki/Continuous_Integration
 
1) Continuous Integration environment
    There should only be one official CI repository.
    The CI repository should be a clone of the central bare .git repo.
    The central bare .git repo should have a git post-receive hook that pushes all code to the CI repo as soon as it is received.
    [atci cwatch] will watch for receipt of new code pushes, and stop+rebuild+import+restart the CI server in test mode.
    The script will copy production data, massaged to run in test mode.
 
2) Get ci status
    [atci cconsole] will restore the screen of the running ci server
    [atci cstatus] TODO TOTHINK: will be called via php via ajax from the main development webpage (bitpost.com)
        to query the CI server and report its one-line status 24/7.  Should include build/run/test status.
 
3) Development environment
    During "fast" code development:
        [atci dwatch] will watch for code saves, and stop+rebuild+import+restart the dev server in test mode.
        [atci dconsole] will watch for code saves, and stop+rebuild+import+restart the dev server in test mode.
        [atci dstatus] will give a one-line status of the running dev server.
    During "careful" code development, the developer can call substeps to do these specific tasks.  Common tasks:
        [atci dbuild] builds release mode.
        [atci dbuild debug] builds debug mode.
    To sync changes:
        [atci ds] top-level dev script to commit+tag dev.
    See usage for the complete list.
 
Call these in one of two ways: [atci cmd ...] or [atcmd ...]
See https://bitpost.com/news for more bloviating.  Happy trading!  :-)
 
</pre>
 
=== Thread locking model ===
OLD model was to do async operations, sending them through the APIRequestCache.  The problem with that was that the website could not give immediate feedback.  FUCKING WORTHLESS.  The new model uses the same exact locking, just does it as needed, where needed.  We just need to choose wisely.
 
* Lock at USER LEVEL, as low-level as possible, but as infrequently as possible - not necessarily easy
* Lock container reads with reads lock, which allows multiple reads but no writing
  // Lock user for reading
  boost::shared_lock<boost::shared_mutex> lock(p_user->rw_mutex_);
* Lock container writes with exclusive write lock
  // Lock user for writing
  boost::lock_guard<boost::shared_mutex> uniqueLock(p_user->rw_mutex_);
 
There is also a global mutex, for locking during AppUser container operations, etc.
* reading
  boost::shared_lock<boost::shared_mutex> lock(g_p_local->rw_mutex_);         
* writing
  boost::lock_guard<boost::shared_mutex> uniqueLock(g_p_local->rw_mutex_);
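A minimal usage sketch combining the patterns above; the AppUser accessors are illustrative stand-ins:
<pre>
#include <boost/thread/lock_guard.hpp>
#include <boost/thread/shared_mutex.hpp>

// Minimal stand-in showing only what the locks need.
struct AppUser
{
    boost::shared_mutex rw_mutex_;
    double getTotalValue() const;               // illustrative
    void addPickToMemory(class StockRun* psr);  // illustrative
};

double readAccountValue(AppUser* p_user)
{
    // Multiple readers may hold this concurrently; writers are excluded.
    boost::shared_lock<boost::shared_mutex> lock(p_user->rw_mutex_);
    return p_user->getTotalValue();
}

void addPick(AppUser* p_user, StockRun* psr)
{
    // Exclusive: no readers or writers while we mutate the containers.
    boost::lock_guard<boost::shared_mutex> uniqueLock(p_user->rw_mutex_);
    p_user->addPickToMemory(psr);
}
</pre>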
 
=== DAILY MAINTENANCE ===
* Data is segmented into files, one per day
* to determine end-of-day, timestamps are checked as they come in with quotes (we had no better way to tell)
* At end of day, perform maintenance
** perform maintenance only once, checking for existing filename with date of "today"
** purge nearly all quotes and bracket events, while ensuring the new database remains self-contained
*** preserve last-available quote for all stocks
*** create a fresh new starting snapshot of all accounts using the preserved quotes
** postpone next quote retrieval until market is ready to open again
Pseudo:
  EtradeInterface::processQuotes()
    if (patc_->bTimestampIsOutsideMarketHours(pt))
      patc_->checkCurrentTimeForOutsideMarketHours()
      ---
      checkCurrentTimeForOutsideMarketHours
        // Do not rely on quote timestamps.
        ptime ptnow = second_clock::local_time();
 
        if (bMarketOpenOrAboutTo(ptnow))
          return false;
 
        // If we are just now switching to outside-hours, immediately take action.
          set_quotes_timer_to_next_market_open();    // This will cause the quotes timer to reset to a loooong pause.
          if (bTimestampIsAfterHours(ptnow))          // must be AFTER not before
            if (g_p_local->performAfterHoursMaintenance(ptnow))
                // Always start each new day with a pre-market account snapshot,
                            pa->addSnapshotToMemory(snaptime);
            runAnalysis();
 
=== PRODUCTION ASSETS ===
 
Due to potentially large sizes, I moved all bitpost production live assets to the software RAID.  Extra log backups are in the logs folder; extra db backups are in the db_archive folder.
 
at_server_live.log -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/logs/at_server_live.log
db_analysis -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_analysis
db_archive -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_archive
 
=== ANALYSIS ===
 
==== AnalysisData Bracketing Overview ====
Bracket vars:
  "ph1_initial_drop_high": 0.9986,
  "ph1_initial_drop_low": 0.9571,
  "ph1_initial_drop_steps": 4,
  "ph2_initial_rise_high": 1.0429,
  "ph2_initial_rise_low": 1.0014,
  "ph2_initial_rise_steps": 4,
  "ph3_loss_drop_high": 0.85,
  "ph3_loss_drop_low": 0.9929,
  "ph3_loss_drop_steps": 10,
  "ph3_profit_rise_high": 1.12,
  "ph3_profit_rise_low": 1.0071,
  "ph3_profit_rise_steps": 10,
  "ph4_profit_harvest_high": 0.9571,
  "ph4_profit_harvest_low": 0.9986,
  "ph4_profit_harvest_steps": 4,
 
APS monte carlo steps are taken from that.
 
===== steps =====
There is a limit to the total number of steps that can be processed overnight; the step counts above fit within it.  We should work on increasing the step count where possible, by optimizing and by improving hardware.
 
===== bracket constraints =====
`high` and `low` vars are given, then adjusted using a standard deviation, but within limits to keep meaningless tiny trades from happening.
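A hypothetical form of that constraint; the names and the 0.0014 floor are illustrative, not the shipped values:
<pre>
#include <algorithm>
#include <cmath>

// Scale a bracket multiplier by a standard-deviation factor, then enforce a
// minimum band so meaningless tiny trades can't happen.
double adjustBracketVar(double d_var, double d_sd_multiplier,
                        double d_min_move = 0.0014)
{
    double d_move = (d_var - 1.0) * d_sd_multiplier;  // e.g. 0.9986 -> -0.0014 (a 0.14% drop)
    double d_sign = d_move < 0.0 ? -1.0 : 1.0;
    d_move = d_sign * std::max(std::fabs(d_move), d_min_move);
    return 1.0 + d_move;
}
</pre>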
 
Example values:
        "ph1_initial_drop_high": 0.9986,
        "ph1_initial_drop_low": 0.9571,
        "ph1_initial_drop_steps": 4,
        "ph2_initial_rise_high": 1.0429,
        "ph2_initial_rise_low": 1.0014,
        "ph2_initial_rise_steps": 4,
        "ph3_loss_drop_high": 0.85,
        "ph3_loss_drop_low": 0.9929,
        "ph3_loss_drop_steps": 10,
        "ph3_profit_rise_high": 1.12,
        "ph3_profit_rise_low": 1.0071,
        "ph3_profit_rise_steps": 10,
        "ph4_profit_harvest_high": 0.9571,
        "ph4_profit_harvest_low": 0.9986,
        "ph4_profit_harvest_steps": 4,
 
==== TRENDLINE (REFACTOR SIX) ====
 
Keep focus on the final objective:
* I want to SEE A STOCK'S CYCLE, CLEARLY, so I can harvest it
* the cycle has to be around a base trendline
 
We will use recent non-reduced data.
 
We need a dynamic display that clues us in to the big picture.
 
More to come.
 
==== ANALYZE PSEUDO (REFACTOR FOUR) ====
 
* we run autoanalysis for every known symbol, with minimal interaction from user - they just decide to use it or not
* we do a standard deviation on the data, and a monte-carlo-like loop through ranges of APS values based on sd multipliers
* we are working toward a distributed microservice approach with many analysis engines across LAN
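A sketch of that sweep for the first two bracket vars, under stated assumptions: ApsSteps is a minimal stand-in for the real AutotradeParameterSet fields, scoreCandidate() is illustrative, and each *_steps value is > 1 (as in the examples above):
<pre>
// Minimal stand-in for the AutotradeParameterSet fields used here.
struct ApsSteps
{
    double ph1_initial_drop_low, ph1_initial_drop_high; int ph1_initial_drop_steps;
    double ph2_initial_rise_low, ph2_initial_rise_high; int ph2_initial_rise_steps;
};

void scoreCandidate(double d_drop, double d_rise);  // illustrative: simulate + score

double stepValue(double d_low, double d_high, int n_steps, int i)
{
    return d_low + i * (d_high - d_low) / (n_steps - 1);  // assumes n_steps > 1
}

void sweepPh1Ph2(const ApsSteps& aps)
{
    for (int i = 0; i < aps.ph1_initial_drop_steps; ++i)
        for (int j = 0; j < aps.ph2_initial_rise_steps; ++j)
            scoreCandidate(
                stepValue(aps.ph1_initial_drop_low, aps.ph1_initial_drop_high,
                          aps.ph1_initial_drop_steps, i),
                stepValue(aps.ph2_initial_rise_low, aps.ph2_initial_rise_high,
                          aps.ph2_initial_rise_steps, j));
    // ...the ph3/ph4 vars nest the same way
}
</pre>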
 
Function overview:
  bool AtController::load_startup_data()
      if (b_analyze_on_startup_)
          getAnalyzerController().requeue_analyses();
  bool AtController::checkCurrentTimeForOutsideMarketHours()
      if (getModel().bMarketOpenOrAboutTo(ptnow))
          return false;
      if (!b_after_hours_)
          b_after_hours_= true;
          // Attempt maintenance and analysis, but only if we are AFTER hours (not before).
          if (getModel().bTimestampIsAfterHours(ptnow))
              getModel().performAfterHoursMaintenance(ptnow);
 
  void AnalyzerController::requeue_analyses()
      stop_and_clear_jobs();
      fill_all_job_slots();
 
=== REPORTING JSON ===
 
    PATTERN
    -------
    bool handler()
        mm().readXxxJson(json)                              (done in derived model)
            for r
                buildAccountCyclesPerformanceJSONRow(      (done in MM)
                    row["snapshot_action"].as<int64_t>()
                )               
        archive().readXxxJson(json)                        (done in derived model)
            for row
                buildAccountCyclesPerformanceJSONRow(      (done in MM)
                    query.getColumn(0).getInt64(),
    FOLLOW OUR PATTERN WITH all handlers:
    * PostAccountPerformanceCycles
       
        AtHttpServer::PostAccountPerformanceCycles()
            readAccountCyclesPerformanceJSON (derived models)
                buildAccountCyclesPerformanceJSONRow (mm)
    * GetAccountPerformance
    * PostAccountPerformance
        NOTE that these actually harvest the data completely first,
        due to reporting requirements (JSON can't be built directly from db rows)
    - PostAccountActivity
    - PostAccountActivityTable
 
=== QAB charts ===
 
CHART TIMEFRAME DESIGN
 
  use cases:
    user wants to see performance across a variety of time frames <- PERFORMANCE PAGE only!
    user wants to see historical brackets for older days <- PERFORMANCE PAGE only!  lower priority!
    user wants to perform immediate actions on realtime chart
    user wants to do autoanalysis across a range and then manually tweak it
 
  requirements
    round 1: we can satisfy everything with TODAY ONLY (show today archive if after hours)
    round 2: add a separate per-day performance chart
    round 3: add a date picker to the chart to let the user select an older day to show
 
  node reduction
    data DISPLAY only needs to show action points and highs/lows
        aggressively node-reduce to fit the requested screen size!
        given: number of pixels of width
        provide: all bracket quotes plus the lowest+highest quotes in each 2-pixel range (minimum - adjustable to more aggressive clipping if desired)
    internal data ANALYSIS should use all points
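A hypothetical reduction pass matching that rule; RunQAB is the "quote+time + optional Bracket ptr" described under MODEL, and the member names are assumptions:
<pre>
#include <algorithm>
#include <ctime>
#include <vector>

struct RunQAB { time_t time_; double quote_; const void* p_bracket_; };

// Keep every bracket quote, plus the lowest and highest quote in each
// 2-pixel time bucket.
std::vector<RunQAB> reduceForDisplay(const std::vector<RunQAB>& rh, int pixel_limit)
{
    if (rh.empty() || pixel_limit < 2) return rh;
    std::vector<RunQAB> out;
    const int n_buckets = pixel_limit / 2;
    const double d_span = double(rh.back().time_ - rh.front().time_) + 1.0;
    size_t i = 0;
    for (int b = 0; b < n_buckets && i < rh.size(); ++b)
    {
        const time_t t_end = rh.front().time_ + time_t(d_span * (b + 1) / n_buckets);
        size_t i_lo = i, i_hi = i;
        bool b_any = false;
        for (; i < rh.size() && rh[i].time_ < t_end; ++i)
        {
            b_any = true;
            if (rh[i].p_bracket_) out.push_back(rh[i]);  // bracket quotes always kept
            if (rh[i].quote_ < rh[i_lo].quote_) i_lo = i;
            if (rh[i].quote_ > rh[i_hi].quote_) i_hi = i;
        }
        if (!b_any) continue;
        out.push_back(rh[std::min(i_lo, i_hi)]);         // low and high, in time order
        if (i_lo != i_hi) out.push_back(rh[std::max(i_lo, i_hi)]);
    }
    return out;
}
</pre>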
 
CHART DATA RETRIEVAL
<pre>
  function addStock(cycle) {
    restAndApply('GET','runs/'+cycle.run+'/live.json?pixel_limit='+$(window).width()*2...
    ---
    void AtHttpsServer::GetRunLive(API_call& call)
      g_p_local->readRunLive(p_user->db_id_, account_id, run_id, symbol, pixel_limit, atc_.bAfterHours(), rh);
      atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);
  var analysisChange = function(event) {
    $.getJSON('runs/'+run+'/analysis.json?pixel_limit='+$(window).width()*2+'&aggressiveness='+event.value, function( data ) {
    ---
    AtHttpsServer::GetRunAnalysis(API_call& call)
      atc_.thread_handleAnalysisRequest(
      ---
      AtController::thread_handleAnalysisRequest(BrokerAccount& ba,int64_t run_id,bool b_autoanalyze,double d_aggressiveness,int32_t pixel_limit,string& http_reply)
        g_p_local->readRunHistory()
        thread_analyzeHistory()
        thread_buildRunJSON(rh,apsA,apsA.run_id_);
  -- NOT CURRENTLY CALLED --
  function displayHistory(run)
    $.getJSON('runs/'+run+'/history.json?pixel_limit='+$(window).width()*2+'&days=3', function( data ) {
    ---
    AtHttpsServer::GetRunHistory(API_call& call)
      g_p_local->readRunHistory(p_user->db_id_,account_id,run_id,symbol,days,sr.paps_->n_analysis_quotes_per_day_reqd_,pixel_limit,rh);
      atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);
</pre>
 
=== DEBUG LIVE ===
NOTE that you WILL lose stock quote data during the debugging time, until we set up a second PROD environment.
* WRITE INITIAL DEBUG CODE in any DEV environment
* UPDATE DEBUGGER to run with [live] parameter instead of debug ones
* COPY DATABASE directly from PROD environment to DEV environment: atimport prod
* STOP PROD environment at_server
* DEBUG.  quickly.  ;-)
* PUSH any code fix (without debug code) back to PROD env
* RESTART PROD and see if the fix worked
* REVERT DEV environment: clean any debug code, redo an atimport and reset the debugger parameters
 
=== HTML SHARED HEADER/FOOTER ===
<pre>
    -------------------------------------------------------------------------------
    THREE PARTS THAT MUST BE IN EVERY HTML FILE:
    -------------------------------------------------------------------------------
   
      1) all code above <container>, including these replaceables:
          a) logout:      <button type='button' id='logout' class='btn btn-margined btn-xs btn-primary pull-right'>Log out</button>
          b) breadcrumbs: <!--bread--><li><a href="/1">1</a></li><li class="active">2</li><!--crumbs-->
      2) logout button handler
        $( document ).ready(function() {
      3) footer and [Bootstrap core Javascript]
     
    what a maintenance nightmare - but it seems best to do all 10-12 files manually
    -------------------------------------------------------------------------------
</pre>
 
=== HAPROXY and LOAD BALANCING ===
 
For the first 1000 paid users, we will NOT do load balancing.
* Use haproxy Layer 7 (http) load balancing to redirect abettertrader.com requests to a bitpost.com custom port.
 
For load balancing, there are two database design choices:
* Each server gets its own quotes and saves all its own data
** Need to read user id from each request and send each user to a predetermined server
** Need multiple Etrade accounts, one for each server, unless we get a deal with Etrade
* Switch to a distributed database with master-master replication
** A lot of work
** Might kill sub-second performance?  Might not.  We already have delayed-write.
 
 
=== TIMESTAMP STANDARDIZATION ===
 
Standardize internal times as int64_t milliseconds since 1970, UTC.  That's not ideal, as it doesn't deal with leap seconds, but it makes our time-handling code much faster, so it's worth the tradeoff.
 
Display times in local time.
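The convention as a minimal sketch, standard library only:
<pre>
#include <cstdint>
#include <ctime>
#include <string>

// Store int64_t UTC milliseconds internally; convert to local time only at the
// display edge.
std::string displayLocal(int64_t ms_since_1970_utc)
{
    std::time_t t = static_cast<std::time_t>(ms_since_1970_utc / 1000);
    char buf[32];
    std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", std::localtime(&t));
    return std::string(buf);
}
</pre>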
 
{| class="wikitable"
|+Timestamp TODO
|-
|Chart URL
|The time should default to 9:30am EST which should display in URL as something like: 2019-09-13T13:30:00.000Z
|-
|Performance page
|?
|-
|Activity page
|?
|}
 
=== OLDER NOTES ===
 
==== WALKING DATABASE FILES ====
 
(we are moving to postgres for archiving!)
 
There are two types of historical requests:
* a specific date range, usually requested by user; use this:
getDatabaseNames(startdate,enddate)
 
* a specific number of days, usually requested by analysis; loop with this:
getPreviousDatabaseName(dt,db_name)
 
* there is also this, which skips non-market days, but we want them for now when walking db files:
get_previous_market_day(pt)
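A hypothetical analysis-side loop over those helpers; the exact signatures are not shown on this page, so treat this as a shape, not a contract:
<pre>
#include <string>
#include <boost/date_time/posix_time/posix_time.hpp>
using boost::posix_time::ptime;
using boost::posix_time::second_clock;

bool getPreviousDatabaseName(ptime& dt, std::string& db_name);  // assumed by-ref signature
void analyzeDatabase(const std::string& db_name);               // illustrative

void walkBack(int n_days)
{
    ptime dt = second_clock::local_time();
    std::string db_name;
    for (int n = 0; n < n_days; ++n)
    {
        if (!getPreviousDatabaseName(dt, db_name))  // assumed: steps dt back one day
            break;
        analyzeDatabase(db_name);
    }
}
</pre>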
 
Move to mongo soon! :-)
 
==== API PSEUDO (no longer of much use) ====  


     APIGetRunLive::handle_call()
         readRunHistory
             readRunQAB
====  Qt Creator settings (obsolete; we moved on to CLion, then VS Code) ====
* Make sure you have already run [atbuild] and [atbuild debug].
* Open CMakeLists.txt as a Qt Creator project.
* It will force you to do CMake - pick cmake-release folder and let it go.
* Rename the build config to debug.
* Clone it to release and change folder to release.
* Delete make step and replace it with custom build:
./build.sh
(no args)
%{buildDir}
* Create run setups:
you have to use hardcoded path to BASE working dir (or leave it blank maybe?):
    /home/m/development/thedigitalage/AbetterTrader/server
[x] run in terminal
I recommend using TEST args for both debug and release: localhost 8000 test reanalyze (matches attest)
LIVE args may occasionally be needed for use with [atimport prod]: localhost 8080 live (matches atlive)
==== MONTHLY MANUAL MAINTENANCE ====
(This is now available via the admin Summary page.)
Automate as much as possible, but this is not that bad and safer to do manually when we know the time is right:
monthly db maintenance
        just monthly:
            update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";
            update PrefStr set value = "Leader at 1/31 closing bell wins<br />December winner: cfjaques Go Cara!" where name="LeaderboardDescription";
            update Accounts set leaderboard_initial_value = total_managed_value;
            update AnalysisData set avg_pct_gain_mtd = 0, sells_count_mtd = 0;
            update StockRuns set avg_pct_gain_mtd = 0, sells_count_mtd = 0;
        AND ANNUAL!  HAPPY NEW YEAR 2018!
            update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";
            update PrefStr set value = "Leader at 1/31 closing bell wins<br />December winner: cfjaques Go Cara!" where name="LeaderboardDescription";
            update Accounts set leaderboard_initial_value = total_managed_value, year_initial_value = total_managed_value;
            update AnalysisData set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;
            update StockRuns set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;
        if you need to hard-reset all the simulation accounts, do this and restart to fix all CASH:
            update Accounts set initial_managed_value = 100000, total_managed_value = 100000, net_account_value = 100000 where broker_id=2;
== Developer environment setup ==
The setup script is [at setup [nopostgres]].  The installation script can be used for initial setup, and rerun to upgrade components like boost, SWS, SWSS, and postgres.
=== First time install ===
* First, set up YET ANOTHER GODDAMN BOX:
setup_linux.sh [desktop | nodesktop]
* Clone the good stuff
mh c Get all the goodness
=== Dependencies ===
==== Choose local postgres server or remote ====
You can choose to install a full postgres server (usually desired on a laptop):
at setup
Or just install postgres client, and point your dev installation at another postgres server installation (typically use positronic if you're on the LAN):
at se nopostgres
==== To upgrade boost ====
* update the [[Development reference | boost]] version in .bashrc
* [cdl] and remove existing boost install
* rerun [at se] or [at se nopostgres] as appropriate
==== libpqxx ====
We use our own fork of pqxx, github:moodboom
I keep the fork updated on cast.
We keep a repo of it in development/Libraries/c++/libpqxx/source/libpqxx
We keep a repo of the parent that we forked from, here:
  development/Libraries/c++/libpqxx/source/libpqxx-jvt-parent
It is a straight git-clone of the jvt repo.
To rebase on top of the latest parent release:
  cd libpqxx-jvt-parent
  git pull
  git reset --hard tags/7.7.0 # or whatever latest release is
  cd ../libpqxx
  # this should already be done:
    # git remote add jvt-parent ../libpqxx-jvt-parent
    # git checkout -b jvt-parent
    # git fetch --all
    # git branch --set-upstream-to=jvt-parent/master jvt-parent
  git checkout jvt-parent && git pull
  git checkout master && git rebase jvt-parent
  # fix up the merge and commit and push -f!
To force-push all the way back to github:
🤔 m@morosoph  [~/development/Libraries/c++/libpqxx.git] git push --set-upstream origin master -f
==== Simple-Web-Server ====
I keep my own fork on gitlab.
I pull parent fork changes in on cast.
To get a new release going:
cd development/Libraries/c++/Simple-Web-Server
git branch
    eidheim-parent
    master
    release/abt-0.0.3
  * release/abt-0.0.4
git checkout -b release/abt-0.0.5
# make sure development/Libraries/c++/Simple-Web-Server-eidheim has most recent commits pulled
git checkout eidheim-parent
git pull
git checkout release/abt-0.0.5
git rebase eidheim-parent
git push --all
Something like that, anyway :P
==== Simple-WebSocket-Server ====
Similar to SWS.
=== Clone prod database ===
You can easily pull a sanitized copy of prod down for local usage.  It will use the dev account instead of prod.  No reason not to do this OFTEN!
Also, you probably don't need quotes!  Those are HUGE.  It's fast if you skip them.
ssh positronic
mh-add-postgres-db at_whatevs
at dump noquotes
at clone positronic-at_live-noquotes-2022-05-15-162423 at_whatevs
# set up a launch.json block to use it - probably with "offline" too
== [[Trading]] ==

Latest revision as of 02:44, 12 March 2024

LIVE SITE -- DEV SITE

Generated Documentation

A better Trader testing

Use Case

   ALL WE ASK OF USERS IS TO WATCH THE NEWS AND...
   
       -----------------------------------------------------
       BE  INTUITIVE   ABOUT   A   STOCK'S   NEXT  DIRECTION
       -----------------------------------------------------
   
   IF A PICKED IS LOOKING GOOD:    let it ride; actions: buy; reset the bracket to hold it longer
   IF A PICKED IS LOOKING BAD:     delete it - there are more fish in the sea, get OUT
   IF AN OWNED IS LOOKING GOOD:    let it ride; actions: sell; hold
   IF AN OWNED IS LOOKING BAD:     let it ride; actions: sell and drop, raise bracket
   REGULARLY ADD AND REMOVE PICKS
   
   THAT'S THE DAILY USAGE PATTERN
   we must do everything else!  
   all that boring analysis should be done for them, unless they really want to obsess

Design

Events

  • Types
    • this account's snapshots
    • significant highlights (across all accounts and cycles); NOTE to make this performant, we only use snapshots (all types) to find these highlights; snapshots should happen on every major change tho! Cool.
    • news (TBD)
  • Provide more detail for monthly-or-below, and less for above-monthly
  • The server consolidates data into all "significant" events for the timeline
  • The user can filter events on the client for readability in a small UI, but they are always available for the given timeframe so filter changes can be quickly applied

Monthly

Realtime events
  • All user account snapshot events
  • "significant" changes to any account or cycle

(MORE TODO)

Daily-batched events

(MORE TODO)

Above-Monthly

  • Only provide account SELLS. Easy to query, plenty of data.

(MORE TODO)

Patterns

Basics

  • Load from SQL/nosql tables into Major Objects
  • Use a tight single-threaded message loop with async timers that set booleans when it is time to perform tasks
  • Offload heavy automated analysis to after-hours external analyzers, with the goal of applying yesterday's best fit to today
  • Server should provide minimal concise data to client, and client Javascript should do all UI rendering work.
    • Traditionally, we would inject const variables into html
    • Best practice but more of a refactor: make three separate calls to server to get html, javascript, and data. The html and javascript become cacheable static files.

Object model

JSON schema is used to generate base data classes and database read/write code, to provide the most agile schema refactoring. Follow these patterns to keep it consistent:

  • use a constructor with defaults for all parameters:
   // This constructor serves several purposes:
   //  1) standard full-param constructor, efficient for both deserializing and initializing
   //  2) no-param constructor for reflection via quicktype
   //  3) id constructor for loading via id + quicktype fields
   //  4) id constructor for use as key for unsorted_set::find()
   BrokerAccount(
       int64_t db_id = PersistentIDObject::DBID_DO_NOT_SAVE,       
       int64_t aar_add_arca_enabled = -1,
       ...
   :
       // Call base class
       inherited(ba_max_db_id_,db_id),

       // internal members
       ...
   {
       // persistent members
       ...
   }
  • There are three use cases for new objects:
    • objects about to be loaded - use constructor params, and load in a value for db_id_
    • new objects that should be made persistent - track a max_db_id in the parent to provide the "next" db_id constructor parameter
    • temporary objects - use the default constructor
  • use addXXXToMemory() to init parent-child relationships
   StockRun& BrokerAccount::addStockRunToMemory(StockRun* psr, bool bInsertNewRank)
       psr->setParent(*this);
       runsByRank_.push_unsorted(psr);
       runs_.insert(psr);

       BrokerAccount &AppUser::addBrokerAccountToMemory(BrokerAccount *pba)
       // We need a valid db id.
       assert(pba->db_id_ != PersistentIDObject::DBID_UNSAVED);

       // The caller is responsible for ensuring the account doesn't exist.
       assert(findBrokerAccount(pba->db_id_) == accounts_.end());

       pba->setParent(*this);
       accounts_.insert(pba);
       accountsByStringId_.insert(pba);
       return *pba;
  • use pointers for parent objects
   set always-and-only once, on load
   use functions that return a reference, to use those objects
   example:
   // EXTERNAL REFERENCES
   // NOTE we are not responsible for these allocations.
   // Access pointers to parents as references.
   // Accces nullable pointers directly.
   void setParent(AppUser& au) { pau_ = &au; }
   AppUser& au() { assert(pau_ != 0); return *pau_; }
   AppUser& au() const { assert(pau_ != 0); return *pau_; }
  • squash 1:1 contained members into parent
   StockRun (Cycle):
       flatten these:
           Stock stock_;
           StockPick sp_;
           AutotradedStock as_;
  • store contained containers/vectors in separate tables, and fill in secondary pass
   StockRun (Cycle):
       BracketEvents sbe_;

Web UI

  • bootstrap header and footer
  • possible subsection navbar (eg Accounts, Admin) that sticks to top with bootstrap header
  • forms:
    • For the simplest forms, just use inputs and a button, and capture the click in js. Don't use <form> as it is hard to prevent the default behavior.
    • For any substantial multi-field form, use helpers in at.js
  • tables: we define tables entirely in JSON and pop them up with bootstrap-table, see AnalysisData getJSON, getJSONColumnNames
  • ajax: see at.js
  • patch can provide partial JSON to do partial updates (don't touch fields that are not provided)
  • dates: moment.js - need to convert d3 date functions to moment
  • money: accounting.js

Documentation

Performance Tracking

We measure three types of performance:

  • SR: As a stock cycles, it tracks the %gain on each buy-sell; Each cycle may have a diff # of stocks but aggregating %change-per-cycle should have value
  • BA: This is where we can say "at time T1 we had value X; at time T2 we had value Y" and get precise gains.
  • AD: This is used across many cycles and accounts and needs aggregation similar to SR.
  • SR and AD performance should be avg-percentage based as there is no base "value" like with BA
    • use an "average gain per buy-sell cycle" => d_avg_pct_gain_[GTT_COUNT] + sells_count_[GTT_COUNT]
  • cycle stopsells may need closer inspection
    • track stopsells; perhaps red-flag at 1 stopsell, then bail on 2?
    • cycle stopsell should (eventually?) flag the aps as in critical need of an update via reanalysis of recent history; rerun analysis, then reset the stopsell to zero
  • track performance of ad
    • nothing to do with need to rerank or reanalyze
    • but just so we learn over time what the best metaranges are
    • we want to keep working in this area, expanding as it makes sense
    • eg separate stocks' volatility by price, volume, market...etc.

Doxygen and SchemaSpy diagrams

SchemaSpy was used on Sqlite relationships to generate a nice foreign key map.

Doxygen shows class diagrams. Here are some central relationships:

MODEL

AtController
  ui_
  memory_model_
  timers

MemoryModel: delayed-write datastore manager; use dirty flags + timed, transactioned saveDirtyObjects() call
  prefs
  sq_
  brokers
  aps_
  users_

TradingModel
  AppUser
    BrokerAccount: 
      runs_ (sorted by id) 
      runsByRank_ 
          contains all cycles sorted by rank
          ONLY MANUAL has valid rank values
          but additional sort criteria is used to handle all cycles
          see StockRunsByRank_lessthan()           
      broker (not worthy of its own layer)
      StockRun: rank, active, owned
        StockPick+AutotradedStock: quote processing
        SPASBracketEvent: stores one bracket-change event; includes triggering quote, bActive, buy/sell {quantity, commission, value}
      StockSnapshot: run, symbol, quote, quantity (for account snapshot history)

Analysis data
  StockQuote 
    db only has time+symbol+price  
    memory also has p_aad
  AutoAnalysisData
    symbol
    aps_id
    stopsells
    profitable_sells   
  StockRun
    aps
  AutotradeParameterSet
    bracket params
    many analysis vars - move these out to aad     
  BracketEvent
    contains all details about any possible event
    for report and analysis we have a tighter version: 
      typedev vector<RunQAB> RunHistory
      RunQAB is just quote+time + optional Bracket ptr

Order lifespan
  Sim and analysis buy: place order, wait for next stock quote, buyWasExecuted()
  Live buy: place order, poll for execution, buyWasExecuted()

Stock model

       Stock
         StockQuote& quote_;
           typedef std::pair<const string,StockQuoteDetails*> StockQuote;
             StockQuoteDetails
               double d_quote_;
               time_t timestamp_;
               (+spike logic)
         int64_t n_quantity_;
         StockOrder so_;
           int64_t order_id_;
           ORDER_STATUS status_;
           int64_t quote_db_id_;

QUOTE PROCESSING

   AlpacaQuotesWss::startWss() on_message for EVERY T QUOTE WE GET
     AlpacaQuotesWss::processQuote
       quotes_.push_back(q);

   AlpacaInterface::getQuotes()
     if (p_quotes_wss_->drainQuotesQueue(latest_quotes))
       return vetAndProcessQuotes(latest_quotes);
       ---
       patc_->mm().processQuote(

   ^^^ all that does nothing but provide raw quotes, no worries there

   bool MemoryModel::processQuote
     bool bReset = !sqd.bValid() || sqd.resetBracketsAtMarketOpen_;
     if ( bRealtime() ) bProcess = sqd.addToSpikeHistory(dQuote, timestamp);
     if ( bReset || bProces )
       ---
       pa->processQuote(sq, bReset);

   BrokerAccount::processQuote(
     // skip quote if APS is not ready!
     if (!psr->aps().bIsReady() && au().mm().bRealtime()) continue;

     if (broker().bSimulation() && psr->bBuyOrderPending())
       psr->executeBuyOnQuote();
     else if (broker().bSimulation() && psr->bSellOrderPending())
       psr->executeSellOnQuote();
     else if (psr->processQuoteForBuy(sq))
       vsrToBuy.push_back(psr);
     else if (psr->processQuoteForSell(sq))
       vsrToSell.push_back(psr);

     for (StockRun *psr : vsrToSell)
       sell(*psr);

   for (StockRun *psr : vsrToBuy)
     // Try to buy until we're full!
     if (bMaxOpenOrders() || !buy(*psr))
       psr->resetBracketOnDelayedBuyAsNeeded();
   ---
   next processing is here:
     StockRun::processQuoteForBuy
       if (!bUnowned()) (bOrderUnresolved()) return false;
       if (handlePickInstaspike()) log
       else if (bBounceType())
     ---
     bool StockRun::handlePickInstaspike()
       if (
             sq().second->bSpikeFlatUp(aps().spike_protection_percent_) 
         || sq().second->bSpikeFlatDown(aps().spike_protection_percent_))
         return true; // Ignore it.
     ---
     bSpikeFlatUp
       return ((quote_ - d_last_quote_) / quote_ > dPct);
     bSpikeFlatDown
       return (abs(d_last_quote_ - d_last_last_quote_) / d_last_quote_ < 0.002) && bSpikeDown(dPct);
     bSpikeDown
       return ((d_last_quote_ - quote_) / d_last_quote_ > dPct);
   ---
   and here:
     StockRun::processQuoteForSell
       if (handleOwnedInstaspike()) // log-and-drop
       ---
       StockRun::handleOwnedInstaspike()
         if (sq().second->bSpikeUpDown) // reset to last-last
         else if (sq().second->bSpikeFlatDown) // ignore it
       ---
       bSpikeUpDown
         return ((d_last_quote_ - d_last_last_quote_) / d_last_quote_ > dPct) && bSpikeDown(dPct) && (abs(quote_ - d_last_quote_) / quote_ < 0.002);

DEBUG INTO REST API HANDLERS

break on server_http.hpp ln~370: if(REGEX_NS::regex_match(request->path, sm_res, regex_method.first)) {
watch regex_method.first.str
watch request->path

CI

MASTER SCRIPT: atci

We will have a live site, a constantly running CI site, and multiple dev environments.

RUN LIVE at bitpost.com:

m@bitpost rs at
m@bitpost # if that doesn't work, start a session: screen -S at 
m@bitpost cd ~/development/thedigitalage/AbetterTrader/server-prod
m@bitpost atlive
 ========================================================
    *** LIVE MODE ***
 ========================================================
CTRL-A CTRL-D

RUN CI at bitpost.com:

# Keep this running to ensure that changes are dynamically built as they are committed
# It should run at a predictable publicly available url that can be checked regularly
# It runs in TEST but should run in a test mode that has an account assigned to it so it is very much like LIVE
# It runs release build in test mode
CTRL-A CTRL-D

RUN DEV anywhere but bitpost:

# Dev has complete control; most common tasks:
#   Code fast with a local CI loop - as soon as a file is changed, CI should restart server in test mode, displaying server log and refreshing server page
#       kill server, build, run, refresh browser
#   Turn off CI loop to debug via IDE
#   Stop prod, pull down production database, run LIVE mode in debugger to diagnose production problems


Old notes from `at readme`

TODO NEEDS WORK!!

1) Set up a new dev environment: at setup 
This function defines continuous integration steps for our project.
See: https://bitpost.com/wiki/Continuous_Integration

1) Continuous Integration environment
     There should only be one official CI repository.
     The CI repository should be a clone of the central bare .git repo.
     The central bare .git repo should have a git post-receive hook that pushes all code to the CI repo as soon as it is received.
     [atci cwatch] will watch for receipt of new code pushes, and stop+rebuild+import+restart the CI server in test mode.
     The script will copy production data, massaged to run in test mode.

2) Get ci status
     [atci cconsole] will restore the screen of the running ci server
     [atci cstatus] TODO TOTHINK: will be called via php via ajax from the main development webpage (bitpost.com)
         to query the CI server and report its one-line status 24/7.  Should include build/run/test status.

3) Development environment
     During "fast" code development:
         [atci dwatch] will watch for code saves, and stop+rebuild+import+restart the dev server in test mode.
         [atci dconsole] will watch for code saves, and stop+rebuild+import+restart the dev server in test mode.
         [atci dstatus] will give a one-line status of the running dev server.
     During "careful" code development, the developer can call substeps to do these specific tasks.  Common tasks:
         [atci dbuild] builds release mode.
         [atci dbuild debug] builds debug mode.
     To sync changes:
         [atci ds] top-level dev script to commit+tag dev.
     See usage for the complete list.

Call these in one of two ways: [atci cmd ...] or [atcmd ...]
See https://bitpost.com/news for more bloviating.  Happy trading!  :-)

Thread locking model

OLD model was to do async operations, sending them through the APIRequestCache. The problem with that was that the website could not give immediate feedback. FUCKING WORTHLESS. New model uses the same exact locking, just does it as needed, where needed. We just need to chose that wisely.

  • Lock at USER LEVEL, as low-level as possible, but as infrequently as possible - not necessarily easy
  • Lock container reads with reads lock, which allows multiple reads but no writing
 // Lock user for reading
 boost::shared_lock<boost::shared_mutex> lock(p_user->rw_mutex_);
  • Lock container writes with exclusive write lock
 // Lock user for writing
 boost::lock_guard<boost::shared_mutex> uniqueLock(p_user->rw_mutex_);

There is also a global mutex, for locking during AppUser container operations, etc.

  • reading
 boost::shared_lock<boost::shared_mutex> lock(g_p_local->rw_mutex_);          
  • writing
 boost::lock_guard<boost::shared_mutex> uniqueLock(g_p_local->rw_mutex_);

DAILY MAINTENANCE

  • Data is segmented into files, one per day
  • to determine end-of-day, timestamps are checked as they come in with quotes (we had no better way to tell)
  • At end of day, perform maintenance
    • perform maintenance only once, checking for existing filename with date of "today"
    • purge nearly all quotes and bracket events, while ensuring the new database remains self-contained
      • preserve last-available quote for all stocks
      • create a fresh new starting snapshot of all accounts using the preserved quotes
    • postpone next quote retrieval until market is ready to open again

Pseudo:

 EtradeInterface::processQuotes()
   if (patc_->bTimestampIsOutsideMarketHours(pt))
     patc_->checkCurrentTimeForOutsideMarketHours()
     ---
     checkCurrentTimeForOutsideMarketHours
       // Do not rely on quote timestamps.
       ptime ptnow = second_clock::local_time();
 
       if (bMarketOpenOrAboutTo(ptnow))
         return false;
 
       // If we are just now switching to outside-hours, immediately take action.
         set_quotes_timer_to_next_market_open();     // This will cause the quotes timer to reset to a loooong pause.
         if (bTimestampIsAfterHours(ptnow))          // must be AFTER not before
           if (g_p_local->performAfterHoursMaintenance(ptnow))
               // Always start each new day with a pre-market account snapshot, 
                           pa->addSnapshotToMemory(snaptime);
           runAnalysis();

PRODUCTION ASSETS

Due to potentially large sizes, I moved all bitpost production live assets to the software raid. Extra log backups are in logs folder. Extra db backups are in db_archive folder.

at_server_live.log -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/logs/at_server_live.log
db_analysis -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_analysis
db_archive -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_archive

ANALYSIS

AnalysisData Bracketing Overview

Bracket vars:

 "ph1_initial_drop_high": 0.9986,
 "ph1_initial_drop_low": 0.9571,
 "ph1_initial_drop_steps": 4,
 "ph2_initial_rise_high": 1.0429,
 "ph2_initial_rise_low": 1.0014,
 "ph2_initial_rise_steps": 4,
 "ph3_loss_drop_high": 0.85,
 "ph3_loss_drop_low": 0.9929,
 "ph3_loss_drop_steps": 10,
 "ph3_profit_rise_high": 1.12,
 "ph3_profit_rise_low": 1.0071,
 "ph3_profit_rise_steps": 10,
 "ph4_profit_harvest_high": 0.9571,
 "ph4_profit_harvest_low": 0.9986,
 "ph4_profit_harvest_steps": 4,

APS monte carlo steps are taken from that.

steps

There is a limit to the number of total steps that can be done overnight. The steps above are able to be performed. We should work on increasing step count as possible by optimizing, and improving hardware.

bracket constraints

`high` and `low` vars are given, then adjusted using a standard deviation, but within limits to keep meaningless tiny trades from happening.

Example values:

       "ph1_initial_drop_high": 0.9986,
       "ph1_initial_drop_low": 0.9571,
       "ph1_initial_drop_steps": 4,
       "ph2_initial_rise_high": 1.0429,
       "ph2_initial_rise_low": 1.0014,
       "ph2_initial_rise_steps": 4,
       "ph3_loss_drop_high": 0.85,
       "ph3_loss_drop_low": 0.9929,
       "ph3_loss_drop_steps": 10,
       "ph3_profit_rise_high": 1.12,
       "ph3_profit_rise_low": 1.0071,
       "ph3_profit_rise_steps": 10,
       "ph4_profit_harvest_high": 0.9571,
       "ph4_profit_harvest_low": 0.9986,
       "ph4_profit_harvest_steps": 4,

TRENDLINE (REFACTOR SIX)

Keep focus on the final objective:

  • I want to SEE A STOCK'S CYCLE, CLEARLY, so i can harvest it
  • the cycle has to be around a base trendline

We will use recent non-reduced data.

We need to dynamically display, to clue us in to the big picture.

More to come.

ANALYZE PSEUDO (REFACTOR FOUR)

  • we run autoanalysis for every known symbol, with minimal interaction from user - they just decide to use it or not
  • we do a standard deviation on the data, and a monte-carlo-like loop through ranges APS values based on sd multipliers
  • we are working toward a distributed microservice approach with many analysis engines across LAN

Function overview:

 bool AtController::load_startup_data()
     if (b_analyze_on_startup_)
         getAnalyzerController().requeue_analyses();
 bool AtController::checkCurrentTimeForOutsideMarketHours()
     if (getModel().bMarketOpenOrAboutTo(ptnow))
         return false;
     if (!b_after_hours_)
         b_after_hours_= true;
         // Attempt maintenance and analysis, but only if we are AFTER hours (not before).
         if (getModel().bTimestampIsAfterHours(ptnow))
             getModel().performAfterHoursMaintenance(ptnow);
 
 void AnalyzerController::requeue_analyses()
     stop_and_clear_jobs();
     fill_all_job_slots();

REPORTING JSON

   PATTERN
   -------
   bool handler()
       mm().readXxxJson(json)                              (done in derived model)
           for r
               buildAccountCyclesPerformanceJSONRow(       (done in MM)
                   row["snapshot_action"].as<int64_t>()
               )                
       archive().readXxxJson(json)                         (done in derived model)
           for row
               buildAccountCyclesPerformanceJSONRow(       (done in MM)
                   query.getColumn(0).getInt64(),

   FOLLOW OUR PATTERN WITH all handlers:
   * PostAccountPerformanceCycles
       
       AtHttpServer::PostAccountPerformanceCycles() 
           readAccountCyclesPerformanceJSON (derived models)
               buildAccountCyclesPerformanceJSONRow (mm)

   * GetAccountPerformance
   * PostAccountPerformance
       NOTE that these ones actually completely harvest the data first,
       due to reporting requirements (JSON can't be built directly from db rows)

   - PostAccountActivity
   - PostAccountActivityTable

QAB charts

CHART TIMEFRAME DESIGN

 use cases:
   user wants to see performance across a variety of time frames <- PERFORMANCE PAGE only!
   user wants to see historical brackets for older days <- PERFORMANCE PAGE only!  lower priority!
   user wants to perform immediate actions on realtime chart
   user wants to do autoanalysis across a range and then manually tweak it
 
  requirements:
   round 1: we can satisfy everything with TODAY ONLY (show today archive if after hours)
   round 2: add a separate per-day performance chart
   round 3: add a date picker to the chart to let the user select an older day to show
 
 node reduction
   data DISPLAY only needs to show action points and highs/lows
       aggressively node-reduce to fit the requested screen size!
       given: number of pixels of width
        provide: all bracket quotes plus the lowest+highest quotes in each 2-pixel range (minimum - adjustable to more aggressive clipping if desired; see the sketch after this list)
   internal data ANALYSIS should use all points
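
A hedged sketch of the display-side reduction (Quote and reduce_for_display are illustrative names): bucket the quotes into 2-pixel ranges, keep each bucket's low and high, and always keep bracket-action quotes. A real version would also de-dup and strictly preserve time order.

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <vector>
  
  struct Quote { std::int64_t ts_ms; double price; bool b_bracket_action; };
  
  std::vector<Quote> reduce_for_display(const std::vector<Quote>& quotes, int pixel_width)
  {
      if (quotes.empty() || pixel_width <= 1)
          return quotes;
  
      // One bucket per 2 pixels of requested width (the adjustable minimum).
      std::size_t buckets = static_cast<std::size_t>(pixel_width) / 2;
      std::size_t per_bucket =
          std::max<std::size_t>(1, quotes.size() / std::max<std::size_t>(1, buckets));
  
      std::vector<Quote> reduced;
      for (std::size_t start = 0; start < quotes.size(); start += per_bucket)
      {
          std::size_t end = std::min(quotes.size(), start + per_bucket);
          std::size_t lo = start, hi = start;
          for (std::size_t i = start; i < end; ++i)
          {
              if (quotes[i].price < quotes[lo].price) lo = i;
              if (quotes[i].price > quotes[hi].price) hi = i;
              if (quotes[i].b_bracket_action)
                  reduced.push_back(quotes[i]);      // bracket quotes always survive
          }
          reduced.push_back(quotes[std::min(lo, hi)]);
          if (hi != lo)
              reduced.push_back(quotes[std::max(lo, hi)]);
      }
      return reduced;
  }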

CHART DATA RETRIEVAL

  function addStock(cycle) {
    restAndApply('GET','runs/'+cycle.run+'/live.json?pixel_limit='+$(window).width()*2...
    ---
    void AtHttpsServer::GetRunLive(API_call& call)
      g_p_local->readRunLive(p_user->db_id_, account_id, run_id, symbol, pixel_limit, atc_.bAfterHours(), rh);
      atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);
 
  var analysisChange = function(event) {
    $.getJSON('runs/'+run+'/analysis.json?pixel_limit='+$(window).width()*2+'&aggressiveness='+event.value, function( data ) {
    ---
    AtHttpsServer::GetRunAnalysis(API_call& call)
      atc_.thread_handleAnalysisRequest(
      ---
      AtController::thread_handleAnalysisRequest(BrokerAccount& ba,int64_t run_id,bool b_autoanalyze,double d_aggressiveness,int32_t pixel_limit,string& http_reply)
        g_p_local->readRunHistory()
        thread_analyzeHistory()
        thread_buildRunJSON(rh,apsA,apsA.run_id_);
 
  -- NOT CURRENTLY CALLED --
  function displayHistory(run)
    $.getJSON('runs/'+run+'/history.json?pixel_limit='+$(window).width()*2+'&days=3', function( data ) {
    ---
    AtHttpsServer::GetRunHistory(API_call& call)
      g_p_local->readRunHistory(p_user->db_id_,account_id,run_id,symbol,days,sr.paps_->n_analysis_quotes_per_day_reqd_,pixel_limit,rh);
      atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);

DEBUG LIVE

NOTE that you WILL lose stock quote data during the debugging time, until we set up a second PROD environment.

  • WRITE INITIAL DEBUG CODE in any DEV environment
  • UPDATE DEBUGGER to run with [live] parameter instead of debug ones
  • COPY DATABASE directly from PROD environment to DEV environment: atimport prod
  • STOP PROD environment at_server
  • DEBUG. quickly.  ;-)
  • PUSH any code fix (without debug code) back to PROD env
  • RESTART PROD and see if the fix worked
  • REVERT DEV environment: clean any debug code, redo an atimport and reset the debugger parameters

HTML SHARED HEADER/FOOTER

    -------------------------------------------------------------------------------
    THREE PARTS THAT MUST BE IN EVERY HTML FILE:
    -------------------------------------------------------------------------------
    
      1) all code above <container>, including these replaceables:
          a) logout:      <button type='button' id='logout' class='btn btn-margined btn-xs btn-primary pull-right'>Log out</button>
          b) breadcrumbs: <!--bread--><li><a href="/1">1</a></li><li class="active">2</li><!--crumbs-->
      2) logout button handler
        $( document ).ready(function() {
      3) footer and [Bootstrap core Javascript]
      
    what a maintenance nightmare - but it seems best to do all 10-12 files manually
    -------------------------------------------------------------------------------

HAPROXY and LOAD BALANCING

For the first 1000 paid users, we will NOT do load balancing.

  • Use haproxy Layer 7 (http) load balancing to redirect abettertrader.com requests to a bitpost.com custom port.

For load balancing, there are two database design choices:

  • Each server gets its own quotes and saves all its own data
    • Need to read user id from each request and send each user to a predetermined server (see the sketch below this list)
    • Need multiple Etrade accounts, one for each server, unless we get a deal with Etrade
  • Switch to a distributed database with master-master replication
    • A lot of work
    • Might kill sub-second performance? Might not. We already have delayed-write.
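
The first option implies sticky routing. A trivial sketch of the idea (names invented); simple modulo sharding is shown, though consistent hashing would avoid remapping every user when a server is added:

  #include <cstdint>
  #include <string>
  #include <vector>
  
  // Deterministically map a user id to one of N backend servers, so each
  // user's data always lives on the same server.
  std::string server_for_user(std::int64_t user_id, const std::vector<std::string>& servers)
  {
      return servers[static_cast<std::uint64_t>(user_id) % servers.size()];
  }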


TIMESTAMP STANDARDIZATION

Standardize internal times as int64_t milliseconds since 1970, UTC. That's not ideal, since it doesn't deal with leap seconds, but it makes our time-handling code much faster, so it's worth the tradeoff.

Display times in local time.
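
A minimal sketch of the convention using std::chrono and <ctime> - store int64_t UTC milliseconds everywhere, convert to local time only at the display edge:

  #include <chrono>
  #include <cstdint>
  #include <cstdio>
  #include <ctime>
  
  std::int64_t now_ms_utc()
  {
      using namespace std::chrono;
      return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
  }
  
  void print_local(std::int64_t ms_utc)
  {
      // Convert to whole seconds, then format in the machine's local time zone.
      std::time_t t = static_cast<std::time_t>(ms_utc / 1000);
      char buf[32];
      std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", std::localtime(&t));
      std::printf("%s.%03d\n", buf, static_cast<int>(ms_utc % 1000));
  }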

Timestamp TODO:
  • Chart URL: the time should default to 9:30am US Eastern (market open), which should display in the URL as something like 2019-09-13T13:30:00.000Z
  • Performance page: ?
  • Activity page: ?

OLDER NOTES

WALKING DATABASE FILES

(we are moving to postgres for archiving!)

There are two types of historical requests:

  • a specific date range, usually requested by user; use this:
      getDatabaseNames(startdate, enddate)
  • a specific number of days, usually requested by analysis; loop with this (see the sketch below):
      getPreviousDatabaseName(dt, db_name)
  • there is also this, which skips non-market days - but when walking db files we currently want those days too:
      get_previous_market_day(pt)
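
A sketch of the day-count loop, assuming getPreviousDatabaseName() steps the timestamp back one database file and returns false when history runs out (the signature is assumed from the call above):

  #include <string>
  #include <vector>
  #include <boost/date_time/posix_time/posix_time.hpp>
  
  // Assumed signature, per the call above.
  bool getPreviousDatabaseName(boost::posix_time::ptime& dt, std::string& db_name);
  
  std::vector<std::string> walk_database_files(boost::posix_time::ptime dt, int days)
  {
      std::vector<std::string> names;
      std::string db_name;
      for (int n = 0; n < days; ++n)
      {
          if (!getPreviousDatabaseName(dt, db_name))
              break;                      // ran out of history
          names.push_back(db_name);
      }
      return names;
  }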

Move to mongo soon! :-)

API PSEUDO (no longer of much use)

   APIGetRunLive::handle_call()
       g_p_local->getRunLiveJSON()
           readRunQAB(s_str_db_name...)
           (SAME as readRunLive!!)
   APIGetRunHistory::handle_call()
       g_p_local->getRunHistoryJSON()
           readRunHistory
               readRunQAB

Qt Creator settings (we've since moved on: Qt Creator > CLion > vs code)

  • Make sure you have already run [atbuild] and [atbuild debug].
  • Open CMakeLists.txt as a Qt Creator project.
  • It will force you to do CMake - pick cmake-release folder and let it go.
  • Rename the build config to debug.
  • Clone it to release and change folder to release.
  • Delete make step and replace it with custom build:
./build.sh        <- command
(no args)         <- arguments
%{buildDir}       <- working directory
  • Create run setups:
you have to use a hardcoded path to the BASE working dir (or maybe leave it blank?):

   /home/m/development/thedigitalage/AbetterTrader/server

[x] run in terminal
I recommend using TEST args for both debug and release: localhost 8000 test reanalyze (matches attest)
LIVE args may occasionally be needed for use with [atimport prod]: localhost 8080 live (matches atlive)

MONTHLY MANUAL MAINTENANCE

(This is now available via the admin Summary page.)

Automate as much as possible - but monthly db maintenance is not that bad, and it's safer to do manually when we know the time is right.

       just monthly:
           update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";
           update PrefStr set value = "Leader at 1/31 closing bell wins
December winner: cfjaques Go Cara!" where name="LeaderboardDescription";
           update Accounts set leaderboard_initial_value = total_managed_value;
           update AnalysisData set avg_pct_gain_mtd = 0, sells_count_mtd = 0;
           update StockRuns set avg_pct_gain_mtd = 0, sells_count_mtd = 0;

       AND ANNUAL!  HAPPY NEW YEAR 2018!
           update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";
           update PrefStr set value = "Leader at 1/31 closing bell wins
December winner: cfjaques Go Cara!" where name="LeaderboardDescription";
           update Accounts set leaderboard_initial_value = total_managed_value, year_initial_value = total_managed_value;
           update AnalysisData set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;
           update StockRuns set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;

       if you need to hard-reset all the simulation accounts, do this and restart to fix all CASH:
           update Accounts set initial_managed_value = 100000, total_managed_value = 100000, net_account_value = 100000 where broker_id=2;

Developer environment setup

The setup script is [at setup [nopostgres]]. It can be used for initial setup, and rerun to upgrade components like boost, SWS, SWSS, and postgres.

First time install

  • First, set up YET ANOTHER GODDAMN BOX:
setup_linux.sh [desktop | nodesktop]
  • Clone the good stuff
mh c Get all the goodness

Dependencies

Choose local postgres server or remote

You can choose to install a full postgres server (usually desired on a laptop):

at setup

Or just install postgres client, and point your dev installation at another postgres server installation (typically use positronic if you're on the LAN):

at se nopostgres

To upgrade boost

  • update the boost version in .bashrc
  • [cdl] and remove existing boost install
  • rerun [at se] or [at se nopostgres] as appropriate

libpqxx

We use our own fork of pqxx: github:moodboom.

I keep the fork updated on cast. We keep a repo of it in development/Libraries/c++/libpqxx/source/libpqxx, and a repo of the parent that we forked from, here:

  development/Libraries/c++/libpqxx/source/libpqxx-jvt-parent

It is a straight git-clone of the jvt repo. To rebase on top of the latest parent release:

  cd libpqxx-jvt-parent
  git pull
  git reset --hard tags/7.7.0 # or whatever latest release is
  cd ../libpqxx
  # this should already be done:
    # git remote add jvt-parent ../libpqxx-jvt-parent
    # git checkout -b jvt-parent
    # git fetch --all
    # git branch --set-upstream-to=jvt-parent/master jvt-parent
  git checkout jvt-parent && git pull
  git checkout master && git rebase jvt-parent
  # fix up the merge and commit and push -f!

To force-push all the way back to github:

  # from ~/development/Libraries/c++/libpqxx.git
  git push --set-upstream origin master -f

Simple-Web-Server

I keep my own fork on gitlab. I pull parent fork changes in on cast.

To get a new release going:

cd development/Libraries/c++/Simple-Web-Server
git branch
   eidheim-parent
   master
   release/abt-0.0.3
 * release/abt-0.0.4
git checkout -b release/abt-0.0.5
# make sure development/Libraries/c++/Simple-Web-Server-eidheim has most recent commits pulled 
git checkout eidheim-parent
git pull
git checkout release/abt-0.0.5
git rebase eidheim-parent
git push --all

Something like that, anyway :P

Simple-WebSocket-Server

Similar to SWS.

Clone prod database

You can easily pull a sanitized copy of prod down for local usage. It will use the dev account instead of prod. No reason not to do this OFTEN!

Also, you probably don't need quotes! Those are HUGE. It's fast if you skip them.

ssh positronic
mh-add-postgres-db at_whatevs
at dump noquotes
at clone positronic-at_live-noquotes-2022-05-15-162423 at_whatevs
# set up a launch.json block to use it - probably with "offline" too

Trading