A better Trader

PRIORITIZED OPERATIONAL USE CASES

   ALL WE ASK OF USERS:
   
       -----------------------------------------------------
       BE  INTUITIVE   ABOUT   A   STOCK'S   NEXT  DIRECTION
       -----------------------------------------------------
   
    IF A PICK IS LOOKING GOOD:          MOVE TO TOP!  or even buy
    IF A PICK IS LOOKING BAD:           MOVE TO BOTTOM!  or even deactivate
    IF AN OWNED STOCK IS LOOKING GOOD:  MOVE TO TOP - or even HOLD
    IF AN OWNED STOCK IS LOOKING BAD:   SELL!  or raise the sell trigger
   
    THAT'S THE DAILY USAGE PATTERN.
    We must do everything else!
    All the boring analysis should be done for them, unless they really want to obsess.

MODEL

AtController
  ui_
  memory_model_
  timers

MemoryModel: delayed-write datastore manager; uses dirty flags plus a timed, transactioned saveDirtyObjects() call (see the sketch after this list)
  prefs
  sq_
  brokers
  aps_
  users_
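
A minimal sketch of the delayed-write pattern. Only saveDirtyObjects() comes from these notes; markDirty(), the transaction helpers, and the container shape are assumptions.

<code>
#include <mutex>
#include <vector>

class PersistentObject {
public:
    virtual ~PersistentObject() = default;
    virtual void saveToDb() = 0;          // writes this object's row(s)
    void markDirty() { b_dirty_ = true; } // cheap; called on every mutation
    bool b_dirty_ = false;
};

class MemoryModel {
public:
    // Called from a timer: flush every dirty object inside one transaction,
    // so frequent in-memory updates cost only one periodic database write.
    void saveDirtyObjects() {
        std::lock_guard<std::mutex> lock(mutex_);
        beginTransaction();
        for (PersistentObject* p : objects_) {
            if (p->b_dirty_) {
                p->saveToDb();
                p->b_dirty_ = false;
            }
        }
        commitTransaction();
    }

private:
    void beginTransaction()  { /* e.g., exec "BEGIN TRANSACTION" */ }
    void commitTransaction() { /* e.g., exec "COMMIT" */ }
    std::mutex mutex_;
    std::vector<PersistentObject*> objects_;
};
</code>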

TradingModel
  AppUser
    BrokerAccount: runs_ (sorted by id) runsByRank_ (sorted by rank)
      broker (not worthy of its own layer)
      StockRun: rank, active, owned
        StockPick+AutotradedStock: quote processing
        SPASBracketEvent: stores one bracket-change event; includes triggering quote, bActive, buy/sell {quantity, commission, value}
      StockSnapshot: run, symbol, quote, quantity (for account snapshot history)
Order lifespan (see the sketch after this list)
  Sim and analysis buy: place order, wait for next stock quote, buyWasExecuted()
  Live buy: place order, poll for execution, buyWasExecuted()
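
A sketch of the two fill paths under assumed names; only buyWasExecuted() comes from these notes. Sim/analysis treats the next quote as the fill, while live polls the broker until the order executes.

<code>
#include <chrono>
#include <thread>

enum class OrderStatus { Pending, Executed };

class Order {
public:
    bool b_live_ = false;  // live vs sim/analysis

    // Called right after the order is placed.
    void onPlaced() {
        if (!b_live_)
            return;  // sim/analysis: the next stock quote triggers the fill
        // Live: poll the broker until the order reports as executed.
        while (pollBroker() != OrderStatus::Executed)
            std::this_thread::sleep_for(std::chrono::seconds(1));
        buyWasExecuted();
    }

    // Sim/analysis path: called when the next stock quote arrives.
    void onNextQuote() {
        if (!b_live_)
            buyWasExecuted();
    }

private:
    OrderStatus pollBroker() { return OrderStatus::Executed; }  // stub
    void buyWasExecuted() { /* record the fill, start bracket tracking */ }
};
</code>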

Stock model

       Stock
         StockQuote& quote_;
           typedef std::pair<const string,StockQuoteDetails*> StockQuote;
             StockQuoteDetails
               double d_quote_;
               time_t timestamp_;
               (+spike logic)
         int64_t n_quantity_;
         StockOrder so_;
           int64_t order_id_;
           ORDER_STATUS status_;
           int64_t quote_db_id_;

CI

MASTER SCRIPT: atci

We will have a live site, a constantly running CI site, and multiple dev environments.

RUN LIVE at bitpost.com:

m@bitpost rs at
m@bitpost # if that doesn't work, start a session: screen -S at 
m@bitpost cd ~/development/thedigitalage/AbetterTrader/server-prod
m@bitpost atlive
 ========================================================
    *** LIVE MODE ***
 ========================================================
CTRL-A CTRL-D

RUN CI at bitpost.com:

# Keep this running to ensure that changes are dynamically built as they are committed.
# It should run at a predictable, publicly available URL that can be checked regularly.
# It runs the release build in TEST mode, but with an account assigned to it, so it behaves very much like LIVE.
CTRL-A CTRL-D

RUN DEV anywhere but bitpost:

# Dev has complete control; most common tasks:
#   Code fast with a local CI loop - as soon as a file changes, CI should restart the server in test mode, displaying the server log and refreshing the server page
#       kill server, build, run, refresh browser
#   Turn off the CI loop to debug via IDE
#   Stop prod, pull down the production database, and run LIVE mode in the debugger to diagnose production problems

THREAD LOCKING MODEL

The OLD model did operations asynchronously, sending them through the APIRequestCache. The problem: the website could not give immediate feedback, which made it worthless for interactive use. The new model uses the same locking, just applied as needed, where needed. We just have to choose those points wisely.

  • Lock at the USER LEVEL: as low-level as possible, but as infrequently as possible - not necessarily easy
  • Lock container reads with a shared read lock, which allows multiple readers but no writers
 // Lock user for reading
 boost::shared_lock<boost::shared_mutex> lock(p_user->rw_mutex_);
  • Lock container writes with an exclusive write lock
 // Lock user for writing
 boost::lock_guard<boost::shared_mutex> uniqueLock(p_user->rw_mutex_);
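
A self-contained sketch of the pattern, assuming a pared-down AppUser with the rw_mutex_ member shown above. Many readers can hold the shared lock at once; a writer excludes them all.

<code>
#include <boost/thread/lock_guard.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <string>
#include <vector>

struct AppUser {
    mutable boost::shared_mutex rw_mutex_;  // per-user lock, as above
    std::vector<std::string> runs_;
};

// Readers share the lock: many threads may count runs concurrently.
size_t countRuns(const AppUser& user) {
    boost::shared_lock<boost::shared_mutex> lock(user.rw_mutex_);
    return user.runs_.size();
}

// A writer takes the exclusive lock, blocking all readers and writers.
void addRun(AppUser& user, const std::string& run) {
    boost::lock_guard<boost::shared_mutex> uniqueLock(user.rw_mutex_);
    user.runs_.push_back(run);
}
</code>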

DAILY MAINTENANCE

  • Data is segmented into files, one per day
  • To determine end-of-day, timestamps are checked as they come in with quotes (we had no better way to tell)
  • At end of day, perform maintenance:
    • Perform maintenance only once, checking for an existing filename with the date of "today" (see the sketch below)
    • Purge nearly all quotes and bracket events, while ensuring the new database remains self-contained:
      • Preserve the last-available quote for all stocks
      • Create a fresh starting snapshot of all accounts using the preserved quotes
    • Postpone the next quote retrieval until the market is ready to open again

Pseudo:

 EtradeInterface::processQuotes()
   if (patc_->bTimestampIsOutsideMarketHours(pt))
     patc_->checkCurrentTimeForOutsideMarketHours()
     ---
     checkCurrentTimeForOutsideMarketHours()
       // Do not rely on quote timestamps.
       ptime ptnow = second_clock::local_time();
 
       if (bMarketOpenOrAboutTo(ptnow))
         return false;
 
       // If we are just now switching to outside-hours, immediately take action.
       set_quotes_timer_to_next_market_open();   // resets the quotes timer to a loooong pause
       if (bTimestampIsAfterHours(ptnow))        // must be AFTER, not before
         if (g_p_local->performAfterHoursMaintenance(ptnow))
           // Always start each new day with a pre-market account snapshot.
           pa->addSnapshotToMemory(snaptime);
         runAnalysis();
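
A sketch of the perform-once guard, assuming a hypothetical per-day filename scheme (the notes only say we check for an existing file dated "today"):

<code>
#include <boost/filesystem.hpp>
#include <string>

// Data is segmented into one database file per day, so if today's file
// already exists, maintenance has already been performed.
bool bMaintenanceAlreadyPerformed(const std::string& str_today /* "YYYY-MM-DD" */) {
    const boost::filesystem::path dbfile("trader_" + str_today + ".db");  // assumed naming
    return boost::filesystem::exists(dbfile);
}
</code>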

QAB charts

CHART TIMEFRAME DESIGN

 use cases:
   user wants to see performance across a variety of time frames <- PERFORMANCE PAGE only!
   user wants to see historical brackets for older days <- PERFORMANCE PAGE only!  lower priority!
   user wants to perform immediate actions on realtime chart
   user wants to do autoanalysis across a range and then manually tweak it
 
 requirements:
   round 1: we can satisfy everything with TODAY ONLY (show today's archive if after hours)
   round 2: add a separate per-day performance chart
   round 3: add a date picker to the chart to let the user select an older day to show
 
 node reduction
   data DISPLAY only needs to show action points and highs/lows
       aggressively node-reduce to fit the requested screen size! (see the sketch below)
       given: the screen width in pixels
       provide: all bracket quotes plus the lowest+highest quotes in each 2-pixel range (a minimum - adjustable to more aggressive clipping if desired)
   internal data ANALYSIS should use all points
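
A minimal sketch of that reduction, with assumed simplified types (Quote, b_bracket_event_): keep every bracket quote plus the low and high of each 2-pixel bucket.

<code>
#include <algorithm>
#include <cstddef>
#include <vector>

struct Quote {
    double d_quote_;
    bool b_bracket_event_;  // assumption: marks quotes tied to bracket events
};

// Keep all bracket-event quotes, plus the lowest and highest quote within
// each 2-pixel bucket; everything else is dropped for display purposes.
std::vector<Quote> nodeReduce(const std::vector<Quote>& quotes, int pixel_width) {
    if (quotes.empty() || pixel_width < 2) return quotes;
    const size_t buckets = static_cast<size_t>(pixel_width) / 2;
    const size_t per_bucket = std::max<size_t>(1, quotes.size() / buckets);
    std::vector<Quote> reduced;
    for (size_t start = 0; start < quotes.size(); start += per_bucket) {
        const size_t end = std::min(quotes.size(), start + per_bucket);
        size_t lo = start, hi = start;
        for (size_t i = start; i < end; ++i) {
            if (quotes[i].b_bracket_event_) reduced.push_back(quotes[i]);  // always keep
            if (quotes[i].d_quote_ < quotes[lo].d_quote_) lo = i;
            if (quotes[i].d_quote_ > quotes[hi].d_quote_) hi = i;
        }
        reduced.push_back(quotes[std::min(lo, hi)]);  // preserve time order
        if (hi != lo) reduced.push_back(quotes[std::max(lo, hi)]);
    }
    return reduced;
}
</code>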

CHART DATA RETRIEVAL

<code>
  function addStock(cycle) {
    restAndApply('GET','runs/'+cycle.run+'/live.json?pixel_limit='+$(window).width()*2...
    ---
    void AtHttpsServer::GetRunLive(API_call& call)
      g_p_local->readRunLive(p_user->db_id_, account_id, run_id, symbol, pixel_limit, atc_.bAfterHours(), rh);
      atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);

  var analysisChange = function(event) {
    $.getJSON('runs/'+run+'/analysis.json?pixel_limit='+$(window).width()*2+'&aggressiveness='+event.value, function( data ) {
    ---
    AtHttpsServer::GetRunAnalysis(API_call& call)
      atc_.thread_handleAnalysisRequest(
      ---
      AtController::thread_handleAnalysisRequest(BrokerAccount& ba,int64_t run_id,bool b_autoanalyze,double d_aggressiveness,int32_t pixel_limit,string& http_reply)
        g_p_local->readRunHistory()
        thread_analyzeHistory()
        thread_buildRunJSON(rh,apsA,apsA.run_id_);

  -- NOT CURRENTLY CALLED --
  function displayHistory(run)
    $.getJSON('runs/'+run+'/history.json?pixel_limit='+$(window).width()*2+'&days=3', function( data ) {
    ---
    AtHttpsServer::GetRunHistory(API_call& call)
      g_p_local->readRunHistory(p_user->db_id_,account_id,run_id,symbol,days,sr.paps_->n_analysis_quotes_per_day_reqd_,pixel_limit,rh);
      atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);
</code>

ANALYZE PSEUDO (REFACTOR THREE)

  • runs at end of day via AtController::runAnalysis()
  • runs on demand at a specific aggressiveness via AtHttpsServer::GetRunAnalysis()
  • runs on demand to auto-analyze via AtHttpsServer::GetRunAutoAnalysis()

Function hierarchy:

     void AtController::runAnalysis()      runs at end of day to analyze all targeted picks
                                           can also be kicked off in test via command line param
                                           current status: does nothing, needs to use refactor 3 functions
                                           
     thread_analyzeOneAPS()                not called anywhere!
     
     GetRunAutoAnalysis()                  auto = true, aggressiveness = -1.0
     GetRunAnalysis()                      auto = false, aggressiveness = a specific point (0-100)
       both call:
       thread_handleAnalysisRequest()      (an aggressiveness of -1.0 means auto-analyze)
         thread_analyzeHistory()
           if (bAutoanalyze)
               analyzeAcrossFullAggressivenessRange(symbol,*pba,rh);   // sweep sketched below
               analyzeAcrossMonteCarloRange(symbol,*pba,rh);
               (and run ba.analyze() a bunch)
           else
               generateAPSFromAggressiveness(apsAnalysis);
               pba->analyze(symbol,rh);
               (just run it once)
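
A sketch of what the bAutoanalyze branch implies, with assumed signatures (the notes name the range functions but not their bodies): sweep the 0-100 aggressiveness range, run the analysis at each step, and keep the best setting.

<code>
#include <functional>

struct AnalysisResult {
    double d_gain = -1e9;            // best gain seen so far
    double d_aggressiveness = -1.0;  // setting that produced it
};

// Sweep the 0-100 aggressiveness range, running the analysis at each step
// and keeping the best setting. 'analyzeAt' stands in for a ba.analyze() run.
AnalysisResult analyzeAcrossFullAggressivenessRange(
    const std::function<double(double)>& analyzeAt, double step = 5.0)
{
    AnalysisResult best;
    for (double a = 0.0; a <= 100.0; a += step) {
        const double gain = analyzeAt(a);
        if (gain > best.d_gain) {
            best.d_gain = gain;
            best.d_aggressiveness = a;
        }
    }
    return best;  // a Monte Carlo pass could then refine around this point
}
</code>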

TRADE PSEUDO

Load from SQL tables into Major Objects (std::unordered_sets of PersistentIDObjects); see the sketch below.
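
A minimal sketch of that container layout, with assumed member names (db_id_ follows the convention used elsewhere in these notes):

<code>
#include <cstdint>
#include <functional>
#include <unordered_set>

// Base class for objects loaded from SQL rows; db_id_ is the primary key.
class PersistentIDObject {
public:
    explicit PersistentIDObject(int64_t db_id) : db_id_(db_id) {}
    virtual ~PersistentIDObject() = default;
    int64_t db_id_;
    bool b_dirty_ = false;  // delayed-write dirty flag (see MemoryModel)
};

// Hash and equality on the database id, so lookups by id are O(1).
struct IdHash {
    size_t operator()(const PersistentIDObject* p) const {
        return std::hash<int64_t>()(p->db_id_);
    }
};
struct IdEqual {
    bool operator()(const PersistentIDObject* a, const PersistentIDObject* b) const {
        return a->db_id_ == b->db_id_;
    }
};

// A Major Object container: loaded once from SQL, then kept in memory.
using MajorObjects = std::unordered_set<PersistentIDObject*, IdHash, IdEqual>;
</code>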

API PSEUDO

   APIGetRunLive::handle_call()
       g_p_local->getRunLiveJSON()
           readRunQAB(s_str_db_name...)
           (SAME as readRunLive!!)
   APIGetRunHistory::handle_call()
       g_p_local->getRunHistoryJSON()
           readRunHistory
               readRunQAB

DEBUG LIVE

NOTE that you WILL lose stock quote data during the debugging time, until we set up a second PROD environment.

  • WRITE INITIAL DEBUG CODE in any DEV environment
  • UPDATE DEBUGGER to run with [live] parameter instead of debug ones
  • COPY DATABASE directly from PROD environment to DEV environment: atimport prod
  • STOP PROD environment at_server
  • DEBUG. quickly.  ;-)
  • PUSH any code fix (without debug code) back to PROD env
  • RESTART PROD and see if the fix worked
  • REVERT DEV environment: clean any debug code, redo an atimport and reset the debugger parameters

Qt Creator settings

  • Make sure you have already run [atbuild] and [atbuild debug].
  • Open CMakeLists.txt as a Qt Creator project.
  • It will force you to run CMake - pick the cmake-release folder and let it go.
  • Rename the build config to debug.
  • Clone it to release and change the folder to release.
  • Delete the make step and replace it with a custom build step:
      Command:           ./build.sh
      Arguments:         (no args)
      Working directory: %{buildDir}
  • Create run setups:
you have to use a hardcoded path to the BASE working dir (or maybe leave it blank?):

   /home/m/development/thedigitalage/AbetterTrader/server

[x] run in terminal
I recommend using TEST args for both debug and release: localhost 8000 test reanalyze (matches attest)
LIVE args may occasionally be needed for use with [atimport prod]: localhost 8080 live (matches atlive)

HTML SHARED HEADER/FOOTER

    -------------------------------------------------------------------------------
    THREE PARTS THAT MUST BE IN EVERY HTML FILE:
    -------------------------------------------------------------------------------
    
      1) all code above <container>, including these replaceables:
          a) logout:      <button type='button' id='logout' class='btn btn-margined btn-xs btn-primary pull-right'>Log out</button>
          b) breadcrumbs: <!--bread--><li><a href="/1">1</a></li><li class="active">2</li><!--crumbs-->
      2) logout button handler
        $( document ).ready(function() {
      3) footer and [Bootstrap core Javascript]
      
    what a maintenance nightmare - but it seems best to do all 10-12 files manually
    -------------------------------------------------------------------------------

HAPROXY and LOAD BALANCING

  • For the first 1000 paid users, we will NOT do load balancing.
    • Use haproxy Layer 7 (http) load balancing to redirect (bitpost.com IP) + port 8080 requests to abettertrader.com (all https).

Two database design choices:

  • Each server gets its own quotes and saves all its own data
    • Need to read user id from each request and send each user to a predetermined server
    • Need multiple Etrade accounts, one for each server, unless we get a deal with Etrade
  • Switch to a distributed database with master-master replication
    • A lot of work
    • Might kill sub-second performance? Might not. We already have delayed-write.