Development reference: Difference between revisions

From Bitpost wiki
Revision as of 14:39, 12 March 2016

Patterns
Major Objects stored in a Memory Model
  • Major Objects
    • Use Major Objects for fast, thread-safe, in-memory handling of large amounts of data that must also be persisted
    • We must support complex objects with simple keys, crud, and fast lookup by multiple keys.
    • Our most useful containers are vector, set (key in object) and map (<key,value> pair). Set can give us almost every positive feature, when used to store the PersistentIDObject class.
    • Use an unordered_set of const pointers to objects derived from PersistentIDObject
    • The default container should index by db_id primary key
    • Always use the db_id for foreign keys
    • Other containers can be created with alternate keys using object members; just define new hash functions.
  • PersistentIDObject
    • Add a dirty flag to all objects, set to true on any change that must be persisted
    • Use an internal in-memory counter to generate the next db_id for a newly created object
    • This means that when creating new objects, there is NO NEED to access db, VERY IMPORTANT!
    • Use delayed-write tactics to write all dirty objects on idle time
  • Memory Model
    • Use a Datastore manager (aka "MemoryModel") to hold sets
    • It can look up objects by any key, and strip away const to return a mutable object. NOTE that the user must not damage the key values!
    • Derive a class from the memory model for persistence; it can use any persistence method (local, remote, sql, nosql, etc.).
    • Make sure the base MemoryModel class is concrete (not abstract), thread-safe and self-contained; this makes parallel calculations trivial, helps scalability, etc.
Continuous Integration
  • Commit to Continuous Integration with constant automated testing
  • apps should have live and test modes, made as similar as possible
  • live mode: no tests, lean and mean; usually only one runs at a time in a PROD env (esp for web-driven or service-oriented apps); but you can also debug it in a DEV env
  • test mode: always run automated tests on startup, but then function exactly like live mode as much as possible
  • test mode: write smart tests - assert-driven development means we don't need blanket unit-testing on everything - spend time on big domain-specific tests that MATTER
  • test mode: multiple simultaneous instances should be allowed, so CI can run on dev box during debugging
  • live mode and test mode should be able to run simultaneously, so prod box can always be quickly tested
  • dev box needs ci loop that automatically reruns release in test mode on code changes
  • NOTE that during initial design/development/refactor of basic architecture, CI has less value and may actually get in the way; better to build a functioning app first.
c++11 Model View Controller skeleton
Three objects contain most of the functionality. In addition, utilities.* provides cross-cutting features like global logging.

CONTROLLER header

#include "HTDJInterface.h"
#include "HTDJLocalModel.h"
#include "HTDJRemoteModel.h"

class HTDJController
{
public:
    HTDJController(
        HTDJLocalModel& local,
        HTDJRemoteModel& remote,
        HTDJInterface& hinterface );
};
VIEW header
class HTDJController;
class HTDJInterface
{
public:
    HTDJInterface( HTDJController* p_controller );
};
MODEL header
class HTDJLocalModel;

// Note that we want GLOBAL ACCESS to a GENERIC local model.
extern HTDJLocalModel* g_p_local;
class HTDJLocalModel
{
public:
    HTDJLocalModel();
};
Web design mantra
  1. DESIGN INPUT TO THE INPUT DEVICE
    1. respect the input device
    2. detect it and use a reasonable default guess
    3. allow manual override
    4. [mouse/pen]...[finger]
    5. sm-md-lg ... sm-md-lg
  2. DESIGN VISUALS TO THE SCREEN
    1. high-res = SHOW LOTS OF DETAIL
    2. responsive, zoomable
git central shared repository
Use bare repos for any central shared repositories. Use the [####.git] suffix on bare repo names.

Bare repositories are designed to be shared. Maintenance on the central server is easier because there are no checked-out files to manage permissions for, and no working tree in constant flux. Plus, you can always have a second repo on the central server where you check out a specific branch (e.g. to serve up with apache). If you want a dynamically updated central repo, clone the ###.git repo to ###, and add a post-receive hook (see bitpost quick-http.git for a good example).

To configure the repo as shared:

git config core.sharedRepository true

To set it on a new git repo during initial setup, make sure devs are in the same group, and use:

git init --shared=group
C++
c++ Create a portable C++ project in linux
For anything serious, it's best to clone an existing project's skeleton.
  • main.cpp
  • MVC code
  • nix/copy_from folder
  • make sure .bashrc is configured for boost
  • nix$ bootstrap_autotools_project.sh force release debug
  • set up eclipse according to screenshots in docs below
c++ Create a portable C++ project in Visual Studio

If you don't have existing code, it's probably best to create a project_name.cpp file with your main() function.

int main( int argc, char * argv[] )
{ 
    return 0;
}

Then in Visual Studio...

File->New->Project from existing code
C++
(then use mostly defaults on this page, once you provide file location and project name)
Project file location:  <base>\project_name
Project name: project_name
[x] Add files from these folders
   Add subs  
   [x]        <base>\project_name
NEXT
Use Visual Studio
  Console application project
  No ATL, MFC, CLR
NEXT
NEXT
FINISH

Then add the Reusable and boost include and lib paths. For example, if you built boost according to the notes below:

Project->Properties->All Configurations->Configuration Properties->VC++ Directories->Include Directories->
   $(VC_IncludePath);$(WindowsSDK_IncludePath);..\..\..\Reusable\c++
INCLUDE: C:\Software Development\boost_1_53_0
LIB: C:\Software Development\boost_1_53_0\stage\lib
boost release and debug build for linux
   # download latest boost, eg: boost_1_59_0 
   m@wallee:~/development$ 7z x boost_1_59_0.7z
   cd boost_1_##_0
   build_boost_release_and_debug.sh
   # then patch .bashrc as instructed
   # eclipse and server/nix/bootstrap.sh are customized to match

To upgrade a project:

   cd nix
   make distclean  # removes nasty .deps folders that link to old boost if you let them
   make clean      # removes .o files etc
   cd ../build-Release && make distclean && make clean
   cd ../build-Debug && make distclean && make clean
   cd ..
   ./bootstrap force release
   ./bootstrap force debug
boost release and debug build for Windows
Open a VS2015 x64 Native Tools Command Prompt.

EITHER: for new installs, run bootstrap.bat first (it will build b2); OR: for reruns, remove the boost dirs [bin.v2, stage]. Then build 64-bit:

cd "....\boost_1_59_0"
b2 toolset=msvc variant=release,debug link=static address-model=64
rem trying to avoid excessive options, assuming I don't need these: threading=multi
(old stuff)
      --toolset=msvc-14.0 address-model=64 --build-type=complete --stagedir=windows_lib\x64 stage
      Now open VS2013 x86 Native Tools Command Prompt and build 32-bit:
      cd "C:\Michael's Data\development\sixth_column\boost_1_55_0"
      bjam --toolset=msvc-12.0 address-model=32 --build-type=complete --stagedir=windows_lib\x86 stage
c++11 containers
sorted_vector: use when doing lots of unsorted insertions and maintaining a constant sort would be expensive; good for a big pile of things that only occasionally needs a sorted lookup
map: sorted binary search tree; always sorted by key; you can walk through in sorted order (choose unordered if you don't need that!)
multimap: same as map but allows duplicate keys (not as common)
unordered_map: hash map; NOT sorted; keys hash into buckets, with collisions sharing a bucket; no defined order when walking through
unordered_multimap: same as unordered_map but allows duplicate keys; dupes are obviously in the same bucket, and you can walk just the dupes if needed
set, multiset, unordered_set, unordered_multiset: just like the map variants above, except the key is embedded in the object, which is nice for encapsulation.

Items must be const (!) since they are the key - sounds bad, but this is mitigated by the mutable keyword.
You can use mutable on the variables that are not part of the key to remove the const.
This changes the constness of the object from binary (completely const) to logical (constness is defined by the developer).
So... set is a good way to achieve both encapsulation and logical const - make const work for you, not against!  :-)

set (etc.) of pointers sets of pointers are the pinnacle of object stores

The entire object can be dereferenced and accessed then without const issues.
A pointer functor can be provided that does a sort by dereferencing the pointer to the object.
Two requirements: you must make sure yourself that you do not change the key values - you can mark them const, provided in constructor;
you must create sort/equal/hash functors that dereference the pointers to use object contents
(the default will be by pointer address).
The arguably biggest advantage, as a result, is that you can create multiple sets
to reference the same group of objects with different sort functors to create multiple indices.
You just have to manage the keys carefully, so that they don't change (which would invalidate the sorting).
The primary container can manage object allocation, e.g. by holding the heap allocations in unique_ptr.

   map vs key redux
               
       use a key in the set, derive a class from it with the contents
           + small key
           + encapsulation
           - requires mutable to solve the const problem
       use a key in the set, key includes a mutable object
           + encapsulation
           - weird bc everything uses a const object but we have const functions like save() that change the mutable subobject
       use a map
           + small key
           - no encapsulation, have to deal with a pair instead of an object
               can we just put a ref to key in the value?  sure why not - err, bc we don't have access to it
           + solves const problem bc value is totally mutable by design
           + we can have multiple keys - and the value can have multiple refs to them
           + simpler equal and hash functions
       map:
           create an object with internal key(s)
           create map index(es) with duplicate key values outside the object - dupe data is the downside
       use set(s) with one static key for find(): 
           create an object with internal key(s)
           create set index(es) with specific hash/equals functor(s)
           when finding, use one static key object (even across indexes!) so there isn't a big construction each time; just set the necessary key values
               that proves difficult when dealing with member vars that are references
               but to solve it, just set up a structure of dummy static key objects that use each other; then provide a function to setKey(Object& keyref) { keyref_ = keyref; }
               nope, can't reassign refs
               the solution: use pointers not references
               yes that's right
               just do it
               apparently there was a reason i was anti-reference for all those years
               two reasons to use pointers:
                   dynamically allocated
                   reassignment required
               there ya go.  simple.  get it done. 
           when accessing find results from the set, use a const_cast on the object!
           WARNING: a separate base class with the key sounds good... but fails when you have more than one index on the object.  just use a static key object for them all!
c++11 example for large groups of objects with frequent crud AND search
Best solution is an unordered set of pointers:
typedef boost::unordered_set<MajorObject*> MajorObjects;
c++11 example for large groups of objects with infrequent crud and frequent search
Best solution is a vector of pointers sorted on demand (sorted_vector):
TODO
c++11 example to associate two complex objects (one the map key, one the map value)
Use unordered_map with a custom object as key. You must add hash and equals functions. Boost makes it easy:
static bool operator==(MyKeyObject const& m1, MyKeyObject const& m2)
{
    return 
            m1.id_0 == m2.id_0
        &&  m1.id_1 == m2.id_1;
}
static std::size_t hash_value(MyKeyObject const& mko)
{
    std::size_t seed = 0;
    boost::hash_combine(seed, mko.id_0);
    boost::hash_combine(seed, mko.id_1);
    return seed;
}
typedef boost::unordered_map<MyKeyObject, MyValueObject*> MyMap;

Note that you can extend this to use a pointer to a key object, whoop.

c++11 example for multiple unordered_set indexes into one group of objects
Objects will be dynamically created. One set should include them all and be responsible for memory allocation cleanup:
TODO
c++11 example for set with specific sorting
Use set with a specific sort functor. You can create as many of these indexes as you want!
struct myobject_sort_by_id_functor
{
    bool operator()(const MyObject* l, const MyObject* r) const
    {
        // the id is the key
        return l->id_ < r->id_;
    }
};
typedef set<MyObject*,myobject_sort_by_id_functor> MyObjectsById;
c++11 loop through vector to erase some items
Note that for node-based containers (map, set, list), erasing one element doesn't invalidate other iterators, so you can just erase() as needed...

For vectors, erase() invalidates the iterator, but since C++11 it returns the next valid one - use that, and watch for proper ++ pre/postfix!

for (it = numbers.begin(); it != numbers.end(); )  // NOTE we increment below, only if we don't erase
{
    if (it->no_good())
    {
        it = numbers.erase(it);  // NOTE: erase() returns the next valid iterator
    }
    else
    {
        ++it;
    }
}

I thought I had always looped backwards to do this; that works too, but I don't see it used in my code, so I'll avoid it.  :-)

c++11 range based for loop, jacked with boost index if needed
No iterator usage at all. Nice at times, not enough at others. Make SURE to always use a reference or you will be working on a COPY. Make it const if you aren't changing the object.
for (auto& mc : my_container)
    mc.setString("default");
for (const auto& cmc : my_container)
    cout << cmc.asString();

boost index can give you the index if you need it, sweet:

#include <boost/range/adaptor/indexed.hpp>
...
for (const auto &element: boost::adaptors::index(mah_container))
    cout << element.value() << element.index();
c++11 for loop using lambda
This C++11 for loop is clean and elegant and a perfect way to check if your compiler is ready for c++11:
vector<int> v;
for_each( v.begin(), v.end(), [] (int val)
{
   cout << val;
} );

This uses a lambda function; we should switch from iterators and functors to lambdas - but not quite yet, since we're writing cross-platform code. Do not touch this until we can be sure that all platforms provide compatible C++11 handling.

c++11 integer types
I really like the "fast" C++11 types, that give best performance for a guaranteed minimum bit width.

Use them when you know a variable will not exceed the maximum value of that bit width, but does not have to be a precise bit width in memory or elsewhere.

Pick specific-width fields whenever data is shared with other processes and components and you want a guarantee of its bit width.

And for pointer values and array indices, use the types defined for those specific situations.

FAST types:

   int_fast8_t
   int_fast16_t                fastest signed integer type with width of
   int_fast32_t                at least 8, 16, 32 and 64 bits respectively
   int_fast64_t
   uint_fast8_t
   uint_fast16_t               fastest unsigned integer type with width of
   uint_fast32_t               at least 8, 16, 32 and 64 bits respectively
   uint_fast64_t

SMALL types:

   int_least8_t
   int_least16_t               smallest signed integer type with width of
   int_least32_t               at least 8, 16, 32 and 64 bits respectively
   int_least64_t
   uint_least8_t
   uint_least16_t              smallest unsigned integer type with width of
   uint_least32_t              at least 8, 16, 32 and 64 bits respectively
   uint_least64_t

EXACT types:

   int8_t                      signed integer type with width of
   int16_t                     exactly 8, 16, 32 and 64 bits respectively
   int32_t                     with no padding bits and using 2's complement for negative values
   int64_t                     (provided only if the implementation directly supports the type)
   uint8_t                     unsigned integer type with width of
   uint16_t                    exactly 8, 16, 32 and 64 bits respectively
   uint32_t                    (provided only if the implementation directly supports the type)
   uint64_t

SPECIFIC-USE types:

   intptr_t                    integer type capable of holding a pointer
   uintptr_t                   unsigned integer type capable of holding a pointer 
   size_t                      unsigned integer type capable of holding an array index (typically the same size as uintptr_t)
C++11 scoped enumeration
C++11 has scoped enumeration, which lets you specify the SPECIFIC UNDERLYING TYPE for the enum. Perfect, let's use uint_fast32_t. Note that the colon goes before the underlying type:
enum class STRING_PREF_INDEX : uint_fast32_t { ... };

Unfortunately, it's easy to get the syntax wrong (the first attempt put the type after the enum name), and gcc then fails with scary output like:

warning: elaborated-type-specifier for a scoped enum must not use the ‘class’ keyword

Old skool is still cool:

typedef enum
{
    // assert( SP_COUNT == 2 );
    SP_FIRST = 0                ,
    SP_SOME_PREF = SP_FIRST     ,
    SP_ANOTHA                   ,

    SP_COUNT
} STRING_PREF_INDEX;
c++ in-memory storage of "major" objects
   OBSERVATION ONE

   Consider An Important Qt Design: QObjects cannot normally be copied
       their copy constructors and assignment operators are private
       why?  A Qt Object...
           might have a unique QObject::objectName(). If we copy a Qt Object, what name should we give the copy?
           has a location in an object hierarchy. If we copy a Qt Object, where should the copy be located?
           can be connected to other Qt Objects to emit signals to them or to receive signals emitted by them. If we copy a Qt Object, how should we transfer these connections to the copy?
           can have new properties added to it at runtime that are not declared in the C++ class. If we copy a Qt Object, should the copy include the properties that were added to the original?
   in other words, a QObject is a pretty serious object that has the ability to be tied to other objects and resources in ways that make copying dangerous
   isn't this true of all serious objects?  pretty much
   OBSERVATION TWO

   if you have a vector of objects, you often want to track them individually outside the vector
   if you use a vector of pointers, you can move the object around much more cheaply, and not worry about costly large vector reallocations
   a vector of objects (not pointers) only makes sense if the number of objects is initially known and does not change over time
   OBSERVATION THREE

   STL vectors can store your pointers, iterate thru them, etc.
   for a vector of any substantial size, you want to keep objects sorted so you can find them quickly
   that's what my sorted_vector class is for; it simply bolts vector together with sort calls and a b_sorted status
   following STL practices, to get sorting, you have to provide operator< for whatever is in your vector
   BUT... you cannot write operator<(const MyObjectPtr* left, const MyObjectPtr* right), because operator overloads require at least one parameter of class (or enum) type - raw pointers don't qualify
   BUT... you can provide a FUNCTOR to do the job, then provide it when sorting/searching
   a functor is basically a structure with a bool operator()(const MyObjectPtr* left, const MyObjectPtr* right)
   OBSERVATION FOUR

   unordered_set works even better when combining frequent CRUD with frequent lookups
   SUMMARY
   Dealing with tons of objects is par for the course in any significant app.
   Finding a needle in the haystack of those objects is also standard fare.
   Having multiple indices into those objects is also essential.
   Using unordered_set with object pointers is very powerful.
c++ stl reverse iterator skeleton
From SGI...
std::vector<int>::reverse_iterator rfirst(V.end());
std::vector<int>::reverse_iterator rlast(V.begin());

while (rfirst != rlast) 
{
    cout << *rfirst << endl;
    ...
    rfirst++;
}
c++ stl reading a binary file into a string
   std::ifstream in("my.zip",std::ios::binary);
   if (!in)
   {
      std::cout << "problem with file open" << std::endl;
      return 0;
   }
   in.seekg(0,std::ios::end);
   unsigned long length = in.tellg();
   in.seekg(0,std::ios::beg);
 
   string str(length,0);
   std::copy( 
       std::istreambuf_iterator< char >(in) ,
       std::istreambuf_iterator< char >() ,
       str.begin() 
   );

For more, see c++ stl reading a binary file

C/C++ best-in-class tool selection
I need easy setup of debug-level tool support for portable C++11 code, and I need to decide on tools and stick with them to be efficient.
  • Compiler selection
    • linux and mac: gcc
    • windows: Visual Studio
  • IDE selection
    • linux and mac: eclipse
    • windows: eclipse OR Visual Studio OR Qt Creator
  • Debugger selection
    • linux and mac: eclipse using gdb OR ddd
    • windows: eclipse OR Visual Studio OR Qt Creator
c/c++ gdb debugging
(gdb) help break
Set breakpoint at specified line or function.
Argument may be line number, function name, or "*" and an address.
If line number is specified, break at start of code for that line.
If function is specified, break at start of code for that function.
If an address is specified, break at that exact address.
With no arg, uses current execution address of selected stack frame.
This is useful for breaking on return to a stack frame.

Multiple breakpoints at one place are permitted, and useful if conditional.    

Do "help breakpoints" for info on other commands dealing with breakpoints.
ddd gives you a front end; I need to use it more and compare it to the other options.
C - Create a portable command line C project in Visual Studio
   Visual Studio: File -> New -> project
   Visual C++ -> Win32 -> Win32 Console Application
   name: oms_with_emap
   next -> click OFF precompiled header checkbox (even tho it didn't seem to respect it)
   you'll get a _tmain(..., TCHAR*...)
   change it to main(..., char*...)
   change the project to explicitly say "Not using precompiled header"
   remove the f'in stdafx.h
   recompile!  should be clean
   vs will recognize C files and compile accordingly
gcc install multiple versions in ubuntu (4 and 5 in wily, eg)
My code will not compile with gcc 5, the version provided with Ubuntu wily.

It gives warnings like this:

/home/m/development/boost_1_59_0/boost/smart_ptr/shared_ptr.hpp:547:34: warning: ‘template<class> class std::auto_ptr’ is deprecated [-Wdeprecated-declarations]

and outright errors like this:

depbase=`echo AtServer.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
g++ -DPACKAGE_NAME=\"at_server\" -DPACKAGE_TARNAME=\"at_server\" -DPACKAGE_VERSION=\"1.0\" -DPACKAGE_STRING=\"at_server\ 1.0\" -DPACKAGE_BUGREPORT=\"m@abettersoftware.com\" -DPACKAGE_URL=\"\" -DPACKAGE=\"at_server\" -DVERSION=\"1.0\" -I. -I../../src  -I/home/m/development/Reusable/c++ -I/home/m/development/Reusable/c++/sqlite -std=c++11 -I/home/m/development/boost_1_59_0  -ggdb3 -O0 -std=c++11 -MT AtServer.o -MD -MP -MF $depbase.Tpo -c -o AtServer.o ../../src/AtServer.cpp &&\
mv -f $depbase.Tpo $depbase.Po
In file included from /usr/include/c++/5/bits/stl_algo.h:60:0,
                from /usr/include/c++/5/algorithm:62,
                from /usr/include/c++/5/ext/slist:47,
                from /home/m/development/boost_1_59_0/boost/algorithm/string/std/slist_traits.hpp:16,
                from /home/m/development/boost_1_59_0/boost/algorithm/string/std_containers_traits.hpp:23,
                from /home/m/development/boost_1_59_0/boost/algorithm/string.hpp:18,
                from /home/m/development/Reusable/c++/utilities.hpp:4,
                from ../../src/MemoryModel.hpp:11,
                from ../../src/SqliteLocalModel.hpp:13,
                from ../../src/AtServer.cpp:70:
/usr/include/c++/5/bits/algorithmfwd.h:573:13: error: initializer provided for function
    noexcept(__and_<is_nothrow_move_constructible<_Tp>,
            ^
/usr/include/c++/5/bits/algorithmfwd.h:582:13: error: initializer provided for function
    noexcept(noexcept(swap(*__a, *__b)))

You can set up the update-alternatives tool to switch out the symlinks:

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 20 --slave /usr/bin/g++ g++ /usr/bin/g++-5

BUT switching out the compiler used by the SYSTEM seems BAD to me. Instead, we should specify the compiler required by the PROJECT. This is supposed to do it, but it still uses /usr/include/c++/5, which seems wrong, and gives me errors:

CC=gcc-4.9 CXX=g++-4.9   # for C++ sources, autotools uses CXX, not CC
JavaScript
Register with npm
A one-time registration is required on a new machine if you want to publish from it:
npm adduser
Username: moodboom
Password: (see private)
Email: (this IS public) moodboom@gmail.com
Publish a node module
npm install -g # keep doing this until you are happy with local install
# update version in package.json
git commit -a -m "1.0.5"
git tag 1.0.5
git push && git push --tags  # NOTE: bitpost has a git hook to push changes all the way up to github
npm publish
Update a node module's dependencies
# make sure dependency in package.json has a carat at the beginning of its version (^x means "at least" version x)
# make sure the dependency has a new version available - completely publish it first if it is your own
# then you can simply reinstall from within the module folder to get all dependencies upgraded
npm install -g
Develop several node modules at once
Convert dependencies to use local packages instead of published versions, eg:
cd ~/development/mah-haus
npm install -S /home/m/development/thedigitalage/rad-scripts

Then reinstall everything (local dependent modules, then parent modules - a pita; consider npm link if doing longer-term dev)

npm install -g

Then convert back to published versions as they become available (it's up to me to stabilize and publish new module versions):

cd ~/development/mah-haus
npm install -S rad-scripts
install Node.js
Windows

Linux

  • install Node.js using the "Node.js Version Manager" nvm details

  • put the desired version in a .nvmrc file in the project directory and nvm will default to that version

Automate AWS
  • npm install -g aws-sdk
  • Add credentials here: C:\Users\Administrator\.aws
  • see existing scripts, anything is possible
install bootstrap
  • npm install -g grunt-cli
  • mkdir mysite && cd mysite
  • npm install bootstrap
  • cd node_modules/bootstrap
  • npm install # to actually pull down dependencies
  • grunt dist # builds and minifies so you're good to go!
git
git convert to a bare repo
Start with a normal git repo via [git init]; add your files, get it all set up. Then do this:
cd repo

Now you can copy-paste this...

mv .git .. && rm -fr *
mv ../.git .
mv .git/* .
rmdir .git
git config --bool core.bare true
cd ..

Don't copy/paste these, you need to change repo name...

mv repo repo.git # rename it for clarity
git clone repo.git # (optional, if you want a live repo on the server where you have the bare repo)

Then you can clean up old branches like daily and daily_grind, as needed.

git fix github diverge from local bare repo following README.md edit
Yes, editing the README.md file on github will FUCK UP your downstream bare repo if you push to it before pulling.

Fixing it is a PAIN in the ASS: you have to create a new local repo and pull github into that, pull in from your other local repo, push to github, then pull to your bare repo...

git clone git@github.com:moodboom/quick-http.git quick-http-with-readme-conflict
git remote add local ../quick-http
git fetch local
git merge local/master # merge in changes, likely trivial
git push # pushes back to github
cd ..
mv quick-http.git quick-http.git__gone-out-of-sync-fu-github-readme-editor
git clone git@github.com:moodboom/quick-http.git --bare
cp quick-http.git__gone-out-of-sync-fu-github-readme-editor/config quick-http.git/

And that MIGHT get you on your way... but I would no longer trust ANY of your local repos... This is a serious pita.

git use kdiff3 as difftool and mergetool
It's been made easy on linux...
  • LINUX - put this in ~/.gitconfig
[diff]
    tool = kdiff3

[merge]
    tool = kdiff3
  • WINDOZE
[difftool "kdiff3"]
    path = C:/Progra~1/KDiff3/kdiff3.exe
    trustExitCode = false
[difftool]
    prompt = false
[diff]
    tool = kdiff3
[mergetool "kdiff3"]
    path = C:/Progra~1/KDiff3/kdiff3.exe
    trustExitCode = false
[mergetool]
    keepBackup = false
[merge]
    tool = kdiff3
  • LINUX Before - What a ridiculous pita... copy this into .git/config...
[difftool "kdiff3"]
    path = /usr/bin/kdiff3
    trustExitCode = false
[difftool]
    prompt = false
[diff]
    tool = kdiff3
[mergetool "kdiff3"]
    path = /usr/bin/kdiff3
    trustExitCode = false
[mergetool]
    keepBackup = false
[merge]
    tool = kdiff3
git create merge-to command
Add this handy alias command to all git repos' .config file...
[alias]
    merge-to = "!gitmergeto() { export tmp_branch=`git branch | grep '* ' | tr -d '* '` && git checkout $1 && git merge $tmp_branch && git checkout $tmp_branch; unset tmp_branch; }; gitmergeto"

git create new branch on server, pull to client
# ON CENTRAL SERVER
git checkout master # as needed; we are assuming that master is clean enough as a starting point
git checkout -b mynewbranchy

# HOWEVER, use this instead if you need a new "clean" repo and even master is dirty...
# You need the rm because git "leaves your working folder intact".
git checkout --orphan mynewbranchy
git rm -rf .

# ON CLIENT
git pull
git checkout -b mynewbranchy origin/mynewbranchy
# if files are in the way from the previously checked-out branch, you can force it...
git checkout -f -b mynewbranchy origin/mynewbranchy
git windows configure notepad++ editor
git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"
git pull when untracked files are in the way
This will pull, forcing untracked files to be overwritten by newly tracked ones in the repo:
git fetch --all
git reset --hard origin/mymatchingbranch
git create new branch when untracked files are in the way
  git checkout -b bj143 origin/bj143
     git : error: The following untracked working tree files would be overwritten by checkout:
     (etc)
  
  TOTAL PURGE FIX (too much):
     git clean  -d  -fn ""
        -d dirs too
        -f force, required
        -x include ignored files (don't use this)
        -n dry run
  
  BEST FIX (just overwrite what is in the way):
     git checkout -f -b bj143 origin/bj143
git fix push behavior - ONLY PUSH CURRENT doh
git config --global push.default current
git recreate repo
git clone ssh://m@thedigitalmachine.com/home/m/development/thedigitalage/ampache-with-hangthedj-module
cd ampache-with-hangthedj-module
git checkout -b daily_grind origin/daily_grind

If you already have the daily_grind branches and just need to connect them:

git branch -u origin/daily_grind daily_grind
git connect to origin after the fact
git remote add origin ssh://m@bitpost.com/home/m/development/logs
git fetch
    From ssh://bitpost/home/m/development/logs
     * [new branch]      daily_grind -> origin/daily_grind
     * [new branch]      master     -> origin/master
git branch -u origin/daily_grind daily_grind
git checkout master
git branch -u origin/master master
git multiple upstreams
Use this to cause AUTOMATIC push/pull to a second origin:
git remote set-url origin --push --add user1@repo1
git remote set-url origin --push --add user2@repo2
git remote -v show

Leave out --push if you want to pull as well... but I'd be careful: with this config it's better to change code in one client and push to the multiple origins from there. Otherwise, things are GOING TO GET SYNCY-STINKY.
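A sketch of the double-push behavior, using two throwaway bare repos in place of user1@repo1 and user2@repo2:

```shell
# One "git push" lands on every push URL configured for origin.
work=$(mktemp -d)
git init -q --bare "$work/repo1.git"
git init -q --bare "$work/repo2.git"
git init -q "$work/clone" && cd "$work/clone"
git config user.email demo@example.com && git config user.name demo
git remote add origin "$work/repo1.git"
git remote set-url origin --push --add "$work/repo1.git"
git remote set-url origin --push --add "$work/repo2.git"
echo hi > f.txt && git add f.txt && git commit -qm hi
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"   # both repo1.git and repo2.git receive the branch
git remote -v show             # one fetch URL, two push URLs
```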

Eclipse
Eclipse Mars installation
We always want these: CDT, PDT, WTP (Web Tools Platform, includes JSDT for JavaScript support)
  1. Get and run the Eclipse Installer and install one of the big ones
    1. PDT is good but there are instructions below to bolt on any of the three - so anything goes with your starting choice
  2. Install to development/eclipse/mars
  3. Run and select this as the default workspace, do not prompt again: /home/m/development/eclipse-workspace
  4. Install PDT
    1. Help-> Install New Software -> Add... -> Find the latest PDT update site via https://wiki.eclipse.org/PDT/Installation and add it
    2. e.g. for Mars1, use http://download.eclipse.org/tools/pdt/updates/3.6/
  5. Install WTP
    1. Help-> Install New Software -> Add... -> add -> "WTP Mars", http://download.eclipse.org/webtools/repository/mars/ -> select WTP 3.7.1
  6. Install CDT
    1. Help-> Install New Software -> Add... -> Find the latest CDT update site and add it
    2. e.g. for Mars1, use http://download.eclipse.org/tools/cdt/releases/8.8
    3. Install CDT Main Features; CDT Optional Features: autotools memoryview misc multicore qt unittest vc++ visualizer
  7. Close eclipse and update the settings folder to point to the common shared location (make sure the development/config repo is available):
cd ~/development/eclipse-workspace/.metadata/.plugins/org.eclipse.core.runtime/.settings
ln -fs /home/m/development/config/common/home/m/development/eclipse-workspace/.metadata/.plugins/org.eclipse.core.runtime/.settings/* .
  8. Import all the existing projects that you need (A better Trader, Hang The DJ, etc.). You can import existing projects from ~/development, easy peasy!
  9. Install the terminal plugin so you can run CI right in eclipse. Just drag the "install" button there to the eclipse toolbar.
Eclipse settings for all portable C++ boost projects
  • Set up .bashrc with my standard ENV vars for boost, c++ (and use notes there to build latest boost, if needed)
  • Set up build-Debug and build-Release folders via bootstrap force [debug|release], then configure build configurations for them both
  • Configure the project according to these Eclipse project configuration screenshots
Eclipse annoyances
  • To get problems to reset on build, I had to turn on (for all configs, then exit/restart): Project->Properties->C++ Build->Settings->Error Parsers-> [x] GNU gmake Error Parser 7
  • Click only ERRORS on Annotations dropdown arrow to bypass noise - I still can't get Ctrl-[,|.] to navigate errors, insanity
eclipse java project layout format
Eclipse uses a workspace which holds projects. Java apps written with Eclipse are organized as follows:
  • Eclipse workspace (can also be the top version-control folder)
    • project folder (typically one "app" that you can "run")
      • package(s) (named something like "com.developer.project.application")
        • classes (each class is contained in one file)
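The layout above maps onto disk roughly like this (workspace, project, and package names here are hypothetical):

```shell
# Sketch the on-disk shape of an Eclipse Java workspace.
ws=$(mktemp -d)                                  # the workspace
proj="$ws/myapp"                                 # one project (an "app" you can "run")
pkg="$proj/src/com/developer/myapp/application"  # one package
mkdir -p "$pkg"
cat > "$pkg/Main.java" <<'EOF'
package com.developer.myapp.application;

public class Main {            // one class per file
    public static void main(String[] args) {
        System.out.println("hello");
    }
}
EOF
find "$ws" -name '*.java'
```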
eclipse new project from existing code
You can set up a new project without specifying anything about it, just to browse the code.
			File->New Project->Empty
			Name: bbby2e05
			Location: c:\
			[ ] Create subdir
			---
			Show all files
			select everything in include, rc->include in project
			repeat for src
			don't worry about mak or install folders for now, just add files later as needed
			save all
			---
			then set up a repo for it!
			cd c:\bbby2e05
			git init (plus add cpp c hpp h, commit, set up daily, sync on bitpost)
It is also possible to set up a C++ makefile or PHP project from existing code.
			(rclick projects area)->New->Project...->C++->Makefile Project with existing code
			(name it and make sure Show all files is selected)
misc
php debugging
Tail these:
tail -f /var/log/apache2/sitelogs/thedigitalage.org/ssl_error_log
tail -f /var/log/ampache-tda/ampache.(today).log
Enabling display_errors is usually too much noise and not needed, but if you want it:
emacs /etc/php/apache2-php5.3/php.ini
  display_errors = On
/etc/init.d/apache2 restart
emacs configuration
This common config should work well in both terminal and UI:
/home/m/development/config/common/home/m/.emacs

NOTE that you need some other things to be configured properly:

  • the terminal must use a very light blue background since it will be carried over into [emacs -nw]
  • the terminal must have 256-color support; set this in .bashrc:
export TERM=xterm-256color
  • Make sure you check out undo support via [ctrl-x u]
Jenkins
Installation:
  • install on a server (TODO)
  • For VS projects:
    • add MSBuild plugin
    • find the right path to msbuild.exe (eg C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe)
    • Manage Jenkins, Configure System, add the msbuild there; USE FULL PATH, INCLUDE EXE, NO LEADING SPACES
  • Add the Build Keeper plugin to clean up previous bad builds on rebuild
  • Add nodes labeled according to what will be built on each
  • Select "New Item" to add folder(s) to create a project layout
  • Within folders, select "New Item" to create "Freestyle Projects"
  • Configure the project:
    • git, build+test+install script, post-build cleanup
    • "Restrict where this project can run" with node labels
    • poll source control ~ every 30 min: "H/30 * * * *"
  • TODO retrieve build+test+install status, report to Jenkins
SQL Server 2008+ proper upsert using MERGE
       -- We need an "upsert": if the record exists, update it, otherwise insert.
       -- There are several options to do that.
       -- Doing it correctly means...
       --		1) use a lock or transaction to make the upsert atomic
       --		2) use the best-available operation to maximize performance
       -- SQL Server 2008 has MERGE, which may be slightly more efficient than 
       -- separate check && (insert||update) steps.  And we can do it with
       -- a single lock instead of a full transaction (which may be better?).
       -- It's messy to code up though, since three blocks of fields must be specified.  
       -- C'est la vie.
       MERGE [dbo].[FACT_DCSR_RemPeriodMonthlyReport] WITH (HOLDLOCK) AS rpmr
       USING (SELECT @ID AS ID,
                     @last_months_year AS DCSRYear,
                     @last_month AS DCSRMonth,
                     @last_month_name AS MonthName
                     -- (plus any other source columns, e.g. Device_Type_ID)
             ) AS new_foo
             ON rpmr.ID = new_foo.ID
       WHEN MATCHED THEN
           UPDATE
                   SET UpdateSpid = @@SPID, 
                   UpdateTime = SYSDATETIME() 
       WHEN NOT MATCHED THEN
           INSERT
             (
                   ID, 
                   InsertSpid, 
                   InsertTime
             )
           VALUES
             (
                   new_foo.ID, 
                   @@SPID, 
                   SYSDATETIME()
             );
Windows command prompt FULL SCREEN
Type cmd in the Start search box and right-click the cmd shortcut that appears in the results. Select Run as administrator.

Next, in the command prompt, type wmic and hit Enter. Now maximize it, close it, and open it again: it will open as a maximized window. You may have to ensure that Quick Edit Mode in the Options tab is checked.

bash chmod dirs
find /path/to/base/dir -type d -exec chmod g+x {} \;
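A quick demo (in a scratch tree) that this touches directories only; `g+x` grants group members permission to traverse the dirs:

```shell
# Grant group execute (traverse) on every directory, leaving files alone.
base=$(mktemp -d)
mkdir -p "$base/a/b"
touch "$base/a/f.txt"
chmod -R g-x "$base"               # start with group execute off everywhere
find "$base" -type d -exec chmod g+x {} \;
stat -c '%A %n' "$base/a" "$base/a/f.txt"
```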
Web Services
Firefox Addon development