This is a sweet spot of thread functionality for me at the moment, mixing Boost and C++11, so I’m throwing it down here.  It’s also in my Reusable project on github.

// --------------------------------------------------------------------
// CONCISE EXAMPLE OF THREAD WITH EXTERNALLY-ACCESSIBLE STATUS AND DATA
// --------------------------------------------------------------------
// We create a vector, and create a thread to start stuffing it.
// Externally, we can check the status of the job, and have mutex access to the data.
// The atomic job stage is SO CHEAP to change and check, do it all day long as needed.
// Initially, externally, we check the job stage.
// Meanwhile, we do a bunch of intense work inside the mutex.
// Then we do smaller work with frequent mutexing, allowing interruption.
// Great patterns, use them brother!
// Hells to the yeah.
// --------------------------------------------------------------------
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <unordered_set>
#include <boost/thread.hpp>

using namespace std;
using namespace std::chrono_literals;   // the 1ms / 200ms literals need C++14

std::atomic<int32_t> job_stage(0);
std::unordered_set<int> data;
boost::shared_mutex data_guard;
data.insert(-1);
std::thread t([&job_stage,&data,&data_guard] {
    // stage 1 = jam in data
    job_stage = 1;
    {
        boost::upgrade_lock<boost::shared_mutex> lock(data_guard);
        boost::upgrade_to_unique_lock<boost::shared_mutex> uniqueLock(lock);
        for (int loop = 0; loop < 2000; ++loop)
        {
            std::this_thread::sleep_for(1ms);
            data.insert(loop);
        }
    }
    // stage 2 = mutex-jam data, allowing intervention
    job_stage = 2;
    for (int loop = 3000000; loop < 4000000 && job_stage == 2; ++loop)
    {
        boost::upgrade_lock<boost::shared_mutex> lock(data_guard);
        boost::upgrade_to_unique_lock<boost::shared_mutex> uniqueLock(lock);
        data.insert(loop);
    }
    cout << "thread exiting..." << endl;
});

cout << "pre mutex job stage: " << job_stage << endl;

for (int check = 0; check < 5; ++check)
{
    std::this_thread::sleep_for(200ms);
    // not sure why i was getting std::hex output...
    cout << "check " << check << " job stage: ";
    {
        boost::upgrade_lock<boost::shared_mutex> lock(data_guard);
        cout  << dec << job_stage << " data size " << data.size();
    }
    cout << endl;
}

// We can trigger the thread to exit.
job_stage = 3;

// Let's see what happens if we don't join until after the thread is done.
std::this_thread::sleep_for(300ms);
cout << "done sleeping" << endl;

// NOW we will block to ensure the thread finishes.
cout << "joining" << endl;
t.join();
cout << "all done" << endl;
// --------------------------------------------------------------------

Output:

pre mutex job stage: 1
check 0 job stage: 2 data size 2031
check 1 job stage: 2 data size 225848
check 2 job stage: 2 data size 456199
check 3 job stage: 2 data size 726576
check 4 job stage: 2 data size 936429
thread exiting...
check 5 job stage: 2 data size 1002001
done sleeping
joining
all done
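
One nit worth knowing about the check loop above: it grabs an upgrade_lock even though it only reads, and boost grants upgrade ownership to one thread at a time, so several readers doing that would serialize against each other.  For pure reads, a shared (read) lock is enough and lets readers overlap.  A quick sketch in the same spirit (the contains lambda is purely illustrative):

// read-only access: any number of threads can hold the shared lock together
auto contains = [&data, &data_guard](int value) -> bool
{
    boost::shared_lock<boost::shared_mutex> lock(data_guard);
    return data.find(value) != data.end();
};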

Five minutes into this JavaScript exploit video, you’ll understand why that massive stack of node modules in your app is BAD.  Nicely done.

And… 14 minutes in, he hits Moment.js… which is everywhere… with a straightforward, tough-to-solve regex denial-of-service (ReDoS) hack.

Then mongoose… then…
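
If you’ve never watched that kind of regex blow up, here’s a tiny standalone sketch of the catastrophic-backtracking idea behind a ReDoS.  It’s in C++ (std::regex) rather than JavaScript, since that’s the language of the rest of this post, and the pattern and input are made up for illustration; depending on the regex implementation this either chews CPU for a very long time or bails out with an error:

// The nested quantifier in (a+)+ plus a forced failure at the end makes a
// backtracking matcher try an exponential number of ways to split the 'a's.
#include <chrono>
#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::regex evil("(a+)+$");              // nested quantifiers: the evil part
    std::string input(28, 'a');
    input += '!';                           // guarantees the overall match fails

    auto start = std::chrono::steady_clock::now();
    try
    {
        std::regex_search(input, evil);     // this is where the time goes
    }
    catch (const std::regex_error& e)
    {
        // some implementations give up with error_complexity / error_stack
        std::cout << "regex_error: " << e.what() << std::endl;
    }
    std::chrono::duration<double> secs = std::chrono::steady_clock::now() - start;
    std::cout << "searched " << input.size() << " chars in "
              << secs.count() << "s" << std::endl;
}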

Diamond dependencies are everywhere, from DLL dependency hell, to running an upgradable Linux distro, to your Node stack, to figuring out your #include order.  Check out Titus’s perspective; it’s pretty tight.

I don’t think anyone can, in good faith, discount the navigation benefits of a modern IDE.  Yes, my favorite IDE is still written in C++, as is my favorite editor (put that vim and emacs shit down, son, it’s time to code… I KID); nothing beats raw speed when you just want to type code.  But JetBrains has shown it can provide amazing IDE features for all kinds of code (C++, Python, Scala, etc.) on top of its Java-based platform.  Yes, you pay a performance price, and yes, we’ve been burned by slow Java IDEs before, haven’t we, Eclipse… but JetBrains has really hit a critical mass of solid IDE functionality, and I’m going to give it another good try.  Here are some rolling notes, made in chronological order…

  • the soggy keylag is KILLING me… but I love the navigation power… hrmph…
  • it’s not THAT laggy considering all it is doing… and my other dev environment is through a VM anyway (does that make it better or worse, not sure yet…) continuing…
  • on a VM, all hope is lost.  Even Sublime is uselessly unusable.  Back to emacs.  Sad world.
  • The speed is now pretty good, I don’t know if it got everything indexed and it’s faster, or if I got used to it.  I think it’s faster!  Just in time for my open source license to expire… and JetBrains renewed it!  Yay, thanks guys.
  • Some things just take some adjustment.  Keymaps, panes, etc.  Also, don’t expect to copy/paste huge chunks of code as fast as you can in a dumb editor – it causes a ton of analysis to occur.  In a non-VM fairly-decent environment, CLion is humming along now.
  • CLion is undeniably faster than Sublime on my VM.  It is downright snappy at editing code compared to Sublime.  Happily shocked!
  • Uh oh… lots of CLion lockups on the laptop… doh, whoops, out of disk space.  Don’t let that happen!  🙂

Fun tips:

  • You can open a file in an existing CLion session by running the startup script with a full path to the file, or do this:
    clion `pwd`/myfile.cpp
    or use the little bash script to do it for you:
    #!/bin/bash
    # if $1 does not start with [/], prefix it with `pwd`
    MYFILE=$1
    if ! [[ $MYFILE =~ ^/ ]]; then MYFILE=`pwd`/$1; fi
    cd /home/m/apps/jetbrains/clion/bin
    ./clion.sh "$MYFILE" $2 $3 $4 $5 &
  • For the huge hi-res monitors I have gotten addicted to using with i3, make sure you enable this (which is oddly disabled by default):

    File > Settings > Editor > General > [x] Change font size (zoom) with Ctrl+Mouse Wheel

  • CLion will auto-create projects from CMakeLists.txt, which is really nice.  It seems to auto-create a Debug config using ./cmake-build-debug.  To create a Release config too, go to:

    File > Settings > Build… > CMake > click +, it will auto-create Release (a little weird but it was what I needed)

JWT is neat, in every meaning of the word. Sure, the base64-encoded data is basically plaintext. But a secure signature makes it All All Right. 🙂 Simple and elegant.
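
For the curious, the whole trick fits in a few lines.  Here’s a minimal sketch of assembling an HS256 token with OpenSSL (this is not the Reusable-project code, and the key and claims are made-up placeholders): the header and payload are just base64url’d JSON that anyone can read, and the HMAC-SHA256 over those two parts is the only thing keeping them honest.

// Minimal HS256 JWT assembly sketch (OpenSSL; link with -lcrypto).
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>
#include <iostream>
#include <string>

// base64url encode, no padding, as JWT expects
static std::string b64url(const unsigned char* data, size_t len)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    std::string out;
    for (size_t i = 0; i < len; i += 3)
    {
        uint32_t v = uint32_t(data[i]) << 16;
        if (i + 1 < len) v |= uint32_t(data[i + 1]) << 8;
        if (i + 2 < len) v |= data[i + 2];
        out += tbl[(v >> 18) & 0x3f];
        out += tbl[(v >> 12) & 0x3f];
        if (i + 1 < len) out += tbl[(v >> 6) & 0x3f];
        if (i + 2 < len) out += tbl[v & 0x3f];
    }
    return out;
}

int main()
{
    // header + payload: just base64url'd JSON, readable by anyone
    std::string header  = R"({"alg":"HS256","typ":"JWT"})";
    std::string payload = R"({"sub":"m","admin":true})";
    std::string signing_input =
        b64url((const unsigned char*)header.data(), header.size()) + "." +
        b64url((const unsigned char*)payload.data(), payload.size());

    // the signature: HMAC-SHA256 over the first two parts with a shared secret
    std::string key = "my-shared-secret";      // placeholder key
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha256(), key.data(), (int)key.size(),
         (const unsigned char*)signing_input.data(), signing_input.size(),
         mac, &mac_len);

    std::cout << signing_input + "." + b64url(mac, mac_len) << std::endl;
}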

I used my OAuth code from Twitter to do the encoding and signing. As per the usual, it’s in my Reusable project on github.

I used the most readily available signing algorithm. Looking forward to setting up better, faster, harder ones soon.