The underappreciated scaling of memory access costs hidden by the cult of Big-O

In any CS-101 textbook, you learn about the scaling of certain algorithms.  I’ve had several conversations with colleagues recently about this topic, and about how badly the “cult of complexity” can lead one astray when trying to scale real systems rather than textbook ones.  I wanted to write a bit about the topic, and share a simple program that demonstrates the realities.

This post sadly has nothing in particular to do with the giant fighting mecha anime, “Big O” from 1999… Actually, now that I think about it, scaling is bound to be a concern when making giant robots. But the square-cube law is a bit of a different topic than memory time complexity, so we’ll have to leave it for another day.

In the textbook, one algorithm might scale linearly — O(n).  Another might do better at O(log(n)), or have a constant number of operations regardless of working set size and be O(1), etc.  It’s a pretty straightforward way of keeping track of the terribleness of an algorithm.  But the key is that Big O notation only estimates the number of operations as n grows large, rather than the amount of time taken to run them.  There is a pretty natural and intuitive implicit assumption that makes Big O useful: that one read operation from memory is about as slow as any other.  So you just need to count up the number of read operations to get an idea of how much time you’ll spend waiting on memory.

This is, of course, pants-on-fire nonsense.
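To make that concrete, here is a small sketch in Python (where the effect is muted compared to C, but still visible on large enough working sets) that performs exactly the same O(n) number of reads in two different orders, sequential and shuffled, and times both:

```python
import random
import time

def time_reads(n=1_000_000):
    """Sum n values twice: once in sequential order, once in a shuffled
    order.  Both loops do exactly n reads -- identical Big-O -- but the
    shuffled walk defeats the cache and the hardware prefetcher."""
    data = list(range(n))
    order = list(range(n))
    random.shuffle(order)

    start = time.perf_counter()
    total_seq = sum(data[i] for i in range(n))
    seq_time = time.perf_counter() - start

    start = time.perf_counter()
    total_rand = sum(data[i] for i in order)
    rand_time = time.perf_counter() - start

    assert total_seq == total_rand  # same work, same answer
    return seq_time, rand_time
```

Both loops are O(n) by any textbook accounting, yet on a big enough array the shuffled walk can be several times slower, because "one read from memory" is not one cost: it is an L1 hit, or an L2 hit, or a trip all the way out to DRAM.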

Continue reading

Lost in Userland (Talk at SCALE17x)

The slides for my talk at Scale17x are available here: In Google Slides

The synopsis is here: At the SCALE website

My repository with W-Utils, the “Worst Utils” versions of a few common userland utilities is here:  On GitHub . The implementations are all incomplete, proof of concept sorts of things, but they are useful examples that are much simplified compared to the “real thing” in GNU Coreutils.

A video of the talk is up on the SCALE YouTube page.  My talk in the “Ballroom C” video starts at 4:05:30.  The camera doesn’t cover the screen, so you may need to be clicking along with the slide deck if you want the full experience.  (The embedded video starts near the right time, but it doesn’t seem to want to seek to the exact spot, so you may need to move in time a little.)

Adventures in Userland I – Brown Paper Packages Tied Up in std::string, xz, ar, tar, gz, and spaghetti

I was recently sitting, staring at a progress bar, which is how very nerdy adventures start.

assorted wine bottles

This photo of a bar popped up when I did a search for “progress bar.”  It’s rather more colorful and visually interesting than the actual progress bar I was staring at that inspired me to figure out what happens when you install a debian package.  Photo by Chris F on

The particular progress bar was telling me about the packages being installed as part of upgrading my workstation from Ubuntu 16.04 to the newer Ubuntu 18.04.  As the package names whizzed by, one after the other, the thing that annoyed me was that it took So. Damned. Long.  My day job often involves trying to understand why Linux systems don’t go as fast as I would like, so I naturally started firing up some basic utilities to see what was happening.  The most obvious thing to check is always CPU usage.  top showed me that my CPU cores were sitting almost entirely idle.  CPU usage is a metric that I often describe as convenient to measure, relatively easy to understand, and generally useless.  But it’s still a good place to start.  I wasn’t really surprised that the installation process wasn’t CPU bound, so I fired up iotop, which is a much more useful utility for seeing which processes on a system are I/O bound, and saw…  Nothing interesting.  And it was then that I stumbled into a curiosity.  If you count all the many servers I have caused package installations to happen on, I have probably installed many millions of debian packages over the years.  Some with salt, others with apt-get, and some with dpkg, but I never really studied in detail exactly how the ecosystem worked.

I started by trying to figure out exactly what a debian package is.  It seems like a silly question, with a simple answer.  Of course, “a debian package is just a common standard ar archive,” as a friend of mine pointed out while I was talking to him.  But that sort of understates things.  First off, ar archives aren’t that common, or particularly standardised.  Ar archives are ‘common’ only as the format for static libraries and debian packages.  They just aren’t common as general purpose archives, like tarballs or zip files.  Which is sort of interesting in its own right.

Let’s consider just how standard the format actually is…  Wikipedia has a good breakdown of the format.  Is the diagram on Wikipedia all we’d need to know to read a debian package?  Well, man 5 ar  notes “There have been at least four ar formats” and “No archive format is currently specified by any standard.  AT&T System V UNIX has historically distributed archives in a different format from all of the above.”  Eep, that’s not terribly promising.  Thankfully, debian packages are at least consistent among themselves in their Ar dialect, since they can generally be assumed to be made with the ar on a debian Linux distribution.

There’s a whole side-story here about how there is a C system header for reading ar archives in an old-school “read a struct” way.  But the format uses a slightly odd whitespace-padded text pattern, so getting trimmed filenames as C++ std::strings and integer number values is more of a pain in the neck than you’d hope.  There isn’t a good C++ library with a modern API for the format.  So I wrote a YAML definition for Kaitai Struct in order to have a convenient C++ API for reading it, and used the SPI Pystring library for some of the string manipulation.  In any event, I could read the format.  Yay, I could read a debian package myself!
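To show just how little there is to the layout, here is a minimal sketch of a reader (in Python rather than C++, purely for brevity), based on the 60-byte member header described on Wikipedia.  It only pretends to handle the dialect debian packages actually use:

```python
AR_MAGIC = b"!<arch>\n"

def read_ar(data):
    """Return a list of (name, contents) pairs from an ar archive given
    as bytes.  Handles only the common dialect used by .deb files."""
    if not data.startswith(AR_MAGIC):
        raise ValueError("not an ar archive")
    offset = len(AR_MAGIC)
    entries = []
    while offset < len(data):
        header = data[offset:offset + 60]
        if len(header) < 60:
            break
        # Whitespace-padded ASCII fields: name(16) mtime(12) uid(6)
        # gid(6) mode(8) size(10), then the two magic bytes "`\n".
        name = header[0:16].decode("ascii").rstrip(" /")
        size = int(header[48:58].decode("ascii").strip())
        if header[58:60] != b"`\n":
            raise ValueError("bad ar member header")
        entries.append((name, data[offset + 60:offset + 60 + size]))
        offset += 60 + size
        if size % 2:  # member data is padded to even offsets
            offset += 1
    return entries
```

The `rstrip(" /")` is there because some ar dialects terminate names with a slash while the debian members are plain space-padded; everything beyond that (long filenames, the System V string table) is cheerfully ignored.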

A debian package consists of just three things when you unpack it.  A file called ‘debian-binary’ that tells you the version number of the format.  And two tarballs: one with control metadata about the package and the other with the actual contents of the package.

At this point, anybody trying to write their own code to unpack a debian package in order to better understand the process will try and punch a wall.  Because we’ve just figured out how to write code to read this relatively uncommon Ar format, and the first thing we find inside of it is two tarballs, which is a completely different format!  Surely, we could have designed the package files to either be an Ar with Ar archives in it, or a tar file with tar files in it!  Well, okay, my friend’s assertion that I just needed to know about Ar archives was a lie, but I only need to know about two formats.  That’s not too bad.  Oh, well, tarballs are actually two formats unto themselves.  There’s a compression format, and then the actual tar archive.  So, you need to handle three file formats to install a debian package.  I have some code that will unpack the Ar layer, so let’s see which compression method is used on the tar files…


Wait, why aren’t they using the same compression?!

If you unpack the apturl-common package, you get the debian-binary file, and the data and control archives.  It’s totally arbitrary that I used apturl-common as a test file for my code.  It just happened to be a package that I downloaded.  Other packages will vary slightly.

Wait, those two tar files have different compression formats.  One is a .gz file, and the other is a .xz!  These aren’t just different compression formats from debian files of different eras.  If, say, Ubuntu 12.04 packages used gz and Ubuntu 18.04 packages used xz, you would only need to support one or the other to install packages from any particular distribution.  As it turns out, there can be different compression formats inside a single package.  Okay, so to unpack and install a debian file, you actually need to support a few compression formats.  Let’s say xz, bz2, and gz at a minimum.  So you need to support five different formats in total.  Now, what’s in that control archive?
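Since you can’t trust the file extension to match any particular era, one reasonable approach is to sniff the magic bytes and pick a decompressor from those.  A sketch, covering only the three formats mentioned above (a data.tar can in principle also be uncompressed, which this ignores):

```python
import bz2
import gzip
import lzma

# Magic-byte prefixes for the compression formats a .deb might contain.
MAGICS = {
    b"\x1f\x8b": gzip.decompress,       # gzip
    b"\xfd7zXZ\x00": lzma.decompress,   # xz
    b"BZh": bz2.decompress,             # bzip2
}

def decompress_member(blob):
    """Pick a decompressor for a tarball by sniffing its first bytes."""
    for magic, decompress in MAGICS.items():
        if blob.startswith(magic):
            return decompress(blob)
    raise ValueError("unrecognized compression format")
```

The nice side effect is that a package where control.tar is gz and data.tar is xz stops being a special case at all.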

You get a few scripts: preinst, postinst, and prerm.  Those scripts get run when you would expect: before install, after install, and before removing the package if you uninstall it.  Languages like Python can be embedded in native applications, but shell scripts aren’t really intended to be used that way.  (And actually, if I were embedding Python today, I’d probably use PyBind11 instead of Boost.Python like I did in my old blog post.  But that’s neither here nor there.)  So, if you are trying to implement something to install the packages, you can pass on being responsible for running the scripts in-process, and just shell out to do it.  (Writing a shell is definitely at least a whole other blog post unto itself.)  You also have files called md5sums, control, and conffiles.  conffiles is just a newline-separated list of files that the package uses for configuration, so the install program can warn you about merging local changes during install.  It’s barely a file format, so we’ll count it as half.  md5sums is a listing of checksums of all the files in the content archive called “data,” in the same format that the md5sum utility emits.

b25977509ca6665bd7f390db59555b92  usr/bin/apturl 
da0e92f4f035935dc8cacbba395818f2  usr/lib/python3/dist-packages/AptUrl/ 
2c645156bfd8c963600cd7aed5d0fc0b  usr/lib/python3/dist-packages/AptUrl/ 
927320b1041af741eb41557f607046a7  usr/lib/python3/dist-packages/AptUrl/ 
b697ac30c6e945c0d80426a8a4205ef8  usr/lib/python3/dist-packages/AptUrl/ 
d41d8cd98f00b204e9800998ecf8427e  usr/lib/python3/dist-packages/AptUrl/ 
d41d8cd98f00b204e9800998ecf8427e  usr/lib/python3/dist-packages/AptUrl/ 
a8f4538391be3cd2ecac685fe98b8bca  usr/lib/python3/dist-packages/apturl-0.5.2.egg-info 
4bd6e933c4d337fdb27eee28abbd289d  usr/share/applications/apturl.desktop 
3824814ef04af582f716067990b7808f  usr/share/doc/apturl-common/changelog.gz 
2ae15dd4b643380e1fbb9c44cf8e9c54  usr/share/doc/apturl-common/copyright 
019ea97889973f086dfd4af9d82cf2fb  usr/share/kde4/services/apt+http.protocol

This is also a pretty simple format, but you need to split on the spaces after the hash, while correctly handling the possibility of things like spaces in filenames.  (And I’m not entirely sure what you do if you have a newline in a filename, which is possible, in these simple formats.)  So we are up to six and a half file formats.
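One way to sidestep the spaces-in-filenames problem is to slice by position rather than split: the hash is always exactly 32 hex characters, followed by a two-character separator, with everything after that being the path.  A sketch:

```python
def parse_md5sums(text):
    """Parse an md5sums listing into {path: hexdigest}.

    The digest is always 32 hex characters followed by two separator
    characters, so slicing by position survives spaces in filenames
    (though not, of course, the dreaded newline-in-filename case)."""
    sums = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        digest, path = line[:32], line[34:].rstrip()
        sums[path] = digest
    return sums
```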

Package: apturl-common 
Source: apturl 
Version: 0.5.2ubuntu11.2 
Architecture: amd64 
Maintainer: Michael Vogt <> 
Installed-Size: 168 
Depends: python3:any (>= 3.3.2-2~), python3-apt, python3-update-manager 
Replaces: apturl (<< 0.3.6ubuntu2) 
Section: admin 
Priority: optional 
Description: install packages using the apt protocol - common data 
 AptUrl is a simple graphical application that takes an URL (which follows the 
 apt-protocol) as a command line option, parses it and carries out the 
 operations that the URL describes (that is, it asks the user if he wants the 
 indicated packages to be installed and if the answer is positive does so for 
 This package contains the common data shared between the frontends.

The “control” file is yet another text file, but the format is different from conffiles or md5sums.  We are now up to seven and a half file formats.  Which is surely a far cry from the original “you just need to know the Ar format!” that I got as received wisdom when I first fell into this rabbit hole.
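For completeness, here is a hedged sketch of parsing that Field: value layout, including the leading-whitespace continuation lines that the Description field above uses.  (The real format, as specified in Debian Policy, has more corner cases than this handles, such as a lone ‘.’ marking a blank line in a multi-line value.)

```python
def parse_control(text):
    """Parse a Debian-style control file into {field: value}.

    Lines beginning with a space or tab continue the previous field;
    a lone '.' on a continuation line stands in for a blank line.
    A sketch only -- see Debian Policy for the full rules."""
    fields = {}
    current = None
    for line in text.splitlines():
        if line[:1] in (" ", "\t") and current is not None:
            cont = line.strip()
            fields[current] += "\n" + ("" if cont == "." else cont)
        elif ":" in line:
            current, _, value = line.partition(":")
            current = current.strip()
            fields[current] = value.strip()
    return fields
```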

On the bright side, this does give us enough information to unpack and install the data in the package.  (And I’d like to complain about how vague a name “data” is for the archive with the actual contents.  As if the rest of the package were somehow something other than data!)  But we still haven’t covered any of the local database that keeps track of which packages are available, which are installed, how dependency resolution works, etc.  Some of that will have to wait for another blog post.  Needless to say, the original progress bar that inspired all this finished what it was doing long before I made it this far with my own code.

Learning how to unpack packages wound up just being the first steps of a project to try and do my own simple implementations of a whole raft of common UNIX command line utilities that I depend on every day.  Trying to implement a useful subset of a complete userland is what inspired the blog post’s title, “Adventures in Userland.”  The UNIX userland is full of fascinating history, layers of cruft, clever design, and features you never even realised were there.  Even implementing my own cat turned out to be an interesting project, despite how simple that utility seems.  I am hoping to make time to document some of the things I learned while poking around the things I have long taken for granted, and how shaky and wobbly some of the underpinnings of modern state of the art cloud and container systems are.

Convenient modern C++ APIs for things like machine learning and image processing are easy to find, but not so much for things like .debs and .tars.  The utilities in GNU Coreutils sometimes have surprising limitations, and some files haven’t had any commits since Star Trek: The Next Generation was in first run.  I think it’s fair to say some of that stuff is about due for a fresh look.

Don’t Dereference Symlinks

Don’t Be That Guy

If your application dereferences symlinks by default, you are a jerk.  Your software is bad, and you should feel bad.  Why do you hate your users?

Won’t Someone Think Of The Users?

On OS X in the Finder, there is a neat pane on the left where you can bookmark your favorite places to get to them quickly and easily.  Just drag a folder into it, and you can get to it from any Finder window.  It’s super convenient.  Unless, of course, you make a symbolic link, which is basically just another concept for an easy way to get to another place.

If you create a symlink, and then try to add it as a Favorite, Finder will dereference the link, and favorite what the link points to rather than the link itself.  This is evil.  It’s not what the user asked for!  It’s an extreme violation of the Principle Of Least Surprise.  The implicit contract between the user and the system is that if I favorite something, clicking on the thing and the favorite will always take me to the same place.  The favorite represents the thing I was dragging into the Favorites bar, not whatever it may have been pointing at.  If I ever change where the symlink points, the favorite and the symlink will now be doing two different things.  For no obvious reason!
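For anyone who wants to see the distinction in code, here is a small Python sketch (the names are made up to match the screenshot below) showing the difference between looking at a link itself and dereferencing it.  Any “favorite” that stores the dereferenced path has silently thrown the link away:

```python
import os
import tempfile

def demo_symlink():
    """Create a directory and a symlink to it, then contrast operating
    on the link itself with dereferencing it."""
    base = tempfile.mkdtemp()  # left behind; this is only a demo
    target = os.path.join(base, "RealDocs")
    link = os.path.join(base, "ThisIsWhereDocsGo")
    os.mkdir(target)
    os.symlink(target, link)

    # lstat/islink look at the link itself; stat/realpath follow it.
    assert os.path.islink(link)
    assert os.path.realpath(link) == os.path.realpath(target)

    # A bookmark that stores realpath(link) no longer tracks the link:
    # repointing the symlink later won't change where the bookmark goes.
    remembered = os.path.realpath(link)
    return link, remembered
```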

ThisIsWhereDocsGo is a symlink, so when you try to make a favorite out of it, it won't have the name of the thing you tried to favorite.  What name will it have?  Impossible to say from what the UI shows you!


Continue reading

Python Script Editor with HTML Output

File this one under Stupid Python Tricks.  I have written a bit about working on an app with an embedded Python run time.  It’s good fun.  I recently added a new feature to the script editor that was relatively easy, but for some reason isn’t very common.  There are a few small quirks to making a script editor do this, so in case anybody is curious how to do it in their own app, this is how I did it.

Behold the rich text glory!

Behold the rich text glory!  HTML output from Python directly in the script editor!  It really is like Christmas!

Continue reading

Installer Downloader Installer Downloaders

I couldn’t figure out how to fit this into a 140 character tweet, so now it’s a blog post.  Recently on a mailing list that I am subscribed to, a friend and former coworker posted:

Went to download the Unity 5 updates, what I ended up with was:


I can pretty much guarantee that at no time in the history of electronic software distribution has anyone ever said “gee, I really wish this application had its own custom downloader, because those guys at Microsoft / Apple / Mozilla / Google clearly don’t know what they are doing with those web browser things, and don’t even get me started about those curl / wget people…”. I feel slightly better now.

And I had unfortunately spent the morning wrestling with Adobe CC, so I chimed in with my response…

That damned dress.

I posted a reddit comment about the picture of a dress that has been confusing people.  Depending on viewing conditions it looks either white and gold, or black and blue.  It’s a rather extraordinary demonstration of how malleable human perception is, and it really shows off the fact that while we believe we are capable of observing an “objective” reality, our minds make up our perception of the world with only modest input from our senses.  Given that nobody can tell the color of a dress when they are looking at a photo of it, the picture obviously speaks to how much we rely on things like human recollection of events in court cases which literally decide matters of life and death.  A witness can be 100% sure that they saw one thing, but that doesn’t mean that what they saw necessarily had a strong correlation with the objective reality that is so important.  The reddit comment was in response to a post that then got deleted, so I assume nobody will ever read it there, so I am reprinting it here.

Continue reading

Google Support is analogous to awesome

But some analogies are pretty awful.  Which is the case here.  I used to use a Nexus5 phone from Google.  It was pretty good.  I won’t claim it was perfect.  It had about half the battery life I wish it did, and it always felt slightly too large in my hand.  But, nobody makes small phones anymore for some reason, and it certainly got the job done.  Or at least it got the job done until I had dinner at a particular sushi restaurant with a very hard floor, and shattered the screen.  ::Sigh::  Oh well, I guess it was time to contact Google support to see if they would repair it, and if it was going to cost something, how much…

Continue reading

Embedding Python in my App Part II The Desolation of Documentation

In our last thrilling adventure, I mentioned a few ways I didn’t put Python into my app.  Having settled on Boost.Python, the documentation naturally contained a few simple examples to get me started, and some very obscure reference material, but relatively little of the intermediate stuff.  You know, like basically all documentation for anything that has some nonobvious gotchas, it never makes it easy to figure out YOUR problem when you don’t know enough about it to understand what’s going on.  This post covers a few specific warts that I lost time on.  They won’t necessarily be what you lost time on, but they are the examples that would have shaved a week off of my project if I had found them all in one place at the right moment, so maybe you will find them useful.  Here’s the simple example from the docs:

#include <boost/python.hpp>

char const* greet() { return "hello, world"; }

BOOST_PYTHON_MODULE(foo)
{
    using namespace boost::python;
    def("greet", greet);
}

So, let’s start with BOOST_PYTHON_MODULE(foo).  According to module.hpp, that resolves to BOOST_PYTHON_MODULE_INIT, which can have a slightly different definition depending on platform.  And that definition contains some other preprocessor macros, which have some other macros.  It’s quite a few layers to pick apart what’s actually going on, even if you are willing to sit down and try to read Boost code, which is something you should generally try to avoid doing.  So, what does it do that you actually care about?  It creates a function called void initfoo() that executes the stuff in the module block at run time.  Initially, after a brief glance over the docs, I had assumed that boost::python::def() was doing some magic to generate classes at compile time, and I didn’t realise that it was just a function that executed at runtime.  So, you can stick any C++ in there that you like, such as ‘std::cout << "When does this happen?" << std::endl;’ when you are trying to figure out what is going on, or do something “interesting.”  And, if you are embedding Python in your app, rather than building a standalone module, you will need to deal with that function yourself.  Which isn’t so bad, except that in order to mesh with what the Python libs expect, since Python is written in C, you need to be aware that the function initfoo() is declared as ‘extern "C"’.  At least if the module declaration and your actual embedding setup are happening in different places.  Which is never the case in the simple examples in the docs, but in the context of a larger application it will definitely be an issue.

Okay, so moving on to dealing with classes.  The next slightly fancier mini example shows exposing a simple class to Python.

#include <boost/python.hpp>
using namespace boost::python;

BOOST_PYTHON_MODULE(hello)
{
    class_<World>("World")
        .def("greet", &World::greet)
        .def("set", &World::set);
}

But if you have, for example, a Node class that can reference a Graph, while the Graph class can also reference a Node, you might try this:

bpy::class_<Node>("node").def("getGraph", &playback::node::getGraph);
bpy::class_<Graph>("nodalGraph").def("getNode", &playback::nodalGraph::getNode);

And then you’ll spend some time angry about a rather vague compiler error, and lose half a day after you thought things were going swimmingly based on how easily the trivial example built.  You may even start looking at all the other options I looked at in the first blog post and see if maybe they will work better than Boost.  But fear not.  The problem is that we can’t create a method of Node that refers to a Graph until we first create the Python binding for Graph.  Of course, by that logic we also can’t do the binding for Graph first, since it has a method that deals with Node, which would need to be bound first.  Continue to fear not!  It’s a little nonobvious in the trivial example, but binding a class and binding the methods of the class don’t need to happen at the same time.  boost::python::class_<foo>(“bar”) constructs an instance of the template class named “class_”, which is templated on type “foo” and exposed to Python with the name “bar”.  (In practice you’ll usually want foo and bar to be the same name.)  That instance is something that we can hang on to.  Which means we can make a bunch of them for all the various classes that we need to wrap.

auto nodeBinding = bpy::class_<NodeWrap, boost::noncopyable>("node", bpy::no_init);
auto graphBinding = bpy::class_<GraphWrap, boost::noncopyable>("nodalGraph", bpy::no_init);

nodeBinding.def("getGraph", &playback::node::getGraph, bpy::return_internal_reference<>());
graphBinding.def("getNode", &playback::nodalGraph::getNode, bpy::return_internal_reference<>());

See, that’s not so bad, right?  I am using noncopyable and no_init because I want to create these things only on the well-manicured C++ side.  A Node should ALWAYS be made by the Graph, and the Graph should always live in a project, etc.  I just wanted Python to have access to existing objects, so that was the easiest way to do it given things like private constructors, and no copy constructor.  I’m only about 10% sure that return_internal_reference is actually correct for my use of returning bare pointers.  I’m still slogging my way through some understanding there.

So, what about when the methods you are trying to bind are ambiguous?  Or you need to return arrays of stuff?  Well, you need member function pointers when you have methods with the same name, which is something I use rarely enough that I think they look really weird.  This is another thing that took me a while to figure out, because I got confused by the docs and couldn’t find a useful example.  BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS sure sounds like it would have something to make the pain go away, but that didn’t turn out to be what I needed.  So, we’ll start with the declaration of a generic parameter class in the header:

class Parameter : public SomeBase {
public:
    explicit Parameter(OutputType T, std::string parameterName, QObject *parent);
    explicit Parameter(OutputType T, std::string parameterName);
    virtual void setValue(int newValue);
    virtual void setValue(double newValue);
    virtual void setValue(Color newValue);
    virtual void setValue(std::string newValue);
    std::vector<std::string> getTags();
    // Some implementation nonsense too arcane to consider exposing to mere Python scripters.
};

Then this is the Python binding for that class.  Note that one of those returns a vector of strings.  You’ll need to bind every template variation of vector that you use, even for standard stuff like strings, so I include my binding of a vector of strings using the vector indexing suite for the sake of a slightly more complete example.

auto parameterBinding = bpy::class_<core::Parameter, boost::noncopyable>("Parameter", bpy::init<core::OutputType, std::string>());
bpy::class_<StringList>("StringList")
    .def(bpy::vector_indexing_suite<StringList>());
parameterBinding.def("setValue", static_cast<void (core::Parameter::*)(int)>(&core::Parameter::setValue) )
 .def("setValue", static_cast<void (core::Parameter::*)(double)>(&core::Parameter::setValue) )
 .def("setValue", static_cast<void (core::Parameter::*)(core::Color)>(&core::Parameter::setValue) )
 .def("setValue", static_cast<void (core::Parameter::*)(std::string)>(&core::Parameter::setValue) )
 .def("getTags", &core::Parameter::getTags);

Note that the original C++ class has two constructors, but I only expose one to Python.  That’s fine.  So far, I haven’t really needed to expose every last constructor that seems useful in C++ to the scripting engine. YMMV, just pick the constructor you like based on the types it uses as above.  Then wait to see if anybody files any support tickets about actually needing another constructor.  Then I have a bunch of different versions of setValue() which take different types and do different things and have the same name.  It’s admittedly not something to hold up as a great example of perfect design, but you will see similar things in real C++ projects, so now you have an example.  Doesn’t that pointer to member function syntax seem gross?  I think “static_cast<void (core::Parameter::*)(double)>(&core::Parameter::setValue)” belongs in a god damned zoo.  But it does work.

And since we have figured out how to do bindings for a bunch of cool stuff now, that brings us to actually doing the Python embedding.  All of the trivial examples are based on building a module to run from standalone Python.  But if you want to script an existing app, it’s not always practical to build your app as a giant Python module and have the Python side do all the driving.  In my case, I had an existing native C++ app that I wanted to add scripting features to.  So, the next post will be about some of that magic, in the context of an app with multiple modules to be exposed, and how to layer things so that the UI can trigger Python, but Python can also expose the UI in a cross platform way.  You knew it was going to be spread across three posts when you saw the first one was titled “An Unexpected Journey”!

Embedding Python in my App Part I An Unexpected Journey

Boost Python is pretty awesome.  It’s a way to wire up your C++ code to Python without having to go all the way into the fairly low level Python C API.  I have been working on a C++ application using Qt, and I wanted to embed Python in the app to allow user scripting.  There’s a lot I could write about that, but I’m lazy, so maybe some of the subtleties of allowing users to script your app will wind up in another post.

I looked at a few alternatives to Boost before I settled on it.  Since I am working on a Qt app, I looked first at PyQt and PySide and how they do things.  Both of them are bindings for the whole Qt framework with a bunch of UI and miscellaneous stuff, but they also have systems for wrapping that stuff which you can use for your own code and classes.  Exposing Qt to my users would have been pretty neat, so they could build their own UIs, or talk to databases or network services with the Qt classes pretty much “for free.”  PyQt licensing is a bit tricky, and I’m not certain if my app will eventually be a commercial product, so I didn’t want to deal with the complexities.  On a personal project like this one, sometimes laziness is the best way forward.  That said, it works well, and the license cost is not at all unreasonable.  If I had a boss on this project who handed me PyQt as the way forward, I probably would have been able to make it work.  PySide is similar to PyQt.  Instead of being managed by a third party like PyQt, PySide is developed in-house by the maintainers of Qt: Nokia, erm, I mean Digia.  Nope, I think it’s the Qt Company for the moment.  That said, I need Qt 5.4 support, and PySide seems to have stalled at Qt 4.x.  There may be some maintenance work happening someplace that I am not looking, but the official release doesn’t seem to be very current.  They also don’t support Python 3, which I want to support going forward.  My current build is done with Python 2.7, but I’d rather support it from the get-go than have a big panic when I decide I can’t possibly live without the latest greatest Python next month.  Shiboken, the wrapper generator used with PySide, is also terribly documented.  In theory it supports doing your own stuff without involving PySide, but finding a simple guide to doing it was frustrating, and I had no idea what I was doing.

I tried SWIG, but it hates namespaces.  Dealing with it in a real live modern-ish C++ code base proved to be really annoying.  I could wrap simple classes, but wrapping QObject derived classes spread across multiple modules in different namespaces using SWIG proved to be remarkably similar to bashing my face against the mouth of an angry lion.  It’s not that it can’t be done, but you’ll frequently find your face being ripped off if you do.  Also, SWIG is very much targeted toward making Python modules, rather than embedding python.  While I did make simple Python modules in my experiments, I never did manage to get it embedded in my app properly.

Which brings me to Boost Python.  It builds as a part of my existing code, so it works fine with whatever my real app is.  On the other hand, while SWIG and Shiboken are generators for the bindings, Boost requires me to actively expose stuff by hand.  I would have much preferred to have my Python bindings ‘just work,’ which is basically impossible with Boost.  In an ideal world, I’d just write doxygen comments in my header files, run a generator, and magically have a fully documented, working Python API for my app.  Oh well.  Programming sucks, and you have to do work to make the program that you want.  I’ll dig more into the nitty gritty of using Boost.Python in my next post, now that I have established the rationale that led me down this path.