Electron is fine, or it will be tomorrow, thanks to Moore’s Law, right?

Electron is somewhat controversial as an application development framework. Some people complain that applications built with it aren’t very efficient. Others say RAM is there to be used, and there’s no sense letting it go to waste. So, which is it? Is trading efficiency for developer time worthwhile?

We take it as an article of faith that newer cheaper better faster machines come out every year.

Sure, Moore’s Law doesn’t give the gains it once did. And sure, I am looking at buying a laptop today that has literally the exact same amount of RAM as my 10-year-old desktop that seems to have finally died. And sure, modern laptops make RAM upgrades impossible because the RAM is soldered on, so I’ll have the same amount of RAM over the lifespan of the new laptop. Which means I’ll have had the same amount of RAM in my main computer for somewhere between ten and fifteen years, depending on how long the new laptop lasts.

But faith says I’ll have more RAM over time no matter what!

Continue reading

The Corporate Event Horizon

We all like to think the whole Universe is centered on us. And thanks to relativity and whatnot, the whole observable universe really is centered on the observer! As far as we can tell, the universe is constantly expanding, and every observer sees themselves as being at the center of that expansion.

Expansion - Meaning of Expansion
Books always use raisins in a rising loaf of bread to explain the expansion of the universe not having a center. Despite the fact that bread does have a center. So, here’s some raisin bread. Now you understand relativity 100%. Congrats.

You can’t see everything — you can only see a subset of the whole universe. Light doesn’t travel infinitely fast, and the universe isn’t infinitely old. That means you can only see a tiny bubble of about 15 billion light years around yourself. Aliens living in a distant galaxy halfway across the universe can likewise only see a 15 billion light year bubble around themselves. And if those aliens are more than 15 billion light years away from us, we can’t see them and they can’t see us.

To clarify that a bit, it’s not just that we can’t see them with our eyeballs or our current telescopes — we can’t have any sort of interaction with them. We can’t ever have had any sort of interaction with them at any point in the past. And the things we are doing today can’t possibly be affected by any sort of interaction with those distant aliens. That is to say, they are beyond a metaphorical horizon, beyond which events don’t matter to us and can’t be observed by us. Just like how you can’t see stuff below the Earth’s horizon. Hence, an Event Horizon.

Event Horizon (1997) - IMDb
This blog post has nothing to do with the 1990’s sci fi horror film.

So, why am I trying to shoehorn the concept of an Event Horizon into corporate life? When a corporation is small, everybody shares an event horizon. Three engineers wedged into one work room in a startup will hear each other’s conversations. When something breaks, they’ll all know about it. They all observe the exact same bubble in the universe. Their shared universe is small. And they can readily reach consensus about what’s important, what’s broken, and what’s working. They may disagree about what to do next, which Linux distribution or programming language is best, etc. But they exist in a shared universe and a shared understanding of basic facts. That shared universe cannot last forever. And the inevitable collapse of the shared perspective leads to all sorts of trouble that is, by its nature, impossible to observe directly.

Continue reading

The Wheel of Reinvention. Or, the Eight Steps to get to Step One.

The wheel of reinvention is a term that I most closely associate with the history of computer graphics, but the idea is pretty universal.  Technology is cyclical, no matter the specific field. All software needs to be a bit more flexible than you originally thought, and it eventually spawns more software because the solution to the problem of a computer program you don’t like is almost always another computer program that is even less carefully constructed.

Technology is Cyclical from 30 Rock
Maybe Liz Lemon’s boyfriend on 30 Rock was right, after all.

This is basically my take on the Configuration Complexity Clock. I’m not the first person to write about the topic. And since it’s practically a Natural law, like a gravitational pull, I certainly won’t be the last to deal with it, notice it, or write about it. But it is something that popped up on my radar again recently, so I wanted to put it in my own words. Let’s look at the whole cycle of terrible crippling success after terrible success…

Continue reading

Color Wheels

(This started as a post many years ago on an old blog. In those days it was fashionable to install every plugin on your self-hosted WordPress instance, and then never update any of it, so naturally that blog needed to get taken down. A colleague reminded me of this in a conversation, so I decided to dig it out of an old MySQL backup since I don’t think anybody else on the Internet was ever bored enough to document this many color wheel widgets in various applications. At some point perhaps I’ll more actively revisit the topic with some screenshots of software from the current decade. At this point the post is mainly interesting for having documented long obsolete color wheels like Shake and “Apple Color” from the old Final Cut Studio.)

Have you ever noticed how many variations there are on color wheels?  It’s basically a very simple idea.  Different colors go around the edge of a circle.  Different saturations go from the edge to the center.  Most saturated is at the outermost edge of the circle.  Easy, right?  Couldn’t be simpler.  So you’d think…  While making a color adjuster widget for an application recently, I have been pondering them with more attention than I ever thought I would.  Here are some well known examples of color wheels and color wheel type adjuster widgets…  (Click images to get them full-sized, in all their utilitarian glory.)

Color wheel from Shake’s Colorwheel node. Note the red along +X and green to top-left.
Shake Color editor widget. Uses basically the same wheel plus extra widgets. There is an odd antialiasing issue around the border of the color wheel in the widget that is not present in the one generated by the node, so for some reason the two are not identical, despite using the same layout and orientation.

This Shake wheel widget is ‘live.’  So, as you adjust the value slider, it will get darker or lighter.  If value is set to 0, you will just see a big black square with no wheel in it, which can be counter-intuitive the first time you try to select a color.

The first version of my own color wheel widget generally matches up well with Shake’s. The difference between the two is not exactly zero, though. As near as I can figure, Shake and I use slightly different math for transforming between HSV and RGB color spaces.  The difference is what inspired me to pay more attention to the variations.
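For anybody who wants to play along, the basic mapping is short.  Here is a minimal Python sketch (not my actual widget code) of one way to turn a wheel position into a color, leaning on the standard colorsys module for the HSV-to-RGB conversion.  Two independent implementations of that conversion can easily disagree in the low bits, which is exactly the kind of mismatch described above.

```python
import colorsys
import math

def wheel_to_rgb(x, y, value=1.0):
    """Map a point in a unit-radius color wheel to an RGB triple.

    Convention here: red along +X, hue increasing counter-clockwise
    (the Shake layout), saturation linear in radius.
    """
    radius = min(math.hypot(x, y), 1.0)       # clamp to the wheel edge
    hue = math.atan2(y, x) / (2 * math.pi)    # angle as a fraction of a turn
    hue %= 1.0                                # wrap negatives into [0, 1)
    return colorsys.hsv_to_rgb(hue, radius, value)

# Pure red sits on the +X axis at the rim:
print(wheel_to_rgb(1.0, 0.0))   # (1.0, 0.0, 0.0)
# The center is fully desaturated, so you just get the value:
print(wheel_to_rgb(0.0, 0.0))   # (1.0, 1.0, 1.0)
```

Flip the sign of y, or add an offset to the angle before converting, and you get the mirrored and rotated layouts seen in the other applications.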
Nuke PLE Color Wheel Node. (Ignore the confetti – that’s just because I took the screen shot from the free demo.) Red still along +X, but green is at the bottom-left, so Nuke uses a vertically flipped color wheel compared to Shake.  As you rotate clockwise around the Nuke wheel, you get the same colors on the Shake wheel by going counter-clockwise.
The Nuke Color Picker. It uses a 70% value band around the color wheel so that you can always see the colors, and a ‘live’ inner wheel that darkens and lightens as you adjust the value slider.

For some reason, the wheel in the Nuke color picker uses Shake ordering for the colors, despite the fact that the color wheel node doesn’t. (It’s shown in thumbnail in the DAG view to the right of the color picker window in case you don’t believe me.)  The button you click to bring up the color picker has an image on it with the same ordering as the node, despite the fact that this means the wheel on the button doesn’t match the wheel on the window it brings up. Also notable are the radius-circle and vector line in the color wheel pointing out exactly where the current selected color is.

The Final Cut Pro 3-Way Color Corrector. These wheels have the biggest break from what we’ve seen so far. The colors go in the same clockwise/counterclockwise order as in Shake, but the orientation is such that Red isn’t on a major axis. Nothing specifically seems to be oriented to a major axis, but red has been rotated so that it is up and a little bit left. Also, the saturation falloff is no longer linear. It appears to be a quadratic falloff based on Saturation = Radius Squared. This results in a much larger area in the middle which is completely blown out and almost colorless.
Apple Color’s Primary Color Corrector. In contrast to the bright white blown out region of the FCP 3-Way wheels, the entire center of this color widget is the dark, neutral background color. The actual color is only visible at the rim of the wheel. We also have another new orientation. Colors are the same clockwise order as Shake, but rotated -90 degrees so that red is now situated at the +Y axis.
Screen shot of a later version of my own work.

That last one was inspired in part by the visual softness of the FCP wheels with their nonlinear saturation falloff.  I am experimenting with a biased cubic falloff for the saturation in my color wheels.  This results in a slightly smoother appearance than straight linear falloff, but the biasing prevents the center from being completely blown out, so you can always see what you are doing, even if you zoom in on the widget for very fine adjustments and can no longer see the fully saturated edges.  It’s intended as a color adjuster for color correction, rather than a true color picker, so it wasn’t important to me to have the color of the clicked spot map precisely to the resulting color.  Also new in this version is that the widget paints itself with the selected color as the background, giving you a local, live preview as you adjust.
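The falloff curves are easy to compare side by side.  A quick Python sketch (the bias constant below is just an illustrative value, not the one from my widget):

```python
def linear_falloff(r):
    return r

def quadratic_falloff(r):
    # FCP-style: saturation = radius squared, which leaves a large,
    # almost colorless region in the middle of the wheel.
    return r * r

def biased_cubic_falloff(r, bias=0.15):
    # The bias term keeps a little saturation near the center so it
    # never blows out completely.  0.15 is a made-up constant here.
    return bias * r + (1.0 - bias) * r ** 3

# Near the center the quadratic wheel is the most washed out, while
# the biased cubic keeps slightly more color than the quadratic does:
for r in (0.1, 0.5, 1.0):
    print(r, linear_falloff(r), quadratic_falloff(r), biased_cubic_falloff(r))
```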

Personally, I think red along +X makes the most sense from a correctness standpoint.  In mathematics, we define a unit circle such that this is the direction of zero degrees.  In HSV color space, we define zero degrees as red.  It seems like a simple, learnable convention.  I can’t see why having red in some other direction would specifically be more intuitive or easier for the user, but I’m willing to be proven wrong if somebody has a good argument.  Without that, I’ll stick to +X = red for reasons of comfort with the mathematics.

As far as order around the circle, the unit circle in mathematics goes counter-clockwise from +X.  In optics, we learn the Roy G Biv color ordering according to the frequency of colors in the spectrum.  Logically, we should match increasing frequency to increasing angle.  Consequently, orange should be next after red as you go counter-clockwise.  This is a “green up” or “Shake” orientation.  (And contrary to the most recent screenshot I have posted of my own widget.  It’s still a work in progress…)

I therefore declare that a Standard Color Wheel ought to be red at +X, and green at top-left.  So, why are there so many variations when it seems like there is a correct answer?  I dunno.  I guess a lot of these color wheels were made pretty much independently by people who weren’t particularly concerned about matching up with some other standard.  Some may have intentionally wanted to differentiate themselves from existing color wheels that they had seen just for the sake of novelty.  Most of the details are presumably now lost to time.

So, just from the apps that I had handy to check on, there are four different layouts for the colors, and four different saturation calculations for the middle.  What do you prefer?  Do you know of any other interesting variations on a simple color wheel?  Is one or the other more intuitive or functional for you?

The underappreciated scaling of memory access costs hidden by the cult of Big-O

In any CS-101 textbook, you learn about the scaling of certain algorithms.  I’ve had a few conversations with several colleagues recently related to this topic, and how badly the “cult of complexity” can lead one astray when trying to scale real systems rather than ones in textbooks.  I wanted to write a bit about the topic, and share a simple program that demonstrates the realities.

This post sadly has nothing in particular to do with the giant fighting mecha anime, “Big O” from 1999… Actually, now that I think about it, scaling is bound to be a concern when making giant robots. But the square-cube law is a bit of a different topic than memory time complexity, so we’ll have to leave it for another day.

In the textbook, one algorithm might scale linearly — O(n).  Another might do better at O(log(n)), or have a constant number of operations regardless of working set size and be O(1), etc.  It’s a pretty straightforward way of keeping track of the terribleness of an algorithm.  But the key is that Big O notation only estimates the number of operations as n grows large, rather than the amount of time taken to run it.  There is a pretty natural and intuitive implicit assumption that makes Big O useful.  The assumption is that one read operation from memory is about as slow as any other.  So you just need to count up the number of read operations to get an idea of how much time you’ll spend waiting on memory.

This is, of course, pants-on-fire nonsense.
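A minimal Python sketch of the sort of demonstration I mean (not the actual program from the full post; the effect is far more dramatic in C, but even Python’s pointer-chasing usually shows it): two loops that perform exactly the same O(n) number of reads, in different orders.

```python
import random
import time

def sum_in_order(data, order):
    total = 0
    for i in order:
        total += data[i]
    return total

n = 1_000_000
data = list(range(n))
sequential = list(range(n))
shuffled = sequential[:]
random.shuffle(shuffled)

# Both loops perform exactly n reads, so they are identical in Big O
# terms, but the random-order walk defeats the cache and prefetcher.
t0 = time.perf_counter()
sum_in_order(data, sequential)
t1 = time.perf_counter()
sum_in_order(data, shuffled)
t2 = time.perf_counter()
print(f"sequential: {t1 - t0:.3f}s  shuffled: {t2 - t1:.3f}s")
```

Same operation count, same result, noticeably different wall time on real hardware.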

Continue reading

Lost in Userland (Talk at SCALE17x)

The slides for my talk at Scale17x are available here: In Google Slides

The synopsis is here: At the SCALE website

My repository with W-Utils, the “Worst Utils” versions of a few common userland utilities is here:  On GitHub . The implementations are all incomplete, proof of concept sorts of things, but they are useful examples that are much simplified compared to the “real thing” in GNU Coreutils.

A video of the talk is up on the SCALE YouTube page.  My talk in the “Ballroom C” video starts at 4:05:30.  The camera doesn’t cover the screen, so you may need to be clicking along with the slide deck if you want the full experience.  (The embedded video starts near the right time, but it doesn’t seem to want to seek to the exact spot, so you may need to move in time a little.)

Adventures in Userland I – Brown Paper Packages Tied Up in std::string, xz, ar, tar, gz, and spaghetti

I was recently sitting, staring at a progress bar, which is how very nerdy adventures start.


This photo of a bar popped up when I did a search for “progress bar.”  It’s rather more colorful and visually interesting than the actual progress bar I was staring at that inspired me to figure out what happens when you install a debian package.  Photo by Chris F on Pexels.com

The particular progress bar was telling me about the packages being installed as part of upgrading my workstation from Ubuntu 16.04 to the newer Ubuntu 18.04.  As the package names whizzed by, one after the other, the thing that annoyed me was that it took So. Damned. Long.  My day job often involves trying to understand why Linux systems don’t go as fast as I would like, so I naturally started firing up some basic utilities to see what was happening.  The most obvious thing to check is always CPU usage.  top showed me that my CPU cores were sitting almost entirely idle.  CPU usage is a metric that I often describe as convenient to measure, relatively easy to understand, and generally useless.  But it’s still a good place to start.  I wasn’t really surprised that the installation process wasn’t CPU bound, so I fired up iotop, which is a much more useful utility for seeing which processes on a system are I/O bound, and saw…  Nothing interesting.  And it was then that curiosity got the better of me.  If you count all the many servers I have caused package installations to happen on, I have probably installed many millions of debian packages over the years.  Some with Salt, others with apt-get, and some with dpkg, but I never really studied in detail exactly how the ecosystem worked.

I started by trying to figure out exactly what a debian package is.  It seems like a silly question, with a simple answer.  Of course, “a debian package is just a common standard ar archive,” as a friend of mine pointed out while I was talking to him.  But that sort of understates things.  First off, ar archives aren’t that common, or particularly standardised.  Ar archives are ‘common’ only as the format for static libraries, and debian packages.  They just aren’t common as general purpose archives, like tarballs or zip files.  Which is sort of interesting in its own right.

Let’s consider just how standard the format actually is…  Wikipedia has a good breakdown of the format.  Is the diagram on Wikipedia all we’d need to know to read a debian package?  Well, man 5 ar  notes “There have been at least four ar formats” and “No archive format is currently specified by any standard.  AT&T System V UNIX has historically distributed archives in a different format from all of the above.”  Eep, that’s not terribly promising.  Thankfully, debian packages are at least consistent among themselves in their Ar dialect, since they can generally be assumed to be made with the ar on a debian Linux distribution.

There’s a whole side-story here about how there is a C system header for reading ar archives in an old-school “read a struct” way.  But the format uses a slightly odd whitespace-padded text pattern, so getting trimmed filenames as C++ std::strings and integer values out of it is more of a pain in the neck than you’d hope.  There isn’t a good C++ library with a modern API for the format.  So I wrote a YAML definition for Kaitai Struct in order to have a convenient C++ API for reading it, and used the SPI Pystring library for some of the string manipulation.  In any event, I could read the format.  Yay, I could read a debian package myself!
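For illustration, here is a simplified pure-Python stand-in for that reader, handling just the common 60-byte-header ar layout (real tools cope with more dialects, and the GNU long-filename extensions are skipped entirely):

```python
AR_MAGIC = b"!<arch>\n"

def list_ar_members(path):
    """Yield (name, data) for each member of an ar archive.

    This handles only the common layout: an 8-byte global magic, then
    60-byte whitespace-padded text headers.  Debian packages built by
    a stock 'ar' should parse; GNU long filenames are not handled.
    """
    with open(path, "rb") as f:
        if f.read(8) != AR_MAGIC:
            raise ValueError("not an ar archive")
        while True:
            header = f.read(60)
            if len(header) < 60:
                return
            # Fields: name(16) mtime(12) uid(6) gid(6) mode(8) size(10) magic(2)
            name = header[0:16].decode("ascii").rstrip(" /")
            size = int(header[48:58].decode("ascii").strip())
            data = f.read(size)
            if size % 2:          # member data is padded to even length
                f.read(1)
            yield name, data
```

Running this over a real .deb should yield the debian-binary file, a control tarball, and a data tarball, in that order.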

A debian package consists of just three things when you unpack it.  A file called ‘debian-binary’ that tells you the version number of the format, and two tarballs: one with control metadata about the package and the other with the actual contents of the package.

At this point, anybody trying to write their own code to unpack a debian package in order to better understand the process will try and punch a wall.  Because we’ve just figured out how to write code to read this relatively uncommon Ar format, and the first thing we find inside of it is two tarballs, which is a completely different format!  Surely, we could have designed the package files to either be an Ar with Ar archives in it, or a tar file with tar files in it!  Well, okay, my friend’s assertion that I just needed to know about Ar archives was a lie, but I only need to know about two formats.  That’s not too bad.  Oh, well, tarballs are actually two formats unto themselves.  There’s a compression format, and then the actual tar archive.  So, you need to handle three file formats to install a debian package.  I have some code that will unpack the Ar layer, so let’s see which compression method is used on the tar files…


Wait, why aren’t they using the same compression?!

If you unpack the apturl-common package, you get the debian-binary file, and the data and control archives.  It’s totally arbitrary that I used apturl-common as a test file for my code; it just happened to be a package that I downloaded.  Other packages will vary slightly.

Wait, those two tar files have different compression formats.  One is a .gz file, and the other is a .xz!  And these aren’t just different formats from debian packages of different eras.  If, say, Ubuntu 12.04 packages used gz and Ubuntu 18.04 packages used xz, you would only need to support one or the other to install packages from any particular distribution.  As it turns out, there can be different compression formats inside a single package.  So to unpack and install a debian file, you actually need to support a few compression formats: let’s say xz, bz2, and gz at a minimum.  That brings the count to five different formats.  So, what’s in that control archive?
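(As an aside on juggling those compression formats: if you sketch this logic in Python rather than C++, the standard tarfile module will sniff gzip, bzip2, or xz for you, which collapses the compression question into a single call.)

```python
import tarfile

def list_tar_contents(path):
    # Mode "r:*" lets tarfile sniff the compression (gz, bz2, xz, or
    # uncompressed), so one code path covers the mixed formats found
    # inside a single .deb.
    with tarfile.open(path, mode="r:*") as tf:
        return tf.getnames()
```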

You get a few scripts: preinst, postinst, and prerm.  Those scripts get run when you would expect: before install, after install, and before removing the package if you uninstall it.  Languages like Python can be embedded in native applications, but shell scripts aren’t really intended to be used that way.  (And actually, if I were embedding Python today, I’d probably use PyBind11 instead of Boost.Python like I did in my old blog post.  But that’s neither here nor there.)  So, if you are trying to implement something to install the packages, you can pass on being responsible for running the scripts in-process, and just shell out to do it.  (Writing a shell is definitely at least a whole other blog post unto itself.)  You also have files called md5sums, control, and conffiles.  conffiles is just a newline-separated list of the files that the package uses for configuration, so the install program can warn you about merging local changes during install.  It’s barely a file format, so we’ll count it as half.  md5sums is a listing of checksums of all the files in the content archive (the one called “data”), in the same format the md5sum utility produces.

b25977509ca6665bd7f390db59555b92  usr/bin/apturl 
da0e92f4f035935dc8cacbba395818f2  usr/lib/python3/dist-packages/AptUrl/AptUrl.py 
2c645156bfd8c963600cd7aed5d0fc0b  usr/lib/python3/dist-packages/AptUrl/Helpers.py 
927320b1041af741eb41557f607046a7  usr/lib/python3/dist-packages/AptUrl/Parser.py 
b697ac30c6e945c0d80426a8a4205ef8  usr/lib/python3/dist-packages/AptUrl/UI.py 
d41d8cd98f00b204e9800998ecf8427e  usr/lib/python3/dist-packages/AptUrl/Version.py 
d41d8cd98f00b204e9800998ecf8427e  usr/lib/python3/dist-packages/AptUrl/__init__.py 
a8f4538391be3cd2ecac685fe98b8bca  usr/lib/python3/dist-packages/apturl-0.5.2.egg-info 
4bd6e933c4d337fdb27eee28abbd289d  usr/share/applications/apturl.desktop 
3824814ef04af582f716067990b7808f  usr/share/doc/apturl-common/changelog.gz 
2ae15dd4b643380e1fbb9c44cf8e9c54  usr/share/doc/apturl-common/copyright 
019ea97889973f086dfd4af9d82cf2fb  usr/share/kde4/services/apt+http.protocol

This is also a pretty simple format, but you need to split on the two spaces after the hash, while correctly handling the possibility of things like spaces in filenames.  (And I’m not entirely sure what you do if you have a newline in a filename, which is possible, in these simple formats.)  So we are up to six and a half file formats.
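A sketch of parsing those md5sums lines in Python, slicing at fixed offsets so that spaces inside filenames survive:

```python
def parse_md5sums(text):
    """Parse md5sum-style lines into a {filename: hexdigest} dict.

    The MD5 digest is always exactly 32 hex characters followed by
    two spaces, so slicing at fixed offsets keeps any spaces inside
    the filename intact.  (A newline in a filename would still defeat
    this simple line-based approach, as noted above.)
    """
    sums = {}
    for line in text.splitlines():
        line = line.rstrip()          # tolerate trailing whitespace
        if len(line) > 34:
            sums[line[34:]] = line[:32]
    return sums
```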

Package: apturl-common 
Source: apturl 
Version: 0.5.2ubuntu11.2 
Architecture: amd64 
Maintainer: Michael Vogt <mvo@ubuntu.com> 
Installed-Size: 168 
Depends: python3:any (>= 3.3.2-2~), python3-apt, python3-update-manager 
Replaces: apturl (<< 0.3.6ubuntu2) 
Section: admin 
Priority: optional 
Description: install packages using the apt protocol - common data 
 AptUrl is a simple graphical application that takes an URL (which follows the 
 apt-protocol) as a command line option, parses it and carries out the 
 operations that the URL describes (that is, it asks the user if he wants the 
 indicated packages to be installed and if the answer is positive does so for 
 This package contains the common data shared between the frontends.

The “control” file is yet another text file, but the format is different from conffiles or md5sums.  We are now up to seven and a half file formats.  Which is surely a far cry from the original “you just need to know the Ar format!” that I got as received wisdom when I first fell into this rabbit hole.
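For completeness, a Python sketch of parsing that stanza format, including the space-prefixed continuation lines that the Description field uses:

```python
def parse_control(text):
    """Parse one Debian control stanza into a dict of fields.

    The format is 'Key: value' lines; a line beginning with a space
    continues the previous field (the multi-line Description above).
    """
    fields = {}
    key = None
    for line in text.splitlines():
        if line.startswith(" ") and key is not None:
            fields[key] += "\n" + line.strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            fields[key] = value.strip()
    return fields
```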

On the bright side, this does give us enough information to unpack and install the data in the package.  (And I’d like to complain how vague a name “data” is for the archive with the actual contents.  As if the rest of the package was somehow something other than data!)  But we still haven’t covered any of the local database that keeps track of what packages are available, what’s installed, how dependency resolution works, etc.  Some of that will have to wait for another blog post.  It’s certainly enough content that the original progress bar that inspired me finished what it was doing long before I made it this far with my own code.

Learning how to unpack packages wound up just being the first steps of a project to try and do my own simple implementations of a whole raft of common UNIX command line utilities that I depend on every day.  Trying to implement a useful subset of a complete userland is what inspired the blog post’s title, “Adventures in Userland.”  The UNIX userland is full of fascinating history, layers of cruft, clever design, and features you never even realised were there.  Even implementing my own cat turned out to be an interesting project, despite how simple that utility seems.  I am hoping to make time to document some of the things I learned while poking around the things I have long taken for granted, and how shaky and wobbly some of the underpinnings of modern state of the art cloud and container systems are.

Convenient modern C++ APIs for things like machine learning and image processing are easy to find, but not so much for things like .debs and .tars.  The utilities in GNU coreutils sometimes have surprising limitations, and some files haven’t had any commits since Star Trek: The Next Generation was in first run.  I think it’s fair to say some of that stuff is about due for a fresh look.

Don’t Dereference Symlinks

Don’t Be That Guy

If your application dereferences symlinks by default, you are a jerk.  Your software is bad, and you should feel bad.  Why do you hate your users?

Won’t Someone Think Of The Users?

On OS X in the Finder, there is a neat pane on the left where you can bookmark your favorite places to get to them quickly and easily.  Just drag a folder into it, and you can get to it from any Finder window.  It’s super convenient.  Unless, of course, you make a symbolic link.  Which is basically just another concept for an easy way to get to another place.

If you create a symlink, and then try to add it as a Favorite, Finder will dereference the link, and favorite what the link points to rather than the link itself.  This is evil.  It’s not what the user asked for!  It’s an extreme violation of the Principle Of Least Surprise.  The implicit contract between the user and the system is that if I favorite something, clicking on the thing and the favorite will always take me to the same place.  The favorite represents the thing I was dragging into the Favorites bar, not whatever it may have been pointing at.  If I ever change where the symlink points, the favorite and the symlink will now be doing two different things.  For no obvious reason!
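In code terms, this is the difference between looking at a path and resolving it.  A small Python sketch (the directory names are made up to echo the screenshot):

```python
import os
import tempfile

def bookmark_target(path):
    # Store the path the user actually gave us, and only resolve a
    # symlink at the moment it is opened -- never at bookmark time.
    if os.path.islink(path):
        return path
    return os.path.abspath(path)

# Hypothetical layout echoing the screenshot's names:
tmp = tempfile.mkdtemp()
real = os.path.join(tmp, "Documents2019")
link = os.path.join(tmp, "ThisIsWhereDocsGo")
os.mkdir(real)
os.symlink(real, link)

# realpath() dereferences: bookmarking this bakes in today's target.
print(os.path.realpath(link))
# Keeping the link itself means the bookmark follows any retargeting.
print(bookmark_target(link))
```

If the symlink is later pointed somewhere else, the second bookmark follows it; the first one silently keeps pointing at the old directory.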

ThisIsWhereDocsGo is a symlink, so when you try to make a favorite out of it, it won’t have the name of the thing you tried to favorite. What name will it have? Impossible to say from what the UI shows you!

Continue reading

Python Script Editor with HTML Output

File this one under Stupid Python Tricks.  I have written a bit about working on an app with an embedded Python run time.  It’s good fun.  I recently added a new feature to the script editor that was relatively easy, but for some reason isn’t very common.  There are a few small quirks to making a script editor do this, so in case anybody is curious how to do it in their own app, this is how I did it.

Behold the rich text glory!  HTML output from Python directly in the script editor!  It really is like Christmas!

Continue reading

Installer Downloader Installer Downloaders

I couldn’t figure out how to fit this into a 140 character tweet, so now it’s a blog post.  Recently on a mailing list that I am subscribed to, a friend and former coworker posted:

Went to download the Unity 5 updates, what I ended up with was:


I can pretty much guarantee that at no time in the history of electronic software distribution has anyone ever said “gee, I really wish this application had its own custom downloader, because those guys at Microsoft / Apple / Mozilla / Google clearly don’t know what they are doing with those web browser things, and don’t even get me started about those curl / wget people…”. I feel slightly better now.

And I had unfortunately spent the morning wrestling with Adobe CC, so I chimed in with my response…