Of course your JIRA backlog keeps growing. It’s math.

Programmers generally understand mathematical limits, at least in terms of “Big O” notation for complexity, even if they don’t love digging into the formal treatment of limits from calculus class. Some algorithms scale better than others. Knowing how things scale is an important part of being able to shout buzzwords like “Web Scale!” in meetings. Scaling organizations is at least as hard as scaling technology.

Stolen from Twitter.
We all learned what exponential growth is during the pandemic.

What is the limit of the logarithmic function log(t) as t approaches infinity? (The limit does not exist / infinity.)

What is the limit of the exponential function e^t as t approaches infinity? (The limit does not exist / infinity. But it grows much faster than the log function.)

What is the limit of the constant function C as t approaches infinity? (The limit is C.)

And finally… It costs log(t) to remove an item from a queue. It costs C to add an item to that queue. What is the upper bound on storage required for that queue to hold all items as t approaches infinity?

Even junior developers have no problem answering these questions. But empirically, it seems like architects and team leads have no idea. People are often surprised at how much Jira queues tend to grow over time, and they don’t expect to have to scale organizations over time the way they scale technology. The math above is more than enough to explain why, in the long term, you will never ever catch up on your Jira queue. Let’s dig in.
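To make that arithmetic concrete, here is a toy simulation (my own illustration; the budget, costs, and arrival rate are all arbitrary assumptions, not measurements). Each tick, the team has a fixed budget of work units. Filing a ticket always costs the same constant C, but closing one costs log(t), so as the organization ages, the number of tickets the team can afford to close per tick shrinks:

```python
import math

# Toy backlog simulation. All numbers are arbitrary assumptions.
BUDGET = 10.0    # work units available per tick
C = 1.0          # constant cost to file one ticket
ARRIVALS = 5     # tickets filed per tick

backlog = 0
for t in range(2, 100_000):
    backlog += ARRIVALS                      # filing is cheap, so it always happens
    work_left = BUDGET - ARRIVALS * C        # whatever budget remains goes to closing
    close_cost = math.log(t)                 # closing a ticket gets pricier over time
    closed = int(work_left // close_cost)    # tickets we can afford to close this tick
    backlog = max(0, backlog - closed)

print(backlog)  # huge: once log(t) exceeds the leftover budget, nothing gets closed
```

Early on, log(t) is small and the team keeps up; past a certain t, the number of closable tickets rounds down to zero and every new ticket is permanent. The constant-versus-logarithmic mismatch, not laziness, is what guarantees the queue never drains.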

Continue reading

Vulkan Unit Tests with RenderDoc

edit to add: The very helpful author of RenderDoc told me on Twitter that the command line capture utility is meant as an internal thing, and not for general use. So don’t complain if this breaks in the future. Also, he said that my “--opt-ref-all-resources” is probably superfluous in most cases and will generally bloat the capture for no benefit.

One of the things I really like about Vulkan vs. OpenGL is that Vulkan is “offscreen by default” while OpenGL depends on the windowing system. With Vulkan, you can just allocate some space for an image, and draw/compute to that image. You can pick which specific device you are using, or do it once for every device. If you want to display the image, blit it to the swapchain. With OpenGL, you tend to need to open a Window with certain options, initialize OpenGL and create a Context using that Window handle, and use one of several extensions for offscreen rendering. (Probably FBO, maybe PBO. It’s an archaeological expedition through revisions of the spec and outdated documentation to be sure exactly what to do.)

Ceci n’est pas une teste.

Consequently, it is muuuuuch easier to make a “little” Vulkan application (that admittedly depends on a not-so-little engine/library with a bunch of boilerplate and convenience code) that does one thing off screen and exits without needing to pop up anything in the GUI as part of the process.

This naturally raises the question… How are you sure that little utility actually does what you told it to?

Continue reading

Okay, but what if we just made static linking better?

Every once in a while, somebody gets so frustrated by the modern dynamic linking ecosystem that they suggest just throwing it out entirely and going back to static linking everything. Seriously, the topic comes up occasionally in some very nerdy circles. I was inspired to write this blog post by a recent conversation on Twitter.

I got this diagram from another blog.

So, what’s dynamic linking? Why is it good? Why is it bad? Is it actually bad? What’s static linking, and how did things work before dynamic linking? Why did we mostly abandon it? And… What would static linking look like if we “invented” it today with some lessons learned from the era of dynamic linking? (And no, I am not talking about containers and Docker images when I say modern static linking. This isn’t one of those blog posts where we just treat a container image as morally equivalent to a static linked binary. Though I have some sympathy for that madness given that the modern software stack often feels un-fixable.)

Continue reading

Electron is fine, or it will be tomorrow, thanks to Moore’s Law, right?

Electron is somewhat controversial as an application development framework. Some people complain that the applications that use it aren’t very efficient. Others say those applications just use RAM that would otherwise sit idle, so there’s no sense letting it go to waste. So, which is it? Is the tradeoff of developer time for efficiency worthwhile?

We take it as an article of faith that newer, cheaper, better, faster machines come out every year.

Sure, Moore’s Law doesn’t give the gains it once did. And sure, I am looking at buying a laptop today that has literally the exact same amount of RAM as my 10 year old desktop that seems to have finally died. And sure, modern laptops make RAM upgrades impossible because the RAM is soldered on, so I’ll have the same amount of RAM over the lifespan of the new laptop. All told, that means I’ll have the same amount of RAM in my main computer for somewhere between 10 and 15 years, depending on how long the new laptop lasts.

But faith says I’ll have more RAM over time no matter what!

Continue reading

The Corporate Event Horizon

We all like to think the whole Universe is centered on us. And thanks to relativity and whatnot, the whole observable universe really is centered on the observer! As far as we can tell, the universe is constantly expanding, and every observer sees themselves as being at the center of that expansion.

Books always use raisins in a rising loaf of bread to explain the expansion of the universe not having a center. Despite the fact that bread does have a center. So, here’s some raisin bread. Now you understand relativity 100%. Congrats.

You can’t see everything — you can only see a subset of the whole universe. Light doesn’t travel infinitely fast, and the universe isn’t infinitely old. That means you can only see a tiny bubble of only about 15 billion light years around yourself. Aliens living in a distant galaxy halfway across the universe can likewise only see a 15 billion light year bubble around themselves. And if those aliens are more than 15 billion light years away from us, we can’t see them and they can’t see us.

To clarify that a bit, it’s not just that we can’t see them with our eyeballs or our current telescopes — we can’t have any sort of interaction with them. We can’t ever have had any sort of interaction with them at any point in the past. And the things we are doing today can’t possibly be affected by any sort of interaction with those distant aliens. That is to say, they are beyond a metaphorical horizon, beyond which events don’t matter to us and can’t be observed by us. Just like how you can’t see stuff below the Earth’s horizon. Hence, an Event Horizon.

This blog post has nothing to do with the 1990s sci-fi horror film.

So, why am I trying to shoehorn the concept of an Event Horizon into corporate life? When a corporation is small, everybody shares an event horizon. Three engineers wedged into one work room in a startup will hear each other’s conversations. When something breaks, they’ll all know about it. They all observe the exact same bubble in the universe. Their shared universe is small. And they can readily reach consensus about what’s important, what’s broken, and what’s working. They may disagree about what to do next, or about which Linux distribution or programming language is best, etc. But they exist in a shared universe and a shared understanding of basic facts. That shared universe cannot last forever. And the inevitable collapse of the shared perspective leads to all sorts of trouble that is, by its nature, impossible to observe directly.

Continue reading

The Vulkan-HPP Dilemma

The Vulkan API is C. It’s uncommon to write full applications in only C these days — C++ is far more common. If you are writing C++, it makes sense to use the Vulkan-Hpp C++ wrapper, which adds type safety, exceptions, and optional automatic RAII-style scoped memory management with “Unique” data types. Vulkan is hard enough to use without that syntax sugar sprinkled on top, so if you are using C++ and it is practical, your application should absolutely use the C++ binding types to take advantage of objectively good things like type safety and having cleanup work properly. (Or not?)

Continue reading

The Wheel of Reinvention. Or, the Eight Steps to get to Step One.

The wheel of reinvention is a term that I most closely associate with the history of computer graphics, but the idea is pretty universal.  Technology is cyclical, no matter the specific field. All software needs to be a bit more flexible than you originally thought, and it eventually spawns more software because the solution to the problem of a computer program you don’t like is almost always another computer program that is even less carefully constructed.

Maybe Liz Lemon’s boyfriend on 30 Rock was right, after all.

This is basically my take on the Configuration Complexity Clock. I’m not the first person to write about the topic. And since it’s practically a natural law, like a gravitational pull, I certainly won’t be the last to deal with it, notice it, or write about it. But it is something that popped up on my radar again recently, so I wanted to put it in my own words. Let’s look at the whole cycle of terrible, crippling success after terrible success…

Continue reading

First thoughts on the Vulkan Video APIs in the context of a post-production pipeline

Vulkan has some new extensions for decoding video. Like everything with Vulkan, they seem to be kind of a pain / kind of awesome. I don’t have experience using them in practice yet, but I have poked through the extensions and some sample code.

So… Disclaimer: I’m not *good* at Vulkan. At best, I know just barely enough to have an opinion. A ton of people have spent way more time with the API than I have, and they have done way more interesting things with it than me. That said, a lot of the information out there is focused on gamedev. Vulkan has a ton of functionality that can be handy for offline image processing type tasks, but you have to figure out some of the details yourself. Since I have been playing with Vulkan, a few people have asked me questions about it, and I have started making some notes that I figured I may as well share in case they prove useful to anybody going down this path. The target audience here is admittedly very narrow — people who know enough about Vulkan to want to do stuff with it, but not enough to just go read the extension specifications themselves.

Continue reading

Color Wheels

(This started as a post many years ago on an old blog. In those days it was fashionable to install every plugin on your self-hosted WordPress instance, and then never update any of it, so naturally that blog needed to get taken down. A colleague reminded me of this in a conversation, so I decided to dig it out of an old MySQL backup, since I don’t think anybody else on the Internet was ever bored enough to document this many color wheel widgets in various applications. At some point perhaps I’ll more actively revisit the topic with some screenshots of software from the current decade. For now, the post is mainly interesting for having documented long-obsolete color wheels like Shake and “Apple Color” from the old Final Cut Studio.)

Have you ever noticed how many variations there are on color wheels?  It’s basically a very simple idea.  Different colors go around the edge of a circle.  Different saturations go from the edge to the center.  Most saturated is at the outermost edge of the circle.  Easy, right?  Couldn’t be simpler.  So you’d think…  While making a color adjuster widget for an application recently, I have been pondering them with more attention than I ever thought I would.  Here are some well known examples of color wheels and color wheel type adjuster widgets…  (Click images to get them full-sized, in all their utilitarian glory.)
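That mapping from a point in the circle to a color is simple enough to sketch in a few lines of Python (this is a minimal sketch of the general idea, not the code behind any widget shown here). Hue comes from the angle, saturation from the radius, and the standard library’s colorsys handles the HSV-to-RGB conversion. Using atan2 directly puts red along +X with hue increasing counter-clockwise, which is the Shake-style layout:

```python
import colorsys
import math

def wheel_to_rgb(x, y, value=1.0):
    """Map a point in a unit-radius color wheel (centered at the origin) to RGB."""
    radius = math.hypot(x, y)
    if radius > 1.0:
        return None                                   # outside the wheel
    hue = (math.atan2(y, x) / (2 * math.pi)) % 1.0    # angle: 0.0 is red, along +X
    saturation = radius                               # linear falloff, most saturated at the edge
    return colorsys.hsv_to_rgb(hue, saturation, value)

print(wheel_to_rgb(1.0, 0.0))  # (1.0, 0.0, 0.0): fully saturated red at +X
print(wheel_to_rgb(0.0, 0.0))  # (1.0, 1.0, 1.0): neutral at the center
```

Flipping the sign of y, or adding a fixed offset to the angle, reproduces the other orientations described below without touching the rest of the code.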

Color wheel from Shake’s Colorwheel node. Note the red along +X and green to top-left.
Shake Color editor widget. It uses basically the same wheel plus extra widgets. There is an odd antialiasing issue around the border of the color wheel in the widget that is not present in the one generated by the node, so for some reason the two are not identical, despite using the same layout and orientation.

This Shake wheel widget is ‘live.’  So, as you adjust the value slider, it will get darker or lighter.  If value is set to 0, you will just see a big black square with no wheel in it, which can be counter-intuitive the first time you try to select a color.

The first version of my own color wheel widget generally matches up well with Shake’s, but the difference between the two is not exactly zero. As near as I can figure, Shake and I use slightly different math for transforming between HSV and RGB color spaces.  The difference is what inspired me to pay more attention to the variations.
Nuke PLE Color Wheel node. (Ignore the confetti – that’s just because I took the screenshot from the free demo.) Red is still along +X, but green is at the bottom-left, so Nuke uses a vertically flipped color wheel compared to Shake.  As you rotate clockwise around the Nuke wheel, you get the same colors on the Shake wheel by going counter-clockwise.
The Nuke Color Picker. It uses a 70% value band around the color wheel so that you can always see the colors, and a ‘live’ inner wheel that darkens and lightens as you adjust the value slider.

For some reason, the wheel in the Nuke color picker uses Shake ordering for the colors, despite the fact that the color wheel node doesn’t. (It’s shown in thumbnail in the DAG view to the right of the color picker window in case you don’t believe me.)  The button you click to bring up the color picker has an image on it with the same ordering as the node, despite the fact that this means the wheel on the button doesn’t match the wheel on the window it brings up. Also notable are the radius-circle and vector line in the color wheel pointing out exactly where the current selected color is.

The Final Cut Pro 3-Way Color Corrector. These wheels have the biggest break from what we’ve seen so far. The colors go in the same clockwise/counterclockwise order as in Shake, but the orientation is such that Red isn’t on a major axis. Nothing specifically seems to be oriented to a major axis, but red has been rotated so that it is up and a little bit left. Also, the saturation falloff is no longer linear. It appears to be a quadratic falloff based on Saturation = Radius Squared. This results in a much larger area in the middle which is completely blown out and almost colorless.
Apple Color’s Primary Color Corrector. In contrast to the bright white blown out region of the FCP 3-Way wheels, the entire center of this color widget is the dark, neutral background color. The actual color is only visible at the rim of the wheel. We also have another new orientation. Colors are the same clockwise order as Shake, but rotated -90 degrees so that red is now situated at the +Y axis.
Screen shot of a later version of my own work.

That last one was inspired in part by the visual softness of the FCP wheels with their nonlinear saturation falloff: I am experimenting with a biased cubic falloff for the saturation in my color wheels.  This results in a slightly smoother appearance than straight linear falloff, but the biasing prevents the center from being completely blown out, so you can always see what you are doing, even when you zoom in for very fine adjustments and can no longer see the fully saturated edges.  It’s intended as a color adjuster for color correction, rather than a true color picker, so it wasn’t important to me to have the color of the clicked spot map precisely to the resulting color.  Also new in this version is that the widget paints itself with the selected color as the background, giving you a local, live preview as you adjust.
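For comparison, here are the three saturation falloffs as simple radius-to-saturation functions. These are my own guesses at the formulas, and in particular the `bias` parameter of the cubic version is a hypothetical knob, not anything taken from actual widget code:

```python
def linear_falloff(r):
    """Shake-style: saturation equals radius."""
    return r

def quadratic_falloff(r):
    """FCP 3-way style: saturation = radius squared, blowing out a large center region."""
    return r * r

def biased_cubic_falloff(r, bias=0.15):
    """Cubic falloff lifted by a small bias so the center never goes fully colorless."""
    return min(1.0, bias + (1.0 - bias) * r ** 3)

for r in (0.0, 0.5, 1.0):
    print(linear_falloff(r), quadratic_falloff(r), biased_cubic_falloff(r))
```

Near the center, the quadratic curve sits well below the linear one, which is exactly why the FCP wheels look washed out in the middle; the bias term keeps the cubic curve from ever reaching zero saturation.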

Personally, I think red along +X makes the most sense from a correctness standpoint.  In mathematics, we define a unit circle such that this is the direction of zero degrees.  In HSV color space, we define zero degrees as red.  It seems like a simple, learnable convention.  I can’t see why having red in some other direction would specifically be more intuitive or easier for the user, but I’m willing to be proven wrong if somebody has a good argument.  Without that, I’ll stick to +X = red for reasons of comfort with the mathematics.

As far as order around the circle, the unit circle in mathematics goes counter-clockwise from +X.  In optics, we learn the Roy G Biv color ordering according to the frequency of colors in the spectrum.  Logically, we should match increasing frequency to increasing angle.  Consequently, orange should be next after red as you go counter-clockwise.  This is a “green up” or “Shake” orientation.  (And contrary to the most recent screenshot I have posted of my own widget.  It’s still a work in progress…)

I therefore declare that a Standard Color Wheel ought to be red at +X, and green at top-left.  So, why are there so many variations when it seems like there is a correct answer?  I dunno.  I guess a lot of these color wheels were made pretty much independently by people who weren’t particularly concerned about matching up with some other standard.  Some may have intentionally wanted to differentiate themselves from existing color wheels that they had seen just for the sake of novelty.  Most of the details are presumably now lost to time.

So, just from the apps that I had handy to check on, there are four different layouts for the colors, and four different saturation calculations for the middle.  What do you prefer?  Do you know of any other interesting variations on a simple color wheel?  Is one or the other more intuitive or functional for you?

The underappreciated scaling of memory access costs hidden by the cult of Big-O

In any CS-101 textbook, you learn about the scaling of certain algorithms.  I’ve had a few conversations with several colleagues recently related to this topic, and how badly the “cult of complexity” can lead one astray when trying to scale real systems rather than ones in textbooks.  I wanted to write a bit about the topic, and share a simple program that demonstrates the realities.

This post sadly has nothing in particular to do with the giant fighting mecha anime, “Big O” from 1999… Actually, now that I think about it, scaling is bound to be a concern when making giant robots. But the square-cube law is a bit of a different topic than memory time complexity, so we’ll have to leave it for another day.

In the textbook, one algorithm might scale linearly — O(n).  Another might do better at O(log(n)), or have a constant number of operations regardless of working set size and be O(1), etc.  It’s a pretty straightforward way of keeping track of the terribleness of an algorithm.  But the key is that Big O notation only estimates the number of operations as n grows large, rather than the amount of time taken to run it.  There is a pretty natural and intuitive implicit assumption that makes Big O useful.  The assumption is that one read operation from memory is about as slow as any other.  So you just need to count up the number of read operations to get an idea of how much time you’ll spend waiting on memory.

This is, of course, pants-on-fire nonsense.
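The effect is easy to reproduce. The sketch below is my own (in Python; the gap is far more dramatic in C, but it is usually still measurable here): it touches the same array of integers once each, first in sequential order and then in a shuffled order. By the Big O yardstick, both loops do exactly n reads, yet the random walk defeats the caches and the prefetcher:

```python
import random
import time

N = 1_000_000
data = list(range(N))
sequential = list(range(N))
shuffled = sequential[:]
random.shuffle(shuffled)

def touch(order):
    """Sum data[] in the given visit order, timing the traversal."""
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_time = touch(sequential)
rnd_total, rnd_time = touch(shuffled)
assert seq_total == rnd_total  # identical work, as far as Big O is concerned
print(f"sequential: {seq_time:.3f}s  random: {rnd_time:.3f}s")
```

Both traversals are O(n) and compute the same sum; only the access pattern differs. Cranking N up widens the gap, because less and less of the working set fits in each level of the cache.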

Continue reading