Edit to add: The very helpful author of RenderDoc told me on Twitter that the command-line capture utility is meant as an internal thing, and not for general use. So don’t complain if this breaks in the future. He also said that my “--opt-ref-all-resources” is probably superfluous in most cases and will generally bloat the capture for no benefit.
One of the things I really like about Vulkan vs. OpenGL is that Vulkan is “offscreen by default” while OpenGL depends on the windowing system. With Vulkan, you can just allocate some space for an image, and draw/compute to that image. You can pick which specific device you are using, or do it once for every device. If you want to display the image, blit it to the swapchain. With OpenGL, you tend to need to open a Window with certain options, initialize OpenGL and create a Context using that Window handle, and use one of several extensions for offscreen rendering. (Probably FBO, maybe PBO. It’s an archaeological expedition through revisions of the spec and outdated documentation to be sure exactly what to do.)
Consequently, it is muuuuuch easier to make a “little” Vulkan application (that admittedly depends on a not-so-little engine/library with a bunch of boilerplate and convenience code) that does one thing off screen and exits without needing to pop up anything in the GUI as a part of the process.
This naturally raises the question… How are you sure that little utility actually does what you told it to?
Every once in a while, somebody gets so frustrated by the modern dynamic linking ecosystem that they suggest just throwing it out entirely and going back to static linking everything. Seriously, the topic comes up occasionally in some very nerdy circles. I was inspired to write this blog post by a recent conversation on Twitter.
So, what’s dynamic linking? Why is it good? Why is it bad? Is it actually bad? What’s static linking, and how did things work before dynamic linking? Why did we mostly abandon it? And… What would static linking look like if we “invented” it today with some lessons learned from the era of dynamic linking? (And no, I am not talking about containers and Docker images when I say modern static linking. This isn’t one of those blog posts where we just treat a container image as morally equivalent to a static linked binary. Though I have some sympathy for that madness given that the modern software stack often feels un-fixable.)
Electron is somewhat controversial as an application development framework. Some people complain that the applications that use it aren’t very efficient. Others say they just use RAM, and there’s no sense letting it go to waste. So, which is it? Is the tradeoff of developer time for efficiency worthwhile?
We take it as an article of faith that newer, cheaper, better, faster machines come out every year.
Sure, Moore’s Law doesn’t give the gains it once did. And sure, I am looking at buying a laptop today that has literally the exact same amount of RAM as my ten-year-old desktop that seems to finally have died. And sure, modern laptops make RAM upgrades impossible because the RAM is soldered on, so I’ll be stuck with the same amount of RAM over the lifespan of the laptop. That means I’ll have the same amount of RAM in my main computer for somewhere between ten and fifteen years, depending on how long the new laptop lasts.
But faith says I’ll have more RAM over time no matter what!
We all like to think the whole Universe is centered on us. And thanks to relativity and whatnot, the whole observable universe really is centered on the observer! As far as we can tell, the universe is constantly expanding, and every observer sees themselves as being at the center of that expansion.
You can’t see everything; you can only see a subset of the whole universe. Light doesn’t travel infinitely fast, and the universe isn’t infinitely old. That means you can only see a tiny bubble of only about 15 billion light years around yourself. Aliens living in a distant galaxy halfway across the universe can likewise only see a 15 billion light year bubble around themselves. And if those aliens are more than 15 billion light years away from us, we can’t see them and they can’t see us.
To clarify that a bit, it’s not just that we can’t see them with our eyeballs or our current telescopes; we can’t have any sort of interaction with them. We can’t ever have had any sort of interaction with them at any point in the past. And the things we are doing today can’t possibly be affected by any sort of interaction with those distant aliens. That is to say, they are beyond a metaphorical horizon, beyond which the events don’t matter to us and can’t be observed by us. Just like how you can’t see stuff below the Earth’s horizon. Hence, an Event Horizon.
So, why am I trying to shoehorn the concept of an Event Horizon into corporate life? When a corporation is small, everybody shares an event horizon. Three engineers wedged into one work room in a startup will hear each other’s conversations. When something breaks, they’ll all know about it. They all observe the exact same bubble in the universe. Their shared universe is small. And they can readily reach consensus about what’s important, what’s broken, and what’s working. They may disagree about what to do next, which Linux distribution or programming language is best, etc. But they exist in a shared universe and a shared understanding of basic facts. That shared universe cannot last forever. And the inevitable collapse of the shared perspective leads to all sorts of trouble that is, by its nature, impossible to observe directly.
The Vulkan API is C. It’s uncommon to write full applications in only C these days; C++ is far more common. If you are writing C++, it makes sense to use the C++ wrapper, Vulkan-Hpp, which adds type safety, exceptions, and optional automatic RAII-style scoped memory management with “Unique” data types. Vulkan is hard enough to use without that syntax sugar sprinkled on top, so if you are using C++ and it is practical, your application should absolutely use the C++ binding types to take advantage of objectively good things like type safety, and having cleanup work properly. (Or not?)
The wheel of reinvention is a term that I most closely associate with the history of computer graphics, but the idea is pretty universal. Technology is cyclical, no matter the specific field. All software needs to be a bit more flexible than you originally thought, and it eventually spawns more software because the solution to the problem of a computer program you don’t like is almost always another computer program that is even less carefully constructed.
This is basically my take on the Configuration Complexity Clock. I’m not the first person to write about the topic. And since it’s practically a Natural law, like a gravitational pull, I certainly won’t be the last to deal with it, notice it, or write about it. But it is something that popped up on my radar again recently, so I wanted to put it in my own words. Let’s look at the whole cycle of terrible crippling success after terrible success…
Vulkan has some new extensions for decoding video. Like everything with Vulkan, they seem to be kind of a pain / kind of awesome. I don’t have experience using them in practice yet, but I have poked through the extensions and some sample code.
So… Disclaimer: I’m not *good* at Vulkan. At best, I know just barely enough to have an opinion. A ton of people have spent way more time with the API than I have, and they have done way more interesting things than I have. That said, a lot of the information out there is focused on gamedev. Vulkan has a ton of functionality that can be handy for offline image processing type tasks, but you have to figure out some of the details yourself. Since I have been playing with Vulkan, a few people have asked me questions about it, so I started making some notes that I figured I may as well share in case they prove useful to anybody going down this path. The target audience here is admittedly very narrow: people who know enough about Vulkan to want to do stuff with it, but not enough to just go read the extension specifications themselves.
(This started as a post many years ago on an old blog. In those days it was fashionable to install every plugin on your self-hosted WordPress instance, and then never update any of it, so naturally that blog needed to get taken down. A colleague reminded me of this in a conversation, so I decided to dig it out of an old MySQL backup, since I don’t think anybody else on the Internet was ever bored enough to document this many color wheel widgets in various applications. At some point perhaps I’ll more actively revisit the topic with some screenshots of software from the current decade. At this point the post is mainly interesting for having documented long obsolete color wheels like Shake and “Apple Color” from the old Final Cut Studio.)
Have you ever noticed how many variations there are on color wheels? It’s basically a very simple idea. Different colors go around the edge of a circle. Different saturations go from the edge to the center. Most saturated is at the outermost edge of the circle. Easy, right? Couldn’t be simpler. So you’d think… While making a color adjuster widget for an application recently, I have been pondering them with more attention than I ever thought I would. Here are some well known examples of color wheels and color wheel type adjuster widgets… (Click images to get them full-sized, in all their utilitarian glory.)
This Shake wheel widget is ‘live.’ So, as you adjust the value slider, it will get darker or lighter. If value is set to 0, you will just see a big black square with no wheel in it, which can be counter-intuitive the first time you try to select a color.
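That black-square behavior is just HSV arithmetic: with the value pinned to 0, every hue/saturation combination collapses to black. A quick check with Python’s standard colorsys module illustrates it:

```python
import colorsys

# With value = 0, every (hue, saturation) pair maps to pure black,
# which is why a "live" wheel disappears into a black square.
for hue in (0.0, 0.25, 0.5, 0.75):
    for sat in (0.0, 0.5, 1.0):
        assert colorsys.hsv_to_rgb(hue, sat, 0.0) == (0.0, 0.0, 0.0)

# With any nonzero value, the wheel reappears:
r, g, b = colorsys.hsv_to_rgb(0.0, 1.0, 1.0)
print(r, g, b)  # fully saturated red: 1.0 0.0 0.0
```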
For some reason, the wheel in the Nuke color picker uses Shake ordering for the colors, despite the fact that the color wheel node doesn’t. (It’s shown as a thumbnail in the DAG view to the right of the color picker window, in case you don’t believe me.) The button you click to bring up the color picker has an image on it with the same ordering as the node, despite the fact that this means the wheel on the button doesn’t match the wheel on the window it brings up. Also notable are the radius-circle and vector line in the color wheel pointing out exactly where the current selected color is.
That last one was inspired in part by the visual softness of the FCP wheels, with their nonlinear saturation falloff: I am experimenting with a biased cubic falloff for the saturation in my color wheels. This results in a slightly smoother appearance than straight linear falloff, but the biasing prevents the center from being completely blown out, so you can always see what you are doing, even if you zoom in on the widget for very fine adjustments and can no longer see the fully saturated edges. It’s intended as a color adjuster for color correction rather than a true color picker, so it wasn’t important to me to have the color of the clicked spot map precisely to the resulting color. Also new in this version: the widget paints itself with the selected color as the background, giving you a local, live preview as you adjust.
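As a rough sketch of the biased cubic idea (the bias constant and the exact blend here are illustrative assumptions, not necessarily the formula in the actual widget):

```python
def saturation_falloff(radius, bias=0.25):
    """Biased cubic saturation falloff for a color wheel widget.

    radius: normalized distance from the wheel center, in [0, 1].
    Returns a saturation in [bias, 1]: cubic toward the center for a
    softer look than linear falloff, but biased so the center never
    goes fully desaturated (blown out) when zoomed in.
    """
    r = max(0.0, min(1.0, radius))
    return bias + (1.0 - bias) * r ** 3

# The cubic keeps mid-radius saturation lower than a linear ramp
# (softer-looking wheel), while the bias keeps the center visible.
print(saturation_falloff(0.0))  # 0.25 -- center retains some saturation
print(saturation_falloff(1.0))  # 1.0  -- fully saturated at the edge
```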
Personally, I think red along +X makes the most sense from a correctness standpoint. In mathematics, we define a unit circle such that this is the direction of zero degrees. In HSV color space, we define zero degrees as red. It seems like a simple, learnable convention. I can’t see why having red in some other direction would specifically be more intuitive or easier for the user, but I’m willing to be proven wrong if somebody has a good argument. Without that, I’ll stick to +X = red for reasons of comfort with the mathematics.
As far as order around the circle, the unit circle in mathematics goes counter-clockwise from +X. In optics, we learn the Roy G Biv color ordering according to the frequency of colors in the spectrum. Logically, we should match increasing frequency to increasing angle. Consequently, orange should be next after red as you go counter-clockwise. This is a “green up” or “Shake” orientation. (And contrary to the most recent screenshot I have posted of my own widget. It’s still a work in progress…)
I therefore declare that a Standard Color Wheel ought to be red at +X, and green at top-left. So, why are there so many variations when it seems like there is a correct answer? I dunno. I guess a lot of these color wheels were made pretty much independently by people who weren’t particularly concerned about matching up with some other standard. Some may have intentionally wanted to differentiate themselves from existing color wheels that they had seen just for the sake of novelty. Most of the details are presumably now lost to time.
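That convention is also trivially easy to implement: HSV hue already increases red → yellow → green → …, so mapping the counter-clockwise angle from +X directly onto hue puts red (0°) along +X and green (120°) at the top-left. A minimal sketch:

```python
import math

def hue_at(x, y):
    """Hue in degrees at widget position (x, y), wheel centered at the origin.

    The counter-clockwise angle from +X maps directly onto HSV hue, so
    red (0 degrees) sits along +X and green (120 degrees) at top-left.
    """
    return math.degrees(math.atan2(y, x)) % 360.0

print(round(hue_at(1.0, 0.0)))     # 0   -> red along +X
print(round(hue_at(-0.5, 0.866)))  # 120 -> green at top-left
print(round(hue_at(0.0, -1.0)))    # 270 -> straight down
```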
So, just from the apps that I had handy to check on, there are four different layouts for the colors, and four different saturation calculations for the middle. What do you prefer? Do you know of any other interesting variations on a simple color wheel? Is one or the other more intuitive or functional for you?
In any CS-101 textbook, you learn about the scaling of certain algorithms. I’ve had a few conversations with several colleagues recently related to this topic, and how badly the “cult of complexity” can lead one astray when trying to scale real systems rather than ones in textbooks. I wanted to write a bit about the topic, and share a simple program that demonstrates the realities.
This post sadly has nothing in particular to do with the giant fighting mecha anime, “Big O” from 1999… Actually, now that I think about it, scaling is bound to be a concern when making giant robots. But the square-cube law is a bit of a different topic than time and memory complexity, so we’ll have to leave it for another day.
In the textbook, one algorithm might scale linearly — O(n). Another might do better at O(log(n)), or have a constant number of operations regardless of working set size and be O(1), etc. It’s a pretty straightforward way of keeping track of the terribleness of an algorithm. But the key is that Big O notation only estimates the number of operations as n grows large, rather than the amount of time taken to run it. There is a pretty natural and intuitive implicit assumption that makes Big O useful. The assumption is that one read operation from memory is about as slow as any other. So you just need to count up the number of read operations to get an idea of how much time you’ll spend waiting on memory.
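On real hardware, that assumption doesn’t hold: walking the same data sequentially versus in a random order does exactly the same number of reads, so Big O sees them as identical, but caches and prefetching make the access pattern matter. A toy Python benchmark (mine, for illustration; absolute timings are machine-dependent, and Python hides most of the memory system) hints at the shape of the experiment:

```python
import random
import time

def time_sum(values, order):
    """Sum values in the given index order; same O(n) reads either way."""
    start = time.perf_counter()
    total = 0.0
    for i in order:
        total += values[i]
    return time.perf_counter() - start, total

n = 1_000_000
values = [float(i) for i in range(n)]
sequential = list(range(n))
shuffled = sequential[:]
random.shuffle(shuffled)

t_seq, s1 = time_sum(values, sequential)
t_rand, s2 = time_sum(values, shuffled)
assert s1 == s2  # identical work, as far as Big O is concerned
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s")
```

In lower-level languages the gap between the two traversals is far more dramatic, since nothing is hiding the cache behavior from you.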
My repository with W-Utils, the “Worst Utils” versions of a few common userland utilities, is here: on GitHub. The implementations are all incomplete, proof-of-concept sorts of things, but they are useful examples that are much simplified compared to the “real thing” in GNU Coreutils.
A video of the talk is up on the SCALE YouTube page. My talk in the “Ballroom C” video starts at 4:05:30. The camera doesn’t cover the screen, so you may need to click along with the slide deck if you want the full experience. (The embedded video starts near the right time, but it doesn’t seem to want to seek to the exact spot, so you may need to move in time a little.)