Electron is fine, or it will be tomorrow, thanks to Moore’s Law, right?

Electron is somewhat controversial as an application development framework. Some people complain that applications built with it aren't very efficient. Others say RAM is there to be used, so there's no sense letting it go to waste. So, which is it? Is trading away efficiency to save developer time worthwhile?

We take it as an article of faith that newer, cheaper, better, faster machines come out every year.

Sure, Moore’s Law doesn’t give the gains it once did. And sure, I am looking at buying a laptop today that has literally the exact same amount of RAM as my 10-year-old desktop that seems to have finally died. And sure, modern laptops make RAM upgrades impossible because the RAM is soldered on, so I’ll be stuck with that same amount of RAM for the laptop’s entire lifespan. That means the same amount of RAM in my main computer for somewhere between ten and fifteen years, depending on how long the new laptop lasts.

But faith says I’ll have more RAM over time no matter what!

And anyway, if I were getting a new desktop instead of a new laptop, I could easily add more RAM to it. And I would probably get a new desktop if I could get a new GPU. Unfortunately, the GPU I upgraded my old desktop with back in early 2019 now costs three times as much, despite being three years old, thanks to supply chain disruptions and whatnot during the pandemic.

But faith says new computer parts will be cheaper and faster over time, no matter what!

Who are you to challenge the faith?

Sure, Electron apps have objectively poor UX. Sure, Electron constrains you to an ecosystem that was never intended for desktop application development. And sure, the reason you wanted to make a native desktop app in the first place is that you found webdev and that browser-based ecosystem overly limiting for some reason.

But faith says that webdev and JavaScript are the easiest way to develop things, so the trouble is worth it.

The only things you have to bring to bear against the faith are facts and objective reality, and I dunno why you’d waste your time on a losing fight like that. You would seriously have us believe our own eyes, rather than wisdom passed down from the ’90s that was originally just intended to discourage us from writing nonportable DOS code full of assembly that talked to hard-coded hardware addresses?

Sure, ahead-of-time compilation in a statically typed language like C++ lets you establish some behavioral guarantees at compile time, so there are whole categories of bugs that you simply can’t ship to end users. You don’t spend your time chasing type-mismatch bugs pieced together from sporadic reports by nontechnical end users, like in a dynamic language. And sure, JavaScript being able to fetch code from the network and eval it directly objectively makes it impossible to understand the behavior of an application, in ways far more numerous than “DLL Hell” and the dynamic linker ever made possible.
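To make the type-mismatch point concrete, here is a minimal JavaScript sketch (the `totalBytes` function and its inputs are hypothetical, purely for illustration): a string sneaking into numeric data silently corrupts the result at runtime, where a statically typed compiler would have rejected the call before it ever shipped.

```javascript
// A hypothetical helper summing file sizes. Nothing in the language
// enforces that every element of `sizes` is actually a number.
function totalBytes(sizes) {
  return sizes.reduce((sum, n) => sum + n, 0);
}

// With clean input, it behaves as intended.
console.log(totalBytes([100, 200, 300])); // 600

// But if a string sneaks in (say, from un-validated JSON), `+` quietly
// switches from addition to concatenation, and the "sum" becomes a
// garbage string. The bug only surfaces on an end user's machine.
console.log(totalBytes([100, "200", 300])); // "100200300"

// In C++, the equivalent call with a string in a numeric array would
// simply fail to compile, so this bug could never reach a user.
```

The failure mode is quiet: no exception is thrown, the wrong value just propagates onward.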

Faith tells us that JavaScript is inherently simple. As long as you ignore the massive amounts of complexity, it is so simple! People keep bringing up the many hundreds of megabytes of extremely complex code involved in the runtime necessary for executing hello world, as if that counted. But it doesn’t count as long as we ignore it. And we can ignore it because users will never notice it. And we know users never notice it, because they never write posts complaining about it being huge and having poor performance. And when they do write such posts, we just tell them they are wrong, because we know better. Or at least we believe better. And that’s the important thing.
