Microsoft is my Copilot (Plus!)

If I can offer people too important to ever listen to me one good piece of advice that their egos are way too big to ever follow, it’s that you shouldn’t always Just Tweet Through It. Sometimes the reason people are mad at you isn’t just that they haven’t heard what you are saying. Sometimes, the reason people are mad at you is that they have heard what you are saying. And your attempt to reiterate your point with some workshopped doublespeak is not going to be helpful.

So, here’s me complaining about Copilot, specifically in response to Microsoft’s current attempt at damage control, in a format too long for me to fit into a bleet or a toot or a skoot.

Picking Apart a Tweet

Whelp, there’s the tweet. Microsoft posted a blog post responding to the criticism of Recall on Copilot+ PCs. Let’s look at the announcement on Ex Twitter first.

“We are sharing an update” is exactly the kind of nonsense phrasing that lets us know we are off to a bad start. It’s phrased like a corporate manager sharing a progress report on a project with some underlings. It’s not, “We heard how much you hate this, so we are pivoting.” The tone is “Here is some more information about something that is happening and you have no say in it. You are just being informed about what we are doing.”

The next sentence is “We are committed to innovation and believe in having big ideas.” Which is to say “we aren’t particularly committed to security.” And “we aren’t particularly committed to listening to what our customers are screaming.” This is marketing puffery, which is the last thing you need when doing crisis communication. A tech company talking about innovation is just boilerplate that they can put in their document templates, and it doesn’t in any way address what’s going on. It just sounds vaguely positive as long as you don’t consider the context. Which means that Microsoft does not consider the current situation a crisis, and they do not think they need to be doing crisis communication right now. This is an error on Microsoft’s part. The context is all that matters, and all that Microsoft needs to be addressing right now.

Then in the final sentence you get a reference to “listening” and a notion that Windows should “exceed customer expectations.” What’s not in the Tweet? Any reference to security, any reference to a change in plan, any reference to what they’ve been hearing from the listening they are supposedly doing.

In terms of actual substance, this tweet gets half a star out of five. It doesn’t indicate any clear understanding of the problem. It doesn’t indicate the problem will be met with an actual change to solve the problem. In fairness, this is just a link to a blog post with the actual meat. A bad tweet can still lead to good information, so let’s reserve judgement on the corporate communications for a moment.

Wait, What’s Recall?

Here’s me complaining about lack of clear substance and information, and I glossed right over the problem.

Here’s an Ars Technica Op-Ed that gives a pretty good introduction. I consider it more than fair to Microsoft.

Basically, Microsoft is jumping on the dumb fucking AI bandwagon bullshit. Yes, you can quote me on that. Like my stance that “enshittification” is a good piece of technical jargon that should be adopted, I believe that speaking very plainly about how bad some of these ideas are is necessary for good technical communication. Every engineer involved in the project at Microsoft should have been able to stand up in a meeting and tell their manager, “This is dumb fucking bullshit.” If engineers weren’t comfortable speaking that bluntly, there’s a real internal culture problem of yes-men keeping their heads down and leading the company off a cliff.

Here’s another recent Ars Technica headline about a completely different company releasing a completely different trend chasing AI product. You’ll notice a theme:

The specific dumb fucking bullshit is that Microsoft wants to take screenshots of everything you ever do with your computer, then have an AI model scan your framebuffer for any kind of text or anything it thinks might be interesting, then store it forever. It all goes in a SQLite database.

And here are some Mastodon threads with additional technical details and the threat model.

“The problem is that having the screen recorder as a -native part of the operating system- alters the threat model dramatically vs. a third-party application that must be installed separately from the operating system.”

https://mstdn.social/@munin@infosec.exchange/112482139094944476

“…a would-be hacker would need to gain physical access to your device, unlock it and sign in before they could access saved screenshots.” I’ve got some news for Microsoft about how domestic abuse works.

https://mstdn.social/@evacide@hachyderm.io/112481894532472856

Ah, but of course. The DRM is protected…

https://mstdn.social/@capital@scalie.zone/112480157374284985

“Find it fascinating that when faced with drawing safety and security boundaries, the primary beneficiary is not the owner of the device, or the person using it, but random corporations who control the intellectual property rights. The system doesn’t work for you.”

https://mstdn.social/@sarahjamielewis@mastodon.social/112482021840236514

“Microsoft Recall is going to make post-breach impact analysis impossible.”

https://mstdn.social/@gsuberland@chaos.social/112481961405498447

So, It’s Bad and Everybody Hates It

The information security community basically had a collective aneurysm when the feature was announced. Absolutely 100% of the people I would remotely trust to have an opinion on it hate it. The only people I’ve seen on the Internet actively defending it had weirdly consistent phrasing in their responses, which didn’t seem to map closely to what they were responding to. So I can only speculate that at least some of the biggest boosters of the feature in discussions I’ve seen were inauthentic and working from prepared talking points. If I thought there was a credible counterpoint to make here, I would make it or quote someone making it. I do not.

I may make a future post picking at the technical aspects, but hopefully I’ve thrown enough links and quotes at you to fill the gap for now. And I had said this post was about comms and Microsoft’s reaction to the critique, rather than about Recall and Copilot directly.

On to the Bloggery

The blog is at https://blogs.windows.com/windowsexperience/2024/06/07/update-on-the-recall-preview-feature-for-copilot-pcs/

“Update on the Recall preview feature for Copilot+ PCs” Again, this is an “update” not a retraction or an apology or a pivot. So we are off to a bad start.

On May 20, we introduced Copilot+ PCs, our fastest, most intelligent Windows PCs ever. Copilot+ PCs have been reimagined from the inside out to deliver better performance and all new AI experiences to help you be more productive, creative and communicate more effectively.

When doing crisis management, do not start with marketing speak advertising the crisis. The crisis is not what the angry people want. When people are screaming at you about the “intelligence,” don’t start by advertising the intelligence. Recall uses nonzero amounts of system resources, so you obviously get better performance without Recall than you do with it, making the puffery about “better performance” actively insulting to the audience for this blog post that is complaining about the feature in question. There isn’t even an attempt to make lies that respect the reader with an assumption of basic familiarity with the topic.

One of the new experiences exclusive to Copilot+ PCs is Recall, a new way to instantly find something you’ve previously seen on your PC. To create an explorable visual timeline, Recall periodically takes a snapshot of what appears on your screen. These images are encrypted, stored and analyzed locally, using on-device AI capabilities to understand their context. When logged into your Copilot+ PC, you can easily retrace your steps visually using Recall to find things from apps, websites, images and documents that you’ve seen, operating like your own virtual and completely private “photographic memory.”

This much seems like a reasonably accurate and neutral description of the feature. The people angry about it already know what it is, so this is a bit of a waste of space from a crisis management perspective, but it is actual information and not just puffery. See, even when I am annoyed I can sometimes admit when some part is done relatively well. Minus some points for talking about exclusive experiences, but some managers just lose the ability to speak human English at a certain point. No normal person speaking human English gets excited to talk about ZOMG!! Experiences (sparkle flashes) from an OS update. Also, the claim of “completely private” is unsupportable, and exactly the problem that Microsoft keeps generating for themselves. They are making absolute claims about security, doubling down instead of admitting that such a claim is impossible nonsense because they’ve apparently gotten completely high on their own supply.

You are always in control of what’s saved. You can disable saving snapshots, pause temporarily, filter applications and delete your snapshots at any time.

And here’s where we get into issues. “You are always in control” isn’t really accurate.

As AI becomes more prevalent, we are rearchitecting Windows to give customers and developers more choice to leverage both the cloud and the power of local processing on the device made possible by the neural processing unit (NPU). This distributed computing model offers choice for both privacy and security. All of this work will continue to be guided by our Secure Future Initiative (SFI).

We need a secure Present, not just a secure future. No external force or entity is making Microsoft rearchitect Windows in this way. Again, this blog post is just an update about what they are doing. Not any sort of a meditation on whether or not they should be. It’s all treated as unstoppable inertia, and that’s a huge problem. All of this talk about distributed processing and NPUs is utterly irrelevant to the criticism raised against Recall. The whole mode of thinking here doesn’t seem to grasp that they might be on the wrong track. Perhaps reasonable people could disagree about whether or not they are on the wrong track, but this tone doesn’t even contemplate the possibility of error. “If it runs locally, magic, magic, then there’s no possible threat model that could attack your computer” does not address the concerns.

Our team is driven by a relentless desire to empower people through the transformative potential of AI

Well, shit. There’s your problem. People are begging you to relent! Your customers! Your users! When you’ve decided this is the path forward no matter what, you make an ass of yourself. The users need to matter at least as much as the toys you’ve recently become infatuated with.

and we see great utility in Recall and the problem it can solve. We also know for people to get the full value out of experiences like Recall, they have to trust it.

We don’t trust it. And we won’t. So, take the hint. If getting value depends on having trust, and you’ve shot yourself in the foot with regard to ever having trust, the logical conclusion from this statement is that there’s no value in the feature. But admitting that would require listening to literally anybody outside the AI hype bubble. Microsoft’s own blog post is making statements that logically lead to cancelling the project if they are drawing input and observations from the same universe as the rest of us. And if they aren’t, that further erodes the possibility of trust in the future.

That’s why we are launching Recall in preview on Copilot+ PCs – to give customers a choice to engage with the feature early, or not, and to give us an opportunity to learn from the types of real world scenarios customers and the Windows community finds most useful.

If you only ask where it’s useful, you are biasing the experiment so that it can never tell you the feature shouldn’t exist. The community has engaged with the feature, but the feedback hasn’t been what Microsoft wanted to hear, so it doesn’t count.

Skipping ahead a bit…

Second, Windows Hello enrollment is required to enable Recall. In addition, proof of presence is also required to view your timeline and search in Recall.

This is sufficiently misleading that I’m not sure MS management or legal understands the issues at play. Recall stores the data in a SQLite database on disk. Querying and searching it without using the visual timeline is trivial if you have access to the files. Physical presence is not required. There are already public demonstrations of tools for extracting the data.

This is so wrong that I am kind of shocked it got signed off on as an official communication. I’m no lawyer but I genuinely wonder if it could open Microsoft to legal exposure if somebody depends on the claims made here. The frontend app may try to detect physical presence, but we are talking about bytes in files on disk in the filesystem. Those can be accessed by anything that can open files with the relevant user permissions. If the VP credited as the author of the blog post believes the claim he is making, he’s not competent to make claims about software. The visual timeline frontend is not the data. (Also, the ability to reliably detect physical presence is massively oversold here. I am mostly not picking at it because the reliability of that feature doesn’t matter much. It’s another one of those AI, ???, Magic kinds of feature claims that has never been subject to substantive scrutiny.)
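To make the point concrete, here’s a minimal sketch of why “proof of presence” on a frontend app says nothing about the bytes on disk. The table and column names below are assumptions loosely modeled on what early third-party extraction tools reported seeing in Recall’s `ukg.db`; the demo builds a stand-in database in a temp directory so you can see that plain `sqlite3` calls, with no Windows Hello check anywhere, are all it takes to search the data.

```python
import os
import sqlite3
import tempfile

# Stand-in for Recall's on-disk store. The real path and schema are
# assumptions here (reportedly a SQLite file with tables of OCR'd
# window text); the point is only that it's an ordinary SQLite file
# readable with plain user permissions.
db_path = os.path.join(tempfile.mkdtemp(), "ukg.db")

con = sqlite3.connect(db_path)
con.execute(
    "CREATE TABLE WindowCapture ("
    "Id INTEGER PRIMARY KEY, WindowTitle TEXT, "
    "OcrText TEXT, TimeStamp INTEGER)"
)
# Pretend the screen recorder captured a banking session.
con.execute(
    "INSERT INTO WindowCapture (WindowTitle, OcrText, TimeStamp) "
    "VALUES ('Chase Online Banking', "
    "'Account ending 4821 Balance $2,310.55', 1717700000)"
)
con.commit()
con.close()

# Any process running as the same user -- malware included -- can do
# this. No visual timeline, no Windows Hello, no "proof of presence."
con = sqlite3.connect(db_path)
hits = con.execute(
    "SELECT WindowTitle, OcrText FROM WindowCapture "
    "WHERE OcrText LIKE '%Balance%'"
).fetchall()
con.close()

for title, text in hits:
    print(f"{title}: {text}")
```

The frontend can gate its own UI however it likes; the file on disk answers to the filesystem’s access controls, not the app’s.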

Skipping ahead a bit more,

Recall snapshots will only be decrypted and accessible when the user authenticates.

Right, so any process running as the user can access it. Being “authenticated as a user” is pretty fucking core to talking about a compromise. This sort of statement is basically, “There’s absolutely no problem, unless there’s any kind of a problem.” But again, the author of the blog post genuinely seems to be unable to grasp the nature of the criticism. Skipping ahead again,

This gives an additional layer of protection to Recall data in addition to other default enabled Window Security features like SmartScreen and Defender which use advanced AI techniques to help prevent malware from accessing data like Recall.

Oh good. More damned AI. Remember when he said they were “relentless” about using AI? Well, I do trust him on that. AI hype is leading to magical thinking. The tooling is not well tested. Its behavior is not externally vetted. The proponents are consistently starting to sound like cult members, and I don’t particularly mean that metaphorically. “Don’t worry about the risks of AI because we have AI” is the sort of thing that would only be said by somebody who believes that AI is the solution to every problem, despite all of the evidence to the contrary.

The blog post goes on. There’s no need to try to pick apart every paragraph.

My conclusion is that it was written by somebody who does not really understand the criticism. Somebody who is in a hype bubble and will not listen to criticism. Microsoft has chosen the path it wants to wreck itself on, and that should be understood as a huge risk for users. Not just in regard to this feature. This feature is a sign of a truly broken culture that has become entirely insular in ways that do not serve users. As time goes on, it will only generate more and more layers of fundamentally misguided complexity and technology. One more sign that the whole tech industry is at a real turning point. It makes me incredibly sad.

Leave a comment