PGP, other contact info: see website

  • 5 Posts
Joined 2Y ago
Cake day: Jan 28, 2021


Put together this brief overview of the basics of stylometric fingerprinting resistance. TL;DR: obfuscate your language patterns with a good style guide.
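To make the threat concrete, here's a toy sketch of the kind of feature vector a stylometric classifier might build: relative frequencies of a few function words plus average sentence length. Real systems use far larger feature sets; the word list and feature names here are simplified assumptions for illustration.

```python
import re
from collections import Counter

# A few function words whose relative frequencies are classic
# stylometric features (real feature sets are much larger).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is"]

def stylometric_features(text: str) -> dict:
    """Extract a toy feature vector: function-word frequencies
    plus average sentence length in words."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = max(len(words), 1)
    features = {w: counts[w] / total for w in FUNCTION_WORDS}
    features["avg_sentence_len"] = len(words) / max(len(sentences), 1)
    return features
```

A style guide helps precisely because it nudges these measurable habits (word choice, sentence length, punctuation) toward a shared norm instead of your personal baseline.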

Unfortunately, Gitea (the forge software that powers Codeberg) has major accessibility issues. It’s not usable from most assistive technologies (e.g. screen readers). GitLab isn’t much better.

Sourcehut is pretty much the only GitHub alternative with good accessibility I know of.

This is their privacy policy:

It includes detailed fingerprinting metrics like mouse behavior and font information.

I should probably link it, thanks for the feedback.

I said that about Petal because readers likely hadn’t heard of it and didn’t have any expectations. I assume readers already knew Bing, Google, and Yandex were bad for privacy.

Not at all; there are tons of newish engines out there, the best of which are trying to carve out a niche for themselves in an area that Google and Bing ignore. I listed 44 English engines with their own indexes, along with some for other languages, which I’m unfortunately unable to review because I don’t speak the languages required.

On these engines, you won’t get far if you use natural language queries or expect the engine to make inferences. Use broad terms and keywords instead. I recommend giving Mojeek, Marginalia, Teclis, Petal (bad privacy, but usable through Searx), Kagi, and Alexandria a try.

The reality is more nuanced than this. Wrote up my thoughts on my blog: A layered approach to content blocking.

Strictly speaking about content filtering: declarativeNetRequest is honestly a good thing for like 80% of websites. But there’s that 20% that’ll need privileged extensions. Content blocking should use a layered approach that lets users selectively enable a more privileged layer. Chromium will instead be axing the APIs required for that privileged layer; Firefox’s permission system is too coarse to support a layered approach.
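For the unprivileged layer, blocking is expressed as static declarative rules that the browser evaluates itself, without the extension ever seeing the traffic. The snippet below builds one such rule following the schema from Chrome's declarativeNetRequest extension docs (`id`/`priority`/`action`/`condition`); `ads.example.com` is a placeholder domain, not a real filter-list entry.

```python
import json

# A minimal static ruleset for the declarativeNetRequest API:
# block script/image/XHR requests matching a domain filter.
# The browser applies this itself; the extension never inspects requests.
rules = [
    {
        "id": 1,
        "priority": 1,
        "action": {"type": "block"},
        "condition": {
            "urlFilter": "||ads.example.com^",
            "resourceTypes": ["script", "image", "xmlhttprequest"],
        },
    }
]

# Extensions ship this as a JSON file referenced from manifest.json.
ruleset_json = json.dumps(rules, indent=2)
```

This declarative model covers simple pattern-based blocking well; what it can't express (procedural cosmetic filtering, scriptlet injection, per-request logic) is exactly what the privileged layer would need.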

A more complex look at where Manifest v3 fits into the content-blocking landscape, and why it can’t replace privileged extensions despite bringing important improvements to the table.

He is a security grifter who recommends Windows and macOS over Linux for some twisted security purposes.

Windows Enterprise and macOS are ahead of Linux’s exploit mitigations. Madaidan wasn’t claiming that Windows and macOS are the right OSes for you, or that Linux is too insecure for it to be a good fit for your threat model; he was only claiming that Windows and macOS have stronger defenses available.

QubesOS would definitely give Windows and macOS a run for their money, if you use it correctly. Ultimately, Fuchsia is probably going to eat their lunch security-wise; its capabilities system is incredibly well done and its controls over dynamic code execution put it even ahead of Android. I’d be interested in seeing Zircon- or Fuchsia-based distros in the future.

When it comes to privacy: I fully agree that the default settings of Windows, macOS, Chrome, and others are really bad. And I don’t think “but it’s configurable” excuses them:

I think you have been influenced by madaidan’s grift because you use a lot of closed-source tools and want to justify them to yourself as safe.

Here’s an exhaustive list of the proprietary software on my machine:

  • Microcode
  • Intel subsystems for my processor (ME; AMT is disabled). My next CPU hopefully won’t be x86_64, because the research I did on ME and AMD Secure Technology gave me nightmares.
  • Non-executable firmware
  • Patent-encumbered media codecs with open-source implementations (AVC/H.264, HEVC/H.265). These should be FLOSS, but the algorithms are patented; commercial use and distribution can be subject to royalties.
  • Web apps I’m required to use and would rather avoid (e.g. the web version of Zoom for school).
  • Some Nintendo 3DS games I play in a FLOSS emulator (Citra). Sandboxed, ofc.

That’s it. I don’t even have proprietary drivers. I’m strongly against proprietary software on ideological grounds. If you want to know more about my setup, I’ve made my dotfiles available.

And… you cannot study the closed source software.

Sure you can. I went over several examples.

I freely admit that this leaves you dependent on a vendor for fixes, and that certain vendors like Oracle can be horrible to work with (seriously, check out that link; it’s hilarious). My previous articles on FLOSS being an important mitigation against user domestication are relevant here.

Can you, with complete certainty, confidently assert that the closed-source software is more secure? How is it secure? Is it also a piece of software that isn’t invading your privacy? Security is not the origin of privacy, and security is not merely a matter of standalone code resisting break-in attempts. This whole thing is not just a simple two-way relation, but more like a magnetic field generated by the magnet itself. I am sure you understand that.

I can’t confidently assert anything with complete certainty regardless of source model, and you shouldn’t trust anyone who says they can.

I can somewhat confidently say that, for instance, Google Chrome (Google’s proprietary browser based on the open-source Chromium) is more secure than most WebKit2GTK browsers. The vast majority of WebKit2GTK-based browsers don’t even fully enable sandboxing (webkit_web_context_set_sandbox_enabled).

I can even more confidently say that Google Chrome is more secure than Pale Moon. In fact, most browsers are more secure than Pale Moon.

To determine whether a piece of software invades privacy, see if it phones home. Use something like Wireshark to inspect what it sends. Web browsers make it easy to save TLS key logs so you can decrypt captured packets. Don’t stop there; there are other techniques I mentioned for working out the edge cases. A great option is using a decompiler.
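The key-log trick isn't browser-specific: any client whose TLS stack supports the NSS key log format can dump per-session secrets that Wireshark then uses to decrypt the capture. A minimal sketch using Python's `ssl` module (the `keylog_filename` attribute exists since Python 3.8; the path is just an example):

```python
import os
import ssl
import tempfile

# Destination for TLS session secrets in NSS key log format.
# Point Wireshark at this file (TLS protocol preferences →
# "(Pre)-Master-Secret log filename") to decrypt captured traffic.
keylog_path = os.path.join(tempfile.gettempdir(), "tls-keys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # log secrets for every connection

# Firefox and Chromium honor the SSLKEYLOGFILE environment variable
# instead, e.g.:
#   SSLKEYLOGFILE=/tmp/tls-keys.log firefox
```

Any connection made through `ctx` afterward appends its secrets to the log, so the packet capture and the key log together reveal exactly what the program sent home.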

Certain forms of security are necessary for certain levels of privacy. Other forms of security are less relevant for certain levels of privacy, depending on your threat model. There’s a bit of a Venn-diagram effect going on here.

FLOSS being less secure when analysed with whitebox methods assures where it stands on security.

Sure, but don’t stop at whitebox methods. You should use black-box methods too. I outlined why in the article and used a Linux vuln as a prototypical example.

This will always be untrue for closed-source software; therefore the assertion that closed-source software is more secure is itself uncertain.

You’re making a lot of blanket, absolute statements. Closed-source software can be analyzed, and I described how to do it. This is even more true for closed-source software that documents its architecture; such documentation can then be tested.

Moreover, FOSS devs are idealistic and generally have good moral inclinations towards the community, and in the wild there are hardly any observations suggesting FOSS devs have been out there maliciously sitting with honeypots and mousetraps. This has long been untrue for closed-source devs, where only a handful of examples exist of closed-source software devs standing against end-user exploitation. (Some common examples on Android I see are Rikka Apps (AppOps), Glasswire, MiXplorer, Wavelet, many XDA apps, Bouncer, Nova Launcher, SD Maid, and emulators vetted at r/emulation.)

I am in full agreement with this paragraph. There is a mind-numbing amount of proprietary shitware out there. That’s why, even if I was only interested in security, I wouldn’t consider running proprietary software that hasn’t been researched.

I am tired of people acting like blackbox analysis is the same as whitebox analysis.

I was very explicit that the two types of analysis are not the same. I repeatedly explained the merits of source code and the limitations of black-box analysis. I also devoted an entire section to Intel ME because it showcases both the strengths and the limitations of dynamic analysis and binary analysis.

My point was only that people can study proprietary software, and vulnerability discovery (beyond low-hanging fruit typically caught by e.g. static code analysis) is slanted towards black-box approaches. We should conclude that software is secure through study, not by checking the source model.

Edit: I liked that last sentence I wrote so I added it to the conclusion. Diff.

Lots of FLOSS is less secure than proprietary counterparts, and vice versa. The difference is that proprietary counterparts make us entirely dependent on the vendor for most things, including security. I wrote two articles exploring that issue, both of which I linked near the top. I think you might like them ;).

Now, if a piece of proprietary software doesn’t document its architecture, makes heavy use of obfuscation techniques in critical places, and is very large/complex: I’d be very unlikely to consider it secure enough for most purposes.

I find people who agree with me for the wrong reasons to be more problematic than people who simply disagree with me. After writing a lot about why free software is important, I needed to clarify that there are good and bad reasons for supporting it. You can audit the security of proprietary software quite thoroughly; source code isn't a necessary or sufficient precondition for a particular software implementation to be considered secure.