

The issue is that Thunderbird is apparently built on top of Firefox code, so I expect there’s a similar level of layers upon layers of bloat as with many Electron apps. But this rebuilding project seems to be focused on the UI side of things and changing how things look, so I don’t know if it’ll actually improve in terms of performance (it might help a bit though, since they intend to remove stuff and clean up the code on the Thunderbird side of things).

Personally, I feel it would be more interesting to turn it into a Firefox extension (and extend the extensions API where necessary), so the resources that are shared could be actually shared. That, or fully embrace K-9 Mail (the android app that they partnered with and which will become Thunderbird mobile) and adapt it for the desktop.

It doesn’t matter where the “commerce” quote came from. If you don’t agree with that quote (or have no first-hand knowledge), don’t defend it against criticism; and if you do agree with it, then own up to it.

Throwing out a complaint, then saying “I did not say it”, and then trying to silence anyone who disagrees with that quote with “don’t sell me on this so please stop”, is a bit like throwing a stone and then running away, imho. You are the one who brought that quote into the conversation.

I don’t use Vivaldi (nor am I interested in even trying it, for other reasons), and I don’t know whether the “commerce” statement is true or not, but I’m willing to bet that any corporation that opens a new Mastodon instance is gonna face allegations of being “commercial” by default from many random opinionated people, even if the instance is so young that it has little content.

I’m willing to bet it’s the other way around: most Vivaldi users (or at least the ones that matter in terms of extending Mastodon userbase) don’t have a Mastodon account but have a Vivaldi account already (since they are already Vivaldi users).

I think the way they have done it is the most comfortable for new users. Especially considering that it’ll most likely be Vivaldi users coming to Mastodon, rather than the other way around (since there are better alternatives to Vivaldi for those who value FOSS, which is common among early Mastodon users).

And as mentioned in another comment, you can actually use third-party Mastodon accounts, even if the option is not obvious.

Even for the most minimal one-liner you’ll have to depend on complex library code under the hood, which you’ll have to keep up to date as part of the OS. And/or depend on the compiler itself not to introduce bugs into the resulting binary you are distributing.

Either that or you write your software in pure assembler (which will end up exposing a lot of internal complexity anyway, resulting in asm files that are far from “minimal”).

These are just some known vulnerabilities in libc (we don’t know how many “unknown” ones there might be, or if new fixes will introduce new problems): https://www.cvedetails.com/vulnerability-list/vendor_id-72/product_id-767/GNU-Glibc.html

My problem with this idea is that I generally do not like the defaults most distros use, I like experimenting and I often switch desktop environment or uninstall / clean up stuff I don’t need.

I’d be ok if the image is just kernel + init system + shell, and maybe some small core components / tools… but if the OS comes preloaded with huge software libraries, like typical KDE / GNOME distros do, then it’s gonna be a lot of dead weight that I’d have to keep updated even if I do not use it.

Immutable images are great for devices with specific purposes meant for a particular software stack (like Chrome Books, the Steam Deck or so) but for a more general purpose computer where I actually want to deeply customize the UI for my workflow, I don’t want to carry around whatever popular software the maintainers of the popular distro might have decided to include.

I don’t see where in my comment you find the assumption that people mostly follow verified accounts (if anything it’s the other way around: one of the requirements for verification is notability, but that doesn’t mean verification creates notability, nor that notability cannot exist without verification). Nor did I say that “enough” of them would migrate (enough for what exactly?). I was trying to be careful with my words, and I used “if” when I meant “if”, not “when”.

I was simply explaining the other viewpoint, not necessarily saying that it will happen, but that it’s a possibility, and it wouldn’t be so surprising to see increased interest in Twitter alternatives as a consequence of changes like this, perhaps translating to a (small?) spike of new users exploring the fediverse (even if it’s possible most wouldn’t stay). But I don’t have a magic crystal ball, so I can’t tell you what will happen.

Your last paragraph is essentially part of what I was meaning to say in the last two lines from my previous comment. We agree.

(and btw, it wasn’t me who gave you that downvote, to be clear)

I think that the point is that “regular users” also includes people who are neither a business nor a corporation but that are notable and active enough to have got the “verified” check-mark. Like a lot of individual popular figures and social media presences.

If a few of those big individuals end up deciding not to pay up and instead move to an alternative, the audience following them might be tempted to move as well to follow them there, so it could potentially start a snowball effect.

That said, I don’t believe the verification badge alone would be enough reason for them to move…

It pains me that even though every single one of the alternatives uses fewer resources and should actually be cheaper to produce (not only ecologically, but also economically!), they are typically way more expensive than dairy.

You are missing the point. A process-independent file opener that is used by all applications to access files provides user-friendly security.

But that was essentially what I said… I’m the one who proposed something like that 2 comments ago.

This would be a core component of an OS so the description is correct.

Again, I disagree that “this would be a core component of an OS”. You did not address any of my points, so I don’t see how it follows that “the description is correct”. The term “core OS component” is subjective to begin with.

But even if you wanted to label it that way, it wouldn’t make any difference. That’s just a label you are putting on it, it would not make Flatpak any less of an app distribution / management system with focus on cross-distro compatibility and containerization. Flatpak would still be Flatpak. Whether or not you want to consider it a core part of the OS is not important.

And Flatpak already uses independent processes to manage the whole container & runtime that the app uses for access to the system resources, which already closely matches what you defined as “a core component of an OS”.

That’s a very loose definition of “OS component”. At that point you might as well consider the web browser an “OS component” too, or frameworks like RetroArch, which offer a filesystem API for their libretro cores.

But even if we accepted that loose definition, so what? Even as it is today, Flatpak is already an “OS component” integrated into many distros (it’s even a freedesktop.org standard), and it already implements a filesystem interface layer for its apps. As I said, I think the real reason they won’t do it is that they keep wanting to be transparent to the app devs (ie. they don’t want them to have to support Flatpak-specific APIs). Which is why I think there needs to be a change of philosophy if they want app containerization to be seamless, safe and generally useful.

You can install different flatpak repos without really having to depend on one specific central repository, so I’d say the “centralizing software” issue is not that different from any typical package manager.

That said, I do agree that Flatpak has a lot of issues. Specifically the problems with redundancy and security. Personally I find Guix/Nix offers better solutions to many of the problems Flatpak tries to fix.

or learn how to do it and spend time configuring each and every application as needed

And even if they were to spend the time, afaik there’s simply no right way to configure a flatpak like GIMP so it can edit any file from any arbitrary location they might want, without first giving it read/write permissions for every single one of those locations and allowing the program to access those whole folder trees at any point in time without the user knowing (making it “unsafe”).

It shouldn’t have to be this way; there could be a Flatpak API for asking the user for a file to open with their explicit consent, without requiring constant micro-management of the flatpak settings or pushing the user to give it free access to entire folders. The issue is that Flatpak tries to be transparent to the app devs, so that level of integration is unlikely to happen with its current philosophy.
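A toy sketch of what such a consent-based API could look like, in Python. Everything here (including the `FilePortal` name and its methods) is invented for illustration and is not an existing Flatpak interface; the point is just that a trusted broker outside the sandbox picks the file, and the app only ever receives that one file:

```python
class FilePortal:
    """Hypothetical broker process running outside the sandbox:
    the user picks a file in a trusted dialog, and only that single
    file is handed to the sandboxed app."""

    def __init__(self, user_choice):
        # `user_choice` stands in for a real file-picker dialog.
        self._user_choice = user_choice

    def request_file(self, app_id: str, reason: str):
        # The app never browses the filesystem; it only receives the
        # one file the user explicitly consented to share (or nothing).
        path = self._user_choice(app_id, reason)
        return open(path, "rb") if path else None


# Simulated session: the "user" approves one specific file.
import tempfile

with tempfile.NamedTemporaryFile(delete=False, suffix=".xcf") as tmp:
    tmp.write(b"fake image data")

portal = FilePortal(lambda app_id, reason: tmp.name)
handle = portal.request_file("org.gimp.GIMP", "Open an image to edit")
print(handle.read())  # b'fake image data'
```

The key design point: the broad filesystem permission lives in the broker, not in the app, so denying consent (the lambda returning `None`) means the app gets nothing at all.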

Back when the anti-Stallman letter broke out, some transgender people were calling him a “transphobe” for having openly proposed the use of a gender-neutral pronoun that he came up with as the preferred way to speak when you don’t know the gender of the person.

It seems promoting the use of a gender-neutral pronoun can be counterproductive. Some people might actually find it offensive and condescending.

No modern AI has been able to reliably pass the Turing test without blatant cheats (like using foreign kids who can’t understand or express themselves fluently as the human baseline, instead of adults). Just because it dates back to the 1950s doesn’t make it any less valid, imho.

I was interested by the other tests you shared, thanks for that! However, in my opinion:

The Markus test is just a Turing Test with a video feed. I don’t think this necessarily makes the test better, it adds more requirements for the AI, but it’s unclear if those are actually necessary requirements for consciousness.

The Lovelace test 2.0 is also not very different from a Turing test where the tester is the developer and the questions/answers are in a specific domain, where its creativity is what’s tested. I don’t think this improves much over the original test either, since already in the Turing test you have the freedom to ask questions that might require innovative answers. Given the more restricted scope of this test and how modern procedural generation and neural nets have developed, it’s likely easier to pass the Lovelace test than the Turing test. And at the same time, it’s also easier for a real human to fail it if they can’t be creative enough. I don’t think this test is really testing the same thing.

The MIST is another particular case of a more restricted Turing test. It’s essentially a standardized and “simplified” Turing test where the tester is always the same and asks the same questions out of a set of ~80k. The only advantage is that it’s easier to measure and more consistent, since you don’t depend on how good the tester is at choosing their questions or judging the answers, but it’s also easier to cheat, since it would be trivial to make a program specifically designed to answer that set of questions correctly.
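To illustrate how trivial that cheat would be: with a fixed, known question set, “passing” collapses into a lookup table, with no understanding required. The questions and answers below are invented for illustration (the real MIST set has ~80k items):

```python
# Toy illustration: a fixed, known question set can be memorized outright.
CANNED_ANSWERS = {
    "Is the sky blue on a clear day?": "yes",
    "Can a cat fly unaided?": "no",
    "Is ice colder than boiling water?": "yes",
}

def mist_cheater(question: str) -> str:
    # With the full question set memorized, the fallback never triggers.
    return CANNED_ANSWERS.get(question, "yes")

print(mist_cheater("Can a cat fly unaided?"))  # no
```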

Oh, but I agree that assuming our reality is solipsist isn’t useful for practical purposes. I’m just highlighting the fact that we do not know. We don’t have enough data precisely because there are many things related to consciousness that we cannot test.

Personally I think that if it looks like a duck, quacks like a duck and acts like a duck then it probably is a duck (and that’s what the studies you are referencing generally need to assume). Which is why, in my opinion, the Turing test is a valid approach (as are other tests with the same philosophy).

Disregarding Turing-like tests while at the same time assuming that only humans are capable of having “a soul” is imho harder to defend, because it requires additional assumptions. I think it’s easier to assume that either duck-likes are ducks or that we are in a simulation. Personally I’m skeptical of both, and I just side with the duck test because it’s the more pragmatic approach.

Do we know for sure that our architecture is the same? How do you prove that we are really the same? For all I know I could be plugged into a simulation :P

If there was a way to test consciousness then we would be able to prove that we are at least interacting with other conscious beings… but since we can’t test that, it could theoretically be possible that we (I? you?) are alone, interacting with a big non-sentient and interconnected AI, designed to make us “feel” like we are part of a community.

I know it’s trippy to think that but… well… from a philosophical point of view, isn’t that true?

Personally, I think this has very little to do with computing power and more to do with sensory experience and replicating how the human brain interacts with the environment. It’s not about being able to do calculations very fast, but about what those calculations do and how they are conditioned, what stimuli cause them to evolve, in which way, and by how much.

The real problem is that to think like a human you need to see like a human, touch like a human, have the instincts of a human, the needs of a human and the limitations of a human. As babies we learn about things by touching, sucking, observing, experimenting, moved by instincts such as wanting food, wanting companionship, wanting approval from our family… all the things that ultimately motivate us. A human-like AI would make mistakes just like we do, because that’s how our brain works. It might be little more than a toddler and it could still be a human-like AI.

If “what we call a soul” means consciousness, then I doubt there’s a way to prove that anything other than your own self actually has a soul. Not even what we call “other people”.

You being aware of your own consciousness doesn’t mean every human necessarily is, right? …and since we lack a way to prove consciousness, we can’t assume other people are any more conscious than an AI could be.

my counter-point was that most people aren’t open to installing an operating system

I mean, the original point didn’t say users should be required to install it themselves. It just said that phones should have an open source OS to increase their life span, which is something your “counter-point” is building on, not contradicting or opposing.

In fact, not every Android phone has open source firmware available that properly supports the hardware, so there are many cases where even if you knew how to install it you wouldn’t be able to.

Exceptions like the Pinephone are super rare, and I wouldn’t expect that to change without force.

I agree. There needs to be either legislation or a consumer driven shift. The real problem is that most users don’t seem to care that much about that and prefer getting a new shiny one with the latest trending features instead of a Pinephone or Fairphone.

I think the point was that open source software makes it last much longer. If using open source Android OS has extended the life of your phone then you are proving his point.

Of course it’s not the only thing that can extend the life of the phone, and of course additional measures should be taken to extend it further, but that doesn’t contradict anything the comment said.

Also, if having an open source OS isn’t a “simple option” for the “typical consumer”, then we aren’t even there yet. Imho phones should come with a fully open source OS that is easily upgradable independently of the manufacturer, right out of the store.

This is all human-made. One way or another, the cause is always between monitor and the chair. One of the reasons I find the crypto space so toxic and dangerous is their insistence on technosolutionism.

Precisely; you can’t stop technosolutionism if you don’t differentiate between the technical factors and the human ones.

Saying technical issues are all the same as human ones, or on the same level (just because they are “human-made”), is in fact technosolutionist.

The goal is to solve human issues by manipulating technology, not to solve problems in the technology by manipulating humans. Manipulating humans is not on the same level as manipulating technology… I think this should be pretty clear.

Your analogy falls apart due to how small the ratio of non-scammy uses of NFTs to scammy ones is.

The issue is that if the nature of NFTs already makes such purchases “scammy” for you then, of course, most of it will be “scammy”. But note that something feeling scammy to you is not the same as committing actual fraud. If someone is fully aware that they are buying something because they purposefully want to speculate with it in an extremely unstable market, then it’s their own fault if the risk they took doesn’t pay off. That’s not the same thing as getting scammed.

Myself, I’m not one to invest in such risks, and in fact, right now my bank is charging me money just for having money stored in my account doing nothing, which makes no sense to me! I wish I could just have it all as cash stored in a vault at home and not need banks, but sadly sending cash by post is not exactly secure (nor generally accepted). It’s too bad there isn’t a safe and government-backed cryptocurrency infrastructure in place. I would certainly find that useful.

And they will not be able to solve [domain names] with blockchain tech.

Some have already used the blockchain for that purpose, though. GitTorrent used the Bitcoin blockchain before (I’m not up to date on the current state of that project; I hear it’s no longer maintained and there are other alternatives). And there’s also the ENS for .eth domain names, which are distributed, or am I wrong?

We’re talking legal issues […], disputes […] Neither of these can be written down in code, be it on blockchain or not.

But those are human issues, they should not be in the code itself, just like they aren’t in the code of current DNS servers either. Instead, the tech should just be transparent and flexible enough to allow that kind of human control (again, humans are meant to manipulate the technology, not the other way around).

If anything, I’d imagine a public ledger in a blockchain with proper authorization using government issued signatures would make it easier to track and identify the owner and have legislation impart whatever sanction or punishment. Wouldn’t it? (I’m not even sure if the current DNS system allows this, I believe you can get domain names with some level of anonymity if you really want to).

I think the problem here is getting to the sweet spot between privacy and identification, maybe with different levels for different purposes. If this was controlled by each government, and there were layers and measures in place that allowed some level of anonymity while also allowing disclosure in circumstances that require it, this could be a very controlled and safe tool.

In particular, I think a public p2p ledger would be helpful to have traceability of public funds in a way that can be peer-reviewed without depending on the government “accidentally” losing a hard disk or destroying evidence “by mistake”. Which is something I’ve seen happen more than once in my country whenever there’s an internal investigation for corruption.
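The traceability idea boils down to a hash chain: each entry commits to the previous one, so anyone holding a copy of the ledger can detect later tampering. A minimal sketch in Python (the entry fields are invented for illustration; a real public ledger would also need signatures and consensus):

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) so every peer computes the same digest.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, payment: dict) -> None:
    # Each new entry commits to the hash of the previous one.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"prev": prev, "payment": payment}
    entry["hash"] = entry_hash({"prev": prev, "payment": payment})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    # Any peer can re-walk the chain and recompute every hash.
    prev = "0" * 64
    for e in ledger:
        if e["prev"] != prev:
            return False
        if e["hash"] != entry_hash({"prev": e["prev"], "payment": e["payment"]}):
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"from": "treasury", "to": "contractor", "amount": 100})
append(ledger, {"from": "treasury", "to": "supplier", "amount": 40})
print(verify(ledger))   # True
ledger[0]["payment"]["amount"] = 999   # a record quietly "corrected"
print(verify(ledger))   # False
```

Because every copy held by peers can be re-verified independently, “accidentally losing a hard disk” no longer erases the record.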

It’s essentially a wrapper around Webkit.

Knowing the people at suckless, I was surprised when they launched surf based on WebKit instead of going for a cleaner & simpler engine like the one from NetSurf, even if that would have meant most websites wouldn’t work. After all, the web is anything but clean & simple. Compromising the UX in favor of cleaner code never stopped the suckless team before.

FLOSS community is not perfect, for example, but bullshit gets called out. Projects that make exorbitant claims about security (snakeoil, etc), get called out. But crypto scene acts as if that’s bad for business.

I think we have to differentiate the technical factors from the human ones. Calling out security vulnerabilities is not a problem, but when the cause is between the monitor and the chair then things get much more complicated.

Can’t generate “bad press”, right? Because if one does, they and potentially the whole scene is NGMI, HFBP!

Just not for the wrong reasons. It would be silly to say “internet” = “porn”, or “peer to peer” = “piracy”, so for the same reason, “NFT” = “fraud” is just as misdirected, imho.

I’ll agree not to continue with the simile about xenophobia since it’s true that it’s sensitive (though I do still think it fits), but at least I hope you do accept that these other broad generalizations are mischaracterizing entire technologies that are very much different from the negative purpose someone might want to attribute to them, just because of how circumstantially “optimal” some specific instances might be for those purposes.

Saying “the association is well-deserved” already is admitting to the mischaracterization.

And frankly, I have not yet seen a single use of NFTs that is not either unnecessary (as in: whatever is being done could be done as well or better without NFTs)

It would be great to find a solution for distributed domain names that does it as well as or better than NFTs; it’s something p2p distributed networks haven’t managed to solve without blockchain tech.

not calling out crypto/NFT/web3 scams just to preserve the few potentially useful and non-scammy projects would be effectively aiding and abeting the scammers

I’m all for calling any and all scams. Just as long as we separate the technology from the scam. My problem isn’t with this article, but with the reactions in the comments that seem to jump to conclusions and paint things with broad strokes, assuming NFT = fraud.

Those are fair points. But I’m used to seeing so much bad press against NFTs from people who blindly criticise them and associate them with any possible bad use… to the point that they think “NFT=bad”, and this kind of news paints that picture for anyone who doesn’t know better…

It would be like highlighting in the news every crime perpetrated by someone of color and then complain about “whataboutism” when someone says that white people also commit crimes.

I’m afraid that all this demonization will make it much much harder for any fair and honest project that we ever attempt in the future related to blockchain technology (such as the one you mentioned).

But he didn’t really say that banks are bad, or that the cryptocurrency/NFT/web3 scene isn’t rife with scams.

Scams also existing in fiat currency (his point) doesn’t make fiat bad, in the same way as cryptocurrency/NFT/web3 having good uses doesn’t mean that it cannot also be “rife with scams”.

Are hammers bad because people can use them to smash skulls? imho what we need is measures to prevent, block, minimize or discourage that kind of behavior, not necessarily ban hammers.

Personally, I think the open source and p2p nature of blockchain technology can be a better way to introduce measures of control and protection, in a way that is fairer and more transparent than using obscure private ledgers in the hands of central authorities managed by humans that we have to trust…

It’s definitely not optimal for that. In my opinion, using proper blogs, websites and feeds is a much more intelligent, decentralized, and powerful alternative to artificially limited microblogging.

The only reason companies and groups love having a Twitter account is that it allows them to advertise themselves there, due to how big its userbase is. It also allows them to have more direct engagement with their “followers” and appear more “down to earth”, precisely because it’s traditionally a more “individual-centered” platform. Twitter just happens to be good for marketing. And the same goes for Facebook.

Imho, the blogosphere was in a very good place before Twitter and Facebook started to rise in popularity, when having a personal website was a more common thing to do. The solution isn’t Mastodon either… I’d much rather go back to when using feed readers was a thing. I just wish there was a more modern pub-sub-like alternative to RSS that we could use for websites (or maybe there is but nobody uses it…), and a more standardized API for viewing/posting comments on a blog post directly from your feed reader.

Hmm… that’s interesting, actually. Having users authenticate might help with some instances of trolling and abuse, but at the same time the identification causes trouble for privacy.

A middle ground would be allowing non-verified users to participate, but giving them lower influence on the relevance of content, perhaps with caps that limit how much non-verified influence can affect the weighted relevance of a post (so content promoted by unverified accounts would get lower priority, and pushing it with a farm of non-verified bot accounts would not have much of an impact).
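As a toy sketch of that capped-weighting idea (all weights and numbers here are made up for illustration, not from any real ranking system):

```python
def post_relevance(verified_boosts: int, unverified_boosts: int,
                   unverified_weight: float = 0.1,
                   unverified_cap: float = 0.25) -> float:
    """Toy scoring sketch: unverified accounts are both down-weighted
    and hard-capped relative to the verified score."""
    verified_score = float(verified_boosts)
    # A farm of unverified bot accounts can never contribute more than
    # `unverified_cap` times what verified accounts already contribute.
    unverified_score = min(unverified_boosts * unverified_weight,
                           verified_score * unverified_cap)
    return verified_score + unverified_score

# 10,000 unverified boosts barely move the needle next to 50 verified ones:
print(post_relevance(verified_boosts=50, unverified_boosts=10_000))  # 62.5
```

Note the cap is relative to the verified score, so with zero verified engagement a bot farm contributes nothing at all; whether that trade-off is desirable is exactly the bias question raised below.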

Of course there’s likely gonna be some level of bias based on who are the people who would go through the trouble of verifying themselves… but that’s not the same thing as not being transparent. Bias is gonna be a problem that you cannot escape no matter what. If a social network is full of idiots the algorithm isn’t gonna magically make their conversations any less idiotic. So I think the algorithm could still be a good and useful thing to come out of this, even if the social network itself isn’t.

There’s still the chance that they have/make an algorithm that can actually be transparent without being exploitable in ways that are detrimental (which is what I would consider a “good algorithm”)… but I agree that this is the least likely outcome.

Still, I couldn’t care less about any of the other outcomes. I have nothing to lose whether Twitter burns or stays as it is 😁

Personally, I wouldn’t say that an algorithm that relies on obscurity (needless complexity being a form of obscurity) would be a good algorithm, not when it’s public. I guess we’ll see.

It’s possible that the algorithms will have to be heavily refactored, cleaned up and maybe simplified before they are publicly released, since I expect that many of those approaches would be useless against someone with access to the code and the ability to run tests against it systematically to “game the system”.

I’m not interested in Twitter (or any “individual-centric” social network to be honest… I don’t want to “follow” people, but ideas/topics). So I don’t have anything to lose from this.

I might have something to gain if he actually open sources the algorithms Twitter uses, because if they are actually good (I have no idea), they could have other applications too.

XMPP is actively developed, but the development happens in extensions to the standard, which might not be implemented/supported everywhere, since they are optional. The design is more modular than Matrix, with development being more distributed.

Matrix development is more centralized so it’s easier for it to propagate new features. You just upgrade to the newest version. There are also only a few options when it comes to client / server software so it’s more focused on a specific implementation.

I seriously doubt any of these are reasons for the masses. You can go and ask any average person and chances are they won’t even know or care about GNOME/KDE, systemd, or have any idea about any toxicity of this kind. I think the article exaggerates the importance of some pretty irrelevant internet discussions that are only followed by geeks who are passionate about technology, not “the masses”.

In fact, the first time I was ever exposed to toxicity in the computer world was when Microsoft, MS-DOS and Windows users continuously criticised aspects of those very same systems (the “blue screen of death” meme being a famous example of things like this later on). Not in the Linux community.

Also, the article claims there’s a lack of developers but fails to offer any numbers that can be compared. How many developers actually work on Windows (the OS, not apps) vs how many developers work on GNU/Linux OS? how many of them work in it for a living? (because there’s people who do actually get paid to work for Linux) how many don’t get paid and still contribute adding up to insane numbers of hours? I don’t think it’s that simple, you can’t throw an assertion based on a presuposition you hold on one particular aspect and forget about the rest of the picture.