if you aren’t refusing to acknowledge they’re ux problems, you’re saying it’s unhelpful to call them what they are, which is obviously nonsense
and again, sane defaults are ux
or i could argue that an issue 90% of people will run into is a higher priority than one 2% of people will run into
or i could argue that the risk of accidentally opening something you didn’t want to is higher than the risk of losing unsaved work
the reason foss sucks when it comes to ux is this attitude of insisting that ux problems are somehow some “other” category of problem, rather than engineering constraints that need to be designed around like every other one
case in point, for some reason you’re still refusing to acknowledge that they’re both ux problems. and if you do, your original reply ceases to even make sense.
yet very different
which is why my first words to you were “it is and it isn’t”
binning them into the same category is not helpful
both are caused by people in the foss space not paying enough attention to ux
increased attention to ux could solve both
personally i think categorising all work solely through the lens of severity is unhelpful
Single/double click behavior is a matter of preference.
And defaulting to the preference that most people prefer or are used to is a matter of UX.
Which is why I say they’re both UX decisions.
it is and it isn’t
they’re both bad UX, which FOSS is generally pretty bad at, probably because there’s not as much overlap between people who are really into FOSS and people who are really into UX
linux-centric communities also tend to be plagued by elitism, which i expect stifles a lot of this kind of thing before proper conversations can take root
powerful isn’t the same as well-structured
it was written to be a language that anybody could read or write as easily as english. just like every other time that’s been tried, the result is a language exactly as anal about grammar as C or Python, except now it’s impossible to remember what that structure is, because adding anything to the language to make that easier is forbidden
when a language’s designers were so keen for it to remain human-readable that they made deleting all rows in a table the default action, i don’t think “well structured” can be used to describe it
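a quick sketch of that footgun, via python’s sqlite3 (table name and rows made up for illustration): because the WHERE clause is optional, the shortest valid DELETE is the one that wipes the whole table

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO books (title) VALUES (?)",
                 [("Dune",), ("Hyperion",), ("Blindsight",)])

# the WHERE clause is optional, so the "default" DELETE is the destructive one:
conn.execute("DELETE FROM books")  # no WHERE -> every row is gone

print(conn.execute("SELECT COUNT(*) FROM books").fetchone()[0])  # prints 0
```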
sql syntax doesn’t even support itself correctly. i fail to see your point
if you don’t believe that adding more structure to the absolute maniacal catastrophe that is sql is a good thing then i’m going to start to have doubts about your authenticity as a human being
how could you know the total participant count is 37 ahead of time if you’re currently looking for sign ups
also, a book exchange of 37 people doesn’t strike me as particularly “huge”
but pay it forward can work in theory
this can’t even work in theory because books entering the system 1 at a time and leaving the system 36 at a time requires 35 books to be conjured out of thin air
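rough python sketch of that arithmetic (the 1-in/36-out numbers are the scheme as described): each person contributes 1 book and is promised 36, so the 35-book deficit per person can only be covered by an exponentially growing pool of new joiners, i.e. a pyramid

```python
BOOKS_IN = 1    # each participant contributes one book
BOOKS_OUT = 36  # ...and is promised thirty-six back

# for everyone at a given level to get their 36 books,
# the level below has to be 36 times bigger
people = 1
for level in range(5):
    print(f"level {level}: {people} people, "
          f"needing {people * BOOKS_OUT} recruits below them")
    people *= BOOKS_OUT
```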
that just sounds like saying “six” with a french accent
i’m not sure i believe you
“six” in english translates to “six” in french
i don’t know what to do with this information
shader processing isn’t bottlenecked by read speed
Argon2 has parameters that allow you to specify the execution time, the memory required, and the degree of parallelism.
But at a certain point you get diminishing returns and you’re just wasting resources. It seems like a similar question to why not just use massive encryption keys.
It depends on the hash. E.g., OWASP only recommends 2 iterations of Argon2id as a minimum.
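For concreteness, a minimal sketch with the argon2-cffi Python library using that OWASP minimum for Argon2id (2 iterations, 19 MiB of memory, 1 degree of parallelism); note memory_cost is in KiB:

```python
from argon2 import PasswordHasher  # pip install argon2-cffi

# OWASP's minimum recommendation for Argon2id: t=2, m=19 MiB, p=1
ph = PasswordHasher(time_cost=2, memory_cost=19 * 1024, parallelism=1)

hashed = ph.hash("correct horse battery staple")
ph.verify(hashed, "correct horse battery staple")  # raises VerifyMismatchError on a wrong password
```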
Yes, a hashing function is designed to be resource intensive, since that’s what makes it hard to brute force. No, a hashing function isn’t designed to be infinitely expensive, because that would be insane. Yes, it’s still a bad thing to provide somebody with a force multiplier like that if they want to run a denial-of-service.
Incorrect.
They’re designed to be resource intensive to calculate to make them harder to brute force, and impossible to reverse.
Some literally have a parameter which acts as a sliding scale for how difficult they are to calculate, so that you can increase security as hardware power advances.
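bcrypt's cost factor is one example of that sliding scale: it's a log2 parameter, so each increment doubles the work. A quick Python timing sketch (absolute times depend on your hardware):

```python
import time
import bcrypt  # pip install bcrypt

password = b"hunter2"

# each +1 to the cost doubles the number of key-expansion rounds
for cost in (10, 12, 14):
    start = time.perf_counter()
    bcrypt.hashpw(password, bcrypt.gensalt(rounds=cost))
    print(f"cost={cost}: {time.perf_counter() - start:.2f}s")
```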
you have to limit it somewhere or you’re opening yourself up for a DoS attack
password hashing algorithms are literally designed to be resource intensive
if you’re just going to take us back in circles again this discussion is a bit pointless, isn’t it?