• 0 Posts
  • 12 Comments
Joined 10 months ago
Cake day: September 11th, 2023






  • Here’s my last one, I’m pretty fucking boring:

    User:

    Please provide a comma separated list of US state and territory postal abbreviations, with each item in single quotes.

    ChatGPT:

    Sure! Here is a comma-separated list of US state and territory postal abbreviations in single quotes:

    'AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'FL', 'GA', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM', 'NY', 'NC', 'ND', 'OH', 'OK', 'OR', 'PA', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VT', 'VA', 'WA', 'WV', 'WI', 'WY', 'AS', 'DC', 'FM', 'GU', 'MH', 'MP', 'PW', 'PR', 'VI'

    Please note that AS refers to American Samoa, DC refers to the District of Columbia, FM refers to Federated States of Micronesia, GU refers to Guam, MH refers to Marshall Islands, MP refers to Northern Mariana Islands, PW refers to Palau, PR refers to Puerto Rico, and VI refers to the U.S. Virgin Islands.
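    Not part of the chat, but just as an illustration (assuming Python, and that the list is destined for some validation code like this, which is purely my guess), the reply drops straight into a set:

        # Hypothetical use of the abbreviations from the reply above: paste them
        # into a frozenset so a form handler can sanity-check a "state" field.
        US_STATE_AND_TERRITORY_CODES = frozenset([
            'AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'FL', 'GA', 'HI', 'ID',
            'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MD', 'MA', 'MI', 'MN', 'MS',
            'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM', 'NY', 'NC', 'ND', 'OH', 'OK',
            'OR', 'PA', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VT', 'VA', 'WA', 'WV',
            'WI', 'WY', 'AS', 'DC', 'FM', 'GU', 'MH', 'MP', 'PW', 'PR', 'VI',
        ])

        def is_valid_state_code(code: str) -> bool:
            """Return True if the value is a US state/territory postal code."""
            return code.strip().upper() in US_STATE_AND_TERRITORY_CODES

        print(is_valid_state_code('pr'))  # True
        print(is_valid_state_code('XX'))  # False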



  • and a private telecommunications company can read absolutely all your digital communication

    Well, maybe. It’s one of the reasons e2e encryption is so imperative to online privacy. For instance, turn on HTTPS Everywhere and your ISP can only see which servers you’re connecting to, not what’s in your traffic to them (there’s a rough sketch of that at the end of this comment).

    And to point it out up front: yeah, the distant end’s servers likely have some form of that traffic captured, but now law enforcement has to dig up every company they’re trying to pull info from, which is significantly more difficult than relying on a one-stop-shop arrangement.

    And for the best privacy, as with security, a multi-layered approach is better. So throw in a VPN, throw in something like the Mullvad Browser, throw in pseudonymous accounts, throw in different usernames and passwords across accounts, throw in…
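    As a rough sketch of the HTTPS point above (assuming Python’s stdlib ssl module and example.com as a stand-in destination): the hostname still leaks to the network through DNS and the TLS SNI field, but the request and response bytes themselves travel encrypted.

        import socket
        import ssl

        HOST = "example.com"  # stand-in destination; any HTTPS site works

        # An on-path observer (your ISP, say) can see the IP you connect to, the
        # DNS lookup, and the hostname in the TLS handshake's SNI field...
        ctx = ssl.create_default_context()
        with socket.create_connection((HOST, 443)) as raw_sock:
            with ctx.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
                # ...but anything sent through the wrapped socket is encrypted, so
                # the HTTP request below never crosses the wire in cleartext.
                tls_sock.sendall(
                    f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
                )
                print(tls_sock.recv(200))  # decrypted locally; ciphertext in transit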





  • It exists; it’s called a robots.txt file. Developers can put one in place, and then bots like the Internet Archive’s crawler will ignore the content.

    And therein lies the issue: if you disallow the content in robots.txt, every well-behaved bot will skip it, including search engine indexers (see the sketch at the end of this comment).

    So huge publishers want it both ways: they want to be indexed, but they don’t want the content to be archived.

    If the NYT is serious about not wanting their content on the Web Archive while still wanting humans to see it, the solution is simple: put that content behind a login! But the NYT doesn’t want to do that, since then they’d lose out on the ad revenue of having regular people load their website.

    I think in the case of the article here, though, the motivation is a bit more nefarious, in that the NYT et al. simply don’t want to be held accountable. So there’s a choice for them to make: either retain the privilege of being regarded as serious journalism, or act like a bunch of hacks who can’t be relied upon.
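    And since the robots.txt point above comes up a lot, here’s roughly what it looks like in practice (the path, URL, and user agent strings are just illustrative): a single wildcard Disallow, checked with Python’s stdlib urllib.robotparser, shuts out the archive crawler and the search indexer alike.

        import urllib.robotparser

        # Hypothetical robots.txt a publisher might serve; the Disallow applies
        # to every user agent, so archivers and indexers are blocked together.
        robots_txt = "User-agent: *\nDisallow: /articles/\n"

        parser = urllib.robotparser.RobotFileParser()
        parser.parse(robots_txt.splitlines())

        # Both crawlers get the same answer for the same (made-up) URL.
        for bot in ("ia_archiver", "Googlebot"):
            allowed = parser.can_fetch(bot, "https://example.com/articles/some-story")
            print(f"{bot}: {'allowed' if allowed else 'blocked'}")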