
  • These microplastics are digestible by your immune system, though, which makes them ultimately harmless. PLA is used for drug delivery for this reason.

    Being concerned about incomplete PLA degradation is like being concerned about a piece of wood breaking down into micro-woods. Yet even if you get a dangerous shard of micro-wood embedded in your skin, your body can deal with this cellulose polymer just fine.

    Ultimately it will break down completely, and in the meantime nothing will be harmed.



  • A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models, you have to review basically all of the output, as they can confidently introduce small errors. Often the code will compile and run, but it has small errors that cause the output to drift, or the aforementioned long-run overflow-type errors.

    Those are the errors that junior or lazy coders will never notice and walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries. (The classic example is sketched below.)

    I will only use AI to write comments and documentation blocks, and to get jumping-off points for algorithms I don’t keep in my head (“Write a function to sort this array”). It’s better than Stack Exchange for that, IMO.
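
    To make that concrete, the canonical “looks fine” bug is the binary-search midpoint overflow, which famously hid in real standard libraries for years. A minimal C sketch (my own illustration, not actual model output):

    ```c
    /* Binary search over a sorted int array. The commented-out midpoint
     * is the classic "compiles, runs, passes small tests" bug: low + high
     * can overflow a signed int on huge arrays, and signed overflow is
     * undefined behavior in C. */
    int binary_search(const int *a, int n, int key)
    {
        int low = 0, high = n - 1;

        while (low <= high) {
            /* int mid = (low + high) / 2;     looks fine, can overflow  */
            int mid = low + (high - low) / 2;  /* overflow-safe midpoint */

            if (a[mid] == key)
                return mid;
            if (a[mid] < key)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return -1; /* not found */
    }
    ```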


  • I tried using AI tools to clean up and refactor some legacy embedded C code, and I was curious whether they could do any optimization or knew any clever algorithms.

    It’s pretty good at figuring out the function of the code and adding comments, and it did some decent refactoring of a few sections to make them more readable.

    It has no clue how to work in a resource-constrained environment, though, or about the main concepts that separate embedded from everything else: namely, that the code has to be able to run “forever”, operate in real time on a constant flow of sensor data, and that nobody else is taking care of your memory management.

    It even explained to me that we could do input filtering by using big arrays for simple averaging on a device with only 1 kB of RAM, or use a long long for a never-reset accumulator without worrying about what will happen, because “it will be years before it overflows”. (The sketch below shows the constant-memory way to handle both.)

    AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems, the whole world is going to get a lot more glitchy.
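
    For the curious, here’s roughly what it should have suggested instead: a fixed-point exponential moving average needs only a couple of integers of state rather than a big sample array, and well-defined unsigned wraparound covers the “runs for decades” cases. A rough sketch (the names and the 1/16 smoothing factor are mine, purely illustrative):

    ```c
    #include <stdint.h>

    /* Fixed-point exponential moving average: one 32-bit state word
     * instead of a big sample buffer, so it fits in 1 kB of RAM and can
     * run forever. The state holds the average scaled by 2^EMA_SHIFT;
     * the smoothing factor is 1/2^EMA_SHIFT, so updates are just shifts
     * and adds -- no division, no floating point. */
    #define EMA_SHIFT 4 /* smoothing factor 1/16; tune per sensor */

    typedef struct {
        uint32_t state; /* average << EMA_SHIFT */
    } ema_t;

    static void ema_init(ema_t *f, uint16_t first_sample)
    {
        f->state = (uint32_t)first_sample << EMA_SHIFT;
    }

    /* avg += (sample - avg) / 16, done in scaled unsigned math.
     * Unsigned wraparound is well defined in C, and with 16-bit samples
     * the scaled state never exceeds 65535 << EMA_SHIFT, far below
     * uint32_t's limit, no matter how long the unit runs. */
    static uint16_t ema_update(ema_t *f, uint16_t sample)
    {
        f->state += sample - (f->state >> EMA_SHIFT);
        return (uint16_t)(f->state >> EMA_SHIFT);
    }

    /* Same idea for never-reset tick counters: compare differences,
     * never absolute values, and uint32_t rollover becomes harmless. */
    static int interval_elapsed(uint32_t now, uint32_t last, uint32_t ticks)
    {
        return (now - last) >= ticks;
    }
    ```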



  • For the free tier, Google Cloud is more transparent than AWS about what you get, IMO.

    The only catch is to make sure your persistent disk is “standard” to keep it totally free, as it defaults to SSD.

    However, if you do mess up the disk you’ll still only be paying $1-2/mo. I’ve been using GC for years, and they recently (finally!) started offering dual stack, so you can do your own 6to4 tunneling or translation if you want, depending on your use case.

    AirVPN is also legit and will let you forward ports to expose your local services if you’re worried about DMCA-type issues.

    I finally got IPv6 here through Starlink; it’s nice to have full access to the internet again after a decade behind CGNAT.


  • “I really don’t see how building a docker container afterward makes it easier”

    What it’s supposed to make easier is both sandboxing and reuse/deployment. For example, Docker + Traefik makes some tasks so incredibly easy and secure compared to running them on bare metal. Or if you need to spin up multiple instances, they can be created and destroyed in seconds. Without the container, this just isn’t feasible.

    The Dockerfile uses MySQL because it works. If you want to know whether the core service works with PostgreSQL, that’s not really on the guy who wrote the Dockerfile; that’s on the application maintainer. Read the docs, do some testing, and create your own container using its own PostgreSQL or connecting to an external database if that suits your needs better.

    Once again, the flexibility of bind mounts means you can often drop that external database right on top of the one in the container. That’s the real beauty of Docker IMO: being able to slot containers into your system seamlessly thanks to the mount system.

    “adapting can be a pita when the package is built around a really specific environment”

    That’s the great thing about Docker: it lets you bring that really specific environment anywhere, and in an incredibly lightweight manner compared to the old days of heavyweight VMs. I’ve even got Docker containers running on a Raspberry Pi B+ that’s otherwise so old it would be nearly impossible to install the libraries required to run modern software.
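
    To make the bind-mount point concrete, here’s a minimal docker-compose sketch. The service name, image, paths, and environment variable are all made up for illustration, so check the actual application’s docs before copying anything:

    ```yaml
    # Minimal sketch only -- every name below is hypothetical.
    services:
      app:
        image: someapp:latest            # stand-in for the real image
        volumes:
          # Bind mount: this host directory replaces the container's
          # data directory, so existing data slots in seamlessly.
          - /srv/someapp/data:/var/lib/someapp
        environment:
          # Or point the app at an external database instead of a
          # bundled one (the variable name depends on the app).
          DB_HOST: db.example.lan
        ports:
          - "8080:8080"
    ```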


  • You can download from Spotify using Zotify. Albums and playlists work fine; if you set it to Artist, unfortunately you will get a bunch of singles and EPs that you have to clean up.

    If you have Premium you can download at high bitrates; otherwise you get Ogg Vorbis at around 150 kbps ABR. You can automatically transcode to whatever format you want; I then feed it to beets to catalogue it and deliver it with Ampache.

    I like the moderate-bitrate OGGs myself, as I often stream from Ampache to my phone and our mobile service is quite slow. So this system works great for me.


  • Despite being proud to still fly the Jolly Roger for most media, I have to say that for the Linux gamer it’s nearly as cost-effective just to put Steam games on your wishlist and wait for the sale notification. Lots of great games can be had for single dollars, and you get support, patches, online play, etc., so it’s not worth the effort to plunder them.

    Honestly, I’ve found it’s rare for a Steam game to have issues on Linux these days, and if one does, just refund it and get your $5 back. Otherwise, as mentioned, they’re very hard to find.