Nope. I don’t talk about myself like that.

  • 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 8th, 2023



  • I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets blasted with far more queries on average: the “AI” autocomplete triggers almost every time you stop typing, or seemingly at random.

    They did… You just refuse to acknowledge it. It’s no longer a discussion of simply 3 Wh per query when GitHub Copilot is making a query every time you pause typing. That can easily add up to hundreds or even thousands of queries a day (if not rate limited), which completely changes the scope of the argument.
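
    To put rough numbers on it (purely illustrative; the ~3 Wh/query figure is the one from this thread, and the daily query counts are assumptions made up for the comparison):

        # Back-of-envelope sketch: treat every query as costing the same ~3 Wh
        # and compare a casual ChatGPT user with Copilot autocomplete firing on
        # most typing pauses. Every count below is an assumption.
        WH_PER_QUERY = 3.0             # per-query figure cited in the thread
        CHATGPT_QUERIES_PER_DAY = 20   # assumed casual chat usage
        COPILOT_QUERIES_PER_DAY = 800  # assumed autocomplete volume over a coding day

        chatgpt_wh = WH_PER_QUERY * CHATGPT_QUERIES_PER_DAY  # 60 Wh/day
        copilot_wh = WH_PER_QUERY * COPILOT_QUERIES_PER_DAY  # 2400 Wh/day (2.4 kWh)

        print(f"ChatGPT: {chatgpt_wh:.0f} Wh/day, Copilot: {copilot_wh:.0f} Wh/day")
        print(f"Copilot: roughly {copilot_wh / chatgpt_wh:.0f}x more under these assumptions")

    An autocomplete query probably costs less than a full chat completion, so the exact numbers will be off; the point is that query volume dominates the comparison.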





  • I mean, that’s effectively the same boat I’m in. I run all my own stuff in my own cluster (recently posted some of it if you check my post history).

    But putting Jellyfin up for any user that isn’t on your network is literally a security nightmare. I can’t run blatantly insecure software and leave it internet-facing. It would be one thing if the issue had just been found and they were working on closing it… but this has been documented/known for 4 years. They’re not fixing it and have shown no interest in addressing it at all.

    A VPN is literally the only answer… and that breaks all TV-based access outright, since none of the TV apps do VPN. Basic auth doesn’t work. Other forms of auth break all app access (leaving only the browser). And every time one of these possible alternatives comes up, the devs have outright dismissed it. (There’s a rough sketch of what checking a VPN-only setup looks like at the end of this comment.)

    If/when Plex finally gets hostile, I’ll simply turn it off. But I can’t let Jellyfin be what serves my users; it just doesn’t work.
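
    As mentioned above, here’s a minimal sketch of sanity-checking a VPN-only setup, assuming Jellyfin is bound to a WireGuard address instead of the public interface. The addresses and port below are placeholders, not anyone’s real config:

        # Rough check that Jellyfin answers on the VPN address but not on the
        # public one. Both addresses are illustrative placeholders.
        import socket

        JELLYFIN_PORT = 8096          # Jellyfin's default HTTP port
        PUBLIC_ADDR = "203.0.113.10"  # placeholder: the server's public IP
        VPN_ADDR = "10.8.0.1"         # placeholder: the server's WireGuard address

        def is_open(host: str, port: int, timeout: float = 3.0) -> bool:
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print("reachable from the internet:", is_open(PUBLIC_ADDR, JELLYFIN_PORT))  # want False
        print("reachable over the VPN:", is_open(VPN_ADDR, JELLYFIN_PORT))          # want True

    And that’s exactly the setup the TV apps can’t participate in, which is the whole problem.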


  • I’ve spoken out on this same issue before… and as a former security instructor/researcher, it’s fucking scary how many people shit on Plex in favor of a platform that has had known auth vulnerabilities for 4 years now. Those flaws have existed since the previous code-base, so they’re at least 7 years old, and the same issues existed in the old Emby codebase going back over a decade.

    Plex isn’t perfect… there are risks involved there too… but at least when something is raised as a significant risk it seems to get fixed, beyond the implicit risks of the Plex org itself.

    All I read in these threads is effectively “WAAAH I don’t WANNA pay!”… without realizing that the payment bought them something significantly more secure.



  • The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

    My reason is that you can’t trust the answers regardless. Hallucinations are a rampant problem. Even if we cut it down to 1 in 100 queries hallucinating, you still couldn’t trust ANYTHING (see the rough numbers at the end of this comment). We’ve seen well-trained, narrowly targeted AIs that don’t directly take user input (so can’t easily be manipulated), like Google’s AI search results, recommending that people put glue on their pizza to make the cheese stick better… or claiming that geologists recommend eating a rock a day.

    If a custom-tailored AI can’t cut it… the general ones are not going to be all that valuable without significant external validation/moderation.
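
    To make the “1 in 100 is still useless” point concrete (the rate and query counts here are assumptions, just to show how quickly it compounds):

        # Probability of hitting at least one hallucination across n queries,
        # assuming each query independently hallucinates with probability p.
        def p_at_least_one_bad(p_per_query: float, n_queries: int) -> float:
            return 1.0 - (1.0 - p_per_query) ** n_queries

        for n in (10, 50, 100, 500):
            print(f"{n:>3} queries at 1% each -> "
                  f"{p_at_least_one_bad(0.01, n):.1%} chance of at least one bad answer")

    At 100 queries that’s already roughly a 63% chance of at least one confidently wrong answer, and you don’t know which one it is… which is why “only 1%” doesn’t buy any trust.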