  • Because you don’t train your self-hosted LLM.
    As a result, you only pay for the electricity used to compute your tokens (your request). This can be especially reasonable if the same machine also does local game streaming and/or transcoding, and thus already meets the hardware requirements to host an LLM (rough numbers sketched below).

    Unless you have rather unreasonable means, though, your local LLM will be much more limited in parameter count (i.e. size), and will not be as good as the much larger hosted models (see the memory estimate below).

    Privacy, ethics, and personal interest are usually the biggest drivers, from what I can tell.
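
    A rough back-of-envelope for the "you only pay for electricity" point. All the numbers here (GPU draw, generation time, electricity price) are assumptions, not measurements; plug in your own:

    ```python
    # Back-of-envelope: electricity cost of one local LLM request.
    # All numbers are assumptions -- adjust for your hardware and rates.
    gpu_watts = 350           # assumed full-load GPU power draw
    seconds_per_request = 30  # assumed generation time for one answer
    price_per_kwh = 0.30      # assumed electricity price in $/kWh

    energy_kwh = gpu_watts * seconds_per_request / 3_600_000  # W*s -> kWh
    cost = energy_kwh * price_per_kwh
    print(f"~${cost:.4f} per request")  # roughly $0.0009 with these numbers
    ```

    Even with pessimistic numbers you end up at fractions of a cent per request, which is why the marginal cost argument holds once the hardware already exists for other workloads.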
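    And a sketch of why parameter count is the limit: the weights alone need roughly `parameters × bytes-per-parameter` of memory at a given quantization. The model sizes below are illustrative examples, and the formula ignores KV cache and runtime overhead:

    ```python
    # Rough VRAM estimate: weights dominate local LLM memory needs.
    def vram_gb(params_billions: float, bits_per_weight: int) -> float:
        """Approximate memory for weights alone (ignores KV cache, overhead)."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # Assumed example sizes -- a 7B model fits consumer cards, a 70B mostly doesn't.
    for params in (7, 13, 70):
        print(f"{params}B @ 4-bit: ~{vram_gb(params, 4):.0f} GB")
    # 7B ~ 4 GB, 13B ~ 7 GB, 70B ~ 35 GB -> beyond most consumer GPUs
    ```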