

If you’re talking about TOTP exclusively, that only needs the secret and the correct time on the device. The secret is cached on the device along with the passwords.
LLMs are perfectly fine, and cool tech. The problem is that they’re billed as actual intelligence, or as things that can replace humans. Sure, they mimic humans well enough, but it would take a lot more than just absorbing content to be good enough to replace a human rather than just aid one. Either the content needs to be manually processed to add social context, or new tech needs to be built that includes models for how to interpret content in every culture represented by every piece of content, including dead cultures whose work is available to the model. Otherwise, “hallucinations” (i.e., misinterpretation and thus miscategorization of data) will make them totally unreliable without human filtering.
That being said, there are many more targeted uses of the tech that are quite good, but always with the need for a human to verify the output.
There’s no need to have Vaultwarden up all the time unless you use new devices often or create and modify entries very frequently. The data is cached on the device and kept encrypted locally by the app, so a little downtime shouldn’t be a big issue in the large majority of cases.
Signal isn’t that kind of app. It protects your data in flight, but offers only minimal protections after the recipient gets the message. Protecting data at the endpoint is a whole other game. If you can’t trust your recipients to protect data, then you shouldn’t send them data that needs protection. To actually enforce that, you’d need control over every level of the receiving device: hardware, operating system, file system, and software. Anything else leaves openings for data at rest at the destination to be compromised by untrustworthy recipients.