AI has been around for years and we all utilize the results of that research.
Remember that at one time a compiler was seen as AI.
It’s the curse of AI: once a problem is solved, it’s no longer AI. It just becomes a tool, and we adjust what “intelligence” means to exclude the new abilities of computers and code.
Even LLMs have value, just not in the way they’re currently being used. If you carefully curate the training materials, you could have a useful tool.
I’d love to see an LLM trained exclusively on medical records of patients who were successfully diagnosed and treated. I wouldn’t want to give it a medical license, but it could be a useful tool in the hands of a competent physician. It might turn out to be useless, but we need to try it.
Try to separate the AI hype from AI.
Stop referring to LLMs as AI, for starters.
Why? On what basis?