It’s easy to train a model to do exactly what you want, with exactly the apparent “personality” you want. It’s just incredibly expensive: you have to vet and filter everything you use to train it, and that’s a lot of person-hours. Days. Years.
The only reason models act the way they do is the data that went into training them.
If you try to bolt restrictions onto the model after the fact, they’ll always be imperfect, and more or less easy to break out of.
You can also take a model trained on all kinds of data and tell it “generate ten billion articles of fascist knob-gobbling” and then train your own model on that data.
It’ll be complete AI slop, of course, but it’s not like you cared about truth or accuracy in the first place.
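To make that distillation step concrete, here’s a toy sketch in Python. Everything in it is a stand-in: a character-bigram table plays the “teacher” (a real pipeline would use an actual LLM and its sampling API), and the placeholder corpus plays the human-written training data. The point is just the shape of the move: the “student” trains purely on the teacher’s output, never on the original text.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count character-bigram transitions: a toy stand-in for model training."""
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def sample(model, length=500, seed="t"):
    """Generate text by walking the bigram table."""
    out = [seed]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return "".join(out)

# "Teacher" trained on human-written text (placeholder corpus).
human_text = "the quick brown fox jumps over the lazy dog " * 50
teacher = train_bigram(human_text)

# Generate a synthetic corpus with the teacher, then train the
# "student" on nothing but that output -- the distillation step.
synthetic_corpus = sample(teacher, length=5000)
student = train_bigram(synthetic_corpus)

print(sample(student, length=80))
```

The student can only ever reproduce patterns the teacher happened to emit, which is exactly why the result is slop.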
That’s a real-world issue: AIs training on each other’s output and devolving because of it. There will come a point when vendors that infringe on user content to train their AIs end up worse off for it.
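And here’s the devolution loop itself, in the same toy setup (again, the bigram table is just an assumed stand-in for a real model). Each generation trains only on the previous generation’s samples. Since a bigram model can only emit transitions it has seen, each generation’s set of distinct bigrams is a subset of the last one’s: diversity can only flatline or shrink, which is a miniature version of what happens to models trained on model output.

```python
import random
from collections import defaultdict

def train(text):
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def sample(model, length, seed):
    out = [seed]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return "".join(out)

random.seed(0)
corpus = "the quick brown fox jumps over the lazy dog " * 50

# Each generation trains only on the previous generation's output.
# Rare transitions that don't get sampled are gone for good.
for gen in range(6):
    model = train(corpus)
    corpus = sample(model, length=2000, seed=corpus[0])
    distinct = len({corpus[i:i + 2] for i in range(len(corpus) - 1)})
    print(f"generation {gen}: {distinct} distinct bigrams")
```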