AssemblerGPT - my GPT fine-tuning adventure
Hello everyone reading this! Before I get to the topic of this post, I would like to mention that, due to being busy, the next ISKRA Project articles will sadly be delayed; I won't be able to publish them at regular intervals.

Anyway, let's get into the topic I want to write about here. I have fine-tuned various AI models in the past, but I had never fine-tuned one of OpenAI's latest GPT models, so some time ago I decided to finally try it. For those of you who don't know, I will first explain what fine-tuning is. In a nutshell, it is the process of training an artificial intelligence model, usually to shape its output (responses) in a specific manner, to teach it to handle new tasks, or to provide it with new data. It is usually done by supplying a lot of examples of input (for example, questions) along with the desired output for each input (for example, the correct answer to each question). An example: if you fine-tune a text-generating AI on a theatrical play script...
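To make the input/output-pair idea concrete, here is a minimal sketch of how such examples can be written out as a training file in the JSONL chat format used by OpenAI's fine-tuning endpoint. The question/answer pairs and the file name are invented purely for illustration, not taken from my actual dataset:

```python
# A minimal sketch: writing a tiny set of input/output training pairs to a
# JSONL file in the chat format used by OpenAI's fine-tuning endpoint.
# The question/answer pairs below are hypothetical examples.
import json

examples = [
    ("What does MOV AX, 5 do?",
     "It loads the immediate value 5 into the AX register."),
    ("What does NOP do?",
     "Nothing; it simply advances to the next instruction."),
]

# Each line of the file is one training example: the user message is the
# input, and the assistant message is the desired output for that input.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for question, answer in examples:
        record = {
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

With enough pairs like these, the model learns to answer new, similar questions in the same style as the provided outputs.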