Alternative approaches:

  • Prompt engineering and models with large context windows (see the sketch after this list)
  • Training an open-source model for your specific use case (not from scratch; start from a foundation model)
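A rough sketch of the first alternative, using the openai-python client: few-shot examples go directly into the prompt instead of into training data. The model name and the ticket-classification task are placeholder assumptions, not taken from the sources below.

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Few-shot prompting: labeled examples live in the prompt itself,
  # rather than being baked into a fine-tuned model.
  completion = client.chat.completions.create(
      model="gpt-3.5-turbo-1106",
      messages=[
          {"role": "system", "content": "Classify support tickets as billing, bug, or other."},
          {"role": "user", "content": "I was charged twice this month."},
          {"role": "assistant", "content": "billing"},
          {"role": "user", "content": "The app crashes when I open settings."},
      ],
  )
  print(completion.choices[0].message.content)  # e.g. "bug"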

https://www.builder.io/blog/build-ai

https://www.tidepool.so/2023/08/17/why-you-probably-dont-need-to-fine-tune-an-llm/

How-To

https://platform.openai.com/docs/guides/fine-tuning

Steps:

  1. Prepare and upload training data (a JSONL file of example conversations)
  2. Create a fine-tuning job to train the model
  3. Wait for the job to finish, then evaluate the results and iterate on the data if needed
  4. Use the fine-tuned model in the Chat Completions API
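A minimal sketch of that workflow with the openai-python client (v1 API). The file name training_data.jsonl is a placeholder; per the guide, each line of the file is one chat-format example of the form {"messages": [...]}.

  from openai import OpenAI

  client = OpenAI()

  # 1. Upload the training data (JSONL: one {"messages": [...]} example per line)
  training_file = client.files.create(
      file=open("training_data.jsonl", "rb"),
      purpose="fine-tune",
  )

  # 2. Create the fine-tuning job
  job = client.fine_tuning.jobs.create(
      training_file=training_file.id,
      model="gpt-3.5-turbo-1106",
  )

  # 3. Check on the job; once it succeeds, fine_tuned_model holds the new model name
  job = client.fine_tuning.jobs.retrieve(job.id)
  print(job.status, job.fine_tuned_model)

  # 4. Use the fine-tuned model like any other model in the Chat Completions API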

Models and pricing

Fine-tuning is available for:

  • gpt-3.5-turbo-1106 (recommended)
  • gpt-3.5-turbo-0613
  • babbage-002
  • davinci-002
  • gpt-4-0613 (experimental; eligible users can request access in the fine-tuning UI)

https://platform.openai.com/docs/models

https://openai.com/pricing

Note that for some models the price per token differs between input (prompt) tokens and output (completion) tokens.
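A toy calculation to illustrate the asymmetry; the per-1K rates below are placeholders, not real prices (check the pricing page for current rates).

  # Placeholder rates (USD per 1K tokens); real rates are on openai.com/pricing
  INPUT_RATE = 0.003
  OUTPUT_RATE = 0.006

  def request_cost(input_tokens: int, output_tokens: int) -> float:
      """Cost of one request when input and output tokens are priced differently."""
      return input_tokens / 1000 * INPUT_RATE + output_tokens / 1000 * OUTPUT_RATE

  # 1,200 prompt tokens + 300 completion tokens
  print(f"${request_cost(1200, 300):.4f}")  # $0.0054 at the placeholder rates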

Advanced

Function calling and fine-tuning

https://platform.openai.com/docs/guides/function-calling
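A sketch of the basic request shape from the function-calling guide; get_weather and its schema are invented for illustration. The fine-tuning guide also discusses training on examples that include such tool definitions, so the model learns to call them more reliably.

  import json
  from openai import OpenAI

  client = OpenAI()

  # get_weather is a hypothetical function exposed to the model as a tool
  tools = [{
      "type": "function",
      "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"],
          },
      },
  }]

  response = client.chat.completions.create(
      model="gpt-3.5-turbo-1106",
      messages=[{"role": "user", "content": "What's the weather in Paris?"}],
      tools=tools,
  )

  # If the model chose to call the tool, its arguments arrive as a JSON string
  call = response.choices[0].message.tool_calls[0]
  print(call.function.name, json.loads(call.function.arguments))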

Technical details

Official client libraries

https://github.com/openai/openai-python

https://github.com/openai/openai-node

Official API

https://platform.openai.com/docs/guides/text-generation/chat-completions-api
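A minimal Chat Completions call with the Python client; the commented ft:... string shows only the general shape of a fine-tuned model id and is a placeholder, not a real model.

  from openai import OpenAI

  client = OpenAI()

  completion = client.chat.completions.create(
      # a fine-tuned model id looks like "ft:gpt-3.5-turbo-1106:my-org::abc123" (placeholder)
      model="gpt-3.5-turbo-1106",
      messages=[
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"},
      ],
  )
  print(completion.choices[0].message.content)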

https://platform.openai.com/docs/api-reference

https://github.com/openai/openai-openapi

See also