ChatCompletionClient to support request caching #4752
Comments
Here's a basic idea I have based on what we had in …
We can modify the … As for the actual caching: since we use Pydantic models for the messages, we can encode the incoming prompt info as JSON and hash it for the cache key. WDYT @ekzhu / @jackgerrits ?
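A rough sketch of that hashing idea, assuming Pydantic v2 models (the cache_key helper and its signature are illustrative, not an existing autogen API):

```python
import hashlib
import json
from typing import Any, Sequence

from pydantic import BaseModel


def cache_key(messages: Sequence[BaseModel], **create_kwargs: Any) -> str:
    """Derive a deterministic cache key from the prompt messages and call options."""
    payload = json.dumps(
        {
            "messages": [m.model_dump() for m in messages],
            "kwargs": create_kwargs,
        },
        sort_keys=True,   # keep the JSON stable across runs
        default=str,      # fall back to str() for values JSON can't encode natively
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```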
For the abstract interface, we can keep it super simple so existing libraries like …
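One way to keep the interface that simple is a get/set protocol that common cache libraries already match. A sketch (the CacheStore name is an assumption, not a confirmed interface):

```python
from typing import Any, Optional, Protocol


class CacheStore(Protocol):
    """Hypothetical minimal store interface; mapping-style caches such as
    diskcache.Cache already expose get/set methods with these shapes."""

    def get(self, key: str, default: Optional[Any] = None) -> Optional[Any]: ...

    def set(self, key: str, value: Any) -> None: ...
```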
On a related note, for cases where the user requires all responses to be pulled from the cache, such as for quick regression tests, it could be useful to have the cached client throw an error (rather than calling the model_client) for any prompt that is not found in the cache. This functionality could be enabled by passing None as the model_client parameter. I've implemented a client wrapper that provides this caching and checking (plus numeric result checking) for my own regression tests, but my client wrapper isn't a complete ChatCompletionClient replacement.
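A sketch of that strict cache-only behavior, reusing the cache_key helper above (this is not the commenter's actual wrapper, which they say is coming in a PR; the class and error names are made up):

```python
from typing import Any, Dict, Optional, Sequence

from pydantic import BaseModel


class CacheMissError(LookupError):
    """Raised in cache-only mode when a prompt has no cached response."""


class RegressionTestCache:
    """Sketch: wraps an optional model client. With model_client=None,
    any cache miss raises instead of issuing a live model call."""

    def __init__(self, model_client: Optional[Any] = None,
                 store: Optional[Dict[str, Any]] = None) -> None:
        self._client = model_client
        self._store: Dict[str, Any] = store if store is not None else {}

    async def create(self, messages: Sequence[BaseModel], **kwargs: Any) -> Any:
        key = cache_key(messages, **kwargs)  # helper sketched earlier
        if key in self._store:
            return self._store[key]
        if self._client is None:
            # Strict mode for regression tests: never call a live model.
            raise CacheMissError(f"no cached response for key {key[:12]}...")
        response = await self._client.create(messages, **kwargs)
        self._store[key] = response
        return response
```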
@rickyloynd-microsoft Can you share a pointer/branch to your code, if possible?
Since the original client is passed during init (for other methods like model_info, etc.), this can probably be implemented as a kwarg on the …
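Purely for illustration, since the comment above is cut off and the exact parameter name is unknown (cache_only is invented here), that could look like:

```python
# Hypothetical: strict behavior toggled at construction instead of model_client=None.
cached_client = ChatCompletionCache(client, store, cache_only=True)  # raise on any cache miss
```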
It will be in a PR soon.
Support client-side caching for any ChatCompletionClient type. The simplest way to do it is to create a ChatCompletionCache type that implements the ChatCompletionClient protocol but wraps an existing client. Example of how this may work: