
Conversation

@cdoern cdoern (Contributor) commented Sep 3, 2025

What does this PR do?

This document outlines different API stability levels, how to enforce them, and next steps.

Next Steps

Following the adoption of this document, all existing APIs should follow the enforcement protocol.

relates to #3237

@meta-cla meta-cla bot added the CLA Signed label Sep 3, 2025
@cdoern cdoern (Contributor Author) commented Sep 3, 2025

cc @nathan-weinberg since I know you wanted a look at this :)

@mattf mattf (Collaborator) left a comment

focusing only on the openai compat apis, how would this framework classify the following and why -

  • /v1/chat/completions
  • /v1/completions
  • /v1/files
  • /v1/batches

@cdoern cdoern (Contributor Author) commented Sep 3, 2025

great point @mattf

As the next steps indicate, I think leveling each API will be a big task and will require evaluating how "API complete" each one is compared to the OpenAI spec.

Off the top of my head though:

/v1/chat/completions -- seems stable, so likely would remain v1 as this is the most commonly used API
/v1/completions -- also likely stable

With completions and chat completions, though, I know there are various features going in as we approach 0.3.0, so we'd need to evaluate whether any of them are breaking; if they are, the API would need to be v1alpha until a consumer can reliably upgrade between z-streams without breakage.

/v1/files -- I have seen a few major enhancements to files go in recently (like #3283 and the s3 provider in general), so I'd imagine this would be v1alpha1 for flexibility until we are sure the surface area is complete. I am not the expert here though and would leave this leveling up to folks more familiar with the API surface.

/v1/batches -- given the large changes like #3309 and maybe #3261, #3171, etc., I think this should be v1alpha1 unless we can ensure this churn is over by perhaps 0.3.0.

v1alpha1 IMO should be viewed as a good thing and not a "downgrade" as it allows us to perfect these APIs without issues of support between versions, stability concerns, etc.

and just a note -- the reason I chose alpha and not beta is, as the doc states, that beta is almost a brief stepping stone between alpha and v1 where not much major development should happen.

@reluctantfuturist reluctantfuturist mentioned this pull request Sep 2, 2025
@mattf mattf (Collaborator) commented Sep 3, 2025

> great point @mattf
>
> As the next steps indicate, I think leveling each API will be a big task and will require evaluating how "API complete" each one is compared to the OpenAI spec.

working through some examples will help, at least me, understand the framework.

> Off the top of my head though:

🙏

> /v1/chat/completions -- seems stable, so likely would remain v1 as this is the most commonly used API
> /v1/completions -- also likely stable
>
> With completions and chat completions, though, I know there are various features going in as we approach 0.3.0, so we'd need to evaluate whether any of them are breaking; if they are, the API would need to be v1alpha until a consumer can reliably upgrade between z-streams without breakage.

by definition, the shape of these apis (path, input, output) is set and as stable as openai makes them.

there are variations in completeness of the implementation.

  • an implementation may be incomplete because the adapter is missing something it can implement, or
  • an implementation may have inconsistent semantics compared to openai or other adapters, e.g. logprob semantics, or
  • the implementation may be incomplete because it cannot be completed, e.g. image input to a text-only model or multi-image input to a single-image-only model

the first of these is arguably a gap to close.

the second is arguably a bug to fix, but may not be feasible. for instance, the nvidia service does not honor the number of logprobs requested. the llama api service is stricter about json schema for tool calls than other providers.

how would these provider differences impact the classification of the api under this framework?

how would you describe the third using this framework?

> /v1/files -- I have seen a few major enhancements to files go in recently (like #3283 and the s3 provider in general), so I'd imagine this would be v1alpha1 for flexibility until we are sure the surface area is complete. I am not the expert here though and would leave this leveling up to folks more familiar with the API surface.

also by definition, the shape is stable. it may be new to stack, but it isn't changing.

in the case of /v1/files, the localfs adapter does not implement expiration, while the s3 adapter does.

how does the difference in adapter implementation impact the classification of the api?

> /v1/batches -- given the large changes like #3309 and maybe #3261, #3171, etc., I think this should be v1alpha1 unless we can ensure this churn is over by perhaps 0.3.0.

by definition here, the shape is also stable.

missing from the implementation is support for /v1/embeddings and /v1/responses, which happen to be part of the openapi spec (endpoint is an enum of /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions).

the api shape for a Batch includes a status enum with fields validating, failed, in_progress, finalizing, completed, expired, cancelling, cancelled. the adapter will not produce a finalizing status.
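
For reference, a rough Python sketch of that status enum, using exactly the values listed above (illustrative only; the authoritative definitions are the OpenAI Batch spec and llama_stack's own types):

```python
from enum import Enum

class BatchStatus(str, Enum):
    """Batch status values named in the spec, per the list above."""
    VALIDATING = "validating"
    FAILED = "failed"
    IN_PROGRESS = "in_progress"
    FINALIZING = "finalizing"   # noted above: the current adapter never produces this
    COMPLETED = "completed"
    EXPIRED = "expired"
    CANCELLING = "cancelling"
    CANCELLED = "cancelled"
```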

unlike the inference and files endpoints, /v1/batches only has one inline provider.

how do these aspects impact the classification?

> v1alpha1 IMO should be viewed as a good thing and not a "downgrade" as it allows us to perfect these APIs without issues of support between versions, stability concerns, etc.

a practical consideration here: when using the LlamaStackClient or OpenAIClient to interact w/ these apis, a path must be provided. users will need multiple clients to talk to each of the top-level api versions, e.g. v1client = Client(base_url=".../v1"), alphaclient = Client(base_url=".../v1alpha1")
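
To make that concrete, a minimal sketch; the `Client` class here is just a stand-in for LlamaStackClient or an OpenAI-compatible client that accepts a base_url, and the host and port are placeholders:

```python
# Stand-in for LlamaStackClient / an OpenAI-compatible client constructor.
class Client:
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

# One client per top-level API version prefix: stable routes vs. alpha routes.
v1_client = Client(base_url="http://localhost:8321/v1")
alpha_client = Client(base_url="http://localhost:8321/v1alpha1")
```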

> and just a note -- the reason I chose alpha and not beta is, as the doc states, that beta is almost a brief stepping stone between alpha and v1 where not much major development should happen.

@reluctantfuturist reluctantfuturist mentioned this pull request Sep 3, 2025
@cdoern cdoern (Contributor Author) commented Sep 3, 2025

@mattf I think simply put:

If any of our OpenAI-compatible APIs are not "API complete" -- in the sense that a new route still needs to be added to the API itself (not a provider), or a breaking change to the API datatypes is still expected (like changing the required params for a route or its return type) -- that is when something needs to be v1alpha1 or v1beta1.

so if our OpenAI-compatible APIs are missing something that is in the OpenAI spec, I think that merits a less-than-v1 ranking until we are 1:1 with what OpenAI documents.

an example:

let's say post_training needs a massive change and supervised_fine_tune needs a new required parameter. This would happen in llama_stack/apis/post_training/... as well as in any providers. This is a breaking change that merits a less-than-v1 leveling of the entire API.
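
A hypothetical sketch of that kind of change (the method signature and types below are illustrative only, not the real llama_stack/apis/post_training definitions):

```python
from typing import Protocol

class PostTrainingJob:
    """Stub return type, for illustration only."""

# Hypothetical current shape: callers pass job_uuid and model.
class PostTrainingV1(Protocol):
    async def supervised_fine_tune(self, job_uuid: str, model: str) -> PostTrainingJob: ...

# Hypothetical revised shape: dataset_id becomes a new *required* parameter,
# so every existing caller breaks -- exactly the kind of change that keeps
# the API below v1 until its surface settles.
class PostTrainingNext(Protocol):
    async def supervised_fine_tune(
        self, job_uuid: str, model: str, dataset_id: str
    ) -> PostTrainingJob: ...
```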

however, let's say the ollama inference provider needs some new logic in how it internally handles streaming chat completions, but no changes are required to the inference router or the api types in llama_stack/apis. This would not be a breaking change and allows the API to stay at v1.

So generally: provider changes do not correlate to API maturity; rather, API-level datatype or structural changes to required endpoints necessitate a lower level than v1.

Does this align with your thinking?

@ashwinb ashwinb (Contributor) commented Sep 3, 2025

I think there are two aspects here:

  • @cdoern is mostly concerned about maturity of the API definition ("is this settled", "will this randomly change")
  • @mattf is thinking about maturity of the API implementation ("does this work as advertised")

And it is not clear whether one should merge both concerns into a single token "v1alpha1". I am sure this issue has been thought of by other projects before?


## Different Levels

### v1alpha1
Contributor

/alpha/v1 or /beta/v1/ feel slightly more readable?

Contributor Author
perhaps. I chose v1alpha1 and v1beta1 to mimic the leveling in k8s, and a single level in the URL like localhost:8321/v1alpha1/post_training/... seems clean? I am fine with whichever honestly.

Contributor

Just for my understanding of why k8s chose this format: is there an expectation that we would create v1alpha2? /v1alpha and /v2alpha seem to make sense as versions that are going to go into the eventual stable /v1 or /v2 versions.

Contributor Author

I don't think v1alpha2 would exist. If it helps clear it up I can switch both to v1alpha and v1beta without the trailing 1. I agree, it could be nice to extend this in the case of a v2 api.

Contributor Author

switched for now, let me know if v1alpha and v1beta make sense here @ashwinb @raghotham 🙏

@r3v5 r3v5 (Contributor) left a comment

Nice work, @cdoern! This looks great, thank you!

@r3v5 r3v5 (Contributor) left a comment

Suggested some small improvements though.

@cdoern cdoern (Contributor Author) commented Sep 4, 2025

> I think there are two aspects here:
>
>   • @cdoern is mostly concerned about maturity of the API definition ("is this settled", "will this randomly change")
>   • @mattf is thinking about maturity of the API implementation ("does this work as advertised")
>
> And it is not clear whether one should merge both concerns into a single token "v1alpha1". I am sure this issue has been thought of by other projects before?

yeah @ashwinb that is the proper delineation.

I think in LLS specifically, what matters most is the API definition: datatypes, API routes + parameters + return types.

I kind of view the providers similarly to operators in k8s, where the maturity of an individual operator is not correlated to the maturity of all high level APIs. Of course, there is some intertwined nature, but this proposal is basically saying:

Providers can iterate as much as they want on functionality as long as they work within the bounds of an API. If they need to change the API, then the API should not be /v1, or those breaking changes can only happen on a y-stream release basis.

@cdoern cdoern (Contributor Author) commented Sep 4, 2025

going to make some of the above suggestions and repush the proposal as is, generally.

@skamenan7 (Contributor)

Great work, @cdoern! Thanks!

@r3v5 r3v5 (Contributor) left a comment

lgtm

@leseb leseb (Collaborator) left a comment

Getting close!

@leseb leseb (Collaborator) left a comment

Solid foundation to start measuring our current APIs and onwards. Thanks!

@cdoern cdoern (Contributor Author) commented Sep 9, 2025

rebased

@franciscojavierarceo franciscojavierarceo (Collaborator) left a comment

lgtm

one last nit would be to include a proposal over the current state of APIs in this

@cdoern cdoern (Contributor Author) commented Sep 9, 2025

> lgtm
>
> one last nit would be to include a proposal over the current state of APIs in this

thanks @franciscojavierarceo! I think this warrants its own piece of work as a follow-up. I was imagining this would merge and then the work to actually define which APIs are at which level would happen immediately after, so that no assumptions are made without research into the actual stability. Hope that makes sense!

@cdoern cdoern (Contributor Author) commented Sep 9, 2025

@mattf changed the verbiage discussing the surface a provider must implement to:

- an API can graduate from `v1alpha` to `v1beta` if the team has identified the extent of the mandatory surface of the API. "mandatory surface" means non-optional routes and the shape of their parameters/return types, e.g. `/v1/openai/chat/completions`. Optional types can change.
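
As an illustration of mandatory vs. optional surface, a minimal pydantic-style sketch (hypothetical field names, not the actual llama_stack request models):

```python
from pydantic import BaseModel

class ChatCompletionRequest(BaseModel):
    # Mandatory surface: the route plus these required fields; changing or
    # removing them is a breaking change once the API reaches v1beta or beyond.
    model: str
    messages: list[dict]

    # Optional surface: may still evolve while the API is pre-v1.
    temperature: float | None = None
```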

@reluctantfuturist (Contributor)

> > lgtm
> > one last nit would be to include a proposal over the current state of APIs in this
>
> thanks @franciscojavierarceo! I think this warrants its own piece of work as a follow-up. I was imagining this would merge and then the work to actually define which APIs are at which level would happen immediately after, so that no assumptions are made without research into the actual stability. Hope that makes sense!

+1 -- let's handle it separately (both defining which APIs are which, and figuring out how to reflect it in the docs)


Providers can iterate as much as they want on functionality as long as they work within the bounds of an API. If they need to change the API, then the API should not be `/v1`, or those breaking changes can only happen on a y-stream release basis.

### Approval and Announcement Process for Breaking Changes
Contributor

should probably also include something like this to define a protocol for when there is a breaking change - #3260. A PR that is titled a specific way will not fail the oasdiff check.

Collaborator

+1 it'll make it easier to call out in the release notes and any additional announcements (e.g., in discord, email, etc.).

Contributor Author

so by this you mean: I should add a bullet here describing how the PR title and commit message should include an indicator of a breaking change? I can add that!

Contributor Author

I added a section here; luckily, Conventional Commits outlines how to handle this: https://www.conventionalcommits.org/en/v1.0.0/#specification
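
For example, under that spec a breaking change is flagged with a `!` after the type/scope or with a `BREAKING CHANGE:` footer; a hypothetical commit message for illustration:

```text
feat(api)!: move post_training routes from /v1 to /v1alpha

BREAKING CHANGE: clients must update their base_url to use the /v1alpha prefix
```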
