This repository was archived by the owner on Jul 7, 2025. It is now read-only.
action.yml (1 addition, 1 deletion)
@@ -23,7 +23,7 @@ inputs:
   repoId:
     description: "LLM model"
     required: true
-    default: "microsoft/codereviewer"
+    default: "meta-llama/Llama-2-7b-chat-hf"
   maxNewTokens:
     description: "The number of new tokens to generate. This does not include the input length; it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and length of generated text."