This repository was archived by the owner on Jul 7, 2025. It is now read-only.

Commit a79475f

Merge pull request #16 from luiyen/llama2
llama2-7b-chat-hf
2 parents d84f4cf + e85b26e commit a79475f

File tree

4 files changed: +17 additions, -16 deletions


.github/workflows/test-action.yml

Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ jobs:
           githubRepository: ${{ github.repository }}
           githubPullRequestNumber: ${{ github.event.pull_request.number }}
           gitCommitHash: ${{ github.event.pull_request.head.sha }}
-          repoId: "microsoft/codereviewer"
+          repoId: "meta-llama/Llama-2-7b-chat-hf"
           temperature: "0.2"
           maxNewTokens: "250"
           topK: "50"

README.md

Lines changed: 1 addition & 1 deletion

@@ -69,7 +69,7 @@ jobs:
           githubRepository: ${{ github.repository }}
           githubPullRequestNumber: ${{ github.event.pull_request.number }}
           gitCommitHash: ${{ github.event.pull_request.head.sha }}
-          repoId: "microsoft/codereviewer"
+          repoId: "meta-llama/Llama-2-7b-chat-hf"
           temperature: "0.2"
           maxNewTokens: "250"
           topK: "50"

action.yml

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ inputs:
   repoId:
     description: "LLM model"
     required: true
-    default: "microsoft/codereviewer"
+    default: "meta-llama/Llama-2-7b-chat-hf"
   maxNewTokens:
     description: "The amount of new tokens to be generated, this does not include the input length it is a estimate of the size of generated text you want. Each new tokens slows down the request, so look for balance between response times and length of text generated."
     required: false
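For context, the inputs defined above (`repoId`, `maxNewTokens`, `temperature`, `topK`) correspond to Hugging Face text-generation parameters. The sketch below is a hypothetical illustration of the request payload implied by these inputs and the workflow defaults — it is not the action's actual code, which lives in `entrypoint.py`.

```python
import json

# New default model from this commit; overridable via the `repoId` input.
repo_id = "meta-llama/Llama-2-7b-chat-hf"
endpoint = f"https://api-inference.huggingface.co/models/{repo_id}"

# Hypothetical payload mapping action inputs to generation parameters.
payload = {
    "inputs": "<review prompt goes here>",
    "parameters": {
        "temperature": 0.2,      # action input `temperature`
        "max_new_tokens": 250,   # action input `maxNewTokens`
        "top_k": 50,             # action input `topK`
    },
}
print(endpoint)
print(json.dumps(payload["parameters"]))
```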

entrypoint.py

Lines changed: 14 additions & 13 deletions

@@ -86,13 +86,14 @@ def get_review(
         )
     for chunked_diff in chunked_diff_list:
         question=chunked_diff
-        template = """Provide a concise summary of the bug found in the code, describing its characteristics,
-    location, and potential effects on the overall functionality and performance of the application.
-    Present the potential issues and errors first, following by the most important findings, in your summary
-    Important: Include block of code / diff in the summary also the line number.
-    ```
-    {question}
-    ```
+        template = """Provide a concise summary of the bug found in the code, describing its characteristics,
+    location, and potential effects on the overall functionality and performance of the application.
+    Present the potential issues and errors first, following by the most important findings, in your summary
+    Important: Include block of code / diff in the summary also the line number.
+
+    Diff:
+
+    {question}
     """
 
     prompt = PromptTemplate(template=template, input_variables=["question"])
@@ -106,12 +107,12 @@ def get_review(
 
     question="\n".join(chunked_reviews)
     template = """Summarize the following file changed in a pull request submitted by a developer on GitHub,
-    focusing on major modifications, additions, deletions, and any significant updates within the files.
-    Do not include the file name in the summary and list the summary with bullet points.
-    Important: Include block of code / diff in the summary also the line number.
-    ```
-    {question}
-    ```
+    focusing on major modifications, additions, deletions, and any significant updates within the files.
+    Do not include the file name in the summary and list the summary with bullet points.
+    Important: Include block of code / diff in the summary also the line number.
+
+    Diff:
+
+    {question}
 """
     prompt = PromptTemplate(template=template, input_variables=["question"])
     llm_chain = LLMChain(prompt=prompt, llm=llm)
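The net effect of the `entrypoint.py` change is that the prompt templates now label the diff with a plain "Diff:" heading instead of wrapping it in triple-backtick fences. A minimal sketch of the new template is below; plain `str.format` stands in for LangChain's `PromptTemplate`, which substitutes `{question}` the same way, and the sample diff is illustrative only.

```python
# New prompt shape from this commit: "Diff:" label instead of ``` fences.
new_template = """Provide a concise summary of the bug found in the code, describing its characteristics,
location, and potential effects on the overall functionality and performance of the application.
Present the potential issues and errors first, following by the most important findings, in your summary
Important: Include block of code / diff in the summary also the line number.

Diff:

{question}
"""

# Illustrative diff snippet (mirrors the repoId change in this commit).
sample_diff = '-    repoId: "microsoft/codereviewer"\n+    repoId: "meta-llama/Llama-2-7b-chat-hf"'

# PromptTemplate(template=..., input_variables=["question"]) fills the
# {question} slot; str.format demonstrates the identical substitution.
prompt_text = new_template.format(question=sample_diff)
print(prompt_text)
```

Dropping the fences avoids the model conflating the delimiter with code it is asked to review, and keeps the diff clearly labeled in the prompt.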

0 commit comments
