
Commit 49ad257

Update dummy-agent-library.mdx
Hello,

This commit corrects two related issues in the unit1/dummy-agent-library.ipynb notebook and its corresponding course webpage (dummy-agent-library.mdx) to improve accuracy and consistency.

1. Fix incorrect prompt formatting in the notebook.

   The original code that builds the prompt for the second LLM call (after executing the tool) was missing the required "Observation:" prefix. This breaks the Thought -> Action -> Observation cycle defined in the system prompt, leading to model confusion and unpredictable outputs. The fix explicitly adds the "Observation:\n" string to the assistant's message, so the prompt strictly adheres to the ReAct-style format and the agent behaves more reliably.

       # Before
       {"role": "assistant", "content": output.choices[0].message.content + get_weather('London')}

       # After
       {"role": "assistant", "content": output.choices[0].message.content + "Observation:\n" + get_weather('London')}

2. Correct misleading output on the course webpage (.mdx file).

   The output displayed on the webpage for the API call using stop=["Observation:"] was incorrect: it showed the "Observation:" stop word as part of the model's output, which does not reflect the actual behavior of the API (the stop sequence is consumed and not included in the response). The corrected version shows the accurate model output, which stops right before "Observation:" would be generated.

These changes make the notebook and the webpage consistent with each other and with the underlying principles being taught.

Thank you!
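For reference, the Thought -> Action -> Observation cycle mentioned above looks roughly like this in the assembled prompt (a paraphrased illustration of the format, not the notebook's verbatim prompt or output):

```
Thought: I should call get_weather to find out the current weather in London.
Action:
{
  "action": "get_weather",
  "action_input": {"location": "London"}
}
Observation: the weather in London is sunny with low temperatures.
```

The model is stopped at "Observation:", the real tool result is spliced in after that prefix, and generation resumes; dropping the prefix is what broke the cycle.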
1 parent b145dce commit 49ad257

File tree

1 file changed (+3 / -3 lines)


units/en/unit1/dummy-agent-library.mdx

Lines changed: 3 additions & 3 deletions
@@ -211,8 +211,8 @@ Action:
   "action": "get_weather",
   "action_input": {"location": "London"}
 }
-```
-Observation:
+
+
 ````
 
 Much Better!
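For context on this hunk: the stop string is consumed by the API rather than echoed back, which is why the displayed output should end before "Observation:". A minimal sketch of the call in question, assuming `huggingface_hub`'s `InferenceClient`; the model name and `SYSTEM_PROMPT` placeholder are illustrative stand-ins for the ones defined earlier in the notebook:

```python
import os
from huggingface_hub import InferenceClient

# Illustrative stand-ins for values defined earlier in the notebook.
SYSTEM_PROMPT = "...ReAct-style system prompt describing the get_weather tool..."
client = InferenceClient(model="meta-llama/Llama-3.3-70B-Instruct", token=os.environ.get("HF_TOKEN"))

output = client.chat.completions.create(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What's the weather in London ?"},
    ],
    max_tokens=200,
    stop=["Observation:"],  # generation halts here; the stop string itself is not returned
)

# The text ends just before "Observation:" would have been generated,
# matching the corrected output on the webpage.
print(output.choices[0].message.content)
```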
@@ -239,7 +239,7 @@ Let's concatenate the system prompt, the base prompt, the completion until funct
 messages=[
     {"role": "system", "content": SYSTEM_PROMPT},
     {"role": "user", "content": "What's the weather in London ?"},
-    {"role": "assistant", "content": output.choices[0].message.content + get_weather('London')},
+    {"role": "assistant", "content": output.choices[0].message.content + "Observation:\n" + get_weather('London')},
 ]
 
 output = client.chat.completions.create(
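And here is the corrected second call from this hunk in runnable context. A minimal sketch, assuming `client`, `SYSTEM_PROMPT`, and the first completion `output` (stopped at "Observation:") from the previous step; the dummy tool's return value is illustrative:

```python
# Dummy tool in the spirit of the notebook's example; the return value is illustrative.
def get_weather(location: str) -> str:
    return f"the weather in {location} is sunny with low temperatures. \n"

# Re-inject the truncated completion plus the tool result, restoring the
# "Observation:" prefix so the transcript follows the ReAct format exactly.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What's the weather in London ?"},
    {"role": "assistant", "content": output.choices[0].message.content + "Observation:\n" + get_weather("London")},
]

# Resume generation: the model now sees a complete Thought -> Action -> Observation
# turn and can produce its final answer.
final = client.chat.completions.create(messages=messages, max_tokens=200)
print(final.choices[0].message.content)
```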
