Build production-ready AI applications in Go with type safety and ease.
- 🎯 Type-safe inputs and outputs with Go generics and JSON Schema validation
- 🔄 Real-time streaming with Go channels
- 🪝 Extensible hook system for pre/post processing
- ⛓️ Support for chaining operations
- 🔌 Provider agnostic with built-in OpenAI support
- 🧪 Comprehensive testing utilities with mock provider
```bash
go get github.com/arjunsriva/promptgen
```
Work directly with Go's basic types:
```go
// String generation
stringGen, _ := promptgen.Create[string, string]("Tell me a {{.}} joke")
joke, _ := stringGen.Run(ctx, "Dad")

// Integer estimation
intGen, _ := promptgen.Create[string, int]("Guess the age: {{.}}")
age, _ := intGen.Run(ctx, "college professor with grey hair")

// Float conversion
floatGen, _ := promptgen.Create[float64, float64]("Convert {{.}} Fahrenheit to Celsius")
celsius, _ := floatGen.Run(ctx, 98.6)
```
Define type-safe inputs and outputs with JSON Schema validation:
```go
type ProductInput struct {
	Name     string   `json:"name"`
	Features []string `json:"features"`
}

type ProductCopy struct {
	Title       string `json:"title" jsonschema:"required,maxLength=60"`
	Description string `json:"description" jsonschema:"required,maxLength=160"`
}

generator, _ := promptgen.Create[ProductInput, ProductCopy](`
Write product copy for {{.Name}}.
Features:
{{range .Features}}- {{.}}
{{end}}
`)

result, err := generator.Run(ctx, ProductInput{
	Name:     "Ergonomic Chair",
	Features: []string{"Adjustable height", "Lumbar support"},
})
```
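The `jsonschema` tags above are what constrain the model's output. As a rough illustration of what such constraints amount to at runtime (a hand-rolled check, not the library's actual validator), consider:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type ProductCopy struct {
	Title       string `json:"title"`
	Description string `json:"description"`
}

// validate mirrors the constraints the jsonschema tags express:
// both fields required, title ≤ 60 chars, description ≤ 160 chars.
func validate(c ProductCopy) error {
	if c.Title == "" || c.Description == "" {
		return fmt.Errorf("title and description are required")
	}
	if len(c.Title) > 60 {
		return fmt.Errorf("title exceeds 60 characters")
	}
	if len(c.Description) > 160 {
		return fmt.Errorf("description exceeds 160 characters")
	}
	return nil
}

func main() {
	raw := `{"title": "Ergonomic Chair", "description": "Adjustable height and lumbar support."}`
	var pc ProductCopy
	if err := json.Unmarshal([]byte(raw), &pc); err != nil {
		panic(err)
	}
	if err := validate(pc); err != nil {
		panic(err)
	}
	fmt.Println("valid:", pc.Title) // prints "valid: Ergonomic Chair"
}
```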
Process responses in real-time using Go channels:
```go
stream, _ := generator.Stream(ctx, input)

// Inside a function that returns error:
for {
	select {
	case chunk := <-stream.Content:
		fmt.Print(chunk)
	case err := <-stream.Err:
		handleError(err)
	case <-stream.Done:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```
Build complex workflows by chaining operations:
```go
// Define the chain of operations
var (
	classifyQuery, _ = promptgen.Create[Query, Classification](
		"Classify this query: {{.Text}}")
	generateResponse, _ = promptgen.Create[Classification, Response](
		"Generate response for {{.Category}} query")
)

// Execute the chain
classification, _ := classifyQuery.Run(ctx, query)
response, _ := generateResponse.Run(ctx, classification)
```
Add pre/post processing hooks for logging, metrics, or transformations:
```go
type LoggingHook struct {
	logger *log.Logger
}

func (h *LoggingHook) BeforeRequest(ctx context.Context, prompt string) (string, error) {
	h.logger.Printf("Sending prompt: %s", prompt)
	return prompt, nil
}

func (h *LoggingHook) AfterResponse(ctx context.Context, response string, err error) (string, error) {
	h.logger.Printf("Got response: %s", response)
	return response, err
}

generator.WithHook(&LoggingHook{logger: log.Default()})
```
Switch between providers or implement your own:
```go
// Use OpenAI
generator.WithProvider(provider.NewOpenAI(provider.OpenAIConfig{
	Model:       "gpt-4",
	Temperature: 0.7,
}))

// Use the mock provider for testing
generator.WithProvider(&provider.MockProvider{
	Response: "mocked response",
})
```
Check out the examples directory for more complex use cases:
- Chain Operations - Sequential processing
- Support Routing - Query classification and routing
- Parallel Processing - Concurrent operations
- Content Evaluation - Content moderation
- Translation - Language translation
```go
result, err := generator.Run(ctx, input)
if err != nil {
	switch {
	case errors.Is(err, promptgen.ErrRateLimit):
		// Handle rate limiting
	case errors.Is(err, promptgen.ErrContextLength):
		// Handle context length overflow
	case errors.Is(err, promptgen.ErrValidation):
		// Handle validation errors
	default:
		// Handle other errors
	}
}
```
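Rate-limit errors are the usual candidates for retries. A minimal backoff sketch built on the `errors.Is` pattern above (the sentinel here is a local stand-in, not the library's `ErrRateLimit`):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errRateLimit = errors.New("rate limited") // stand-in sentinel

// withRetry retries rate-limited calls with doubling backoff, up to max attempts.
// Any other error is returned immediately.
func withRetry(max int, base time.Duration, call func() (string, error)) (string, error) {
	var lastErr error
	delay := base
	for i := 0; i < max; i++ {
		out, err := call()
		if err == nil {
			return out, nil
		}
		if !errors.Is(err, errRateLimit) {
			return "", err // only retry rate limits
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2
	}
	return "", lastErr
}

func main() {
	attempts := 0
	flaky := func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errRateLimit
		}
		return "ok", nil
	}
	out, err := withRetry(5, time.Millisecond, flaky)
	fmt.Println(out, err, attempts) // prints "ok <nil> 3"
}
```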
Use the mock provider for reliable testing:
```go
mockProvider := &provider.MockProvider{
	Response: `{"title": "Test Title", "description": "Test Description"}`,
}
generator.WithProvider(mockProvider)

result, err := generator.Run(ctx, input)
```
See CONTRIBUTING.md for development setup and guidelines.
Apache 2.0 - See LICENSE for details.
This project was inspired by promptic, which showed how productive AI development could be in Python. I've built on that vision to create an idiomatic, type-safe Go experience.