
Commit 6d13b0f

update readme to accurately reflect usage with env variables
1 parent 7315615 commit 6d13b0f

3 files changed

Lines changed: 72 additions & 35 deletions


README.md

Lines changed: 70 additions & 33 deletions
@@ -18,6 +18,70 @@ Install LangGuard using pip:
 pip install langguard
 ```
 
+## Configuration
+
+### Required Components
+
+To use GuardAgent, you need:
+1. **LLM Provider** - Currently supports `"openai"` or `None` (test mode)
+2. **API Key** - Required for OpenAI (via environment variable)
+3. **Prompt** - The text to screen (passed to `screen()` method)
+4. **Model** - Optional, defaults to `gpt-4o-mini`
+
+### Setup Methods
+
+#### Method 1: Environment Variables (Recommended)
+
+```bash
+export GUARD_LLM_PROVIDER="openai"       # LLM provider to use
+export GUARD_LLM_API_KEY="your-api-key"  # Your OpenAI API key
+export GUARD_LLM_MODEL="gpt-4o-mini"     # Optional: OpenAI model (default: gpt-4o-mini)
+export LLM_TEMPERATURE="0.1"             # Optional: temperature 0-1 (default: 0.1)
+```
+
+Then in your code:
+```python
+from langguard import GuardAgent
+
+agent = GuardAgent()  # Automatically uses environment variables
+response = agent.screen("Your prompt here")
+```
+
+#### Method 2: Partial Configuration
+
+```bash
+export GUARD_LLM_API_KEY="your-api-key"  # API key must be in environment
+```
+
+```python
+from langguard import GuardAgent
+
+agent = GuardAgent(llm="openai")  # Specify provider in code
+response = agent.screen("Your prompt here")
+```
+
+#### Method 3: Test Mode (No API Required)
+
+```python
+from langguard import GuardAgent
+
+# No provider specified = test mode
+agent = GuardAgent()  # Uses TestLLM, no API needed
+response = agent.screen("Your prompt here")
+# Always returns {"safe": false, "reason": "Test mode - always fails for safety"}
+```
+
+### Environment Variables Reference
+
+| Variable | Description | Required | Default |
+|----------|-------------|----------|---------|
+| `GUARD_LLM_PROVIDER` | LLM provider (`"openai"` or `None`) | No | `None` (test mode) |
+| `GUARD_LLM_API_KEY` | API key for OpenAI | Yes (for OpenAI) | - |
+| `GUARD_LLM_MODEL` | Model to use | No | `gpt-4o-mini` |
+| `LLM_TEMPERATURE` | Temperature (0-1) | No | `0.1` |
+
+**Note**: Currently, API keys and models can only be configured via environment variables, not passed directly to the constructor.
+
 ## Quick Start
 
 ### Basic Usage - Plug and Play
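The resolution order the new Configuration section documents (provider unset means test mode, model defaults to `gpt-4o-mini`, temperature defaults to 0.1) can be sketched as a small resolver. This is an illustrative sketch of the documented behavior, not LangGuard's actual internals; `resolve_config` is a hypothetical helper name.

```python
import os

# Hypothetical sketch of the env-var resolution described in the README's
# reference table; not LangGuard's real implementation.
def resolve_config(env=None):
    if env is None:
        env = os.environ
    return {
        "provider": env.get("GUARD_LLM_PROVIDER"),      # None -> test mode
        "api_key": env.get("GUARD_LLM_API_KEY"),        # required only for "openai"
        "model": env.get("GUARD_LLM_MODEL", "gpt-4o-mini"),
        "temperature": float(env.get("LLM_TEMPERATURE", "0.1")),
    }

cfg = resolve_config({"GUARD_LLM_PROVIDER": "openai", "GUARD_LLM_API_KEY": "your-api-key"})
```

With no variables set, the resolver falls back to test mode with the documented defaults.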
@@ -87,35 +151,8 @@ if is_safe:
     pass
 ```
 
-## 🔧 Configuration
-
-### Environment Variables
-
-LangGuard can be configured using environment variables:
-
-```bash
-# LLM Provider Configuration
-export GUARD_LLM_PROVIDER="openai"  # Options: "openai", or None for test mode
-export GUARD_LLM_MODEL="gpt-4o-mini"  # OpenAI model to use
-export GUARD_LLM_API_KEY="your-api-key"  # Your OpenAI API key
-export LLM_TEMPERATURE="0.1"  # Temperature for LLM generation (0-1)
-```
-
-### Programmatic Configuration
-
-```python
-from langguard import GuardAgent
-
-# Configure via code
-agent = GuardAgent(
-    llm="openai",  # or None for test mode
-    config={
-        "default_specification": "Your default security rules here"
-    }
-)
-```
 
-## 🛠️ Advanced Usage
+## Advanced Usage
 
 ### Advanced Usage
 
@@ -167,7 +204,7 @@ GuardAgent comes with built-in protection against:
 - **Malicious Generation**: Phishing emails, malware, or exploit code
 - **Prompt Manipulation**: Instructions to ignore previous rules or reveal system prompts
 
-## 🧪 Testing
+## Testing
 
 The library includes comprehensive test coverage for various security scenarios:
 
@@ -192,7 +229,7 @@ LangGuard can detect and prevent:
 - **Medical Advice**: Filters out specific medical diagnosis requests
 - **Harmful Content**: Blocks requests for dangerous information
 
-## 🏗️ Architecture
+## Architecture
 
 LangGuard follows a modular architecture:
 
@@ -210,7 +247,7 @@ langguard/
 - **LLM Providers**: Pluggable LLM backends (OpenAI with structured output support)
 - **GuardResponse**: Typed response structure with pass/fail status and reasoning
 
-## 🤝 Contributing
+## Contributing
 
 Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
 
@@ -220,11 +257,11 @@ Contributions are welcome! Please feel free to submit a Pull Request. For major
 4. Push to the branch (`git push origin feature/amazing-feature`)
 5. Open a Pull Request
 
-## 📄 License
+## License
 
 This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
 
-## 🔗 Links
+## Links
 
 - [GitHub Repository](https://github.com/langguard/langguard-python)
 - [Issue Tracker](https://github.com/langguard/langguard-python/issues)
pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "langguard"
-version = "0.3.0"
+version = "0.4.0"
 description = "A Python library for language security"
 readme = "README.md"
 requires-python = ">=3.11"

src/langguard/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 """LangGuard - A library for language security."""
 
-__version__ = "0.3.0"
+__version__ = "0.4.0"
 
 # Primary interface
 from .agent import GuardAgent, GuardResponse
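This commit bumps the version string in two places, `pyproject.toml` and `src/langguard/__init__.py`, which invites drift. A consistency check like the sketch below can catch a half-bumped release; `pyproject_version` is a hypothetical helper, and the snippet stands in for reading the actual files:

```python
import re

# Hypothetical drift check: extract the [project] version from pyproject.toml
# text and compare it with the module's __version__.
def pyproject_version(pyproject_text: str) -> str:
    match = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.MULTILINE)
    if match is None:
        raise ValueError("no version field found")
    return match.group(1)

pyproject_snippet = '[project]\nname = "langguard"\nversion = "0.4.0"\n'
module_version = "0.4.0"  # would be langguard.__version__ in a real check
assert pyproject_version(pyproject_snippet) == module_version
```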
