# Installation
This guide covers all installation options for Prompt Amplifier.
## Basic Installation
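The core package installs directly from PyPI (this is the same command the Troubleshooting section uses):

```shell
pip install prompt-amplifier
```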
**What you get:**

- ✅ Core `PromptForge` class
- ✅ TF-IDF embedder (free, no API key)
- ✅ In-memory vector store
- ✅ Document loaders (TXT, CSV, JSON)

**What you still need:**

- ❌ API key for the `expand()` function (OpenAI, Google, or Anthropic)
- ❌ PDF/DOCX support (install the `loaders` extra)
- ❌ Semantic embeddings (install the `embeddings-local` extra)
## Installation Options

### Full Installation (Recommended)
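Assuming the package follows the common `[all]` extras convention (the exact extra name is not stated in this guide and may differ), a full install would look like:

```shell
# Install every optional dependency: loaders, embedders, vector stores, generators
pip install prompt-amplifier[all]
```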
What you get: Everything! All loaders, embedders, vector stores, and generators.
Size: ~2GB (includes ML models)
### Minimal + Google Gemini (Lightweight)
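A lightweight setup pairs the core package with Google's client library (the same `google-generativeai` package this guide suggests installing when the Google generator is missing):

```shell
pip install prompt-amplifier google-generativeai
```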
What you get:
- Core library
- Google Gemini generator (has free tier!)
Size: ~50MB
Best for: Quick testing, Google Colab
### Production Setup
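Combining the extras named elsewhere in this guide (`loaders`, `embeddings-local`, `vectorstore-chroma`), a production install would look like this sketch:

```shell
pip install prompt-amplifier[loaders,embeddings-local,vectorstore-chroma]
```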
What you get:
- All document loaders (PDF, DOCX, Excel, etc.)
- Sentence Transformers (semantic embeddings)
- ChromaDB (persistent storage)
Size: ~1GB
## Feature-Specific Extras

### Document Loaders
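Document loaders install via the `loaders` extra mentioned above:

```shell
pip install prompt-amplifier[loaders]
```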
Adds support for:
| Format | Library Used |
|---|---|
| PDF | pypdf |
| DOCX | python-docx |
| Excel | openpyxl |
| Web pages | beautifulsoup4, requests |
| YouTube | youtube-transcript-api |
| RSS feeds | feedparser |
### Embedders
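Local semantic embeddings install via the `embeddings-local` extra (the same command the verification script below prints when the dependency is missing):

```shell
pip install prompt-amplifier[embeddings-local]
```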
Adds:
- Sentence Transformers (semantic search)
- BM25 (keyword search)
Models used:

- `all-MiniLM-L6-v2` (384 dimensions, fast)
- `all-mpnet-base-v2` (768 dimensions, better quality)
**OpenAI embeddings:** adds `text-embedding-3-small` and `text-embedding-3-large`. Requires `OPENAI_API_KEY`.

**Google embeddings:** adds `text-embedding-004`. Requires `GOOGLE_API_KEY`.
### Vector Stores
**ChromaDB.** Use case: local persistent storage.
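ChromaDB support installs via the `vectorstore-chroma` extra (the same command the verification script below prints when ChromaDB is missing):

```shell
pip install prompt-amplifier[vectorstore-chroma]
```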
### LLM Generators

| Generator | Requires |
|---|---|
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |
| Google Gemini | `GOOGLE_API_KEY` (free tier available!) |
| Ollama | Ollama running locally |
## Verify Installation
Run this to verify everything works:
```python
# Test basic import
from prompt_amplifier import PromptForge
print("✅ Core library installed")

# Test with sample data
forge = PromptForge()
forge.add_texts(["Test document 1", "Test document 2"])
print(f"✅ Added {forge.chunk_count} chunks")

# Test search (no API key needed)
results = forge.search("test")
print(f"✅ Search works! Found {len(results.results)} results")

# Check available extras
try:
    from prompt_amplifier.embedders import SentenceTransformerEmbedder
    print("✅ Sentence Transformers available")
except ImportError:
    print("❌ Install with: pip install prompt-amplifier[embeddings-local]")

try:
    from prompt_amplifier.vectorstores import ChromaStore
    print("✅ ChromaDB available")
except ImportError:
    print("❌ Install with: pip install prompt-amplifier[vectorstore-chroma]")

try:
    from prompt_amplifier.generators import GoogleGenerator
    print("✅ Google generator available")
except ImportError:
    print("❌ Install with: pip install google-generativeai")
```
**Expected output:**

```
✅ Core library installed
✅ Added 2 chunks
✅ Search works! Found 2 results
✅ Sentence Transformers available (or install message)
✅ ChromaDB available (or install message)
✅ Google generator available (or install message)
```
## Google Colab Quick Setup
For Google Colab, run these cells:
**Cell 1: Install**
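Assuming the lightweight Gemini setup recommended for Colab earlier in this guide, Cell 1 would be:

```shell
# In Colab, prefix shell commands with "!"
!pip install prompt-amplifier google-generativeai
```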
**Cell 2: Set API Key**
```python
import os

os.environ["GOOGLE_API_KEY"] = "your-key-from-aistudio.google.com"
print("✅ API key set!")
```
**Cell 3: Test**
```python
from prompt_amplifier import PromptForge
from prompt_amplifier.generators import GoogleGenerator

forge = PromptForge(generator=GoogleGenerator())
forge.add_texts(["Hello world", "Testing prompt amplifier"])

result = forge.expand("test the system")
print(result.prompt)
```
## Troubleshooting

### "No module named 'prompt_amplifier'"
```bash
# Make sure you installed it
pip install prompt-amplifier

# If using a virtual environment, activate it first
source venv/bin/activate   # Linux/Mac
# or
.\venv\Scripts\activate    # Windows
```
### "ImportError: openai is required"
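Following the same pattern as the sentence-transformers fix in the next entry, installing the missing client library directly should resolve this:

```shell
pip install openai
```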
### "ImportError: sentence_transformers is required"
```bash
pip install sentence-transformers
# or install the extra
pip install prompt-amplifier[embeddings-local]
```
### "No space left on device" (Colab)
Sentence Transformers downloads ~400MB models. In Colab:
```python
# Use a smaller model
from prompt_amplifier.embedders import SentenceTransformerEmbedder

embedder = SentenceTransformerEmbedder(model="all-MiniLM-L6-v2")  # smallest
```
## Next Steps
- Quick Start - Get running in 5 minutes
- Configuration - Customize behavior
- Core Concepts - Understand how it works