# Installation
## Requirements
- PHP 8.2 or higher
- Laravel 10.x, 11.x, or 12.x
- A valid API key from a supported LLM provider
## Install via Composer
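Require the package with Composer. The package name below is an assumption inferred from the `Cognesy\Instructor\Laravel` namespace; confirm the exact name on Packagist before running it:

```bash
# Assumed package name; verify on Packagist
composer require cognesy/instructor-laravel
```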
The package uses Laravel's auto-discovery, so the service provider and facades are registered automatically.
## Publish Configuration
Publish the configuration file to customize settings.
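The tag name below is an assumption based on common Laravel `vendor:publish` conventions; run `php artisan vendor:publish` without options to list the tags your installed version actually registers:

```bash
# Assumed tag name: check `php artisan vendor:publish` for the real one
php artisan vendor:publish --tag=instructor-config
```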
This creates `config/instructor.php` with all available options.
## Configure API Keys
Add your LLM provider API key to `.env`:
```env
# OpenAI (default)
OPENAI_API_KEY=sk-...

# Or Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Or other providers
GEMINI_API_KEY=...
GROQ_API_KEY=...
MISTRAL_API_KEY=...
```
## Verify Installation
Run the installation command to verify everything is configured correctly.
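The command name below is an assumption; many Laravel packages ship an `install` Artisan command, but check `php artisan list` for what this package actually registers:

```bash
# Assumed command name: confirm with `php artisan list`
php artisan instructor:install
```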
This will:

1. Publish the configuration if not already published
2. Check for API key configuration
3. Show next steps
## Test Your Connection
Test that your API configuration is working.
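A sketch assuming a hypothetical `instructor:test` Artisan command; verify the actual name with `php artisan list`:

```bash
# Hypothetical command name
php artisan instructor:test
```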
Or test a specific connection.
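Continuing the same assumption, a specific connection might be targeted with an option such as `--connection` (both the option and the connection name here are hypothetical):

```bash
# Hypothetical option and connection name
php artisan instructor:test --connection=anthropic
```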
## Optional: Publish Stubs
You can publish the response model stubs if you want to customize them.
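The tag name below is assumed, following the same `vendor:publish` convention as the config publish above:

```bash
# Assumed tag name
php artisan vendor:publish --tag=instructor-stubs
```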
This publishes the stubs to `stubs/instructor/` in your application.
## Manual Registration (Optional)
If you've disabled auto-discovery, register the service provider and facade aliases manually in `config/app.php`:
```php
'providers' => [
    // ...
    Cognesy\Instructor\Laravel\InstructorServiceProvider::class,
],

'aliases' => [
    // ...
    'StructuredOutput' => Cognesy\Instructor\Laravel\Facades\StructuredOutput::class,
    'Inference' => Cognesy\Instructor\Laravel\Facades\Inference::class,
    'Embeddings' => Cognesy\Instructor\Laravel\Facades\Embeddings::class,
],
```
## Upgrading
When upgrading to a new version, republish the configuration if new options have been added.
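Laravel's `vendor:publish` accepts a `--force` flag that overwrites the existing file; the tag name remains an assumption as above. Back up any local changes to `config/instructor.php` first:

```bash
# Assumed tag name; --force overwrites the existing config/instructor.php
php artisan vendor:publish --tag=instructor-config --force
```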
Review the changelog for breaking changes.
## Next Steps
- Configuration - Configure connections and settings
- Facades - Learn how to use the facades
- Response Models - Create your first response model