# How to Use Custom Models in Cursor
This guide explains how to integrate and use custom AI models in Cursor, allowing you to leverage different AI capabilities for your development workflow.
## Supported Model Types

Cursor supports various AI model integrations:

- **OpenAI Compatible Models**
  - Anthropic Claude
  - DeepSeek
  - Mistral
  - Local LLMs
- **Custom API Endpoints**
  - Self-hosted models
  - Cloud API services
  - Custom implementations
## Basic Configuration

### Setting Up Custom Models

1. Open Cursor Settings.
2. Navigate to the AI Models section.
3. Add a new model configuration:

```json
{
  "models": {
    "custom-model": {
      "name": "Your Model Name",
      "apiKey": "your-api-key",
      "baseUrl": "https://api.your-model-endpoint.com/v1",
      "contextLength": 8192,
      "temperature": 0.7
    }
  }
}
```
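Because the schema may change between Cursor versions, it is worth sanity-checking a configuration before relying on it. Below is a minimal Python sketch that validates the required fields from the table that follows; the file name `models.json` and the schema itself are assumptions based on the example above, not an official Cursor contract.

```python
import json

REQUIRED_FIELDS = {"name", "apiKey", "baseUrl"}  # per the parameter table below

def validate_models_config(path: str) -> None:
    """Check that every model entry defines the required fields.

    The path and schema mirror the example above; Cursor's real settings
    location and schema may differ between versions.
    """
    with open(path, encoding="utf-8") as f:
        config = json.load(f)

    for model_id, entry in config.get("models", {}).items():
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"{model_id}: missing fields {sorted(missing)}")

validate_models_config("models.json")  # hypothetical file name
```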
### Configuration Parameters

| Parameter | Description | Default |
|---|---|---|
| `name` | Display name for the model | Required |
| `apiKey` | API authentication key | Required |
| `baseUrl` | API endpoint URL | Required |
| `contextLength` | Maximum context window, in tokens | 4096 |
| `temperature` | Sampling randomness; higher values produce more varied output | 0.7 |
## Model Integration

### OpenAI Compatible Models

```json
{
  "models": {
    "custom-gpt": {
      "name": "Custom GPT",
      "apiKey": "${OPENAI_API_KEY}",
      "baseUrl": "https://api.openai.com/v1",
      "model": "gpt-4",
      "contextLength": 8192
    }
  }
}
```
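Because this configuration targets any OpenAI-compatible API, you can verify the endpoint and key outside Cursor with the official `openai` Python client. This is just a quick smoke test, not part of Cursor itself; the model name must match one your endpoint actually serves.

```python
from openai import OpenAI

# Point the standard OpenAI client at the same baseUrl as the config above.
# The client reads OPENAI_API_KEY from the environment by default.
client = OpenAI(base_url="https://api.openai.com/v1")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Reply with the word 'ok'."}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```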
### Anthropic Claude Setup

```json
{
  "models": {
    "claude": {
      "name": "Claude",
      "apiKey": "${ANTHROPIC_API_KEY}",
      "baseUrl": "https://api.anthropic.com/v1",
      "model": "claude-2",
      "contextLength": 100000
    }
  }
}
```
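Note that Anthropic's API is not OpenAI-compatible: it authenticates with an `x-api-key` header rather than a Bearer token and requires an `anthropic-version` header. A raw request to the Messages endpoint, sketched below, is a quick way to confirm the key and endpoint before wiring them into Cursor; use the same model name as your Cursor configuration.

```python
import os
import requests

# Direct call to Anthropic's Messages API to verify key and endpoint.
resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-2",  # match the model configured above
        "max_tokens": 16,
        "messages": [{"role": "user", "content": "Reply with the word 'ok'."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["content"][0]["text"])
```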
### Local Model Configuration

```json
{
  "models": {
    "local-llm": {
      "name": "Local LLM",
      "baseUrl": "http://localhost:8000",
      "contextLength": 4096,
      "useDocker": true
    }
  }
}
```
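Most local serving stacks (llama.cpp's server, vLLM, Ollama's OpenAI-compatible mode, and so on) expose an OpenAI-style HTTP API. Before pointing Cursor at `http://localhost:8000`, confirm the server is up and see which model IDs it reports; the sketch below assumes the server exposes an OpenAI-compatible `/v1/models` route.

```python
import requests

# Query the local server's model listing; any OpenAI-compatible
# server (vLLM, llama.cpp server, etc.) should answer this route.
resp = requests.get("http://localhost:8000/v1/models", timeout=5)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```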
## Advanced Settings

### Model Behavior

Configure model behavior:

```json
{
  "models": {
    "custom-model": {
      "settings": {
        "temperature": 0.7,
        "topP": 0.9,
        "frequencyPenalty": 0.0,
        "presencePenalty": 0.0,
        "stopSequences": ["```", "###"]
      }
    }
  }
}
```
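These settings correspond directly to the sampling parameters of an OpenAI-style completion request, so you can experiment with values outside Cursor before committing them to the configuration. A sketch, assuming an OpenAI-compatible endpoint:

```python
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY and the default endpoint

# The keys in the "settings" block above map onto these request fields.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a haiku about compilers."}],
    temperature=0.7,        # higher = more random sampling
    top_p=0.9,              # nucleus sampling cutoff
    frequency_penalty=0.0,  # discourage repeated tokens
    presence_penalty=0.0,   # discourage repeated topics
    stop=["```", "###"],    # equivalent of stopSequences
)
print(response.choices[0].message.content)
```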
### Response Formatting

```json
{
  "models": {
    "custom-model": {
      "formatting": {
        "trimWhitespace": true,
        "removeNewlines": false,
        "maxTokens": 1000
      }
    }
  }
}
```
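Here `maxTokens` caps generation length on the request itself, while `trimWhitespace` and `removeNewlines` are post-processing applied to the raw output. A minimal sketch of the post-processing half, assuming those semantics:

```python
def format_response(text: str, trim_whitespace: bool = True,
                    remove_newlines: bool = False) -> str:
    """Apply the formatting options from the config to a raw completion."""
    if trim_whitespace:
        text = text.strip()
    if remove_newlines:
        text = " ".join(text.split())  # collapse all whitespace runs
    return text

print(format_response("  hello\nworld \n", remove_newlines=True))  # -> "hello world"
```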
## Model-Specific Features

### Code Completion

```json
{
  "models": {
    "code-model": {
      "features": {
        "codeCompletion": true,
        "contextAware": true,
        "multiFile": true
      }
    }
  }
}
```
### Chat Capabilities

```json
{
  "models": {
    "chat-model": {
      "features": {
        "chat": true,
        "systemPrompts": true,
        "streaming": true
      }
    }
  }
}
```
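Streaming returns the response token by token rather than as a single payload, which is what lets chat output render incrementally. With an OpenAI-compatible endpoint the flag corresponds to `stream=True`; a sketch:

```python
from openai import OpenAI

client = OpenAI()

# stream=True yields chunks as they are generated instead of one payload.
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Explain tail recursion in one sentence."},
    ],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```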
## Performance Optimization

### Caching Settings

```json
{
  "models": {
    "custom-model": {
      "cache": {
        "enabled": true,
        "maxSize": "1GB",
        "ttl": 3600
      }
    }
  }
}
```
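Conceptually, these settings describe a size-bounded store whose entries expire after `ttl` seconds. A minimal in-memory sketch of that behavior (an illustrative helper, not Cursor's implementation):

```python
import time
from collections import OrderedDict

class TTLCache:
    """Tiny LRU cache whose entries expire after ttl seconds."""

    def __init__(self, max_entries: int = 1024, ttl: float = 3600):
        self.max_entries, self.ttl = max_entries, ttl
        self._store = OrderedDict()  # key -> (insert time, value)

    def get(self, key: str):
        item = self._store.get(key)
        if item is None or time.monotonic() - item[0] > self.ttl:
            self._store.pop(key, None)  # expired or absent
            return None
        self._store.move_to_end(key)  # mark as recently used
        return item[1]

    def put(self, key: str, value: str) -> None:
        self._store[key] = (time.monotonic(), value)
        self._store.move_to_end(key)
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least recently used

cache = TTLCache(ttl=3600)
cache.put("prompt-hash", "cached completion")
print(cache.get("prompt-hash"))
```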
### Rate Limiting

```json
{
  "models": {
    "custom-model": {
      "rateLimit": {
        "requestsPerMinute": 60,
        "tokensPerMinute": 90000,
        "concurrent": 5
      }
    }
  }
}
```
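Client-side, a `requestsPerMinute` limit is typically enforced with a token-bucket or sliding-window limiter. A sliding-window sketch that blocks until a request slot is free (illustrative only):

```python
import threading
import time
from collections import deque

class RateLimiter:
    """Block callers so at most `per_minute` requests start per minute."""

    def __init__(self, per_minute: int = 60):
        self.per_minute = per_minute
        self._timestamps = deque()  # monotonic start times in the window
        self._lock = threading.Lock()

    def acquire(self) -> None:
        while True:
            with self._lock:
                now = time.monotonic()
                # Drop request timestamps older than the 60-second window.
                while self._timestamps and now - self._timestamps[0] > 60:
                    self._timestamps.popleft()
                if len(self._timestamps) < self.per_minute:
                    self._timestamps.append(now)
                    return
                wait = 60 - (now - self._timestamps[0])
            time.sleep(wait)  # sleep outside the lock, then retry

limiter = RateLimiter(per_minute=60)
limiter.acquire()  # call before each API request
```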
## Troubleshooting

### Common Issues

- **Connection Problems** (see the connectivity check after this list)
  - Verify the API endpoint URL
  - Check network connectivity
  - Validate the API key
- **Response Errors**
  - Check model compatibility
  - Verify the request format
  - Review error messages
- **Performance Issues**
  - Optimize context length
  - Adjust cache settings
  - Monitor rate limits
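The three connection checks above can be run in one pass, assuming an OpenAI-compatible endpoint. In this sketch the URL and the `CUSTOM_MODEL_API_KEY` variable are placeholders; substitute the values from your own configuration.

```python
import os
import requests

base_url = "https://api.your-model-endpoint.com/v1"       # from your config
api_key = os.environ.get("CUSTOM_MODEL_API_KEY", "")      # hypothetical variable name

try:
    resp = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
except requests.ConnectionError as exc:
    print(f"Network or endpoint problem: {exc}")
else:
    if resp.status_code == 401:
        print("Endpoint reachable, but the API key was rejected.")
    else:
        resp.raise_for_status()
        print("Endpoint and key look valid.")
```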
## Best Practices

### Security

- **API Key Management** (see the sketch after this list)
  - Use environment variables
  - Rotate keys regularly
  - Implement access controls
- **Request Validation**
  - Sanitize inputs
  - Validate responses
  - Handle errors gracefully
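Reading keys from the environment keeps them out of version-controlled config files, and failing fast when a key is missing makes misconfiguration obvious. A sketch, using a hypothetical `require_env` helper:

```python
import os

def require_env(name: str) -> str:
    """Fetch a secret from the environment, failing loudly if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} first; keys do not belong in config files.")
    return value

api_key = require_env("OPENAI_API_KEY")  # matches ${OPENAI_API_KEY} in the configs above
```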
### Performance

- **Context Optimization** (see the trimming sketch after this list)
  - Minimize context size
  - Clear unused context
  - Use efficient prompts
- **Resource Management**
  - Monitor usage
  - Implement caching
  - Optimize requests
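Minimizing context usually means keeping only the most recent conversation turns that fit a token budget. A rough sketch that trims a chat history by estimated token count; the characters-divided-by-four estimate is a crude stand-in for a real tokenizer:

```python
def trim_history(messages: list[dict], max_tokens: int = 4096) -> list[dict]:
    """Keep the most recent messages whose rough token estimate fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg["content"]) // 4 + 4  # crude estimate, not a real tokenizer
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [{"role": "user", "content": "..."}]  # your running conversation
context = trim_history(history, max_tokens=4096)
```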
## Model Comparison

### Feature Matrix

| Feature | OpenAI | Claude | Local LLM |
|---|---|---|---|
| Code Completion | ✓ | ✓ | ✓ |
| Chat | ✓ | ✓ | Varies |
| Context Length | 8K–32K | 100K | Varies |
| Response Speed | Fast | Medium | Varies |
## Related Resources

- Model Configuration Guide
- API Integration
- Performance Optimization
## Conclusion

Custom model integration gives Cursor the flexibility to use whichever AI capabilities best suit your workflow. Following the configuration guidelines above helps keep your setup performant and reliable.