# Troubleshooting

Common issues and solutions for NeuroCache.
- Installation Issues
- Cache Not Working
- Low Cache Hit Rate
- Performance Issues
- Redis Connection Problems
- Provider Errors
- Memory Issues
- TypeScript Errors
- Debugging Tips
## Installation Issues

Symptoms:

```
Error: Cannot find module 'neurocache'
```

Solution:
```bash
# Install NeuroCache
npm install neurocache

# Verify installation
npm list neurocache
```

Still not working?
```bash
# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
```

Symptoms:

```
npm WARN neurocache@1.0.0 requires a peer of redis@^4.0.0 but none is installed
```

Solution:
```bash
# If using RedisStore
npm install redis

# If using OpenAI provider
npm install openai

# Both
npm install redis openai
```

Note: These are optional peer dependencies. Only install what you need:

- `redis` - only if using `RedisStore`
- `openai` - only if using `OpenAIProvider`
## Cache Not Working

Symptoms:

- Every request takes ~2 seconds
- `cache.getCacheHitRate()` returns `0`
- Metrics show 0 cache hits
Diagnosis:

```ts
// Enable logging
const cache = new NeuroCache({
  // ...
  logging: true // See what's happening
});

// Make two identical requests
await cache.generate(request);
await cache.generate(request);

// Check metrics
console.log(cache.getMetrics());
```

Common Causes:
Problem:

```ts
// ❌ Different timestamp each time
const response = await cache.generate({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: `What is 2+2? (${Date.now()})` } // ← Changes!
  ]
});
```

Solution:
```ts
// ✅ Remove dynamic elements
const response = await cache.generate({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'user', content: 'What is 2+2?' } // ← Static
  ]
});
```

Problem:
```ts
// Even identical requests have different hashes due to temperature
const response = await cache.generate({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello' }],
  temperature: 1.5 // ← Cache key includes this
});
```

Solution:
```ts
// Use consistent temperature
const response = await cache.generate({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello' }],
  temperature: 0 // ← Deterministic
});
```

Note: Temperature is part of the cache key, so requests with different temperatures won't match.
Check the store:

```ts
// Test store directly
const store = cache['store']; // Access private property for debugging
await store.set('test-key', 'test-value', 60);
const value = await store.get('test-key');
if (value === null) {
  console.error('Store is not working!');
}
```

Solution: See Redis Connection Problems or Performance Issues.
Problem:

```ts
const cache = new NeuroCache({
  // ...
  ttl: 1 // 1 second - expires almost immediately!
});

await cache.generate(request);
await new Promise((r) => setTimeout(r, 2000)); // Wait 2 seconds
await cache.generate(request); // Cache miss (expired)
```

Solution:

```ts
const cache = new NeuroCache({
  // ...
  ttl: 3600 // 1 hour
});
```

Getting different responses for the same request? This should not happen: NeuroCache returns exact cached responses.
Diagnosis:

```ts
const response1 = await cache.generate(request);
const response2 = await cache.generate(request);
console.log('Same?', response1.content === response2.content); // Should be true
```

If false:

- Check if you're modifying responses after receiving them
- Verify the store is working correctly
- File a bug report
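On the first point above: if downstream code mutates a response (for example, appending to `content`), later cache hits can appear "different" even though the store returned the exact entry. A minimal sketch of the fix, assuming Node 17+ for `structuredClone` (the sample objects here are illustrative, not NeuroCache internals):

```typescript
// Deep-clone a cached response before mutating it, so the cached
// object itself stays pristine.
const cached = { content: 'Hello', usage: { totalTokens: 3 } };

const mutable = structuredClone(cached); // fully detached copy
mutable.content = 'Hello!';

console.log(cached.content);  // 'Hello' — the cached object is untouched
console.log(mutable.content); // 'Hello!'
```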
## Low Cache Hit Rate

A hit rate below 40% is worth investigating.
Diagnosis:

```ts
// Check metrics
const metrics = cache.getMetrics();
console.log({
  hitRate: (metrics.cacheHits / metrics.totalRequests * 100).toFixed(1) + '%',
  totalRequests: metrics.totalRequests,
  cacheHits: metrics.cacheHits,
  cacheMisses: metrics.cacheMisses
});
```

Common Causes:
Problem: Every request is different.

```ts
// Each user asks different questions
await cache.generate({
  messages: [{ role: 'user', content: 'Tell me about penguins' }]
});
await cache.generate({
  messages: [{ role: 'user', content: 'What is quantum physics?' }]
});
await cache.generate({
  messages: [{ role: 'user', content: 'How to bake cookies?' }]
});
```

Solution: This is expected for unique content. A low hit rate is normal here.
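To gauge whether your traffic can cache well at all, measure how unique your prompts actually are. A sketch (the helper name is ours, not part of NeuroCache):

```typescript
// Ratio of distinct prompts to total requests; near 1.0 means mostly
// unique prompts, so a low cache hit rate is the expected outcome.
function uniquenessRatio(prompts: string[]): number {
  return new Set(prompts).size / prompts.length;
}

console.log(uniquenessRatio(['a', 'a', 'b', 'c'])); // 0.75
```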
Problem:

```ts
// These are different without Context Intelligence
"What is 2+2?"
"What  is  2+2?"   // Extra spaces
"  What is 2+2?  " // Leading/trailing spaces
```

Solution:

```ts
const cache = new NeuroCache({
  // ...
  enableContextIntelligence: true // Default - normalizes whitespace
});
```

Problem:
```ts
await cache.generate({ messages: [{ role: 'user', content: 'What is 2+2?' }] });
await cache.generate({ messages: [{ role: 'user', content: 'What is 2 + 2?' }] }); // Space around +
await cache.generate({ messages: [{ role: 'user', content: 'Calculate 2+2' }] }); // Different wording
```

Solution: These are genuinely different requests. Consider:

- Normalizing input on your side
- Using semantic similarity (future feature)
- Accepting a lower hit rate for varied input
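One way to normalize on your side: run user input through a canonicalizing helper before calling `cache.generate`. This is a sketch, not a NeuroCache API, and the lowercase step may be too aggressive if prompts contain case-sensitive code:

```typescript
// Canonicalize user input before caching, so trivially different
// whitespace/case variants map to the same cache key.
function normalizePrompt(text: string): string {
  return text
    .trim()               // drop leading/trailing whitespace
    .replace(/\s+/g, ' ') // collapse internal whitespace runs
    .toLowerCase();       // fold case (skip if case matters)
}

console.log(normalizePrompt('  What is  2+2? ')); // 'what is 2+2?'
```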
## Performance Issues

Expected latency:

- Cold request (miss): ~1,500-2,000ms (OpenAI API latency)
- Cached request (hit): ~5-15ms (MemoryStore) or ~10-30ms (RedisStore)
Symptoms:

- Cached requests taking >100ms
- High latency even with a high hit rate

Diagnosis:

```ts
const t1 = Date.now();
await cache.generate(request);
console.log('First request:', Date.now() - t1, 'ms');

const t2 = Date.now();
await cache.generate(request);
console.log('Second request:', Date.now() - t2, 'ms'); // Should be <30ms
```

Common Causes:
Problem: FileStore or RedisStore is slow.

```ts
// Benchmark the store
const store = cache['store'];
const t1 = Date.now();
await store.get('test-key');
console.log('Store get:', Date.now() - t1, 'ms'); // Should be <5ms
```

Solution:
```ts
// Use MemoryStore for best performance
const cache = new NeuroCache({
  // ...
  store: new MemoryStore(10000)
});
```

Problem: Redis is on a remote server.

Solution: Deploy Redis closer to the app server, or use MemoryStore for a single-instance app.

Problem: Serialization/deserialization overhead.

```ts
// A 10KB response takes longer to (de)serialize than a 100-byte response
```

Solution: This is expected, and still much faster than calling the provider.
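You can confirm the overhead is serialization-bound with a quick measurement. A sketch using Node's high-resolution clock (actual numbers vary by machine):

```typescript
// Time one JSON serialize/deserialize round trip for a payload,
// in microseconds.
function roundTripMicros(payload: unknown): number {
  const t0 = process.hrtime.bigint();
  JSON.parse(JSON.stringify(payload));
  return Number(process.hrtime.bigint() - t0) / 1_000;
}

// A ~10KB payload takes longer than a ~100-byte one, but both are
// typically well under a millisecond — tiny next to a provider call.
console.log(roundTripMicros({ content: 'x'.repeat(10_000) }));
```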
Symptoms:

- App using 500MB+ RAM
- Out-of-memory errors
- Slow garbage collection

Diagnosis:

```ts
// Check MemoryStore size
const store = cache['store'] as MemoryStore;
console.log('Cache entries:', store['cache'].size); // Access private property
```

Solution:
```ts
// Reduce max entries
const cache = new NeuroCache({
  // ...
  store: new MemoryStore(1000) // Limit to 1000 entries
});

// Or use RedisStore (offload memory to Redis)
const cache = new NeuroCache({
  // ...
  store: new RedisStore({...})
});
```

## Redis Connection Problems

Symptoms:
```
Error: getaddrinfo ENOTFOUND localhost
Error: ECONNREFUSED 127.0.0.1:6379
```

Solution:
```bash
# Check if Redis is running
redis-cli ping
# Should return: PONG

# If not installed:

# macOS
brew install redis
brew services start redis

# Ubuntu
sudo apt install redis-server
sudo systemctl start redis

# Windows
# Download from https://github.com/microsoftarchive/redis/releases

# Docker
docker run -d -p 6379:6379 redis:7-alpine
```

```ts
// Verify host/port
const store = new RedisStore({
  host: 'localhost', // ← Correct hostname?
  port: 6379 // ← Correct port?
});

// Test connection
await store.set('ping', 'pong', 60);
const value = await store.get('ping');
console.log(value); // Should be 'pong'
```

Problem:
```
Error: NOAUTH Authentication required
```

Solution:
```ts
const store = new RedisStore({
  host: 'localhost',
  port: 6379,
  password: process.env.REDIS_PASSWORD // ← Add password
});
```

Problem:
```
Error: unable to verify the first certificate
```

Solution:
```ts
const store = new RedisStore({
  host: 'your-redis-cloud.com',
  port: 6380,
  password: 'your-password',
  tls: {
    rejectUnauthorized: false // ← For self-signed certs (dev only!)
  }
});
```

Production:
```ts
const store = new RedisStore({
  host: 'your-redis-cloud.com',
  port: 6380,
  password: 'your-password',
  tls: {} // ← Use a proper CA-signed certificate
});
```

Symptoms:
Works for a while, then errors:

```
Error: Connection closed
```

Solution:
```ts
// Implement reconnection logic
const redisClient = createClient({...});

redisClient.on('error', (err) => {
  console.error('Redis error:', err);
});
redisClient.on('reconnecting', () => {
  console.log('Redis reconnecting...');
});
redisClient.on('ready', () => {
  console.log('Redis connected!');
});

await redisClient.connect();
```

## Provider Errors

Symptoms:
```
Error: Incorrect API key provided
OpenAI API error: 401 Unauthorized
```

Solution:
```ts
// Verify the API key is set
if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY not set');
}

// Check the key format (starts with 'sk-')
if (!process.env.OPENAI_API_KEY.startsWith('sk-')) {
  throw new Error('Invalid OPENAI_API_KEY format');
}

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY
});
```

Get an API key: https://platform.openai.com/api-keys
Symptoms:

```
Error: Rate limit exceeded
OpenAI API error: 429 Too Many Requests
```

Solution:
```ts
// Retry with exponential backoff
const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  maxRetries: 3 // Retry up to 3 times
});
```

Or:

- Upgrade your OpenAI plan
- Reduce request frequency
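If you need the same behavior around arbitrary calls (say, code paths where the provider's built-in retries don't apply), a generic exponential-backoff wrapper is easy to sketch. The helper below is illustrative, not part of NeuroCache:

```typescript
// Retry an async operation with exponential backoff: wait
// baseDelayMs * 2^attempt between failures, rethrow after maxRetries.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delayMs = baseDelayMs * 2 ** attempt; // 500, 1000, 2000ms...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage (sketch): const response = await withBackoff(() => cache.generate(request));
```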
Symptoms:

```
Error: Request timed out
```

Solution:
```ts
const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  timeout: 60000 // 60 seconds (the default)
});
```

## Memory Issues

Symptoms:
- Memory usage grows over time
- Memory is never released
- Eventually crashes

Diagnosis:
```ts
// Monitor memory usage
setInterval(() => {
  const usage = process.memoryUsage();
  console.log({
    rss: Math.round(usage.rss / 1024 / 1024) + 'MB',
    heapUsed: Math.round(usage.heapUsed / 1024 / 1024) + 'MB'
  });
}, 60000);
```

Solution:
```ts
// 1. Limit MemoryStore size
const cache = new NeuroCache({
  store: new MemoryStore(1000) // LRU eviction
});

// 2. Or use RedisStore (offload to Redis)
const cache = new NeuroCache({
  store: new RedisStore({...})
});

// 3. Or clear the cache periodically
setInterval(() => {
  cache.clearCache();
}, 3600000); // Every hour
```

Symptoms:
```
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
```

Solution:
```bash
# Increase Node.js heap size
node --max-old-space-size=4096 app.js # 4GB
```

Or in package.json:

```json
{
  "scripts": {
    "start": "node --max-old-space-size=4096 app.js"
  }
}
```

Long-term fix:
```ts
// Reduce cache size or use RedisStore
const cache = new NeuroCache({
  store: new RedisStore({...}) // Moves memory to Redis
});
```

## TypeScript Errors

Common errors:
Error:

```ts
const response = await cache.generate(request);
console.log(response.text); // Error: Property 'text' does not exist
```

Solution:
```ts
import type { GenerateResponse } from 'neurocache';

const response: GenerateResponse = await cache.generate(request);
console.log(response.content); // ✅ Correct property
```

Error:
```ts
class MyProvider implements Provider {
  async generate(request: any): Promise<any> { // ❌ Using 'any'
    // ...
  }
}
```

Solution:
```ts
import type { Provider, GenerateRequest, GenerateResponse } from 'neurocache';

class MyProvider implements Provider {
  async generate(request: GenerateRequest): Promise<GenerateResponse> {
    // TypeScript will enforce correct types
    return {
      content: "...",
      usage: {
        promptTokens: 10,
        completionTokens: 20,
        totalTokens: 30
      }
    };
  }
}
```

## Debugging Tips

Enable logging:

```ts
const cache = new NeuroCache({
  // ...
  logging: true // See cache hits/misses in console
});
```

See what keys are being generated:
```ts
// Add this to the NeuroCache source (for debugging only)
console.log('Cache key:', cacheKey);
```

Or:
```ts
// Compute the hash manually
import crypto from 'crypto';

const request = { model: 'gpt-3.5-turbo', messages: [...] };
const hash = crypto.createHash('sha256')
  .update(JSON.stringify(request))
  .digest('hex');
console.log('Expected cache key:', hash);
```

Test the store directly:

```ts
const store = new MemoryStore();
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Test set
await store.set('test', JSON.stringify({ foo: 'bar' }), 60);

// Test get
const value = await store.get('test');
console.log('Retrieved:', value); // Should be '{"foo":"bar"}'

// Test expiration
await sleep(61000);
const expired = await store.get('test');
console.log('After TTL:', expired); // Should be null
```

Watch metrics over time:

```ts
setInterval(() => {
  const metrics = cache.getMetrics();
  console.log({
    requests: metrics.totalRequests,
    hits: metrics.cacheHits,
    misses: metrics.cacheMisses,
    hitRate: cache.getCacheHitRate().toFixed(2),
    providerErrors: metrics.providerErrors,
    storeErrors: metrics.storeErrors
  });
}, 10000); // Every 10 seconds
```

Check for errors:

```ts
const metrics = cache.getMetrics();

if (metrics.providerErrors > 0) {
  console.error('Provider errors detected!', metrics.providerErrors);
  // Check OpenAI API status, API key, rate limits
}

if (metrics.storeErrors > 0) {
  console.error('Store errors detected!', metrics.storeErrors);
  // Check Redis connection, disk space (FileStore)
}
```

Search existing issues: https://github.com/eneswritescode/neurocache/issues
```ts
const cache = new NeuroCache({
  // ...
  logging: true // Detailed logs
});
// Also enable store logging (if supported)
```

Minimal reproduction:

```ts
// app.ts
import { NeuroCache, OpenAIProvider, MemoryStore } from 'neurocache';

const cache = new NeuroCache({
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY!
  }),
  store: new MemoryStore(),
  logging: true
});

async function main() {
  const request = {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'test' }]
  };

  console.log('Request 1...');
  await cache.generate(request);

  console.log('Request 2 (should hit cache)...');
  await cache.generate(request);

  console.log('Metrics:', cache.getMetrics());
}

main();
```

Include:
- NeuroCache version (`npm list neurocache`)
- Node.js version (`node --version`)
- Store type (MemoryStore, FileStore, RedisStore)
- Provider type (OpenAIProvider, custom)
- Minimal reproduction code
- Error messages
- Metrics output
Still stuck? Contact: eneswrites@protonmail.com or file an issue.