# AI Implementation Guide
Comprehensive guide to AI/ML infrastructure implementation for HotCRM, covering ML providers, model registry, prediction services, caching, monitoring, and explainability.
## Overview
This implementation provides a comprehensive AI/ML infrastructure for HotCRM with support for real ML model integration, performance monitoring, caching, and explainability features.
## Architecture
### 1. ML Provider System
The system supports multiple ML providers through a pluggable architecture:
- AWS SageMaker - For production-grade ML models hosted on AWS
- Azure ML - For Azure-hosted machine learning models
- OpenAI - For NLP tasks like sentiment analysis
- Custom - Extensible for any custom ML infrastructure
#### Provider Factory Pattern
```typescript
import { ProviderFactory, AWSSageMakerProvider } from '@hotcrm/ai';

// Configure a provider
const config = {
  provider: 'aws-sagemaker',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region: 'us-east-1'
  }
};

// Get provider instance (cached)
const provider = ProviderFactory.getProvider(config);
```

### 2. Model Registry
Enhanced model registry with provider configuration and A/B testing support:
```typescript
import { ModelRegistry } from '@hotcrm/ai';

// Register a model with provider
ModelRegistry.register({
  id: 'lead-scoring-v2',
  name: 'Lead Scoring Model V2',
  version: '2.0.0',
  type: 'classification',
  provider: 'aws-sagemaker',
  description: 'Real ML model for lead scoring',
  status: 'active',

  // Provider configuration
  providerConfig: {
    provider: 'aws-sagemaker',
    endpoint: 'https://runtime.sagemaker.us-east-1.amazonaws.com',
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      region: 'us-east-1'
    }
  },

  // A/B Testing configuration
  abTest: {
    enabled: true,
    trafficPercentage: 20, // 20% traffic to new model
    championModelId: 'lead-scoring-v1' // 80% to existing model
  },

  metrics: {
    accuracy: 92.5,
    precision: 90.2,
    recall: 94.1,
    f1Score: 92.1
  }
});
```

### 3. Prediction Service
Unified prediction service with caching and monitoring:
```typescript
import { PredictionService } from '@hotcrm/ai';

// Make a prediction
const result = await PredictionService.predict({
  modelId: 'lead-scoring-v2',
  features: {
    company_size: 500,
    industry: 'Technology',
    engagement_score: 85,
    job_title: 'CTO',
    budget: 100000
  },
  useCache: true // Enable caching (default)
});

console.log(result);
// {
//   prediction: { score: 92, class: 'hot' },
//   confidence: 88,
//   modelId: 'lead-scoring-v2',
//   modelVersion: '2.0.0',
//   processingTime: 45, // ms
//   cached: false
// }
```

#### Batch Predictions
```typescript
// Batch prediction for efficiency
const features = [
  { company_size: 500, industry: 'Technology', /* ... */ },
  { company_size: 100, industry: 'Finance', /* ... */ },
  { company_size: 1000, industry: 'Healthcare', /* ... */ }
];

const results = await PredictionService.batchPredict(
  'lead-scoring-v2',
  features
);
```

### 4. Caching Layer
Redis-based caching with in-memory fallback:
```typescript
import { CacheManager } from '@hotcrm/ai';

// Initialize cache (done automatically)
const cache = CacheManager.getInstance({
  redisUrl: process.env.REDIS_URL, // Optional
  defaultTtl: 300, // 5 minutes
  enabled: true,
  useMemoryFallback: true
});

// Get cache stats
const stats = PredictionService.getCacheStats();
console.log(stats);
// { size: 1234, hits: 5678, backend: 'memory' }
```

#### Cache Configuration
- Default TTL: 5 minutes (300 seconds)
- Backend: Redis (if available) or in-memory
- Automatic cleanup: Expired entries cleaned periodically
- Cache key: Based on model ID + input features
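The key derivation described above can be sketched as follows. This is a minimal illustration, not the actual `CacheManager` internals: `cacheKey` is a hypothetical name, and sorting the feature keys before serializing is one way to make the key independent of property order.

```typescript
// Sketch (illustrative): derive a deterministic cache key from
// model ID + input features. Feature keys are sorted so that
// { a: 1, b: 2 } and { b: 2, a: 1 } map to the same key.
function cacheKey(modelId: string, features: Record<string, unknown>): string {
  const sorted = Object.keys(features)
    .sort()
    .map(k => `${k}=${JSON.stringify(features[k])}`)
    .join('&');
  return `prediction:${modelId}:${sorted}`;
}
```

A production implementation might additionally hash the serialized features to bound key length, but the important property is the same: identical inputs to the same model must produce identical keys.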
### 5. Performance Monitoring
Real-time performance tracking for all models:
```typescript
import { PerformanceMonitor, PredictionService } from '@hotcrm/ai';

// Get performance stats for a model
const stats = PredictionService.getPerformanceStats('lead-scoring-v2');
console.log(stats);
// {
//   modelId: 'lead-scoring-v2',
//   totalPredictions: 10000,
//   successfulPredictions: 9950,
//   failedPredictions: 50,
//   averageLatency: 85, // ms
//   medianLatency: 45,
//   p95Latency: 120,
//   p99Latency: 250,
//   averageConfidence: 87,
//   cacheHitRate: 65, // %
//   errorRate: 0.5, // %
//   lastUpdated: 1706884800000
// }

// Get health status
const health = PredictionService.getModelHealth('lead-scoring-v2');
console.log(health);
// { status: 'healthy' }
// or { status: 'degraded', reason: 'High latency: P95 = 550ms' }
```

#### Performance Metrics
- Latency: Average, Median, P95, P99
- Success Rate: Successful vs failed predictions
- Cache Hit Rate: Percentage of cached responses
- Confidence: Average confidence scores
- Health Status: Automatic health assessment
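The percentile and health-assessment logic can be sketched as below. The function names and the thresholds (5% error rate, 500ms P95) are illustrative assumptions, not the actual `PerformanceMonitor` internals; they are chosen to match the degraded-status example shown above.

```typescript
// Sketch (illustrative): nearest-rank percentile over latency samples.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Sketch (illustrative): health assessment with assumed thresholds.
function assessHealth(latenciesMs: number[], errorRatePct: number) {
  const p95 = percentile(latenciesMs, 95);
  if (errorRatePct > 5) {
    return { status: 'unhealthy', reason: `High error rate: ${errorRatePct}%` };
  }
  if (p95 > 500) {
    return { status: 'degraded', reason: `High latency: P95 = ${p95}ms` };
  }
  return { status: 'healthy' };
}
```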
### 6. Explainability Service
SHAP-like feature attributions for model transparency:
```typescript
import { ExplainabilityService } from '@hotcrm/ai';

// Explain a prediction
const explanation = await ExplainabilityService.explainPrediction(
  'lead-scoring-v2',
  features,
  result.prediction,
  result.confidence
);

console.log(explanation);
// {
//   prediction: { score: 92, class: 'hot' },
//   confidence: 88,
//   baseValue: 50,
//   featureContributions: [
//     { feature: 'engagement_score', value: 85, contribution: 0.35, importance: 35 },
//     { feature: 'company_size', value: 500, contribution: 0.25, importance: 25 },
//     { feature: 'industry', value: 'Technology', contribution: 0.15, importance: 15 },
//     ...
//   ],
//   topFeatures: [ ... top 5 ... ],
//   explanation: "The prediction was influenced by:
//
//     Positive factors:
//     • Engagement Score: 85 (impact: +35%)
//     • Company Size: 500 (impact: +25%)
//     ..."
// }

// Compare two predictions
const comparison = await ExplainabilityService.comparePredictions(
  'lead-scoring-v2',
  features1,
  features2
);
console.log(comparison.explanation);
// "Key differences:
//  • Engagement Score changed from 45 to 85, which increased the prediction
//  • Budget changed from 25000 to 100000, which increased the prediction
//  ..."
```

## Success Metrics Achievement
### 90%+ Prediction Accuracy for Lead Scoring
- Model registry tracks accuracy metrics
- Real ML models can achieve 90%+ accuracy
- Mock implementation shows 87.5% as baseline
- A/B testing framework allows gradual rollout of improved models
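The gradual rollout driven by the `abTest` configuration can be sketched as a deterministic traffic split. This is an illustrative sketch, not the registry's actual routing code: `routeModel` is a hypothetical helper, and hashing a stable request key (e.g. the record ID) is one way to pin each record to a single variant across repeated predictions.

```typescript
// Sketch (illustrative): deterministic challenger/champion routing.
// A stable request key hashes into a bucket in [0, 100); buckets below
// trafficPercentage go to the challenger, the rest to the champion.
function routeModel(
  challengerId: string,
  championId: string,
  trafficPercentage: number,
  requestKey: string
): string {
  let h = 0;
  for (const ch of requestKey) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  const bucket = h % 100;
  return bucket < trafficPercentage ? challengerId : championId;
}
```

Because the split is keyed rather than random per call, a given lead always sees the same model version, which keeps scores stable while the experiment runs.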
### <100ms Response Time for Cached Predictions

- Cache hit typically returns in <10ms
- In-memory cache overhead is minimal
- Redis cache (when available) adds ~5-15ms latency
### <500ms for Fresh Predictions

- Mock predictions: <5ms
- AWS SageMaker: typically 50-200ms (production)
- Azure ML: typically 40-180ms (production)
- OpenAI: typically 200-400ms (LLM-based)
- P95 latency monitoring ensures performance
### 80%+ User Adoption of AI Features
- Explainability features make AI predictions transparent
- Real-time performance monitoring builds trust
- Gradual A/B testing reduces risk
- Multiple provider options ensure availability
## AI-Powered Features

### 1. Lead Scoring
Real ML model for lead quality prediction:
```typescript
const leadScore = await PredictionService.predict({
  modelId: 'lead-scoring-v2',
  features: {
    company_size: lead.company_size,
    industry: lead.industry,
    engagement_score: lead.engagement_score,
    job_title: lead.job_title,
    budget: lead.budget
  }
});

// Update lead with AI score
await db.doc.update('Lead', leadId, {
  ai_score: leadScore.prediction.score,
  ai_confidence: leadScore.confidence
});
```

### 2. Sentiment Analysis
NLP-based sentiment analysis for customer communications:
```typescript
const sentiment = await PredictionService.predict({
  modelId: 'sentiment-analysis-v1',
  features: {
    text: email.body
  }
});
// { sentiment: 'positive', score: 0.87 }
```

### 3. Churn Prediction
Predict customer churn risk:
```typescript
const churnRisk = await PredictionService.predict({
  modelId: 'churn-prediction-v1',
  features: {
    account_age: account.age_months,
    support_tickets: account.support_ticket_count,
    usage_frequency: account.monthly_logins,
    nps_score: account.latest_nps,
    payment_delays: account.late_payments
  }
});
// { risk: 'high', probability: 0.75 }
```

### 4. Revenue Forecasting
Time-series forecasting for revenue prediction:
```typescript
const forecast = await PredictionService.predict({
  modelId: 'revenue-forecast-v1',
  features: {
    historical_revenue: last12Months,
    pipeline_value: currentPipeline,
    seasonality: currentQuarter,
    growth_rate: last_year_growth
  }
});
// { forecast: [ { period: 1, value: 125000 }, ... ] }
```

### 5. Smart Recommendations
Product and cross-sell recommendations:
```typescript
const recommendations = await PredictionService.predict({
  modelId: 'product-recommendation-v1',
  features: {
    current_products: account.products,
    industry: account.industry,
    company_size: account.size,
    usage_patterns: account.usage_data
  }
});
// { items: [ { id: 'prod-123', score: 0.92 }, ... ] }
```

## Integration Examples
### Example 1: Lead Scoring Action
```typescript
import { PredictionService, ExplainabilityService } from '@hotcrm/ai';

export async function scoreLead(leadId: string) {
  // Get lead data
  const lead = await db.doc.get('Lead', leadId);

  // Prepare features once so the prediction and the explanation use the same input
  const features = {
    company_size: lead.company_size,
    industry: lead.industry,
    engagement_score: lead.engagement_score,
    job_title: lead.job_title,
    budget: lead.estimated_budget
  };

  // Make prediction
  const result = await PredictionService.predict({
    modelId: 'lead-scoring-v2',
    features,
    context: {
      objectType: 'Lead',
      objectId: leadId
    }
  });

  // Get explanation
  const explanation = await ExplainabilityService.explainPrediction(
    'lead-scoring-v2',
    features,
    result.prediction,
    result.confidence
  );

  // Update lead
  await db.doc.update('Lead', leadId, {
    ai_score: result.prediction.score,
    ai_confidence: result.confidence,
    ai_explanation: explanation.explanation,
    last_scored_date: new Date()
  });

  return {
    score: result.prediction.score,
    confidence: result.confidence,
    explanation: explanation.explanation,
    processingTime: result.processingTime,
    cached: result.cached
  };
}
```

### Example 2: Batch Lead Scoring
```typescript
export async function batchScoreLeads(leadIds: string[]) {
  // Get all leads
  const leads = await db.find('Lead', {
    filters: [['Id', 'in', leadIds]]
  });

  // Prepare features
  const features = leads.map(lead => ({
    company_size: lead.company_size,
    industry: lead.industry,
    engagement_score: lead.engagement_score,
    job_title: lead.job_title,
    budget: lead.estimated_budget
  }));

  // Batch predict
  const results = await PredictionService.batchPredict(
    'lead-scoring-v2',
    features
  );

  // Update leads
  for (let i = 0; i < leads.length; i++) {
    await db.doc.update('Lead', leads[i].Id, {
      ai_score: results[i].prediction.score,
      ai_confidence: results[i].confidence
    });
  }

  return results;
}
```

## Deployment Guide
### Environment Variables

```bash
# AWS SageMaker (optional)
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1

# Azure ML (optional)
AZURE_ML_ENDPOINT=https://your-endpoint.azureml.net
AZURE_ML_API_KEY=your_api_key

# OpenAI (optional)
OPENAI_API_KEY=your_openai_key

# Redis Cache (optional)
REDIS_URL=redis://localhost:6379
```

### Model Deployment
1. Train your ML model
2. Deploy to chosen provider (AWS/Azure/Custom)
3. Register the model in `ModelRegistry` with provider config
4. Configure A/B testing if needed
5. Monitor performance metrics
6. Adjust based on real-world performance
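Reading the environment variables above into a provider config can be sketched as follows. `sageMakerConfigFromEnv` is a hypothetical helper, and the fail-fast validation and region fallback are illustrative assumptions; the config shape mirrors the `ProviderFactory` example earlier in this guide.

```typescript
// Sketch (illustrative): build an AWS SageMaker provider config from
// environment variables, failing fast if required credentials are missing.
function sageMakerConfigFromEnv(env: Record<string, string | undefined>) {
  if (!env.AWS_ACCESS_KEY_ID || !env.AWS_SECRET_ACCESS_KEY) {
    throw new Error('AWS credentials are not configured');
  }
  return {
    provider: 'aws-sagemaker',
    credentials: {
      accessKeyId: env.AWS_ACCESS_KEY_ID,
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
      // Fall back to a default region when AWS_REGION is unset (assumed default)
      region: env.AWS_REGION ?? 'us-east-1'
    }
  };
}
```

Validating credentials at startup, rather than on the first prediction, surfaces deployment mistakes before they show up as failed predictions in the performance stats.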
## Monitoring & Maintenance

### Health Checks
```typescript
// Check all models
const allStats = PredictionService.getPerformanceStats();

for (const [modelId, stats] of Object.entries(allStats)) {
  const health = PredictionService.getModelHealth(modelId);
  if (health.status !== 'healthy') {
    console.warn(`Model ${modelId} is ${health.status}: ${health.reason}`);
  }
}
```

### Performance Tuning
- Cache TTL: Adjust based on data freshness requirements
- Batch Size: Optimize batch prediction size for your workload
- Provider Selection: Choose provider based on latency/cost requirements
- A/B Testing: Gradually roll out model improvements
### Troubleshooting
- High Latency: Check provider health, consider caching
- High Error Rate: Verify credentials, check provider status
- Low Cache Hit Rate: Increase TTL, check feature stability
- Low Accuracy: Retrain model, adjust features
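When a high error rate traces back to transient provider failures (throttling, brief outages), one common mitigation is wrapping prediction calls in a retry with exponential backoff. This sketch is not part of the `@hotcrm/ai` API; `withRetry` and its retry policy are illustrative assumptions.

```typescript
// Sketch (illustrative): retry an async call with exponential backoff.
// Delays grow as baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise<void>(r => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Retries only help with transient failures; a persistent credential or endpoint misconfiguration will exhaust the attempts and should still surface as an error.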
## Next Steps
- Install production ML providers (AWS SDK, Azure SDK)
- Train real ML models
- Deploy models to chosen infrastructure
- Configure provider credentials
- Set up Redis for production caching
- Implement prediction logging to database
- Build AI insights dashboard
- Set up alerts for model health issues
## API Reference
See the AI Package Documentation for complete API reference.