November 2025 Product Updates
This month marks a significant infrastructure milestone with the complete ZenCLI rollout, enabling custom model support and bringing your privately deployed models into Zencoder. We’ve also introduced Lines of Code metrics for better usage tracking, dynamic model switching mid-conversation, and major improvements to both the VS Code and JetBrains plugins.
Custom Models in Zencoder Plugins
Bring Your Own Models
Use your privately deployed models or connect to providers not natively supported by Zencoder, giving you full flexibility to leverage custom LLMs while keeping Zencoder’s powerful development features (a quick connectivity check is sketched after the list below).
- Private model deployments let you connect Zencoder to your organization’s internally hosted LLMs
- Unsupported providers can now be integrated, expanding beyond our native model catalog
- Custom model configurations preserve all Zencoder features including context awareness, tool usage, and multi-step reasoning
- Enterprise flexibility enables compliance with strict data governance and security policies
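If your private deployment exposes an OpenAI-compatible chat completions endpoint (a common setup for self-hosted LLMs), a small connectivity check like the one below can confirm the endpoint responds before you point Zencoder at it. This is a minimal sketch, not Zencoder configuration: the base URL, environment variable, and model id are placeholders for illustration only.

```python
# Minimal sketch, assuming your private deployment exposes an
# OpenAI-compatible chat completions endpoint. The base URL, env var,
# and model id below are placeholders, not Zencoder settings.
import os
import requests

BASE_URL = "https://llm.internal.example.com/v1"   # hypothetical internal host
API_KEY = os.environ["INTERNAL_LLM_API_KEY"]       # hypothetical env var

def smoke_test(prompt: str) -> str:
    """Send one chat message to verify the endpoint is reachable."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "my-internal-model",          # placeholder model id
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(smoke_test("Reply with 'ok' if you can read this."))
```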
Lines of Code Metrics
Track Generated and Accepted Code
New LoC metrics in the usage analytics dashboard and API provide visibility into how much code Zencoder generates versus how much your team actually uses.
- Lines of code generated shows the total output from all AI agents across your organization
- Lines of code accepted tracks what developers actually keep and commit, giving you a true measure of AI contribution
- Both metrics are available in the web admin panel under usage analytics for visual tracking
- The Analytics API now exposes LoC data for custom reporting and integration with your existing tools (see the example below)
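As a rough sketch of what custom reporting against the Analytics API could look like, the snippet below fetches generated and accepted line counts for a date range and computes an acceptance rate. The endpoint path, query parameters, and field names are illustrative assumptions, not the documented schema; check the API reference for the actual contract.

```python
# Illustrative sketch only: the endpoint path, parameters, and field names
# are assumptions, not the documented Analytics API schema.
import os
import requests

API_BASE = "https://api.zencoder.example"       # placeholder base URL
TOKEN = os.environ["ZENCODER_API_TOKEN"]        # placeholder env var

def loc_acceptance_rate(start: str, end: str) -> float:
    """Fetch LoC generated/accepted for a date range and return the ratio."""
    resp = requests.get(
        f"{API_BASE}/analytics/loc",            # hypothetical endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"from": start, "to": end},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    generated = data["lines_generated"]         # hypothetical field names
    accepted = data["lines_accepted"]
    return accepted / generated if generated else 0.0

if __name__ == "__main__":
    print(f"Acceptance rate: {loc_acceptance_rate('2025-11-01', '2025-11-30'):.1%}")
```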
Performance Enhancements
Faster, More Reliable Experience
Significant performance improvements across all Zencoder operations deliver faster agent responses, better resource usage, and enhanced stability during long sessions.
- Faster agent responses through optimized request handling that reduces latency across all operations
- Better resource management improving memory usage and system performance during extended sessions
- Enhanced reliability through improved error handling and session management
- Smoother long-running sessions with better stability when working on complex, multi-step tasks
Dynamic Model Switching
Switch Models Mid-Conversation
Change AI models between messages without losing context or starting a new chat session.
- Use the model selector at any point in your conversation
- Context preservation ensures the new model has full visibility into previous messages
- Compare approaches by trying different models on the same problem without context switching
- Optimize costs by starting with a faster model and upgrading only when needed
Model Catalog Updates
We’ve expanded and refined our model offerings this month.
New additions:
- GPT-5.1-Codex - Updated variant with improved code generation and reasoning
- Gemini Pro 3.0 - Google’s latest model with enhanced multi-turn conversation capabilities
Removed:
- Sonnet 4 PT - Replaced by newer Sonnet variants
- GPT-5 - Superseded by GPT-5.1-Codex with better performance
Version History
VS Code
- 3.6 (November 26, 2025)
- 3.4 (November 21, 2025)
- 3.2 (November 17, 2025)
- 3.0 (November 10, 2025)
JetBrains
- 2.80.0 (November 5, 2025)
- 2.78.0 (November 5, 2025)
- 2.76.0 (November 4, 2025)
Questions or feedback? Join our Discord community or visit our Community Support page.