We are excited to announce the release of xef version 0.0.3! Here are the notable changes and improvements:
🚀 Features & Improvements
- `AutoCloseable` has now been renamed to `Conversation` for simpler access. Thanks, @victorcrrd!
- Various Java examples have been added including SQL examples by @adam47deg, GPT-4 examples by @Zevleg, and more.
- Efforts to streamline the CI/CD process, addressing failing actions and CI issues by @victorcrrd.
- The addition of `xef-reasoning`, a module to encapsulate common patterns with text, code, and more by @raulraja.
- Setup and integration with Google Cloud Platform (GCP) for AI and chat capabilities by @nomisRev.
- Simplifications and enhancements in conversation DSL, memory management, prompt building, and more by @raulraja and @javipacheco.
- Upgrades to newer versions of dependencies by @dependabot.
- A brand-new `xef-server` web app introduced by @calvellido.
- Migration to JDK version 20 courtesy of @franciscodr.
- And many more...
🐛 Bug Fixes
- Resolved bugs in conversations by @javipacheco.
- Resolved `Ktor` engine issues by @nomisRev.
- Addressed missing `Bearer` in token authentication by @jackcviers.
- Various fixes by @javipacheco to enhance the user experience.
- And several other bug fixes throughout the codebase.
📝 Documentation
- Improved docs for writing the Ktor HTTP layer by @nomisRev.
- Updated READMEs and additional documentation by @calvellido, @ff137, and more.
🙌 New Contributors
We're delighted to welcome our newest contributors to the Xef project:
🚀 Introducing: The Xef Server 🚀
The following features are all under active development and will be released in the coming weeks.
As we continue to push the boundaries of AI accessibility and performance, we are thrilled to provide a sneak peek into the forthcoming functionalities of our Xef Server. Our goal has always been to make AI universally accessible and versatile, and these new capabilities take us one step closer to that vision.
🤖 Unified AI Model Access
- OpenAI Client Compatibility: Say goodbye to the silos of AI models. With the new Xef Server, users will be able to access not just OpenAI models but also models from platforms like GCP Vertex, all via the familiar OpenAI endpoint. This seamless integration aims to create a more streamlined and versatile AI experience for our users.
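Because the server speaks the OpenAI wire format, a client in any language only needs to point its requests at the Xef Server instead of api.openai.com. A minimal Python sketch of what such a request looks like; the server URL, model name, and API key below are placeholders for illustration, not values shipped with this release:

```python
import json
from urllib import request

# Hypothetical Xef Server address; the real host/port depends on your deployment.
XEF_SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str, api_key: str) -> request.Request:
    """Build an OpenAI-style chat-completions request aimed at the Xef Server.

    Because the server exposes the familiar OpenAI endpoint, the same payload
    shape works whether `model` names an OpenAI model or one served from
    another platform such as GCP Vertex.
    """
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        XEF_SERVER_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Note the `Bearer` scheme prefix on the token.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("gpt-4", "Hello, Xef!", api_key="sk-...")
```

Sending `req` with `urllib.request.urlopen` (or any HTTP client) would then hit the Xef Server exactly as it would hit OpenAI, which is what makes the integration drop-in for existing OpenAI client code.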
📊 Performance Monitoring and Tracing
- Prompt Performance Monitoring: Gain insights like never before with real-time monitoring of prompt performance. This will empower users to optimize their AI prompts for the best results.
- AI Application Flow & Trace: Understand the flow of your AI applications and trace issues or bottlenecks with our enhanced tracking functionalities.
🌍 Universal Access via HTTP
- Language-Agnostic AI Capabilities: Whether you're working in Python, Kotlin, Scala, Java, Rust, or any other programming language, Xef Server's HTTP access ensures that you can harness generative AI capabilities without any restrictions.
📖 Seamless Integration with Xef Library
- Familiar DSL: Use the same Domain Specific Language (DSL) as the Xef library to interact with AI functionalities. This consistent experience ensures that developers can switch between the library and the server without any friction.
Development Progress: We are still hard at work developing these functionalities. We are committed to delivering a robust and user-friendly experience, and we won't stop until we get there. A special shoutout to our contributors for their relentless effort and dedication to bringing these features to life.
Stay tuned for more updates on the Xef Server. We are just getting started, and the future looks incredibly exciting! 🌟
Thank you for your valuable contributions!
For a comprehensive list of changes, please refer to the Full Changelog:
A massive shoutout to all our contributors for making this release possible. Your dedication and hard work have greatly enhanced the Xef experience. Here's to more exciting releases ahead! 🎊