Case studies
Artificial Intelligence
Pushing the boundaries of AI in software development

Challenge: Improve the reliability and performance of existing AI-powered coding assistants so that they generate accurate, high-quality code in diverse situations, identify and correct mistakes quickly, adapt to different programming styles, and follow instructions well
Solutions:
- AI code assistant plug-ins for IDEs, aligned with developers’ evolving needs
- Automated benchmarking systems to evaluate performance and accuracy
- Advanced prompt engineering techniques to ensure that LLMs produce precise, contextually relevant code
- Test cases to ensure the functional and non-functional reliability of IDE plug-ins
- Comprehensive triage systems to manage incoming issues
Benefits:
- Improved code quality
- Enhanced responsiveness of AI assistants
- Increased developer productivity
One of CIeNET’s major achievements is the Generative AI Benchmark System (GAINS). GAINS compares various LLM-based coding assistants – including OpenAI’s ChatGPT, Google Gemini, and Anthropic’s Claude 3 – to evaluate how they perform in different scenarios. This enables CIeNET to identify areas for improvement and fine-tune the systems. An automated testing process reduces errors and speeds up tool development, while fine-tuning the underlying large language models (LLMs) improves the accuracy and quality of code generation. In this way, GAINS enhances the reliability and performance of AI-powered coding assistants already integrated into commonly used software, across real-world coding scenarios.

To ensure seamless integration within integrated development environments (IDEs), the systems undergo comprehensive testing, triage, and fine-tuning to resolve issues across user interfaces, backend services, and LLM interactions. By fixing bugs and adding new capabilities, the existing tools become more useful for developers, and improving how prompts are provided to the AI further enhances their effectiveness.
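To make the idea concrete, the sketch below shows how a GAINS-style harness might score competing assistants; it is an illustration under assumptions, not CIeNET’s actual implementation. Each assistant generates code for a task, the output is executed together with the task’s unit tests in a subprocess, and a pass rate is tallied per model. The `Task` structure, the `query_model` placeholder, and the model identifiers are all hypothetical.

```python
import subprocess
import sys
import tempfile
import textwrap
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str     # natural-language request given to the assistant
    test_code: str  # unit tests the generated code must pass


def query_model(model: str, prompt: str) -> str:
    """Placeholder: call the vendor API for `model` and return the generated code."""
    raise NotImplementedError("wire up the relevant vendor SDK here")


def run_tests(generated_code: str, test_code: str) -> bool:
    """Run the generated code plus its tests in a subprocess; exit code 0 means pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n\n" + test_code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=30)
    return result.returncode == 0


def benchmark(models: list[str], tasks: list[Task]) -> dict[str, float]:
    """Return, for each model, the fraction of tasks whose tests pass."""
    return {
        model: sum(run_tests(query_model(model, t.prompt), t.test_code)
                   for t in tasks) / len(tasks)
        for model in models
    }


tasks = [Task(
    prompt="Write a function is_palindrome(s) that ignores case.",
    test_code=textwrap.dedent("""\
        assert is_palindrome("Level")
        assert not is_palindrome("Python")
    """),
)]
# scores = benchmark(["assistant-a", "assistant-b"], tasks)  # hypothetical model IDs
```

Scoring by executed test cases, rather than by textual similarity to a reference solution, is what lets a harness of this kind evaluate functional reliability across real-world scenarios.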
CIeNET employed a diverse technology stack, including the Python, Java, Go, C++, and JavaScript languages, alongside plug-ins like Gemini, Copilot, and CodeWhisperer, integrated into Visual Studio Code and JetBrains IDEs.
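The prompt optimization mentioned above often amounts to enriching the developer’s request with project context before it reaches the LLM. The sketch below shows one common pattern; the template layout and the `build_code_prompt` helper are illustrative assumptions, not CIeNET’s actual prompt format.

```python
def build_code_prompt(task: str, language: str, context_snippet: str = "",
                      style_rules: list[str] | None = None) -> str:
    """Assemble a context-rich prompt for code generation.

    The section names and their ordering are illustrative; real systems
    tune them against benchmark results rather than fixing them up front.
    """
    rules = "\n".join(f"- {r}" for r in (style_rules or
                                         ["Follow the project's existing style."]))
    parts = [
        f"You are a coding assistant. Respond with {language} code only, no prose.",
        "",
        "Task:",
        task,
        "",
        "Surrounding code for context:",
        context_snippet or "(none provided)",
        "",
        "Constraints:",
        rules,
    ]
    return "\n".join(parts)


# Example: a request enriched with explicit style constraints.
print(build_code_prompt(
    task="Add a retry wrapper around fetch_user() with exponential backoff.",
    language="Python",
    style_rules=["Use type hints", "Log each retry at WARNING level"],
))
```

In practice, templates like this are refined iteratively: each variant is scored against a benchmark such as GAINS, and the wording that yields the highest test pass rate is kept.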
These improvements make a real difference for developers and their clients. The new tools improve the AI’s ability to understand real-world challenges and respond with suggestions that are more accurate and helpful. This in turn saves developers time, freeing them to focus on more creative and important tasks instead of resolving repetitive problems.
CIeNET’s expertise in integrating generative AI within modern development workflows exemplifies how tailored solutions can enhance productivity and streamline software engineering processes. By addressing challenges such as LLM reliability, prompt optimization, and test automation, CIeNET’s work in generative AI paves the way for smarter technology that can help solve complex problems. The focus throughout is on making these tools reliable and user-friendly, and the solutions have potential applications not only in coding, but in other areas as well.
