24.11 Release Notes

Enhanced Copilot Management and Monitoring

We are thrilled to introduce our latest update, bringing significant improvements to how analysts build, deploy, and enhance copilots. This release focuses on three key areas: enhanced monitoring and diagnostics, improved migration and copying of copilots and skills, and greater control over language models and their settings.

Key Improvements

1. Monitoring and Diagnosing Copilots

  • LangSmith Integration: You can now use LangSmith tracing to track every LLM call within your environment, providing deeper insights into copilot operations.
  • Question Browser Improvements: Adjustments to the order and size of columns, as well as filter choices, are now preserved across sessions in the same browser. We have also resolved several bugs related to column filters.
  • Enhanced Diagnostic View: The diagnostic view now reflects the entire chat pipeline, with each evaluation displayed alongside the prompt that was executed. In addition, users with access to the diagnostic panel can rate the quality of an answer and attach notes.
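LangSmith tracing is enabled through your environment's settings; as a point of reference, the sketch below shows the standard LangSmith environment variables that typically turn on tracing for LLM calls. The project name and the API key placeholder are illustrative.

```python
import os

# Standard LangSmith tracing configuration (illustrative values).
os.environ["LANGCHAIN_TRACING_V2"] = "true"             # turn on tracing for LLM calls
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "copilot-monitoring"  # hypothetical project name to group traces
```

Once tracing is active, each LLM call is recorded as a run in the named LangSmith project, which is what makes the deeper per-call insight possible.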

2. Improved Migration and Copying of Copilots and Skills

  • Environment Migration Warning: When migrating a copilot to a different environment, you will be alerted if the copilot depends on dataset or skill resources that are not found in the new environment. In addition, if the dataset or skill resources in the new environment differ from those in the original environment, you will be warned to update them.
  • Skill Duplication: Duplicating a skill within an environment now also duplicates its resources. Similarly, duplicating an entire copilot will duplicate the resources for all contained skills.
  • Copilot Backup on Import: Importing a copilot over an existing one now automatically backs up the pre-existing copilot as a version.

3. More Control Over Language Models and Their Settings

  • Token Configuration: You can now specify the maximum number of input and output tokens when configuring a language model.
  • Cost Tracking: Input and output token costs can now be captured for better cost management.
  • Copilot-level Overrides: Language models can now be overridden at the copilot level within the settings.
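With input and output token counts and their costs now captured separately, per-call spend can be estimated directly. The sketch below is an illustrative calculation, not the product's API; the function name and the per-1K-token rates are hypothetical.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_cost_per_1k: float, output_cost_per_1k: float) -> float:
    """Return the dollar cost of one LLM call, given per-1K-token prices.

    Input and output tokens are priced separately, mirroring the new
    cost-tracking fields in the language model configuration.
    """
    return (input_tokens / 1000) * input_cost_per_1k \
         + (output_tokens / 1000) * output_cost_per_1k

# Example: 2,000 input tokens and 500 output tokens at hypothetical rates.
cost = estimate_cost(2000, 500, input_cost_per_1k=0.005, output_cost_per_1k=0.015)
print(f"${cost:.4f}")  # → $0.0175
```

Tracking the two directions separately matters because output tokens are typically billed at a higher rate than input tokens.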

Additional Enhancements

  • Chat Pipeline Status: While a chat question is processing, the UI will now indicate the stage in the process. Skills can also publish messages to the chat UI to show the analysis progress.
  • System Prompt Variable: A new variable can be added to the system prompt. The variable, {{copilot_dataset_end_date}}, will insert the ending date of the dataset associated with the copilot.
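To illustrate how a template variable like {{copilot_dataset_end_date}} is expanded, here is a minimal sketch of placeholder substitution. The function name and the substitution logic are hypothetical and not the product's implementation; the product resolves the variable for you at runtime.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value.

    Unknown placeholders are left intact so a typo is visible
    rather than silently dropped.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

prompt = "Answer questions about data available through {{copilot_dataset_end_date}}."
print(render_prompt(prompt, {"copilot_dataset_end_date": "2024-10-31"}))
# → Answer questions about data available through 2024-10-31.
```

Anchoring the model to the dataset's end date this way helps prevent answers that assume data exists beyond what the copilot can actually query.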

Retired Feature: We have retired the ability to download a full thread to PDF or PPT to streamline our service offerings.

We hope these updates significantly enhance your copilot management and deployment experience.

Tested Models

Chat: gpt-4o-2024-05-13

Narrative: gpt-4o-2024-05-13

Embedding: text-embedding-3-small
