Engati - User Guide
Generative AI

Gen AI Updates

This document outlines the new settings that have been enabled in the Gen AI system and their respective functionalities. These updates are designed to enhance the flexibility and depth of the AI's responses and training capabilities.

Enabled Settings

  1. DETAILED_RESPONSES
    1. Functionality: When this setting is enabled, the AI generates detailed responses to queries. Instead of providing concise or bullet-point answers, the AI elaborates on the topic in paragraph format. Additionally, the platform pulls in all content adjacent to the portion being answered. This feature is supported only with the GPT-4o and GPT-4o-mini models.
    2. Use Case: This setting is ideal when you require in-depth explanations, comprehensive overviews, or detailed insights into a topic.
  2. MAX_CONTEXT_LENGTH
    1. Functionality: With this setting enabled, the AI's responses are not constrained by a predefined word count. The system can generate longer, more detailed responses without truncation, allowing for richer and more complete answers. It is generally used in combination with DETAILED_RESPONSES. By default, the word-count limit is 3500, but for detailed responses it is increased to 60000. This feature is supported only with the GPT-4o and GPT-4o-mini models.
    2. Use Case: This setting is useful when dealing with complex queries requiring extensive elaboration, such as technical documentation, detailed reports, or thorough explanations.
  3. MAX_SEARCH_RESULTS
    1. Functionality: Setting this value overrides the default cap of three documents that the AI can reference during training or when generating responses. Users can upload a larger number of documents, giving the AI a broader and more diverse dataset for training, or lower the cap to 1 or 2 so that only the most relevant content is used by the generative model.
    2. Use Case: This setting is beneficial for training the AI on specialized subjects, large-scale datasets, or scenarios requiring the integration of information from multiple sources.
  4. IS_CLOSED_SOURCE_LLM_ENABLED
    1. This setting allows you to use your own API key for the AI model, whether it's OpenAI, Gemini, or another supported service. By enabling this setting, you can integrate your API key to train the model on your specific documents. These trained documents are then used to generate responses when users ask queries, ensuring a tailored and personalized AI experience.
  5. ADAPTIVE_CATEGORY_PRIORITY
    1. Enabling this setting removes the reliance on a predefined priority order for the documents used in generating responses. By default, when this setting is disabled, the AI processes documents in a fixed sequential order: even if the most relevant answer to a query is in the 5th document, the AI still works through the first four, which can reduce response quality. With this setting enabled, the AI compares the query directly against all documents and selects the most similar one to construct the response, producing faster and more accurate answers.
  6. ENABLE_VOICE_SUPPORT
    1. This setting allows users to voice their queries instead of typing them. When enabled, the system processes spoken inputs, making interactions with the AI more convenient and accessible for users who prefer or require voice-based assistance.
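The document selection behind ADAPTIVE_CATEGORY_PRIORITY can be sketched as a query-to-document similarity comparison. The following is a minimal illustration only, not Engati's implementation: the toy bag-of-words embedding, the helper names, and the sample documents are all assumptions.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding; a real system would use a
    # neural embedding model rather than raw word counts.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_document(query, documents):
    # With adaptive priority, every document is scored against the
    # query and the most similar one is used to build the response,
    # instead of walking through documents in a fixed order.
    q = embed(query)
    return max(documents, key=lambda doc: cosine_similarity(q, embed(doc)))

docs = [
    "Billing and invoice settings for your account",
    "How to reset your password and recover access",
    "Voice support lets users speak their queries",
]
print(select_document("how do I reset my password", docs))
# → How to reset your password and recover access
```

Note how the password-reset document wins even though it is not first in the list, mirroring the "5th document" scenario described above.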

Benefits of These Settings

  • Enhanced Depth: Responses are more informative and tailored to the user's requirements.
  • Flexibility: Greater adaptability in training and response generation.
  • Scalability: Accommodates larger datasets, improving the AI's ability to handle complex and multi-faceted queries.
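Taken together, the settings in this guide might be captured in a configuration payload like the sketch below. The key names mirror the settings described above, but the payload shape and the example values (other than the documented defaults and limits) are assumptions, not Engati's actual API.

```python
# Hypothetical settings payload; values reflect the defaults and limits
# described in this guide (word limit 3500 -> 60000, document cap of 3).
gen_ai_settings = {
    "DETAILED_RESPONSES": True,            # paragraph-style answers (GPT-4o / GPT-4o-mini only)
    "MAX_CONTEXT_LENGTH": 60000,           # raised from the default 3500-word limit
    "MAX_SEARCH_RESULTS": 5,               # overrides the default cap of 3 referenced documents
    "IS_CLOSED_SOURCE_LLM_ENABLED": True,  # bring your own OpenAI/Gemini API key
    "ADAPTIVE_CATEGORY_PRIORITY": True,    # score all documents, use the most similar
    "ENABLE_VOICE_SUPPORT": True,          # accept spoken queries
}
```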