File size: 3,421 Bytes
3f62146
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
generatorText = """ This microservice integrates with the Retriever to answer the user query in ChaBo RAG workflows.  \
                    
                    # ChaBo Generator on Hugging Face Spaces

                    [ChaBo_Generator](https://huggingface.co/spaces/GIZ/eudr_chabo_generator/blob/main/README.md) Space hosts \
                    a Generator microservice for answering user query. This is just a Infrastructural component and doesnt\
                    not serve any user application through its User Interfaceas the its consumed in ChaBo workflow thorugh Orchestrator.
                        
                    ChaBo Generator - MCP Server

                    A language model-based generation service designed for ChatFed RAG\
                    (Retrieval-Augmented Generation) pipelines. This module serves as an \
                    **MCP (Model Context Protocol) server** that generates contextual responses \
                    using configurable LLM providers with support for retrieval result processing.

                    **API Endpoint**: 1 API which provides context-aware text generation using \
                        configurable LLM providers when properly configured with API credentials.

                    ### api_name: /generate

                    Parameters:
                    - `query` (str, required): The question or query to be answered
                    - `context` (str|list, required): Context for answering - can be plain text or list of retrieval result dictionaries

                    Returns: String containing the generated answer based on the provided context and query.

                    **Hot to connect**:
                    ```python
                    from gradio_client import Client

                    client = Client("ENTER CONTAINER URL / SPACE ID")
                    result = client.predict(
                            query="What are the key findings?",
                            context="Your relevant documents or context here...",
                            api_name="/generate"
                    )
                    print(result)
                    ```

                    #### Configuration

                    LLM Provider Configuration:
                    1. Set your preferred inference provider in `params.cfg`
                    2. Configure the model and generation parameters  
                    3. Set the required API key environment variable
                    4. [Optional] Adjust temperature and max_tokens settings
                    5. Run the app:

                    ```bash
                    docker build -t chatfed-generator .
                    docker run -p 7860:7860 chatfed-generator
                    ```

                    #### Environment Variables Required

                    Make sure to set the appropriate environment variables:
                    - OpenAI: `OPENAI_API_KEY`
                    - Anthropic: `ANTHROPIC_API_KEY`
                    - Cohere: `COHERE_API_KEY`
                    - HuggingFace: `HF_TOKEN` 

                    For more info on Retriever and code base visit the following links:
                    - ChaBo_Generator : [**ReadMe**](https://huggingface.co/spaces/GIZ/eudr_chabo_generator/blob/main/README.md)
                    - ChaBo_Generator: [**Codebase**](https://huggingface.co/spaces/GIZ/eudr_chabo_generator/tree/main)"""