import gradio as gr
from huggingface_hub import InferenceClient
import os

css = """
.message-row {
    justify-content: space-evenly !important;
}
.message-bubble-border {
    border-radius: 6px !important;
}
.dark.message-bubble-border {
    border-color: #21293b !important;
}
.dark.user {
    background: #0a1120 !important;
}
.dark.assistant {
    background: transparent !important;
}
"""

PLACEHOLDER = """
<div class="message-bubble-border" style="display:flex; max-width: 800px; border-radius: 8px; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1); backdrop-filter: blur(10px);">
    <figure style="margin: 0;">
        <img src="https://huggingface.co/spaces/baconnier/PrompTutor/resolve/main/prompt_teacher.jpg" style="width: 100%; height: 100%; border-radius: 8px;">
    </figure>
    <div style="padding: .5rem 1.5rem;">
        <h2 style="text-align: left; font-size: 1.5rem; font-weight: 700; margin-bottom: 0.5rem;"> </h2>
        <p style="text-align: left; font-size: 16px; line-height: 1.5; margin-bottom: 15px;">The Prompt Engineering Tutor guides you through an interactive learning journey to master prompt engineering techniques.</p>
    </div>    
</div>
"""

system_message = """
As an AI Prompt Engineering Tutor, your role is to guide the user through an interactive learning journey to master prompt engineering techniques. You will progressively challenge the user to write prompts, provide feedback, and offer tailored tips for improvement based on their previous responses.

1. Initial Assessment:
   Begin by asking the user to write a simple prompt for a basic task. Evaluate their starting skill level.

2. Progressive Learning Path:
   a) Fundamentals: Introduce basic concepts of clarity and specificity.
   b) Context Utilization: Teach how to incorporate and reference context effectively.
   c) Structure and Flow: Guide on creating well-organized, logical prompts.
   d) Advanced Techniques: Introduce creative and complex prompting strategies.

3. Interactive Prompt Creation:
   After each concept introduction:
   a) Ask the user to write a prompt applying the new concept.
   b) Analyze their response, highlighting strengths and areas for improvement.
   c) Provide a corrected version of their prompt, explaining the enhancements.
   d) Offer 2-3 tips for further improvement, referencing previous lessons.

4. Contextual Building:
   Ensure each new prompt task builds upon previous lessons. For example:
   "Now that you've learned about specificity, let's combine it with the context utilization we practiced earlier..."

5. Reflective Learning:
   After each iteration, ask the user:
   a) What was challenging about this prompt?
   b) How does this new technique compare to what you've learned before?
   c) How might you apply this in a real-world scenario?

6. Adaptive Difficulty:
   Adjust the complexity of tasks based on the user's progress. If they're struggling, simplify; if excelling, challenge them further.

7. Cumulative Application:
   Periodically ask the user to write a prompt that combines multiple techniques learned so far.

8. Progress Tracking:
   Maintain a running commentary on the user's improvement, referencing specific enhancements in their prompts over time.

9. Final Assessment:
   Conclude with a complex prompt-writing task that incorporates all learned techniques. Compare this final prompt to their initial attempt to showcase progress.

10. Learning Summary:
    Provide a comprehensive review of the user's journey, highlighting key improvements and areas for continued practice.

To begin the tutorial, follow these steps:

1. Introduce yourself and explain the importance of effective prompt engineering.
2. Ask the user to write their first simple prompt: "Write a prompt asking ....."
3. Analyze their response, provide feedback, and introduce the first concept (clarity and specificity).
4. Continue the learning journey, progressively introducing new concepts and always building upon previous lessons.
5. Adapt your teaching style and difficulty based on the user's responses and progress.
6. Conclude with a final assessment and comprehensive review of their learning journey.

Remember to maintain an encouraging and supportive tone throughout the interaction, fostering a growth mindset in prompt engineering.
Keep the tone light and humorous, and format your answers clearly, using bullet points where appropriate.

Begin the tutorial by introducing yourself and asking for the first prompt as described above.
"""

# Load the API token at module level: `client` is used by `respond`, so it must
# exist even when this file is imported rather than run directly.
api_token = os.getenv('HF_API_TOKEN2')
if not api_token:
    raise ValueError("HF_API_TOKEN2 not found in environment variables")

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct", token=api_token)

def respond(message, history: list[tuple[str, str]]):
    # Rebuild the full conversation: system prompt first, then the alternating
    # user/assistant turns from the chat history, then the new user message.
    messages = [{"role": "system", "content": system_message}]

    for user_msg, assistant_msg in history:
        if user_msg:
            messages.append({"role": "user", "content": user_msg})
        if assistant_msg:
            messages.append({"role": "assistant", "content": assistant_msg})

    messages.append({"role": "user", "content": message})

    response = ""

    # Stream tokens from the model, yielding the accumulated response so the
    # Gradio UI updates incrementally. Use a distinct loop variable so the
    # incoming `message` argument is not shadowed.
    for chunk in client.chat_completion(
        messages,
        max_tokens=2000,
        stream=True,
        temperature=0.7,
        top_p=0.95,
    ):
        token = chunk.choices[0].delta.content
        if token:  # the final streamed chunk may carry no content (None)
            response += token
            yield response

demo = gr.ChatInterface(
    respond,
    theme=gr.themes.Soft(primary_hue="indigo", secondary_hue="blue", neutral_hue="gray", font=[gr.themes.GoogleFont("Exo"), "ui-sans-serif", "system-ui", "sans-serif"]).set(
        body_background_fill_dark="#0f172a",
        block_background_fill_dark="#0f172a",
        block_border_width="1px",
        block_title_background_fill_dark="#070d1b",
        button_secondary_background_fill_dark="#070d1b",
        border_color_primary_dark="#21293b",
        background_fill_secondary_dark="#0f172a",
        color_accent_soft_dark="transparent"
    ),
    css=css,
    description="AI Prompt Engineering Tutor: Master the art of crafting effective prompts",
    chatbot=gr.Chatbot(scale=1, placeholder=PLACEHOLDER)
)

if __name__ == "__main__":
    demo.launch(share=True)