Gabriele Tuccio committed on
Commit
149cddd
·
1 Parent(s): 2fcc3c4
.gradio/certificate.pem ADDED
@@ -0,0 +1,31 @@
1
+ -----BEGIN CERTIFICATE-----
2
+ MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
3
+ TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
4
+ cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
5
+ WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
6
+ ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
7
+ MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
8
+ h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
9
+ 0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
10
+ A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
11
+ T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
12
+ B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
13
+ B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
14
+ KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
15
+ OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
16
+ jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
17
+ qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
18
+ rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
19
+ HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
20
+ hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
21
+ ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
22
+ 3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
23
+ NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
24
+ ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
25
+ TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
26
+ jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
27
+ oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
28
+ 4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
29
+ mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
30
+ emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
31
+ -----END CERTIFICATE-----
AulSign.log ADDED
@@ -0,0 +1,21 @@
1
+ 2024-12-08 19:23:54,119 - DEBUG - load_ssl_context verify=True cert=None trust_env=True http2=False
2
+ 2024-12-08 19:23:54,119 - DEBUG - load_verify_locations cafile='/opt/homebrew/Caskroom/miniconda/base/lib/python3.10/site-packages/certifi/cacert.pem'
3
+ 2024-12-08 19:23:54,151 - INFO - Use pytorch device_name: mps
4
+ 2024-12-08 19:23:54,151 - INFO - Load pretrained SentenceTransformer: mixedbread-ai/mxbai-embed-large-v1
5
+ 2024-12-08 19:23:54,196 - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
6
+ 2024-12-08 19:23:54,752 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/modules.json HTTP/1.1" 200 0
7
+ 2024-12-08 19:23:54,918 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/config_sentence_transformers.json HTTP/1.1" 200 0
8
+ 2024-12-08 19:23:55,369 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/README.md HTTP/1.1" 200 0
9
+ 2024-12-08 19:23:55,537 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/modules.json HTTP/1.1" 200 0
10
+ 2024-12-08 19:23:55,979 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/sentence_bert_config.json HTTP/1.1" 200 0
11
+ 2024-12-08 19:23:56,154 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/adapter_config.json HTTP/1.1" 404 0
12
+ 2024-12-08 19:23:56,403 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/config.json HTTP/1.1" 200 0
13
+ 2024-12-08 19:23:57,239 - DEBUG - https://huggingface.co:443 "HEAD /mixedbread-ai/mxbai-embed-large-v1/resolve/main/tokenizer_config.json HTTP/1.1" 200 0
14
+ 2024-12-08 19:23:57,440 - DEBUG - https://huggingface.co:443 "GET /api/models/mixedbread-ai/mxbai-embed-large-v1/revision/main HTTP/1.1" 200 148585
15
+ 2024-12-08 19:23:57,932 - DEBUG - https://huggingface.co:443 "GET /api/models/mixedbread-ai/mxbai-embed-large-v1 HTTP/1.1" 200 148585
16
+ 2024-12-08 19:23:59,135 - INFO - 2 prompts are loaded, with the keys: ['query', 'passage']
17
+ 2024-12-08 19:24:04,411 - DEBUG - Using selector: KqueueSelector
18
+ 2024-12-08 19:24:04,418 - DEBUG - load_ssl_context verify=True cert=None trust_env=True http2=False
19
+ 2024-12-08 19:24:04,441 - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
20
+ 2024-12-08 19:24:04,441 - DEBUG - load_verify_locations cafile='/opt/homebrew/Caskroom/miniconda/base/lib/python3.10/site-packages/certifi/cacert.pem'
21
+ 2024-12-08 19:24:04,455 - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
app.py ADDED
@@ -0,0 +1,107 @@
1
+ import gradio as gr
2
+ import json
3
+ import pandas as pd
4
+ from sentence_transformers import SentenceTransformer
5
+ # Import the functions from your script
6
+ from scripts.aulsign import AulSign
7
+ from scripts.scripts.sign2text_mapping import sign2text
8
+
9
+ # Initial model and data setup
10
+ def load_resources():
11
+ # Embedding model
12
+ model_name = "mixedbread-ai/mxbai-embed-large-v1"
13
+ model = SentenceTransformer(model_name)
14
+
15
+ # File paths
16
+ corpus_embeddings_path = 'tools/corpus_embeddings.json'
17
+ sentences_train_embeddings_path = 'tools/sentences_train_embeddings_filtered_01.json'
18
+ rules_prompt_path_text2sign = 'tools/rules_prompt_text2sign.txt'
19
+ rules_prompt_path_sign2text = 'tools/rules_prompt_sign2text.txt'
20
+
21
+ # Load the data
22
+ with open(corpus_embeddings_path, 'r') as file:
23
+ corpus_embeddings = pd.DataFrame(json.load(file))
24
+
25
+ with open(sentences_train_embeddings_path, 'r') as file:
26
+ sentences_train_embeddings = pd.DataFrame(json.load(file))
27
+
28
+ return model, corpus_embeddings_path, corpus_embeddings, sentences_train_embeddings, rules_prompt_path_text2sign, rules_prompt_path_sign2text
29
+
30
+ # Function for the text2sign mode
31
+ def text_to_sign(sentence_to_analyse):
32
+ try:
33
+ pseudo_can, fsw_seq, can_desc_association_seq, _ = AulSign(
34
+ input=sentence_to_analyse,
35
+ rules_prompt_path=rules_prompt_path_text2sign,
36
+ train_sentences=sentences_train_embeddings,
37
+ vocabulary=corpus_embeddings,
38
+ model=model,
39
+ ollama=False,
40
+ modality="text2sign"
41
+ )
42
+ return fsw_seq, can_desc_association_seq
43
+ except Exception as e:
44
+ return f"Error: {str(e)}"
45
+ # Function for the sign2text mode
46
+ def sign_to_text(fsw_to_analyse):
47
+ try:
48
+ mapped_input = sign2text(fsw_to_analyse,corpus_embeddings_path)
49
+ print(mapped_input)
50
+ except Exception as e:
51
+ return f"Error on mapping : {str(e)}"
52
+
53
+ try:
54
+ translation = AulSign(
55
+ input=mapped_input,
56
+ rules_prompt_path=rules_prompt_path_sign2text,
57
+ train_sentences=sentences_train_embeddings,
58
+ vocabulary=corpus_embeddings,
59
+ model=model,
60
+ ollama=False,
61
+ modality="sign2text"
62
+ )
63
+ return translation, mapped_input
64
+ except Exception as e:
65
+ return f"Error on AulSign: {str(e)}"
66
+
67
+ # Load the resources
68
+ model, corpus_embeddings_path, corpus_embeddings, sentences_train_embeddings, rules_prompt_path_text2sign, rules_prompt_path_sign2text = load_resources()
69
+
70
+ # Gradio interface
71
+ with gr.Blocks() as demo:
72
+ gr.Markdown("# AulSign Translator")
73
+ gr.Markdown("Translate from Natural Language to Formal SignWriting (FSW) or viceversa.")
74
+
75
+ with gr.Tab("Text to Sign"):
76
+ text_input = gr.Textbox(label="Enter a sentence", placeholder="Type your sentence here...")
77
+ fsw_output = gr.Textbox(label="Output")
78
+ intermediate_output = gr.Textbox(label="Intermediate Output")
79
+
80
+ translate_button = gr.Button("Translate")
81
+ translate_button.click(text_to_sign, inputs=text_input, outputs=[fsw_output,intermediate_output])
82
+
83
+ gr.Examples(
84
+ examples=["This is a new ASL translator"],
85
+ inputs=text_input,
86
+ outputs=[fsw_output,intermediate_output],
87
+ fn=text_to_sign
88
+ )
89
+
90
+ with gr.Tab("Sign to Text"):
91
+ sign_input = gr.Textbox(label="Enter an FSW sequence", placeholder="Type your FSW sequence here...")
92
+ text_output = gr.Textbox(label="Output")
93
+ intermediate_output = gr.Textbox(label="Intermediate Output")
94
+
95
+ reconstruct_button = gr.Button("Translate")
96
+ reconstruct_button.click(sign_to_text, inputs=sign_input, outputs=[text_output,intermediate_output])
97
+
98
+ gr.Examples(
99
+ examples=["M518x584S10004492x534S22a04493x569S30a00482x483 AS33b00S19210S20500S26504M519x547S33b00482x482S20500466x512S26504464x532S19210498x511 M530x522S15a36502x510S1813e501x503S2890f470x478 M512x535S1f720492x466S20320497x485S1dc20488x505 M528x595S10009483x405S10021473x422S2e024488x453S10001491x488S10029493x504S15a48477x548S15a40515x548S22a14476x580S22a04515x580"],
100
+ inputs=sign_input,
101
+ outputs=[text_output,intermediate_output],
102
+ fn=sign_to_text
103
+ )
104
+
105
+ # Launch the app
106
+ if __name__ == "__main__":
107
+ demo.launch(share=True)
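+
+ # Usage note (assumed setup, not part of the original commit message): launch
+ # locally with `python app.py` from the repository root; the tools/*.json
+ # embedding files must be present and, since scripts/aulsign.py builds an
+ # OpenAI client at import time, OPENAI_API_KEY must be set in the environment.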
scripts/__pycache__/aulsign.cpython-310.pyc ADDED
Binary file (13.3 kB).
 
scripts/aulsign.py ADDED
@@ -0,0 +1,588 @@
1
+ import os
2
+ import json
3
+ import numpy as np
4
+ import pandas as pd
5
+ import logging
6
+ from collections import Counter
7
+ from sentence_transformers import SentenceTransformer
8
+ import warnings
9
+ from datetime import datetime
10
+ from sklearn.preprocessing import normalize
11
+ import requests
13
+ import argparse
14
+ from openai import OpenAI
15
+
16
+ from scripts.scripts.sign2text_mapping import sign2text
17
+
18
+ warnings.filterwarnings("ignore", category=FutureWarning)
19
+
20
+
21
+ # Set up logging configuration
22
+ logging.basicConfig(
23
+ filename='AulSign.log', # Log to a file
24
+ level=logging.DEBUG, # Log everything, including debug info
25
+ format='%(asctime)s - %(levelname)s - %(message)s', # Log format
26
+ filemode='w' # Overwrite the log file each run
27
+ )
28
+
29
+
30
+
31
+ client = OpenAI(
32
+ organization=os.getenv("OPENAI_ORGANIZATION"),
33
+ project=os.getenv("OPENAI_PROJECT"),
34
+ api_key=os.getenv("OPENAI_API_KEY")
35
+ )
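+ # Note: the client reads OPENAI_ORGANIZATION, OPENAI_PROJECT and
+ # OPENAI_API_KEY from the environment at import time; without a valid key,
+ # construction fails before any translation runs.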
36
+
37
+ print('Inference started...')
38
+
39
+ def query_ollama(messages, model="mistral:7b-instruct-fp16"):
40
+ url = "http://localhost:11434/api/chat"
41
+
42
+ options = {"seed": 42,"temperature": 0.1}
43
+
44
+
45
+ payload = {
46
+ "model": model,
47
+ "messages": messages,
48
+ "options": options,
49
+ "stream": False
50
+ }
51
+
52
+ response = requests.post(url, json=payload)
53
+
54
+ if response.status_code == 200:
55
+ return response.json()["message"]["content"]
56
+ else:
57
+ return f"Error: {response.status_code}, {response.text}"
58
+
59
+ def check_repetition(text, threshold=0.2):
60
+ if not text:
61
+ return False
62
+
63
+ words = [word.strip() for word in text.split('#')]
64
+
65
+ unique_words = len(set(words))
66
+ total_words = len(words)
67
+
68
+ if "<unk>" in words:
69
+ logging.debug(f"Check repetition: '<unk>' was generated in the answer")
70
+ return True
71
+
72
+
73
+ is_repetitive = unique_words < total_words * threshold
74
+ logging.debug(f"Check repetition: {is_repetitive} (Unique: {unique_words}, Total: {total_words})")
75
+ return is_repetitive
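+
+ # Worked example (illustrative): for the answer
+ # "cat|feline # cat|feline # cat|feline # cat|feline # cat|feline",
+ # unique_words=1 and total_words=5, so 1 < 5 * 0.2 holds and the answer is
+ # flagged as repetitive; any "<unk>" token flags it regardless of the ratio.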
76
+
77
+
78
+ # Function to merge predictions with gold data and compute metrics
79
+ def prepare_dataset(prediction: pd.DataFrame, validation: pd.DataFrame, modality:str):
80
+ if modality=='text2sign':
81
+ validation = validation.rename(columns={'fsw':'gold_fsw_seq','symbol': 'gold_symbol_seq', 'word': 'gold_cd'})
82
+ metrics = prediction.merge(validation[['gold_symbol_seq','gold_cd', 'sentence','gold_fsw_seq']], on=['sentence'])
83
+ elif modality=='sign2text':
84
+ validation = validation.rename(columns={'word': 'gold_cd'})
85
+ metrics = prediction.merge(validation[['sentence','gold_cd']], on=['gold_cd'])
86
+ return metrics
87
+
88
+ # Define cosine similarity function if it's missing
89
+ def cos_sim(a, b):
90
+ return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
91
+
92
+ def find_most_similar_sentence(user_embedding, train_sentences: pd.DataFrame, n=3, unk_threshold=7):
93
+ # Extract the embeddings, decompositions, and sentences from the DataFrame
94
+ sentence_embeddings = np.vstack(train_sentences["embedding_sentence"].values) # Matrix of sentence embeddings
95
+ decompositions = train_sentences["decomposition"].values
96
+ sentences = train_sentences["sentence"].values
97
+
98
+ # Normalize the sentence embeddings and the user embedding
99
+ sentence_embeddings = normalize(sentence_embeddings, axis=1)
100
+ user_embedding = normalize(user_embedding.reshape(1, -1), axis=1)
101
+
102
+ # Compute the similarities with a single matrix-vector multiplication
103
+ similarities = np.dot(sentence_embeddings, user_embedding.T).flatten() # Shape (num_sentences,)
104
+
105
+ # Set the similarity to zero for sentences with too many "<unk>"
106
+ unk_counts = np.array([d.count("<unk>") for d in decompositions])
107
+ similarities[unk_counts > unk_threshold] = 0 # Penalize sentences with too many "<unk>"
108
+
109
+ # Get the indices of the top-n most similar sentences
110
+ top_n_indices = np.argsort(similarities)[-n:][::-1]
111
+
112
+ # Return the decompositions and sentences corresponding to the top-n similarities
113
+ return [decompositions[i] for i in top_n_indices], [sentences[i] for i in top_n_indices]
114
+
115
+
116
+ def find_most_similar_canonical_entry(user_embedding, vocabulary: pd.DataFrame, n=30):
117
+ # Extract embeddings and words from the vocabulary
118
+ vocabulary_embeddings = np.vstack(vocabulary["embedding"].values) # Matrix of embeddings
119
+ vocabulary_words = vocabulary["word"].values
120
+
121
+ # Normalize vocabulary embeddings and user embedding
122
+ vocabulary_embeddings = normalize(vocabulary_embeddings, axis=1)
123
+ user_embedding = normalize(user_embedding.reshape(1, -1), axis=1)
124
+
125
+ # Compute cosine similarities for all entries in one matrix multiplication
126
+ similarities = np.dot(vocabulary_embeddings, user_embedding.T).flatten() # Shape (vocabulary_size,)
127
+
128
+ # Get a sorted list of indices based on similarity scores
129
+ sorted_indices = np.argsort(similarities)[::-1] # Sort in descending order
130
+
131
+ # Initialize lists for canonical entries and similarities
132
+ canonical_list = []
133
+ canonical_similarities = []
134
+
135
+ for idx in sorted_indices:
136
+ if len(canonical_list) >= n: # Stop once we have n entries
137
+ break
138
+
139
+ # Get canonical entry for the current word
140
+ canonical_entry = get_most_freq(vocabulary_words[idx])
141
+
142
+ # Check for duplicates in canonical entries
143
+ if canonical_entry not in canonical_list:
144
+ canonical_list.append(canonical_entry)
145
+ canonical_similarities.append(similarities[idx])
146
+
147
+ # Return the top n canonical entries and their similarities
148
+ return canonical_list#, canonical_similarities
149
+
150
+
151
+ def get_most_freq(lista:list):
152
+ lista_cleaned = []
153
+ for segno in lista:
154
+ segno_pulito = segno.lower().strip()
155
+ if segno_pulito not in lista_cleaned:
156
+ lista_cleaned.append(segno_pulito)
157
+
158
+ frequency_count = Counter(lista_cleaned)
159
+ #print(frequency_count)
160
+ top_two_words = frequency_count.most_common(2)
161
+
162
+ if len(top_two_words) >= 2:
163
+ first_word = top_two_words[0][0]
164
+ second_word = top_two_words[1][0]
165
+
166
+ return first_word+'|'+second_word
167
+ else:
168
+ first_word = top_two_words[0][0]
169
+ return first_word
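+
+ # Note: duplicates are dropped before counting, so most_common() returns the
+ # first one or two distinct entries in insertion order rather than true
+ # frequency winners.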
170
+
171
+ def get_most_freq_fsw(lista_fsw):
172
+ if isinstance(lista_fsw,str):
173
+ return lista_fsw
174
+ else:
175
+ frequency_count = Counter(lista_fsw)
176
+ max_freq_word = frequency_count.most_common(1)[0][0]
177
+ return max_freq_word
178
+
179
+
180
+ def get_fsw_exact(vocabulary: pd.DataFrame, can_desc_answer, model, top_k=10):
181
+ # Extract vocabulary embeddings and words
182
+ vocabulary_embeddings = np.vstack(vocabulary["embedding"].values) # Create a matrix of all embeddings
183
+ vocabulary_words = vocabulary["word"].values
184
+ vocabulary_fsw = vocabulary["fsw"].values
185
+
186
+ # Normalize vocabulary embeddings for cosine similarity
187
+ vocabulary_embeddings = normalize(vocabulary_embeddings, axis=1)
188
+
189
+ fsw_seq = []
190
+ can_desc_association_seq = []
191
+ joint_prob = 1
192
+
193
+ for can_d in can_desc_answer:
194
+ # Encode the candidate description and normalize
195
+ can_d_emb = model.encode(can_d, normalize_embeddings=True).reshape(1, -1) # Shape (1, embedding_dim)
196
+
197
+ # Compute cosine similarities using matrix multiplication
198
+ similarities = np.dot(vocabulary_embeddings, can_d_emb.T).flatten() # Shape (vocabulary_size,)
199
+
200
+ # Get the indices of the top_k most similar elements
201
+ top_k_indices = np.argsort(similarities)[-top_k:][::-1] # Indices of top-k elements
202
+ top_k_words = vocabulary_words[top_k_indices]
203
+ top_k_fsws = vocabulary_fsw[top_k_indices]
204
+ top_k_similarities = similarities[top_k_indices]
205
+
206
+ # Check for an exact match in the top_k elements
207
+ exact_match_index = next((i for i, word in enumerate(top_k_words) if get_most_freq(word) == can_d.strip()), None)
208
+
209
+ if exact_match_index is not None:
210
+ # Exact match found
211
+ most_similar_word = get_most_freq(top_k_words[exact_match_index])
212
+ fsw = top_k_fsws[exact_match_index]
213
+ max_similarity = 1 # Assign maximum similarity for an exact match
214
+ else:
215
+ # If no exact match, use the most similar word semantically
216
+ max_index = 0 # First element in the sorted top_k (highest similarity)
217
+ most_similar_word = get_most_freq(top_k_words[max_index])
218
+ fsw = top_k_fsws[max_index]
219
+ max_similarity = top_k_similarities[max_index]
220
+
221
+ # Append the result
222
+ logging.info(fsw)
223
+ fsw_seq.append(get_most_freq_fsw(fsw)) # Append to fsw sequence
224
+ joint_prob *= max_similarity # Multiply joint probability
225
+ can_desc_association_seq.append(most_similar_word)
226
+
227
+ # Logging
228
+ logging.debug(f"Word: {can_d}")
229
+ logging.debug(f"Most similar word in vocabulary: {most_similar_word}")
230
+ logging.debug(f"Similarity: {max_similarity}")
231
+ logging.debug(f"Fsw_seq: {' '.join(fsw_seq)}")
232
+ logging.debug("---")
233
+
234
+ # Compute geometric mean of joint probability
235
+ joint_prob = pow(joint_prob, 1 / len(can_desc_association_seq))
236
+
237
+ return ' '.join(fsw_seq), ' # '.join(can_desc_association_seq), np.round(joint_prob, 3)
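+
+ # joint_prob is the geometric mean of the per-token similarities, so a single
+ # weak vocabulary match proportionally lowers the overall confidence score.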
238
+
239
+ # Process input sentence through retrieval-augmented generation (RAG)
240
+ def AulSign(input:str, rules_prompt_path:str, train_sentences:pd.DataFrame, vocabulary:pd.DataFrame, model, ollama:bool, modality:str):
241
+ """
242
+ AulSign: A function for translating between text and Formal SignWriting (FSW) or vice versa.
243
+
244
+ This function leverages embeddings, similarity matching, and language models to facilitate
245
+ translations based on the specified modality (`text2sign` or `sign2text`).
246
+
247
+ Args:
248
+ input (str):
249
+ The sentence or sign sequence to be analyzed and translated.
250
+ rules_prompt_path (str):
251
+ Path to a file containing predefined prompts and rules to guide the language model.
252
+ train_sentences (pd.DataFrame):
253
+ A dataset containing sentences and their embeddings for training or similarity matching.
254
+ vocabulary (pd.DataFrame):
255
+ A table of vocabulary entries with canonical descriptions and embeddings, used for matching.
256
+ model:
257
+ The embedding model used to convert sentences or sign sequences into vector representations.
258
+ ollama (bool):
259
+ Specifies whether to use the `query_ollama` method for querying the language model.
260
+ modality (str):
261
+ The translation mode:
262
+ - `'text2sign'`: Converts text to Formal SignWriting sequences.
263
+ - `'sign2text'`: Converts Formal SignWriting to textual sentences.
264
+
265
+ Returns:
266
+ For `modality == "text2sign"`:
267
+ tuple:
268
+ - answer (str):
269
+ The translated text or decomposition provided by the language model.
270
+ - fsw (str):
271
+ The space-separated Formal SignWriting sequence associated with the translation.
272
+ - can_desc_association_seq (str):
273
+ The ' # '-separated string of canonical descriptions matched to the FSW tokens.
274
+ - joint_prob (float):
275
+ The joint probability of the most likely translation path.
276
+
277
+ For `modality == "sign2text"`:
278
+ str:
279
+ The reconstructed textual sentence translated from the input sign sequence.
280
+
281
+ If an invalid modality is provided:
282
+ str:
283
+ Returns 'error' to indicate invalid input.
284
+
285
+ Raises:
286
+ Exception:
287
+ Logs and raises errors encountered during API calls or message construction.
288
+ """
289
+
290
+ sent_embedding = model.encode(input, normalize_embeddings=True)
291
+
292
+ if modality =='text2sign':
293
+
294
+ similar_canonical = find_most_similar_canonical_entry(sent_embedding, vocabulary, n=100)
295
+ #print(similar_canonical)
296
+
297
+
298
+ similar_canonical_str = ' # '.join(similar_canonical)
299
+
300
+ # Load the rules prompt from the file
301
+ with open(rules_prompt_path, 'r') as file:
302
+ rules_prompt = file.read().format(similar_canonical=similar_canonical_str)
303
+
304
+ # Find the most similar sentences from training set
305
+ decomposition, sentences = find_most_similar_sentence(
306
+ user_embedding=sent_embedding,
307
+ train_sentences=train_sentences,
308
+ n=20
309
+ )
310
+
311
+ messages = [{"role": "system", "content": rules_prompt}]
312
+ for sentence, decomposition in zip(sentences, decomposition):
313
+ # Ensure each message has 'role' and 'content' keys
314
+ if sentence and decomposition:
315
+ messages.append({"role": "user", "content": sentence})
316
+ messages.append({"role": "assistant", "content": decomposition})#.replace(' | ',' # ')})
317
+ else:
318
+ logging.warning("Missing 'sentence' or 'decomposition' in messages.")
319
+
320
+ messages.append({"role": "user", "content": "decompose the following sentence as shown in the previous examples"})
321
+ messages.append({"role": "user", "content": input})
322
+
323
+ # Validate the constructed messages before converting to prompt text
324
+ valid_messages = []
325
+ for message in messages:
326
+ if 'role' in message and 'content' in message:
327
+ valid_messages.append(message)
328
+ logging.debug(message)
329
+ else:
330
+ logging.error(f"Invalid message format detected: {message}")
331
+
332
+ if ollama:
333
+ # Query the LLM using query_ollama instead of llm_pipeline
334
+ answer = query_ollama(messages)#, model="mistral:7b-instruct-fp16")
335
+
336
+ logging.info("\n[LOG] MISTRAL Answer:")
337
+ logging.info(answer)
338
+
339
+ can_description_answer = answer.split('#')
340
+ else:
341
+ try:
342
+ # Initial API call
343
+ completion = client.chat.completions.create(
344
+ model="gpt-3.5-turbo",
345
+ messages=messages,
346
+ temperature=0
347
+ )
348
+ answer = completion.choices[0].message.content
349
+
350
+ if check_repetition(answer):
351
+ # Optional: Repetition check
352
+ presence_penalty = 0.6
353
+ completion = client.chat.completions.create(
354
+ model="gpt-3.5-turbo",
355
+ messages=messages,
356
+ presence_penalty=presence_penalty,
357
+ temperature=0
358
+ )
359
+ logging.info(f"presence_penalty: {presence_penalty}")
360
+ answer = completion.choices[0].message.content
361
+ logging.info('ANSWER: GPT')
362
+ logging.info(answer + '\n\n')
363
+
364
+ # Update parsed answer
365
+ can_description_answer = answer.split('#')
366
+
367
+ else:
368
+ logging.info('ANSWER: GPT')
369
+ logging.info(answer + '\n\n')
370
+
371
+ # Split for further processing
372
+ can_description_answer = answer.split('#')
373
+
374
+
375
+ except Exception as e:
376
+ logging.error(f"Error during GPT API call: {e}")
377
+
378
+ # Map canonical descriptions to most similar words in vocabulary
379
+ fsw, can_desc_association_seq, joint_prob = get_fsw_exact(
380
+ vocabulary=vocabulary,
381
+ can_desc_answer=can_description_answer,
382
+ model=model
383
+ )
384
+
385
+ return answer, fsw, can_desc_association_seq, joint_prob
386
+
387
+ elif modality =='sign2text':
388
+
389
+ # Load the rules prompt from the file
390
+ with open(rules_prompt_path, 'r') as file:
391
+ rules_prompt = file.read()
392
+
393
+
394
+ # Find the most similar sentences from training set
395
+ decomposition, sentences = find_most_similar_sentence(
396
+ user_embedding=sent_embedding,
397
+ train_sentences=train_sentences,
398
+ n=30
399
+ )
400
+
401
+ messages = [{"role": "system", "content": rules_prompt}]
402
+ for sentence, decomposition in zip(sentences, decomposition):
403
+ # Ensure each message has 'role' and 'content' keys
404
+ if sentence and decomposition:
405
+ messages.append({"role": "user", "content": decomposition})
406
+ messages.append({"role": "assistant", "content": sentence}) # qui stiamo invertendo il task! dalla decomposition vogliamo che l'assistant ci dia la sentence
407
+ else:
408
+ logging.warning("Missing 'sentence' or 'decomposition' in messages.")
409
+
410
+ messages.append({"role": "user", "content": "reconstruct the sentence as shown on the examples above"})
411
+ messages.append({"role": "user", "content": input})
412
+
413
+ # Validate the constructed messages before converting to prompt text
414
+ valid_messages = []
415
+ for message in messages:
416
+ if 'role' in message and 'content' in message:
417
+ valid_messages.append(message)
418
+ logging.debug(message)
419
+ else:
420
+ logging.error(f"Invalid message format detected: {message}")
421
+
422
+ if ollama:
423
+ # Query the LLM using query_ollama instead of llm_pipeline
424
+ answer = query_ollama(messages)#, model="mistral:7b-instruct-fp16")
425
+
426
+ logging.info("\n[LOG] MISTRAL Answer:")
427
+ logging.info(answer)
428
+
429
+ can_description_answer = answer.split('#')
430
+ else:
431
+ try:
432
+ # Initial API call
433
+ completion = client.chat.completions.create(
434
+ model="gpt-3.5-turbo",
435
+ messages=messages,
436
+ temperature=0
437
+ )
438
+ answer = completion.choices[0].message.content
439
+ logging.info('ANSWER: GPT')
440
+ logging.info(answer + '\n\n')
441
+
442
+
443
+ except Exception as e:
444
+ logging.error(f"Error during GPT API call: {e}")
445
+
446
+ return answer
447
+ else:
448
+ return 'error'
449
+
450
+
451
+ def main(modality, setup, input=None):
452
+ np.random.seed(42)
453
+ current_time = datetime.now().strftime("%Y_%m_%d_%H_%M")
454
+ data_path = f"data/preprocess_output_{setup}/file_comparison"
455
+ corpus_embeddings_path = 'tools/corpus_embeddings.json'
456
+ if setup is None:
457
+ sentences_train_embeddings_path = f"tools/sentences_train_embeddings_filtered_01.json"
458
+ else:
459
+ sentences_train_embeddings_path = f"tools/sentences_train_embeddings_{setup}.json"
460
+ rules_prompt_path_text2sign = 'tools/rules_prompt_text2sign.txt'
461
+ rules_prompt_path_sign2text = 'tools/rules_prompt_sign2text.txt'
462
+
463
+ # Model to use for sentence embeddings
464
+ model_name = "mixedbread-ai/mxbai-embed-large-v1"
465
+ model = SentenceTransformer(model_name)
466
+
467
+ # Load embeddings
468
+ with open(corpus_embeddings_path, 'r') as file:
469
+ corpus_embeddings = pd.DataFrame(json.load(file))
470
+
471
+ with open(sentences_train_embeddings_path, 'r') as file:
472
+ sentences_train_embeddings = pd.DataFrame(json.load(file))
473
+
474
+ if input: # If a custom sentence is provided
475
+ if modality == 'text2sign':
476
+ answer, fsw_seq, can_desc_association_seq, joint_prob = AulSign(
477
+ input=input,
478
+ rules_prompt_path=rules_prompt_path_text2sign,
479
+ train_sentences=sentences_train_embeddings,
480
+ vocabulary=corpus_embeddings,
481
+ model=model,
482
+ ollama=False,
483
+ modality=modality
484
+ )
485
+ #print(f"Input Sentence: {input}")
486
+ print(f"Canonical Descriptions: {can_desc_association_seq}")
487
+ print(f"Translation (FSW): {fsw_seq}")
488
+ #print(f"Canonical Descriptions: {can_desc_association_seq}")
489
+ #print(f"Joint Probability: {joint_prob}")
490
+
491
+ elif modality == 'sign2text': # here the input is an FSW sequence, which must be mapped to canonical entries
492
+ mapped_input = sign2text(input,corpus_embeddings_path)
493
+ logging.info(f"\nReconstructed Sentence via Vocabulary: {mapped_input}")
494
+ answer= AulSign(
495
+ input=mapped_input,
496
+ rules_prompt_path=rules_prompt_path_sign2text,
497
+ train_sentences=sentences_train_embeddings,
498
+ vocabulary=corpus_embeddings,
499
+ model=model,
500
+ ollama=False,
501
+ modality=modality
502
+ )
503
+ print(f"Input Sign Vocabulary Mapping: {input}")
504
+ print(f"Translation (Text): {answer}")
505
+
506
+ else: # Standard flow with the test set
507
+ test_path = os.path.join(data_path, "test.csv")
508
+ test = pd.read_csv(test_path)
509
+ test = test.head(1)
510
+
511
+ if modality == 'text2sign':
512
+ list_sentence = []
513
+ list_answer = []
514
+ list_fsw_seq = []
515
+ can_desc_association_list = []
516
+ prob_of_association_list = []
517
+
518
+ for index, row in test.iterrows():
519
+ sentence = row['sentence']
520
+ answer, fsw_seq, can_desc_association_seq, joint_prob = AulSign(
521
+ input=sentence,
522
+ rules_prompt_path=rules_prompt_path_text2sign,
523
+ train_sentences=sentences_train_embeddings,
524
+ vocabulary=corpus_embeddings,
525
+ model=model,
526
+ ollama=False,
527
+ modality=modality
528
+ )
529
+
530
+ list_sentence.append(sentence)
531
+ list_answer.append(answer)
532
+ list_fsw_seq.append(fsw_seq)
533
+ can_desc_association_list.append(can_desc_association_seq)
534
+ prob_of_association_list.append(joint_prob)
535
+
536
+ df_pred = pd.DataFrame({
537
+ 'sentence': list_sentence,
538
+ 'pseudo_cd': list_answer,
539
+ 'pred_cd': can_desc_association_list,
540
+ 'joint_prob': prob_of_association_list,
541
+ 'pred_fsw_seq': list_fsw_seq
542
+ })
543
+ output_path = os.path.join('result', f"{modality}_{current_time}")
544
+ os.makedirs(output_path, exist_ok=True)
545
+ df_pred = prepare_dataset(df_pred,test,modality)
546
+ df_pred.to_csv(os.path.join(output_path, f'result_{current_time}.csv'), index=False)
547
+
548
+ elif modality == 'sign2text':
549
+
550
+ list_answer = []
551
+ list_gold_cd = []
552
+
553
+ for index, row in test.iterrows():
554
+ dec_sentence = row['word']
555
+ answer = AulSign(
556
+ input=dec_sentence,
557
+ rules_prompt_path=rules_prompt_path_sign2text,
558
+ train_sentences=sentences_train_embeddings,
559
+ vocabulary=corpus_embeddings,
560
+ model=model,
561
+ ollama=False,
562
+ modality=modality
563
+ )
564
+ list_gold_cd.append(dec_sentence)
565
+ list_answer.append(answer)
566
+
567
+ df_pred = pd.DataFrame({
568
+ 'pseudo_sentence': list_answer,
569
+ 'gold_cd': list_gold_cd,
570
+ })
571
+ output_path = os.path.join('result', f"{modality}_{current_time}")
572
+ os.makedirs(output_path, exist_ok=True)
573
+ df_pred = prepare_dataset(df_pred,test,modality)
574
+ df_pred.to_csv(os.path.join(output_path, f'result_{current_time}.csv'), index=False)
575
+
576
+ if __name__ == "__main__":
577
+
578
+ #sentence_to_analyze = "This is a new ASL translator"
579
+ #main(modality='text2sign', setup="filtered_01", input=sentence_to_analyze)
580
+ #main(modality='text2sign', setup="filtered_01")
581
+
582
+
583
+ parser = argparse.ArgumentParser()
584
+ parser.add_argument("--mode", required=True, help="Mode of operation: text2sign or sign2text")
585
+ parser.add_argument("--input", help="Input text or sign sequence")
586
+ args = parser.parse_args()
587
+
588
+ main(args.mode, setup=None, input=args.input)
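+
+ # Example invocations (assumed; run from the repository root so the
+ # scripts.* imports resolve):
+ #   python -m scripts.aulsign --mode text2sign --input "This is a new ASL translator"
+ #   python -m scripts.aulsign --mode sign2text --input "<FSW sequence>"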
scripts/scripts/__pycache__/clean.cpython-310.pyc ADDED
Binary file (1.71 kB).
 
scripts/scripts/__pycache__/sign2text_mapping.cpython-310.pyc ADDED
Binary file (2.13 kB).
 
scripts/scripts/clean.py ADDED
@@ -0,0 +1,77 @@
1
+ import re
2
+ from signwriting.formats.fsw_to_sign import fsw_to_sign
3
+
4
+ def clean_sign(fsw_list: list, glue=' ', sort=False):
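+ # Splits each FSW token into its box symbol, inner symbols and x/y position
+ # factors, returning the glue-joined raw FSW tokens plus parallel strings of
+ # symbol IDs and x/y position factors.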
5
+ list_factor_x, list_factor_y = [], []
6
+ list_symbols = []
7
+
8
+ for item in fsw_list:
9
+ sign = fsw_to_sign(item)
10
+
11
+ # Add the main box symbol with its position, if present
12
+ if 'box' in sign and 'symbol' in sign['box'] and 'position' in sign['box']:
13
+ # head of the symbol
14
+ box_symbol = sign['box']['symbol']
15
+ box_factor_x = sign['box']['position'][0]
16
+ box_factor_y = sign['box']['position'][1]
17
+ list_symbols.append(str(box_symbol))
18
+ list_factor_x.append(str(box_factor_x))
19
+ list_factor_y.append(str(box_factor_y))
20
+ if sort:
21
+ # Sort symbols alphabetically by 'symbol' key
22
+ symbols = sorted(sign['symbols'], key=lambda el: el['symbol'])
23
+ else:
24
+ symbols = sign['symbols']
25
+
26
+ for el in symbols:
27
+ symbol = el['symbol']
28
+ factor_x = el['position'][0]
29
+ factor_y = el['position'][1]
30
+ list_factor_x.append(str(factor_x))
31
+ list_factor_y.append(str(factor_y))
32
+ list_symbols.append(symbol)
33
+
34
+ return glue.join(fsw_list), glue.join(list_symbols), glue.join(list_factor_x), glue.join(list_factor_y)
35
+
36
+
37
+
38
+ def replace_match(match):
39
+ return "|" + match.group(0) # Helper function to add '|' separator
40
+
41
+ def clean_symbol(symbols, pattern=r'\b(M|L|R|B)\b', glue=' ', order=False, add_box_symbol=False):
42
+ # Add separators to the symbols using the pattern
43
+ symbols = re.sub(pattern, replace_match, symbols)
44
+
45
+
46
+ # Remove the leading '|' character if present
47
+ if symbols.startswith('|'):
48
+ symbols = symbols[1:]
49
+
50
+
51
+ symbols_cleaned_list = []
52
+
53
+ # Iterate over the symbols separated by the '|' character
54
+ for symbol in symbols.split('|'):
55
+ # Check whether the symbol starts with one of the target letters (M, L, R, B)
56
+
57
+ if symbol.startswith(('M', 'L', 'R', 'B')):
58
+ box_symbol = symbol[0] # Assign the letter as box_symbol
59
+ symbol_pure = symbol[1:].strip() # Remove the letter from the symbol
60
+
61
+ if order:
62
+ symbol_pure = ' '.join(sorted(symbol_pure.split()))
63
+
64
+ else:
65
+ symbol_pure = ' '.join(symbol_pure.split())
66
+
67
+ # Prepend box_symbol to the cleaned symbol
68
+ if add_box_symbol:
69
+ symbols_cleaned_list.append(box_symbol + ' ' + symbol_pure)
70
+ else:
71
+ symbols_cleaned_list.append(symbol_pure)
72
+
73
+
74
+ # Join all the cleaned symbols with `glue`
75
+ return glue.join(symbols_cleaned_list)
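+
+ # Worked example (illustrative): clean_symbol("M S10004 S22a04", glue='|')
+ # splits on the standalone box markers M/L/R/B and returns "S10004 S22a04".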
76
+
77
+
scripts/scripts/sign2text_mapping.py ADDED
@@ -0,0 +1,60 @@
1
+
2
+ import pandas as pd
3
+ import json
4
+ from collections import Counter
5
+ from scripts.scripts.clean import clean_sign, clean_symbol
6
+
7
+ def get_most_freq(lista):
8
+ lista_cleaned = [item.lower().strip() for item in lista]
9
+ frequency_count = Counter(lista_cleaned)
10
+ top_two_words = frequency_count.most_common(2)
11
+
12
+ if len(top_two_words) >= 2:
13
+ return top_two_words[0][0] + '|' + top_two_words[1][0]
14
+ elif len(top_two_words) == 1:
15
+ return top_two_words[0][0]
16
+ else:
17
+ return ''
18
+
19
+ def sign2text(fsw_seq:str, vocab_path:str):
20
+
21
+ df = pd.DataFrame({'fsw': [fsw_seq]})
22
+
23
+ df['symbol'] = df['fsw'].apply(lambda x: clean_sign(x.split())[1])
24
+ df['symbol'] = df['symbol'].apply(lambda x: clean_symbol(x,glue='|',order=False))
25
+
26
+ #print('\n') # debug
27
+ #print(df.loc[0,'fsw']) # debug
28
+ #print('\n') # debug
29
+ #print('\n') # debug
30
+ #print(df.loc[0,'symbol']) # debug
31
+
32
+
33
+ with open(vocab_path, 'r') as file:
34
+ content = file.read()
35
+ vocab = json.loads(content)
36
+
37
+ list_word = []
38
+
39
+ fsw, symbols = df.loc[0,'fsw'], df.loc[0,'symbol']
40
+ for symbol in symbols.split('|'):
41
+
42
+ #print(symbol) #debug
43
+ temp_ordered_symbol = ' '.join(sorted(symbol.split()))
44
+ #print(temp_ordered_symbol) #debug
45
+
46
+ mapped_words = [
47
+ ', '.join(entry['word'])
48
+ for entry in vocab
49
+ if 'symbol' in entry
50
+ and any(temp_ordered_symbol == s for s in (entry['symbol'] if isinstance(entry['symbol'], list) else [entry['symbol']]))
51
+ ]
52
+
53
+ if mapped_words:
54
+ canonical = get_most_freq(' '.join(mapped_words).split(','))
55
+ list_word.append(canonical)
56
+ else:
57
+ list_word.append("<unk>")
58
+
59
+
60
+ return ' # '.join(list_word)
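+
+ # Usage sketch (hypothetical FSW input): sign2text(fsw, "tools/corpus_embeddings.json")
+ # yields a " # "-joined string of canonical words, with "<unk>" for any sign
+ # whose sorted symbol set has no match in the vocabulary.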
tools/rules_prompt_sign2text.txt ADDED
@@ -0,0 +1,8 @@
1
+ You will be provided with a decomposition of a sentence. Each component of the decomposition represents a lexical or semantic unit, followed by its interpretation or contextual synonym. Based on this decomposition, reconstruct the original sentence, ensuring grammatical correctness and coherence with the overall meaning. Follow these rules:
2
+
3
+ 1. Respect the logical order: Reconstruct the sentence by following the order of elements in the decomposition.
4
+ 2. Use synonyms or interpretations: Replace each term or symbol with its appropriate meaning or synonym indicated in the decomposition.
5
+ 3. Include punctuation: Use the indicated punctuation (e.g., full stop|. represents a period, comma|pause represents a comma, etc.).
6
+ 4. Infer the semantic units of the sentence even if corresponding <unk> tokens are present.
7
+ 5. Adapt the structure: Rewrite the sentence so that it is grammatical and fluent, even if minor adjustments to the terms provided are necessary.
8
+ 6. Preserve the overall meaning: Ensure that the meaning of the reconstructed sentence aligns with the context suggested by the decomposition.
tools/rules_prompt_text2sign.txt ADDED
@@ -0,0 +1,11 @@
1
+ Your task is to simplify each sentence by breaking it down into basic components such as ("{similar_canonical}"), using the following structure:
2
+
3
+ 1. Use the symbol " # " as a separator between units
4
+ e.g. "Jesus said" becomes: "Jesus # said|proclaimed"
5
+ 2. For each unit, provide one additional related term, separated by a vertical bar (|).
6
+ e.g. "12" becomes: "twelve|12"
7
+ 3. Identify and label all punctuation marks (e.g., "comma," "end of sentence," "colon").
8
+ 4. Repeat proper nouns as they are.
9
+ e.g. "Jesus" becomes: "Jesus"
10
+ 5. For numbers, include both the numeric form and its word equivalent.
11
+ 6. Avoid adding any extra explanation or commentary beyond the simplified structure.