diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..e86af1421f427a1e2b1c6f71ef059d7603e0df25
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,109 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Environments
+.env
+.venv
+.seriguela
+venv/
+ENV/
+env/
+env.bak/
+venv.bak/
+
+# IDEs / Editors
+.idea/
+.vscode/
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# Output folder (usually too large for Git; keep .gitkeep so the folder itself survives)
+output/*
+!output/.gitkeep
+
+# Data (can be large; use Git LFS or store externally if needed)
+# Note: CSV files in data/processed/ can be 100MB+ and are excluded from git
+# Run scripts/data/prepare_training_data_fixed.py on target system to generate them
+# The !dir/ negations are required: once data/* excludes a directory,
+# files inside it cannot be re-included unless the directory itself is re-included.
+data/*
+!data/raw/
+!data/processed/
+data/raw/*
+data/processed/*
+!data/raw/.gitkeep
+!data/processed/.gitkeep
+
+# OS generated files
+.DS_Store
+.DS_Store? 
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+Thumbs.db
+.env
+
+wandb
+
+# AWS credentials and keys
+aws/keys/*.pem
+aws/keys/*.key
+aws/.env
+aws/credentials
+*.pem
+*.key
\ No newline at end of file
diff --git a/ANALYSIS_REPORT.md b/ANALYSIS_REPORT.md
new file mode 100644
index 0000000000000000000000000000000000000000..6aae1330bbcef213ae04210df910a930aee3c4e8
--- /dev/null
+++ b/ANALYSIS_REPORT.md
@@ -0,0 +1,283 @@
+# Seriguela - Consolidated Analysis Report
+
+**Date:** 2026-02-01
+**Status:** ⚠️ BLOCK 2 NEEDS RETRAINING
+
+---
+
+## Executive Summary
+
+The Seriguela project has 3 blocks:
+1. **Block 1 - Data:** Preparation and analysis ⚠️ **ROOT CAUSE HERE**
+2. **Block 2 - Supervised Training:** Train an LLM to generate expressions ❌ PROBLEM
+3. **Block 3 - PPO Finetuning:** Optimize for symbolic regression ⛔ BLOCKED
+
+**Root cause identified:** The training data has **NO `<|endofex|>` markers**. 0% of the 758,255 examples contain the marker. The model never learned to stop.
+
+---
+
+## Root Cause Investigation (2026-02-01)
+
+### Finding 1: The Original Validation Was Loose
+
+The `test_inference_configs.py` script reports **95% valid**, but accepts:
+```
+✅ VALID: C*x_1 + C*x_6 - tan(x_9) - Cainers: C9999(x
+✅ VALID: C*x_1 + C*x_2 + C*x_1 + C Pressure, sin, sqrt, tan
+```
+
+The original validation only checks:
+- Has an operator? ✓
+- Has a variable? ✓
+- Does not contain "Buyable"? ✓
+
+**It does NOT check:**
+- Whether the expression uses the prompt's variables
+- Whether it can be parsed
+- Whether it contains other garbage tokens
+
+### Finding 2: Training Data WITHOUT Markers
+
+```python
+# Dataset: augustocsc/sintetico_natural (700K)
+Total examples: 758,255
+Examples with <|endofex|>: 0 (0.0%)
+Examples with <|startofex|>: 0 (0.0%)
+```
+
+**The model NEVER saw `<|endofex|>` during training!**
+
+### Finding 3: Origin of the Garbage
+
+Garbage tokens (Stockholm, Pressure, XP, etc.) come from the **base GPT-2 vocabulary**.
+Since the model does not know when to stop, it eventually generates random tokens.
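The three checks above suggest what a stricter validator would need to verify. The sketch below is illustrative only (its names are assumptions, not the project's actual `test_inference_configs.py` logic): it requires the `<|endofex|>` marker, rejects variables not offered in the prompt, and rejects any non-mathematical identifier.

```python
import re

# Illustrative sketch of a stricter validity check than the original
# "has operator / has variable" test. Function names are assumed, not real API.
ALLOWED_FUNCS = {"sin", "cos", "tan", "log", "sqrt", "exp"}

def is_strictly_valid(generated: str, allowed_vars: set) -> bool:
    # The end marker must be present; everything after it is discarded.
    if "<|endofex|>" not in generated:
        return False
    expr = generated.split("<|endofex|>")[0].strip()
    # Every identifier must be an allowed function, an allowed variable, or C.
    for tok in re.findall(r"[A-Za-z_][A-Za-z_0-9]*", expr):
        if tok in ALLOWED_FUNCS or tok == "C":
            continue
        if re.fullmatch(r"x_\d+", tok):
            if tok not in allowed_vars:
                return False  # variable not offered in the prompt
            continue
        return False  # garbage token (e.g. "Stockholm")
    # Balanced parentheses as a cheap parseability proxy.
    depth = 0
    for ch in expr:
        depth += ch == "("
        depth -= ch == ")"
        if depth < 0:
            return False
    return depth == 0
```

Under these assumed rules, both "✅ VALID" samples above would be rejected (garbage identifiers, variables outside the prompt).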
+
+### Investigation Conclusion
+
+| Problem | Cause |
+|----------|-------|
+| Model does not stop | Data without `<|endofex|>` |
+| Garbage tokens | Base GPT-2 vocabulary leaks without a stopping signal |
+| Wrong variables | Data contains x_1-x_10; the model does not learn the restriction |
+| 95% vs 0% valid | The original validation was loose |
+
+### Required Solution
+
+1. **Prepare data** with `<|endofex|>` in 100% of the examples
+2. **Retrain the model** with the corrected data
+3. **Strict validation** during training
+
+---
+
+## Models Tested
+
+| Model | HuggingFace Hub | Expected | Actual | Status |
+|--------|-----------------|----------|------|--------|
+| V1 | augustocsc/Se124M_700K_infix | 83.3% valid | **0%** | ❌ Failed |
+| V2 | augustocsc/Se124M_700K_infix_v2 | 90% valid | **0%** | ❌ Failed |
+
+---
+
+## Tests Performed
+
+### Test 1: V1 vs V2 Comparison (same prompt)
+
+**Prompt:**
+```
+vars: x_1, x_2
+oper: *, +, -, sin, cos
+cons: C
+expr:
+```
+
+**Optimal configurations used:**
+- V1: temp=0.5, top_k=40, top_p=0.9, rep_penalty=1.15
+- V2: temp=0.7, top_k=0, top_p=0.8, rep_penalty=1.0
+
+**Results (20 generations each):**
+
+| Metric | V1 | V2 |
+|---------|----|----|
+| Valid Expressions | 0% | 0% |
+| Correct Symbols | 0% | 45% |
+
+### Test 2: PPO Evaluation
+
+**Goal:** Check whether the model can be used for PPO (symbolic regression)
+
+**Results:**
+- Valid Rate: 6.7% (very low)
+- Best R²: N/A (could not be computed)
+- **Conclusion:** PPO is not feasible with the current model
+
+---
+
+## Problems Identified
+
+### 1. Models Do Not Stop Correctly
+
+**Symptom:** Expressions continue past the expected end
+```
+Expected: C*x_1 + sin(x_2)<|endofex|>
+Generated: C*x_1 + sin(x_2) + C Stockholmvars: x_1, x_2, x_3...
+```
+
+**Cause:** The model never learned to generate `<|endofex|>`
+
+### 2. 
Garbage Tokens in the Output
+
+**Examples of generated garbage:**
+- "BuyableInstoreAndOnline"
+- "Stockholm", "GREEN", "Muslims"
+- "intuition", "records", "crash"
+- "xstatics", "xid", "sinmod"
+
+**Cause:** Contaminated training data OR the model did not converge
+
+### 3. Wrong Variables
+
+**Symptom:** Uses variables that are not allowed
+```
+Prompt asks for: x_1, x_2
+Model generates: x_9, x_10, x_3, x_4
+```
+
+**Cause:** The model did not learn to respect the prompt
+
+### 4. Discrepancy with the Documentation
+
+**The documentation claimed:**
+- V1: 83.3% valid with the optimized config
+- V2: 90% valid with nucleus sampling
+
+**Reality:**
+- V1: 0% valid
+- V2: 0% valid
+
+**Possible causes:**
+1. The models on the Hub are not the same ones that were tested
+2. The earlier tests had a bug
+3. The model is being loaded incorrectly
+
+---
+
+## Inference Configurations Tested
+
+### V1 Optimal Config (per the docs)
+```python
+{
+    "temperature": 0.5,
+    "top_k": 40,
+    "top_p": 0.9,
+    "repetition_penalty": 1.15,
+    "max_new_tokens": 100,
+    "do_sample": True,
+}
+```
+
+### V2 Optimal Config (per the docs)
+```python
+{
+    "temperature": 0.7,
+    "top_k": 0,
+    "top_p": 0.8,
+    "repetition_penalty": 1.0,
+    "max_new_tokens": 128,
+    "do_sample": True,
+}
+```
+
+**Result:** Even with the optimal configs, 0% valid.
+
+---
+
+## How the Models Are Loaded
+
+```python
+# 1. Load the GPT-2 base model
+model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16)
+
+# 2. Configure the tokenizer with the special tokens
+tokenizer = AutoTokenizer.from_pretrained("gpt2")
+tokenizer.add_special_tokens({
+    "additional_special_tokens": ["<|startofex|>", "<|endofex|>"]
+})
+
+# 3. Resize the embeddings
+model.resize_token_embeddings(len(tokenizer))
+
+# 4. Load the LoRA adapter
+model = PeftModel.from_pretrained(model, "augustocsc/Se124M_700K_infix_v2")
+
+# 5. 
Merge the adapter into the base model
+model = model.merge_and_unload()
+model.eval()
+```
+
+---
+
+## Conclusions
+
+### Block 2 (Training) - NEEDS RETRAINING
+
+**Training problems:**
+1. The model did not learn the `<|endofex|>` marker
+2. The data may be contaminated with garbage
+3. The model does not respect the prompt's variables
+
+**Required actions:**
+1. Validate the training data (100% must contain `<|endofex|>`)
+2. Clean garbage tokens out of the data
+3. Monitor the valid rate during training
+4. Only consider training successful if the valid rate is > 80%
+
+### Block 3 (PPO) - BLOCKED
+
+**Prerequisites for PPO:**
+- ✅ Base model generates >80% valid expressions
+- ✅ Expressions can be evaluated (R² computable)
+- ✅ Model stops correctly at boundaries
+
+**Current status:** ❌ No prerequisite is met
+
+---
+
+## Next Steps
+
+1. **Investigate the training data**
+   - Check whether `<|endofex|>` is present
+   - Identify the source of the garbage tokens
+
+2. **Retrain the model (V3)**
+   - Use validated data
+   - Monitor the valid rate during training
+   - Validate before pushing to the Hub
+
+3. 
**Only then test PPO**
+   - After the valid rate is > 80%
+   - With a model that stops correctly
+
+---
+
+## Relevant Code Files
+
+- `scripts/train.py` - Training script
+- `scripts/generate.py` - Generation with stopping criteria
+- `scripts/evaluate.py` - Model evaluation
+- `scripts/compare_v1_v2_simple.py` - V1 vs V2 comparison
+- `scripts/evaluate_ppo.py` - Evaluation for PPO
+- `scripts/data/prepare_training_data_fixed.py` - Data preparation
+- `classes/expression.py` - Expression parsing and validation
+
+---
+
+## AWS Infrastructure
+
+- **Instance:** g5.xlarge (NVIDIA A10G, 24GB)
+- **Instance ID:** i-0377b6c8de3660a82
+- **Cost:** ~$1/hour
+- **Current status:** Stopped (to save money)
+
+---
+
+**Last updated:** 2026-02-01
diff --git a/EXPERIMENT_PLAN.md b/EXPERIMENT_PLAN.md
new file mode 100644
index 0000000000000000000000000000000000000000..501b37e9929998905018373886fa82b0d95ce441
--- /dev/null
+++ b/EXPERIMENT_PLAN.md
@@ -0,0 +1,195 @@
+# Experiment Plan: Training Formats
+
+**Date:** 2026-02-01
+**Goal:** Test two approaches to solve the stopping problem
+
+---
+
+## Context
+
+### Identified Problem
+- The training data has no end-of-sequence marker (0% contain any marker)
+- The model does not learn when to stop
+- It generates garbage tokens from the GPT-2 vocabulary
+
+### Proposed Experiments
+1. **EXP-A:** Structured format (JSON-like)
+2. 
**EXP-B:** GPT-2's EOS token (`<|endoftext|>`)
+
+---
+
+## EXP-A: Structured Format
+
+### Data Format
+```json
+{"vars": ["x_1", "x_2"], "ops": ["*", "+", "sin"], "expr": "C*sin(x_1) + x_2"}
+```
+
+### Advantages
+- Clear, parseable structure
+- Easy validation (valid JSON = correct format)
+- The model learns a rigid structure
+
+### Disadvantages
+- More tokens per example
+- May be harder to learn
+
+### Data Preparation
+```python
+# Transform from:
+"vars: x_1, x_2\noper: *, +, sin\ncons: C\nexpr: C*sin(x_1) + x_2"
+
+# Into:
+'{"vars": ["x_1", "x_2"], "ops": ["*", "+", "sin"], "cons": "C", "expr": "C*sin(x_1) + x_2"}'
+```
+
+### Inference
+```python
+prompt = '{"vars": ["x_1", "x_2"], "ops": ["*", "+", "sin"], "cons": "C", "expr": "'
+# The model completes with: C*sin(x_1) + x_2"}
+# Extract: everything between 'expr": "' and '"}'
+```
+
+### Success Criteria
+- JSON parseable in >90% of cases
+- Extracted expression valid in >80% of cases
+
+---
+
+## EXP-B: GPT-2's EOS Token
+
+### Data Format
+```
+vars: x_1, x_2
+oper: *, +, sin
+cons: C
+expr: C*sin(x_1) + x_2<|endoftext|>
+```
+
+### Advantages
+- The token already exists in the model (ID 50256)
+- GPT-2 already understands it as "end of sequence"
+- No embedding resize needed
+- Format similar to the current one
+
+### Disadvantages
+- May conflict with other uses of EOS
+- Less explicit than a dedicated marker
+
+### Data Preparation
+```python
+# Append <|endoftext|> to the end of each expression
+text = original_text + "<|endoftext|>"
+```
+
+### Inference
+```python
+# Use eos_token_id as the stopping criterion
+output = model.generate(
+    **inputs,
+    eos_token_id=tokenizer.eos_token_id,  # 50256
+    max_new_tokens=128
+)
+```
+
+### Success Criteria
+- The model generates `<|endoftext|>` in >90% of cases
+- The expression before the EOS is valid in >80% of cases
+
+---
+
+## Execution Plan
+
+### Phase 1: Data Preparation (Local)
+
+#### 1.1 Create the preparation script
+```
+scripts/data/prepare_experiment_data.py
+```
+- Input: the augustocsc/sintetico_natural dataset (700K)
+- Output A: data/exp_a_json/train.csv, validation.csv
+- Output B: data/exp_b_eos/train.csv, validation.csv
+
+#### 1.2 Validate the prepared data
+- Check that the format is correct in 100% of the examples
+- Sample and inspect manually
+
+### Phase 2: Training (AWS)
+
+#### 2.1 Train EXP-A (JSON)
+```bash
+python scripts/train.py \
+    --use_local_csvs \
+    --train_file ./data/exp_a_json/train.csv \
+    --output_dir ./output/exp_a_json \
+    --num_train_epochs 3
+```
+
+#### 2.2 Train EXP-B (EOS)
+```bash
+python scripts/train.py \
+    --use_local_csvs \
+    --train_file ./data/exp_b_eos/train.csv \
+    --output_dir ./output/exp_b_eos \
+    --num_train_epochs 3
+```
+
+### Phase 3: Evaluation
+
+#### 3.1 Metrics
+- **Valid Rate:** % of parseable expressions
+- **Stopping Rate:** % that stop correctly (closed JSON or EOS)
+- **Symbol Accuracy:** % that use only the prompt's symbols
+- **Garbage Rate:** % with non-mathematical tokens
+
+#### 3.2 Comparison
+| Metric | EXP-A (JSON) | EXP-B (EOS) |
+|---------|--------------|-------------|
+| Valid Rate | ? | ? |
+| Stopping Rate | ? | ? |
+| Symbol Accuracy | ? | ? |
+| Garbage Rate | ? 
|
+
+### Phase 4: Decision
+
+- If EXP-A is better → use the JSON format
+- If EXP-B is better → use the EOS token
+- If both are poor → investigate other options
+
+---
+
+## Estimates
+
+| Phase | Time | AWS Cost |
+|------|-------|-----------|
+| Data preparation | 30 min | $0 |
+| Train EXP-A | 2-3h | ~$3 |
+| Train EXP-B | 2-3h | ~$3 |
+| Evaluation | 30 min | ~$0.50 |
+| **Total** | **6-7h** | **~$6.50** |
+
+---
+
+## Files to Create
+
+```
+scripts/data/prepare_experiment_data.py  # Preparation
+data/exp_a_json/train.csv                # JSON data
+data/exp_a_json/validation.csv
+data/exp_b_eos/train.csv                 # EOS data
+data/exp_b_eos/validation.csv
+scripts/evaluate_experiments.py          # Evaluation
+```
+
+---
+
+## Final Success Criteria
+
+**The experiment is successful if:**
+- Valid Rate > 80%
+- Stopping Rate > 90%
+- Garbage Rate < 5%
+
+**Next step after success:**
+- Use the winning format to train the final model
+- Proceed to Block 3 (PPO)
diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..74827a5b72709289449ab9e0fd446640a6a6d45e
--- /dev/null
+++ b/README.md
@@ -0,0 +1,106 @@
+# Your Fine-Tuning Project Name
+
+(Brief description of the project's goal)
+
+## Folder Structure
+
+Here is how the folders are organized and what each one is for:
+
+```
+seu_projeto_finetuning/
+│
+├── data/                    # All data related to the project
+│   ├── raw/                 # Original, unprocessed data
+│   └── processed/           # Cleaned, formatted, split data (train/val/test)
+│
+├── scripts/                 # Main Python scripts
+│   ├── preprocess_data.py   # (Optional) Script to clean and format data
+│   ├── train.py             # Main script that runs the HF Trainer
+│   ├── evaluate.py          # (Optional) Script for custom evaluation
+│   └── generate.py          # (Optional) Script to generate text with the trained model
+│
+├── configs/                 # Configuration files (JSON, YAML, etc.)
+│   ├── training_args.json   # Training arguments (passed to TrainingArguments)
+│   ├── peft_config.json     # (If using PEFT) LoRA, Adapter, etc. configuration
+│   └── model_config.json    # (Optional) Base model name, paths, etc.
+│
+├── output/                  # All generated outputs (models, logs, results)
+│   └── {experiment_name}/   # Subfolder for each run/experiment
+│       ├── checkpoints/     # Checkpoints saved by the Trainer
+│       ├── final_model/     # Final trained model
+│       ├── logs/            # TensorBoard or other logs
+│       └── ...              # Other results (metrics, samples)
+│
+├── notebooks/               # (Optional) Jupyter notebooks for exploration and testing
+│
+├── .gitignore               # Files/folders for Git to ignore
+├── requirements.txt         # Python dependencies of the project
+└── README.md                # Project documentation (this file)
+```
+
+* **`data/`**: Contains all the data.
+    * `raw/`: Stores the original data, unmodified.
+    * `processed/`: Holds the data after cleaning, formatting, and splitting (train, validation, test), ready to be used by the training script.
+* **`scripts/`**: Where the Python code lives.
+    * `train.py`: The heart of the project, responsible for loading the data, model, and configurations and running fine-tuning with the `Trainer`.
+    * Helper scripts for preprocessing, evaluation, or generation can also go here.
+* **`configs/`**: Centralizes the project's configuration, such as training hyperparameters (`training_args.json`), PEFT settings (`peft_config.json`), or base-model details. This makes it easy to change parameters without touching the main code.
+* **`output/`**: Directory for all artifacts produced during training. It is **highly recommended** to create a subfolder for each experiment (named or timestamped) to keep results organized (checkpoints, final model, logs, metrics). The `output_dir` in `TrainingArguments` should point to that experiment-specific subfolder. 
+* **`notebooks/`**: Space for prototyping, exploratory data analysis, and quick tests using Jupyter Notebooks.
+* **`.gitignore`**: Tells Git to ignore unnecessary files and folders (virtual environments, caches, large outputs, large raw data, etc.).
+* **`requirements.txt`**: Lists the Python libraries the project needs, making it easy to recreate the environment (`pip install -r requirements.txt`).
+* **`README.md`**: Essential documentation explaining the project and how to set it up and run it.
+
+## How to Use
+
+1. **Setup:** Create a virtual environment and install the dependencies:
+   ```bash
+   python -m venv venv
+   source venv/bin/activate  # Linux/macOS
+   # venv\Scripts\activate   # Windows
+   pip install -r requirements.txt
+   ```
+2. **Data:** Place your raw data in `data/raw/` and run (or create) the `scripts/preprocess_data.py` script to generate the files in `data/processed/`.
+3. **Configuration:** Adjust the files in `configs/` (training arguments, base model, PEFT if applicable).
+4. **Training:** Run the main script:
+   ```bash
+   python scripts/train.py --args_config configs/training_args.json --model_config configs/model_config.json
+   ```
+   *(Adapt the arguments as needed)*
+
+## Dependencies
+
+The Python dependencies are listed in `requirements.txt`.
+
+---
+
+### 🚀 Environment Setup (with GPU and W&B support)
+
+Follow the steps below to set up the development environment with `venv`, `pip`, GPU support (CUDA 11.8), and experiment tracking with Weights & Biases (W&B):
+
+```bash
+# 1. Create the virtual environment
+python -m venv .seriguela
+
+# 2. 
Ative o ambiente virtual +# No Linux/macOS: +source .seriguela/bin/activate +# No Windows: +.seriguela\Scripts\activate + +# 3. Instale as dependências principais +pip install -r requirements.txt + +# 4. Instale PyTorch com suporte a CUDA 11.8 (para uso com GPU) +pip install torch==2.2.1+cu118 torchvision==0.17.1+cu118 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu118 + +# 5. (Opcional) Faça login no Weights & Biases para monitorar seus experimentos +wandb login +``` + +> ⚠️ Certifique-se de que sua GPU e drivers estão atualizados e compatíveis com CUDA 11.8. +> 💡 Para ambientes 100% reprodutíveis, use sempre o mesmo `requirements.txt` e registre os experimentos com `wandb`. +* \ No newline at end of file diff --git a/classes/__init__.py b/classes/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/classes/dataset.py b/classes/dataset.py new file mode 100644 index 0000000000000000000000000000000000000000..06860663a2e83c8f585ad8bf5d817c7e1dca83cf --- /dev/null +++ b/classes/dataset.py @@ -0,0 +1,48 @@ +import pandas as pd +import torch + +class RegressionDataset: + def __init__(self, path: str, file_name: str = 'train.csv', delimiter: str = ',', header: int = 0, + encoding: str = 'utf-8', target_col: str = None): + """ + Initializes the RegressionDataset by loading data from a CSV file. + + Args: + path (str): Path to the directory containing the CSV file. + file_name (str): Name of the CSV file. Defaults to 'train.csv'. + delimiter (str): Delimiter used in the CSV file. Defaults to ','. + header (int): Row number to use as the column names. Defaults to 0. + encoding (str): Encoding of the CSV file. Defaults to 'utf-8'. + target_col (str): Name of the target column. If None, the last column is used. 
+ """ + self.data = pd.read_csv(f"{path}/{file_name}", delimiter=delimiter, header=header, encoding=encoding) + + if self.data.empty: + raise ValueError("CSV file is empty.") + + if target_col is None: + target_col = self.data.columns[-1] + + if target_col not in self.data.columns: + raise ValueError(f"CSV must contain a column named '{target_col}'.") + + self.X = self.data.drop(columns=[target_col]).apply(pd.to_numeric, errors='coerce').values + + self.y = pd.to_numeric(self.data[target_col], errors='coerce').values + + def get_data(self): + """ + Returns the data as PyTorch tensors (X, y). + """ + X_tensor = torch.tensor(self.X, dtype=torch.float32) + y_tensor = torch.tensor(self.y, dtype=torch.float32) + return X_tensor, y_tensor + + def get_numpy(self): + """ + Returns the data as NumPy arrays (useful for sympy and R² calculations). + """ + return self.X, self.y + + + diff --git a/classes/expression.py b/classes/expression.py new file mode 100644 index 0000000000000000000000000000000000000000..428b56f975e427e0e6368e592d1ee3a0932ad81c --- /dev/null +++ b/classes/expression.py @@ -0,0 +1,403 @@ +import sympy +import numpy as np +from sklearn.metrics import r2_score, mean_squared_error +from sklearn.metrics import mean_absolute_error +from scipy.optimize import minimize +import math +import re + + +class Expression: + SAFE_FUNCTIONS = { + 'sqrt': np.sqrt, + 'log': np.log, + 'exp': np.exp, + 'sin': np.sin, + 'cos': np.cos, + 'tan': np.tan, + 'asin': np.arcsin, # Corrected to np.arcsin + 'abs': np.abs, + 'pow': np.power, # Use np.power for vectorization and NaN handling + # '**' is handled by Python's eval; if operands are numpy arrays, np.power is used. 
+ } + + OPERATOR_ARITY = { + '+': 2, + '-': 2, + '*': 2, + '/': 2, + '**': 2, # Changed from '^' to '**' + 'sin': 1, + 'cos': 1, + 'tan': 1, + 'log': 1, + 'sqrt': 1, + 'exp': 1 + } + + OPERATOR_FUNCS = { + '+': sympy.Add, + '-': lambda x, y: x - y, + '*': sympy.Mul, + '/': lambda x, y: x / y, + '**': sympy.Pow, # Changed from '^' to '**', sympy.Pow handles both + 'sin': sympy.sin, + 'cos': sympy.cos, + 'tan': sympy.tan, + 'log': sympy.log, + 'sqrt': sympy.sqrt, + 'exp': sympy.exp + } + + def parse_prefix(self, tokens): + """Parse prefix notation expression to SymPy. + + Example: ['*', 'x_1', '+', 'x_2', 'C'] -> x_1*(x_2 + C) + """ + if not tokens: + raise ValueError("Empty token list") + + # Define unary and binary operators + UNARY_OPS = {'sin', 'cos', 'tan', 'exp', 'log', 'sqrt', 'abs', 'asin'} + BINARY_OPS = {'+', '-', '*', '/', '**', '^'} + + stack = [] + + # Process tokens in reverse order + for token in reversed(tokens): + if token in BINARY_OPS or token in UNARY_OPS: + # Operator: pop operands from stack + if token in UNARY_OPS: + if len(stack) < 1: + raise ValueError(f"Not enough operands for {token}") + arg = stack.pop() + if token in ['sin', 'cos', 'tan', 'exp', 'log', 'sqrt', 'abs', 'asin']: + stack.append(f"{token}({arg})") + else: + raise ValueError(f"Unknown unary operator: {token}") + else: # Binary operator + if len(stack) < 2: + raise ValueError(f"Not enough operands for {token}") + right = stack.pop() + left = stack.pop() + + # Handle operator mapping + op_map = {'+': '+', '-': '-', '*': '*', '/': '/', '**': '**', '^': '**'} + op = op_map.get(token, token) + + if op in ['**', '^']: + stack.append(f"({left})**({right})") + elif op == '/': + stack.append(f"({left})/({right})") + else: + stack.append(f"({left}){op}({right})") + else: + # Operand: push to stack + stack.append(token) + + if len(stack) != 1: + raise ValueError(f"Invalid prefix expression, {len(stack)} elements remaining") + + return sympy.sympify(stack[0], evaluate=False) + + def 
__init__(self, expression, is_prefix=False): + try: + self.original_expression = expression # Save original + + if is_prefix: + # Ensure input prefix uses '**' if converting from external source + tokens = expression.replace('^', '**').split() + self.sympy_expression = self.parse_prefix(tokens) + else: + # Load the expression as a sympy expression without simplification + self.sympy_expression = sympy.sympify(expression, evaluate=False) + except Exception as e: + raise ValueError(f"Failed to parse expression: {e}") + + self.max_var = 0 + for symbol in self.sympy_expression.free_symbols: + if symbol.name.startswith('x_'): + try: + index = int(symbol.name.split('_')[1]) + self.max_var = max(self.max_var, index) + except ValueError: + # Handle symbols that look like x_ but aren't x_number + pass # Or raise ValueError(f"Invalid variable name: {symbol.name}") if strict + + computable_expression = str(self.sympy_expression) + + for i in range(1, self.max_var + 1): + # Use regex to match whole words to avoid issues with x_1 followed by x_11 + computable_expression = re.sub(rf'\bx_{i}\b', f'x[{i-1}]', computable_expression) + + + self.computable_expression = computable_expression.replace('**C', '**2') + + self.constant_count = self.computable_expression.count('C') + self.best_constants = [1.0] * self.constant_count + + + if self.constant_count > 0: + # Replace 'C' with indexable constants + split_expr = self.computable_expression.split('C') + new_expr = split_expr[0] # Start with first part + + for i in range(1, len(split_expr)): + # Add constant reference + new_expr += f'constants[{i-1}]' + # Add next part + new_expr += split_expr[i] + + self.computable_expression = new_expr + + + + + + def __str__(self): + return f"Expression: {self.original_expression}, Best constants: {self.best_constants}" + def sympy_str(self): + """ + Returns the string representation of the sympy expression. 
+ """ + return str(self.sympy_expression) + + def is_valid_on_dataset(self, X, test_constants_list=None): + """ + Checks if the expression evaluates to valid (finite) values for all rows in X, + across one or more sets of test constants. + + Args: + X (np.ndarray): Input data, shape (n_samples, n_features) + test_constants_list (list of lists): Optional. Defaults to [[1.0]*count]. + Example: [[1.0]*n, [0.5]*n, [2.0]*n] to test more thoroughly. + + Returns: + bool: True if no evaluation returns nan/inf or crashes. False otherwise. + """ + if test_constants_list is None: + test_constants_list = [[1.0] * self.constant_count] + + try: + for constants in test_constants_list: + results = self.evaluate(X, constants) + + if not np.all(np.isfinite(results)): + return False + + return True + except Exception: + return False + + # Inside the Expression class + def evaluate(self, X, constants=None): + # with warnings.catch_warnings(): + # warnings.simplefilter("ignore", category=RuntimeWarning) # Hide power/tan warnings + # np.seterr(invalid='ignore', divide='ignore') + + + + if constants is None: + # print("No constants provided, using best constants.") # Optional: uncomment for debugging + constants = self.best_constants + + try: + local_env = { + "constants": np.array(constants), # Ensure constants is a numpy array for broadcasting + **self.SAFE_FUNCTIONS, + "__builtins__": None + } + + if not isinstance(X, np.ndarray): + X = np.array(X) # Ensure X is a numpy array + + # Ensure X is 2D, even if it has only one sample + if X.ndim == 1: + X = X.reshape(1, -1) + + # x becomes a list of columns (1D arrays of shape (n_samples,)) + x_cols = [X[:, i] for i in range(X.shape[1])] + local_env["x"] = x_cols + + # The result will be a numpy array of shape (n_samples,) + + try: + y_pred_array = eval(self.computable_expression, local_env) + + except FloatingPointError as e: + # print(f"FloatingPointError during eval: {e}") + # print(f"Expression: {self.computable_expression}") + # 
print(f"Constants: {constants}")
+                return np.full(X.shape[0], np.nan)  # Return NaNs to be caught by loss
+
+            except Exception as e:
+                # print(f"General exception during eval: {e}")
+                return np.full(X.shape[0], np.nan)
+
+            finally:
+                np.seterr(all='warn')  # 🔁 Reset to default behavior
+
+            # Ensure output is float to avoid issues with mixed types if some results are int
+            return np.asarray(y_pred_array, dtype=float)
+
+        except Exception as e:
+            # Return an array of NaNs of the expected shape to ensure loss calculation doesn't break
+            num_samples = X.shape[0] if X.ndim > 0 else 1
+            return np.full(num_samples, np.nan)  # Return NaNs on error
+
+    def fit_constants(self, X, y):
+        X = np.array(X)
+        y = np.array(y)
+
+        if self.constant_count == 0:
+            try:
+                y_pred = self.evaluate(X)  # Vectorized call
+                if not np.all(np.isfinite(y_pred)):  # Check for NaNs/Infs
+                    return -np.inf
+                if np.all(y_pred == y_pred[0]) and len(np.unique(y)) > 1:  # Avoid R2 issues with constant prediction for non-constant y
+                    return 0.0  # Or handle as per specific requirements
+                return r2_score(y, y_pred)
+            except Exception as e:  # Broader catch for any eval issue
+                return -np.inf
+
+        def loss(current_constants):
+
+            try:
+                y_pred = self.evaluate(X, current_constants)
+
+            except Exception as e:
+                print(f"Exception during evaluation: {e}")
+                return np.inf
+
+            if not np.all(np.isfinite(y_pred)):
+                return np.inf
+
+            # MSE calculation
+            mse = np.mean((y - y_pred) ** 2)
+
+            return mse
+
+        bounds = [(-2., 2.)] * self.constant_count
+
+        initial_guess = (
+            self.best_constants
+            if self.best_constants and len(self.best_constants) == self.constant_count
+            else [1.0] * self.constant_count  # Default to 1.0, matching best_constants initialization
+        )
+
+        # Ensure initial_guess is a flat numpy array
+        initial_guess = np.array(initial_guess, dtype=float).flatten()
+
+
+        # from scipy.optimize import differential_evolution
+        # # Step 1: Use Differential Evolution for global exploration
+        # print("\n--- Starting Differential Evolution ---")
+        # result_de = 
differential_evolution(loss, bounds, + # popsize=70, # Aumente para 50, 70, ou mais + # maxiter=10000, # Aumente para 5000, 10000, ou mais + # strategy='rand1bin', # Tente 'rand1exp' se rand1bin não funcionar + # tol=1e-7, # Tolerância mais apertada + # mutation=(0.8, 1.2), # Experimente valores mais altos + # recombination=0.5, # Experimente valores mais baixos + # seed=42, # Mantém a reproducibilidade + # disp=True, # Exibe o progresso + # polish=False) + + # if result_de.success: + # print(f"\nDifferential Evolution finished successfully. Best raw constants: {result_de.x}, Best MSE: {result_de.fun}") + # # Use the result from DE as initial guess for local optimizer + # initial_guess_for_minimize = result_de.x + + # # Step 2: (Optional but recommended) Refine with L-BFGS-B + # # L-BFGS-B will be applied to the "raw" (non-rounded) values, + # # but the loss function internally rounds for discrete ones. + # # It might still struggle if the function is too "stepped" from rounding. + # print("\n--- Starting L-BFGS-B refinement ---") + # result_min = minimize(loss, + # x0=initial_guess_for_minimize, + # method='L-BFGS-B', + # bounds=bounds, + # options={'maxiter': 500, 'ftol': 1e-9, 'disp': True} # More iterations, tighter tolerance + # ) + + # if result_min.success: + # print(f"\nL-BFGS-B refinement successful. Final raw constants: {result_min.x}, Final MSE: {result_min.fun}") + # self.best_constants = list(result_min.x) + # else: + # print(f"\nL-BFGS-B refinement failed: {result_min.message}. Using Differential Evolution's result.") + # self.best_constants = list(result_de.x) + # else: + # print(f"\nDifferential Evolution did not converge successfully: {result_de.message}. 
Cannot proceed with optimization.") + # return -np.inf # Indicate failure + + # try: + # y_pred = self.evaluate(X) + # if not np.all(np.isfinite(y_pred)): + # print("Final evaluation produced non-finite values for R2 score.") + # return -np.inf + # if len(np.unique(y)) == 1: + # if np.allclose(y_pred, y[0]): + # return 1.0 + # else: + # return 0.0 + # return r2_score(y, y_pred) + # except Exception as e: + # print(f"Error calculating final R2: {e}") + # return -np.inf + + result = minimize(loss, + x0=initial_guess, + method='L-BFGS-B', + bounds=bounds, + #options={'maxiter': 10, 'maxfun': 10, 'disp': True} + ) + + if result.success: + self.best_constants = result.x.tolist() + # print(f"Optimization successful. Final loss: {result.fun}") # Optional + try: + y_pred = self.evaluate(X) # Uses self.best_constants (vectorized) + if not np.all(np.isfinite(y_pred)): + return -np.inf + # Refined R2 calculation for edge cases + if len(np.unique(y)) == 1: # If y is constant + if np.allclose(y_pred, y[0]): + return 1.0 # Perfect prediction of a constant + else: + return 0.0 # Or some other metric for imperfect constant prediction + #return mean_squared_error(y, y_pred) # Use MSE for optimization + #return mean_absolute_error(y, y_pred) # Use MAE for robustness + return r2_score(y, y_pred) + except Exception as e: + return -np.inf + else: + return -np.inf + +# from dataset import RegressionDataset + +# import numpy as np +# import warnings + +# with warnings.catch_warnings(): +# warnings.simplefilter("ignore", category=RuntimeWarning) +# np.seterr(invalid='ignore') + +# #reg = RegressionDataset('../data/evaluate/srsd-feynman_hard/train', 'feynman-bonus.12.txt', delimiter=' ') +# reg = RegressionDataset('./data/evaluate/srsd-feynman_easy/train', 'feynman-i.18.16.txt', delimiter=' ') +# X, y = reg.get_numpy() + +# #x = np.array(X).T +# expression = "x_1*x_2*sin(x_4)" +# #expr = "0.5*x[0]*x[1]**2" + + +# expr = Expression(expression) +# print("Expression:", expr) + +# if 
expr.is_valid_on_dataset(X): +# print("Expression is valid on dataset.") +# score = expr.fit_constants(X, y) +# print("Fitted constants:", expr.best_constants) +# print("R2 score:", score) +# else: +# print("Expression is not valid on dataset.") \ No newline at end of file diff --git a/configs/eval_dataset_download.sh b/configs/eval_dataset_download.sh new file mode 100644 index 0000000000000000000000000000000000000000..be186a25fc75ca9bc9dbe6e8a8def2b90204ca88 --- /dev/null +++ b/configs/eval_dataset_download.sh @@ -0,0 +1,6 @@ +git clone https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy_dummy +git clone https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium_dummy +git clone https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard_dummy +git clone https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy +git clone https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium +git clone https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard diff --git a/configs/model_config.json b/configs/model_config.json new file mode 100644 index 0000000000000000000000000000000000000000..0967ef424bce6791893e9a57bb952f80fd536e93 --- /dev/null +++ b/configs/model_config.json @@ -0,0 +1 @@ +{} diff --git a/configs/peft_config.json b/configs/peft_config.json new file mode 100644 index 0000000000000000000000000000000000000000..0967ef424bce6791893e9a57bb952f80fd536e93 --- /dev/null +++ b/configs/peft_config.json @@ -0,0 +1 @@ +{} diff --git a/configs/training.sh b/configs/training.sh new file mode 100644 index 0000000000000000000000000000000000000000..a72ce1b389a1d54d33f711c23e9d5c4d0812ab1d --- /dev/null +++ b/configs/training.sh @@ -0,0 +1,82 @@ +CUDA_VISIBLE_DEVICES=0 python /home/augusto/symbo_repos/seringuela/scripts/train_test.py \ + --dataset_repo_id augustocsc/sintetico_natural \ + --data_dir 500k \ + --output_dir ./output \ + --push_to_hub \ + --hub_model_id 
augustocsc/Se124M500KInfPrompt_EOS \ + --source_data_column i_prompt \ + --report_to wandb \ + --run_name Se124M500KInfPrompt_EOS \ + --model_name_or_path gpt2 \ + --bf16 \ + --eval_strategy steps \ + --num_train_epochs 3 \ + --per_device_train_batch_size 16 \ + --per_device_eval_batch_size 16 \ + --gradient_accumulation_steps 4 \ + --dataloader_num_workers 8 \ + --learning_rate 5e-5 \ + --warmup_ratio 0.03 \ + --weight_decay 0.01 \ + --max_grad_norm 1.0 \ + --lr_scheduler_type cosine \ + --optim adamw_torch_fused \ + --logging_steps 20 \ + --eval_steps 500 \ + --save_steps 1000 \ + --save_total_limit 3 \ + + +# CUDA_VISIBLE_DEVICES=1 python /home/augusto/symbo_repos/seringuela/scripts/train_test.py \ +# --dataset_repo_id augustocsc/sintetico_final \ +# --data_dir 100k \ +# --output_dir ./output \ +# --push_to_hub \ +# --hub_model_id augustocsc/Se124M100KInfPrompt_NT \ +# --source_data_column i_prompt \ +# --report_to wandb \ +# --run_name Se124M100KInfPrompt_NT \ +# --bf16 \ +# --eval_strategy steps \ +# --num_train_epochs 3 \ +# --per_device_train_batch_size 16 \ +# --per_device_eval_batch_size 16 \ +# --gradient_accumulation_steps 2 \ +# --dataloader_num_workers 8 \ +# --learning_rate 2e-5 \ +# --warmup_ratio 0.03 \ +# --weight_decay 0.01 \ +# --max_grad_norm 1.0 \ +# --lr_scheduler_type cosine \ +# --optim adamw_torch_fused \ +# --logging_steps 20 \ +# --eval_steps 500 \ +# --save_steps 1000 \ +# --save_total_limit 3 + +# CUDA_VISIBLE_DEVICES=0 python /home/augusto/symbo_repos/seringuela/scripts/train_test.py \ +# --dataset_repo_id augustocsc/sintetico_final \ +# --data_dir 100k \ +# --output_dir ./output \ +# --push_to_hub \ +# --hub_model_id augustocsc/Se124M100KInfPrompt_WT \ +# --source_data_column i_prompt \ +# --report_to wandb \ +# --run_name Se124M100KInfPrompt_WT \ +# --bf16 \ +# --eval_strategy steps \ +# --num_train_epochs 3 \ +# --per_device_train_batch_size 16 \ +# --per_device_eval_batch_size 16 \ +# --gradient_accumulation_steps 2 \ +# 
--dataloader_num_workers 8 \ +# --learning_rate 2e-5 \ +# --warmup_ratio 0.03 \ +# --weight_decay 0.01 \ +# --max_grad_norm 1.0 \ +# --lr_scheduler_type cosine \ +# --optim adamw_torch_fused \ +# --logging_steps 20 \ +# --eval_steps 500 \ +# --save_steps 1000 \ +# --save_total_limit 3 diff --git a/configs/training_args.json b/configs/training_args.json new file mode 100644 index 0000000000000000000000000000000000000000..e38b5a002d017bbbfee9f6a3c6c4c5239c8e3f4a --- /dev/null +++ b/configs/training_args.json @@ -0,0 +1,29 @@ +{ + "output_dir": "./output", + "overwrite_output_dir": true, + "num_train_epochs": 50, + "per_device_train_batch_size": 8, + "gradient_accumulation_steps": 1, + "learning_rate": 5e-5, + "weight_decay": 0.01, + "warmup_steps": 0, + "fp16": true, + "seed": 42, + "per_device_eval_batch_size": 8, + "eval_strategy": "epoch", + "metric_for_best_model": "eval_loss", + "greater_is_better": false, + "eval_steps": null, + "load_best_model_at_end": true, + "save_strategy": "epoch", + "save_steps": null, + "save_total_limit": 2, + "logging_dir": "./output/logs", + "logging_steps": 100, + "report_to": "wandb", + "run_name": "Se124M100K", + "push_to_hub": true, + "hub_model_id": "augustocsc/Se124M100K", + "hub_token": null + +} \ No newline at end of file diff --git a/configs/training_large.json b/configs/training_large.json new file mode 100644 index 0000000000000000000000000000000000000000..b6ce7c848733bb2e5cd996723fd05ae1221d97b7 --- /dev/null +++ b/configs/training_large.json @@ -0,0 +1,65 @@ +{ + "model_config": { + "model_name_or_path": "gpt2-large", + "model_size": "774M", + "description": "GPT-2 Large - 774M parameters" + }, + "training_args": { + "num_train_epochs": 2, + "per_device_train_batch_size": 4, + "per_device_eval_batch_size": 4, + "gradient_accumulation_steps": 16, + "effective_batch_size": 64, + "learning_rate": 2e-5, + "weight_decay": 0.01, + "warmup_steps": 100, + "max_grad_norm": 1.0, + "lr_scheduler_type": "cosine", + "fp16": true, + 
"seed": 42, + "block_size": 128 + }, + "evaluation_args": { + "eval_strategy": "epoch", + "eval_steps": null, + "metric_for_best_model": "eval_loss", + "greater_is_better": false, + "load_best_model_at_end": true + }, + "save_args": { + "save_strategy": "epoch", + "save_steps": null, + "save_total_limit": 2 + }, + "logging_args": { + "logging_dir": "./output/logs", + "logging_steps": 50, + "report_to": "wandb" + }, + "lora_config": { + "r": 8, + "lora_alpha": 32, + "target_modules": ["c_attn", "c_proj"], + "lora_dropout": 0.05, + "bias": "none", + "task_type": "CAUSAL_LM" + }, + "dataset_config": { + "dataset_repo_id": "augustocsc/sintetico_natural", + "data_dir": "700K", + "data_columns": { + "infix": "i_prompt_n", + "prefix": "p_prompt_n" + } + }, + "hub_config": { + "push_to_hub": true, + "hub_model_id_template": "augustocsc/Se774M_700K_{format}", + "formats": ["infix", "prefix"] + }, + "estimated_time": { + "per_epoch_minutes": 180, + "total_hours": 6, + "notes": "Estimated for AWS g5.xlarge with A10G GPU. May need gradient checkpointing for memory optimization." 
+ } +} diff --git a/configs/training_medium.json b/configs/training_medium.json new file mode 100644 index 0000000000000000000000000000000000000000..fcc98258c1a06b4c9c40ddad4fbc044bdc2d1e38 --- /dev/null +++ b/configs/training_medium.json @@ -0,0 +1,65 @@ +{ + "model_config": { + "model_name_or_path": "gpt2-medium", + "model_size": "355M", + "description": "GPT-2 Medium - 355M parameters" + }, + "training_args": { + "num_train_epochs": 2, + "per_device_train_batch_size": 8, + "per_device_eval_batch_size": 8, + "gradient_accumulation_steps": 8, + "effective_batch_size": 64, + "learning_rate": 3e-5, + "weight_decay": 0.01, + "warmup_steps": 100, + "max_grad_norm": 1.0, + "lr_scheduler_type": "cosine", + "fp16": true, + "seed": 42, + "block_size": 128 + }, + "evaluation_args": { + "eval_strategy": "epoch", + "eval_steps": null, + "metric_for_best_model": "eval_loss", + "greater_is_better": false, + "load_best_model_at_end": true + }, + "save_args": { + "save_strategy": "epoch", + "save_steps": null, + "save_total_limit": 2 + }, + "logging_args": { + "logging_dir": "./output/logs", + "logging_steps": 50, + "report_to": "wandb" + }, + "lora_config": { + "r": 8, + "lora_alpha": 32, + "target_modules": ["c_attn", "c_proj"], + "lora_dropout": 0.05, + "bias": "none", + "task_type": "CAUSAL_LM" + }, + "dataset_config": { + "dataset_repo_id": "augustocsc/sintetico_natural", + "data_dir": "700K", + "data_columns": { + "infix": "i_prompt_n", + "prefix": "p_prompt_n" + } + }, + "hub_config": { + "push_to_hub": true, + "hub_model_id_template": "augustocsc/Se355M_700K_{format}", + "formats": ["infix", "prefix"] + }, + "estimated_time": { + "per_epoch_minutes": 90, + "total_hours": 3, + "notes": "Estimated for AWS g5.xlarge with A10G GPU" + } +} diff --git a/configs/training_small.json b/configs/training_small.json new file mode 100644 index 0000000000000000000000000000000000000000..c78ff08c84ee7bf908c8ddc0c7881fd40cd51935 --- /dev/null +++ b/configs/training_small.json @@ -0,0 
+1,65 @@ +{ + "model_config": { + "model_name_or_path": "gpt2", + "model_size": "124M", + "description": "GPT-2 Small - 124M parameters" + }, + "training_args": { + "num_train_epochs": 3, + "per_device_train_batch_size": 16, + "per_device_eval_batch_size": 16, + "gradient_accumulation_steps": 4, + "effective_batch_size": 64, + "learning_rate": 5e-5, + "weight_decay": 0.01, + "warmup_steps": 100, + "max_grad_norm": 1.0, + "lr_scheduler_type": "cosine", + "fp16": true, + "seed": 42, + "block_size": 128 + }, + "evaluation_args": { + "eval_strategy": "epoch", + "eval_steps": null, + "metric_for_best_model": "eval_loss", + "greater_is_better": false, + "load_best_model_at_end": true + }, + "save_args": { + "save_strategy": "epoch", + "save_steps": null, + "save_total_limit": 2 + }, + "logging_args": { + "logging_dir": "./output/logs", + "logging_steps": 50, + "report_to": "wandb" + }, + "lora_config": { + "r": 8, + "lora_alpha": 32, + "target_modules": ["c_attn", "c_proj"], + "lora_dropout": 0.05, + "bias": "none", + "task_type": "CAUSAL_LM" + }, + "dataset_config": { + "dataset_repo_id": "augustocsc/sintetico_natural", + "data_dir": "700K", + "data_columns": { + "infix": "i_prompt_n", + "prefix": "p_prompt_n" + } + }, + "hub_config": { + "push_to_hub": true, + "hub_model_id_template": "augustocsc/Se124M_700K_{format}", + "formats": ["infix", "prefix"] + }, + "estimated_time": { + "per_epoch_minutes": 40, + "total_hours": 2, + "notes": "Estimated for AWS g5.xlarge with A10G GPU" + } +} diff --git a/configs/training_v3.json b/configs/training_v3.json new file mode 100644 index 0000000000000000000000000000000000000000..99b5e0e9493adb1c959dd35d02fe020a7b443d56 --- /dev/null +++ b/configs/training_v3.json @@ -0,0 +1,78 @@ +{ + "model_config": { + "model_name_or_path": "gpt2", + "model_size": "124M", + "description": "GPT-2 Small (124M) - v3 with proper end markers" + }, + "training_args": { + "num_train_epochs": 3, + "per_device_train_batch_size": 8, + 
"per_device_eval_batch_size": 8, + "gradient_accumulation_steps": 4, + "effective_batch_size": 32, + "learning_rate": 5e-5, + "weight_decay": 0.01, + "warmup_steps": 100, + "max_grad_norm": 1.0, + "lr_scheduler_type": "cosine", + "fp16": true, + "seed": 42, + "block_size": 128 + }, + "evaluation_args": { + "eval_strategy": "epoch", + "eval_steps": null, + "metric_for_best_model": "eval_loss", + "greater_is_better": false, + "load_best_model_at_end": true + }, + "save_args": { + "save_strategy": "epoch", + "save_steps": null, + "save_total_limit": 2 + }, + "logging_args": { + "logging_dir": "./output/logs", + "logging_steps": 50, + "report_to": "wandb" + }, + "lora_config": { + "r": 8, + "lora_alpha": 32, + "target_modules": ["c_attn"], + "lora_dropout": 0.05, + "bias": "none", + "task_type": "CAUSAL_LM" + }, + "dataset_config": { + "use_local_csvs": true, + "train_file": "./data/processed/700K_fixed/train_700K.csv", + "validation_file": "./data/processed/700K_fixed/validation_700K.csv", + "test_file": "./data/processed/700K_fixed/test_700K.csv", + "data_column": "text" + }, + "hub_config": { + "push_to_hub": true, + "hub_model_id": "augustocsc/Se124M_700K_infix_v3" + }, + "special_tokens": { + "start_token": "<|startofex|>", + "end_token": "<|endofex|>", + "notes": "End token configured as EOS token for proper stopping" + }, + "estimated_time": { + "per_epoch_minutes": 45, + "total_hours": 2.25, + "notes": "Estimated for AWS g5.xlarge with A10G GPU, GPT-2 Small, 3 epochs" + }, + "version_info": { + "model_version": "v3", + "improvements": [ + "Training data includes proper <|endofex|> markers", + "100% validation rate on prepared dataset", + "Addresses v1 non-stopping issue and v2 garbage generation", + "Uses local CSVs with validated end markers" + ], + "training_date": "2026-02-01" + } +} diff --git a/create_structure.sh b/create_structure.sh new file mode 100644 index 0000000000000000000000000000000000000000..19c2086563f05ba57d85433864c3a9b603e22f9b --- 
/dev/null +++ b/create_structure.sh @@ -0,0 +1,171 @@
+#!/bin/bash
+
+echo "Creating the folder structure for the fine-tuning project..."
+
+# Main directories
+mkdir -p data/raw
+mkdir -p data/processed
+mkdir -p scripts
+mkdir -p configs
+mkdir -p output
+mkdir -p notebooks
+
+echo "Directories created."
+
+# Placeholder and initial configuration files
+touch data/raw/.gitkeep # Keeps the folder in Git even when empty
+touch data/processed/.gitkeep # Keeps the folder in Git even when empty
+
+echo "# Script to preprocess data (raw -> processed)" > scripts/preprocess_data.py
+echo "# Main training script (uses Trainer)" > scripts/train.py
+echo "# Script for custom evaluation" > scripts/evaluate.py
+echo "# Script for text generation with the trained model" > scripts/generate.py
+
+echo "{}" > configs/training_args.json # Placeholder for Trainer arguments
+echo "{}" > configs/peft_config.json # Placeholder for PEFT config (if used)
+echo "{}" > configs/model_config.json # Placeholder for base model config
+
+touch notebooks/01_data_exploration.ipynb
+touch notebooks/.gitkeep # Keeps the folder in Git even when empty
+
+touch requirements.txt
+
+echo "Placeholder files created."
+
+# Initial content for .gitignore (heredoc is quoted so $py is not expanded)
+echo "Generating .gitignore..."
+cat << 'EOF' > .gitignore
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Environments
+.env
+.venv
+venv/
+ENV/
+env/
+env.bak/
+venv.bak/
+
+# IDEs / Editors
+.idea/
+.vscode/
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# Output folder (usually too large for Git)
+output/*
+!output/.gitkeep # Do not ignore a .gitkeep if the folder must be kept
+
+# Data (can be large; use Git LFS or store externally if needed)
+data/raw/*
+data/processed/*
+!data/raw/.gitkeep
+!data/processed/.gitkeep
+
+# OS generated files
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+Thumbs.db
+EOF
+
+# Initial content for README.md (to be filled with the text generated below)
+echo "Generating initial README.md..."
+echo "# Your Fine-Tuning Project Name" > README.md
+echo "" >> README.md
+echo "(Brief description of the project's goal)" >> README.md
+echo "" >> README.md
+echo "## Folder Structure" >> README.md
+echo "" >> README.md
+echo "**(COPY AND PASTE THE STRUCTURE EXPLANATION GENERATED IN THE NEXT SECTION HERE)**" >> README.md
+echo "" >> README.md
+echo "## How to Use" >> README.md
+echo "" >> README.md
+echo "1. **Setup:** Create a virtual environment and install the dependencies:" >> README.md
+echo " \`\`\`bash" >> README.md
+echo " python -m venv venv" >> README.md
+echo " source venv/bin/activate # Linux/macOS" >> README.md
+echo " # venv\\Scripts\\activate # Windows" >> README.md
+echo " pip install -r requirements.txt" >> README.md
+echo " \`\`\`" >> README.md
+echo "2. **Data:** Place your raw data in \`data/raw/\` and run (or create) the script \`scripts/preprocess_data.py\` to generate the files in \`data/processed/\`." >> README.md
+echo "3. **Configuration:** Adjust the files in \`configs/\` (training arguments, base model, PEFT if applicable)." >> README.md
+echo "4. **Training:** Run the main script:" >> README.md
+echo " \`\`\`bash" >> README.md
+echo " python scripts/train.py --args_config configs/training_args.json --model_config configs/model_config.json" >> README.md
+echo " \`\`\`" >> README.md
+echo " *(Adapt the arguments as needed)*" >> README.md
+echo "" >> README.md
+echo "## Dependencies" >> README.md
+echo "" >> README.md
+echo "The Python dependencies are listed in \`requirements.txt\`." >> README.md
+
+chmod +x create_structure.sh
+
+echo "--------------------------------------------------"
+echo "Structure created successfully!"
+echo "To use:"
+echo "1. Make the script executable: chmod +x create_structure.sh"
+echo "2. Run the script: ./create_structure.sh"
+echo "3. Copy the structure explanation (generated in the previous answer) into README.md where indicated."
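The bash scaffold above can be mirrored in Python for platforms without a POSIX shell. This is a hedged sketch using only `pathlib`; the `scaffold` function is hypothetical, not part of the repo, and it covers only the directories and placeholder files, not the generated .gitignore/README content:

```python
import tempfile
from pathlib import Path

def scaffold(root: Path) -> None:
    """Recreate the skeleton that create_structure.sh builds: the main
    directories plus .gitkeep placeholders so empty folders survive in Git."""
    for d in ("data/raw", "data/processed", "scripts", "configs", "output", "notebooks"):
        (root / d).mkdir(parents=True, exist_ok=True)
    for keep in ("data/raw/.gitkeep", "data/processed/.gitkeep", "notebooks/.gitkeep"):
        (root / keep).touch()
    (root / "requirements.txt").touch()

# Demo in a throwaway directory; mkdir/touch make re-runs idempotent
with tempfile.TemporaryDirectory() as tmp:
    scaffold(Path(tmp))
    print((Path(tmp) / "data/raw/.gitkeep").exists())  # -> True
```

Because `mkdir(exist_ok=True)` and `touch()` are idempotent, re-running the scaffold over an existing checkout is safe, unlike ad-hoc shell scripts that may clobber files.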
+echo "--------------------------------------------------" \ No newline at end of file diff --git a/notebooks/.gitkeep b/notebooks/.gitkeep new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/notebooks/01_data_exploration.ipynb b/notebooks/01_data_exploration.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..13085e8d0ae0c1b1cc939d278ecf2d9ed75ce308 --- /dev/null +++ b/notebooks/01_data_exploration.ipynb @@ -0,0 +1,174305 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "48ed45da3b484268883cd4770b14de77", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Processing chunks: 0%| | 0/6 [00:00 \u001b[39m\u001b[32m3\u001b[39m \u001b[38;5;28mprint\u001b[39m(\u001b[33m\"\u001b[39m\u001b[33m-\u001b[39m\u001b[33m\"\u001b[39m*\u001b[32m50\u001b[39m)\n", + "\u001b[31mKeyboardInterrupt\u001b[39m: " + ] + } + ], + "source": [ + "for row in df_augmented['i_prompt']:\n", + " print(row)\n", + " print(\"-\"*50)" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "# Split df_augmented into train, validation, and test sets\n", + "train_df, temp_df = train_test_split(df_augmented, test_size=0.3, random_state=42)\n", + "val_df, test_df = train_test_split(temp_df, test_size=0.5, random_state=42)\n", + "\n", + "file = os.path.basename(file_path) # Extract the file name from file_path\n", + "train_file_path = f'../data/processed/{file.replace(\".csv\", \"\")}/train_{file}'\n", + "val_file_path = f'../data/processed/{file.replace(\".csv\", \"\")}/val_{file}'\n", + "test_file_path = f'../data/processed/{file.replace(\".csv\", \"\")}/test_{file}'\n", + "\n", + "# Create directories if they don't exist\n", + "os.makedirs(os.path.dirname(train_file_path), exist_ok=True)\n", + 
"os.makedirs(os.path.dirname(val_file_path), exist_ok=True)\n", + "os.makedirs(os.path.dirname(test_file_path), exist_ok=True)\n", + "\n", + "# Save the train, validation, and test sets\n", + "train_df.to_csv(train_file_path, index=False)\n", + "val_df.to_csv(val_file_path, index=False)\n", + "test_df.to_csv(test_file_path, index=False)\n", + "\n", + "# Save the processed file\n", + "processed_file_path = f'../data/processed/{file.replace(\".csv\", \"\")}/{file}'\n", + "temp_df.to_csv(processed_file_path, index=False)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "91109f65adc24f0088f4759fbee0d6bc", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "train_500k.csv: 0%| | 0.00/99.4M [00:00(.*?)\", re.DOTALL)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c76ee26f", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Some weights of the model checkpoint at augustocsc/Se124M100KInfPrompt_EOS_Merged were not used when initializing GPT2LMHeadModel: ['v_head.summary.bias', 'v_head.summary.weight']\n", + "- This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", + "- This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", + "Some weights of the model checkpoint at augustocsc/Se124M100KInfPrompt_EOS_Merged were not used when initializing GPT2LMHeadModel: ['v_head.summary.bias', 'v_head.summary.weight']\n", + "- This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", + "- This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n" + ] + } + ], + "source": [ + "from transformers import AutoModelForCausalLM\n", + "from peft import PeftModel\n", + "from trl import AutoModelForCausalLMWithValueHead\n", + "\n", + "# Carrega o modelo base\n", + "base_model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n", + "\n", + "# Carrega os pesos LoRA (checkpoint treinado)\n", + "peft_model = PeftModel.from_pretrained(base_model, \"augustocsc/Se124M100KInfPrompt_EOS\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "ffc6e072", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Run 1/1: Generating 10 samples...\n" + ] + } + ], + "source": [ + "all_expressions = []\n", + "\n", + "# Generation loop\n", + "for run in range(REPEAT_TIMES):\n", + " print(f\"Run {run+1}/{REPEAT_TIMES}: Generating {GENERATE_BATCH} 
samples...\")\n", + " inputs = tokenizer([PROMPT] * GENERATE_BATCH, return_tensors=\"pt\", padding=True)\n", + " outputs = model.generate(\n", + " **inputs,\n", + " max_new_tokens=75,\n", + " do_sample=True,\n", + " top_p=0.9,\n", + " top_k=50,\n", + " temperature=0.7,\n", + " )\n" + ] + }, + { + "cell_type": "code", + "execution_count": 71, + "id": "be3b4bcb", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Generated expressions:\n", + " a_1, b_2, c_1, c_2, c_3, c_4, c_5, c_6, c_7, c_8, c_9, c_10, c_\n", + "\n", + "\n", + "A function that evaluates to a string, and returns a string.\n", + "\n", + "A string can be any character, and can be either a double, a string, a double, a singleton, a string with multiple elements, a string with\n", + "\n", + "\n", + "vars: x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_10\n", + "\n", + "op: *,\n", + " *, +, +, -, /\n", + "cons: C\n", + "\n", + "expr: *, +, +, -, /\n", + "\n", + "cons: C\n", + "\n", + "expr: *, +, +, -, /\n", + "\n", + "cons: C\n", + "\n", + " *\n", + "\n", + "cons: c\n", + "\n", + "type: Int\n", + "\n", + "value: *\n", + "\n", + "value: *\n", + "\n", + "value: *\n", + "\n", + "value: *\n", + "\n", + "value: *\n", + "\n", + "value: *\n", + "\n", + "value: *\n", + "\n", + "value:\n", + " *, **, -, /\n", + "oper: *, **, +, -, /\n", + "oper: *, **, +, -, /\n", + "\n", + "op: [\n", + "\n", + "op: [\n", + "\n", + "op: [\n", + "\n", + "op:\n", + "\n", + "\n", + "*, **, +, -, /\n", + "\n", + "vars: x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_\n", + " *, **, *, **, *, **, *, *, **, *, **, *, **, *, **, *, **, *, *, **, *, **, *, *\n", + "\n", + "oper\n", + " *, *, *, *, *, *, *\n", + "cons: C\n", + "\n", + "expr: *, *, *, *, *, *, *\n", + "\n", + "cons: C\n", + "\n", + "expr: *, *, *, *\n", + "\n", + "\n", + "vars: *, +, *, *, *, *, *, *, *, *, *, *, *, *, *, *\n", + "\n", + "oper: *, **, +, *, *,\n" + ] + } + ], + "source": [ + "# remove the prompt from the generated 
text and print the decoded text\n", + "generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)\n", + "generated_text = [text.replace(PROMPT, \"\") for text in generated_text]\n", + "all_expressions.extend(generated_text)\n", + "print(\"Generated expressions:\")\n", + "for text in generated_text:\n", + " print(text)\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "5d8e569f", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Valid Expressions:\n", + "x_6 - x_3 + C*x_6 + x_7 + C\n", + "x_2*(x_9 + x_2)**C\n", + "x_9 + x_2 + C*x_7**C\n", + "x_9**C + x_1**C + x_2\n", + "C*x_1 + x_8 + x_1 + C\n", + "x_1**C*(x_9 + x_4**C + C)\n", + "x_2*(x_9 - C)**C/x_7\n", + "x_1*(x_8 - C)/(x_1 + x_2)\n", + "x_8**C*(x_2 + x_7)\n", + "x_1**C + x_2**C + x_9\n", + "\n", + "Invalid Expressions:\n" + ] + } + ], + "source": [ + "valid_expressions = []\n", + "invalid_expressions = []\n", + "\n", + "for out in outputs:\n", + " text = tokenizer.decode(out)\n", + " expr = text.split(\"expr: \")[1].split(\"<|endoftext|>\")[0].strip() # Extract the expression between \"expr: \" and <|endoftext|>\n", + " try:\n", + " sympy_expr = sp.sympify(expr, evaluate=False) # Try to parse the expression with sympy\n", + " valid_expressions.append(expr)\n", + " except Exception as e:\n", + " invalid_expressions.append(expr)\n", + "\n", + "# Print valid expressions\n", + "print(\"Valid Expressions:\")\n", + "for expr in valid_expressions:\n", + " print(expr)\n", + "\n", + "# Print invalid expressions\n", + "print(\"\\nInvalid Expressions:\")\n", + "for expr in invalid_expressions:\n", + " print(expr)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d05f1edd", + "metadata": {}, + "outputs": [ + { + "ename": "AttributeError", + "evalue": "'AutoModelForCausalLMWithValueHead' object has no attribute 'generation_config'", + "output_type": "error", + "traceback": [ + 
"\u001b[31m---------------------------------------------------------------------------\u001b[39m", + "\u001b[31mAttributeError\u001b[39m Traceback (most recent call last)", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/peft/peft_model.py:793\u001b[39m, in \u001b[36mPeftModel.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 792\u001b[39m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[32m--> \u001b[39m\u001b[32m793\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m.\u001b[49m\u001b[34;43m__getattr__\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mname\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# defer to nn.Module's logic\u001b[39;00m\n\u001b[32m 794\u001b[39m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m:\n", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/torch/nn/modules/module.py:1928\u001b[39m, in \u001b[36mModule.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 1927\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m modules[name]\n\u001b[32m-> \u001b[39m\u001b[32m1928\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[32m 1929\u001b[39m \u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28mself\u001b[39m).\u001b[34m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m'\u001b[39m\u001b[33m object has no attribute \u001b[39m\u001b[33m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m'\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m 1930\u001b[39m )\n", + "\u001b[31mAttributeError\u001b[39m: 'PeftModelForCausalLM' object has no attribute 'generation_config'", + "\nDuring handling of the above exception, another exception 
occurred:\n", + "\u001b[31mAttributeError\u001b[39m Traceback (most recent call last)", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/peft/tuners/lora/model.py:359\u001b[39m, in \u001b[36mLoraModel.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 358\u001b[39m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[32m--> \u001b[39m\u001b[32m359\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m.\u001b[49m\u001b[34;43m__getattr__\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mname\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# defer to nn.Module's logic\u001b[39;00m\n\u001b[32m 360\u001b[39m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m:\n", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/torch/nn/modules/module.py:1928\u001b[39m, in \u001b[36mModule.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 1927\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m modules[name]\n\u001b[32m-> \u001b[39m\u001b[32m1928\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[32m 1929\u001b[39m \u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28mself\u001b[39m).\u001b[34m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m'\u001b[39m\u001b[33m object has no attribute \u001b[39m\u001b[33m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m'\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m 1930\u001b[39m )\n", + "\u001b[31mAttributeError\u001b[39m: 'LoraModel' object has no attribute 'generation_config'", + "\nDuring handling of the above exception, another exception occurred:\n", + "\u001b[31mAttributeError\u001b[39m Traceback (most recent call last)", + 
"\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[41]\u001b[39m\u001b[32m, line 2\u001b[39m\n\u001b[32m 1\u001b[39m \u001b[38;5;66;03m# Generate with beam search and early stopping\u001b[39;00m\n\u001b[32m----> \u001b[39m\u001b[32m2\u001b[39m output = \u001b[43mmodel\u001b[49m\u001b[43m.\u001b[49m\u001b[43mgenerate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[32m 3\u001b[39m \u001b[43m \u001b[49m\u001b[43minputs\u001b[49m\u001b[43m.\u001b[49m\u001b[43minput_ids\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 4\u001b[39m \u001b[43m \u001b[49m\u001b[43mattention_mask\u001b[49m\u001b[43m=\u001b[49m\u001b[43minputs\u001b[49m\u001b[43m.\u001b[49m\u001b[43mattention_mask\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 5\u001b[39m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m#max_length=100,\u001b[39;49;00m\n\u001b[32m 6\u001b[39m \u001b[43m \u001b[49m\u001b[43mnum_beams\u001b[49m\u001b[43m=\u001b[49m\u001b[32;43m5\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;66;43;03m# Enable beam search\u001b[39;49;00m\n\u001b[32m 7\u001b[39m \u001b[43m \u001b[49m\u001b[43mearly_stopping\u001b[49m\u001b[43m=\u001b[49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;66;43;03m# Stop when all beams hit EOS\u001b[39;49;00m\n\u001b[32m 8\u001b[39m \n\u001b[32m 9\u001b[39m \u001b[43m)\u001b[49m\n\u001b[32m 11\u001b[39m decoded_output = tokenizer.decode(output[\u001b[32m0\u001b[39m], skip_special_tokens=\u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[32m 12\u001b[39m \u001b[38;5;28mprint\u001b[39m(decoded_output)\n", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/peft/peft_model.py:1867\u001b[39m, in \u001b[36mPeftModelForCausalLM.generate\u001b[39m\u001b[34m(self, *args, **kwargs)\u001b[39m\n\u001b[32m 1865\u001b[39m \u001b[38;5;28mself\u001b[39m.base_model.prepare_inputs_for_generation = \u001b[38;5;28mself\u001b[39m.prepare_inputs_for_generation\n\u001b[32m 1866\u001b[39m 
\u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mhasattr\u001b[39m(\u001b[38;5;28mself\u001b[39m.base_model, \u001b[33m\"\u001b[39m\u001b[33mmodel\u001b[39m\u001b[33m\"\u001b[39m):\n\u001b[32m-> \u001b[39m\u001b[32m1867\u001b[39m \u001b[38;5;28mself\u001b[39m.base_model.model.generation_config = \u001b[38;5;28;43mself\u001b[39;49m\u001b[43m.\u001b[49m\u001b[43mgeneration_config\u001b[49m\n\u001b[32m 1868\u001b[39m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[32m 1869\u001b[39m \u001b[38;5;28mself\u001b[39m.base_model.generation_config = \u001b[38;5;28mself\u001b[39m.generation_config\n", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/peft/peft_model.py:797\u001b[39m, in \u001b[36mPeftModel.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 795\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m name == \u001b[33m\"\u001b[39m\u001b[33mbase_model\u001b[39m\u001b[33m\"\u001b[39m: \u001b[38;5;66;03m# see #1892: prevent infinite recursion if class is not initialized\u001b[39;00m\n\u001b[32m 796\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m\n\u001b[32m--> \u001b[39m\u001b[32m797\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mself\u001b[39m.base_model, name)\n", + "\u001b[36mFile \u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/peft/tuners/lora/model.py:363\u001b[39m, in \u001b[36mLoraModel.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 361\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m name == \u001b[33m\"\u001b[39m\u001b[33mmodel\u001b[39m\u001b[33m\"\u001b[39m: \u001b[38;5;66;03m# see #1892: prevent infinite recursion if class is not initialized\u001b[39;00m\n\u001b[32m 362\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m\n\u001b[32m--> \u001b[39m\u001b[32m363\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mself\u001b[39m.model, name)\n", + "\u001b[36mFile 
\u001b[39m\u001b[32m~/symbo_repos/seringuela/.seriguela/lib/python3.11/site-packages/torch/nn/modules/module.py:1928\u001b[39m, in \u001b[36mModule.__getattr__\u001b[39m\u001b[34m(self, name)\u001b[39m\n\u001b[32m 1926\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m name \u001b[38;5;129;01min\u001b[39;00m modules:\n\u001b[32m 1927\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m modules[name]\n\u001b[32m-> \u001b[39m\u001b[32m1928\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[32m 1929\u001b[39m \u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28mself\u001b[39m).\u001b[34m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m'\u001b[39m\u001b[33m object has no attribute \u001b[39m\u001b[33m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m'\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m 1930\u001b[39m )\n", + "\u001b[31mAttributeError\u001b[39m: 'AutoModelForCausalLMWithValueHead' object has no attribute 'generation_config'" + ] + } + ], + "source": [ + "# Generate with beam search and early stopping\n", + "output = model.generate(\n", + " inputs.input_ids,\n", + " attention_mask=inputs.attention_mask,\n", + " #max_length=100,\n", + " num_beams=5, # Enable beam search\n", + " early_stopping=True, # Stop when all beams hit EOS\n", + "\n", + ")\n", + "\n", + "decoded_output = tokenizer.decode(output[0], skip_special_tokens=False)\n", + "print(decoded_output)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7a9ade5c", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "# Save raw expressions\n", + "with open(OUTPUT_EXPR_FILE, 'w') as f:\n", + " json.dump(all_expressions, f, indent=2)\n", + "print(f\"Saved {len(all_expressions)} expressions to {OUTPUT_EXPR_FILE}\")\n", + "\n", + "# Analysis\n", + "analysis = {\n", + " 'total_expressions': 
len(all_expressions),\n", + "    'syntactic_semantic': {\n", + "        'valid_equations': 0,\n", + "        'parse_errors': defaultdict(int),\n", + "    },\n", + "    'diversity_redundancy': {},\n", + "    'statistical_distributions': {\n", + "        'variable_freq': Counter(),\n", + "        'operator_freq': Counter(),\n", + "        'avg_operators_per_eq': 0.0,\n", + "        'avg_variables_per_eq': 0.0,\n", + "    }\n", + "}\n", + "\n", + "# Helper to compute tree depth\n", + "def tree_depth(expr):\n", + "    if not expr.args:\n", + "        return 1\n", + "    return 1 + max(tree_depth(arg) for arg in expr.args)\n", + "\n", + "# Operators list\n", + "operators = ['+', '-', '*', '/', '^', 'log', 'exp', 'cos', 'sqrt', 'asin', 'sin', 'pow', 'tan', 'abs']\n", + "\n", + "depths = []\n", + "operator_counts = []\n", + "variable_counts = []\n", + "unique_set = set()\n", + "\n", + "for expr in all_expressions:\n", + "    # Parse with sympy\n", + "    try:\n", + "        sympy_expr = sp.sympify(expr, evaluate=False)\n", + "        analysis['syntactic_semantic']['valid_equations'] += 1\n", + "        depths.append(tree_depth(sympy_expr))\n", + "    except Exception as e:\n", + "        err_msg = str(e)\n", + "        if 'could not parse' in err_msg:\n", + "            analysis['syntactic_semantic']['parse_errors']['parse_failure'] += 1\n", + "        else:\n", + "            analysis['syntactic_semantic']['parse_errors'][err_msg] += 1\n", + "        continue\n", + "\n", + "    # Variables\n", + "    vars_in_expr = [str(v) for v in sympy_expr.free_symbols]\n", + "    for v in vars_in_expr:\n", + "        analysis['statistical_distributions']['variable_freq'][v] += 1\n", + "    variable_counts.append(len(vars_in_expr))\n", + "\n", + "    # Operators\n", + "    # NOTE: naive substring counting; '*' also matches inside '**' and 'sin' inside 'asin'\n", + "    op_count = sum(expr.count(op) for op in operators)\n", + "    analysis['statistical_distributions']['operator_freq'].update({op: expr.count(op) for op in operators})\n", + "    operator_counts.append(op_count)\n", + "\n", + "    # Diversity\n", + "    unique_set.add(expr)\n", + "\n", + "# Populate diversity metrics\n", + "total = analysis['total_expressions']\n", + "unique_count = 
len(unique_set)\n", + "analysis['diversity_redundancy'] = {\n", + " 'unique_expressions': unique_count,\n", + " 'unique_proportion': unique_count / total if total else 0,\n", + " 'duplicate_counts': {expr: cnt for expr, cnt in Counter(all_expressions).items() if cnt > 1},\n", + " 'structural_diversity': {\n", + " 'avg_tree_depth': sum(depths) / len(depths) if depths else 0,\n", + " 'min_tree_depth': min(depths) if depths else 0,\n", + " 'max_tree_depth': max(depths) if depths else 0,\n", + " }\n", + "}\n", + "\n", + "# Statistical distributions averages\n", + "analysis['statistical_distributions']['avg_operators_per_eq'] = sum(operator_counts) / len(operator_counts) if operator_counts else 0\n", + "analysis['statistical_distributions']['avg_variables_per_eq'] = sum(variable_counts) / len(variable_counts) if variable_counts else 0\n", + "\n", + "# Convert Counters to dicts for JSON serialization\n", + "analysis['statistical_distributions']['variable_freq'] = dict(analysis['statistical_distributions']['variable_freq'])\n", + "analysis['statistical_distributions']['operator_freq'] = dict(analysis['statistical_distributions']['operator_freq'])\n", + "analysis['syntactic_semantic']['parse_errors'] = dict(analysis['syntactic_semantic']['parse_errors'])\n", + "\n", + "# Save analysis results\n", + "with open(OUTPUT_ANALYSIS_FILE, 'w') as f:\n", + " json.dump(analysis, f, indent=2)\n", + "print(f\"Saved analysis results to {OUTPUT_ANALYSIS_FILE}\")\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".seriguela", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.4" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/notebooks/03_RL.ipynb b/notebooks/03_RL.ipynb new file mode 100644 index 
0000000000000000000000000000000000000000..2a3f2e02c28aee4b396b33cede6bdea232c98f02 --- /dev/null +++ b/notebooks/03_RL.ipynb @@ -0,0 +1,338 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "id": "59d6d70b", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Some weights of the model checkpoint at augustocsc/Se124M100KInfPrompt_EOS_Merged were not used when initializing GPT2LMHeadModel: ['v_head.summary.bias', 'v_head.summary.weight']\n", + "- This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", + "- This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", + "WARNING:root:A model is loaded from 'augustocsc/Se124M100KInfPrompt_EOS_Merged', and no v_head weight is found. This IS expected if you are not resuming PPO training.\n", + "Some weights of the model checkpoint at augustocsc/Se124M100KInfPrompt_EOS_Merged were not used when initializing GPT2LMHeadModel: ['v_head.summary.bias', 'v_head.summary.weight']\n", + "- This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", + "- This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", + "WARNING:root:A model is loaded from 'augustocsc/Se124M100KInfPrompt_EOS_Merged', and no v_head weight is found. 
This IS expected if you are not resuming PPO training.\n" + ] + } + ], + "source": [ + "import os\n", + "import torch\n", + "import numpy as np\n", + "from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead\n", + "from transformers import AutoTokenizer\n", + "from datasets import Dataset\n", + "from peft import PeftModel, AutoPeftModelForCausalLM\n", + "import sys\n", + "from transformers import AutoModelForCausalLM\n", + "\n", + "# Add path for Expression class\n", + "sys.path.append(os.path.abspath(os.path.join(os.getcwd(), '../classes')))\n", + "from expression import Expression\n", + "from dataset import RegressionDataset\n", + "\n", + "# === Reward function ===\n", + "def compute_reward(expression_str: str) -> float:\n", + "    try:\n", + "        expr = Expression(expression_str)\n", + "        \n", + "        # Check if the expression is valid and can be evaluated\n", + "        if expr.is_valid_on_dataset(X):\n", + "            score = float(expr.fit_constants(X, y))\n", + "            # Clamp valid scores to at least 0.1; treat non-finite scores as invalid\n", + "            return max(0.1, score) if np.isfinite(score) else -1.0\n", + "        else:\n", + "            #print(f\"Invalid expression: {expression_str}\")\n", + "            return -1.0\n", + "    except Exception as e:\n", + "        #print(f\"Error evaluating expression: {expression_str} - {e}\")\n", + "        return -1.0\n", + "\n", + "# === Helper to extract expression ===\n", + "def extract_expression(response: str) -> str:\n", + "    return response.split(\"expr: \")[1].split(\"<|endoftext|>\")[0].strip()\n", + "\n", + "# === Load Data ===\n", + "#reg = RegressionDataset('../data/evaluate/srsd-feynman_hard/train', 'feynman-bonus.12.txt', delimiter=' ')\n", + "reg = RegressionDataset('../data/evaluate/srsd-feynman_easy/train', 'feynman-i.18.16.txt', delimiter=' ')\n", + "X, y = reg.get_numpy()\n", + "\n", + "# === Configs ===\n", + "BASE_MODEL = \"augustocsc/Se124M100KInfPrompt_EOS_Merged\"\n", + "LORA_REPO = \"augustocsc/Se124M100KInfPrompt_EOS_Merged\"\n", + "TOKENIZER_REPO = LORA_REPO\n", + "\n", + "# ppo_config = PPOConfig(\n", + "# 
#model_name=BASE_MODEL,\n", + "#     learning_rate=1e-5,\n", + "#     batch_size=32,\n", + "#     mini_batch_size=8,\n", + "#     gradient_accumulation_steps=1,\n", + "# )\n", + "\n", + "\n", + "model = AutoModelForCausalLMWithValueHead.from_pretrained(BASE_MODEL)\n", + "ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(BASE_MODEL)\n", + "tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_REPO)\n", + "\n", + "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", + "model = model.to(device)\n", + "ref_model = ref_model.to(device)\n", + "\n", + "\n", + "os.environ[\"CUDA_LAUNCH_BLOCKING\"] = \"1\"\n", + "\n", + "\n", + "def get_safe_functions(X, functions=['log', 'sqrt', 'asin', 'tan', 'abs', 'exp', 'sin', 'cos']):\n", + "    \"\"\"\n", + "    Returns a list of functions from `functions` that are safe to use on all columns of X.\n", + "\n", + "    Parameters:\n", + "        X: np.ndarray of shape (n_samples, n_features)\n", + "        functions: list of function names to check\n", + "\n", + "    Returns:\n", + "        List of function names that are safe to use given the data\n", + "    \"\"\"\n", + "    safe_functions = []\n", + "\n", + "    for fn in functions:\n", + "        if fn in {'sin', 'cos', 'exp', 'abs'}:\n", + "            # These are defined for all real values\n", + "            safe_functions.append(fn)\n", + "\n", + "        elif fn == 'log':\n", + "            if np.all(X > 0):\n", + "                safe_functions.append(fn)\n", + "\n", + "        elif fn == 'sqrt':\n", + "            if np.all(X >= 0):\n", + "                safe_functions.append(fn)\n", + "\n", + "        elif fn == 'asin':\n", + "            if np.all((X >= -1) & (X <= 1)):\n", + "                safe_functions.append(fn)\n", + "\n", + "        elif fn == 'tan':\n", + "            # Check if cos(x) ≈ 0 anywhere → tan(x) will explode\n", + "            # We use np.cos to simulate tan issues (e.g., near π/2, 3π/2, etc.)\n", + "            cos_vals = np.cos(X)\n", + "            if np.all(np.abs(cos_vals) > 1e-6):  # adjustable tolerance\n", + "                safe_functions.append(fn)\n", + "\n", + "        # else skip unknown functions\n", 
+ " return safe_functions\n", + "\n", + "\n", + "safe_functions = get_safe_functions(X)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "9e2f618a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "log, sqrt, tan, abs, exp, sin, cos\n" + ] + } + ], + "source": [ + "print(', '.join(safe_functions))" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "dd922d70", + "metadata": {}, + "outputs": [ + { + "ename": "NameError", + "evalue": "name 'PPOConfig' is not defined", + "output_type": "error", + "traceback": [ + "\u001b[31m---------------------------------------------------------------------------\u001b[39m", + "\u001b[31mNameError\u001b[39m Traceback (most recent call last)", + "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[1]\u001b[39m\u001b[32m, line 3\u001b[39m\n\u001b[32m 1\u001b[39m \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mtqdm\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mimport\u001b[39;00m tqdm\n\u001b[32m----> \u001b[39m\u001b[32m3\u001b[39m ppo_config = \u001b[43mPPOConfig\u001b[49m(\n\u001b[32m 4\u001b[39m model_name=\u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;66;03m# definimos o modelo manualmente\u001b[39;00m\n\u001b[32m 5\u001b[39m learning_rate=\u001b[32m1e-5\u001b[39m,\n\u001b[32m 6\u001b[39m batch_size=\u001b[32m5\u001b[39m, \u001b[38;5;66;03m# total prompts/responses por step\u001b[39;00m\n\u001b[32m 7\u001b[39m mini_batch_size=\u001b[32m32\u001b[39m, \u001b[38;5;66;03m# 4 minibatches por batch\u001b[39;00m\n\u001b[32m 8\u001b[39m gradient_accumulation_steps=\u001b[32m1\u001b[39m,\n\u001b[32m 9\u001b[39m ppo_epochs=\u001b[32m4\u001b[39m, \u001b[38;5;66;03m# 4 passes por minibatch\u001b[39;00m\n\u001b[32m 10\u001b[39m log_with=\u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;66;03m# ou \"wandb\"\u001b[39;00m\n\u001b[32m 11\u001b[39m optimize_cuda_cache=\u001b[38;5;28;01mTrue\u001b[39;00m, 
\u001b[38;5;66;03m# 👍 melhora uso da A100\u001b[39;00m\n\u001b[32m     12\u001b[39m )\n\u001b[32m     14\u001b[39m \u001b[38;5;66;03m# === PPO Trainer ===\u001b[39;00m\n\u001b[32m     15\u001b[39m ppo_trainer = PPOTrainer(\n\u001b[32m     16\u001b[39m     config=ppo_config,\n\u001b[32m     17\u001b[39m     tokenizer=tokenizer,\n\u001b[32m   (...)\u001b[39m\u001b[32m     20\u001b[39m \n\u001b[32m     21\u001b[39m )\n", + "\u001b[31mNameError\u001b[39m: name 'PPOConfig' is not defined" + ] + } + ], + "source": [ + "from tqdm import tqdm\n", + "\n", + "ppo_config = PPOConfig(\n", + "    model_name=None,  # the model is passed to the trainer manually\n", + "    learning_rate=1e-5,\n", + "    batch_size=5,  # total prompts/responses per step\n", + "    mini_batch_size=5,  # must evenly divide batch_size\n", + "    gradient_accumulation_steps=1,\n", + "    ppo_epochs=4,  # optimization passes per minibatch\n", + "    log_with=None,  # or \"wandb\"\n", + "    optimize_cuda_cache=True,  # 👍 improves A100 memory usage\n", + ")\n", + "\n", + "# === PPO Trainer ===\n", + "ppo_trainer = PPOTrainer(\n", + "    config=ppo_config,\n", + "    tokenizer=tokenizer,\n", + "    model=model,\n", + "    ref_model=ref_model,\n", + "    \n", + ")\n", + "\n", + "# Define the prompt with the safe functions\n", + "PROMPT = f\"\"\"\n", + "vars: x_1, x_2, x_3\n", + "oper: *, +, /, **, {', '.join(safe_functions)}\n", + "cons: C\n", + "expr:\"\"\"\n", + "\n", + "# === Dummy dataset ===\n", + "dummy_dataset = Dataset.from_dict({\n", + "    \"prompt\": [PROMPT] * 5\n", + "})\n", + "\n", + "\n", + "# Get the device of the model\n", + "device = next(model.parameters()).device\n", + "\n", + "# === PPO Training Loop ===\n", + "# Tokenize the prompt and convert it to tensors\n", + "inputs = tokenizer([PROMPT] * ppo_config.batch_size, return_tensors=\"pt\", padding=True)\n", + "\n", + "# Move inputs to the same device as the model\n", + "inputs = {key: value.to(device) for key, value in inputs.items()}\n", + "\n", + "# Convert the batch tensor into a list of individual tensors\n", + "queries = [inputs[\"input_ids\"][i] for 
i in range(inputs[\"input_ids\"].size(0))]\n", + "all_rewards = []\n", + "all_responses = []\n", + "for epoch in tqdm(range(10), desc=\"Training Epochs\"):  # adjust as needed\n", + "    responses = []\n", + "    constants = []\n", + "    rewards = []\n", + "    for i in tqdm(range(ppo_config.batch_size), desc=\"Batch Progress\", leave=False):  # Nested progress bar\n", + "        try:\n", + "            input_ids = inputs[\"input_ids\"][i].unsqueeze(0)\n", + "            attention_mask = inputs[\"attention_mask\"][i].unsqueeze(0)\n", + "\n", + "            # === VALIDATION PATCH ===\n", + "            assert torch.all((input_ids >= 0) & (input_ids < model.config.vocab_size)), \\\n", + "                f\"Invalid token detected: max={input_ids.max().item()}, vocab_size={model.config.vocab_size}\"\n", + "\n", + "            # (optional)\n", + "            model.config.pad_token_id = tokenizer.pad_token_id\n", + "            reward = -1\n", + "            while reward < 0:\n", + "                output = model.generate(\n", + "                    input_ids=input_ids,\n", + "                    attention_mask=attention_mask,\n", + "                    max_new_tokens=50,\n", + "                    do_sample=True,\n", + "                    top_k=50,\n", + "                    top_p=0.95,\n", + "                    temperature=0.7,\n", + "                    eos_token_id=tokenizer.eos_token_id,\n", + "                    pad_token_id=tokenizer.pad_token_id,\n", + "                    return_dict_in_generate=True,\n", + "                    output_scores=False\n", + "                )\n", + "                response_ids = output.sequences[0][input_ids.shape[1]:]\n", + "                response = tokenizer.decode(response_ids, skip_special_tokens=True)\n", + "\n", + "                reward = compute_reward(response)\n", + "\n", + "\n", + "        except Exception as e:\n", + "            print(f\"Error at index {i}: {e}\")\n", + "            print(f\"Input IDs: {input_ids}\")\n", + "            print(f\"Token range: min={input_ids.min()}, max={input_ids.max()}, vocab_size={model.config.vocab_size}\")\n", + "            raise e\n", + "\n", + "        responses.append(response)\n", + "        rewards.append(reward)\n", + "    all_responses.extend(responses)\n", + "    all_rewards.extend(rewards)\n", + "\n", + "    # if any reward is >= 0.9, stop early\n", + "    if any(r >= 0.9 for r in rewards):\n", + "        print(\"Reward >= 0.9 found, stopping 
training.\")\n", + "        break\n", + "    # Compute rewards with a progress bar\n", + "    \n", + "    # import concurrent.futures  # needed only if the parallel reward computation below is re-enabled\n", + "\n", + "    # # Use process-based parallelism\n", + "    # with concurrent.futures.ProcessPoolExecutor() as executor:\n", + "    #     rewards = list(tqdm(executor.map(compute_reward, responses), total=len(responses), desc=\"Computing Rewards\", leave=False))\n", + "    \n", + "    #rewards = [ compute_reward(response) for response in tqdm(responses, desc=\"Computing Rewards\", leave=False)]\n", + "    \n", + "\n", + "    # Convert rewards to a list of PyTorch tensors\n", + "    rewards = [torch.tensor(reward, dtype=torch.float32, device=device) for reward in rewards]\n", + "    \n", + "    # Ensure responses are also tokenized and converted to tensors\n", + "    responses = [tokenizer(response, return_tensors=\"pt\", padding=True)[\"input_ids\"].squeeze(0).to(device) for response in responses]\n", + "\n", + "    # Pass the tokenized tensors to ppo_trainer.step()\n", + "    ppo_trainer.step(queries, responses, rewards)\n", + "\n", + "    # Log top expressions\n", + "    top_k = 3\n", + "    sorted_responses = sorted(zip(responses, rewards), key=lambda x: -x[1])\n", + "    print(f\"\\nEpoch {epoch + 1} best expressions:\")\n", + "    for i, (expr, score) in enumerate(sorted_responses[:top_k]):\n", + "        print(f\"{i+1}. 
{tokenizer.decode(expr, skip_special_tokens=True)} -> R² = {score:.4f}\")\n", + " # Print average, median, and std of rewards\n", + " avg_reward = torch.mean(torch.stack(rewards)).item()\n", + " median_reward = torch.median(torch.stack(rewards)).item()\n", + " count_invalid = sum(1 for r in rewards if r == -1.0)\n", + " print(f\"Average Reward: {avg_reward:.4f}, Median Reward: {median_reward:.4f}, Invalid Count: {count_invalid}\")\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "70a60613", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".seriguela", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.4" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/notebooks/04_merging_model.ipynb b/notebooks/04_merging_model.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..c4004ad1ff660fe171e2c8cbd95470ca85279e46 --- /dev/null +++ b/notebooks/04_merging_model.ipynb @@ -0,0 +1,206 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 4, + "id": "86149941", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "('./modelo_final_para_ppo/tokenizer_config.json',\n", + " './modelo_final_para_ppo/special_tokens_map.json',\n", + " './modelo_final_para_ppo/vocab.json',\n", + " './modelo_final_para_ppo/merges.txt',\n", + " './modelo_final_para_ppo/added_tokens.json',\n", + " './modelo_final_para_ppo/tokenizer.json')" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# ===============================\n", + "# 🚀 LoRA Merge + ValueHead + Test\n", + "# ===============================\n", + "\n", + "\n", + "# ✅ Imports\n", + "from transformers import AutoTokenizer, 
AutoModelForCausalLM\n", + "from peft import PeftModel\n", + "from trl import AutoModelForCausalLMWithValueHead\n", + "\n", + "# === Configuration ===\n", + "LORA_REPO = \"augustocsc/Se124M500KInfPrompt_EOS\"\n", + "BASE_MODEL = \"gpt2\"\n", + "OUTPUT_DIR = \"./modelo_final_para_ppo\"\n", + "MODEL_HUB = \"augustocsc/Se124M500KInfPrompt_EOS_Merged\"\n", + "# === Load the correct tokenizer ===\n", + "tokenizer = AutoTokenizer.from_pretrained(LORA_REPO)\n", + "tokenizer.pad_token = tokenizer.eos_token\n", + "\n", + "# === Load the base model and resize the embeddings ===\n", + "base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)\n", + "base_model.resize_token_embeddings(len(tokenizer))  # Fix shape to 50258\n", + "\n", + "# Load the PEFT model\n", + "peft_model = PeftModel.from_pretrained(base_model, LORA_REPO)\n", + "\n", + "# === Merge the LoRA weights ===\n", + "merged_model = peft_model.merge_and_unload()\n", + "\n", + "# === Add a value head to the merged model ===\n", + "model = AutoModelForCausalLMWithValueHead.from_pretrained(merged_model)\n", + "\n", + "# === Save the final model for PPO ===\n", + "model.save_pretrained(OUTPUT_DIR)\n", + "tokenizer.save_pretrained(OUTPUT_DIR)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "e921394e", + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "0d38506bf99e418eb92d977159c9550b", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "model.safetensors: 0%| | 0.00/498M [00:00 model is loaded from 'augustocsc/Se124M100KInfPrompt_EOS_Merged', and no v_head weight is found. This IS expected if you are not resuming PPO training.\n", + "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. 
Please pass your input's `attention_mask` to obtain reliable results.\n", + "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n", + "The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "🧪 Model response:\n", + "\n", + "\n", + "vars: x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_10\n", + "oper: *, **, +, -, /\n", + "cons: C\n", + "expr: x_1 + x_2 + C*x_8 + C*x_5**C<|endoftext|>\n" + ] + } + ], + "source": [ + "from transformers import AutoTokenizer, AutoModelForCausalLM\n", + "from peft import PeftModel\n", + "from trl import AutoModelForCausalLMWithValueHead\n", + "# 🔁 Reload the already-merged model + value head\n", + "MODEL_HUB = \"augustocsc/Se124M100KInfPrompt_EOS_Merged\"\n", + "# load model\n", + "model = AutoModelForCausalLMWithValueHead.from_pretrained(MODEL_HUB)\n", + "tokenizer = AutoTokenizer.from_pretrained(MODEL_HUB)\n", + "\n", + "# 🔁 Test prompt\n", + "PROMPT = \"\"\"\n", + "vars: x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_10\n", + "oper: *, **, +, -, /\n", + "cons: C\n", + "expr:\"\"\"\n", + "\n", + "device = model.pretrained_model.device  # 👈 base model inside the wrapper\n", + "input_ids = tokenizer(PROMPT, return_tensors=\"pt\").input_ids.to(device)\n", + "\n", + "# 🔮 Generation\n", + "gen_tokens = model.generate(\n", + "    input_ids=input_ids,\n", + "    max_new_tokens=50,\n", + "    do_sample=True,\n", + "    top_k=50,\n", + "    top_p=0.95,\n", + "    temperature=0.7,\n", + "    \n", + "    )\n", + "\n", + "# Show the response\n", + "response = tokenizer.decode(gen_tokens[0], skip_special_tokens=False)\n", + "print(\"🧪 Model response:\\n\")\n", + "print(response)\n" + ] + } + ], + "metadata": { + "kernelspec": { + 
"display_name": ".seriguela", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.4" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/out.txt b/out.txt new file mode 100644 index 0000000000000000000000000000000000000000..4e567f7251f90ac39fa0d2bdc3794c0d1105156a --- /dev/null +++ b/out.txt @@ -0,0 +1,7 @@ +Special constants found: [1] +Found 1 constants in the expression: tan(x_1**C + cos(x_1)) +Testing expression validity with constants: [[1.0]] +Expression is valid on dataset. +Bounds for optimization: [(1, 3)] +Fitted constants: [1.0] +R2 score: -1.8028651105117532e-05 diff --git a/out2.txt b/out2.txt new file mode 100644 index 0000000000000000000000000000000000000000..700ba65b3a0b031bebc3a2ad19db76f6e6915f3b --- /dev/null +++ b/out2.txt @@ -0,0 +1,1383 @@ +trainable params: 294,912 || all params: 124,737,024 || trainable%: 0.2364 +{'loss': 2.8274, 'grad_norm': 0.9275674819946289, 'learning_rate': 2.3170731707317074e-06, 'epoch': 0.0} +{'eval_loss': 2.5097358226776123, 'eval_runtime': 119.2276, 'eval_samples_per_second': 523.771, 'eval_steps_per_second': 32.736, 'epoch': 0.0} +{'loss': 2.8217, 'grad_norm': 0.8828601837158203, 'learning_rate': 4.75609756097561e-06, 'epoch': 0.01} +{'eval_loss': 2.5068511962890625, 'eval_runtime': 123.6569, 'eval_samples_per_second': 505.01, 'eval_steps_per_second': 31.563, 'epoch': 0.01} +{'loss': 2.8191, 'grad_norm': 0.931132972240448, 'learning_rate': 7.195121951219512e-06, 'epoch': 0.01} +{'eval_loss': 2.497037887573242, 'eval_runtime': 106.2235, 'eval_samples_per_second': 587.892, 'eval_steps_per_second': 36.743, 'epoch': 0.01} +{'loss': 2.79, 'grad_norm': 1.0729975700378418, 'learning_rate': 9.634146341463415e-06, 'epoch': 0.02} +{'eval_loss': 2.472403049468994, 
'eval_runtime': 84.3015, 'eval_samples_per_second': 740.77, 'eval_steps_per_second': 46.298, 'epoch': 0.02}
+{'loss': 2.7185, 'grad_norm': 1.086423635482788, 'learning_rate': 1.2073170731707317e-05, 'epoch': 0.02}
+{'eval_loss': 2.424081802368164, 'eval_runtime': 97.4446, 'eval_samples_per_second': 640.856, 'eval_steps_per_second': 40.054, 'epoch': 0.02}
+{'loss': 2.6593, 'grad_norm': 1.1420583724975586, 'learning_rate': 1.4512195121951219e-05, 'epoch': 0.03}
+{'eval_loss': 2.3584988117218018, 'eval_runtime': 111.1086, 'eval_samples_per_second': 562.045, 'eval_steps_per_second': 35.128, 'epoch': 0.03}
+{'loss': 2.5596, 'grad_norm': 1.220494031906128, 'learning_rate': 1.6951219512195124e-05, 'epoch': 0.03}
+{'eval_loss': 2.265773296356201, 'eval_runtime': 111.4748, 'eval_samples_per_second': 560.199, 'eval_steps_per_second': 35.012, 'epoch': 0.03}
+{'loss': 2.3996, 'grad_norm': 1.283768892288208, 'learning_rate': 1.9390243902439026e-05, 'epoch': 0.04}
+{'eval_loss': 2.1525917053222656, 'eval_runtime': 122.6026, 'eval_samples_per_second': 509.353, 'eval_steps_per_second': 31.835, 'epoch': 0.04}
+{'loss': 2.2752, 'grad_norm': 1.216078519821167, 'learning_rate': 2.1829268292682928e-05, 'epoch': 0.04}
+{'eval_loss': 2.009587526321411, 'eval_runtime': 110.6857, 'eval_samples_per_second': 564.192, 'eval_steps_per_second': 35.262, 'epoch': 0.04}
+{'loss': 2.1117, 'grad_norm': 1.146297574043274, 'learning_rate': 2.426829268292683e-05, 'epoch': 0.04}
+{'eval_loss': 1.8505055904388428, 'eval_runtime': 87.4497, 'eval_samples_per_second': 714.102, 'eval_steps_per_second': 44.631, 'epoch': 0.04}
+{'loss': 1.8995, 'grad_norm': 1.1474350690841675, 'learning_rate': 2.6707317073170735e-05, 'epoch': 0.05}
+{'eval_loss': 1.635629415512085, 'eval_runtime': 88.7769, 'eval_samples_per_second': 703.426, 'eval_steps_per_second': 43.964, 'epoch': 0.05}
+{'loss': 1.6694, 'grad_norm': 1.1208035945892334, 'learning_rate': 2.9146341463414634e-05, 'epoch': 0.05}
+{'eval_loss': 1.3913819789886475, 'eval_runtime': 110.5962, 'eval_samples_per_second': 564.649, 'eval_steps_per_second': 35.291, 'epoch': 0.05}
+{'loss': 1.4535, 'grad_norm': 0.9514985680580139, 'learning_rate': 3.1585365853658536e-05, 'epoch': 0.06}
+{'eval_loss': 1.159269094467163, 'eval_runtime': 119.2688, 'eval_samples_per_second': 523.59, 'eval_steps_per_second': 32.724, 'epoch': 0.06}
+{'loss': 1.2244, 'grad_norm': 0.9037125706672668, 'learning_rate': 3.4024390243902444e-05, 'epoch': 0.06}
+{'eval_loss': 0.9593096375465393, 'eval_runtime': 103.1309, 'eval_samples_per_second': 605.521, 'eval_steps_per_second': 37.845, 'epoch': 0.06}
+{'loss': 1.0506, 'grad_norm': 0.8045913577079773, 'learning_rate': 3.646341463414634e-05, 'epoch': 0.07}
+{'eval_loss': 0.8271157741546631, 'eval_runtime': 120.3074, 'eval_samples_per_second': 519.07, 'eval_steps_per_second': 32.442, 'epoch': 0.07}
+{'loss': 0.9234, 'grad_norm': 0.8160069584846497, 'learning_rate': 3.890243902439025e-05, 'epoch': 0.07}
+{'eval_loss': 0.7381067872047424, 'eval_runtime': 114.2558, 'eval_samples_per_second': 546.563, 'eval_steps_per_second': 34.16, 'epoch': 0.07}
+{'loss': 0.8293, 'grad_norm': 0.7860645651817322, 'learning_rate': 4.134146341463414e-05, 'epoch': 0.07}
+{'eval_loss': 0.6670233607292175, 'eval_runtime': 87.2773, 'eval_samples_per_second': 715.512, 'eval_steps_per_second': 44.72, 'epoch': 0.07}
+{'loss': 0.7586, 'grad_norm': 0.7485722899436951, 'learning_rate': 4.378048780487805e-05, 'epoch': 0.08}
+{'eval_loss': 0.6094087362289429, 'eval_runtime': 112.9521, 'eval_samples_per_second': 552.872, 'eval_steps_per_second': 34.554, 'epoch': 0.08}
+{'loss': 0.6988, 'grad_norm': 0.7312377095222473, 'learning_rate': 4.6219512195121954e-05, 'epoch': 0.08}
+{'eval_loss': 0.5665988326072693, 'eval_runtime': 121.88, 'eval_samples_per_second': 512.373, 'eval_steps_per_second': 32.023, 'epoch': 0.08}
+{'loss': 0.6418, 'grad_norm': 0.5786728262901306, 'learning_rate': 4.8658536585365856e-05, 'epoch': 0.09}
+{'eval_loss': 0.5375936627388, 'eval_runtime': 141.0798, 'eval_samples_per_second': 442.643, 'eval_steps_per_second': 27.665, 'epoch': 0.09}
+{'loss': 0.608, 'grad_norm': 0.7143203616142273, 'learning_rate': 4.999994307167415e-05, 'epoch': 0.09}
+{'eval_loss': 0.518476128578186, 'eval_runtime': 148.9918, 'eval_samples_per_second': 419.137, 'eval_steps_per_second': 26.196, 'epoch': 0.09}
+{'loss': 0.5688, 'grad_norm': 0.6595646739006042, 'learning_rate': 4.9999408931462294e-05, 'epoch': 0.1}
+{'eval_loss': 0.5033944249153137, 'eval_runtime': 101.7817, 'eval_samples_per_second': 613.548, 'eval_steps_per_second': 38.347, 'epoch': 0.1}
+{'loss': 0.5613, 'grad_norm': 0.6785131692886353, 'learning_rate': 4.999831255031391e-05, 'epoch': 0.1}
+{'eval_loss': 0.49262911081314087, 'eval_runtime': 83.8453, 'eval_samples_per_second': 744.801, 'eval_steps_per_second': 46.55, 'epoch': 0.1}
+{'loss': 0.5393, 'grad_norm': 0.6294030547142029, 'learning_rate': 4.999665395288681e-05, 'epoch': 0.11}
+{'eval_loss': 0.4804900288581848, 'eval_runtime': 96.4067, 'eval_samples_per_second': 647.756, 'eval_steps_per_second': 40.485, 'epoch': 0.11}
+{'loss': 0.5314, 'grad_norm': 0.6262851357460022, 'learning_rate': 4.999443317648311e-05, 'epoch': 0.11}
+{'eval_loss': 0.4737214148044586, 'eval_runtime': 103.0845, 'eval_samples_per_second': 605.795, 'eval_steps_per_second': 37.862, 'epoch': 0.11}
+{'loss': 0.5283, 'grad_norm': 0.5933963060379028, 'learning_rate': 4.9991650271048464e-05, 'epoch': 0.11}
+{'eval_loss': 0.4674193561077118, 'eval_runtime': 123.9631, 'eval_samples_per_second': 503.763, 'eval_steps_per_second': 31.485, 'epoch': 0.11}
+{'loss': 0.5129, 'grad_norm': 0.665878415107727, 'learning_rate': 4.9988305299170876e-05, 'epoch': 0.12}
+{'eval_loss': 0.46450096368789673, 'eval_runtime': 126.3409, 'eval_samples_per_second': 494.282, 'eval_steps_per_second': 30.893, 'epoch': 0.12}
+{'loss': 0.5104, 'grad_norm': 0.7277640104293823, 'learning_rate': 4.998439833607933e-05, 'epoch': 0.12}
+{'eval_loss': 0.46187400817871094, 'eval_runtime': 125.1831, 'eval_samples_per_second': 498.853, 'eval_steps_per_second': 31.178, 'epoch': 0.12}
+{'loss': 0.5057, 'grad_norm': 0.6579739451408386, 'learning_rate': 4.99799294696421e-05, 'epoch': 0.13}
+{'eval_loss': 0.4575568735599518, 'eval_runtime': 125.3803, 'eval_samples_per_second': 498.069, 'eval_steps_per_second': 31.129, 'epoch': 0.13}
+{'loss': 0.4965, 'grad_norm': 0.6141186952590942, 'learning_rate': 4.9974898800364735e-05, 'epoch': 0.13}
+{'eval_loss': 0.4522075951099396, 'eval_runtime': 90.3076, 'eval_samples_per_second': 691.503, 'eval_steps_per_second': 43.219, 'epoch': 0.13}
+{'loss': 0.4948, 'grad_norm': 0.5233548879623413, 'learning_rate': 4.9969306441387845e-05, 'epoch': 0.14}
+{'eval_loss': 0.4526471793651581, 'eval_runtime': 87.8274, 'eval_samples_per_second': 711.031, 'eval_steps_per_second': 44.439, 'epoch': 0.14}
+{'loss': 0.4824, 'grad_norm': 0.6102485060691833, 'learning_rate': 4.9963152518484525e-05, 'epoch': 0.14}
+{'eval_loss': 0.4491405487060547, 'eval_runtime': 136.7099, 'eval_samples_per_second': 456.792, 'eval_steps_per_second': 28.55, 'epoch': 0.14}
+{'loss': 0.4858, 'grad_norm': 0.5754684209823608, 'learning_rate': 4.995643717005754e-05, 'epoch': 0.14}
+{'eval_loss': 0.4459107518196106, 'eval_runtime': 124.2632, 'eval_samples_per_second': 502.546, 'eval_steps_per_second': 31.409, 'epoch': 0.14}
+{'loss': 0.4888, 'grad_norm': 0.5157671570777893, 'learning_rate': 4.994916054713622e-05, 'epoch': 0.15}
+{'eval_loss': 0.43970853090286255, 'eval_runtime': 138.1219, 'eval_samples_per_second': 452.122, 'eval_steps_per_second': 28.258, 'epoch': 0.15}
+{'loss': 0.475, 'grad_norm': 0.5785585045814514, 'learning_rate': 4.994132281337304e-05, 'epoch': 0.15}
+{'eval_loss': 0.4383958578109741, 'eval_runtime': 179.2581, 'eval_samples_per_second': 348.369, 'eval_steps_per_second': 21.773, 'epoch': 0.15}
+{'loss': 0.4751, 'grad_norm': 0.49519380927085876, 'learning_rate': 4.993292414503996e-05, 'epoch': 0.16}
+{'eval_loss': 0.4390428364276886, 'eval_runtime': 179.6881, 'eval_samples_per_second': 347.536, 'eval_steps_per_second': 21.721, 'epoch': 0.16}
+{'loss': 0.4803, 'grad_norm': 0.5125386118888855, 'learning_rate': 4.992396473102445e-05, 'epoch': 0.16}
+{'eval_loss': 0.4352133572101593, 'eval_runtime': 100.5119, 'eval_samples_per_second': 621.299, 'eval_steps_per_second': 38.831, 'epoch': 0.16}
+{'loss': 0.4744, 'grad_norm': 0.6287256479263306, 'learning_rate': 4.9914444772825256e-05, 'epoch': 0.17}
+{'eval_loss': 0.43278783559799194, 'eval_runtime': 108.7737, 'eval_samples_per_second': 574.109, 'eval_steps_per_second': 35.882, 'epoch': 0.17}
+{'loss': 0.4767, 'grad_norm': 0.5646315813064575, 'learning_rate': 4.990436448454784e-05, 'epoch': 0.17}
+{'eval_loss': 0.4316536486148834, 'eval_runtime': 115.6796, 'eval_samples_per_second': 539.836, 'eval_steps_per_second': 33.74, 'epoch': 0.17}
+{'loss': 0.4638, 'grad_norm': 0.49935978651046753, 'learning_rate': 4.989372409289959e-05, 'epoch': 0.18}
+{'eval_loss': 0.4304448366165161, 'eval_runtime': 121.725, 'eval_samples_per_second': 513.025, 'eval_steps_per_second': 32.064, 'epoch': 0.18}
+{'loss': 0.4729, 'grad_norm': 0.5032947659492493, 'learning_rate': 4.988252383718471e-05, 'epoch': 0.18}
+{'eval_loss': 0.42461755871772766, 'eval_runtime': 144.0287, 'eval_samples_per_second': 433.58, 'eval_steps_per_second': 27.099, 'epoch': 0.18}
+{'loss': 0.4627, 'grad_norm': 0.5971313118934631, 'learning_rate': 4.987076396929887e-05, 'epoch': 0.18}
+{'eval_loss': 0.4258422255516052, 'eval_runtime': 85.9203, 'eval_samples_per_second': 726.813, 'eval_steps_per_second': 45.426, 'epoch': 0.18}
+{'loss': 0.4662, 'grad_norm': 0.5689764618873596, 'learning_rate': 4.985844475372346e-05, 'epoch': 0.19}
+{'eval_loss': 0.42246755957603455, 'eval_runtime': 82.6824, 'eval_samples_per_second': 755.275, 'eval_steps_per_second': 47.205, 'epoch': 0.19}
+{'loss': 0.4577, 'grad_norm': 0.6060552000999451, 'learning_rate': 4.984556646751973e-05, 'epoch': 0.19}
+{'eval_loss': 0.42378953099250793, 'eval_runtime': 90.7975, 'eval_samples_per_second': 687.772, 'eval_steps_per_second': 42.986, 'epoch': 0.19}
+{'loss': 0.4592, 'grad_norm': 0.6233872771263123, 'learning_rate': 4.983212940032253e-05, 'epoch': 0.2}
+{'eval_loss': 0.42229604721069336, 'eval_runtime': 116.8571, 'eval_samples_per_second': 534.396, 'eval_steps_per_second': 33.4, 'epoch': 0.2}
+{'loss': 0.4544, 'grad_norm': 0.6351814866065979, 'learning_rate': 4.981813385433376e-05, 'epoch': 0.2}
+{'eval_loss': 0.4181474447250366, 'eval_runtime': 110.8397, 'eval_samples_per_second': 563.408, 'eval_steps_per_second': 35.213, 'epoch': 0.2}
+{'loss': 0.4613, 'grad_norm': 0.5467421412467957, 'learning_rate': 4.980358014431562e-05, 'epoch': 0.21}
+{'eval_loss': 0.41584494709968567, 'eval_runtime': 105.7716, 'eval_samples_per_second': 590.404, 'eval_steps_per_second': 36.9, 'epoch': 0.21}
+{'loss': 0.4462, 'grad_norm': 0.45951876044273376, 'learning_rate': 4.978846859758352e-05, 'epoch': 0.21}
+{'eval_loss': 0.41724899411201477, 'eval_runtime': 142.0642, 'eval_samples_per_second': 439.576, 'eval_steps_per_second': 27.474, 'epoch': 0.21}
+{'loss': 0.458, 'grad_norm': 0.5566853284835815, 'learning_rate': 4.977279955399868e-05, 'epoch': 0.22}
+{'eval_loss': 0.4146503806114197, 'eval_runtime': 157.5449, 'eval_samples_per_second': 396.382, 'eval_steps_per_second': 24.774, 'epoch': 0.22}
+{'loss': 0.4456, 'grad_norm': 0.5233135223388672, 'learning_rate': 4.975657336596057e-05, 'epoch': 0.22}
+{'eval_loss': 0.41585487127304077, 'eval_runtime': 104.0568, 'eval_samples_per_second': 600.134, 'eval_steps_per_second': 37.508, 'epoch': 0.22}
+{'loss': 0.4484, 'grad_norm': 0.513327419757843, 'learning_rate': 4.973979039839888e-05, 'epoch': 0.22}
+{'eval_loss': 0.4100011885166168, 'eval_runtime': 85.8565, 'eval_samples_per_second': 727.353, 'eval_steps_per_second': 45.46, 'epoch': 0.22}
+{'loss': 0.4451, 'grad_norm': 0.6426804065704346, 'learning_rate': 4.9722451028765405e-05, 'epoch': 0.23}
+{'eval_loss': 0.410844624042511, 'eval_runtime': 94.0276, 'eval_samples_per_second': 664.145, 'eval_steps_per_second': 41.509, 'epoch': 0.23}
+{'loss': 0.4458, 'grad_norm': 0.5014445185661316, 'learning_rate': 4.97045556470255e-05, 'epoch': 0.23}
+{'eval_loss': 0.41185927391052246, 'eval_runtime': 101.1627, 'eval_samples_per_second': 617.302, 'eval_steps_per_second': 38.581, 'epoch': 0.23}
+{'loss': 0.4439, 'grad_norm': 0.5626912713050842, 'learning_rate': 4.968610465564931e-05, 'epoch': 0.24}
+{'eval_loss': 0.40858331322669983, 'eval_runtime': 131.8897, 'eval_samples_per_second': 473.487, 'eval_steps_per_second': 29.593, 'epoch': 0.24}
+{'loss': 0.4465, 'grad_norm': 0.4652461111545563, 'learning_rate': 4.966709846960278e-05, 'epoch': 0.24}
+{'eval_loss': 0.40622472763061523, 'eval_runtime': 117.3788, 'eval_samples_per_second': 532.021, 'eval_steps_per_second': 33.251, 'epoch': 0.24}
+{'loss': 0.4428, 'grad_norm': 0.45794594287872314, 'learning_rate': 4.964753751633824e-05, 'epoch': 0.25}
+{'eval_loss': 0.4052998423576355, 'eval_runtime': 86.2346, 'eval_samples_per_second': 724.164, 'eval_steps_per_second': 45.26, 'epoch': 0.25}
+{'loss': 0.4507, 'grad_norm': 0.4608239531517029, 'learning_rate': 4.962742223578484e-05, 'epoch': 0.25}
+{'eval_loss': 0.40739262104034424, 'eval_runtime': 86.2533, 'eval_samples_per_second': 724.007, 'eval_steps_per_second': 45.25, 'epoch': 0.25}
+{'loss': 0.4528, 'grad_norm': 0.4618092179298401, 'learning_rate': 4.960675308033863e-05, 'epoch': 0.25}
+{'eval_loss': 0.4010402262210846, 'eval_runtime': 101.5142, 'eval_samples_per_second': 615.165, 'eval_steps_per_second': 38.448, 'epoch': 0.25}
+{'loss': 0.4366, 'grad_norm': 0.5214727520942688, 'learning_rate': 4.958553051485242e-05, 'epoch': 0.26}
+{'eval_loss': 0.4024697244167328, 'eval_runtime': 108.5119, 'eval_samples_per_second': 575.494, 'eval_steps_per_second': 35.968, 'epoch': 0.26}
+{'loss': 0.446, 'grad_norm': 0.490835577249527, 'learning_rate': 4.9563755016625303e-05, 'epoch': 0.26}
+{'eval_loss': 0.404105007648468, 'eval_runtime': 129.7406, 'eval_samples_per_second': 481.33, 'eval_steps_per_second': 30.083, 'epoch': 0.26}
+{'loss': 0.4404, 'grad_norm': 0.5871369242668152, 'learning_rate': 4.954142707539192e-05, 'epoch': 0.27}
+{'eval_loss': 0.4031173884868622, 'eval_runtime': 169.0032, 'eval_samples_per_second': 369.508, 'eval_steps_per_second': 23.094, 'epoch': 0.27}
+{'loss': 0.4378, 'grad_norm': 0.45189425349235535, 'learning_rate': 4.951854719331144e-05, 'epoch': 0.27}
+{'eval_loss': 0.4032110869884491, 'eval_runtime': 97.7186, 'eval_samples_per_second': 639.059, 'eval_steps_per_second': 39.941, 'epoch': 0.27}
+{'loss': 0.4434, 'grad_norm': 0.5572656989097595, 'learning_rate': 4.949511588495629e-05, 'epoch': 0.28}
+{'eval_loss': 0.4037443697452545, 'eval_runtime': 81.3441, 'eval_samples_per_second': 767.701, 'eval_steps_per_second': 47.981, 'epoch': 0.28}
+{'loss': 0.4364, 'grad_norm': 0.5457513928413391, 'learning_rate': 4.9471133677300547e-05, 'epoch': 0.28}
+{'eval_loss': 0.402517706155777, 'eval_runtime': 106.4368, 'eval_samples_per_second': 586.714, 'eval_steps_per_second': 36.67, 'epoch': 0.28}
+{'loss': 0.4417, 'grad_norm': 0.5152289271354675, 'learning_rate': 4.9446601109708125e-05, 'epoch': 0.29}
+{'eval_loss': 0.39888665080070496, 'eval_runtime': 94.8206, 'eval_samples_per_second': 658.591, 'eval_steps_per_second': 41.162, 'epoch': 0.29}
+{'loss': 0.4371, 'grad_norm': 0.5354562997817993, 'learning_rate': 4.9421518733920625e-05, 'epoch': 0.29}
+{'eval_loss': 0.3949268162250519, 'eval_runtime': 135.9717, 'eval_samples_per_second': 459.272, 'eval_steps_per_second': 28.704, 'epoch': 0.29}
+{'loss': 0.4356, 'grad_norm': 0.4924236536026001, 'learning_rate': 4.939588711404492e-05, 'epoch': 0.29}
+{'eval_loss': 0.398179292678833, 'eval_runtime': 131.72, 'eval_samples_per_second': 474.096, 'eval_steps_per_second': 29.631, 'epoch': 0.29}
+{'loss': 0.433, 'grad_norm': 0.5048829913139343, 'learning_rate': 4.9369706826540475e-05, 'epoch': 0.3}
+{'eval_loss': 0.3964763283729553, 'eval_runtime': 159.0842, 'eval_samples_per_second': 392.547, 'eval_steps_per_second': 24.534, 'epoch': 0.3}
+{'loss': 0.4406, 'grad_norm': 0.4456268548965454, 'learning_rate': 4.934297846020638e-05, 'epoch': 0.3}
+{'eval_loss': 0.3952368199825287, 'eval_runtime': 87.7588, 'eval_samples_per_second': 711.587, 'eval_steps_per_second': 44.474, 'epoch': 0.3}
+{'loss': 0.4342, 'grad_norm': 0.5483442544937134, 'learning_rate': 4.9315702616168126e-05, 'epoch': 0.31}
+{'eval_loss': 0.393479585647583, 'eval_runtime': 85.4751, 'eval_samples_per_second': 730.599, 'eval_steps_per_second': 45.662, 'epoch': 0.31}
+{'loss': 0.4382, 'grad_norm': 0.44456860423088074, 'learning_rate': 4.928787990786406e-05, 'epoch': 0.31}
+{'eval_loss': 0.39443597197532654, 'eval_runtime': 106.0102, 'eval_samples_per_second': 589.075, 'eval_steps_per_second': 36.817, 'epoch': 0.31}
+{'loss': 0.429, 'grad_norm': 0.4595698416233063, 'learning_rate': 4.925951096103159e-05, 'epoch': 0.32}
+{'eval_loss': 0.39259031414985657, 'eval_runtime': 106.1885, 'eval_samples_per_second': 588.086, 'eval_steps_per_second': 36.755, 'epoch': 0.32}
+{'loss': 0.4318, 'grad_norm': 0.42622220516204834, 'learning_rate': 4.923059641369313e-05, 'epoch': 0.32}
+{'eval_loss': 0.39193031191825867, 'eval_runtime': 139.0869, 'eval_samples_per_second': 448.985, 'eval_steps_per_second': 28.062, 'epoch': 0.32}
+{'loss': 0.4254, 'grad_norm': 0.42002877593040466, 'learning_rate': 4.920113691614175e-05, 'epoch': 0.33}
+{'eval_loss': 0.39023852348327637, 'eval_runtime': 132.4602, 'eval_samples_per_second': 471.447, 'eval_steps_per_second': 29.465, 'epoch': 0.33}
+{'loss': 0.4305, 'grad_norm': 0.4846990704536438, 'learning_rate': 4.917113313092654e-05, 'epoch': 0.33}
+{'eval_loss': 0.391340970993042, 'eval_runtime': 107.2162, 'eval_samples_per_second': 582.449, 'eval_steps_per_second': 36.403, 'epoch': 0.33}
+{'loss': 0.4333, 'grad_norm': 0.3990853726863861, 'learning_rate': 4.9140585732837694e-05, 'epoch': 0.33}
+{'eval_loss': 0.39038339257240295, 'eval_runtime': 98.4715, 'eval_samples_per_second': 634.173, 'eval_steps_per_second': 39.636, 'epoch': 0.33}
+{'loss': 0.4306, 'grad_norm': 0.5101251006126404, 'learning_rate': 4.910949540889136e-05, 'epoch': 0.34}
+{'eval_loss': 0.3878594636917114, 'eval_runtime': 131.6971, 'eval_samples_per_second': 474.179, 'eval_steps_per_second': 29.636, 'epoch': 0.34}
+{'loss': 0.4281, 'grad_norm': 0.5048412084579468, 'learning_rate': 4.9077862858314195e-05, 'epoch': 0.34}
+{'eval_loss': 0.38809457421302795, 'eval_runtime': 124.2787, 'eval_samples_per_second': 502.483, 'eval_steps_per_second': 31.405, 'epoch': 0.34}
+{'loss': 0.4291, 'grad_norm': 0.4175941050052643, 'learning_rate': 4.904568879252761e-05, 'epoch': 0.35}
+{'eval_loss': 0.38905006647109985, 'eval_runtime': 162.8876, 'eval_samples_per_second': 383.381, 'eval_steps_per_second': 23.961, 'epoch': 0.35}
+{'loss': 0.4287, 'grad_norm': 0.6386645436286926, 'learning_rate': 4.901297393513178e-05, 'epoch': 0.35}
+{'eval_loss': 0.3892914652824402, 'eval_runtime': 116.7528, 'eval_samples_per_second': 534.874, 'eval_steps_per_second': 33.43, 'epoch': 0.35}
+{'loss': 0.436, 'grad_norm': 0.5440050959587097, 'learning_rate': 4.897971902188939e-05, 'epoch': 0.36}
+{'eval_loss': 0.38759666681289673, 'eval_runtime': 79.803, 'eval_samples_per_second': 782.527, 'eval_steps_per_second': 48.908, 'epoch': 0.36}
+{'loss': 0.4237, 'grad_norm': 0.42988842725753784, 'learning_rate': 4.8945924800709075e-05, 'epoch': 0.36}
+{'eval_loss': 0.38579362630844116, 'eval_runtime': 93.934, 'eval_samples_per_second': 664.807, 'eval_steps_per_second': 41.55, 'epoch': 0.36}
+{'loss': 0.4228, 'grad_norm': 0.42264896631240845, 'learning_rate': 4.891159203162857e-05, 'epoch': 0.36}
+{'eval_loss': 0.3851452171802521, 'eval_runtime': 118.8079, 'eval_samples_per_second': 525.622, 'eval_steps_per_second': 32.851, 'epoch': 0.36}
+{'loss': 0.4256, 'grad_norm': 0.4856173098087311, 'learning_rate': 4.887672148679766e-05, 'epoch': 0.37}
+{'eval_loss': 0.3863220512866974, 'eval_runtime': 111.3976, 'eval_samples_per_second': 560.587, 'eval_steps_per_second': 35.037, 'epoch': 0.37}
+{'loss': 0.4215, 'grad_norm': 0.4942440688610077, 'learning_rate': 4.884131395046081e-05, 'epoch': 0.37}
+{'eval_loss': 0.38484588265419006, 'eval_runtime': 118.5198, 'eval_samples_per_second': 526.899, 'eval_steps_per_second': 32.931, 'epoch': 0.37}
+{'loss': 0.4296, 'grad_norm': 0.449828177690506, 'learning_rate': 4.880537021893951e-05, 'epoch': 0.38}
+{'eval_loss': 0.38425788283348083, 'eval_runtime': 111.5472, 'eval_samples_per_second': 559.835, 'eval_steps_per_second': 34.99, 'epoch': 0.38}
+{'loss': 0.4239, 'grad_norm': 0.5470183491706848, 'learning_rate': 4.8768891100614336e-05, 'epoch': 0.38}
+{'eval_loss': 0.3840302526950836, 'eval_runtime': 93.095, 'eval_samples_per_second': 670.799, 'eval_steps_per_second': 41.925, 'epoch': 0.38}
+{'loss': 0.4211, 'grad_norm': 0.4473894536495209, 'learning_rate': 4.873187741590685e-05, 'epoch': 0.39}
+{'eval_loss': 0.38394832611083984, 'eval_runtime': 97.2245, 'eval_samples_per_second': 642.307, 'eval_steps_per_second': 40.144, 'epoch': 0.39}
+{'loss': 0.4268, 'grad_norm': 0.5481351017951965, 'learning_rate': 4.8694329997261076e-05, 'epoch': 0.39}
+{'eval_loss': 0.38252392411231995, 'eval_runtime': 108.1297, 'eval_samples_per_second': 577.529, 'eval_steps_per_second': 36.096, 'epoch': 0.39}
+{'loss': 0.426, 'grad_norm': 0.5597062706947327, 'learning_rate': 4.865624968912482e-05, 'epoch': 0.4}
+{'eval_loss': 0.3795596659183502, 'eval_runtime': 137.7994, 'eval_samples_per_second': 453.181, 'eval_steps_per_second': 28.324, 'epoch': 0.4}
+{'loss': 0.4192, 'grad_norm': 0.43377256393432617, 'learning_rate': 4.861763734793065e-05, 'epoch': 0.4}
+{'eval_loss': 0.3847109377384186, 'eval_runtime': 124.2547, 'eval_samples_per_second': 502.581, 'eval_steps_per_second': 31.411, 'epoch': 0.4}
+{'loss': 0.423, 'grad_norm': 0.5017260313034058, 'learning_rate': 4.8578493842076656e-05, 'epoch': 0.4}
+{'eval_loss': 0.38193026185035706, 'eval_runtime': 136.5094, 'eval_samples_per_second': 457.463, 'eval_steps_per_second': 28.591, 'epoch': 0.4}
+{'loss': 0.4256, 'grad_norm': 0.5032309889793396, 'learning_rate': 4.8538820051906895e-05, 'epoch': 0.41}
+{'eval_loss': 0.3818979263305664, 'eval_runtime': 93.2507, 'eval_samples_per_second': 669.678, 'eval_steps_per_second': 41.855, 'epoch': 0.41}
+{'loss': 0.4216, 'grad_norm': 0.5377271771430969, 'learning_rate': 4.8498616869691635e-05, 'epoch': 0.41}
+{'eval_loss': 0.3813377022743225, 'eval_runtime': 77.6877, 'eval_samples_per_second': 803.834, 'eval_steps_per_second': 50.24, 'epoch': 0.41}
+{'loss': 0.424, 'grad_norm': 0.4909280836582184, 'learning_rate': 4.8457885199607246e-05, 'epoch': 0.42}
+{'eval_loss': 0.3795780837535858, 'eval_runtime': 126.2659, 'eval_samples_per_second': 494.575, 'eval_steps_per_second': 30.911, 'epoch': 0.42}
+{'loss': 0.4201, 'grad_norm': 0.560152530670166, 'learning_rate': 4.841662595771587e-05, 'epoch': 0.42}
+{'eval_loss': 0.37835824489593506, 'eval_runtime': 143.2884, 'eval_samples_per_second': 435.82, 'eval_steps_per_second': 27.239, 'epoch': 0.42}
+{'loss': 0.4224, 'grad_norm': 0.48200276494026184, 'learning_rate': 4.8374840071944836e-05, 'epoch': 0.43}
+{'eval_loss': 0.38105329871177673, 'eval_runtime': 144.7446, 'eval_samples_per_second': 431.436, 'eval_steps_per_second': 26.965, 'epoch': 0.43}
+{'loss': 0.419, 'grad_norm': 0.4981854259967804, 'learning_rate': 4.833252848206579e-05, 'epoch': 0.43}
+{'eval_loss': 0.3798902928829193, 'eval_runtime': 159.497, 'eval_samples_per_second': 391.531, 'eval_steps_per_second': 24.471, 'epoch': 0.43}
+{'loss': 0.421, 'grad_norm': 0.5576156377792358, 'learning_rate': 4.828969213967356e-05, 'epoch': 0.43}
+{'eval_loss': 0.37900158762931824, 'eval_runtime': 88.7661, 'eval_samples_per_second': 703.512, 'eval_steps_per_second': 43.969, 'epoch': 0.43}
+{'loss': 0.4208, 'grad_norm': 0.44539210200309753, 'learning_rate': 4.8246332008164706e-05, 'epoch': 0.44}
+{'eval_loss': 0.3779429495334625, 'eval_runtime': 89.9454, 'eval_samples_per_second': 694.288, 'eval_steps_per_second': 43.393, 'epoch': 0.44}
+{'loss': 0.4175, 'grad_norm': 0.5010109543800354, 'learning_rate': 4.820244906271595e-05, 'epoch': 0.44}
+{'eval_loss': 0.3783878684043884, 'eval_runtime': 110.5803, 'eval_samples_per_second': 564.73, 'eval_steps_per_second': 35.296, 'epoch': 0.44}
+{'loss': 0.4213, 'grad_norm': 0.47311604022979736, 'learning_rate': 4.815804429026214e-05, 'epoch': 0.45}
+{'eval_loss': 0.3768884241580963, 'eval_runtime': 132.6691, 'eval_samples_per_second': 470.705, 'eval_steps_per_second': 29.419, 'epoch': 0.45}
+{'loss': 0.4233, 'grad_norm': 0.42296308279037476, 'learning_rate': 4.811311868947412e-05, 'epoch': 0.45}
+{'eval_loss': 0.37609341740608215, 'eval_runtime': 110.9788, 'eval_samples_per_second': 562.702, 'eval_steps_per_second': 35.169, 'epoch': 0.45}
+{'loss': 0.4165, 'grad_norm': 0.4623595178127289, 'learning_rate': 4.806767327073628e-05, 'epoch': 0.46}
+{'eval_loss': 0.376276433467865, 'eval_runtime': 141.8868, 'eval_samples_per_second': 440.126, 'eval_steps_per_second': 27.508, 'epoch': 0.46}
+{'loss': 0.4164, 'grad_norm': 0.5660254955291748, 'learning_rate': 4.8021709056123745e-05, 'epoch': 0.46}
+{'eval_loss': 0.3783346116542816, 'eval_runtime': 127.2522, 'eval_samples_per_second': 490.742, 'eval_steps_per_second': 30.671, 'epoch': 0.46}
+{'loss': 0.4212, 'grad_norm': 0.476532518863678, 'learning_rate': 4.797522707937949e-05, 'epoch': 0.47}
+{'eval_loss': 0.3740321695804596, 'eval_runtime': 80.9303, 'eval_samples_per_second': 771.627, 'eval_steps_per_second': 48.227, 'epoch': 0.47}
+{'loss': 0.4216, 'grad_norm': 0.533787727355957, 'learning_rate': 4.792822838589104e-05, 'epoch': 0.47}
+{'eval_loss': 0.37407761812210083, 'eval_runtime': 76.2319, 'eval_samples_per_second': 819.185, 'eval_steps_per_second': 51.199, 'epoch': 0.47}
+{'loss': 0.413, 'grad_norm': 0.4579493999481201, 'learning_rate': 4.788071403266696e-05, 'epoch': 0.47}
+{'eval_loss': 0.37437182664871216, 'eval_runtime': 120.3127, 'eval_samples_per_second': 519.047, 'eval_steps_per_second': 32.44, 'epoch': 0.47}
+{'loss': 0.4181, 'grad_norm': 0.5602650046348572, 'learning_rate': 4.78326850883131e-05, 'epoch': 0.48}
+{'eval_loss': 0.3750932216644287, 'eval_runtime': 116.8133, 'eval_samples_per_second': 534.597, 'eval_steps_per_second': 33.412, 'epoch': 0.48}
+{'loss': 0.4178, 'grad_norm': 0.4027921259403229, 'learning_rate': 4.7784142633008535e-05, 'epoch': 0.48}
+{'eval_loss': 0.3759942650794983, 'eval_runtime': 131.2447, 'eval_samples_per_second': 475.814, 'eval_steps_per_second': 29.738, 'epoch': 0.48}
+{'loss': 0.4105, 'grad_norm': 0.6266952157020569, 'learning_rate': 4.773508775848129e-05, 'epoch': 0.49}
+{'eval_loss': 0.37301820516586304, 'eval_runtime': 144.7964, 'eval_samples_per_second': 431.281, 'eval_steps_per_second': 26.955, 'epoch': 0.49}
+{'loss': 0.4159, 'grad_norm': 0.5054994225502014, 'learning_rate': 4.768552156798381e-05, 'epoch': 0.49}
+{'eval_loss': 0.3728788197040558, 'eval_runtime': 129.362, 'eval_samples_per_second': 482.738, 'eval_steps_per_second': 30.171, 'epoch': 0.49}
+{'loss': 0.4177, 'grad_norm': 0.4544034004211426, 'learning_rate': 4.76354451762681e-05, 'epoch': 0.5}
+{'eval_loss': 0.375459760427475, 'eval_runtime': 91.9713, 'eval_samples_per_second': 678.994, 'eval_steps_per_second': 42.437, 'epoch': 0.5}
+{'loss': 0.4149, 'grad_norm': 0.43109652400016785, 'learning_rate': 4.758485970956067e-05, 'epoch': 0.5}
+{'eval_loss': 0.3746248781681061, 'eval_runtime': 82.2115, 'eval_samples_per_second': 759.602, 'eval_steps_per_second': 47.475, 'epoch': 0.5}
+{'loss': 0.4184, 'grad_norm': 0.553602933883667, 'learning_rate': 4.7533766305537255e-05, 'epoch': 0.51}
+{'eval_loss': 0.374144583940506, 'eval_runtime': 110.6797, 'eval_samples_per_second': 564.223, 'eval_steps_per_second': 35.264, 'epoch': 0.51}
+{'loss': 0.4141, 'grad_norm': 0.5318209528923035, 'learning_rate': 4.748216611329712e-05, 'epoch': 0.51}
+{'eval_loss': 0.3744765520095825, 'eval_runtime': 125.2701, 'eval_samples_per_second': 498.507, 'eval_steps_per_second': 31.157, 'epoch': 0.51}
+{'loss': 0.4123, 'grad_norm': 0.4231971502304077, 'learning_rate': 4.7430060293337344e-05, 'epoch': 0.51}
+{'eval_loss': 0.3722990155220032, 'eval_runtime': 124.6413, 'eval_samples_per_second': 501.022, 'eval_steps_per_second': 31.314, 'epoch': 0.51}
+{'loss': 0.4103, 'grad_norm': 0.47178417444229126, 'learning_rate': 4.737745001752662e-05, 'epoch': 0.52}
+{'eval_loss': 0.3737989664077759, 'eval_runtime': 95.1036, 'eval_samples_per_second': 656.632, 'eval_steps_per_second': 41.039, 'epoch': 0.52}
+{'loss': 0.4136, 'grad_norm': 0.4382767975330353, 'learning_rate': 4.7324336469078954e-05, 'epoch': 0.52}
+{'eval_loss': 0.3740603029727936, 'eval_runtime': 137.9249, 'eval_samples_per_second': 452.768, 'eval_steps_per_second': 28.298, 'epoch': 0.52}
+{'loss': 0.4109, 'grad_norm': 0.5672354698181152, 'learning_rate': 4.727072084252705e-05, 'epoch': 0.53}
+{'eval_loss': 0.3734896183013916, 'eval_runtime': 83.5261, 'eval_samples_per_second': 747.647, 'eval_steps_per_second': 46.728, 'epoch': 0.53}
+{'loss': 0.4118, 'grad_norm': 0.5195208191871643, 'learning_rate': 4.7216604343695404e-05, 'epoch': 0.53}
+{'eval_loss': 0.37170031666755676, 'eval_runtime': 83.3826, 'eval_samples_per_second': 748.933, 'eval_steps_per_second': 46.808, 'epoch': 0.53}
+{'loss': 0.4103, 'grad_norm': 0.3873823583126068, 'learning_rate': 4.7161988189673236e-05, 'epoch': 0.54}
+{'eval_loss': 0.37232646346092224, 'eval_runtime': 105.1628, 'eval_samples_per_second': 593.822, 'eval_steps_per_second': 37.114, 'epoch': 0.54}
+{'loss': 0.41, 'grad_norm': 0.6069622039794922, 'learning_rate': 4.710687360878709e-05, 'epoch': 0.54}
+{'eval_loss': 0.3711327910423279, 'eval_runtime': 114.3097, 'eval_samples_per_second': 546.305, 'eval_steps_per_second': 34.144, 'epoch': 0.54}
+{'loss': 0.4079, 'grad_norm': 0.40274161100387573, 'learning_rate': 4.705126184057322e-05, 'epoch': 0.54}
+{'eval_loss': 0.375643253326416, 'eval_runtime': 117.051, 'eval_samples_per_second': 533.511, 'eval_steps_per_second': 33.344, 'epoch': 0.54}
+{'loss': 0.4141, 'grad_norm': 0.5107266902923584, 'learning_rate': 4.6995154135749696e-05, 'epoch': 0.55}
+{'eval_loss': 0.3729841411113739, 'eval_runtime': 141.2821, 'eval_samples_per_second': 442.009, 'eval_steps_per_second': 27.626, 'epoch': 0.55}
+{'loss': 0.4106, 'grad_norm': 0.46290239691734314, 'learning_rate': 4.6938551756188295e-05, 'epoch': 0.55}
+{'eval_loss': 0.37101924419403076, 'eval_runtime': 134.0261, 'eval_samples_per_second': 465.939, 'eval_steps_per_second': 29.121, 'epoch': 0.55}
+{'loss': 0.4145, 'grad_norm': 0.5005713701248169, 'learning_rate': 4.688145597488611e-05, 'epoch': 0.56}
+{'eval_loss': 0.37098807096481323, 'eval_runtime': 94.8347, 'eval_samples_per_second': 658.493, 'eval_steps_per_second': 41.156, 'epoch': 0.56}
+{'loss': 0.4094, 'grad_norm': 0.4583738446235657, 'learning_rate': 4.682386807593693e-05, 'epoch': 0.56}
+{'eval_loss': 0.37096574902534485, 'eval_runtime': 81.5932, 'eval_samples_per_second': 765.357, 'eval_steps_per_second': 47.835, 'epoch': 0.56}
+{'loss': 0.4149, 'grad_norm': 0.5324410796165466, 'learning_rate': 4.6765789354502325e-05, 'epoch': 0.57}
+{'eval_loss': 0.371663361787796, 'eval_runtime': 113.9568, 'eval_samples_per_second': 547.997, 'eval_steps_per_second': 34.25, 'epoch': 0.57}
+{'loss': 0.4103, 'grad_norm': 0.5413331389427185, 'learning_rate': 4.6707221116782584e-05, 'epoch': 0.57}
+{'eval_loss': 0.3710411489009857, 'eval_runtime': 118.7324, 'eval_samples_per_second': 525.956, 'eval_steps_per_second': 32.872, 'epoch': 0.57}
+{'loss': 0.4144, 'grad_norm': 0.48495498299598694, 'learning_rate': 4.664816467998728e-05, 'epoch': 0.58}
+{'eval_loss': 0.36933138966560364, 'eval_runtime': 130.127, 'eval_samples_per_second': 479.901, 'eval_steps_per_second': 29.994, 'epoch': 0.58}
+{'loss': 0.4146, 'grad_norm': 0.4834273159503937, 'learning_rate': 4.658862137230566e-05, 'epoch': 0.58}
+{'eval_loss': 0.3700123429298401, 'eval_runtime': 133.7953, 'eval_samples_per_second': 466.743, 'eval_steps_per_second': 29.171, 'epoch': 0.58}
+{'loss': 0.4108, 'grad_norm': 0.4542626142501831, 'learning_rate': 4.652859253287681e-05, 'epoch': 0.58}
+{'eval_loss': 0.37077397108078003, 'eval_runtime': 141.0495, 'eval_samples_per_second': 442.738, 'eval_steps_per_second': 27.671, 'epoch': 0.58}
+{'loss': 0.409, 'grad_norm': 0.5021551847457886, 'learning_rate': 4.646807951175946e-05, 'epoch': 0.59}
+{'eval_loss': 0.36945202946662903, 'eval_runtime': 88.2901, 'eval_samples_per_second': 707.305, 'eval_steps_per_second': 44.207, 'epoch': 0.59}
+{'loss': 0.4017, 'grad_norm': 0.43930497765541077, 'learning_rate': 4.6407083669901694e-05, 'epoch': 0.59}
+{'eval_loss': 0.368845671415329, 'eval_runtime': 93.2115, 'eval_samples_per_second': 669.96, 'eval_steps_per_second': 41.873, 'epoch': 0.59}
+{'loss': 0.4117, 'grad_norm': 0.4724309742450714, 'learning_rate': 4.6345606379110326e-05, 'epoch': 0.6}
+{'eval_loss': 0.3704380393028259, 'eval_runtime': 110.8313, 'eval_samples_per_second': 563.451, 'eval_steps_per_second': 35.216, 'epoch': 0.6}
+{'loss': 0.4167, 'grad_norm': 0.5238968729972839, 'learning_rate': 4.628364902202004e-05, 'epoch': 0.6}
+{'eval_loss': 0.3690914511680603, 'eval_runtime': 117.5254, 'eval_samples_per_second': 531.358, 'eval_steps_per_second': 33.21, 'epoch': 0.6}
+{'loss': 0.4073, 'grad_norm': 0.4682387411594391, 'learning_rate': 4.6221212992062244e-05, 'epoch': 0.61}
+{'eval_loss': 0.36788707971572876, 'eval_runtime': 120.9459, 'eval_samples_per_second': 516.33, 'eval_steps_per_second': 32.271, 'epoch': 0.61}
+{'loss': 0.409, 'grad_norm': 0.45870447158813477, 'learning_rate': 4.615829969343385e-05, 'epoch': 0.61}
+{'eval_loss': 0.36832132935523987, 'eval_runtime': 139.5028, 'eval_samples_per_second': 447.647, 'eval_steps_per_second': 27.978, 'epoch': 0.61}
+{'loss': 0.408, 'grad_norm': 0.46325814723968506, 'learning_rate': 4.6094910541065564e-05, 'epoch': 0.61}
+{'eval_loss': 0.36901959776878357, 'eval_runtime': 84.9439, 'eval_samples_per_second': 735.168, 'eval_steps_per_second': 45.948, 'epoch': 0.61}
+{'loss': 0.4101, 'grad_norm': 0.42657470703125, 'learning_rate': 4.603104696059016e-05, 'epoch': 0.62}
+{'eval_loss': 0.367876797914505, 'eval_runtime': 85.3892, 'eval_samples_per_second': 731.334, 'eval_steps_per_second': 45.708, 'epoch': 0.62}
+{'loss': 0.4097, 'grad_norm': 0.46112215518951416, 'learning_rate': 4.596671038831037e-05, 'epoch': 0.62}
+{'eval_loss': 0.36871227622032166, 'eval_runtime': 114.7531, 'eval_samples_per_second': 544.194, 'eval_steps_per_second': 34.012, 'epoch': 0.62}
+{'loss': 0.4063, 'grad_norm': 0.4977983236312866, 'learning_rate': 4.590190227116659e-05, 'epoch': 0.63}
+{'eval_loss': 0.36876916885375977, 'eval_runtime': 132.1652, 'eval_samples_per_second': 472.499, 'eval_steps_per_second': 29.531, 'epoch': 0.63}
+{'loss': 0.4092, 'grad_norm': 0.43757128715515137, 'learning_rate': 4.5836624066704326e-05, 'epoch': 0.63}
+{'eval_loss': 0.3676663339138031, 'eval_runtime': 125.5424, 'eval_samples_per_second': 497.426, 'eval_steps_per_second': 31.089, 'epoch': 0.63}
+{'loss': 0.4116, 'grad_norm': 0.5049664974212646, 'learning_rate': 4.577087724304146e-05, 'epoch': 0.64}
+{'eval_loss': 0.36832407116889954, 'eval_runtime': 125.0145, 'eval_samples_per_second': 499.526, 'eval_steps_per_second': 31.22, 'epoch': 0.64}
+{'loss': 0.411, 'grad_norm': 0.4957742989063263, 'learning_rate': 4.5704663278835144e-05, 'epoch': 0.64}
+{'eval_loss': 0.36823660135269165, 'eval_runtime': 124.6231, 'eval_samples_per_second': 501.095, 'eval_steps_per_second': 31.318, 'epoch': 0.64}
+{'loss': 0.4045, 'grad_norm': 0.4239470660686493, 'learning_rate': 4.5637983663248684e-05, 'epoch': 0.65}
+{'eval_loss': 0.36874324083328247, 'eval_runtime': 90.5675, 'eval_samples_per_second': 689.519, 'eval_steps_per_second': 43.095, 'epoch': 0.65}
+{'loss': 0.4086, 'grad_norm': 0.571670413017273, 'learning_rate': 4.5570839895917896e-05, 'epoch': 0.65}
+{'eval_loss': 0.36759719252586365, 'eval_runtime': 91.5202, 'eval_samples_per_second': 682.341, 'eval_steps_per_second': 42.646, 'epoch': 0.65}
+{'loss': 0.4016, 'grad_norm': 0.4806293249130249, 'learning_rate': 4.550323348691745e-05, 'epoch': 0.65}
+{'eval_loss': 0.36645451188087463, 'eval_runtime': 127.8602, 'eval_samples_per_second': 488.409, 'eval_steps_per_second': 30.526, 'epoch': 0.65}
+{'loss': 0.4054, 'grad_norm': 0.5579434633255005, 'learning_rate': 4.5435165956726945e-05, 'epoch': 0.66}
+{'eval_loss': 0.3671322166919708, 'eval_runtime': 137.8007, 'eval_samples_per_second': 453.176, 'eval_steps_per_second': 28.324, 'epoch': 0.66}
+{'loss': 0.4054, 'grad_norm': 0.5343590974807739, 'learning_rate': 4.536663883619664e-05, 'epoch': 0.66}
+{'eval_loss': 0.3667144775390625, 'eval_runtime': 120.2469, 'eval_samples_per_second': 519.331, 'eval_steps_per_second': 32.458, 'epoch': 0.66}
+{'loss': 0.4034, 'grad_norm': 0.43134185671806335, 'learning_rate': 4.529765366651307e-05, 'epoch': 0.67}
+{'eval_loss': 0.36647868156433105, 'eval_runtime': 120.8756, 'eval_samples_per_second': 516.63, 'eval_steps_per_second': 32.289, 'epoch': 0.67}
+{'loss': 0.4068, 'grad_norm': 0.5578097105026245, 'learning_rate': 4.5228211999164386e-05, 'epoch': 0.67}
+{'eval_loss': 0.36727964878082275, 'eval_runtime': 91.8583, 'eval_samples_per_second': 679.829, 'eval_steps_per_second': 42.489, 'epoch': 0.67}
+{'loss': 0.4093, 'grad_norm': 0.41011855006217957, 'learning_rate': 4.515831539590543e-05, 'epoch': 0.68}
+{'eval_loss': 0.3670326769351959, 'eval_runtime': 80.2065, 'eval_samples_per_second': 778.591, 'eval_steps_per_second': 48.662, 'epoch': 0.68}
+{'loss': 0.4109, 'grad_norm': 0.44767138361930847, 'learning_rate': 4.508796542872262e-05, 'epoch': 0.68}
+{'eval_loss': 0.36649417877197266, 'eval_runtime': 114.7418, 'eval_samples_per_second': 544.248, 'eval_steps_per_second': 34.016, 'epoch': 0.68}
+{'loss': 0.4091, 'grad_norm': 0.4349381625652313, 'learning_rate': 4.501716367979864e-05, 'epoch': 0.69}
+{'eval_loss': 0.3655380606651306, 'eval_runtime': 145.924, 'eval_samples_per_second': 427.949, 'eval_steps_per_second': 26.747, 'epoch': 0.69}
+{'loss': 0.4098, 'grad_norm': 0.45651066303253174, 'learning_rate': 4.494591174147678e-05, 'epoch': 0.69}
+{'eval_loss': 0.36726322770118713, 'eval_runtime': 110.6814, 'eval_samples_per_second': 564.214, 'eval_steps_per_second': 35.263, 'epoch': 0.69}
+{'loss': 0.4042, 'grad_norm': 0.5622753500938416, 'learning_rate': 4.487421121622521e-05, 'epoch': 0.69}
+{'eval_loss': 0.36683568358421326, 'eval_runtime': 174.7508, 'eval_samples_per_second': 357.355, 'eval_steps_per_second': 22.335, 'epoch': 0.69}
+{'loss': 0.4015, 'grad_norm': 0.4718138575553894, 'learning_rate': 4.4802063716600866e-05, 'epoch': 0.7}
+{'eval_loss': 0.36756452918052673, 'eval_runtime': 102.4595, 'eval_samples_per_second': 609.49, 'eval_steps_per_second': 38.093, 'epoch': 0.7}
+{'loss': 0.4055, 'grad_norm': 0.5618736743927002, 'learning_rate': 4.472947086521322e-05, 'epoch': 0.7}
+{'eval_loss': 0.3674260079860687, 'eval_runtime': 90.3161, 'eval_samples_per_second': 691.438, 'eval_steps_per_second': 43.215, 'epoch': 0.7}
+{'loss': 0.3998, 'grad_norm': 0.41628313064575195, 'learning_rate': 4.4656434294687785e-05, 'epoch': 0.71}
+{'eval_loss': 0.3670450747013092, 'eval_runtime': 100.6789, 'eval_samples_per_second': 620.269, 'eval_steps_per_second': 38.767, 'epoch': 0.71}
+{'loss': 0.4009, 'grad_norm': 0.4917495846748352, 'learning_rate': 4.458295564762939e-05, 'epoch': 0.71}
+{'eval_loss': 0.36591091752052307, 'eval_runtime': 120.4175, 'eval_samples_per_second': 518.596, 'eval_steps_per_second': 32.412, 'epoch': 0.71}
+{'loss': 0.401, 'grad_norm': 0.5703805088996887, 'learning_rate': 4.450903657658522e-05, 'epoch': 0.72}
+{'eval_loss': 0.3645976781845093, 'eval_runtime': 133.7612, 'eval_samples_per_second': 466.862, 'eval_steps_per_second': 29.179, 'epoch': 0.72}
+{'loss': 0.4028, 'grad_norm': 0.44861626625061035, 'learning_rate': 4.4434678744007716e-05, 'epoch': 0.72}
+{'eval_loss': 0.36582890152931213, 'eval_runtime': 136.6168, 'eval_samples_per_second': 457.103, 'eval_steps_per_second': 28.569, 'epoch': 0.72}
+{'loss': 0.4017, 'grad_norm': 0.48352286219596863, 'learning_rate': 4.435988382221711e-05, 'epoch': 0.72}
+{'eval_loss': 0.3646637499332428, 'eval_runtime': 142.195, 'eval_samples_per_second': 439.172, 'eval_steps_per_second': 27.448, 'epoch': 0.72}
+{'loss': 0.4045, 'grad_norm': 0.3920513391494751, 'learning_rate': 4.4284653493363825e-05, 'epoch': 0.73}
+{'eval_loss': 0.3652428984642029, 'eval_runtime': 95.2681, 'eval_samples_per_second': 655.497, 'eval_steps_per_second': 40.969, 'epoch': 0.73}
+{'loss': 0.4016, 'grad_norm': 0.5262975692749023, 'learning_rate': 4.4208989449390714e-05, 'epoch': 0.73}
+{'eval_loss': 0.36649009585380554, 'eval_runtime': 102.1982, 'eval_samples_per_second': 611.048, 'eval_steps_per_second': 38.19, 'epoch': 0.73}
+{'loss': 0.3985, 'grad_norm': 0.4830452799797058, 'learning_rate': 4.41328933919949e-05, 'epoch': 0.74}
+{'eval_loss': 0.3651014268398285, 'eval_runtime': 117.5956, 'eval_samples_per_second': 531.04, 'eval_steps_per_second': 33.19, 'epoch': 0.74}
+{'loss': 0.4026, 'grad_norm': 0.4814671277999878, 'learning_rate': 4.405636703258961e-05, 'epoch': 0.74}
+{'eval_loss': 0.3652998208999634, 'eval_runtime': 119.8531, 'eval_samples_per_second': 521.038, 'eval_steps_per_second': 32.565, 'epoch': 0.74}
+{'loss': 0.4008, 'grad_norm': 0.45466408133506775, 'learning_rate': 4.3979412092265605e-05, 'epoch': 0.75}
+{'eval_loss': 0.36402595043182373, 'eval_runtime': 109.8336, 'eval_samples_per_second': 568.569, 'eval_steps_per_second': 35.536, 'epoch': 0.75}
+{'loss': 0.3993, 'grad_norm': 0.4847424030303955, 'learning_rate': 4.3902030301752516e-05,
'epoch': 0.75} +{'eval_loss': 0.3640923798084259, 'eval_runtime': 127.5697, 'eval_samples_per_second': 489.521, 'eval_steps_per_second': 30.595, 'epoch': 0.75} +{'loss': 0.4045, 'grad_norm': 0.6004173755645752, 'learning_rate': 4.382422340137991e-05, 'epoch': 0.76} +{'eval_loss': 0.36500751972198486, 'eval_runtime': 121.2107, 'eval_samples_per_second': 515.202, 'eval_steps_per_second': 32.2, 'epoch': 0.76} +{'loss': 0.4031, 'grad_norm': 0.46233096718788147, 'learning_rate': 4.374599314103812e-05, 'epoch': 0.76} +{'eval_loss': 0.3653023838996887, 'eval_runtime': 82.672, 'eval_samples_per_second': 755.371, 'eval_steps_per_second': 47.211, 'epoch': 0.76} +{'loss': 0.3996, 'grad_norm': 0.5082074999809265, 'learning_rate': 4.366734128013896e-05, 'epoch': 0.76} +{'eval_loss': 0.36377328634262085, 'eval_runtime': 82.9406, 'eval_samples_per_second': 752.924, 'eval_steps_per_second': 47.058, 'epoch': 0.76} +{'loss': 0.3985, 'grad_norm': 0.521959662437439, 'learning_rate': 4.358826958757607e-05, 'epoch': 0.77} +{'eval_loss': 0.3654593825340271, 'eval_runtime': 120.406, 'eval_samples_per_second': 518.645, 'eval_steps_per_second': 32.415, 'epoch': 0.77} +{'loss': 0.3988, 'grad_norm': 0.49787694215774536, 'learning_rate': 4.350877984168521e-05, 'epoch': 0.77} +{'eval_loss': 0.3641042411327362, 'eval_runtime': 115.1217, 'eval_samples_per_second': 542.452, 'eval_steps_per_second': 33.903, 'epoch': 0.77} +{'loss': 0.3985, 'grad_norm': 0.5057880282402039, 'learning_rate': 4.3428873830204205e-05, 'epoch': 0.78} +{'eval_loss': 0.3642823100090027, 'eval_runtime': 116.6843, 'eval_samples_per_second': 535.188, 'eval_steps_per_second': 33.449, 'epoch': 0.78} +{'loss': 0.4007, 'grad_norm': 0.5067749619483948, 'learning_rate': 4.334855335023277e-05, 'epoch': 0.78} +{'eval_loss': 0.3660070598125458, 'eval_runtime': 126.8399, 'eval_samples_per_second': 492.337, 'eval_steps_per_second': 30.771, 'epoch': 0.78} +{'loss': 0.4007, 'grad_norm': 0.5921573042869568, 'learning_rate': 
4.326782020819209e-05, 'epoch': 0.79} +{'eval_loss': 0.36343204975128174, 'eval_runtime': 114.9311, 'eval_samples_per_second': 543.351, 'eval_steps_per_second': 33.959, 'epoch': 0.79} +{'loss': 0.4001, 'grad_norm': 0.47490814328193665, 'learning_rate': 4.3186676219784205e-05, 'epoch': 0.79} +{'eval_loss': 0.3632364869117737, 'eval_runtime': 93.131, 'eval_samples_per_second': 670.54, 'eval_steps_per_second': 41.909, 'epoch': 0.79} +{'loss': 0.3978, 'grad_norm': 0.44013094902038574, 'learning_rate': 4.310512320995113e-05, 'epoch': 0.79} +{'eval_loss': 0.36435666680336, 'eval_runtime': 92.8376, 'eval_samples_per_second': 672.658, 'eval_steps_per_second': 42.041, 'epoch': 0.79} +{'loss': 0.3995, 'grad_norm': 0.5799044370651245, 'learning_rate': 4.302316301283384e-05, 'epoch': 0.8} +{'eval_loss': 0.363454133272171, 'eval_runtime': 106.2734, 'eval_samples_per_second': 587.617, 'eval_steps_per_second': 36.726, 'epoch': 0.8} +{'loss': 0.3978, 'grad_norm': 0.46712014079093933, 'learning_rate': 4.294079747173105e-05, 'epoch': 0.8} +{'eval_loss': 0.3631080985069275, 'eval_runtime': 115.818, 'eval_samples_per_second': 539.191, 'eval_steps_per_second': 33.699, 'epoch': 0.8} +{'loss': 0.3997, 'grad_norm': 0.603742778301239, 'learning_rate': 4.285802843905773e-05, 'epoch': 0.81} +{'eval_loss': 0.3648291826248169, 'eval_runtime': 127.3229, 'eval_samples_per_second': 490.469, 'eval_steps_per_second': 30.654, 'epoch': 0.81} +{'loss': 0.396, 'grad_norm': 0.4908153712749481, 'learning_rate': 4.2774857776303404e-05, 'epoch': 0.81} +{'eval_loss': 0.3642762303352356, 'eval_runtime': 128.276, 'eval_samples_per_second': 486.825, 'eval_steps_per_second': 30.427, 'epoch': 0.81} +{'loss': 0.3985, 'grad_norm': 0.44930770993232727, 'learning_rate': 4.269128735399035e-05, 'epoch': 0.82} +{'eval_loss': 0.3655836582183838, 'eval_runtime': 125.9784, 'eval_samples_per_second': 495.704, 'eval_steps_per_second': 30.982, 'epoch': 0.82} +{'loss': 0.3968, 'grad_norm': 0.4365586042404175, 'learning_rate': 
4.260731905163152e-05, 'epoch': 0.82} +{'eval_loss': 0.36502814292907715, 'eval_runtime': 95.9713, 'eval_samples_per_second': 650.695, 'eval_steps_per_second': 40.668, 'epoch': 0.82} +{'loss': 0.3948, 'grad_norm': 0.4811830520629883, 'learning_rate': 4.2522954757688224e-05, 'epoch': 0.83} +{'eval_loss': 0.36377620697021484, 'eval_runtime': 105.0215, 'eval_samples_per_second': 594.621, 'eval_steps_per_second': 37.164, 'epoch': 0.83} +{'loss': 0.3932, 'grad_norm': 0.4613575339317322, 'learning_rate': 4.243819636952773e-05, 'epoch': 0.83} +{'eval_loss': 0.3649917542934418, 'eval_runtime': 132.6932, 'eval_samples_per_second': 470.619, 'eval_steps_per_second': 29.414, 'epoch': 0.83} +{'loss': 0.3979, 'grad_norm': 0.3941692113876343, 'learning_rate': 4.235304579338051e-05, 'epoch': 0.83} +{'eval_loss': 0.36392855644226074, 'eval_runtime': 135.7973, 'eval_samples_per_second': 459.862, 'eval_steps_per_second': 28.741, 'epoch': 0.83} +{'loss': 0.401, 'grad_norm': 0.5205948352813721, 'learning_rate': 4.226750494429743e-05, 'epoch': 0.84} +{'eval_loss': 0.3636823892593384, 'eval_runtime': 139.1823, 'eval_samples_per_second': 448.678, 'eval_steps_per_second': 28.042, 'epoch': 0.84} +{'loss': 0.3955, 'grad_norm': 0.5491565465927124, 'learning_rate': 4.2181575746106655e-05, 'epoch': 0.84} +{'eval_loss': 0.36260080337524414, 'eval_runtime': 114.3276, 'eval_samples_per_second': 546.22, 'eval_steps_per_second': 34.139, 'epoch': 0.84} +{'loss': 0.3969, 'grad_norm': 0.46972137689590454, 'learning_rate': 4.20952601313704e-05, 'epoch': 0.85} +{'eval_loss': 0.3631291091442108, 'eval_runtime': 88.7909, 'eval_samples_per_second': 703.315, 'eval_steps_per_second': 43.957, 'epoch': 0.85} +{'loss': 0.4007, 'grad_norm': 0.7942949533462524, 'learning_rate': 4.2008560041341435e-05, 'epoch': 0.85} +{'eval_loss': 0.363958477973938, 'eval_runtime': 91.053, 'eval_samples_per_second': 685.842, 'eval_steps_per_second': 42.865, 'epoch': 0.85} +{'loss': 0.3963, 'grad_norm': 0.4735831320285797, 
'learning_rate': 4.192147742591947e-05, 'epoch': 0.86} +{'eval_loss': 0.3637272119522095, 'eval_runtime': 111.7946, 'eval_samples_per_second': 558.596, 'eval_steps_per_second': 34.912, 'epoch': 0.86} +{'loss': 0.393, 'grad_norm': 0.4975745975971222, 'learning_rate': 4.183401424360724e-05, 'epoch': 0.86} +{'eval_loss': 0.3626266121864319, 'eval_runtime': 97.2857, 'eval_samples_per_second': 641.903, 'eval_steps_per_second': 40.119, 'epoch': 0.86} +{'loss': 0.3945, 'grad_norm': 0.5676069259643555, 'learning_rate': 4.1746172461466525e-05, 'epoch': 0.87} +{'eval_loss': 0.362351655960083, 'eval_runtime': 117.6427, 'eval_samples_per_second': 530.828, 'eval_steps_per_second': 33.177, 'epoch': 0.87} +{'loss': 0.3915, 'grad_norm': 0.5506062507629395, 'learning_rate': 4.1657954055073866e-05, 'epoch': 0.87} +{'eval_loss': 0.364271879196167, 'eval_runtime': 119.0863, 'eval_samples_per_second': 524.393, 'eval_steps_per_second': 32.775, 'epoch': 0.87} +{'loss': 0.3972, 'grad_norm': 0.4566323161125183, 'learning_rate': 4.1569361008476146e-05, 'epoch': 0.87} +{'eval_loss': 0.3637408912181854, 'eval_runtime': 81.7122, 'eval_samples_per_second': 764.243, 'eval_steps_per_second': 47.765, 'epoch': 0.87} +{'loss': 0.393, 'grad_norm': 0.40000104904174805, 'learning_rate': 4.148039531414596e-05, 'epoch': 0.88} +{'eval_loss': 0.3632229268550873, 'eval_runtime': 87.5892, 'eval_samples_per_second': 712.964, 'eval_steps_per_second': 44.56, 'epoch': 0.88} +{'loss': 0.3942, 'grad_norm': 0.5483269095420837, 'learning_rate': 4.1391058972936856e-05, 'epoch': 0.88} +{'eval_loss': 0.36196622252464294, 'eval_runtime': 132.0301, 'eval_samples_per_second': 472.983, 'eval_steps_per_second': 29.561, 'epoch': 0.88} +{'loss': 0.3905, 'grad_norm': 0.43077611923217773, 'learning_rate': 4.130135399403823e-05, 'epoch': 0.89} +{'eval_loss': 0.36227133870124817, 'eval_runtime': 106.5906, 'eval_samples_per_second': 585.868, 'eval_steps_per_second': 36.617, 'epoch': 0.89} +{'loss': 0.3938, 'grad_norm': 
0.44234204292297363, 'learning_rate': 4.121128239493025e-05, 'epoch': 0.89} +{'eval_loss': 0.36203280091285706, 'eval_runtime': 142.3455, 'eval_samples_per_second': 438.707, 'eval_steps_per_second': 27.419, 'epoch': 0.89} +{'loss': 0.3957, 'grad_norm': 0.5302680134773254, 'learning_rate': 4.1120846201338424e-05, 'epoch': 0.9} +{'eval_loss': 0.362118124961853, 'eval_runtime': 120.6973, 'eval_samples_per_second': 517.394, 'eval_steps_per_second': 32.337, 'epoch': 0.9} +{'loss': 0.3948, 'grad_norm': 0.5230308175086975, 'learning_rate': 4.103004744718805e-05, 'epoch': 0.9} +{'eval_loss': 0.3617578148841858, 'eval_runtime': 93.6694, 'eval_samples_per_second': 666.685, 'eval_steps_per_second': 41.668, 'epoch': 0.9} +{'loss': 0.3908, 'grad_norm': 0.535275936126709, 'learning_rate': 4.093888817455844e-05, 'epoch': 0.9} +{'eval_loss': 0.362557053565979, 'eval_runtime': 79.3035, 'eval_samples_per_second': 787.456, 'eval_steps_per_second': 49.216, 'epoch': 0.9} +{'loss': 0.3959, 'grad_norm': 0.62333083152771, 'learning_rate': 4.08473704336371e-05, 'epoch': 0.91} +{'eval_loss': 0.3641321063041687, 'eval_runtime': 118.1903, 'eval_samples_per_second': 528.368, 'eval_steps_per_second': 33.023, 'epoch': 0.91} +{'loss': 0.3917, 'grad_norm': 0.5016458034515381, 'learning_rate': 4.075549628267347e-05, 'epoch': 0.91} +{'eval_loss': 0.362307071685791, 'eval_runtime': 127.2008, 'eval_samples_per_second': 490.94, 'eval_steps_per_second': 30.684, 'epoch': 0.91} +{'loss': 0.3928, 'grad_norm': 0.47533100843429565, 'learning_rate': 4.066326778793278e-05, 'epoch': 0.92} +{'eval_loss': 0.3614982068538666, 'eval_runtime': 124.2662, 'eval_samples_per_second': 502.534, 'eval_steps_per_second': 31.408, 'epoch': 0.92} +{'loss': 0.3896, 'grad_norm': 0.4386855959892273, 'learning_rate': 4.057068702364947e-05, 'epoch': 0.92} +{'eval_loss': 0.3633500933647156, 'eval_runtime': 107.6663, 'eval_samples_per_second': 580.014, 'eval_steps_per_second': 36.251, 'epoch': 0.92} +{'loss': 0.3895, 'grad_norm': 
0.40361031889915466, 'learning_rate': 4.0477756071980576e-05, 'epoch': 0.93} +{'eval_loss': 0.36147207021713257, 'eval_runtime': 91.9879, 'eval_samples_per_second': 678.872, 'eval_steps_per_second': 42.43, 'epoch': 0.93} +{'loss': 0.3926, 'grad_norm': 0.4934155344963074, 'learning_rate': 4.038447702295895e-05, 'epoch': 0.93} +{'eval_loss': 0.361216276884079, 'eval_runtime': 87.7247, 'eval_samples_per_second': 711.864, 'eval_steps_per_second': 44.491, 'epoch': 0.93} +{'loss': 0.3912, 'grad_norm': 0.49049246311187744, 'learning_rate': 4.029085197444618e-05, 'epoch': 0.94} +{'eval_loss': 0.3619934916496277, 'eval_runtime': 119.9708, 'eval_samples_per_second': 520.526, 'eval_steps_per_second': 32.533, 'epoch': 0.94} +{'loss': 0.3918, 'grad_norm': 0.47213834524154663, 'learning_rate': 4.019688303208543e-05, 'epoch': 0.94} +{'eval_loss': 0.3629037141799927, 'eval_runtime': 132.3944, 'eval_samples_per_second': 471.682, 'eval_steps_per_second': 29.48, 'epoch': 0.94} +{'loss': 0.3949, 'grad_norm': 0.46632280945777893, 'learning_rate': 4.0102572309254136e-05, 'epoch': 0.94} +{'eval_loss': 0.3618938624858856, 'eval_runtime': 110.1869, 'eval_samples_per_second': 566.746, 'eval_steps_per_second': 35.422, 'epoch': 0.94} +{'loss': 0.3955, 'grad_norm': 0.5080628395080566, 'learning_rate': 4.00079219270164e-05, 'epoch': 0.95} +{'eval_loss': 0.3615620732307434, 'eval_runtime': 126.4499, 'eval_samples_per_second': 493.856, 'eval_steps_per_second': 30.866, 'epoch': 0.95} +{'loss': 0.39, 'grad_norm': 0.5132511258125305, 'learning_rate': 3.9912934014075324e-05, 'epoch': 0.95} +{'eval_loss': 0.3622210919857025, 'eval_runtime': 83.1401, 'eval_samples_per_second': 751.117, 'eval_steps_per_second': 46.945, 'epoch': 0.95} +{'loss': 0.3856, 'grad_norm': 0.5110806226730347, 'learning_rate': 3.9817610706725155e-05, 'epoch': 0.96} +{'eval_loss': 0.3623315989971161, 'eval_runtime': 79.7524, 'eval_samples_per_second': 783.023, 'eval_steps_per_second': 48.939, 'epoch': 0.96} +{'loss': 0.3926, 
'grad_norm': 0.5235128998756409, 'learning_rate': 3.9721954148803195e-05, 'epoch': 0.96} +{'eval_loss': 0.3611941933631897, 'eval_runtime': 103.2182, 'eval_samples_per_second': 605.01, 'eval_steps_per_second': 37.813, 'epoch': 0.96} +{'loss': 0.3903, 'grad_norm': 0.5954607725143433, 'learning_rate': 3.962596649164162e-05, 'epoch': 0.97} +{'eval_loss': 0.36093491315841675, 'eval_runtime': 105.3078, 'eval_samples_per_second': 593.004, 'eval_steps_per_second': 37.063, 'epoch': 0.97} +{'loss': 0.3919, 'grad_norm': 0.5522266626358032, 'learning_rate': 3.952964989401908e-05, 'epoch': 0.97} +{'eval_loss': 0.3618551194667816, 'eval_runtime': 131.6356, 'eval_samples_per_second': 474.4, 'eval_steps_per_second': 29.65, 'epoch': 0.97} +{'loss': 0.392, 'grad_norm': 0.5129221081733704, 'learning_rate': 3.943300652211215e-05, 'epoch': 0.98} +{'eval_loss': 0.36146780848503113, 'eval_runtime': 126.0096, 'eval_samples_per_second': 495.581, 'eval_steps_per_second': 30.974, 'epoch': 0.98} +{'loss': 0.3946, 'grad_norm': 0.4617569148540497, 'learning_rate': 3.9336038549446605e-05, 'epoch': 0.98} +{'eval_loss': 0.3608095049858093, 'eval_runtime': 97.7698, 'eval_samples_per_second': 638.725, 'eval_steps_per_second': 39.92, 'epoch': 0.98} +{'loss': 0.3886, 'grad_norm': 0.41871944069862366, 'learning_rate': 3.923874815684857e-05, 'epoch': 0.98} +{'eval_loss': 0.3618209660053253, 'eval_runtime': 76.5732, 'eval_samples_per_second': 815.533, 'eval_steps_per_second': 50.971, 'epoch': 0.98} +{'loss': 0.3925, 'grad_norm': 0.5305113196372986, 'learning_rate': 3.914113753239544e-05, 'epoch': 0.99} +{'eval_loss': 0.36105629801750183, 'eval_runtime': 108.2716, 'eval_samples_per_second': 576.772, 'eval_steps_per_second': 36.048, 'epoch': 0.99} +{'loss': 0.3937, 'grad_norm': 0.655860185623169, 'learning_rate': 3.904320887136665e-05, 'epoch': 0.99} +{'eval_loss': 0.36069878935813904, 'eval_runtime': 128.2489, 'eval_samples_per_second': 486.928, 'eval_steps_per_second': 30.433, 'epoch': 0.99} +{'loss': 
0.3892, 'grad_norm': 0.44988998770713806, 'learning_rate': 3.8944964376194385e-05, 'epoch': 1.0} +{'eval_loss': 0.3608449697494507, 'eval_runtime': 116.9723, 'eval_samples_per_second': 533.87, 'eval_steps_per_second': 33.367, 'epoch': 1.0} +{'loss': 0.3875, 'grad_norm': 0.46403780579566956, 'learning_rate': 3.8846406256413933e-05, 'epoch': 1.0} +{'eval_loss': 0.36095595359802246, 'eval_runtime': 117.844, 'eval_samples_per_second': 529.921, 'eval_steps_per_second': 33.12, 'epoch': 1.0} +{'loss': 0.3875, 'grad_norm': 0.43470218777656555, 'learning_rate': 3.874753672861411e-05, 'epoch': 1.01} +{'eval_loss': 0.3602270483970642, 'eval_runtime': 136.9174, 'eval_samples_per_second': 456.1, 'eval_steps_per_second': 28.506, 'epoch': 1.01} +{'loss': 0.3876, 'grad_norm': 0.43704667687416077, 'learning_rate': 3.864835801638731e-05, 'epoch': 1.01} +{'eval_loss': 0.3603985905647278, 'eval_runtime': 89.9378, 'eval_samples_per_second': 694.347, 'eval_steps_per_second': 43.397, 'epoch': 1.01} +{'loss': 0.3866, 'grad_norm': 0.4803130328655243, 'learning_rate': 3.8548872350279555e-05, 'epoch': 1.01} +{'eval_loss': 0.36090967059135437, 'eval_runtime': 75.5313, 'eval_samples_per_second': 826.783, 'eval_steps_per_second': 51.674, 'epoch': 1.01} +{'loss': 0.3875, 'grad_norm': 0.4769569933414459, 'learning_rate': 3.84490819677403e-05, 'epoch': 1.02} +{'eval_loss': 0.360583633184433, 'eval_runtime': 123.7483, 'eval_samples_per_second': 504.637, 'eval_steps_per_second': 31.54, 'epoch': 1.02} +{'loss': 0.3825, 'grad_norm': 0.4240330755710602, 'learning_rate': 3.834898911307214e-05, 'epoch': 1.02} +{'eval_loss': 0.36190542578697205, 'eval_runtime': 116.3424, 'eval_samples_per_second': 536.76, 'eval_steps_per_second': 33.548, 'epoch': 1.02} +{'loss': 0.3803, 'grad_norm': 0.4503519535064697, 'learning_rate': 3.824859603738031e-05, 'epoch': 1.03} +{'eval_loss': 0.360375314950943, 'eval_runtime': 110.9676, 'eval_samples_per_second': 562.759, 'eval_steps_per_second': 35.172, 'epoch': 1.03} 
+{'loss': 0.3855, 'grad_norm': 0.45445939898490906, 'learning_rate': 3.8147904998522065e-05, 'epoch': 1.03} +{'eval_loss': 0.36017200350761414, 'eval_runtime': 152.2052, 'eval_samples_per_second': 410.288, 'eval_steps_per_second': 25.643, 'epoch': 1.03} +{'loss': 0.3831, 'grad_norm': 0.3759046196937561, 'learning_rate': 3.8046918261055906e-05, 'epoch': 1.04} +{'eval_loss': 0.35999542474746704, 'eval_runtime': 96.2935, 'eval_samples_per_second': 648.517, 'eval_steps_per_second': 40.532, 'epoch': 1.04} +{'loss': 0.3878, 'grad_norm': 0.4808911681175232, 'learning_rate': 3.794563809619065e-05, 'epoch': 1.04} +{'eval_loss': 0.3603195548057556, 'eval_runtime': 80.3862, 'eval_samples_per_second': 776.849, 'eval_steps_per_second': 48.553, 'epoch': 1.04} +{'loss': 0.3881, 'grad_norm': 0.5815185308456421, 'learning_rate': 3.784406678173433e-05, 'epoch': 1.05} +{'eval_loss': 0.36052319407463074, 'eval_runtime': 124.5279, 'eval_samples_per_second': 501.478, 'eval_steps_per_second': 31.342, 'epoch': 1.05} +{'loss': 0.384, 'grad_norm': 0.441284716129303, 'learning_rate': 3.774220660204301e-05, 'epoch': 1.05} +{'eval_loss': 0.3592219352722168, 'eval_runtime': 105.8568, 'eval_samples_per_second': 589.929, 'eval_steps_per_second': 36.871, 'epoch': 1.05} +{'loss': 0.3866, 'grad_norm': 0.48068562150001526, 'learning_rate': 3.7640059847969346e-05, 'epoch': 1.05} +{'eval_loss': 0.36036092042922974, 'eval_runtime': 118.2231, 'eval_samples_per_second': 528.222, 'eval_steps_per_second': 33.014, 'epoch': 1.05} +{'loss': 0.382, 'grad_norm': 0.43218880891799927, 'learning_rate': 3.7537628816811125e-05, 'epoch': 1.06} +{'eval_loss': 0.36076807975769043, 'eval_runtime': 124.6466, 'eval_samples_per_second': 501.0, 'eval_steps_per_second': 31.313, 'epoch': 1.06} +{'loss': 0.3836, 'grad_norm': 0.5461480617523193, 'learning_rate': 3.743491581225957e-05, 'epoch': 1.06} +{'eval_loss': 0.35868728160858154, 'eval_runtime': 116.8055, 'eval_samples_per_second': 534.633, 'eval_steps_per_second': 33.415, 
'epoch': 1.06} +{'loss': 0.3912, 'grad_norm': 0.5550000667572021, 'learning_rate': 3.733192314434754e-05, 'epoch': 1.07} +{'eval_loss': 0.35955944657325745, 'eval_runtime': 83.4949, 'eval_samples_per_second': 747.926, 'eval_steps_per_second': 46.745, 'epoch': 1.07} +{'loss': 0.3897, 'grad_norm': 0.40106916427612305, 'learning_rate': 3.722865312939754e-05, 'epoch': 1.07} +{'eval_loss': 0.360419362783432, 'eval_runtime': 84.0269, 'eval_samples_per_second': 743.191, 'eval_steps_per_second': 46.449, 'epoch': 1.07} +{'loss': 0.3828, 'grad_norm': 0.4181188941001892, 'learning_rate': 3.712510808996971e-05, 'epoch': 1.08} +{'eval_loss': 0.3606116771697998, 'eval_runtime': 121.8987, 'eval_samples_per_second': 512.294, 'eval_steps_per_second': 32.018, 'epoch': 1.08} +{'loss': 0.3868, 'grad_norm': 0.47270849347114563, 'learning_rate': 3.702129035480947e-05, 'epoch': 1.08} +{'eval_loss': 0.35938888788223267, 'eval_runtime': 96.6106, 'eval_samples_per_second': 646.389, 'eval_steps_per_second': 40.399, 'epoch': 1.08} +{'loss': 0.3891, 'grad_norm': 0.4688984751701355, 'learning_rate': 3.691720225879527e-05, 'epoch': 1.08} +{'eval_loss': 0.35984858870506287, 'eval_runtime': 129.4788, 'eval_samples_per_second': 482.303, 'eval_steps_per_second': 30.144, 'epoch': 1.08} +{'loss': 0.385, 'grad_norm': 0.5250414609909058, 'learning_rate': 3.6812846142886e-05, 'epoch': 1.09} +{'eval_loss': 0.3599199950695038, 'eval_runtime': 136.6565, 'eval_samples_per_second': 456.971, 'eval_steps_per_second': 28.561, 'epoch': 1.09} +{'loss': 0.3836, 'grad_norm': 0.5816446542739868, 'learning_rate': 3.670822435406835e-05, 'epoch': 1.09} +{'eval_loss': 0.3591647148132324, 'eval_runtime': 88.5551, 'eval_samples_per_second': 705.188, 'eval_steps_per_second': 44.074, 'epoch': 1.09} +{'loss': 0.3866, 'grad_norm': 0.4533269703388214, 'learning_rate': 3.660333924530407e-05, 'epoch': 1.1} +{'eval_loss': 0.35950422286987305, 'eval_runtime': 85.4709, 'eval_samples_per_second': 730.635, 'eval_steps_per_second': 
45.665, 'epoch': 1.1} +{'loss': 0.3817, 'grad_norm': 0.3812009394168854, 'learning_rate': 3.6498193175477e-05, 'epoch': 1.1} +{'eval_loss': 0.3601609766483307, 'eval_runtime': 98.9987, 'eval_samples_per_second': 630.796, 'eval_steps_per_second': 39.425, 'epoch': 1.1} +{'loss': 0.3786, 'grad_norm': 0.4896833598613739, 'learning_rate': 3.6392788509340016e-05, 'epoch': 1.11} +{'eval_loss': 0.36013147234916687, 'eval_runtime': 131.2201, 'eval_samples_per_second': 475.903, 'eval_steps_per_second': 29.744, 'epoch': 1.11} +{'loss': 0.3814, 'grad_norm': 0.4892300069332123, 'learning_rate': 3.628712761746191e-05, 'epoch': 1.11} +{'eval_loss': 0.3584238886833191, 'eval_runtime': 139.4507, 'eval_samples_per_second': 447.814, 'eval_steps_per_second': 27.988, 'epoch': 1.11} +{'loss': 0.3868, 'grad_norm': 0.5929563641548157, 'learning_rate': 3.618121287617402e-05, 'epoch': 1.12} +{'eval_loss': 0.3590581715106964, 'eval_runtime': 136.328, 'eval_samples_per_second': 458.072, 'eval_steps_per_second': 28.629, 'epoch': 1.12} +{'loss': 0.3802, 'grad_norm': 0.5652954578399658, 'learning_rate': 3.60750466675168e-05, 'epoch': 1.12} +{'eval_loss': 0.36025765538215637, 'eval_runtime': 85.641, 'eval_samples_per_second': 729.184, 'eval_steps_per_second': 45.574, 'epoch': 1.12} +{'loss': 0.384, 'grad_norm': 0.5698074102401733, 'learning_rate': 3.596863137918623e-05, 'epoch': 1.12} +{'eval_loss': 0.3605651259422302, 'eval_runtime': 92.4864, 'eval_samples_per_second': 675.213, 'eval_steps_per_second': 42.201, 'epoch': 1.12} +{'loss': 0.3833, 'grad_norm': 0.5115004181861877, 'learning_rate': 3.586196940448016e-05, 'epoch': 1.13} +{'eval_loss': 0.3590751886367798, 'eval_runtime': 130.4808, 'eval_samples_per_second': 478.599, 'eval_steps_per_second': 29.912, 'epoch': 1.13} +{'loss': 0.3795, 'grad_norm': 0.6686394214630127, 'learning_rate': 3.575506314224445e-05, 'epoch': 1.13} +{'eval_loss': 0.35975590348243713, 'eval_runtime': 155.3267, 'eval_samples_per_second': 402.043, 'eval_steps_per_second': 
25.128, 'epoch': 1.13} +{'loss': 0.3872, 'grad_norm': 0.44545629620552063, 'learning_rate': 3.564791499681901e-05, 'epoch': 1.14} +{'eval_loss': 0.35951441526412964, 'eval_runtime': 110.8438, 'eval_samples_per_second': 563.387, 'eval_steps_per_second': 35.212, 'epoch': 1.14} +{'loss': 0.3864, 'grad_norm': 0.43542173504829407, 'learning_rate': 3.554052737798377e-05, 'epoch': 1.14} +{'eval_loss': 0.3594025671482086, 'eval_runtime': 90.2499, 'eval_samples_per_second': 691.946, 'eval_steps_per_second': 43.247, 'epoch': 1.14} +{'loss': 0.3748, 'grad_norm': 0.5121486783027649, 'learning_rate': 3.543290270090445e-05, 'epoch': 1.15} +{'eval_loss': 0.35911065340042114, 'eval_runtime': 102.1107, 'eval_samples_per_second': 611.572, 'eval_steps_per_second': 38.223, 'epoch': 1.15} +{'loss': 0.3778, 'grad_norm': 0.45874112844467163, 'learning_rate': 3.5325043386078236e-05, 'epoch': 1.15} +{'eval_loss': 0.359013170003891, 'eval_runtime': 125.793, 'eval_samples_per_second': 496.435, 'eval_steps_per_second': 31.027, 'epoch': 1.15} +{'loss': 0.3844, 'grad_norm': 0.5211372971534729, 'learning_rate': 3.521695185927938e-05, 'epoch': 1.16} +{'eval_loss': 0.3595026135444641, 'eval_runtime': 112.2531, 'eval_samples_per_second': 556.314, 'eval_steps_per_second': 34.77, 'epoch': 1.16} +{'loss': 0.3796, 'grad_norm': 0.4190448522567749, 'learning_rate': 3.51086305515046e-05, 'epoch': 1.16} +{'eval_loss': 0.358477383852005, 'eval_runtime': 130.8159, 'eval_samples_per_second': 477.373, 'eval_steps_per_second': 29.836, 'epoch': 1.16} +{'loss': 0.3824, 'grad_norm': 0.424062579870224, 'learning_rate': 3.500008189891844e-05, 'epoch': 1.16} +{'eval_loss': 0.35944676399230957, 'eval_runtime': 137.8968, 'eval_samples_per_second': 452.86, 'eval_steps_per_second': 28.304, 'epoch': 1.16} +{'loss': 0.3815, 'grad_norm': 0.5177463889122009, 'learning_rate': 3.4891308342798446e-05, 'epoch': 1.17} +{'eval_loss': 0.3580131232738495, 'eval_runtime': 90.8696, 'eval_samples_per_second': 687.226, 
'eval_steps_per_second': 42.952, 'epoch': 1.17} +{'loss': 0.3803, 'grad_norm': 0.41919058561325073, 'learning_rate': 3.478231232948031e-05, 'epoch': 1.17} +{'eval_loss': 0.3588556945323944, 'eval_runtime': 80.8263, 'eval_samples_per_second': 772.62, 'eval_steps_per_second': 48.289, 'epoch': 1.17} +{'loss': 0.3775, 'grad_norm': 0.6438215374946594, 'learning_rate': 3.467309631030283e-05, 'epoch': 1.18} +{'eval_loss': 0.35819584131240845, 'eval_runtime': 120.094, 'eval_samples_per_second': 519.993, 'eval_steps_per_second': 32.5, 'epoch': 1.18} +{'loss': 0.3765, 'grad_norm': 0.37241220474243164, 'learning_rate': 3.456366274155272e-05, 'epoch': 1.18} +{'eval_loss': 0.35832029581069946, 'eval_runtime': 114.128, 'eval_samples_per_second': 547.175, 'eval_steps_per_second': 34.198, 'epoch': 1.18} +{'loss': 0.3831, 'grad_norm': 0.4621847867965698, 'learning_rate': 3.445401408440949e-05, 'epoch': 1.19} +{'eval_loss': 0.35916781425476074, 'eval_runtime': 114.2104, 'eval_samples_per_second': 546.78, 'eval_steps_per_second': 34.174, 'epoch': 1.19} +{'loss': 0.3862, 'grad_norm': 0.49737563729286194, 'learning_rate': 3.434415280488996e-05, 'epoch': 1.19} +{'eval_loss': 0.3589498698711395, 'eval_runtime': 102.5793, 'eval_samples_per_second': 608.778, 'eval_steps_per_second': 38.049, 'epoch': 1.19} +{'loss': 0.3791, 'grad_norm': 0.4248494505882263, 'learning_rate': 3.4234081373792915e-05, 'epoch': 1.19} +{'eval_loss': 0.3585757613182068, 'eval_runtime': 82.2395, 'eval_samples_per_second': 759.344, 'eval_steps_per_second': 47.459, 'epoch': 1.19} +{'loss': 0.3776, 'grad_norm': 0.4176553189754486, 'learning_rate': 3.4123802266643464e-05, 'epoch': 1.2} +{'eval_loss': 0.3572247624397278, 'eval_runtime': 95.4561, 'eval_samples_per_second': 654.207, 'eval_steps_per_second': 40.888, 'epoch': 1.2} +{'loss': 0.3805, 'grad_norm': 0.39727696776390076, 'learning_rate': 3.401331796363737e-05, 'epoch': 1.2} +{'eval_loss': 0.35871046781539917, 'eval_runtime': 136.7163, 'eval_samples_per_second': 
456.771, 'eval_steps_per_second': 28.548, 'epoch': 1.2} +{'loss': 0.3815, 'grad_norm': 0.4775928854942322, 'learning_rate': 3.3902630949585305e-05, 'epoch': 1.21} +{'eval_loss': 0.35941725969314575, 'eval_runtime': 107.6945, 'eval_samples_per_second': 579.863, 'eval_steps_per_second': 36.241, 'epoch': 1.21} +{'loss': 0.3848, 'grad_norm': 0.3992287516593933, 'learning_rate': 3.379174371385696e-05, 'epoch': 1.21} +{'eval_loss': 0.3585735857486725, 'eval_runtime': 131.3528, 'eval_samples_per_second': 475.422, 'eval_steps_per_second': 29.714, 'epoch': 1.21} +{'loss': 0.3825, 'grad_norm': 0.48247072100639343, 'learning_rate': 3.3680658750325e-05, 'epoch': 1.22} +{'eval_loss': 0.3588811159133911, 'eval_runtime': 125.2797, 'eval_samples_per_second': 498.469, 'eval_steps_per_second': 31.154, 'epoch': 1.22} +{'loss': 0.3778, 'grad_norm': 0.5211309790611267, 'learning_rate': 3.356937855730907e-05, 'epoch': 1.22} +{'eval_loss': 0.3572573661804199, 'eval_runtime': 92.0753, 'eval_samples_per_second': 678.228, 'eval_steps_per_second': 42.389, 'epoch': 1.22} +{'loss': 0.3775, 'grad_norm': 0.41353365778923035, 'learning_rate': 3.3457905637519535e-05, 'epoch': 1.23} +{'eval_loss': 0.3578314781188965, 'eval_runtime': 86.6786, 'eval_samples_per_second': 720.455, 'eval_steps_per_second': 45.028, 'epoch': 1.23} +{'loss': 0.3791, 'grad_norm': 0.411578506231308, 'learning_rate': 3.3346242498001215e-05, 'epoch': 1.23} +{'eval_loss': 0.35837510228157043, 'eval_runtime': 132.6903, 'eval_samples_per_second': 470.63, 'eval_steps_per_second': 29.414, 'epoch': 1.23} +{'loss': 0.3764, 'grad_norm': 0.4316157102584839, 'learning_rate': 3.323439165007701e-05, 'epoch': 1.23} +{'eval_loss': 0.3577561676502228, 'eval_runtime': 143.8396, 'eval_samples_per_second': 434.15, 'eval_steps_per_second': 27.134, 'epoch': 1.23} +{'loss': 0.3767, 'grad_norm': 0.5238822102546692, 'learning_rate': 3.3122355609291416e-05, 'epoch': 1.24} +{'eval_loss': 0.3580741286277771, 'eval_runtime': 150.5359, 
'eval_samples_per_second': 414.838, 'eval_steps_per_second': 25.927, 'epoch': 1.24}
[~140 intermediate Trainer log entries elided — epochs 1.24 → 1.93; train loss drifts from ~0.380 down to ~0.370, eval_loss from 0.3573 to 0.3527, learning_rate decays from 3.30e-05 to 1.48e-05]
+{'loss': 0.3751, 'grad_norm': 
0.45544323325157166, 'learning_rate': 1.4734378372740976e-05, 'epoch': 1.94} +{'eval_loss': 0.3523600101470947, 'eval_runtime': 91.2443, 'eval_samples_per_second': 684.404, 'eval_steps_per_second': 42.775, 'epoch': 1.94} +{'loss': 0.3671, 'grad_norm': 0.3557186424732208, 'learning_rate': 1.4626391040805068e-05, 'epoch': 1.94} +{'eval_loss': 0.35222914814949036, 'eval_runtime': 74.2605, 'eval_samples_per_second': 840.932, 'eval_steps_per_second': 52.558, 'epoch': 1.94} +{'loss': 0.3718, 'grad_norm': 0.3961358964443207, 'learning_rate': 1.4518637013089764e-05, 'epoch': 1.95} +{'eval_loss': 0.352795273065567, 'eval_runtime': 87.6885, 'eval_samples_per_second': 712.157, 'eval_steps_per_second': 44.51, 'epoch': 1.95} +{'loss': 0.3727, 'grad_norm': 0.4014914631843567, 'learning_rate': 1.441111871300139e-05, 'epoch': 1.95} +{'eval_loss': 0.3517891466617584, 'eval_runtime': 145.8543, 'eval_samples_per_second': 428.153, 'eval_steps_per_second': 26.76, 'epoch': 1.95} +{'loss': 0.3684, 'grad_norm': 0.4096909761428833, 'learning_rate': 1.4303838558644676e-05, 'epoch': 1.95} +{'eval_loss': 0.35228508710861206, 'eval_runtime': 145.5139, 'eval_samples_per_second': 429.155, 'eval_steps_per_second': 26.822, 'epoch': 1.95} +{'loss': 0.3701, 'grad_norm': 0.4173333942890167, 'learning_rate': 1.4196798962768482e-05, 'epoch': 1.96} +{'eval_loss': 0.3528241813182831, 'eval_runtime': 92.162, 'eval_samples_per_second': 677.589, 'eval_steps_per_second': 42.349, 'epoch': 1.96} +{'loss': 0.3713, 'grad_norm': 0.5503092408180237, 'learning_rate': 1.40900023327114e-05, 'epoch': 1.96} +{'eval_loss': 0.3522227108478546, 'eval_runtime': 89.6198, 'eval_samples_per_second': 696.81, 'eval_steps_per_second': 43.551, 'epoch': 1.96} +{'loss': 0.3722, 'grad_norm': 0.4499627649784088, 'learning_rate': 1.3983451070347723e-05, 'epoch': 1.97} +{'eval_loss': 0.3530903458595276, 'eval_runtime': 96.9261, 'eval_samples_per_second': 644.285, 'eval_steps_per_second': 40.268, 'epoch': 1.97} +{'loss': 0.3705, 
'grad_norm': 0.4025648534297943, 'learning_rate': 1.3877147572033364e-05, 'epoch': 1.97} +{'eval_loss': 0.3519740402698517, 'eval_runtime': 129.6654, 'eval_samples_per_second': 481.609, 'eval_steps_per_second': 30.101, 'epoch': 1.97} +{'loss': 0.3695, 'grad_norm': 0.39975687861442566, 'learning_rate': 1.3771094228551998e-05, 'epoch': 1.98} +{'eval_loss': 0.3518427312374115, 'eval_runtime': 123.4728, 'eval_samples_per_second': 505.763, 'eval_steps_per_second': 31.61, 'epoch': 1.98} +{'loss': 0.3756, 'grad_norm': 0.44773054122924805, 'learning_rate': 1.3665293425061232e-05, 'epoch': 1.98} +{'eval_loss': 0.35242584347724915, 'eval_runtime': 132.7555, 'eval_samples_per_second': 470.399, 'eval_steps_per_second': 29.4, 'epoch': 1.98} +{'loss': 0.3705, 'grad_norm': 0.4423580765724182, 'learning_rate': 1.3559747541039078e-05, 'epoch': 1.99} +{'eval_loss': 0.35279518365859985, 'eval_runtime': 89.8537, 'eval_samples_per_second': 694.997, 'eval_steps_per_second': 43.437, 'epoch': 1.99} +{'loss': 0.3681, 'grad_norm': 0.41779884696006775, 'learning_rate': 1.3454458950230291e-05, 'epoch': 1.99} +{'eval_loss': 0.3524220883846283, 'eval_runtime': 100.7267, 'eval_samples_per_second': 619.974, 'eval_steps_per_second': 38.748, 'epoch': 1.99} +{'loss': 0.3684, 'grad_norm': 0.4141429364681244, 'learning_rate': 1.3349430020593112e-05, 'epoch': 1.99} +{'eval_loss': 0.35262569785118103, 'eval_runtime': 138.5491, 'eval_samples_per_second': 450.728, 'eval_steps_per_second': 28.171, 'epoch': 1.99} +{'loss': 0.3704, 'grad_norm': 0.3685065507888794, 'learning_rate': 1.3244663114245928e-05, 'epoch': 2.0} +{'eval_loss': 0.3524908125400543, 'eval_runtime': 113.2659, 'eval_samples_per_second': 551.34, 'eval_steps_per_second': 34.459, 'epoch': 2.0} +{'loss': 0.3728, 'grad_norm': 0.4484209418296814, 'learning_rate': 1.3140160587414202e-05, 'epoch': 2.0} +{'eval_loss': 0.35161611437797546, 'eval_runtime': 111.9322, 'eval_samples_per_second': 557.909, 'eval_steps_per_second': 34.869, 'epoch': 2.0} 
+{'loss': 0.3689, 'grad_norm': 0.4140912890434265, 'learning_rate': 1.3035924790377413e-05, 'epoch': 2.01} +{'eval_loss': 0.3524651527404785, 'eval_runtime': 88.3659, 'eval_samples_per_second': 706.698, 'eval_steps_per_second': 44.169, 'epoch': 2.01} +{'loss': 0.3738, 'grad_norm': 0.4781247079372406, 'learning_rate': 1.2931958067416317e-05, 'epoch': 2.01} +{'eval_loss': 0.35183125734329224, 'eval_runtime': 104.2804, 'eval_samples_per_second': 598.847, 'eval_steps_per_second': 37.428, 'epoch': 2.01} +{'loss': 0.3675, 'grad_norm': 0.4306248724460602, 'learning_rate': 1.2828262756760062e-05, 'epoch': 2.02} +{'eval_loss': 0.3523595929145813, 'eval_runtime': 134.3561, 'eval_samples_per_second': 464.795, 'eval_steps_per_second': 29.05, 'epoch': 2.02} +{'loss': 0.3635, 'grad_norm': 0.4202248752117157, 'learning_rate': 1.2724841190533754e-05, 'epoch': 2.02} +{'eval_loss': 0.3527405560016632, 'eval_runtime': 134.4371, 'eval_samples_per_second': 464.515, 'eval_steps_per_second': 29.032, 'epoch': 2.02} +{'loss': 0.3698, 'grad_norm': 0.39436104893684387, 'learning_rate': 1.26216956947059e-05, 'epoch': 2.02} +{'eval_loss': 0.35196182131767273, 'eval_runtime': 113.5466, 'eval_samples_per_second': 549.977, 'eval_steps_per_second': 34.374, 'epoch': 2.02} +{'loss': 0.3745, 'grad_norm': 0.4525001347064972, 'learning_rate': 1.2518828589036164e-05, 'epoch': 2.03} +{'eval_loss': 0.3519538938999176, 'eval_runtime': 89.5164, 'eval_samples_per_second': 697.615, 'eval_steps_per_second': 43.601, 'epoch': 2.03} +{'loss': 0.3701, 'grad_norm': 0.4682912826538086, 'learning_rate': 1.241624218702315e-05, 'epoch': 2.03} +{'eval_loss': 0.3519061505794525, 'eval_runtime': 131.9598, 'eval_samples_per_second': 473.235, 'eval_steps_per_second': 29.577, 'epoch': 2.03} +{'loss': 0.3643, 'grad_norm': 0.4254605174064636, 'learning_rate': 1.231393879585241e-05, 'epoch': 2.04} +{'eval_loss': 0.3525539040565491, 'eval_runtime': 154.1163, 'eval_samples_per_second': 405.2, 'eval_steps_per_second': 25.325, 
'epoch': 2.04} +{'loss': 0.3694, 'grad_norm': 0.4035811722278595, 'learning_rate': 1.2211920716344497e-05, 'epoch': 2.04} +{'eval_loss': 0.3523533046245575, 'eval_runtime': 138.5136, 'eval_samples_per_second': 450.844, 'eval_steps_per_second': 28.178, 'epoch': 2.04} +{'loss': 0.3738, 'grad_norm': 0.47682619094848633, 'learning_rate': 1.2110190242903292e-05, 'epoch': 2.05} +{'eval_loss': 0.3520210385322571, 'eval_runtime': 96.2007, 'eval_samples_per_second': 649.143, 'eval_steps_per_second': 40.571, 'epoch': 2.05} +{'loss': 0.3734, 'grad_norm': 0.48561352491378784, 'learning_rate': 1.2008749663464353e-05, 'epoch': 2.05} +{'eval_loss': 0.35222169756889343, 'eval_runtime': 113.9249, 'eval_samples_per_second': 548.15, 'eval_steps_per_second': 34.259, 'epoch': 2.05} +{'loss': 0.3682, 'grad_norm': 0.421908438205719, 'learning_rate': 1.1907601259443473e-05, 'epoch': 2.06} +{'eval_loss': 0.3519914150238037, 'eval_runtime': 140.8069, 'eval_samples_per_second': 443.501, 'eval_steps_per_second': 27.719, 'epoch': 2.06} +{'loss': 0.3706, 'grad_norm': 0.44066622853279114, 'learning_rate': 1.1806747305685375e-05, 'epoch': 2.06} +{'eval_loss': 0.3521014451980591, 'eval_runtime': 143.7357, 'eval_samples_per_second': 434.464, 'eval_steps_per_second': 27.154, 'epoch': 2.06} +{'loss': 0.3709, 'grad_norm': 0.43614423274993896, 'learning_rate': 1.170619007041255e-05, 'epoch': 2.06} +{'eval_loss': 0.3516845405101776, 'eval_runtime': 120.8136, 'eval_samples_per_second': 516.895, 'eval_steps_per_second': 32.306, 'epoch': 2.06} +{'loss': 0.3668, 'grad_norm': 0.41295379400253296, 'learning_rate': 1.1605931815174215e-05, 'epoch': 2.07} +{'eval_loss': 0.35178112983703613, 'eval_runtime': 102.0155, 'eval_samples_per_second': 612.142, 'eval_steps_per_second': 38.259, 'epoch': 2.07} +{'loss': 0.3745, 'grad_norm': 0.38009586930274963, 'learning_rate': 1.1505974794795502e-05, 'epoch': 2.07} +{'eval_loss': 0.3513350188732147, 'eval_runtime': 88.3438, 'eval_samples_per_second': 706.875, 
'eval_steps_per_second': 44.18, 'epoch': 2.07} +{'loss': 0.3721, 'grad_norm': 0.4864519238471985, 'learning_rate': 1.1406321257326707e-05, 'epoch': 2.08} +{'eval_loss': 0.35260093212127686, 'eval_runtime': 117.5274, 'eval_samples_per_second': 531.348, 'eval_steps_per_second': 33.209, 'epoch': 2.08} +{'loss': 0.3715, 'grad_norm': 0.5327666401863098, 'learning_rate': 1.1306973443992758e-05, 'epoch': 2.08} +{'eval_loss': 0.35177719593048096, 'eval_runtime': 125.7922, 'eval_samples_per_second': 496.438, 'eval_steps_per_second': 31.027, 'epoch': 2.08} +{'loss': 0.3685, 'grad_norm': 0.4431646764278412, 'learning_rate': 1.1207933589142774e-05, 'epoch': 2.09} +{'eval_loss': 0.3516465127468109, 'eval_runtime': 107.9848, 'eval_samples_per_second': 578.304, 'eval_steps_per_second': 36.144, 'epoch': 2.09} +{'loss': 0.3658, 'grad_norm': 0.409123033285141, 'learning_rate': 1.1109203920199865e-05, 'epoch': 2.09} +{'eval_loss': 0.3516082167625427, 'eval_runtime': 82.4755, 'eval_samples_per_second': 757.17, 'eval_steps_per_second': 47.323, 'epoch': 2.09} +{'loss': 0.3699, 'grad_norm': 0.4567689299583435, 'learning_rate': 1.1010786657610964e-05, 'epoch': 2.09} +{'eval_loss': 0.35146135091781616, 'eval_runtime': 100.2639, 'eval_samples_per_second': 622.836, 'eval_steps_per_second': 38.927, 'epoch': 2.09} +{'loss': 0.3683, 'grad_norm': 0.4800879657268524, 'learning_rate': 1.0912684014796972e-05, 'epoch': 2.1} +{'eval_loss': 0.35169264674186707, 'eval_runtime': 127.4138, 'eval_samples_per_second': 490.12, 'eval_steps_per_second': 30.632, 'epoch': 2.1} +{'loss': 0.3676, 'grad_norm': 0.4328778386116028, 'learning_rate': 1.0814898198102927e-05, 'epoch': 2.1} +{'eval_loss': 0.3520660400390625, 'eval_runtime': 125.7777, 'eval_samples_per_second': 496.495, 'eval_steps_per_second': 31.031, 'epoch': 2.1} +{'loss': 0.3704, 'grad_norm': 0.4659443795681, 'learning_rate': 1.0717431406748382e-05, 'epoch': 2.11} +{'eval_loss': 0.3522498905658722, 'eval_runtime': 142.947, 'eval_samples_per_second': 
436.861, 'eval_steps_per_second': 27.304, 'epoch': 2.11} +{'loss': 0.369, 'grad_norm': 0.4187769591808319, 'learning_rate': 1.062028583277797e-05, 'epoch': 2.11} +{'eval_loss': 0.3513807952404022, 'eval_runtime': 96.9525, 'eval_samples_per_second': 644.109, 'eval_steps_per_second': 40.257, 'epoch': 2.11} +{'loss': 0.3697, 'grad_norm': 0.3523486852645874, 'learning_rate': 1.0523463661012099e-05, 'epoch': 2.12} +{'eval_loss': 0.35196858644485474, 'eval_runtime': 96.6655, 'eval_samples_per_second': 646.022, 'eval_steps_per_second': 40.376, 'epoch': 2.12} +{'loss': 0.3672, 'grad_norm': 0.42393404245376587, 'learning_rate': 1.0426967068997767e-05, 'epoch': 2.12} +{'eval_loss': 0.35149750113487244, 'eval_runtime': 103.0289, 'eval_samples_per_second': 606.121, 'eval_steps_per_second': 37.883, 'epoch': 2.12} +{'loss': 0.3697, 'grad_norm': 0.3667372167110443, 'learning_rate': 1.0330798226959668e-05, 'epoch': 2.13} +{'eval_loss': 0.3520899713039398, 'eval_runtime': 129.4432, 'eval_samples_per_second': 482.435, 'eval_steps_per_second': 30.152, 'epoch': 2.13} +{'loss': 0.3677, 'grad_norm': 0.4480126202106476, 'learning_rate': 1.0234959297751328e-05, 'epoch': 2.13} +{'eval_loss': 0.35245996713638306, 'eval_runtime': 129.0831, 'eval_samples_per_second': 483.782, 'eval_steps_per_second': 30.236, 'epoch': 2.13} +{'loss': 0.3679, 'grad_norm': 0.3902183771133423, 'learning_rate': 1.0139452436806482e-05, 'epoch': 2.13} +{'eval_loss': 0.35168182849884033, 'eval_runtime': 84.9685, 'eval_samples_per_second': 734.955, 'eval_steps_per_second': 45.935, 'epoch': 2.13} +{'loss': 0.368, 'grad_norm': 0.4385850727558136, 'learning_rate': 1.0044279792090594e-05, 'epoch': 2.14} +{'eval_loss': 0.3514123857021332, 'eval_runtime': 84.5473, 'eval_samples_per_second': 738.616, 'eval_steps_per_second': 46.163, 'epoch': 2.14} +{'loss': 0.3743, 'grad_norm': 0.44872912764549255, 'learning_rate': 9.94944350405255e-06, 'epoch': 2.14} +{'eval_loss': 0.35172250866889954, 'eval_runtime': 114.9419, 
'eval_samples_per_second': 543.301, 'eval_steps_per_second': 33.956, 'epoch': 2.14} +{'loss': 0.3704, 'grad_norm': 0.4338383376598358, 'learning_rate': 9.854945705576496e-06, 'epoch': 2.15} +{'eval_loss': 0.3517483174800873, 'eval_runtime': 120.159, 'eval_samples_per_second': 519.712, 'eval_steps_per_second': 32.482, 'epoch': 2.15} +{'loss': 0.3672, 'grad_norm': 0.49341562390327454, 'learning_rate': 9.76078852193392e-06, 'epoch': 2.15} +{'eval_loss': 0.351521372795105, 'eval_runtime': 156.2518, 'eval_samples_per_second': 399.663, 'eval_steps_per_second': 24.979, 'epoch': 2.15} +{'loss': 0.3691, 'grad_norm': 0.46430492401123047, 'learning_rate': 9.66697407073581e-06, 'epoch': 2.16} +{'eval_loss': 0.3520546853542328, 'eval_runtime': 145.2871, 'eval_samples_per_second': 429.825, 'eval_steps_per_second': 26.864, 'epoch': 2.16} +{'loss': 0.3711, 'grad_norm': 0.45261889696121216, 'learning_rate': 9.573504461885043e-06, 'epoch': 2.16} +{'eval_loss': 0.3513006269931793, 'eval_runtime': 90.1051, 'eval_samples_per_second': 693.058, 'eval_steps_per_second': 43.316, 'epoch': 2.16} +{'loss': 0.3684, 'grad_norm': 0.4342949092388153, 'learning_rate': 9.480381797528945e-06, 'epoch': 2.17} +{'eval_loss': 0.3512662947177887, 'eval_runtime': 92.4217, 'eval_samples_per_second': 675.685, 'eval_steps_per_second': 42.23, 'epoch': 2.17} +{'loss': 0.3697, 'grad_norm': 0.3631267547607422, 'learning_rate': 9.387608172011993e-06, 'epoch': 2.17} +{'eval_loss': 0.3523181080818176, 'eval_runtime': 138.4488, 'eval_samples_per_second': 451.055, 'eval_steps_per_second': 28.191, 'epoch': 2.17} +{'loss': 0.3728, 'grad_norm': 0.48277807235717773, 'learning_rate': 9.29518567182871e-06, 'epoch': 2.17} +{'eval_loss': 0.35175639390945435, 'eval_runtime': 145.24, 'eval_samples_per_second': 429.964, 'eval_steps_per_second': 26.873, 'epoch': 2.17} +{'loss': 0.367, 'grad_norm': 0.3787611722946167, 'learning_rate': 9.203116375576767e-06, 'epoch': 2.18} +{'eval_loss': 0.35180583596229553, 'eval_runtime': 
96.764, 'eval_samples_per_second': 645.364, 'eval_steps_per_second': 40.335, 'epoch': 2.18} +{'loss': 0.3682, 'grad_norm': 0.36162692308425903, 'learning_rate': 9.111402353910217e-06, 'epoch': 2.18} +{'eval_loss': 0.3513328433036804, 'eval_runtime': 91.4069, 'eval_samples_per_second': 683.187, 'eval_steps_per_second': 42.699, 'epoch': 2.18} +{'loss': 0.3633, 'grad_norm': 0.4839751124382019, 'learning_rate': 9.020045669492919e-06, 'epoch': 2.19} +{'eval_loss': 0.3519132733345032, 'eval_runtime': 93.5338, 'eval_samples_per_second': 667.652, 'eval_steps_per_second': 41.728, 'epoch': 2.19} +{'loss': 0.3709, 'grad_norm': 0.40421923995018005, 'learning_rate': 8.929048376952167e-06, 'epoch': 2.19} +{'eval_loss': 0.35207903385162354, 'eval_runtime': 114.4017, 'eval_samples_per_second': 545.866, 'eval_steps_per_second': 34.117, 'epoch': 2.19} +{'loss': 0.3722, 'grad_norm': 0.41748347878456116, 'learning_rate': 8.838412522832474e-06, 'epoch': 2.2} +{'eval_loss': 0.3521324694156647, 'eval_runtime': 101.7268, 'eval_samples_per_second': 613.88, 'eval_steps_per_second': 38.367, 'epoch': 2.2} +{'loss': 0.3606, 'grad_norm': 0.4314954876899719, 'learning_rate': 8.748140145549513e-06, 'epoch': 2.2} +{'eval_loss': 0.3514702618122101, 'eval_runtime': 156.0078, 'eval_samples_per_second': 400.288, 'eval_steps_per_second': 25.018, 'epoch': 2.2} +{'loss': 0.3728, 'grad_norm': 0.378442645072937, 'learning_rate': 8.658233275344336e-06, 'epoch': 2.2} +{'eval_loss': 0.3516296148300171, 'eval_runtime': 83.7089, 'eval_samples_per_second': 746.014, 'eval_steps_per_second': 46.626, 'epoch': 2.2} +{'loss': 0.3684, 'grad_norm': 0.4157877266407013, 'learning_rate': 8.568693934237661e-06, 'epoch': 2.21} +{'eval_loss': 0.3513619601726532, 'eval_runtime': 81.3091, 'eval_samples_per_second': 768.033, 'eval_steps_per_second': 48.002, 'epoch': 2.21} +{'loss': 0.3666, 'grad_norm': 0.40366876125335693, 'learning_rate': 8.479524135984424e-06, 'epoch': 2.21} +{'eval_loss': 0.35119739174842834, 'eval_runtime': 
99.852, 'eval_samples_per_second': 625.406, 'eval_steps_per_second': 39.088, 'epoch': 2.21} +{'loss': 0.3677, 'grad_norm': 0.4373210668563843, 'learning_rate': 8.39072588602847e-06, 'epoch': 2.22} +{'eval_loss': 0.35215380787849426, 'eval_runtime': 120.4907, 'eval_samples_per_second': 518.281, 'eval_steps_per_second': 32.393, 'epoch': 2.22} +{'loss': 0.3726, 'grad_norm': 0.3845120966434479, 'learning_rate': 8.302301181457472e-06, 'epoch': 2.22} +{'eval_loss': 0.35124513506889343, 'eval_runtime': 130.8222, 'eval_samples_per_second': 477.35, 'eval_steps_per_second': 29.834, 'epoch': 2.22} +{'loss': 0.3673, 'grad_norm': 0.37425848841667175, 'learning_rate': 8.214252010957981e-06, 'epoch': 2.23} +{'eval_loss': 0.35194557905197144, 'eval_runtime': 114.9153, 'eval_samples_per_second': 543.426, 'eval_steps_per_second': 33.964, 'epoch': 2.23} +{'loss': 0.3666, 'grad_norm': 0.5034060478210449, 'learning_rate': 8.12658035477074e-06, 'epoch': 2.23} +{'eval_loss': 0.3514699637889862, 'eval_runtime': 87.9129, 'eval_samples_per_second': 710.34, 'eval_steps_per_second': 44.396, 'epoch': 2.23} +{'loss': 0.3674, 'grad_norm': 0.359164834022522, 'learning_rate': 8.039288184646157e-06, 'epoch': 2.24} +{'eval_loss': 0.3522956669330597, 'eval_runtime': 102.3113, 'eval_samples_per_second': 610.372, 'eval_steps_per_second': 38.148, 'epoch': 2.24} +{'loss': 0.3703, 'grad_norm': 0.48279401659965515, 'learning_rate': 7.952377463799876e-06, 'epoch': 2.24} +{'eval_loss': 0.35198503732681274, 'eval_runtime': 112.2828, 'eval_samples_per_second': 556.167, 'eval_steps_per_second': 34.76, 'epoch': 2.24} +{'loss': 0.3656, 'grad_norm': 0.5102318525314331, 'learning_rate': 7.865850146868725e-06, 'epoch': 2.24} +{'eval_loss': 0.351750910282135, 'eval_runtime': 137.6094, 'eval_samples_per_second': 453.806, 'eval_steps_per_second': 28.363, 'epoch': 2.24} +{'loss': 0.3699, 'grad_norm': 0.46633797883987427, 'learning_rate': 7.779708179866707e-06, 'epoch': 2.25} +{'eval_loss': 0.35132917761802673, 
'eval_runtime': 123.1489, 'eval_samples_per_second': 507.093, 'eval_steps_per_second': 31.693, 'epoch': 2.25} +{'loss': 0.3742, 'grad_norm': 0.40820541977882385, 'learning_rate': 7.693953500141223e-06, 'epoch': 2.25} +{'eval_loss': 0.35083454847335815, 'eval_runtime': 170.4425, 'eval_samples_per_second': 366.387, 'eval_steps_per_second': 22.899, 'epoch': 2.25} +{'loss': 0.3673, 'grad_norm': 0.453761488199234, 'learning_rate': 7.608588036329525e-06, 'epoch': 2.26} +{'eval_loss': 0.3519810736179352, 'eval_runtime': 94.9808, 'eval_samples_per_second': 657.48, 'eval_steps_per_second': 41.093, 'epoch': 2.26} +{'loss': 0.3719, 'grad_norm': 0.5030949711799622, 'learning_rate': 7.523613708315361e-06, 'epoch': 2.26} +{'eval_loss': 0.35213685035705566, 'eval_runtime': 91.6089, 'eval_samples_per_second': 681.681, 'eval_steps_per_second': 42.605, 'epoch': 2.26} +{'loss': 0.3683, 'grad_norm': 0.4585345685482025, 'learning_rate': 7.439032427185724e-06, 'epoch': 2.27} +{'eval_loss': 0.3513634502887726, 'eval_runtime': 102.8653, 'eval_samples_per_second': 607.085, 'eval_steps_per_second': 37.943, 'epoch': 2.27} +{'loss': 0.3671, 'grad_norm': 0.5278978943824768, 'learning_rate': 7.3548460951879425e-06, 'epoch': 2.27} +{'eval_loss': 0.3519594371318817, 'eval_runtime': 120.4421, 'eval_samples_per_second': 518.49, 'eval_steps_per_second': 32.406, 'epoch': 2.27} +{'loss': 0.368, 'grad_norm': 0.4439230263233185, 'learning_rate': 7.271056605686874e-06, 'epoch': 2.27} +{'eval_loss': 0.3514157235622406, 'eval_runtime': 127.2475, 'eval_samples_per_second': 490.76, 'eval_steps_per_second': 30.673, 'epoch': 2.27} +{'loss': 0.3689, 'grad_norm': 0.4397352635860443, 'learning_rate': 7.1876658431222985e-06, 'epoch': 2.28} +{'eval_loss': 0.35138505697250366, 'eval_runtime': 128.3333, 'eval_samples_per_second': 486.608, 'eval_steps_per_second': 30.413, 'epoch': 2.28} +{'loss': 0.3702, 'grad_norm': 0.44172394275665283, 'learning_rate': 7.104675682966577e-06, 'epoch': 2.28} +{'eval_loss': 
0.351948082447052, 'eval_runtime': 90.4223, 'eval_samples_per_second': 690.626, 'eval_steps_per_second': 43.164, 'epoch': 2.28} +{'loss': 0.3677, 'grad_norm': 0.4409579634666443, 'learning_rate': 7.022087991682474e-06, 'epoch': 2.29} +{'eval_loss': 0.35213032364845276, 'eval_runtime': 107.2361, 'eval_samples_per_second': 582.341, 'eval_steps_per_second': 36.396, 'epoch': 2.29} +{'loss': 0.3692, 'grad_norm': 0.4459713101387024, 'learning_rate': 6.939904626681115e-06, 'epoch': 2.29} +{'eval_loss': 0.351564884185791, 'eval_runtime': 117.8039, 'eval_samples_per_second': 530.101, 'eval_steps_per_second': 33.131, 'epoch': 2.29} +{'loss': 0.3717, 'grad_norm': 0.4956647753715515, 'learning_rate': 6.8581274362802955e-06, 'epoch': 2.3} +{'eval_loss': 0.35087496042251587, 'eval_runtime': 131.3376, 'eval_samples_per_second': 475.477, 'eval_steps_per_second': 29.717, 'epoch': 2.3} +{'loss': 0.3682, 'grad_norm': 0.3812435567378998, 'learning_rate': 6.776758259662866e-06, 'epoch': 2.3} +{'eval_loss': 0.3513571321964264, 'eval_runtime': 99.1729, 'eval_samples_per_second': 629.688, 'eval_steps_per_second': 39.355, 'epoch': 2.3} +{'loss': 0.3717, 'grad_norm': 0.5640867948532104, 'learning_rate': 6.695798926835364e-06, 'epoch': 2.31} +{'eval_loss': 0.3508698344230652, 'eval_runtime': 96.0489, 'eval_samples_per_second': 650.169, 'eval_steps_per_second': 40.636, 'epoch': 2.31} +{'loss': 0.37, 'grad_norm': 0.38414376974105835, 'learning_rate': 6.615251258586883e-06, 'epoch': 2.31} +{'eval_loss': 0.3515236973762512, 'eval_runtime': 107.6958, 'eval_samples_per_second': 579.855, 'eval_steps_per_second': 36.241, 'epoch': 2.31} +{'loss': 0.3659, 'grad_norm': 0.6027657985687256, 'learning_rate': 6.535117066448126e-06, 'epoch': 2.31} +{'eval_loss': 0.3512798249721527, 'eval_runtime': 134.3372, 'eval_samples_per_second': 464.86, 'eval_steps_per_second': 29.054, 'epoch': 2.31} +{'loss': 0.3688, 'grad_norm': 0.5126060843467712, 'learning_rate': 6.4553981526506155e-06, 'epoch': 2.32} 
+{'eval_loss': 0.35132741928100586, 'eval_runtime': 128.8626, 'eval_samples_per_second': 484.609, 'eval_steps_per_second': 30.288, 'epoch': 2.32} +{'loss': 0.3735, 'grad_norm': 0.6023508310317993, 'learning_rate': 6.376096310086219e-06, 'epoch': 2.32} +{'eval_loss': 0.35133612155914307, 'eval_runtime': 137.2595, 'eval_samples_per_second': 454.963, 'eval_steps_per_second': 28.435, 'epoch': 2.32} +{'loss': 0.3709, 'grad_norm': 0.48305854201316833, 'learning_rate': 6.297213322266795e-06, 'epoch': 2.33} +{'eval_loss': 0.3516114354133606, 'eval_runtime': 122.4781, 'eval_samples_per_second': 509.871, 'eval_steps_per_second': 31.867, 'epoch': 2.33} +{'loss': 0.367, 'grad_norm': 0.4667912423610687, 'learning_rate': 6.2187509632840675e-06, 'epoch': 2.33} +{'eval_loss': 0.3518548607826233, 'eval_runtime': 90.5091, 'eval_samples_per_second': 689.964, 'eval_steps_per_second': 43.123, 'epoch': 2.33} +{'loss': 0.373, 'grad_norm': 0.35102489590644836, 'learning_rate': 6.1407109977697856e-06, 'epoch': 2.34} +{'eval_loss': 0.35154497623443604, 'eval_runtime': 78.8784, 'eval_samples_per_second': 791.7, 'eval_steps_per_second': 49.481, 'epoch': 2.34} +{'loss': 0.3682, 'grad_norm': 0.4246480166912079, 'learning_rate': 6.06309518085598e-06, 'epoch': 2.34} +{'eval_loss': 0.35168012976646423, 'eval_runtime': 114.8749, 'eval_samples_per_second': 543.618, 'eval_steps_per_second': 33.976, 'epoch': 2.34} +{'loss': 0.3696, 'grad_norm': 0.4142530560493469, 'learning_rate': 5.985905258135485e-06, 'epoch': 2.35} +{'eval_loss': 0.3512050211429596, 'eval_runtime': 118.2144, 'eval_samples_per_second': 528.26, 'eval_steps_per_second': 33.016, 'epoch': 2.35} +{'loss': 0.3684, 'grad_norm': 0.39527401328086853, 'learning_rate': 5.909142965622735e-06, 'epoch': 2.35} +{'eval_loss': 0.35192370414733887, 'eval_runtime': 119.865, 'eval_samples_per_second': 520.986, 'eval_steps_per_second': 32.562, 'epoch': 2.35} +{'loss': 0.3667, 'grad_norm': 0.4497607350349426, 'learning_rate': 5.83281002971468e-06, 
'epoch': 2.35} +{'eval_loss': 0.3516347110271454, 'eval_runtime': 122.5521, 'eval_samples_per_second': 509.563, 'eval_steps_per_second': 31.848, 'epoch': 2.35} +{'loss': 0.3714, 'grad_norm': 0.473808228969574, 'learning_rate': 5.756908167151942e-06, 'epoch': 2.36} +{'eval_loss': 0.35110482573509216, 'eval_runtime': 99.0883, 'eval_samples_per_second': 630.226, 'eval_steps_per_second': 39.389, 'epoch': 2.36} +{'loss': 0.3684, 'grad_norm': 0.4684038460254669, 'learning_rate': 5.681439084980275e-06, 'epoch': 2.36} +{'eval_loss': 0.35143211483955383, 'eval_runtime': 96.865, 'eval_samples_per_second': 644.691, 'eval_steps_per_second': 40.293, 'epoch': 2.36} +{'loss': 0.3634, 'grad_norm': 0.5112012624740601, 'learning_rate': 5.606404480512104e-06, 'epoch': 2.37} +{'eval_loss': 0.35200199484825134, 'eval_runtime': 121.9029, 'eval_samples_per_second': 512.277, 'eval_steps_per_second': 32.017, 'epoch': 2.37} +{'loss': 0.367, 'grad_norm': 0.4253312945365906, 'learning_rate': 5.531806041288365e-06, 'epoch': 2.37} +{'eval_loss': 0.35239124298095703, 'eval_runtime': 131.4263, 'eval_samples_per_second': 475.156, 'eval_steps_per_second': 29.697, 'epoch': 2.37} +{'loss': 0.3682, 'grad_norm': 0.46419382095336914, 'learning_rate': 5.457645445040588e-06, 'epoch': 2.38} +{'eval_loss': 0.35171961784362793, 'eval_runtime': 129.6645, 'eval_samples_per_second': 481.612, 'eval_steps_per_second': 30.101, 'epoch': 2.38} +{'loss': 0.366, 'grad_norm': 0.42607077956199646, 'learning_rate': 5.383924359653131e-06, 'epoch': 2.38} +{'eval_loss': 0.3513723909854889, 'eval_runtime': 95.0509, 'eval_samples_per_second': 656.995, 'eval_steps_per_second': 41.062, 'epoch': 2.38} +{'loss': 0.3697, 'grad_norm': 0.466932088136673, 'learning_rate': 5.310644443125659e-06, 'epoch': 2.38} +{'eval_loss': 0.35113251209259033, 'eval_runtime': 88.0999, 'eval_samples_per_second': 708.832, 'eval_steps_per_second': 44.302, 'epoch': 2.38} +{'loss': 0.3644, 'grad_norm': 0.43257272243499756, 'learning_rate': 
5.237807343535914e-06, 'epoch': 2.39} +{'eval_loss': 0.3516930937767029, 'eval_runtime': 103.8343, 'eval_samples_per_second': 601.42, 'eval_steps_per_second': 37.589, 'epoch': 2.39} +{'loss': 0.3696, 'grad_norm': 0.38487327098846436, 'learning_rate': 5.165414699002588e-06, 'epoch': 2.39} +{'eval_loss': 0.35208094120025635, 'eval_runtime': 122.8988, 'eval_samples_per_second': 508.125, 'eval_steps_per_second': 31.758, 'epoch': 2.39} +{'loss': 0.3637, 'grad_norm': 0.5129764676094055, 'learning_rate': 5.093468137648491e-06, 'epoch': 2.4} +{'eval_loss': 0.3517530858516693, 'eval_runtime': 139.0505, 'eval_samples_per_second': 449.103, 'eval_steps_per_second': 28.069, 'epoch': 2.4} +{'loss': 0.3692, 'grad_norm': 0.38549894094467163, 'learning_rate': 5.02196927756397e-06, 'epoch': 2.4} +{'eval_loss': 0.35150325298309326, 'eval_runtime': 115.9034, 'eval_samples_per_second': 538.794, 'eval_steps_per_second': 33.675, 'epoch': 2.4} +{'loss': 0.3682, 'grad_norm': 0.44216859340667725, 'learning_rate': 4.950919726770489e-06, 'epoch': 2.41} +{'eval_loss': 0.3513908088207245, 'eval_runtime': 97.5277, 'eval_samples_per_second': 640.31, 'eval_steps_per_second': 40.019, 'epoch': 2.41} +{'loss': 0.3683, 'grad_norm': 0.4402216672897339, 'learning_rate': 4.880321083184447e-06, 'epoch': 2.41} +{'eval_loss': 0.35096144676208496, 'eval_runtime': 94.3138, 'eval_samples_per_second': 662.13, 'eval_steps_per_second': 41.383, 'epoch': 2.41} +{'loss': 0.3681, 'grad_norm': 0.38294824957847595, 'learning_rate': 4.810174934581304e-06, 'epoch': 2.42} +{'eval_loss': 0.35136523842811584, 'eval_runtime': 125.3836, 'eval_samples_per_second': 498.056, 'eval_steps_per_second': 31.128, 'epoch': 2.42} +{'loss': 0.3647, 'grad_norm': 0.383380264043808, 'learning_rate': 4.740482858559808e-06, 'epoch': 2.42} +{'eval_loss': 0.35157155990600586, 'eval_runtime': 127.6842, 'eval_samples_per_second': 489.082, 'eval_steps_per_second': 30.568, 'epoch': 2.42} +{'loss': 0.3679, 'grad_norm': 0.3648233115673065, 
'learning_rate': 4.6712464225065286e-06, 'epoch': 2.42} +{'eval_loss': 0.35120806097984314, 'eval_runtime': 141.5636, 'eval_samples_per_second': 441.13, 'eval_steps_per_second': 27.571, 'epoch': 2.42} +{'loss': 0.3677, 'grad_norm': 0.4138227701187134, 'learning_rate': 4.602467183560633e-06, 'epoch': 2.43} +{'eval_loss': 0.3514823019504547, 'eval_runtime': 100.9275, 'eval_samples_per_second': 618.741, 'eval_steps_per_second': 38.671, 'epoch': 2.43} +{'loss': 0.3636, 'grad_norm': 0.45348599553108215, 'learning_rate': 4.5341466885788414e-06, 'epoch': 2.43} +{'eval_loss': 0.35130566358566284, 'eval_runtime': 82.6545, 'eval_samples_per_second': 755.531, 'eval_steps_per_second': 47.221, 'epoch': 2.43} +{'loss': 0.3652, 'grad_norm': 0.4356291592121124, 'learning_rate': 4.466286474100645e-06, 'epoch': 2.44} +{'eval_loss': 0.351826936006546, 'eval_runtime': 121.4109, 'eval_samples_per_second': 514.352, 'eval_steps_per_second': 32.147, 'epoch': 2.44} +{'loss': 0.3668, 'grad_norm': 0.34125441312789917, 'learning_rate': 4.398888066313747e-06, 'epoch': 2.44} +{'eval_loss': 0.35135695338249207, 'eval_runtime': 139.2695, 'eval_samples_per_second': 448.397, 'eval_steps_per_second': 28.025, 'epoch': 2.44} +{'loss': 0.3718, 'grad_norm': 0.378197580575943, 'learning_rate': 4.331952981019752e-06, 'epoch': 2.45} +{'eval_loss': 0.35104137659072876, 'eval_runtime': 133.769, 'eval_samples_per_second': 466.834, 'eval_steps_per_second': 29.177, 'epoch': 2.45} +{'loss': 0.3666, 'grad_norm': 0.4102264940738678, 'learning_rate': 4.265482723600039e-06, 'epoch': 2.45} +{'eval_loss': 0.3509397506713867, 'eval_runtime': 129.5076, 'eval_samples_per_second': 482.196, 'eval_steps_per_second': 30.137, 'epoch': 2.45} +{'loss': 0.3669, 'grad_norm': 0.4124221205711365, 'learning_rate': 4.199478788981947e-06, 'epoch': 2.46} +{'eval_loss': 0.3510943651199341, 'eval_runtime': 93.7518, 'eval_samples_per_second': 666.099, 'eval_steps_per_second': 41.631, 'epoch': 2.46} +{'loss': 0.3685, 'grad_norm': 
0.418006032705307, 'learning_rate': 4.133942661605136e-06, 'epoch': 2.46} +{'eval_loss': 0.35142460465431213, 'eval_runtime': 85.8316, 'eval_samples_per_second': 727.564, 'eval_steps_per_second': 45.473, 'epoch': 2.46} +{'loss': 0.3658, 'grad_norm': 0.4095633327960968, 'learning_rate': 4.068875815388195e-06, 'epoch': 2.46} +{'eval_loss': 0.35122179985046387, 'eval_runtime': 123.4348, 'eval_samples_per_second': 505.919, 'eval_steps_per_second': 31.62, 'epoch': 2.46} +{'loss': 0.3675, 'grad_norm': 0.3779546022415161, 'learning_rate': 4.004279713695511e-06, 'epoch': 2.47} +{'eval_loss': 0.35136300325393677, 'eval_runtime': 137.8338, 'eval_samples_per_second': 453.068, 'eval_steps_per_second': 28.317, 'epoch': 2.47} +{'loss': 0.3652, 'grad_norm': 0.3696914613246918, 'learning_rate': 3.940155809304338e-06, 'epoch': 2.47} +{'eval_loss': 0.3511875867843628, 'eval_runtime': 156.7278, 'eval_samples_per_second': 398.449, 'eval_steps_per_second': 24.903, 'epoch': 2.47} +{'loss': 0.3661, 'grad_norm': 0.4482646584510803, 'learning_rate': 3.87650554437213e-06, 'epoch': 2.48} +{'eval_loss': 0.35099679231643677, 'eval_runtime': 92.0739, 'eval_samples_per_second': 678.238, 'eval_steps_per_second': 42.39, 'epoch': 2.48} +{'loss': 0.3674, 'grad_norm': 0.3926308751106262, 'learning_rate': 3.813330350404115e-06, 'epoch': 2.48} +{'eval_loss': 0.3510962724685669, 'eval_runtime': 80.4351, 'eval_samples_per_second': 776.378, 'eval_steps_per_second': 48.524, 'epoch': 2.48} +{'loss': 0.3685, 'grad_norm': 0.48471933603286743, 'learning_rate': 3.7506316482210953e-06, 'epoch': 2.49} +{'eval_loss': 0.3512639105319977, 'eval_runtime': 98.3846, 'eval_samples_per_second': 634.734, 'eval_steps_per_second': 39.671, 'epoch': 2.49} +{'loss': 0.3666, 'grad_norm': 0.4462575614452362, 'learning_rate': 3.6884108479274924e-06, 'epoch': 2.49} +{'eval_loss': 0.35129058361053467, 'eval_runtime': 120.9847, 'eval_samples_per_second': 516.165, 'eval_steps_per_second': 32.26, 'epoch': 2.49} +{'loss': 0.3706, 
'grad_norm': 0.44375374913215637, 'learning_rate': 3.626669348879633e-06, 'epoch': 2.49} +{'eval_loss': 0.3506162166595459, 'eval_runtime': 127.3383, 'eval_samples_per_second': 490.41, 'eval_steps_per_second': 30.651, 'epoch': 2.49} +{'loss': 0.3715, 'grad_norm': 0.5077438950538635, 'learning_rate': 3.5654085396542774e-06, 'epoch': 2.5} +{'eval_loss': 0.35164642333984375, 'eval_runtime': 142.3651, 'eval_samples_per_second': 438.647, 'eval_steps_per_second': 27.415, 'epoch': 2.5} +{'loss': 0.3714, 'grad_norm': 0.39790230989456177, 'learning_rate': 3.504629798017384e-06, 'epoch': 2.5} +{'eval_loss': 0.35136154294013977, 'eval_runtime': 95.8688, 'eval_samples_per_second': 651.39, 'eval_steps_per_second': 40.712, 'epoch': 2.5} +{'loss': 0.363, 'grad_norm': 0.46687227487564087, 'learning_rate': 3.4443344908931392e-06, 'epoch': 2.51} +{'eval_loss': 0.35099467635154724, 'eval_runtime': 110.8227, 'eval_samples_per_second': 563.495, 'eval_steps_per_second': 35.218, 'epoch': 2.51} +{'loss': 0.3664, 'grad_norm': 0.5164113640785217, 'learning_rate': 3.3845239743332006e-06, 'epoch': 2.51} +{'eval_loss': 0.35126787424087524, 'eval_runtime': 128.5015, 'eval_samples_per_second': 485.971, 'eval_steps_per_second': 30.373, 'epoch': 2.51} +{'loss': 0.3631, 'grad_norm': 0.553812563419342, 'learning_rate': 3.325199593486206e-06, 'epoch': 2.52} +{'eval_loss': 0.35134491324424744, 'eval_runtime': 138.3781, 'eval_samples_per_second': 451.285, 'eval_steps_per_second': 28.205, 'epoch': 2.52} +{'loss': 0.3691, 'grad_norm': 0.42646127939224243, 'learning_rate': 3.2663626825675143e-06, 'epoch': 2.52} +{'eval_loss': 0.3515402674674988, 'eval_runtime': 153.0169, 'eval_samples_per_second': 408.112, 'eval_steps_per_second': 25.507, 'epoch': 2.52} +{'loss': 0.3667, 'grad_norm': 0.39521604776382446, 'learning_rate': 3.20801456482922e-06, 'epoch': 2.53} +{'eval_loss': 0.35145947337150574, 'eval_runtime': 90.9318, 'eval_samples_per_second': 686.757, 'eval_steps_per_second': 42.922, 'epoch': 2.53} 
+{'loss': 0.3645, 'grad_norm': 0.4303053021430969, 'learning_rate': 3.150156552530345e-06, 'epoch': 2.53} +{'eval_loss': 0.3512936532497406, 'eval_runtime': 105.7428, 'eval_samples_per_second': 590.565, 'eval_steps_per_second': 36.91, 'epoch': 2.53} +{'loss': 0.364, 'grad_norm': 0.3662849962711334, 'learning_rate': 3.092789946907382e-06, 'epoch': 2.53} +{'eval_loss': 0.3511791527271271, 'eval_runtime': 114.8314, 'eval_samples_per_second': 543.823, 'eval_steps_per_second': 33.989, 'epoch': 2.53} +{'loss': 0.3738, 'grad_norm': 0.38846585154533386, 'learning_rate': 3.035916038144998e-06, 'epoch': 2.54} +{'eval_loss': 0.3512164354324341, 'eval_runtime': 117.9938, 'eval_samples_per_second': 529.248, 'eval_steps_per_second': 33.078, 'epoch': 2.54} +{'loss': 0.371, 'grad_norm': 0.43464356660842896, 'learning_rate': 2.979536105347025e-06, 'epoch': 2.54} +{'eval_loss': 0.35147005319595337, 'eval_runtime': 170.5313, 'eval_samples_per_second': 366.197, 'eval_steps_per_second': 22.887, 'epoch': 2.54} +{'loss': 0.369, 'grad_norm': 0.35725393891334534, 'learning_rate': 2.923651416507689e-06, 'epoch': 2.55} +{'eval_loss': 0.351340115070343, 'eval_runtime': 129.2183, 'eval_samples_per_second': 483.275, 'eval_steps_per_second': 30.205, 'epoch': 2.55} +{'loss': 0.3664, 'grad_norm': 0.4899451434612274, 'learning_rate': 2.8682632284831007e-06, 'epoch': 2.55} +{'eval_loss': 0.3511752188205719, 'eval_runtime': 96.9918, 'eval_samples_per_second': 643.849, 'eval_steps_per_second': 40.241, 'epoch': 2.55} +{'loss': 0.3675, 'grad_norm': 0.48194587230682373, 'learning_rate': 2.813372786962973e-06, 'epoch': 2.56} +{'eval_loss': 0.3511773347854614, 'eval_runtime': 105.0096, 'eval_samples_per_second': 594.688, 'eval_steps_per_second': 37.168, 'epoch': 2.56} +{'loss': 0.3679, 'grad_norm': 0.4331003427505493, 'learning_rate': 2.758981326442625e-06, 'epoch': 2.56} +{'eval_loss': 0.35153886675834656, 'eval_runtime': 125.1988, 'eval_samples_per_second': 498.791, 'eval_steps_per_second': 31.174, 
'epoch': 2.56} +{'loss': 0.3684, 'grad_norm': 0.46817338466644287, 'learning_rate': 2.705090070195207e-06, 'epoch': 2.56} +{'eval_loss': 0.351002037525177, 'eval_runtime': 114.9504, 'eval_samples_per_second': 543.26, 'eval_steps_per_second': 33.954, 'epoch': 2.56} +{'loss': 0.369, 'grad_norm': 0.4476917088031769, 'learning_rate': 2.6517002302441917e-06, 'epoch': 2.57} +{'eval_loss': 0.3512870967388153, 'eval_runtime': 143.8889, 'eval_samples_per_second': 434.002, 'eval_steps_per_second': 27.125, 'epoch': 2.57} +{'loss': 0.3704, 'grad_norm': 0.48543980717658997, 'learning_rate': 2.5988130073361093e-06, 'epoch': 2.57} +{'eval_loss': 0.35153377056121826, 'eval_runtime': 82.7142, 'eval_samples_per_second': 754.985, 'eval_steps_per_second': 47.187, 'epoch': 2.57} +{'loss': 0.368, 'grad_norm': 0.3830122649669647, 'learning_rate': 2.54642959091356e-06, 'epoch': 2.58} +{'eval_loss': 0.35141441226005554, 'eval_runtime': 89.3658, 'eval_samples_per_second': 698.791, 'eval_steps_per_second': 43.674, 'epoch': 2.58} +{'loss': 0.3649, 'grad_norm': 0.4512677788734436, 'learning_rate': 2.494551159088426e-06, 'epoch': 2.58} +{'eval_loss': 0.3515666723251343, 'eval_runtime': 134.8213, 'eval_samples_per_second': 463.191, 'eval_steps_per_second': 28.949, 'epoch': 2.58} +{'loss': 0.3724, 'grad_norm': 0.4476074278354645, 'learning_rate': 2.443178878615429e-06, 'epoch': 2.59} +{'eval_loss': 0.35135358572006226, 'eval_runtime': 123.4412, 'eval_samples_per_second': 505.893, 'eval_steps_per_second': 31.618, 'epoch': 2.59} +{'loss': 0.3675, 'grad_norm': 0.45600590109825134, 'learning_rate': 2.392313904865845e-06, 'epoch': 2.59} +{'eval_loss': 0.3508789539337158, 'eval_runtime': 154.2303, 'eval_samples_per_second': 404.901, 'eval_steps_per_second': 25.306, 'epoch': 2.59} +{'loss': 0.3674, 'grad_norm': 0.4150031805038452, 'learning_rate': 2.3419573818015376e-06, 'epoch': 2.6} +{'eval_loss': 0.35101068019866943, 'eval_runtime': 94.4755, 'eval_samples_per_second': 660.997, 
'eval_steps_per_second': 41.312, 'epoch': 2.6} +{'loss': 0.3685, 'grad_norm': 0.4035954177379608, 'learning_rate': 2.29211044194923e-06, 'epoch': 2.6} +{'eval_loss': 0.3510052263736725, 'eval_runtime': 79.8641, 'eval_samples_per_second': 781.928, 'eval_steps_per_second': 48.871, 'epoch': 2.6} +{'loss': 0.3721, 'grad_norm': 0.43834781646728516, 'learning_rate': 2.2427742063750422e-06, 'epoch': 2.6} +{'eval_loss': 0.35085591673851013, 'eval_runtime': 102.3372, 'eval_samples_per_second': 610.218, 'eval_steps_per_second': 38.139, 'epoch': 2.6} +{'loss': 0.3718, 'grad_norm': 0.416268527507782, 'learning_rate': 2.1939497846592466e-06, 'epoch': 2.61} +{'eval_loss': 0.3508495092391968, 'eval_runtime': 112.9124, 'eval_samples_per_second': 553.066, 'eval_steps_per_second': 34.567, 'epoch': 2.61} +{'loss': 0.362, 'grad_norm': 0.39653027057647705, 'learning_rate': 2.1456382748713534e-06, 'epoch': 2.61} +{'eval_loss': 0.35131368041038513, 'eval_runtime': 141.0282, 'eval_samples_per_second': 442.805, 'eval_steps_per_second': 27.675, 'epoch': 2.61} +{'loss': 0.3685, 'grad_norm': 0.4130895733833313, 'learning_rate': 2.0978407635453946e-06, 'epoch': 2.62} +{'eval_loss': 0.35114437341690063, 'eval_runtime': 152.1045, 'eval_samples_per_second': 410.56, 'eval_steps_per_second': 25.66, 'epoch': 2.62} +{'loss': 0.3631, 'grad_norm': 0.4319893717765808, 'learning_rate': 2.0505583256554826e-06, 'epoch': 2.62} +{'eval_loss': 0.35119739174842834, 'eval_runtime': 81.8802, 'eval_samples_per_second': 762.676, 'eval_steps_per_second': 47.667, 'epoch': 2.62} +{'loss': 0.3705, 'grad_norm': 0.3915521800518036, 'learning_rate': 2.003792024591647e-06, 'epoch': 2.63} +{'eval_loss': 0.351089209318161, 'eval_runtime': 92.5459, 'eval_samples_per_second': 674.779, 'eval_steps_per_second': 42.174, 'epoch': 2.63} +{'loss': 0.3636, 'grad_norm': 0.4051631689071655, 'learning_rate': 1.957542912135915e-06, 'epoch': 2.63} +{'eval_loss': 0.3512691855430603, 'eval_runtime': 133.109, 'eval_samples_per_second': 
469.149, 'eval_steps_per_second': 29.322, 'epoch': 2.63} +{'loss': 0.37, 'grad_norm': 0.4447749853134155, 'learning_rate': 1.9118120284386365e-06, 'epoch': 2.64} +{'eval_loss': 0.3513627052307129, 'eval_runtime': 108.3028, 'eval_samples_per_second': 576.606, 'eval_steps_per_second': 36.038, 'epoch': 2.64} +{'loss': 0.3642, 'grad_norm': 0.36979731917381287, 'learning_rate': 1.8666004019951444e-06, 'epoch': 2.64} +{'eval_loss': 0.35203421115875244, 'eval_runtime': 147.5907, 'eval_samples_per_second': 423.116, 'eval_steps_per_second': 26.445, 'epoch': 2.64} +{'loss': 0.3655, 'grad_norm': 0.35966137051582336, 'learning_rate': 1.82190904962255e-06, 'epoch': 2.64} +{'eval_loss': 0.35130181908607483, 'eval_runtime': 97.0886, 'eval_samples_per_second': 643.206, 'eval_steps_per_second': 40.2, 'epoch': 2.64} +{'loss': 0.3672, 'grad_norm': 0.4409732222557068, 'learning_rate': 1.777738976436935e-06, 'epoch': 2.65} +{'eval_loss': 0.35073888301849365, 'eval_runtime': 94.8687, 'eval_samples_per_second': 658.257, 'eval_steps_per_second': 41.141, 'epoch': 2.65} +{'loss': 0.3665, 'grad_norm': 0.39864692091941833, 'learning_rate': 1.7340911758307182e-06, 'epoch': 2.65} +{'eval_loss': 0.35099026560783386, 'eval_runtime': 121.0741, 'eval_samples_per_second': 515.783, 'eval_steps_per_second': 32.236, 'epoch': 2.65} +{'loss': 0.3698, 'grad_norm': 0.4684802293777466, 'learning_rate': 1.6909666294503246e-06, 'epoch': 2.66} +{'eval_loss': 0.3511410057544708, 'eval_runtime': 132.5296, 'eval_samples_per_second': 471.2, 'eval_steps_per_second': 29.45, 'epoch': 2.66} +{'loss': 0.3647, 'grad_norm': 0.40662533044815063, 'learning_rate': 1.6483663071740873e-06, 'epoch': 2.66} +{'eval_loss': 0.3511432409286499, 'eval_runtime': 127.6047, 'eval_samples_per_second': 489.386, 'eval_steps_per_second': 30.587, 'epoch': 2.66} +{'loss': 0.3706, 'grad_norm': 0.47807586193084717, 'learning_rate': 1.6062911670904763e-06, 'epoch': 2.67} +{'eval_loss': 0.35053378343582153, 'eval_runtime': 149.1455, 
'eval_samples_per_second': 418.705, 'eval_steps_per_second': 26.169, 'epoch': 2.67} +{'loss': 0.3644, 'grad_norm': 0.45454785227775574, 'learning_rate': 1.5647421554765007e-06, 'epoch': 2.67} +{'eval_loss': 0.3510643243789673, 'eval_runtime': 89.5184, 'eval_samples_per_second': 697.6, 'eval_steps_per_second': 43.6, 'epoch': 2.67} +{'loss': 0.3715, 'grad_norm': 0.4715607762336731, 'learning_rate': 1.5237202067764634e-06, 'epoch': 2.67} +{'eval_loss': 0.35104724764823914, 'eval_runtime': 80.4344, 'eval_samples_per_second': 776.384, 'eval_steps_per_second': 48.524, 'epoch': 2.67} +{'loss': 0.366, 'grad_norm': 0.37225764989852905, 'learning_rate': 1.4832262435809291e-06, 'epoch': 2.68} +{'eval_loss': 0.35119783878326416, 'eval_runtime': 120.3833, 'eval_samples_per_second': 518.743, 'eval_steps_per_second': 32.421, 'epoch': 2.68} +{'loss': 0.3618, 'grad_norm': 0.4292348623275757, 'learning_rate': 1.4432611766059894e-06, 'epoch': 2.68} +{'eval_loss': 0.35088321566581726, 'eval_runtime': 120.9541, 'eval_samples_per_second': 516.295, 'eval_steps_per_second': 32.268, 'epoch': 2.68} +{'loss': 0.3649, 'grad_norm': 0.4157348871231079, 'learning_rate': 1.4038259046727508e-06, 'epoch': 2.69} +{'eval_loss': 0.3509560227394104, 'eval_runtime': 138.4352, 'eval_samples_per_second': 451.099, 'eval_steps_per_second': 28.194, 'epoch': 2.69} +{'loss': 0.372, 'grad_norm': 0.3891887664794922, 'learning_rate': 1.364921314687162e-06, 'epoch': 2.69} +{'eval_loss': 0.3517007529735565, 'eval_runtime': 122.818, 'eval_samples_per_second': 508.46, 'eval_steps_per_second': 31.779, 'epoch': 2.69} +{'loss': 0.3686, 'grad_norm': 0.46504390239715576, 'learning_rate': 1.3265482816200269e-06, 'epoch': 2.7} +{'eval_loss': 0.35135596990585327, 'eval_runtime': 96.2402, 'eval_samples_per_second': 648.877, 'eval_steps_per_second': 40.555, 'epoch': 2.7} +{'loss': 0.3653, 'grad_norm': 0.3750607669353485, 'learning_rate': 1.2887076684873545e-06, 'epoch': 2.7} +{'eval_loss': 0.351270467042923, 'eval_runtime': 
95.5052, 'eval_samples_per_second': 653.87, 'eval_steps_per_second': 40.867, 'epoch': 2.7} +{'loss': 0.3658, 'grad_norm': 0.4639289379119873, 'learning_rate': 1.2514003263309372e-06, 'epoch': 2.71} +{'eval_loss': 0.35140520334243774, 'eval_runtime': 125.6556, 'eval_samples_per_second': 496.977, 'eval_steps_per_second': 31.061, 'epoch': 2.71} +{'loss': 0.3672, 'grad_norm': 0.4592922627925873, 'learning_rate': 1.2146270941992082e-06, 'epoch': 2.71} +{'eval_loss': 0.35114553570747375, 'eval_runtime': 109.6141, 'eval_samples_per_second': 569.708, 'eval_steps_per_second': 35.607, 'epoch': 2.71} +{'loss': 0.3672, 'grad_norm': 0.4615425169467926, 'learning_rate': 1.1783887991283826e-06, 'epoch': 2.71} +{'eval_loss': 0.3513965904712677, 'eval_runtime': 137.7791, 'eval_samples_per_second': 453.247, 'eval_steps_per_second': 28.328, 'epoch': 2.71} +{'loss': 0.3691, 'grad_norm': 0.5629684329032898, 'learning_rate': 1.142686256123851e-06, 'epoch': 2.72} +{'eval_loss': 0.35112500190734863, 'eval_runtime': 94.381, 'eval_samples_per_second': 661.658, 'eval_steps_per_second': 41.354, 'epoch': 2.72} +{'loss': 0.3697, 'grad_norm': 0.3995133936405182, 'learning_rate': 1.1075202681418374e-06, 'epoch': 2.72} +{'eval_loss': 0.35112032294273376, 'eval_runtime': 88.1705, 'eval_samples_per_second': 708.264, 'eval_steps_per_second': 44.266, 'epoch': 2.72} +{'loss': 0.3682, 'grad_norm': 0.46045616269111633, 'learning_rate': 1.0728916260713679e-06, 'epoch': 2.73} +{'eval_loss': 0.35107994079589844, 'eval_runtime': 95.8466, 'eval_samples_per_second': 651.541, 'eval_steps_per_second': 40.721, 'epoch': 2.73} +{'loss': 0.3686, 'grad_norm': 0.38850948214530945, 'learning_rate': 1.0388011087164612e-06, 'epoch': 2.73} +{'eval_loss': 0.35103949904441833, 'eval_runtime': 113.4697, 'eval_samples_per_second': 550.35, 'eval_steps_per_second': 34.397, 'epoch': 2.73} +{'loss': 0.3658, 'grad_norm': 0.44793838262557983, 'learning_rate': 1.0052494827786169e-06, 'epoch': 2.74} +{'eval_loss': 0.3508547842502594, 
'eval_runtime': 111.569, 'eval_samples_per_second': 559.725, 'eval_steps_per_second': 34.983, 'epoch': 2.74} +{'loss': 0.3688, 'grad_norm': 0.45808571577072144, 'learning_rate': 9.722375028395819e-07, 'epoch': 2.74} +{'eval_loss': 0.3509957790374756, 'eval_runtime': 114.7751, 'eval_samples_per_second': 544.09, 'eval_steps_per_second': 34.006, 'epoch': 2.74} +{'loss': 0.3669, 'grad_norm': 0.42009884119033813, 'learning_rate': 9.39765911344373e-07, 'epoch': 2.74} +{'eval_loss': 0.3509400486946106, 'eval_runtime': 93.2751, 'eval_samples_per_second': 669.503, 'eval_steps_per_second': 41.844, 'epoch': 2.74} +{'loss': 0.3619, 'grad_norm': 0.38514113426208496, 'learning_rate': 9.0783543858457e-07, 'epoch': 2.75} +{'eval_loss': 0.35110822319984436, 'eval_runtime': 83.9185, 'eval_samples_per_second': 744.15, 'eval_steps_per_second': 46.509, 'epoch': 2.75} +{'loss': 0.3659, 'grad_norm': 0.42309698462486267, 'learning_rate': 8.764468026819128e-07, 'epoch': 2.75} +{'eval_loss': 0.3513908386230469, 'eval_runtime': 108.015, 'eval_samples_per_second': 578.142, 'eval_steps_per_second': 36.134, 'epoch': 2.75} +{'loss': 0.3623, 'grad_norm': 0.4394840598106384, 'learning_rate': 8.45600709572128e-07, 'epoch': 2.76} +{'eval_loss': 0.35143470764160156, 'eval_runtime': 137.5521, 'eval_samples_per_second': 453.995, 'eval_steps_per_second': 28.375, 'epoch': 2.76} +{'loss': 0.3674, 'grad_norm': 0.4252680838108063, 'learning_rate': 8.152978529890748e-07, 'epoch': 2.76} +{'eval_loss': 0.35119831562042236, 'eval_runtime': 127.8116, 'eval_samples_per_second': 488.594, 'eval_steps_per_second': 30.537, 'epoch': 2.76} +{'loss': 0.3687, 'grad_norm': 0.4132484495639801, 'learning_rate': 7.855389144491215e-07, 'epoch': 2.77} +{'eval_loss': 0.35118624567985535, 'eval_runtime': 101.8495, 'eval_samples_per_second': 613.14, 'eval_steps_per_second': 38.321, 'epoch': 2.77} +{'loss': 0.3653, 'grad_norm': 0.39942291378974915, 'learning_rate': 7.563245632358357e-07, 'epoch': 2.77} +{'eval_loss': 
0.35106736421585083, 'eval_runtime': 89.165, 'eval_samples_per_second': 700.364, 'eval_steps_per_second': 43.773, 'epoch': 2.77} +{'loss': 0.3667, 'grad_norm': 0.4023812413215637, 'learning_rate': 7.276554563849097e-07, 'epoch': 2.78} +{'eval_loss': 0.3509797155857086, 'eval_runtime': 97.3752, 'eval_samples_per_second': 641.313, 'eval_steps_per_second': 40.082, 'epoch': 2.78} +{'loss': 0.3648, 'grad_norm': 0.45711442828178406, 'learning_rate': 6.99532238669412e-07, 'epoch': 2.78} +{'eval_loss': 0.3513014018535614, 'eval_runtime': 121.0798, 'eval_samples_per_second': 515.759, 'eval_steps_per_second': 32.235, 'epoch': 2.78} +{'loss': 0.3718, 'grad_norm': 0.5096409320831299, 'learning_rate': 6.719555425852703e-07, 'epoch': 2.78} +{'eval_loss': 0.35115715861320496, 'eval_runtime': 111.7503, 'eval_samples_per_second': 558.817, 'eval_steps_per_second': 34.926, 'epoch': 2.78} +{'loss': 0.3684, 'grad_norm': 0.45939719676971436, 'learning_rate': 6.44925988337039e-07, 'epoch': 2.79} +{'eval_loss': 0.35093191266059875, 'eval_runtime': 130.0395, 'eval_samples_per_second': 480.223, 'eval_steps_per_second': 30.014, 'epoch': 2.79} +{'loss': 0.3669, 'grad_norm': 0.4110739529132843, 'learning_rate': 6.184441838239713e-07, 'epoch': 2.79} +{'eval_loss': 0.35143327713012695, 'eval_runtime': 93.9812, 'eval_samples_per_second': 664.473, 'eval_steps_per_second': 41.53, 'epoch': 2.79} +{'loss': 0.3697, 'grad_norm': 0.4315016269683838, 'learning_rate': 5.9251072462633e-07, 'epoch': 2.8} +{'eval_loss': 0.3509359061717987, 'eval_runtime': 88.6358, 'eval_samples_per_second': 704.546, 'eval_steps_per_second': 44.034, 'epoch': 2.8} +{'loss': 0.3706, 'grad_norm': 0.3906550109386444, 'learning_rate': 5.671261939919986e-07, 'epoch': 2.8} +{'eval_loss': 0.35094746947288513, 'eval_runtime': 131.5894, 'eval_samples_per_second': 474.567, 'eval_steps_per_second': 29.66, 'epoch': 2.8} +{'loss': 0.3737, 'grad_norm': 0.4342848062515259, 'learning_rate': 5.422911628233662e-07, 'epoch': 2.81} +{'eval_loss': 
0.3513799011707306, 'eval_runtime': 141.4321, 'eval_samples_per_second': 441.54, 'eval_steps_per_second': 27.596, 'epoch': 2.81} +{'loss': 0.3681, 'grad_norm': 0.45895883440971375, 'learning_rate': 5.180061896644856e-07, 'epoch': 2.81} +{'eval_loss': 0.3510804772377014, 'eval_runtime': 131.8407, 'eval_samples_per_second': 473.663, 'eval_steps_per_second': 29.604, 'epoch': 2.81} +{'loss': 0.3677, 'grad_norm': 0.5005194544792175, 'learning_rate': 4.942718206885133e-07, 'epoch': 2.82} +{'eval_loss': 0.35118669271469116, 'eval_runtime': 106.1511, 'eval_samples_per_second': 588.294, 'eval_steps_per_second': 36.768, 'epoch': 2.82} +{'loss': 0.3681, 'grad_norm': 0.44260910153388977, 'learning_rate': 4.7108858968542005e-07, 'epoch': 2.82} +{'eval_loss': 0.3508269488811493, 'eval_runtime': 93.0117, 'eval_samples_per_second': 671.399, 'eval_steps_per_second': 41.962, 'epoch': 2.82} +{'loss': 0.3732, 'grad_norm': 0.39278268814086914, 'learning_rate': 4.4845701804999974e-07, 'epoch': 2.82} +{'eval_loss': 0.35097867250442505, 'eval_runtime': 99.6479, 'eval_samples_per_second': 626.687, 'eval_steps_per_second': 39.168, 'epoch': 2.82} +{'loss': 0.3663, 'grad_norm': 0.4718637466430664, 'learning_rate': 4.2637761477012374e-07, 'epoch': 2.83} +{'eval_loss': 0.3508548140525818, 'eval_runtime': 109.599, 'eval_samples_per_second': 569.786, 'eval_steps_per_second': 35.612, 'epoch': 2.83} +{'loss': 0.3674, 'grad_norm': 0.4400103986263275, 'learning_rate': 4.0485087641530526e-07, 'epoch': 2.83} +{'eval_loss': 0.3506525456905365, 'eval_runtime': 146.3411, 'eval_samples_per_second': 426.729, 'eval_steps_per_second': 26.671, 'epoch': 2.83} +{'loss': 0.3689, 'grad_norm': 0.45679983496665955, 'learning_rate': 3.838772871255336e-07, 'epoch': 2.84} +{'eval_loss': 0.3507118225097656, 'eval_runtime': 111.1081, 'eval_samples_per_second': 562.047, 'eval_steps_per_second': 35.128, 'epoch': 2.84} +{'loss': 0.3699, 'grad_norm': 0.415866881608963, 'learning_rate': 3.6345731860037977e-07, 'epoch': 2.84} 
+{'eval_loss': 0.3512241840362549, 'eval_runtime': 89.846, 'eval_samples_per_second': 695.056, 'eval_steps_per_second': 43.441, 'epoch': 2.84} +{'loss': 0.368, 'grad_norm': 0.4824216663837433, 'learning_rate': 3.435914300883941e-07, 'epoch': 2.85} +{'eval_loss': 0.35092663764953613, 'eval_runtime': 109.6969, 'eval_samples_per_second': 569.278, 'eval_steps_per_second': 35.58, 'epoch': 2.85} +{'loss': 0.3636, 'grad_norm': 0.4795026481151581, 'learning_rate': 3.2428006837676997e-07, 'epoch': 2.85} +{'eval_loss': 0.3513595163822174, 'eval_runtime': 113.1578, 'eval_samples_per_second': 551.866, 'eval_steps_per_second': 34.492, 'epoch': 2.85} +{'loss': 0.3659, 'grad_norm': 0.4096687436103821, 'learning_rate': 3.0552366778129336e-07, 'epoch': 2.85} +{'eval_loss': 0.3511112630367279, 'eval_runtime': 149.891, 'eval_samples_per_second': 416.623, 'eval_steps_per_second': 26.039, 'epoch': 2.85} +{'loss': 0.3689, 'grad_norm': 0.43160685896873474, 'learning_rate': 2.873226501365928e-07, 'epoch': 2.86} +{'eval_loss': 0.3513765335083008, 'eval_runtime': 129.803, 'eval_samples_per_second': 481.098, 'eval_steps_per_second': 30.069, 'epoch': 2.86} +{'loss': 0.3694, 'grad_norm': 0.4026382863521576, 'learning_rate': 2.696774247866324e-07, 'epoch': 2.86} +{'eval_loss': 0.3509191572666168, 'eval_runtime': 88.5142, 'eval_samples_per_second': 705.514, 'eval_steps_per_second': 44.095, 'epoch': 2.86} +{'loss': 0.3637, 'grad_norm': 0.3494757413864136, 'learning_rate': 2.5258838857551706e-07, 'epoch': 2.87} +{'eval_loss': 0.3509976267814636, 'eval_runtime': 79.7924, 'eval_samples_per_second': 782.63, 'eval_steps_per_second': 48.914, 'epoch': 2.87} +{'loss': 0.3653, 'grad_norm': 0.5308791995048523, 'learning_rate': 2.3605592583856307e-07, 'epoch': 2.87} +{'eval_loss': 0.3507205843925476, 'eval_runtime': 106.5162, 'eval_samples_per_second': 586.277, 'eval_steps_per_second': 36.642, 'epoch': 2.87} +{'loss': 0.3637, 'grad_norm': 0.3922443389892578, 'learning_rate': 2.2008040839365252e-07, 'epoch': 
2.88} +{'eval_loss': 0.35095104575157166, 'eval_runtime': 104.6458, 'eval_samples_per_second': 596.756, 'eval_steps_per_second': 37.297, 'epoch': 2.88} +{'loss': 0.3692, 'grad_norm': 0.4426773190498352, 'learning_rate': 2.0466219553287592e-07, 'epoch': 2.88} +{'eval_loss': 0.35050168633461, 'eval_runtime': 134.945, 'eval_samples_per_second': 462.766, 'eval_steps_per_second': 28.923, 'epoch': 2.88} +{'loss': 0.3706, 'grad_norm': 0.3489360809326172, 'learning_rate': 1.8980163401444984e-07, 'epoch': 2.89} +{'eval_loss': 0.351498544216156, 'eval_runtime': 113.0824, 'eval_samples_per_second': 552.235, 'eval_steps_per_second': 34.515, 'epoch': 2.89} +{'loss': 0.3665, 'grad_norm': 0.44714200496673584, 'learning_rate': 1.7549905805491762e-07, 'epoch': 2.89} +{'eval_loss': 0.35072797536849976, 'eval_runtime': 98.6368, 'eval_samples_per_second': 633.111, 'eval_steps_per_second': 39.569, 'epoch': 2.89} +{'loss': 0.3695, 'grad_norm': 0.41609352827072144, 'learning_rate': 1.6175478932163036e-07, 'epoch': 2.89} +{'eval_loss': 0.35083237290382385, 'eval_runtime': 87.6367, 'eval_samples_per_second': 712.578, 'eval_steps_per_second': 44.536, 'epoch': 2.89} +{'loss': 0.3632, 'grad_norm': 0.4400606155395508, 'learning_rate': 1.4856913692551656e-07, 'epoch': 2.9} +{'eval_loss': 0.3511298596858978, 'eval_runtime': 110.9906, 'eval_samples_per_second': 562.642, 'eval_steps_per_second': 35.165, 'epoch': 2.9} +{'loss': 0.3667, 'grad_norm': 0.34448227286338806, 'learning_rate': 1.3594239741413495e-07, 'epoch': 2.9} +{'eval_loss': 0.3507257103919983, 'eval_runtime': 142.2892, 'eval_samples_per_second': 438.881, 'eval_steps_per_second': 27.43, 'epoch': 2.9} +{'loss': 0.3627, 'grad_norm': 0.4481442868709564, 'learning_rate': 1.2387485476499094e-07, 'epoch': 2.91} +{'eval_loss': 0.35108333826065063, 'eval_runtime': 143.0255, 'eval_samples_per_second': 436.622, 'eval_steps_per_second': 27.289, 'epoch': 2.91} +{'loss': 0.3678, 'grad_norm': 0.4151548743247986, 'learning_rate': 
1.123667803791556e-07, 'epoch': 2.91} +{'eval_loss': 0.3513585925102234, 'eval_runtime': 83.2406, 'eval_samples_per_second': 750.211, 'eval_steps_per_second': 46.888, 'epoch': 2.91} +{'loss': 0.3658, 'grad_norm': 0.3967318832874298, 'learning_rate': 1.0141843307517606e-07, 'epoch': 2.92} +{'eval_loss': 0.3507043123245239, 'eval_runtime': 95.7903, 'eval_samples_per_second': 651.924, 'eval_steps_per_second': 40.745, 'epoch': 2.92} +{'loss': 0.3671, 'grad_norm': 0.4078242778778076, 'learning_rate': 9.103005908323026e-08, 'epoch': 2.92} +{'eval_loss': 0.35138261318206787, 'eval_runtime': 127.7608, 'eval_samples_per_second': 488.788, 'eval_steps_per_second': 30.549, 'epoch': 2.92} +{'loss': 0.3699, 'grad_norm': 0.46738553047180176, 'learning_rate': 8.120189203960904e-08, 'epoch': 2.93} +{'eval_loss': 0.3508794903755188, 'eval_runtime': 109.1354, 'eval_samples_per_second': 572.207, 'eval_steps_per_second': 35.763, 'epoch': 2.93} +{'loss': 0.3707, 'grad_norm': 0.45415225625038147, 'learning_rate': 7.193415298145101e-08, 'epoch': 2.93} +{'eval_loss': 0.3507305979728699, 'eval_runtime': 143.9631, 'eval_samples_per_second': 433.778, 'eval_steps_per_second': 27.111, 'epoch': 2.93} +{'loss': 0.3658, 'grad_norm': 0.4644428789615631, 'learning_rate': 6.322705034177978e-08, 'epoch': 2.93} +{'eval_loss': 0.3515267074108124, 'eval_runtime': 91.4215, 'eval_samples_per_second': 683.078, 'eval_steps_per_second': 42.692, 'epoch': 2.93} +{'loss': 0.3614, 'grad_norm': 0.46198973059654236, 'learning_rate': 5.508077994479943e-08, 'epoch': 2.94} +{'eval_loss': 0.3510913848876953, 'eval_runtime': 92.432, 'eval_samples_per_second': 675.611, 'eval_steps_per_second': 42.226, 'epoch': 2.94} +{'loss': 0.3721, 'grad_norm': 0.4331490397453308, 'learning_rate': 4.749552500151466e-08, 'epoch': 2.94} +{'eval_loss': 0.35093238949775696, 'eval_runtime': 111.5794, 'eval_samples_per_second': 559.673, 'eval_steps_per_second': 34.98, 'epoch': 2.94} +{'loss': 0.3678, 'grad_norm': 0.48135751485824585, 
'learning_rate': 4.047145610559244e-08, 'epoch': 2.95} +{'eval_loss': 0.3509337902069092, 'eval_runtime': 104.839, 'eval_samples_per_second': 595.656, 'eval_steps_per_second': 37.228, 'epoch': 2.95} +{'loss': 0.372, 'grad_norm': 0.44093143939971924, 'learning_rate': 3.400873122953174e-08, 'epoch': 2.95} +{'eval_loss': 0.3509073257446289, 'eval_runtime': 122.3194, 'eval_samples_per_second': 510.532, 'eval_steps_per_second': 31.908, 'epoch': 2.95} +{'loss': 0.3667, 'grad_norm': 0.49598580598831177, 'learning_rate': 2.8107495721102495e-08, 'epoch': 2.96} +{'eval_loss': 0.3507257103919983, 'eval_runtime': 110.2907, 'eval_samples_per_second': 566.212, 'eval_steps_per_second': 35.388, 'epoch': 2.96} +{'loss': 0.3651, 'grad_norm': 0.3804068863391876, 'learning_rate': 2.276788230009541e-08, 'epoch': 2.96} +{'eval_loss': 0.3514425456523895, 'eval_runtime': 98.2078, 'eval_samples_per_second': 635.876, 'eval_steps_per_second': 39.742, 'epoch': 2.96} +{'loss': 0.3693, 'grad_norm': 0.5194698572158813, 'learning_rate': 1.7990011055318833e-08, 'epoch': 2.96} +{'eval_loss': 0.3511887192726135, 'eval_runtime': 106.5304, 'eval_samples_per_second': 586.199, 'eval_steps_per_second': 36.637, 'epoch': 2.96} +{'loss': 0.3664, 'grad_norm': 0.5402436256408691, 'learning_rate': 1.3773989441903668e-08, 'epoch': 2.97} +{'eval_loss': 0.35086843371391296, 'eval_runtime': 129.1184, 'eval_samples_per_second': 483.649, 'eval_steps_per_second': 30.228, 'epoch': 2.97} +{'loss': 0.3674, 'grad_norm': 0.4166225790977478, 'learning_rate': 1.0119912278888644e-08, 'epoch': 2.97} +{'eval_loss': 0.3513844907283783, 'eval_runtime': 139.7902, 'eval_samples_per_second': 446.726, 'eval_steps_per_second': 27.92, 'epoch': 2.97} +{'loss': 0.3706, 'grad_norm': 0.38080069422721863, 'learning_rate': 7.027861747091469e-09, 'epoch': 2.98} +{'eval_loss': 0.3509041666984558, 'eval_runtime': 97.7727, 'eval_samples_per_second': 638.706, 'eval_steps_per_second': 39.919, 'epoch': 2.98} +{'loss': 0.3653, 'grad_norm': 
0.36023059487342834, 'learning_rate': 4.497907387246425e-09, 'epoch': 2.98} +{'eval_loss': 0.3507620692253113, 'eval_runtime': 94.7326, 'eval_samples_per_second': 659.203, 'eval_steps_per_second': 41.2, 'epoch': 2.98} +{'loss': 0.3699, 'grad_norm': 0.4545798599720001, 'learning_rate': 2.530106098458385e-09, 'epoch': 2.99} +{'eval_loss': 0.35133281350135803, 'eval_runtime': 110.4322, 'eval_samples_per_second': 565.487, 'eval_steps_per_second': 35.343, 'epoch': 2.99} +{'loss': 0.3663, 'grad_norm': 0.41778281331062317, 'learning_rate': 1.1245021369121755e-09, 'epoch': 2.99} +{'eval_loss': 0.3508669137954712, 'eval_runtime': 124.2416, 'eval_samples_per_second': 502.633, 'eval_steps_per_second': 31.415, 'epoch': 2.99} +{'loss': 0.3646, 'grad_norm': 0.409932017326355, 'learning_rate': 2.811271148761563e-10, 'epoch': 3.0} +{'eval_loss': 0.35047122836112976, 'eval_runtime': 142.3213, 'eval_samples_per_second': 438.782, 'eval_steps_per_second': 27.424, 'epoch': 3.0} +{'train_runtime': 80815.4027, 'train_samples_per_second': 10.818, 'train_steps_per_second': 0.169, 'train_loss': 0.4309546582144973, 'epoch': 3.0} +***** train metrics ***** + epoch = 2.9995 + total_flos = 52746520GF + train_loss = 0.431 + train_runtime = 22:26:55.40 + train_samples_per_second = 10.818 + train_steps_per_second = 0.169 +***** eval metrics ***** + epoch = 2.9995 + eval_loss = 0.3506 + eval_runtime = 0:01:30.74 + eval_samples_per_second = 688.189 + eval_steps_per_second = 43.012 + perplexity = 1.42 +wandb: +wandb: 🚀 View run Se124M500KInfPrompt_EOS at: https://wandb.ai/symbolic-gression/huggingface/runs/1xk2u046 +wandb: Find logs at: wandb/run-20250510_155211-1xk2u046/logs diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..5f8a3b3a62fa16fce789a47af07bdb9127c69862 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,30 @@ +--extra-index-url https://download.pytorch.org/whl/cu121 +# Core Hugging Face and Deep Learning +transformers==4.51.3
+torch==2.5.1 +torchvision==0.20.1 +torchaudio==2.5.1 + +accelerate==1.6.0 +python-dotenv==1.0.1 +datasets==3.5.0 +evaluate==0.4.1 +huggingface-hub==0.30.2 + +# Parameter-Efficient Fine-Tuning (PEFT) +peft==0.15.1 + +# Evaluation and utilities +scikit-learn==1.6.1 +numpy==1.26.4 +pandas==2.2.1 +tqdm==4.67.1 +sympy==1.13.1 +regex==2024.11.6 + +# Logging and visualization +tensorboard==2.16.2 +wandb>=0.24.1 # Updated version to support the new API key format (wandb_v1_...) + +# Advanced fine-tuning (SFT, DPO, etc.) +trl==0.16.1 diff --git a/scripts/aws/analyze_model.sh b/scripts/aws/analyze_model.sh new file mode 100644 index 0000000000000000000000000000000000000000..48a2e24de319bcc8fdb43c2a0e822129fe0aabde --- /dev/null +++ b/scripts/aws/analyze_model.sh @@ -0,0 +1,203 @@ +#!/bin/bash +# Automatic Model Analysis Script +# Runs evaluation and generation analysis after training + +set -e + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[INFO]${NC} $1"; } +print_header() { echo -e "\n${BLUE}========================================\n$1\n========================================${NC}\n"; } + +# Parameters +MODEL_PATH="${1:-./output/Se124M_700K_infix}" +DATA_COLUMN="${2:-i_prompt_n}" +DATASET_REPO="augustocsc/sintetico_natural" +DATA_DIR="700K" +NUM_SAMPLES=500 +NUM_GENERATIONS=100 + +# Directories +PROJECT_DIR="/home/ubuntu/seriguela" +OUTPUT_DIR="$HOME/analysis_results_$(date +%Y%m%d_%H%M%S)" +mkdir -p "$OUTPUT_DIR" + +cd "$PROJECT_DIR" +source venv/bin/activate + +print_header "Automatic Model Analysis" +print_status "Model: $MODEL_PATH" +print_status "Output: $OUTPUT_DIR" +echo "" + +# ============================================================================= +# 1. EVALUATE MODEL +# ============================================================================= +print_header "Step 1: Model Evaluation" +print_status "Running evaluation on $NUM_SAMPLES samples..."
+
+python scripts/evaluate.py \
+    --model_path "$MODEL_PATH" \
+    --dataset_repo_id "$DATASET_REPO" \
+    --data_dir "$DATA_DIR" \
+    --data_column "$DATA_COLUMN" \
+    --num_samples "$NUM_SAMPLES" \
+    --output_dir "$OUTPUT_DIR/evaluation" \
+    --temperature 0.7 \
+    --seed 42 \
+    2>&1 | tee "$OUTPUT_DIR/evaluation.log"
+# $? after the pipeline is tee's exit status; read python's from PIPESTATUS[0]
+EVAL_STATUS=${PIPESTATUS[0]}
+
+if [ "$EVAL_STATUS" -eq 0 ]; then
+    print_status "✅ Evaluation completed"
+else
+    print_status "⚠️ Evaluation had issues"
+fi
+
+# =============================================================================
+# 2. GENERATE SAMPLES
+# =============================================================================
+print_header "Step 2: Sample Generation & Validation"
+print_status "Generating $NUM_GENERATIONS samples with validation..."
+
+python scripts/generate.py \
+    --model_path "$MODEL_PATH" \
+    --num_generations "$NUM_GENERATIONS" \
+    --validate \
+    --output_file "$OUTPUT_DIR/generations.txt" \
+    --temperature 0.8 \
+    --top_p 0.95 \
+    --seed 42 \
+    2>&1 | tee "$OUTPUT_DIR/generation.log"
+GEN_STATUS=${PIPESTATUS[0]}
+
+if [ "$GEN_STATUS" -eq 0 ]; then
+    print_status "✅ Generation completed"
+else
+    print_status "⚠️ Generation had issues"
+fi
+
+# =============================================================================
+# 3. ANALYZE TRAINING LOGS
+# =============================================================================
+print_header "Step 3: Training Log Analysis"
+print_status "Extracting training metrics..."
+
+TRAINING_LOG="$HOME/training_success.log"
+
+if [ -f "$TRAINING_LOG" ]; then
+    # Extract loss values
+    grep -E "'loss':|train_loss|eval_loss" "$TRAINING_LOG" > "$OUTPUT_DIR/training_metrics.txt" 2>/dev/null || true
+
+    # Extract epoch summaries
+    grep -E "epoch.*loss" "$TRAINING_LOG" | tail -20 > "$OUTPUT_DIR/epoch_summary.txt" 2>/dev/null || true
+
+    # Count total steps: keep the step number of the last "N/21882" marker.
+    # (A greedy "sed 's/.*\([0-9]\+\)\/21882.*/\1/'" would capture only the
+    # final digit of N, so split on the slash instead.)
+    TOTAL_STEPS=$(grep -oE "[0-9]+/21882" "$TRAINING_LOG" | tail -1 | cut -d/ -f1)
+    TOTAL_STEPS=${TOTAL_STEPS:-0}
+
+    print_status "Total training steps: $TOTAL_STEPS"
+fi
+
+# =============================================================================
+# 4. CREATE SUMMARY REPORT
+# =============================================================================
+print_header "Step 4: Creating Analysis Report"
+
+cat > "$OUTPUT_DIR/ANALYSIS_REPORT.md" << 'EOFREPORT'
+# Training Analysis Report
+**Generated:** $(date)
+
+## 📊 Model Information
+- **Architecture:** GPT-2 Small (124M parameters)
+- **Training Method:** LoRA (294K trainable parameters, 0.24%)
+- **Dataset:** 700K samples (infix notation)
+- **Training Duration:** $(grep "Training Duration:" $HOME/training_notification.txt 2>/dev/null | head -1 || echo "N/A")
+
+## 📈 Training Metrics
+
+### Loss Progression
+```
+$(tail -20 $OUTPUT_DIR/training_metrics.txt 2>/dev/null || echo "No metrics available")
+```
+
+### Epoch Summary
+```
+$(cat $OUTPUT_DIR/epoch_summary.txt 2>/dev/null || echo "No epoch data available")
+```
+
+## 🎯 Evaluation Results
+
+### Performance Metrics
+```
+$(grep -E "Accuracy|Loss|Perplexity" $OUTPUT_DIR/evaluation.log 2>/dev/null || echo "Check evaluation.log for details")
+```
+
+### Sample Predictions
+```
+$(head -50 $OUTPUT_DIR/evaluation/*.txt 2>/dev/null | head -20 || echo "No evaluation samples found")
+```
+
+## 🔮 Generation Quality
+
+### Validation Results
+```
+$(grep -E "Valid:|Success|Failed" $OUTPUT_DIR/generation.log | head -20 || echo "Check generation.log")
+```
+
+### Sample Generations
+```
+$(head -30 $OUTPUT_DIR/generations.txt 2>/dev/null || echo "No generations file found")
+```
+
+## 📁 Output Files
+- Evaluation results: `evaluation/`
+- Generated samples: `generations.txt`
+- Full logs: `evaluation.log`, `generation.log`
+- Training metrics: `training_metrics.txt`
+
+## 🔗 Resources
+- **Wandb Dashboard:** https://wandb.ai/symbolic-gression/seriguela_700K_test
+- **HuggingFace Model:** https://huggingface.co/augustocsc/Se124M_700K_infix
+- **Analysis Directory:** $OUTPUT_DIR
+
+---
+*Generated automatically by analyze_model.sh*
+EOFREPORT
+
+# Second pass: expand the $(...) placeholders that the quoted heredoc above
+# captured literally. The delimiter must be UNQUOTED here, and backticks must
+# be escaped first so the markdown code fences survive the expansion.
+REPORT_TEMPLATE=$(sed 's/`/\\`/g' "$OUTPUT_DIR/ANALYSIS_REPORT.md")
+eval "cat > \"$OUTPUT_DIR/ANALYSIS_REPORT.md\" << EOFREPORT
+$REPORT_TEMPLATE
+EOFREPORT"
+
+print_status "Report created: $OUTPUT_DIR/ANALYSIS_REPORT.md"
+
+# =============================================================================
+# 5. FINAL SUMMARY
+# =============================================================================
+print_header "Analysis Complete!"
+echo ""
+print_status "All results saved to: $OUTPUT_DIR"
+print_status "Main report: $OUTPUT_DIR/ANALYSIS_REPORT.md"
+echo ""
+print_status "Key files:"
+echo "  - Evaluation: $OUTPUT_DIR/evaluation.log"
+echo "  - Generation: $OUTPUT_DIR/generation.log"
+echo "  - Metrics: $OUTPUT_DIR/training_metrics.txt"
+echo "  - Report: $OUTPUT_DIR/ANALYSIS_REPORT.md"
+echo ""
+print_status "View the full report with:"
+echo "  cat $OUTPUT_DIR/ANALYSIS_REPORT.md"
+echo ""
+
+# Create a quick summary.
+# grep -c always prints a count, so use '|| true' (not '|| echo 0'):
+# with no matches, grep prints "0" AND exits 1, and the old '|| echo 0'
+# produced a doubled "0 0".
+EVAL_SUCCESS=$(grep -c "✅" "$OUTPUT_DIR/evaluation.log" 2>/dev/null || true)
+GEN_SUCCESS=$(grep -c "Valid" "$OUTPUT_DIR/generation.log" 2>/dev/null || true)
+
+print_header "Quick Summary"
+echo "Evaluation samples processed: $NUM_SAMPLES"
+echo "Generations created: $NUM_GENERATIONS"
+echo "Success markers in evaluation log: ${EVAL_SUCCESS:-0}"
+echo "Valid generations reported: ${GEN_SUCCESS:-0}"
+echo "Check logs for detailed metrics and quality assessment"
+echo ""
+print_status "Done!"
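Editorial note on `analyze_model.sh`: each `python … | tee log` step is a classic pitfall, because `$?` after a pipeline reports the exit status of the *last* stage (`tee`), so a failing Python run looks like a success. Bash's `PIPESTATUS` array exposes the status of every stage. A minimal standalone sketch of the idiom (bash-specific; illustrative only, not part of the repository):

```shell
#!/bin/bash
# $? after 'cmd | tee' reflects tee, not cmd.
false | tee /dev/null
echo "exit status via \$?: $?"               # prints 0 -- the failure of 'false' is hidden

# PIPESTATUS holds one status per pipeline stage; read it immediately,
# before any other command overwrites it.
false | tee /dev/null
RC=${PIPESTATUS[0]}
echo "exit status via PIPESTATUS[0]: $RC"    # prints 1 -- the real status of 'false'
```

Under `set -e` the masking is the same: the pipeline as a whole succeeds because `tee` succeeds, so the script keeps running; capturing `${PIPESTATUS[0]}` into a variable right after the pipeline is what makes the later `if` checks meaningful.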
diff --git a/scripts/aws/evaluate_models.sh b/scripts/aws/evaluate_models.sh new file mode 100644 index 0000000000000000000000000000000000000000..5c4cafce5992a4f983c93295c6ad5acbb0f3b114 --- /dev/null +++ b/scripts/aws/evaluate_models.sh @@ -0,0 +1,62 @@ +#!/bin/bash +# Script to evaluate two models on AWS and compare results +# This script compares the original model (without end token) with the v2 model (with end token) +# Usage: bash scripts/aws/evaluate_models.sh + +set -e + +echo "==========================================" +echo "Model Comparison: v1 vs v2" +echo "==========================================" +echo "Model 1: augustocsc/Se124M_700K_infix (original)" +echo "Model 2: augustocsc/Se124M_700K_infix_v2 (with <|endofex|> token)" +echo "==========================================" +echo "" + +# Activate virtual environment +source ~/seriguela/venv/bin/activate +cd ~/seriguela + +# Set up logging +LOG_FILE="evaluation_$(date +%Y%m%d_%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "[$(date)] Starting evaluation..." +echo "" + +# Check GPU availability +echo "Checking GPU..." +if nvidia-smi &> /dev/null; then + nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv,noheader + echo "" +else + echo "WARNING: No GPU detected. Evaluation will be slow." + echo "" +fi + +# Run comparison +echo "Running model comparison..." +echo "This will evaluate both models on 500 samples from the test set." +echo "" + +python scripts/compare_models.py \ + --model1 augustocsc/Se124M_700K_infix \ + --model2 augustocsc/Se124M_700K_infix_v2 \ + --model1_name "Original (no end token)" \ + --model2_name "V2 (with <|endofex|>)" \ + --num_samples 500 \ + --dataset_repo_id augustocsc/sintetico_natural \ + --data_dir 700K \ + --data_column i_prompt_n \ + --output_dir ./evaluation_results/comparison + +echo "" +echo "==========================================" +echo "Evaluation Complete!" 
+echo "==========================================" +echo "Results saved to: ./evaluation_results/comparison" +echo "Log file: $LOG_FILE" +echo "" +echo "To view results:" +echo " cat ./evaluation_results/comparison/comparison_*.json | jq" +echo "" diff --git a/scripts/aws/launch_evaluation_instance.sh b/scripts/aws/launch_evaluation_instance.sh new file mode 100644 index 0000000000000000000000000000000000000000..992c55ef2e1341e124867f840aff16364c07feb2 --- /dev/null +++ b/scripts/aws/launch_evaluation_instance.sh @@ -0,0 +1,299 @@ +#!/bin/bash +# Script to launch AWS instance for model evaluation +# Evaluates two models: original (Se124M_700K_infix) vs v2 (with end token) +# Usage: ./launch_evaluation_instance.sh [--hf-token TOKEN] + +set -e + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +RED='\033[0;31m' +BLUE='\033[0;34m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[INFO]${NC} $1"; } +print_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; } +print_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +# Default configuration +INSTANCE_TYPE="g5.xlarge" +AMI_ID="" +KEY_NAME="" +SECURITY_GROUP="" +REGION=$(aws configure get region 2>/dev/null || echo "us-east-1") +VOLUME_SIZE=80 +INSTANCE_NAME="seriguela-evaluation" +HF_TOKEN="" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --hf-token) HF_TOKEN="$2"; shift 2;; + --instance-type) INSTANCE_TYPE="$2"; shift 2;; + --key-name) KEY_NAME="$2"; shift 2;; + --help) + echo "Usage: $0 [OPTIONS]" + echo "Options:" + echo " --hf-token TOKEN HuggingFace token (optional, for accessing models)" + echo " --instance-type TYPE Instance type (default: g5.xlarge)" + echo " --key-name NAME SSH key pair name" + echo "" + echo "Example:" + echo " $0 --hf-token hf_xxx" + exit 0;; + *) echo "Unknown option: $1"; exit 1;; + esac +done + +if [ -z "$HF_TOKEN" ]; then + print_warning "HuggingFace token not provided. Public models will still work." 
+ print_warning "Get your token from: https://huggingface.co/settings/tokens" +fi + +print_status "Launching Seriguela evaluation instance..." + +# Find Deep Learning AMI +print_status "Finding Deep Learning AMI..." +AMI_ID=$(aws ec2 describe-images \ + --owners amazon \ + --filters "Name=name,Values=*Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 22.04)*" \ + --query "Images | sort_by(@, &CreationDate) | [-1].ImageId" \ + --output text) + +if [ -z "$AMI_ID" ] || [ "$AMI_ID" == "None" ]; then + print_error "Could not find Deep Learning AMI" + exit 1 +fi +print_status "Using AMI: $AMI_ID" + +# Find or select key pair +if [ -z "$KEY_NAME" ]; then + KEY_NAME=$(aws ec2 describe-key-pairs --query "KeyPairs[0].KeyName" --output text 2>/dev/null) +fi +if [ -z "$KEY_NAME" ] || [ "$KEY_NAME" == "None" ]; then + print_error "No SSH key pair found. Create one first or specify with --key-name" + exit 1 +fi +print_status "Using key pair: $KEY_NAME" + +# Find or create security group +SECURITY_GROUP=$(aws ec2 describe-security-groups \ + --filters "Name=group-name,Values=seriguela-sg" \ + --query "SecurityGroups[0].GroupId" \ + --output text 2>/dev/null) + +if [ -z "$SECURITY_GROUP" ] || [ "$SECURITY_GROUP" == "None" ]; then + print_status "Creating security group..." 
+ SECURITY_GROUP=$(aws ec2 create-security-group \ + --group-name seriguela-sg \ + --description "Security group for Seriguela" \ + --query "GroupId" --output text) + + # Get current IP and add SSH rule + MY_IP=$(curl -s ifconfig.me) + aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP" \ + --protocol tcp --port 22 \ + --cidr "${MY_IP}/32" + print_status "Created security group with SSH access from $MY_IP" +else + # Update security group with current IP + MY_IP=$(curl -s ifconfig.me) + aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP" \ + --protocol tcp --port 22 \ + --cidr "${MY_IP}/32" 2>/dev/null || true +fi +print_status "Using security group: $SECURITY_GROUP" + +# Create user-data script for automatic setup +USER_DATA=$(cat << 'USERDATA' +#!/bin/bash +exec > /var/log/user-data.log 2>&1 +set -x + +echo "==========================================" +echo "Seriguela Evaluation Instance Setup" +echo "Started: $(date)" +echo "==========================================" + +# Wait for cloud-init to complete +cloud-init status --wait + +# Setup as ubuntu user +sudo -u ubuntu bash << 'UBUNTUSETUP' +cd /home/ubuntu + +echo "[1/7] Installing system dependencies..." +sudo apt-get update -qq +sudo apt-get install -y -qq python3-venv python3-pip git jq + +echo "[2/7] Cloning repository..." +git clone https://github.com/augustocsc/seriguela.git +cd seriguela + +echo "[3/7] Creating virtual environment..." +python3 -m venv venv +source venv/bin/activate + +echo "[4/7] Upgrading pip..." +pip install --upgrade pip -q + +echo "[5/7] Installing requirements..." +pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 -q + +echo "[6/7] Testing setup..." 
+python3 << 'PYCHECK' +import sys +print("Testing imports...") +try: + import transformers + print(f"✅ transformers {transformers.__version__}") + import torch + print(f"✅ torch {torch.__version__}") + print(f"✅ CUDA available: {torch.cuda.is_available()}") + import peft + print(f"✅ peft {peft.__version__}") + import datasets + print(f"✅ datasets {datasets.__version__}") +except ImportError as e: + print(f"❌ Import failed: {e}") + sys.exit(1) +PYCHECK + +if [ $? -ne 0 ]; then + echo "❌ Package validation failed" + exit 1 +fi + +echo "[7/7] Checking GPU..." +if nvidia-smi &> /dev/null; then + echo "✅ GPU detected:" + nvidia-smi --query-gpu=name,memory.total --format=csv,noheader +else + echo "⚠️ No GPU detected (will be slower)" +fi + +# Configure HuggingFace token if provided +if [ -n "$HF_TOKEN" ]; then + echo "Configuring HuggingFace authentication..." + mkdir -p ~/.cache/huggingface + echo "$HF_TOKEN" > ~/.cache/huggingface/token + echo "✅ HuggingFace token configured" +fi + +# Make evaluation script executable +chmod +x ~/seriguela/scripts/aws/evaluate_models.sh + +# Create completion marker +touch /home/ubuntu/.setup_complete + +# Create info file +cat > /home/ubuntu/setup_info.txt << 'INFOFILE' +Seriguela Evaluation Instance - Ready! + +Setup completed successfully: +- Python packages installed +- GPU available (if supported) +- Repository cloned and configured + +To run the evaluation: + cd ~/seriguela + source venv/bin/activate + bash scripts/aws/evaluate_models.sh + +This will compare: + - Model 1: augustocsc/Se124M_700K_infix (original) + - Model 2: augustocsc/Se124M_700K_infix_v2 (with <|endofex|> token) + +On 500 test samples to evaluate if the ending token improves generation stopping. +INFOFILE + +echo "" +echo "==========================================" +echo "✅ Setup Complete!" 
+echo "Finished: $(date)" +echo "==========================================" +cat ~/setup_info.txt + +UBUNTUSETUP + +echo "User-data script completed" +USERDATA +) + +# Replace HF_TOKEN placeholder +USER_DATA="${USER_DATA//\$HF_TOKEN/$HF_TOKEN}" + +# Launch instance +print_status "Launching instance..." +INSTANCE_ID=$(aws ec2 run-instances \ + --image-id "$AMI_ID" \ + --instance-type "$INSTANCE_TYPE" \ + --key-name "$KEY_NAME" \ + --security-group-ids "$SECURITY_GROUP" \ + --block-device-mappings "[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"VolumeSize\":$VOLUME_SIZE,\"VolumeType\":\"gp3\"}}]" \ + --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$INSTANCE_NAME},{Key=Project,Value=seriguela},{Key=Purpose,Value=evaluation}]" \ + --user-data "$USER_DATA" \ + --query "Instances[0].InstanceId" \ + --output text) + +print_status "Instance launched: $INSTANCE_ID" + +# Wait for instance to be running +print_status "Waiting for instance to start..." +aws ec2 wait instance-running --instance-ids "$INSTANCE_ID" + +# Get public IP +PUBLIC_IP=$(aws ec2 describe-instances \ + --instance-ids "$INSTANCE_ID" \ + --query "Reservations[0].Instances[0].PublicIpAddress" \ + --output text) + +echo "" +echo "==========================================" +echo -e "${GREEN}Instance Ready!${NC}" +echo "==========================================" +echo "Instance ID: $INSTANCE_ID" +echo "Public IP: $PUBLIC_IP" +echo "Key Pair: $KEY_NAME" +echo "" +echo -e "${BLUE}Connect with:${NC}" +echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP}" +echo "" +echo -e "${BLUE}Check setup progress:${NC}" +echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP} 'tail -f /var/log/user-data.log'" +echo "" +echo -e "${BLUE}Wait for setup to complete (takes ~5-10 minutes):${NC}" +echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP} 'while [ ! 
-f ~/.setup_complete ]; do sleep 10; echo \"Setup in progress...\"; done; echo \"✅ Setup complete!\"; cat ~/setup_info.txt'" +echo "" +echo -e "${BLUE}Then run evaluation:${NC}" +echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP} 'cd seriguela && source venv/bin/activate && bash scripts/aws/evaluate_models.sh'" +echo "" +echo -e "${BLUE}Or run in one command:${NC}" +echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP} 'cd seriguela && source venv/bin/activate && nohup bash scripts/aws/evaluate_models.sh > evaluation.log 2>&1 &'" +echo "" +echo -e "${YELLOW}IMPORTANT:${NC} Remember to stop the instance when done:" +echo " aws ec2 stop-instances --instance-ids $INSTANCE_ID" +echo "" + +# Save instance info +INFO_DIR="${HOME}/.seriguela" +mkdir -p "$INFO_DIR" +echo "$INSTANCE_ID" > "$INFO_DIR/last_evaluation_instance_id.txt" +echo "$PUBLIC_IP" > "$INFO_DIR/last_evaluation_instance_ip.txt" +echo "$KEY_NAME" > "$INFO_DIR/last_evaluation_key_name.txt" + +cat > "$INFO_DIR/last_evaluation_instance_info.txt" << INFOEND +Instance ID: $INSTANCE_ID +Public IP: $PUBLIC_IP +Key Name: $KEY_NAME +Instance Type: $INSTANCE_TYPE +Region: $REGION +Launched: $(date) +Purpose: Model Evaluation (v1 vs v2) +INFOEND + +print_status "Instance info saved to: $INFO_DIR/" +echo "" diff --git a/scripts/aws/launch_instance.sh b/scripts/aws/launch_instance.sh new file mode 100644 index 0000000000000000000000000000000000000000..3ec11f1889f16f038985656aca680c1db7a9dbf7 --- /dev/null +++ b/scripts/aws/launch_instance.sh @@ -0,0 +1,196 @@ +#!/bin/bash +# Script to launch and configure AWS g5.xlarge instance for Seriguela training +# Usage: ./launch_instance.sh [--hf-token TOKEN] [--wandb-key KEY] + +set -e + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +RED='\033[0;31m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[INFO]${NC} $1"; } +print_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; } +print_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +# Default configuration 
+INSTANCE_TYPE="g5.xlarge" +AMI_ID="" # Will be auto-detected +KEY_NAME="" # Will be auto-detected +SECURITY_GROUP="" # Will be auto-detected or created +REGION=$(aws configure get region 2>/dev/null || echo "us-east-1") +VOLUME_SIZE=100 +INSTANCE_NAME="seriguela-training" +HF_TOKEN="" +WANDB_KEY="" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --hf-token) HF_TOKEN="$2"; shift 2;; + --wandb-key) WANDB_KEY="$2"; shift 2;; + --instance-type) INSTANCE_TYPE="$2"; shift 2;; + --key-name) KEY_NAME="$2"; shift 2;; + --help) + echo "Usage: $0 [OPTIONS]" + echo "Options:" + echo " --hf-token TOKEN HuggingFace token" + echo " --wandb-key KEY Wandb API key" + echo " --instance-type TYPE Instance type (default: g5.xlarge)" + echo " --key-name NAME SSH key pair name" + exit 0;; + *) echo "Unknown option: $1"; exit 1;; + esac +done + +print_status "Launching Seriguela training instance..." + +# Find Deep Learning AMI +print_status "Finding Deep Learning AMI..." +AMI_ID=$(aws ec2 describe-images \ + --owners amazon \ + --filters "Name=name,Values=*Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 22.04)*" \ + --query "Images | sort_by(@, &CreationDate) | [-1].ImageId" \ + --output text) + +if [ -z "$AMI_ID" ] || [ "$AMI_ID" == "None" ]; then + print_error "Could not find Deep Learning AMI" + exit 1 +fi +print_status "Using AMI: $AMI_ID" + +# Find or select key pair +if [ -z "$KEY_NAME" ]; then + KEY_NAME=$(aws ec2 describe-key-pairs --query "KeyPairs[0].KeyName" --output text 2>/dev/null) +fi +if [ -z "$KEY_NAME" ] || [ "$KEY_NAME" == "None" ]; then + print_error "No SSH key pair found. 
Create one first or specify with --key-name" + exit 1 +fi +print_status "Using key pair: $KEY_NAME" + +# Find or create security group +SECURITY_GROUP=$(aws ec2 describe-security-groups \ + --filters "Name=group-name,Values=seriguela-sg" \ + --query "SecurityGroups[0].GroupId" \ + --output text 2>/dev/null) + +if [ -z "$SECURITY_GROUP" ] || [ "$SECURITY_GROUP" == "None" ]; then + print_status "Creating security group..." + SECURITY_GROUP=$(aws ec2 create-security-group \ + --group-name seriguela-sg \ + --description "Security group for Seriguela training" \ + --query "GroupId" --output text) + + # Get current IP and add SSH rule + MY_IP=$(curl -s ifconfig.me) + aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP" \ + --protocol tcp --port 22 \ + --cidr "${MY_IP}/32" + print_status "Created security group with SSH access from $MY_IP" +else + # Update security group with current IP + MY_IP=$(curl -s ifconfig.me) + aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP" \ + --protocol tcp --port 22 \ + --cidr "${MY_IP}/32" 2>/dev/null || true +fi +print_status "Using security group: $SECURITY_GROUP" + +# Create user-data script for automatic setup +USER_DATA=$(cat << 'USERDATA' +#!/bin/bash +exec > /var/log/user-data.log 2>&1 +set -x + +# Wait for cloud-init to complete +cloud-init status --wait + +# Setup as ubuntu user +sudo -u ubuntu bash << 'UBUNTUSETUP' +cd /home/ubuntu + +# Install dependencies +sudo apt-get update -qq +sudo apt-get install -y -qq python3-venv python3-pip git + +# Clone repository +git clone https://github.com/augustocsc/seriguela.git +cd seriguela + +# Create virtual environment +python3 -m venv venv +source venv/bin/activate + +# Install requirements +pip install --upgrade pip -q +pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 -q + +# Create marker file to indicate setup complete +touch /home/ubuntu/.setup_complete +UBUNTUSETUP +USERDATA +) + +# Add tokens to 
user-data if provided
+if [ -n "$HF_TOKEN" ] || [ -n "$WANDB_KEY" ]; then
+    TOKEN_SETUP="
+# Configure tokens (this appended block runs as root in user-data,
+# so hand the secrets file back to the ubuntu user and lock it down)
+cd /home/ubuntu/seriguela
+echo 'HF_TOKEN=$HF_TOKEN' > .env
+echo 'WANDB_API_KEY=$WANDB_KEY' >> .env
+chown ubuntu:ubuntu .env
+chmod 600 .env
+"
+    USER_DATA="${USER_DATA}${TOKEN_SETUP}"
+fi
+
+# Launch instance
+print_status "Launching instance..."
+INSTANCE_ID=$(aws ec2 run-instances \
+    --image-id "$AMI_ID" \
+    --instance-type "$INSTANCE_TYPE" \
+    --key-name "$KEY_NAME" \
+    --security-group-ids "$SECURITY_GROUP" \
+    --block-device-mappings "[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"VolumeSize\":$VOLUME_SIZE,\"VolumeType\":\"gp3\"}}]" \
+    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$INSTANCE_NAME}]" \
+    --user-data "$USER_DATA" \
+    --query "Instances[0].InstanceId" \
+    --output text)
+
+print_status "Instance launched: $INSTANCE_ID"
+
+# Wait for instance to be running
+print_status "Waiting for instance to start..."
+aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
+
+# Get public IP
+PUBLIC_IP=$(aws ec2 describe-instances \
+    --instance-ids "$INSTANCE_ID" \
+    --query "Reservations[0].Instances[0].PublicIpAddress" \
+    --output text)
+
+echo ""
+echo "=========================================="
+echo -e "${GREEN}Instance Ready!${NC}"
+echo "=========================================="
+echo "Instance ID: $INSTANCE_ID"
+echo "Public IP: $PUBLIC_IP"
+echo ""
+echo "Connect with:"
+echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP}"
+echo ""
+echo "Check setup progress:"
+echo " ssh ubuntu@${PUBLIC_IP} 'tail -f /var/log/user-data.log'"
+echo ""
+echo "Wait for setup to complete (check for .setup_complete):"
+echo " ssh ubuntu@${PUBLIC_IP} 'while [ !
-f ~/.setup_complete ]; do sleep 10; done; echo Done!'" +echo "" +echo "Then run training:" +echo " ssh ubuntu@${PUBLIC_IP} 'cd seriguela && source venv/bin/activate && bash scripts/aws/run_all_training.sh'" +echo "" + +# Save instance info +echo "$INSTANCE_ID" > /tmp/seriguela_instance_id.txt +echo "$PUBLIC_IP" > /tmp/seriguela_instance_ip.txt diff --git a/scripts/aws/launch_instance_fixed.sh b/scripts/aws/launch_instance_fixed.sh new file mode 100644 index 0000000000000000000000000000000000000000..f2f9b25332402d74e9ae1053deae85fccde951a8 --- /dev/null +++ b/scripts/aws/launch_instance_fixed.sh @@ -0,0 +1,371 @@ +#!/bin/bash +# Script to launch and configure AWS g5.xlarge instance for Seriguela training +# FIXED VERSION - Includes Wandb validation and proper setup +# Usage: ./launch_instance_fixed.sh [--hf-token TOKEN] [--wandb-key KEY] + +set -e + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +RED='\033[0;31m' +BLUE='\033[0;34m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[INFO]${NC} $1"; } +print_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; } +print_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +# Default configuration +INSTANCE_TYPE="g5.xlarge" +AMI_ID="" # Will be auto-detected +KEY_NAME="" # Will be auto-detected +SECURITY_GROUP="" # Will be auto-detected or created +REGION=$(aws configure get region 2>/dev/null || echo "us-east-1") +VOLUME_SIZE=100 +INSTANCE_NAME="seriguela-training" +HF_TOKEN="" +WANDB_KEY="" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --hf-token) HF_TOKEN="$2"; shift 2;; + --wandb-key) WANDB_KEY="$2"; shift 2;; + --instance-type) INSTANCE_TYPE="$2"; shift 2;; + --key-name) KEY_NAME="$2"; shift 2;; + --help) + echo "Usage: $0 [OPTIONS]" + echo "Options:" + echo " --hf-token TOKEN HuggingFace token (required for push to hub)" + echo " --wandb-key KEY Wandb API key (required for logging)" + echo " --instance-type TYPE Instance type (default: g5.xlarge)" + echo " --key-name NAME SSH key pair name" + echo "" + 
echo "Example:" + echo " $0 --hf-token hf_xxx --wandb-key wandb_v1_xxx" + exit 0;; + *) echo "Unknown option: $1"; exit 1;; + esac +done + +# Validate required tokens +if [ -z "$WANDB_KEY" ]; then + print_error "Wandb API key is required! Use --wandb-key" + print_warning "Get your key from: https://wandb.ai/authorize" + exit 1 +fi + +if [ -z "$HF_TOKEN" ]; then + print_warning "HuggingFace token not provided. Model won't be pushed to Hub." + print_warning "Get your token from: https://huggingface.co/settings/tokens" +fi + +print_status "Launching Seriguela training instance with validated setup..." + +# Find Deep Learning AMI +print_status "Finding Deep Learning AMI..." +AMI_ID=$(aws ec2 describe-images \ + --owners amazon \ + --filters "Name=name,Values=*Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 22.04)*" \ + --query "Images | sort_by(@, &CreationDate) | [-1].ImageId" \ + --output text) + +if [ -z "$AMI_ID" ] || [ "$AMI_ID" == "None" ]; then + print_error "Could not find Deep Learning AMI" + exit 1 +fi +print_status "Using AMI: $AMI_ID" + +# Find or select key pair +if [ -z "$KEY_NAME" ]; then + KEY_NAME=$(aws ec2 describe-key-pairs --query "KeyPairs[0].KeyName" --output text 2>/dev/null) +fi +if [ -z "$KEY_NAME" ] || [ "$KEY_NAME" == "None" ]; then + print_error "No SSH key pair found. Create one first or specify with --key-name" + exit 1 +fi +print_status "Using key pair: $KEY_NAME" + +# Find or create security group +SECURITY_GROUP=$(aws ec2 describe-security-groups \ + --filters "Name=group-name,Values=seriguela-sg" \ + --query "SecurityGroups[0].GroupId" \ + --output text 2>/dev/null) + +if [ -z "$SECURITY_GROUP" ] || [ "$SECURITY_GROUP" == "None" ]; then + print_status "Creating security group..." 
+ SECURITY_GROUP=$(aws ec2 create-security-group \ + --group-name seriguela-sg \ + --description "Security group for Seriguela training" \ + --query "GroupId" --output text) + + # Get current IP and add SSH rule + MY_IP=$(curl -s ifconfig.me) + aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP" \ + --protocol tcp --port 22 \ + --cidr "${MY_IP}/32" + print_status "Created security group with SSH access from $MY_IP" +else + # Update security group with current IP + MY_IP=$(curl -s ifconfig.me) + aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP" \ + --protocol tcp --port 22 \ + --cidr "${MY_IP}/32" 2>/dev/null || true +fi +print_status "Using security group: $SECURITY_GROUP" + +# Create user-data script for automatic setup with validation +USER_DATA=$(cat << USERDATA +#!/bin/bash +exec > /var/log/user-data.log 2>&1 +set -x + +echo "==========================================" +echo "Seriguela Instance Setup - VALIDATED" +echo "Started: \$(date)" +echo "==========================================" + +# Wait for cloud-init to complete +cloud-init status --wait + +# Setup as ubuntu user +sudo -u ubuntu bash << 'UBUNTUSETUP' +cd /home/ubuntu + +echo "[1/8] Installing system dependencies..." +sudo apt-get update -qq +sudo apt-get install -y -qq python3-venv python3-pip git dos2unix + +echo "[2/8] Cloning repository..." +git clone https://github.com/augustocsc/seriguela.git +cd seriguela + +echo "[3/8] Creating virtual environment..." +python3 -m venv venv +source venv/bin/activate + +echo "[4/8] Upgrading pip..." +pip install --upgrade pip -q + +echo "[5/8] Installing requirements..." +pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 -q + +echo "[6/8] Upgrading Wandb to latest version..." +pip install --upgrade 'wandb>=0.24.1' -q + +echo "[7/8] Configuring environment..." 
+# Create .env file +cat > .env << 'ENVFILE' +HF_TOKEN=$HF_TOKEN +WANDB_API_KEY=$WANDB_KEY +ENVFILE + +echo "[8/8] Validating setup..." + +# Validate Python packages +python3 << 'PYCHECK' +import sys +print("Testing imports...") +try: + import transformers + print(f"✅ transformers {transformers.__version__}") + import torch + print(f"✅ torch {torch.__version__}") + import wandb + print(f"✅ wandb {wandb.__version__}") + import peft + print(f"✅ peft {peft.__version__}") +except ImportError as e: + print(f"❌ Import failed: {e}") + sys.exit(1) +PYCHECK + +if [ \$? -ne 0 ]; then + echo "❌ Package validation failed" + exit 1 +fi + +# Validate GPU +echo "Checking GPU..." +if nvidia-smi &> /dev/null; then + echo "✅ GPU detected:" + nvidia-smi --query-gpu=name,memory.total --format=csv,noheader +else + echo "❌ No GPU detected" + exit 1 +fi + +# Validate Wandb authentication +if [ -n "$WANDB_KEY" ]; then + echo "Validating Wandb authentication..." + python3 << PYVALIDATE +import wandb +import os +try: + result = wandb.login(key='$WANDB_KEY') + if result: + print("✅ Wandb authentication successful") + # Get user info + import requests + response = requests.get('https://api.wandb.ai/graphql', + headers={'Authorization': f'Bearer $WANDB_KEY'}, + json={'query': '{viewer{entity}}'}) + if response.status_code == 200: + print(f" Logged in to Wandb") + else: + print("❌ Wandb authentication failed") + exit(1) +except Exception as e: + print(f"❌ Wandb validation error: {e}") + exit(1) +PYVALIDATE + + if [ \$? -ne 0 ]; then + echo "❌ Wandb authentication failed" + exit 1 + fi +else + echo "⚠️ No Wandb key provided - skipping validation" +fi + +# Validate HuggingFace token +if [ -n "$HF_TOKEN" ]; then + echo "Validating HuggingFace authentication..." 
+ python3 << PYVALIDATE +from huggingface_hub import HfApi +try: + api = HfApi(token='$HF_TOKEN') + user = api.whoami() + print(f"✅ HuggingFace authentication successful") + print(f" Logged in as: {user.get('name', 'unknown')}") +except Exception as e: + print(f"❌ HuggingFace validation error: {e}") + exit(1) +PYVALIDATE + + if [ \$? -ne 0 ]; then + echo "❌ HuggingFace authentication failed" + exit 1 + fi +else + echo "⚠️ No HuggingFace token provided - model won't be pushed to Hub" +fi + +# All validations passed +echo "" +echo "==========================================" +echo "✅ Setup Complete and Validated!" +echo "Finished: \$(date)" +echo "==========================================" + +# Create completion markers +touch /home/ubuntu/.setup_complete +touch /home/ubuntu/.setup_validated + +# Create info file +cat > /home/ubuntu/setup_info.txt << 'INFOFILE' +Setup completed successfully! + +Validated: +- Python packages installed +- GPU detected +- Wandb authenticated +- HuggingFace authenticated (if token provided) + +Ready to train! + +Quick commands: + cd ~/seriguela + source venv/bin/activate + python scripts/train.py --help + +Monitor scripts: + bash scripts/aws/monitor_training_auto.sh +INFOFILE + +echo "Setup info saved to ~/setup_info.txt" +UBUNTUSETUP + +# End of setup +echo "User-data script completed" +USERDATA +) + +# Replace placeholder tokens in user-data +USER_DATA="${USER_DATA//\$HF_TOKEN/$HF_TOKEN}" +USER_DATA="${USER_DATA//\$WANDB_KEY/$WANDB_KEY}" + +# Launch instance +print_status "Launching instance..." 
+INSTANCE_ID=$(aws ec2 run-instances \ + --image-id "$AMI_ID" \ + --instance-type "$INSTANCE_TYPE" \ + --key-name "$KEY_NAME" \ + --security-group-ids "$SECURITY_GROUP" \ + --block-device-mappings "[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"VolumeSize\":$VOLUME_SIZE,\"VolumeType\":\"gp3\"}}]" \ + --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$INSTANCE_NAME},{Key=Project,Value=seriguela},{Key=AutoSetup,Value=validated}]" \ + --user-data "$USER_DATA" \ + --query "Instances[0].InstanceId" \ + --output text) + +print_status "Instance launched: $INSTANCE_ID" + +# Wait for instance to be running +print_status "Waiting for instance to start..." +aws ec2 wait instance-running --instance-ids "$INSTANCE_ID" + +# Get public IP +PUBLIC_IP=$(aws ec2 describe-instances \ + --instance-ids "$INSTANCE_ID" \ + --query "Reservations[0].Instances[0].PublicIpAddress" \ + --output text) + +echo "" +echo "==========================================" +echo -e "${GREEN}Instance Ready!${NC}" +echo "==========================================" +echo "Instance ID: $INSTANCE_ID" +echo "Public IP: $PUBLIC_IP" +echo "Key Pair: $KEY_NAME" +echo "" +echo -e "${BLUE}Connect with:${NC}" +echo " ssh -i ~/.ssh/${KEY_NAME}.pem ubuntu@${PUBLIC_IP}" +echo "" +echo -e "${BLUE}Check setup progress:${NC}" +echo " ssh ubuntu@${PUBLIC_IP} 'tail -f /var/log/user-data.log'" +echo "" +echo -e "${BLUE}Wait for VALIDATED setup to complete:${NC}" +echo " ssh ubuntu@${PUBLIC_IP} 'while [ ! 
-f ~/.setup_validated ]; do sleep 10; echo \"Setup in progress...\"; done; echo \"✅ Setup validated!\"; cat ~/setup_info.txt'" +echo "" +echo -e "${BLUE}Then run training:${NC}" +echo " ssh ubuntu@${PUBLIC_IP} 'cd seriguela && source venv/bin/activate && bash scripts/aws/run_all_training.sh'" +echo "" +echo -e "${YELLOW}Setup includes:${NC}" +echo " ✅ Wandb 0.24.1+ with authentication test" +echo " ✅ HuggingFace authentication test" +echo " ✅ GPU validation" +echo " ✅ All packages validated" +echo "" + +# Save instance info +INFO_DIR="${HOME}/.seriguela" +mkdir -p "$INFO_DIR" +echo "$INSTANCE_ID" > "$INFO_DIR/last_instance_id.txt" +echo "$PUBLIC_IP" > "$INFO_DIR/last_instance_ip.txt" +echo "$KEY_NAME" > "$INFO_DIR/last_key_name.txt" + +cat > "$INFO_DIR/last_instance_info.txt" << INFOEND +Instance ID: $INSTANCE_ID +Public IP: $PUBLIC_IP +Key Name: $KEY_NAME +Instance Type: $INSTANCE_TYPE +Region: $REGION +Launched: $(date) +Setup: Validated (Wandb + HF + GPU) +INFOEND + +print_status "Instance info saved to: $INFO_DIR/" +echo "" diff --git a/scripts/aws/monitor_evaluation.sh b/scripts/aws/monitor_evaluation.sh new file mode 100644 index 0000000000000000000000000000000000000000..723c70287829d8f3ea7862ce568f44f7fe56fa2f --- /dev/null +++ b/scripts/aws/monitor_evaluation.sh @@ -0,0 +1,116 @@ +#!/bin/bash +# Script to monitor evaluation progress and download results +# Usage: bash scripts/aws/monitor_evaluation.sh [PUBLIC_IP] + +set -e + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[INFO]${NC} $1"; } +print_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; } + +# Get IP from argument or saved info +if [ -n "$1" ]; then + PUBLIC_IP="$1" +else + INFO_DIR="${HOME}/.seriguela" + if [ -f "$INFO_DIR/last_evaluation_instance_ip.txt" ]; then + PUBLIC_IP=$(cat "$INFO_DIR/last_evaluation_instance_ip.txt") + print_status "Using saved IP: $PUBLIC_IP" + else + echo "Error: No IP provided and no saved IP found." 
+    echo "Usage: $0 <PUBLIC_IP>" +        exit 1 +    fi +fi + +# Get key name +INFO_DIR="${HOME}/.seriguela" +if [ -f "$INFO_DIR/last_evaluation_key_name.txt" ]; then +    KEY_NAME=$(cat "$INFO_DIR/last_evaluation_key_name.txt") +else +    KEY_NAME=$(aws ec2 describe-key-pairs --query "KeyPairs[0].KeyName" --output text 2>/dev/null) +fi + +SSH_CMD="ssh -i ~/.ssh/${KEY_NAME}.pem -o StrictHostKeyChecking=no ubuntu@${PUBLIC_IP}" + +echo "==========================================" +echo "Monitoring Evaluation" +echo "==========================================" +echo "Instance: $PUBLIC_IP" +echo "Key: $KEY_NAME" +echo "" + +# Check if setup is complete +print_status "Checking setup status..." +if $SSH_CMD 'test -f ~/.setup_complete'; then +    print_status "✅ Setup complete" +else +    print_warning "Setup still in progress. Waiting..." +    $SSH_CMD 'while [ ! -f ~/.setup_complete ]; do sleep 5; done; echo "Setup complete!"' +fi + +echo "" +echo "==========================================" +echo "Evaluation Progress" +echo "==========================================" +echo "Press Ctrl+C to stop monitoring (evaluation will continue)" +echo "" + +# Check if evaluation has started (use ls for the glob: test -f errors out if more than one log matches) +if $SSH_CMD 'ls ~/seriguela/evaluation_*.log >/dev/null 2>&1'; then +    print_status "Evaluation in progress. Showing logs..." +    echo "" +    $SSH_CMD 'tail -f ~/seriguela/evaluation_*.log' || true +else +    print_warning "Evaluation hasn't started yet."
+ echo "" + echo "To start evaluation, run:" + echo " $SSH_CMD 'cd seriguela && source venv/bin/activate && bash scripts/aws/evaluate_models.sh'" + echo "" + echo "Or run in background:" + echo " $SSH_CMD 'cd seriguela && source venv/bin/activate && nohup bash scripts/aws/evaluate_models.sh > evaluation.log 2>&1 &'" +fi + +echo "" +echo "==========================================" +echo "Download Results" +echo "==========================================" +echo "" + +# Download results if available +if $SSH_CMD 'test -d ~/seriguela/evaluation_results/comparison'; then + print_status "Downloading results..." + + # Create local directory + mkdir -p ./evaluation_results/comparison + + # Download results + scp -i ~/.ssh/${KEY_NAME}.pem -o StrictHostKeyChecking=no -r \ + ubuntu@${PUBLIC_IP}:~/seriguela/evaluation_results/comparison/* \ + ./evaluation_results/comparison/ 2>/dev/null || true + + # Download log files + scp -i ~/.ssh/${KEY_NAME}.pem -o StrictHostKeyChecking=no \ + ubuntu@${PUBLIC_IP}:~/seriguela/evaluation_*.log \ + ./evaluation_results/ 2>/dev/null || true + + print_status "Results downloaded to: ./evaluation_results/" + echo "" + + # Show latest comparison + LATEST_COMPARISON=$(ls -t ./evaluation_results/comparison/comparison_*.json 2>/dev/null | head -1) + if [ -n "$LATEST_COMPARISON" ]; then + echo "Latest comparison results:" + echo "" + cat "$LATEST_COMPARISON" | jq '.comparison' 2>/dev/null || cat "$LATEST_COMPARISON" + fi +else + print_warning "No results available yet." 
+fi + +echo "" diff --git a/scripts/aws/monitor_training_auto.sh b/scripts/aws/monitor_training_auto.sh new file mode 100644 index 0000000000000000000000000000000000000000..3fd7286b84a73ddb32170a84e1c3d3d7fc183c54 --- /dev/null +++ b/scripts/aws/monitor_training_auto.sh @@ -0,0 +1,179 @@ +#!/bin/bash +# Automatic Training Monitor and Notifier +# Monitors training process and runs analysis when complete + +set -e + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +RED='\033[0;31m' +BLUE='\033[0;34m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[$(date '+%H:%M:%S')]${NC} $1"; } +print_warning() { echo -e "${YELLOW}[$(date '+%H:%M:%S')]${NC} $1"; } +print_error() { echo -e "${RED}[$(date '+%H:%M:%S')]${NC} $1"; } +print_header() { echo -e "\n${BLUE}========================================\n$1\n========================================${NC}\n"; } + +# Configuration +PROJECT_DIR="/home/ubuntu/seriguela" +LOG_FILE="$HOME/training_success.log" +MONITOR_LOG="$HOME/monitor_output.log" +TRAINING_PID="" +CHECK_INTERVAL=60 # Check every 60 seconds +MODEL_PATH="./output/Se124M_700K_infix" +DATASET_REPO="augustocsc/sintetico_natural" +DATA_DIR="700K" +DATA_COLUMN="i_prompt_n" + +cd "$PROJECT_DIR" +source venv/bin/activate + +# Get training PID +get_training_pid() { + TRAINING_PID=$(ps aux | grep "python scripts/train.py" | grep -v grep | awk '{print $2}') +} + +# Check if training is running +is_training_running() { + get_training_pid + if [ -z "$TRAINING_PID" ]; then + return 1 + else + return 0 + fi +} + +# Get training progress from log +get_progress() { + if [ -f "$LOG_FILE" ]; then + # Get last progress line + tail -100 "$LOG_FILE" | grep -E "([0-9]+)%\|" | tail -1 | sed 's/.*\([0-9]\+\)%|.*/\1/' || echo "0" + else + echo "0" + fi +} + +# Get current epoch and step +get_training_stats() { + if [ -f "$LOG_FILE" ]; then + local last_line=$(tail -100 "$LOG_FILE" | grep -E "[0-9]+/21882" | tail -1) + echo "$last_line" + fi +} + +# Send notification (multiple methods) 
+send_notification() { + local title="$1" + local message="$2" + + print_header "$title" + echo "$message" + + # Save to notification file + cat > "$HOME/training_notification.txt" << EOF +================================================================================ +$title +$(date '+%Y-%m-%d %H:%M:%S') +================================================================================ + +$message + +================================================================================ +EOF + + print_status "Notification saved to: $HOME/training_notification.txt" +} + +# Monitor training +print_header "Training Monitor Started" +print_status "Monitoring training process..." +print_status "Log file: $LOG_FILE" +print_status "Check interval: ${CHECK_INTERVAL}s" + +START_TIME=$(date +%s) +LAST_PROGRESS=0 + +while true; do + if is_training_running; then + CURRENT_PROGRESS=$(get_progress) + TRAINING_STATS=$(get_training_stats) + + # Show progress every check + print_status "Training running (PID: $TRAINING_PID) - Progress: ${CURRENT_PROGRESS}%" + + if [ ! -z "$TRAINING_STATS" ]; then + echo " $TRAINING_STATS" + fi + + # Check GPU + GPU_INFO=$(nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader,nounits) + echo " GPU: $GPU_INFO" + + LAST_PROGRESS=$CURRENT_PROGRESS + sleep $CHECK_INTERVAL + else + # Training finished or crashed + END_TIME=$(date +%s) + DURATION=$((END_TIME - START_TIME)) + HOURS=$((DURATION / 3600)) + MINUTES=$(((DURATION % 3600) / 60)) + + print_header "Training Process Ended" + + # Check if training completed successfully + if grep -q "Training finished" "$LOG_FILE" 2>/dev/null || \ + grep -q "100%|" "$LOG_FILE" 2>/dev/null; then + + # SUCCESS - Training completed + print_status "Training completed successfully!" 
+ print_status "Total time: ${HOURS}h ${MINUTES}m" + + # Extract final metrics + FINAL_METRICS=$(tail -200 "$LOG_FILE" | grep -E "(train_loss|eval_loss)" | tail -5) + + send_notification "✅ Training Completed Successfully" \ +"Training Duration: ${HOURS}h ${MINUTES}m +Model: GPT-2 (124M) with LoRA +Dataset: 700K infix +Output: $MODEL_PATH + +Final Metrics: +$FINAL_METRICS + +Wandb Dashboard: +https://wandb.ai/symbolic-gression/seriguela_700K_test + +Starting automatic analysis... +" + + # Run automatic analysis + print_header "Starting Automatic Analysis" + bash "$PROJECT_DIR/scripts/aws/analyze_model.sh" "$MODEL_PATH" "$DATA_COLUMN" 2>&1 | tee "$HOME/analysis_output.log" + + print_status "Analysis complete! Check: $HOME/analysis_output.log" + + else + # FAILED - Training crashed or was killed + print_error "Training ended unexpectedly!" + + # Get last errors + ERRORS=$(tail -50 "$LOG_FILE" | grep -E "(Error|Exception|Traceback)" | head -10) + + send_notification "❌ Training Failed or Interrupted" \ +"Training Duration: ${HOURS}h ${MINUTES}m +Last Progress: ${LAST_PROGRESS}% + +Possible Errors: +$ERRORS + +Check full log: $LOG_FILE +" + fi + + break + fi +done + +print_status "Monitor finished. 
Check notification file: $HOME/training_notification.txt" diff --git a/scripts/aws/run_all_training.sh b/scripts/aws/run_all_training.sh new file mode 100644 index 0000000000000000000000000000000000000000..89013dc90c24e56c58fce6588510ba33264ba915 --- /dev/null +++ b/scripts/aws/run_all_training.sh @@ -0,0 +1,365 @@ +#!/bin/bash +# Complete training workflow for AWS g5.xlarge +# Seriguela project - train 6 GPT-2 models (3 sizes x 2 formats) + +set -e  # Exit on error + +echo "==========================================" +echo "Seriguela - Full Training Workflow" +echo "==========================================" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +print_status() { +    echo -e "${GREEN}[INFO]${NC} $1" +} + +print_warning() { +    echo -e "${YELLOW}[WARNING]${NC} $1" +} + +print_error() { +    echo -e "${RED}[ERROR]${NC} $1" +} + +print_header() { +    echo "" +    echo -e "${BLUE}==========================================" +    echo "$1" +    echo -e "==========================================${NC}" +    echo "" +} + +# Configuration +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_DIR="$(dirname "$(dirname "$SCRIPT_DIR")")" +cd "$PROJECT_DIR" + +# Check if virtual environment is activated +if [ -z "$VIRTUAL_ENV" ]; then +    print_warning "Virtual environment not activated. Activating..." +    source venv/bin/activate 2>/dev/null || { +        print_error "Could not activate virtual environment. Please run setup_aws.sh first." +        exit 1 +    } +fi + +# Check environment variables +if [ -z "$HF_TOKEN" ]; then +    print_error "HF_TOKEN not set. Please export HF_TOKEN='your_token'" +    exit 1 +fi + +# Check GPU +print_status "Checking GPU..." +nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv || { +    print_error "GPU not available!"
+ exit 1 +} + +# Dataset configuration +DATASET_REPO="augustocsc/sintetico_natural" +DATA_DIR="700K" +HF_USER="augustocsc" + +# Common training parameters +WANDB_PROJECT="seriguela_700K" +SEED=42 +BLOCK_SIZE=128 + +# Output directories +OUTPUT_BASE="./output" +EVAL_OUTPUT="./evaluation_results" +mkdir -p "$OUTPUT_BASE" "$EVAL_OUTPUT" + +# Training configurations +# Format: "model_name|epochs|batch_size|grad_accum|learning_rate|run_suffix" +declare -a CONFIGS=( + # GPT-2 Small (124M) + "gpt2|3|16|4|5e-5|Se124M" + # GPT-2 Medium (355M) + "gpt2-medium|2|8|8|3e-5|Se355M" + # GPT-2 Large (774M) + "gpt2-large|2|4|16|2e-5|Se774M" +) + +# Data columns for formats +declare -a DATA_COLUMNS=( + "i_prompt_n|infix" + "p_prompt_n|prefix" +) + +# Function to run training +run_training() { + local model_name=$1 + local epochs=$2 + local batch_size=$3 + local grad_accum=$4 + local lr=$5 + local run_suffix=$6 + local data_column=$7 + local format=$8 + + local run_name="${run_suffix}_${DATA_DIR}_${format}" + local output_dir="${OUTPUT_BASE}/${run_name}" + local hub_model_id="${HF_USER}/${run_name}" + + print_header "Training: $run_name" + echo "Model: $model_name" + echo "Epochs: $epochs" + echo "Batch size: $batch_size" + echo "Gradient accumulation: $grad_accum" + echo "Effective batch size: $((batch_size * grad_accum))" + echo "Learning rate: $lr" + echo "Data column: $data_column" + echo "Output: $output_dir" + echo "Hub ID: $hub_model_id" + echo "" + + # Run training + python scripts/train.py \ + --model_name_or_path "$model_name" \ + --dataset_repo_id "$DATASET_REPO" \ + --data_dir "$DATA_DIR" \ + --data_column "$data_column" \ + --approach "$format" \ + --output_dir "$output_dir" \ + --num_train_epochs "$epochs" \ + --per_device_train_batch_size "$batch_size" \ + --per_device_eval_batch_size "$batch_size" \ + --gradient_accumulation_steps "$grad_accum" \ + --learning_rate "$lr" \ + --weight_decay 0.01 \ + --warmup_steps 100 \ + --block_size "$BLOCK_SIZE" \ + --logging_steps 50 
\ + --eval_strategy epoch \ + --save_strategy epoch \ + --save_total_limit 2 \ + --load_best_model_at_end \ + --fp16 \ + --seed "$SEED" \ + --wandb_project "$WANDB_PROJECT" \ + --wandb_run_name "$run_name" \ + --push_to_hub \ + --hub_model_id "$hub_model_id" + + # Check if training was successful + if [ $? -eq 0 ]; then + print_status "Training completed successfully: $run_name" + return 0 + else + print_error "Training failed: $run_name" + return 1 + fi +} + +# Function to run evaluation +run_evaluation() { + local model_path=$1 + local data_column=$2 + local num_samples=${3:-500} + + print_status "Evaluating model: $model_path" + + python scripts/evaluate.py \ + --model_path "$model_path" \ + --dataset_repo_id "$DATASET_REPO" \ + --data_dir "$DATA_DIR" \ + --data_column "$data_column" \ + --num_samples "$num_samples" \ + --output_dir "$EVAL_OUTPUT" \ + --temperature 0.7 \ + --seed "$SEED" + + if [ $? -eq 0 ]; then + print_status "Evaluation completed: $model_path" + else + print_warning "Evaluation had issues: $model_path" + fi +} + +# Parse command line arguments +RUN_TEST=false +RUN_TRAINING=true +RUN_EVAL=true +SPECIFIC_MODEL="" + +while [[ $# -gt 0 ]]; do + case $1 in + --test-only) + RUN_TEST=true + RUN_TRAINING=false + RUN_EVAL=false + shift + ;; + --no-eval) + RUN_EVAL=false + shift + ;; + --eval-only) + RUN_TRAINING=false + RUN_EVAL=true + shift + ;; + --model) + SPECIFIC_MODEL="$2" + shift 2 + ;; + --help) + echo "Usage: $0 [OPTIONS]" + echo "" + echo "Options:" + echo " --test-only Run only the test training (1 epoch)" + echo " --no-eval Skip evaluation after training" + echo " --eval-only Run only evaluation (skip training)" + echo " --model NAME Train only specific model (gpt2, gpt2-medium, gpt2-large)" + echo " --help Show this help message" + exit 0 + ;; + *) + print_error "Unknown option: $1" + exit 1 + ;; + esac +done + +# Test run +if [ "$RUN_TEST" = true ]; then + print_header "Running Test Training (1 epoch with gpt2)" + + python 
scripts/train.py \ + --model_name_or_path gpt2 \ + --dataset_repo_id "$DATASET_REPO" \ + --data_dir "$DATA_DIR" \ + --data_column "i_prompt_n" \ + --approach "infix" \ + --output_dir "${OUTPUT_BASE}/test_run" \ + --num_train_epochs 1 \ + --per_device_train_batch_size 16 \ + --gradient_accumulation_steps 4 \ + --learning_rate 5e-5 \ + --block_size "$BLOCK_SIZE" \ + --logging_steps 20 \ + --eval_strategy epoch \ + --save_strategy epoch \ + --fp16 \ + --seed "$SEED" \ + --wandb_project "${WANDB_PROJECT}_test" + + print_status "Test training completed!" + print_status "Checklist:" + echo " [ ] GPU detected and functioning" + echo " [ ] Dataset loaded correctly" + echo " [ ] Training completed without errors" + echo " [ ] Wandb received metrics" + echo " [ ] Model saved locally" + echo "" + echo "Now test evaluate.py and generate.py:" + echo " python scripts/evaluate.py --model_path ./output/test_run --num_samples 50" + echo " python scripts/generate.py --model_path ./output/test_run --num_generations 5 --validate" + exit 0 +fi + +# Track completed trainings +declare -a COMPLETED_MODELS=() +declare -a FAILED_MODELS=() + +# Main training loop +if [ "$RUN_TRAINING" = true ]; then + print_header "Starting Full Training Workflow" + + START_TIME=$(date +%s) + + for config in "${CONFIGS[@]}"; do + IFS='|' read -r model_name epochs batch_size grad_accum lr run_suffix <<< "$config" + + # Skip if specific model requested and this is not it + if [ -n "$SPECIFIC_MODEL" ] && [ "$model_name" != "$SPECIFIC_MODEL" ]; then + continue + fi + + for data_config in "${DATA_COLUMNS[@]}"; do + IFS='|' read -r data_column format <<< "$data_config" + + run_name="${run_suffix}_${DATA_DIR}_${format}" + + print_status "Starting training: $run_name" + + if run_training "$model_name" "$epochs" "$batch_size" "$grad_accum" "$lr" "$run_suffix" "$data_column" "$format"; then + COMPLETED_MODELS+=("${HF_USER}/${run_name}|${data_column}") + else + FAILED_MODELS+=("$run_name") + fi + + # Small delay 
between trainings + sleep 10 + done + done + + END_TIME=$(date +%s) + DURATION=$((END_TIME - START_TIME)) + HOURS=$((DURATION / 3600)) + MINUTES=$(((DURATION % 3600) / 60)) + + print_header "Training Summary" + echo "Total time: ${HOURS}h ${MINUTES}m" + echo "" + echo "Completed models (${#COMPLETED_MODELS[@]}):" + for model in "${COMPLETED_MODELS[@]}"; do + echo " - ${model%|*}" + done + echo "" + if [ ${#FAILED_MODELS[@]} -gt 0 ]; then + echo "Failed models (${#FAILED_MODELS[@]}):" + for model in "${FAILED_MODELS[@]}"; do + echo " - $model" + done + fi +fi + +# Evaluation +if [ "$RUN_EVAL" = true ]; then + print_header "Running Evaluations" + + # If we just trained, use those models + if [ ${#COMPLETED_MODELS[@]} -gt 0 ]; then + for model_info in "${COMPLETED_MODELS[@]}"; do + IFS='|' read -r model_path data_column <<< "$model_info" + run_evaluation "$model_path" "$data_column" 500 + done + else + # Otherwise, evaluate all expected models + for config in "${CONFIGS[@]}"; do + IFS='|' read -r model_name epochs batch_size grad_accum lr run_suffix <<< "$config" + + for data_config in "${DATA_COLUMNS[@]}"; do + IFS='|' read -r data_column format <<< "$data_config" + + run_name="${run_suffix}_${DATA_DIR}_${format}" + model_path="${HF_USER}/${run_name}" + + run_evaluation "$model_path" "$data_column" 500 + done + done + fi + + print_header "Evaluation Complete" + echo "Results saved to: $EVAL_OUTPUT" +fi + +print_header "Workflow Complete!" +echo "" +echo "Next steps:" +echo "1. Check training results on wandb: https://wandb.ai/${WANDB_PROJECT}" +echo "2. Check models on HuggingFace Hub: https://huggingface.co/${HF_USER}" +echo "3. 
Review evaluation results in: $EVAL_OUTPUT" +echo "" +echo "To test a model interactively:" +echo " python scripts/generate.py --model_path ${HF_USER}/Se124M_700K_infix --interactive --validate" +echo "" diff --git a/scripts/aws/setup_and_train_exp_a.sh b/scripts/aws/setup_and_train_exp_a.sh new file mode 100644 index 0000000000000000000000000000000000000000..89bea2ce72c47b748cbd03f34ebb54549e282128 --- /dev/null +++ b/scripts/aws/setup_and_train_exp_a.sh @@ -0,0 +1,83 @@ +#!/bin/bash +# Complete setup and training script for EXP-A (JSON format) +# Run this on a fresh AWS instance + +set -e + +echo "==============================================" +echo "EXP-A: Complete Setup and Training" +echo "JSON Format with <|endofex|> marker" +echo "==============================================" +echo "Started: $(date)" +echo "" + +cd /home/ubuntu/seriguela + +# Activate environment +source venv/bin/activate + +# Step 1: Prepare data +echo "[1/3] Preparing training data..." +echo "This will download from HuggingFace Hub and convert to JSON format" +echo "" + +mkdir -p data/experiments + +python scripts/data/prepare_experiment_data.py \ + --dataset_repo_id augustocsc/sintetico_natural \ + --data_dir 700K \ + --data_column i_prompt_n \ + --output_base_dir ./data/experiments + +# Verify data +if [ ! -f "./data/experiments/exp_a_json/train.csv" ]; then + echo "ERROR: Data preparation failed!" + exit 1 +fi + +TRAIN_COUNT=$(wc -l < ./data/experiments/exp_a_json/train.csv) +echo "Training samples: $TRAIN_COUNT" + +# Step 2: Run training +echo "" +echo "[2/3] Starting training..." 
+echo "Output: ./output/exp_a_json" +echo "" + +python scripts/train_experiment.py \ + --experiment_name "exp_a_json" \ + --train_file ./data/experiments/exp_a_json/train.csv \ + --validation_file ./data/experiments/exp_a_json/validation.csv \ + --output_dir ./output/exp_a_json \ + --json_format \ + --end_marker '"}' \ + --num_train_epochs 3 \ + --per_device_train_batch_size 8 \ + --gradient_accumulation_steps 4 \ + --learning_rate 5e-5 \ + --block_size 256 \ + --fp16 \ + --wandb_project seriguela_experiments \ + --wandb_run_name "exp_a_json_$(date +%Y%m%d_%H%M%S)" + +# Step 3: Evaluate +echo "" +echo "[3/3] Evaluating model..." +echo "" + +python scripts/evaluate_experiments.py \ + --model_path ./output/exp_a_json \ + --experiment_type json \ + --num_samples 200 \ + --output_file ./output/exp_a_json/evaluation_results.json + +echo "" +echo "==============================================" +echo "EXP-A Complete!" +echo "==============================================" +echo "Finished: $(date)" +echo "Model: ./output/exp_a_json" +echo "Results: ./output/exp_a_json/evaluation_results.json" + +# Create completion marker +touch /home/ubuntu/.exp_a_complete diff --git a/scripts/aws/setup_and_train_exp_b.sh b/scripts/aws/setup_and_train_exp_b.sh new file mode 100644 index 0000000000000000000000000000000000000000..31d3226d4d5f4b700c3e15884cfc2928b1385de5 --- /dev/null +++ b/scripts/aws/setup_and_train_exp_b.sh @@ -0,0 +1,83 @@ +#!/bin/bash +# Complete setup and training script for EXP-B (EOS format) +# Run this on a fresh AWS instance + +set -e + +echo "==============================================" +echo "EXP-B: Complete Setup and Training" +echo "EOS Format with <|endoftext|> marker" +echo "==============================================" +echo "Started: $(date)" +echo "" + +cd /home/ubuntu/seriguela + +# Activate environment +source venv/bin/activate + +# Step 1: Prepare data +echo "[1/3] Preparing training data..." 
+echo "This will download from HuggingFace Hub and convert to EOS format" +echo "" + +mkdir -p data/experiments + +python scripts/data/prepare_experiment_data.py \ + --dataset_repo_id augustocsc/sintetico_natural \ + --data_dir 700K \ + --data_column i_prompt_n \ + --output_base_dir ./data/experiments + +# Verify data +if [ ! -f "./data/experiments/exp_b_eos/train.csv" ]; then + echo "ERROR: Data preparation failed!" + exit 1 +fi + +TRAIN_COUNT=$(wc -l < ./data/experiments/exp_b_eos/train.csv) +echo "Training samples: $TRAIN_COUNT" + +# Step 2: Run training +echo "" +echo "[2/3] Starting training..." +echo "Output: ./output/exp_b_eos" +echo "" + +python scripts/train_experiment.py \ + --experiment_name "exp_b_eos" \ + --train_file ./data/experiments/exp_b_eos/train.csv \ + --validation_file ./data/experiments/exp_b_eos/validation.csv \ + --output_dir ./output/exp_b_eos \ + --end_marker "<|endoftext|>" \ + --use_native_eos \ + --num_train_epochs 3 \ + --per_device_train_batch_size 8 \ + --gradient_accumulation_steps 4 \ + --learning_rate 5e-5 \ + --block_size 128 \ + --fp16 \ + --wandb_project seriguela_experiments \ + --wandb_run_name "exp_b_eos_$(date +%Y%m%d_%H%M%S)" + +# Step 3: Evaluate +echo "" +echo "[3/3] Evaluating model..." +echo "" + +python scripts/evaluate_experiments.py \ + --model_path ./output/exp_b_eos \ + --experiment_type eos \ + --num_samples 200 \ + --output_file ./output/exp_b_eos/evaluation_results.json + +echo "" +echo "==============================================" +echo "EXP-B Complete!" 
+echo "==============================================" +echo "Finished: $(date)" +echo "Model: ./output/exp_b_eos" +echo "Results: ./output/exp_b_eos/evaluation_results.json" + +# Create completion marker +touch /home/ubuntu/.exp_b_complete diff --git a/scripts/aws/setup_aws.sh b/scripts/aws/setup_aws.sh new file mode 100644 index 0000000000000000000000000000000000000000..0a75a64a071345720e302bce75d3265e61fc21f4 --- /dev/null +++ b/scripts/aws/setup_aws.sh @@ -0,0 +1,87 @@ +#!/bin/bash +# Setup script for AWS g5.xlarge instance (Deep Learning AMI Ubuntu) +# Project: Seriguela - GPT-2 Fine-tuning for Symbolic Regression +# Optimized for faster setup + +set -e + +echo "==========================================" +echo "Seriguela AWS Setup Script (Optimized)" +echo "==========================================" + +# Colors +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +RED='\033[0;31m' +NC='\033[0m' + +print_status() { echo -e "${GREEN}[INFO]${NC} $1"; } +print_warning() { echo -e "${YELLOW}[WARN]${NC} $1"; } +print_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +# Configuration +REPO_URL="https://github.com/augustocsc/seriguela.git" +REPO_DIR="$HOME/seriguela" +PYTHON_VERSION="python3" + +# Check GPU +print_status "Checking GPU..." +if ! nvidia-smi &>/dev/null; then + print_error "GPU not detected!" + exit 1 +fi +nvidia-smi --query-gpu=name,memory.total --format=csv,noheader + +# Install system dependencies (minimal) +print_status "Installing system dependencies..." +sudo apt-get update -qq +sudo apt-get install -y -qq python3-venv python3-pip git htop + +# Clone or update repository +if [ -d "$REPO_DIR" ]; then + print_status "Updating repository..." + cd "$REPO_DIR" && git pull +else + print_status "Cloning repository..." + git clone "$REPO_URL" "$REPO_DIR" +fi +cd "$REPO_DIR" + +# Setup virtual environment +print_status "Setting up virtual environment..." 
+$PYTHON_VERSION -m venv venv +source venv/bin/activate + +# Upgrade pip and install dependencies in one step +print_status "Installing all dependencies (this may take a few minutes)..." +pip install --upgrade pip -q +pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 -q + +# Verify installation +print_status "Verifying installation..." +python -c " +import torch +import transformers +import peft +print(f'PyTorch: {torch.__version__}') +print(f'CUDA available: {torch.cuda.is_available()}') +if torch.cuda.is_available(): + print(f'GPU: {torch.cuda.get_device_name(0)}') + print(f'Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB') +print(f'Transformers: {transformers.__version__}') +print(f'PEFT: {peft.__version__}') +" + +echo "" +echo "==========================================" +echo -e "${GREEN}Setup Complete!${NC}" +echo "==========================================" +echo "" +echo "Next: Configure tokens in .env file:" +echo " echo 'HF_TOKEN=your_token' > .env" +echo " echo 'WANDB_API_KEY=your_key' >> .env" +echo "" +echo "Then run training:" +echo " source venv/bin/activate" +echo " bash scripts/aws/run_all_training.sh --test-only" +echo "" diff --git a/scripts/aws/train_exp_a.sh b/scripts/aws/train_exp_a.sh new file mode 100644 index 0000000000000000000000000000000000000000..62eb9b00870306b0d26834ddd2b2866ccbbf0552 --- /dev/null +++ b/scripts/aws/train_exp_a.sh @@ -0,0 +1,57 @@ +#!/bin/bash +# EXP-A: Training with JSON structured format +# Uses <|endofex|> as end marker + +set -e + +echo "==============================================" +echo "EXP-A: JSON Format Training" +echo "==============================================" + +cd ~/seriguela + +# Activate virtual environment +source venv/bin/activate + +# Check data exists +if [ ! -f "./data/experiments/exp_a_json/train.csv" ]; then + echo "ERROR: Training data not found!" 
+ echo "Expected: ./data/experiments/exp_a_json/train.csv" + exit 1 +fi + +# Count samples +TRAIN_COUNT=$(wc -l < ./data/experiments/exp_a_json/train.csv) +echo "Training samples: $TRAIN_COUNT" + +# Training configuration +export WANDB_PROJECT="seriguela_experiments" +export HF_TOKEN="${HF_TOKEN:-}" +export WANDB_API_KEY="${WANDB_API_KEY:-}" + +# Run training +echo "" +echo "Starting training..." +echo "Output: ./output/exp_a_json" +echo "" + +python scripts/train_experiment.py \ + --experiment_name "exp_a_json" \ + --train_file ./data/experiments/exp_a_json/train.csv \ + --validation_file ./data/experiments/exp_a_json/validation.csv \ + --output_dir ./output/exp_a_json \ + --end_marker "<|endofex|>" \ + --num_train_epochs 3 \ + --per_device_train_batch_size 8 \ + --gradient_accumulation_steps 4 \ + --learning_rate 5e-5 \ + --block_size 256 \ + --fp16 \ + --wandb_project seriguela_experiments \ + --wandb_run_name "exp_a_json_$(date +%Y%m%d_%H%M%S)" + +echo "" +echo "==============================================" +echo "EXP-A Training Complete!" +echo "==============================================" +echo "Model saved to: ./output/exp_a_json" diff --git a/scripts/aws/train_exp_b.sh b/scripts/aws/train_exp_b.sh new file mode 100644 index 0000000000000000000000000000000000000000..b640e0045e1a7b3c4ed3f876043cd05b6be4e9b4 --- /dev/null +++ b/scripts/aws/train_exp_b.sh @@ -0,0 +1,58 @@ +#!/bin/bash +# EXP-B: Training with GPT-2 EOS token (<|endoftext|>) +# Uses native GPT-2 EOS token (ID 50256) + +set -e + +echo "==============================================" +echo "EXP-B: EOS Token Format Training" +echo "==============================================" + +cd ~/seriguela + +# Activate virtual environment +source venv/bin/activate + +# Check data exists +if [ ! -f "./data/experiments/exp_b_eos/train.csv" ]; then + echo "ERROR: Training data not found!" 
+ echo "Expected: ./data/experiments/exp_b_eos/train.csv" + exit 1 +fi + +# Count samples +TRAIN_COUNT=$(wc -l < ./data/experiments/exp_b_eos/train.csv) +echo "Training samples: $TRAIN_COUNT" + +# Training configuration +export WANDB_PROJECT="seriguela_experiments" +export HF_TOKEN="${HF_TOKEN:-}" +export WANDB_API_KEY="${WANDB_API_KEY:-}" + +# Run training +echo "" +echo "Starting training..." +echo "Output: ./output/exp_b_eos" +echo "" + +python scripts/train_experiment.py \ + --experiment_name "exp_b_eos" \ + --train_file ./data/experiments/exp_b_eos/train.csv \ + --validation_file ./data/experiments/exp_b_eos/validation.csv \ + --output_dir ./output/exp_b_eos \ + --end_marker "<|endoftext|>" \ + --use_native_eos \ + --num_train_epochs 3 \ + --per_device_train_batch_size 8 \ + --gradient_accumulation_steps 4 \ + --learning_rate 5e-5 \ + --block_size 128 \ + --fp16 \ + --wandb_project seriguela_experiments \ + --wandb_run_name "exp_b_eos_$(date +%Y%m%d_%H%M%S)" + +echo "" +echo "==============================================" +echo "EXP-B Training Complete!" +echo "==============================================" +echo "Model saved to: ./output/exp_b_eos" diff --git a/scripts/aws/train_fixed_model.sh b/scripts/aws/train_fixed_model.sh new file mode 100644 index 0000000000000000000000000000000000000000..f77b2179cd02c51ac4ff0f3b5da61933ae6eea15 --- /dev/null +++ b/scripts/aws/train_fixed_model.sh @@ -0,0 +1,144 @@ +#!/bin/bash +# Train model with proper end-of-expression markers +# This script retrains the Seriguela model with <|endofex|> markers in the training data +# so the model learns to stop generation correctly. 
+ +set -e # Exit on error + +echo "================================================================" +echo "SERIGUELA - Training Model with Proper End Markers" +echo "================================================================" + +# Configuration +MODEL_NAME="gpt2" +DATASET_REPO="augustocsc/sintetico_natural" +DATA_DIR="700K" +DATA_COLUMN="i_prompt_n" # or p_prompt_n for prefix +OUTPUT_DIR="./output/Se124M_700K_infix_v2" +HUB_MODEL_ID="augustocsc/Se124M_700K_infix_v2" # NEW REPO NAME + +# Hyperparameters +EPOCHS=3 +BATCH_SIZE=8 +LEARNING_RATE=5e-5 +BLOCK_SIZE=128 +LORA_R=8 +LORA_ALPHA=32 +LORA_DROPOUT=0.05 + +echo "" +echo "Configuration:" +echo " Model: $MODEL_NAME" +echo " Dataset: $DATASET_REPO/$DATA_DIR" +echo " Data Column: $DATA_COLUMN" +echo " Output: $OUTPUT_DIR" +echo " Hub Model: $HUB_MODEL_ID" +echo "" +echo "Hyperparameters:" +echo " Epochs: $EPOCHS" +echo " Batch Size: $BATCH_SIZE" +echo " Learning Rate: $LEARNING_RATE" +echo " Block Size: $BLOCK_SIZE" +echo " LoRA r: $LORA_R" +echo " LoRA alpha: $LORA_ALPHA" +echo " LoRA dropout: $LORA_DROPOUT" +echo "================================================================" + +# Check if data preparation is needed +echo "" +echo "[Step 1/3] Checking data preparation..." +if [ ! -f "./data/processed/700K_fixed/train_700K.csv" ]; then + echo "Training data not found. Preparing data with end markers..." + + python scripts/data/prepare_training_data_fixed.py \ + --dataset_repo_id $DATASET_REPO \ + --data_dir $DATA_DIR \ + --data_column $DATA_COLUMN \ + --output_dir ./data/processed/700K_fixed \ + --validate + + if [ $? -ne 0 ]; then + echo "❌ Data preparation failed!" + exit 1 + fi + + echo "✅ Data preparation complete!" 
+else + echo "✅ Training data already prepared (./data/processed/700K_fixed/)" +fi + +# Optional: Show sample of prepared data +echo "" +echo "Sample of prepared data:" +head -n 2 ./data/processed/700K_fixed/train_700K.csv +echo "" + +# Start training +echo "" +echo "[Step 2/3] Starting training..." +echo "================================================================" +echo "" + +python scripts/train.py \ + --model_name_or_path $MODEL_NAME \ + --dataset_repo_id $DATASET_REPO \ + --data_dir $DATA_DIR \ + --data_column $DATA_COLUMN \ + --output_dir $OUTPUT_DIR \ + --num_train_epochs $EPOCHS \ + --per_device_train_batch_size $BATCH_SIZE \ + --learning_rate $LEARNING_RATE \ + --block_size $BLOCK_SIZE \ + --eval_strategy epoch \ + --save_strategy epoch \ + --save_total_limit 2 \ + --load_best_model_at_end \ + --lora_r $LORA_R \ + --lora_alpha $LORA_ALPHA \ + --lora_dropout $LORA_DROPOUT \ + --push_to_hub \ + --hub_model_id $HUB_MODEL_ID \ + --logging_steps 100 \ + --seed 42 + +if [ $? -ne 0 ]; then + echo "" + echo "❌ Training failed!" + exit 1 +fi + +echo "" +echo "✅ Training complete!" + +# Quick test generation +echo "" +echo "[Step 3/3] Testing model generation..." +echo "================================================================" +echo "" + +python scripts/generate.py \ + --model_path $OUTPUT_DIR \ + --num_generations 5 \ + --validate + +if [ $? -ne 0 ]; then + echo "" + echo "⚠️ Generation test failed, but model was trained successfully" +else + echo "" + echo "✅ Generation test passed!" +fi + +# Summary +echo "" +echo "================================================================" +echo "TRAINING COMPLETE" +echo "================================================================" +echo "Model saved to: $OUTPUT_DIR" +echo "Model pushed to: $HUB_MODEL_ID" +echo "" +echo "Next steps:" +echo " 1. Evaluate the model: python scripts/evaluate.py --model_path $OUTPUT_DIR" +echo " 2. 
Compare with old model: python scripts/compare_models.py --model1 ./output/Se124M_700K_infix --model2 $OUTPUT_DIR" +echo " 3. Generate more samples: python scripts/generate.py --model_path $OUTPUT_DIR --num_generations 20" +echo "================================================================" diff --git a/scripts/aws/train_v3_model.sh b/scripts/aws/train_v3_model.sh new file mode 100644 index 0000000000000000000000000000000000000000..597816f00e7c3c65866e51846da213b1d710d5aa --- /dev/null +++ b/scripts/aws/train_v3_model.sh @@ -0,0 +1,144 @@ +#!/bin/bash +# Training script for v3 model with proper end markers +# This script is designed to be run on AWS EC2 instances with GPU + +set -e # Exit on error + +echo "==================================================" +echo "Seriguela v3 Model Training" +echo "==================================================" +echo "Start time: $(date)" +echo "" + +# Configuration +PROJECT_DIR="${HOME}/seriguela" +OUTPUT_DIR="${PROJECT_DIR}/output/Se124M_700K_infix_v3" +CONFIG_FILE="${PROJECT_DIR}/configs/training_v3.json" +DATA_DIR="${PROJECT_DIR}/data/processed/700K_fixed" + +# Check if running in project directory +if [ ! -d "$PROJECT_DIR" ]; then + echo "ERROR: Project directory not found: $PROJECT_DIR" + exit 1 +fi + +cd "$PROJECT_DIR" + +# Activate virtual environment +echo "Activating virtual environment..." +if [ -d "venv" ]; then + source venv/bin/activate +elif [ -d ".seriguela" ]; then + source .seriguela/bin/activate +else + echo "ERROR: Virtual environment not found!" + exit 1 +fi + +# Verify GPU availability +echo "" +echo "Checking GPU availability..." +python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}'); print(f'GPU count: {torch.cuda.device_count()}'); print(f'GPU name: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"N/A\"}')" + +if ! python -c "import torch; exit(0 if torch.cuda.is_available() else 1)"; then + echo "WARNING: GPU not detected! Training will be slow on CPU." 
+ read -p "Continue anyway? (y/n) " -n 1 -r + echo + if [[ ! $REPLY =~ ^[Yy]$ ]]; then + exit 1 + fi +fi + +# Verify data files exist +echo "" +echo "Verifying training data..." +if [ ! -f "$DATA_DIR/train_700K.csv" ]; then + echo "ERROR: Training data not found: $DATA_DIR/train_700K.csv" + echo "Please ensure data preparation step was completed." + exit 1 +fi + +if [ ! -f "$DATA_DIR/validation_700K.csv" ]; then + echo "ERROR: Validation data not found: $DATA_DIR/validation_700K.csv" + exit 1 +fi + +# Check for end markers in data +echo "Checking for end markers in training data..." +MARKER_COUNT=$(head -100 "$DATA_DIR/train_700K.csv" | grep -c "<|endofex|>" || true) +if [ "$MARKER_COUNT" -eq 0 ]; then + echo "ERROR: No <|endofex|> markers found in training data!" + echo "Please run data preparation script first." + exit 1 +else + echo "✓ End markers detected in training data" +fi + +# Verify config file exists +if [ ! -f "$CONFIG_FILE" ]; then + echo "ERROR: Config file not found: $CONFIG_FILE" + exit 1 +fi + +echo "" +echo "Configuration:" +echo " Config file: $CONFIG_FILE" +echo " Output directory: $OUTPUT_DIR" +echo " Training data: $DATA_DIR/train_700K.csv" +echo " Validation data: $DATA_DIR/validation_700K.csv" +echo "" + +# Create output directory +mkdir -p "$OUTPUT_DIR" + +# Set environment variables +export WANDB_PROJECT="seriguela_v3" +export WANDB_RUN_NAME="v3_proper_markers_$(date +%Y%m%d_%H%M%S)" + +# Check if wandb is configured +if ! python -c "import wandb; wandb.api.api_key" 2>/dev/null; then + echo "WARNING: Weights & Biases not configured. Training will proceed without W&B logging." + echo "To enable W&B: wandb login" +fi + +# Start training +echo "" +echo "==================================================" +echo "Starting training..." 
+echo "==================================================" +echo "" + +# Run training with config file +python scripts/train.py \ + --config "$CONFIG_FILE" \ + --output_dir "$OUTPUT_DIR" \ + --use_local_csvs \ + --train_file "$DATA_DIR/train_700K.csv" \ + --validation_file "$DATA_DIR/validation_700K.csv" \ + --wandb_project seriguela_v3 \ + --wandb_run_name "$WANDB_RUN_NAME" + +TRAIN_EXIT_CODE=$? + +echo "" +echo "==================================================" +echo "Training completed" +echo "==================================================" +echo "End time: $(date)" +echo "Exit code: $TRAIN_EXIT_CODE" +echo "" + +if [ $TRAIN_EXIT_CODE -eq 0 ]; then + echo "✓ Training completed successfully!" + echo "" + echo "Model saved to: $OUTPUT_DIR" + echo "" + echo "Next steps:" + echo "1. Run evaluation: python scripts/evaluate.py --model_path $OUTPUT_DIR" + echo "2. Test generation: python scripts/generate.py --model_path $OUTPUT_DIR --num_generations 50 --validate" + echo "3. Push to Hub (if configured): huggingface-cli upload augustocsc/Se124M_700K_infix_v3 $OUTPUT_DIR" +else + echo "✗ Training failed with exit code $TRAIN_EXIT_CODE" + echo "Check logs above for error details." 
+ exit $TRAIN_EXIT_CODE +fi diff --git a/scripts/aws/validate_setup.sh b/scripts/aws/validate_setup.sh new file mode 100644 index 0000000000000000000000000000000000000000..9aa0020e4b302d81ff48fd9adcb0569adbb65217 --- /dev/null +++ b/scripts/aws/validate_setup.sh @@ -0,0 +1,285 @@ +#!/bin/bash +# Validate Seriguela Training Setup +# This script validates that everything is configured correctly before training +# Usage: ./validate_setup.sh + +set -e + +GREEN='\033[0;32m' +RED='\033[0;31m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +print_success() { echo -e "${GREEN}✅${NC} $1"; } +print_error() { echo -e "${RED}❌${NC} $1"; } +print_warning() { echo -e "${YELLOW}⚠️${NC} $1"; } +print_header() { echo -e "\n${BLUE}========== $1 ==========${NC}"; } + +ERRORS=0 + +print_header "Seriguela Setup Validation" + +# Change to project directory +if [ -d "/home/ubuntu/seriguela" ]; then + cd /home/ubuntu/seriguela +elif [ -d "$(pwd)/seriguela" ]; then + cd seriguela +else + cd . +fi + +print_header "1. Python Environment" + +# Check Python version +if python3 --version &> /dev/null; then + PYTHON_VERSION=$(python3 --version) + print_success "Python installed: $PYTHON_VERSION" +else + print_error "Python not found" + ERRORS=$((ERRORS + 1)) +fi + +# Check venv +if [ -d "venv" ]; then + print_success "Virtual environment exists" + source venv/bin/activate +else + print_error "Virtual environment not found" + ERRORS=$((ERRORS + 1)) +fi + +# Check pip +if pip --version &> /dev/null; then + PIP_VERSION=$(pip --version | cut -d' ' -f2) + print_success "pip version: $PIP_VERSION" +else + print_error "pip not found" + ERRORS=$((ERRORS + 1)) +fi + +print_header "2. 
Python Packages" + +# Check critical packages +PACKAGES=( + "transformers:Hugging Face Transformers" + "torch:PyTorch" + "wandb:Weights & Biases" + "peft:Parameter-Efficient Fine-Tuning" + "datasets:Hugging Face Datasets" +) + +for pkg_info in "${PACKAGES[@]}"; do + IFS=':' read -r pkg_name pkg_desc <<< "$pkg_info" + + if python3 -c "import $pkg_name" &> /dev/null; then + VERSION=$(python3 -c "import $pkg_name; print($pkg_name.__version__)" 2>/dev/null || echo "unknown") + print_success "$pkg_desc ($pkg_name) - version $VERSION" + else + print_error "$pkg_desc ($pkg_name) not installed" + ERRORS=$((ERRORS + 1)) + fi +done + +# Check Wandb version specifically +WANDB_VERSION=$(python3 -c "import wandb; print(wandb.__version__)" 2>/dev/null || echo "0.0.0") +REQUIRED_VERSION="0.24.0" + +if python3 << VERSIONCHECK +import sys +from packaging import version +current = version.parse("$WANDB_VERSION") +required = version.parse("$REQUIRED_VERSION") +sys.exit(0 if current >= required else 1) +VERSIONCHECK +then + print_success "Wandb version $WANDB_VERSION (>= $REQUIRED_VERSION required)" +else + print_warning "Wandb version $WANDB_VERSION is older than recommended $REQUIRED_VERSION" + print_warning "New API key format (wandb_v1_...) requires Wandb >= 0.24.0" +fi + +print_header "3. Environment Variables" + +# Load .env if exists +if [ -f ".env" ]; then + source <(grep -v '^#' .env | sed 's/^/export /') + print_success ".env file loaded" +else + print_warning ".env file not found" +fi + +# Check HF_TOKEN +if [ -n "$HF_TOKEN" ]; then + TOKEN_LEN=${#HF_TOKEN} + print_success "HF_TOKEN set ($TOKEN_LEN characters)" +else + print_warning "HF_TOKEN not set (model won't be pushed to Hub)" +fi + +# Check WANDB_API_KEY +if [ -n "$WANDB_API_KEY" ]; then + KEY_LEN=${#WANDB_API_KEY} + print_success "WANDB_API_KEY set ($KEY_LEN characters)" +else + print_error "WANDB_API_KEY not set" + ERRORS=$((ERRORS + 1)) +fi + +print_header "4. 
GPU / CUDA" + +# Check nvidia-smi +if nvidia-smi &> /dev/null; then + GPU_NAME=$(nvidia-smi --query-gpu=name --format=csv,noheader | head -1) + GPU_MEMORY=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader | head -1) + print_success "GPU detected: $GPU_NAME ($GPU_MEMORY)" +else + print_error "GPU not detected (nvidia-smi failed)" + ERRORS=$((ERRORS + 1)) +fi + +# Check CUDA +if python3 -c "import torch; assert torch.cuda.is_available()" &> /dev/null; then + CUDA_VERSION=$(python3 -c "import torch; print(torch.version.cuda)") + GPU_COUNT=$(python3 -c "import torch; print(torch.cuda.device_count())") + print_success "CUDA available: version $CUDA_VERSION ($GPU_COUNT GPU(s))" +else + print_error "CUDA not available in PyTorch" + ERRORS=$((ERRORS + 1)) +fi + +print_header "5. Wandb Authentication" + +if [ -n "$WANDB_API_KEY" ]; then + if python3 << WANDBCHECK +import wandb +import sys +try: + result = wandb.login(key="$WANDB_API_KEY", relogin=True) + if result: + print("Login successful") + sys.exit(0) + else: + print("Login failed") + sys.exit(1) +except Exception as e: + print(f"Error: {e}") + sys.exit(1) +WANDBCHECK + then + print_success "Wandb authentication successful" + + # Get user info + WANDB_USER=$(python3 << 'GETUSER' +import wandb +try: + api = wandb.Api() + print(api.viewer.get("username", "unknown")) +except: + print("unknown") +GETUSER +) + print_success "Logged in as: $WANDB_USER" + else + print_error "Wandb authentication failed" + ERRORS=$((ERRORS + 1)) + fi +else + print_warning "Skipping Wandb auth (no API key)" +fi + +print_header "6. 
HuggingFace Authentication" + +if [ -n "$HF_TOKEN" ]; then + if python3 << HFCHECK +from huggingface_hub import HfApi +import sys +try: + api = HfApi(token="$HF_TOKEN") + user = api.whoami() + print(f"Login successful: {user.get('name', 'unknown')}") + sys.exit(0) +except Exception as e: + print(f"Error: {e}") + sys.exit(1) +HFCHECK + then + print_success "HuggingFace authentication successful" + else + print_error "HuggingFace authentication failed" + ERRORS=$((ERRORS + 1)) + fi +else + print_warning "Skipping HF auth (no token)" +fi + +print_header "7. Dataset Access" + +# Test dataset loading +if python3 << DATASETCHECK +from datasets import load_dataset +import sys +try: + # Quick test load (just get info, don't download) + ds = load_dataset("augustocsc/sintetico_natural", split="train", streaming=True) + print("Dataset accessible") + sys.exit(0) +except Exception as e: + print(f"Error: {e}") + sys.exit(1) +DATASETCHECK +then + print_success "Dataset accessible: augustocsc/sintetico_natural" +else + print_warning "Could not verify dataset access (may require authentication)" +fi + +print_header "8. Scripts" + +SCRIPTS=( + "scripts/train.py" + "scripts/evaluate.py" + "scripts/generate.py" + "scripts/aws/monitor_training_auto.sh" + "scripts/aws/analyze_model.sh" +) + +for script in "${SCRIPTS[@]}"; do + if [ -f "$script" ]; then + print_success "$script exists" + else + print_warning "$script not found" + fi +done + +# Final summary +print_header "Validation Summary" +echo "" + +if [ $ERRORS -eq 0 ]; then + echo -e "${GREEN}╔══════════════════════════════════════╗${NC}" + echo -e "${GREEN}║ ║${NC}" + echo -e "${GREEN}║ ✅ ALL VALIDATIONS PASSED ✅ ║${NC}" + echo -e "${GREEN}║ ║${NC}" + echo -e "${GREEN}║ Ready for training! 
🚀 ║${NC}" + echo -e "${GREEN}║ ║${NC}" + echo -e "${GREEN}╚══════════════════════════════════════╝${NC}" + echo "" + echo "You can now run:" + echo " python scripts/train.py --help" + echo " bash scripts/aws/run_all_training.sh" + echo "" + exit 0 +else + echo -e "${RED}╔══════════════════════════════════════╗${NC}" + echo -e "${RED}║ ║${NC}" + echo -e "${RED}║ ❌ VALIDATION FAILED ❌ ║${NC}" + echo -e "${RED}║ ║${NC}" + echo -e "${RED}║ $ERRORS error(s) found ║${NC}" + echo -e "${RED}║ ║${NC}" + echo -e "${RED}╚══════════════════════════════════════╝${NC}" + echo "" + echo "Please fix the errors above before training." + echo "" + exit 1 +fi diff --git a/scripts/compare_models.py b/scripts/compare_models.py new file mode 100644 index 0000000000000000000000000000000000000000..f51725d98ff062c49f9d521d1aa512eb429a20de --- /dev/null +++ b/scripts/compare_models.py @@ -0,0 +1,271 @@ +""" +Compare two models: band-aided vs properly trained. +Evaluates both on same test set and reports metrics. 
+ +Usage: + python scripts/compare_models.py \ + --model1 ./output/Se124M_700K_infix \ + --model2 ./output/Se124M_700K_infix_v2 \ + --num_samples 500 +""" + +import argparse +import json +import os +import sys +from datetime import datetime + +# Import evaluate_model from evaluate.py +sys.path.insert(0, os.path.dirname(os.path.abspath(__file__))) +from evaluate import evaluate_model + + +def format_metric(value, metric_type): + """Format metric value for display.""" + if metric_type == "rate": + return f"{value * 100:5.1f}%" + elif metric_type == "float": + return f"{value:7.2f}" + elif metric_type == "int": + return f"{int(value):7d}" + else: + return f"{value:7}" + + +def print_comparison_table(metrics1, metrics2, model1_name, model2_name): + """Print formatted comparison table.""" + print("\n" + "=" * 80) + print("COMPARISON RESULTS") + print("=" * 80) + + # Header + print(f"{'Metric':<35} {model1_name:>20} {model2_name:>20}") + print("-" * 80) + + # Define metrics to compare + comparison_metrics = [ + ("valid_rate", "Valid Rate", "rate"), + ("parseable_rate", "Parseable Rate", "rate"), + ("constraints_met_rate", "Constraints Met", "rate"), + ("diversity_rate", "Diversity", "rate"), + ("avg_expression_length", "Avg Expression Length", "float"), + ("total_samples", "Total Samples", "int"), + ("total_valid", "Total Valid", "int"), + ] + + improvements = [] + + for key, label, metric_type in comparison_metrics: + val1 = metrics1.get(key, 0) + val2 = metrics2.get(key, 0) + + formatted_val1 = format_metric(val1, metric_type) + formatted_val2 = format_metric(val2, metric_type) + + print(f"{label:<35} {formatted_val1:>20} {formatted_val2:>20}") + + # Calculate improvement for rate metrics + if metric_type == "rate" and val1 > 0: + improvement = ((val2 - val1) / val1) * 100 + improvements.append((label, improvement, val2 - val1)) + + print("=" * 80) + + # Show improvements + print("\nIMPROVEMENTS (Model 2 vs Model 1):") + print("-" * 80) + + for label, improvement, 
absolute_diff in improvements: + sign = "+" if improvement > 0 else "" + abs_sign = "+" if absolute_diff > 0 else "" + print(f"{label:<35} {sign}{improvement:>6.1f}% ({abs_sign}{absolute_diff * 100:>5.1f} pp)") + + print("-" * 80) + + # Determine winner + valid_rate_improvement = metrics2.get("valid_rate", 0) - metrics1.get("valid_rate", 0) + + print("\n" + "=" * 80) + if valid_rate_improvement > 0.20: # >20% improvement + print(f"🎯 SIGNIFICANT IMPROVEMENT: Model 2 wins by {valid_rate_improvement * 100:.1f} percentage points") + print(" The properly trained model significantly outperforms the band-aided version!") + elif valid_rate_improvement > 0.05: # >5% improvement + print(f"✅ IMPROVEMENT: Model 2 wins by {valid_rate_improvement * 100:.1f} percentage points") + print(" The properly trained model shows clear improvement.") + elif valid_rate_improvement > 0: # Any improvement + print(f"📈 SLIGHT IMPROVEMENT: Model 2 wins by {valid_rate_improvement * 100:.1f} percentage points") + print(" The properly trained model shows modest improvement.") + elif valid_rate_improvement == 0: + print("⚖️ TIE: Both models perform equally") + print(" No significant difference between models.") + else: + print(f"⚠️ REGRESSION: Model 1 wins by {-valid_rate_improvement * 100:.1f} percentage points") + print(" The band-aided model performs better - retraining may need adjustment.") + + print("=" * 80) + + +def save_comparison_report(metrics1, metrics2, model1_name, model2_name, output_dir): + """Save detailed comparison report to JSON.""" + os.makedirs(output_dir, exist_ok=True) + + timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") + report_file = os.path.join(output_dir, f"comparison_{timestamp}.json") + + report = { + "timestamp": timestamp, + "model1": { + "name": model1_name, + "metrics": metrics1 + }, + "model2": { + "name": model2_name, + "metrics": metrics2 + }, + "comparison": { + "valid_rate_diff": metrics2.get("valid_rate", 0) - metrics1.get("valid_rate", 0), + 
"parseable_rate_diff": metrics2.get("parseable_rate", 0) - metrics1.get("parseable_rate", 0), + "constraints_met_diff": metrics2.get("constraints_met_rate", 0) - metrics1.get("constraints_met_rate", 0), + "diversity_diff": metrics2.get("diversity_rate", 0) - metrics1.get("diversity_rate", 0), + } + } + + with open(report_file, "w") as f: + json.dump(report, f, indent=2) + + print(f"\n📄 Detailed comparison report saved to: {report_file}") + return report_file + + +def compare_models(model1_path, model2_path, model1_name, model2_name, + num_samples=500, dataset_repo_id="augustocsc/sintetico_natural", + data_dir="700K", data_column="i_prompt_n", output_dir="./evaluation_results/comparison"): + """Compare two models on same test set.""" + + print("=" * 80) + print("MODEL COMPARISON") + print("=" * 80) + print(f"Model 1 ({model1_name}): {model1_path}") + print(f"Model 2 ({model2_name}): {model2_path}") + print(f"Samples: {num_samples}") + print(f"Dataset: {dataset_repo_id}/{data_dir}") + print("=" * 80) + + # Create output directory + os.makedirs(output_dir, exist_ok=True) + + # Evaluate Model 1 (band-aided) + print(f"\n[1/2] Evaluating Model 1: {model1_name}") + print("-" * 80) + + args1 = argparse.Namespace( + model_path=model1_path, + base_model=None, + dataset_repo_id=dataset_repo_id, + data_dir=data_dir, + data_column=data_column, + num_samples=num_samples, + num_generations=1, + max_new_tokens=128, + temperature=0.7, + top_p=0.9, + output_dir=os.path.join(output_dir, "model1"), + seed=42, + device="auto" + ) + + try: + metrics1 = evaluate_model(args1) + except Exception as e: + print(f"\n❌ Error evaluating Model 1: {e}") + import traceback + traceback.print_exc() + sys.exit(1) + + # Evaluate Model 2 (properly trained) + print(f"\n[2/2] Evaluating Model 2: {model2_name}") + print("-" * 80) + + args2 = argparse.Namespace( + model_path=model2_path, + base_model=None, + dataset_repo_id=dataset_repo_id, + data_dir=data_dir, + data_column=data_column, + 
num_samples=num_samples, + num_generations=1, + max_new_tokens=128, + temperature=0.7, + top_p=0.9, + output_dir=os.path.join(output_dir, "model2"), + seed=42, + device="auto" + ) + + try: + metrics2 = evaluate_model(args2) + except Exception as e: + print(f"\n❌ Error evaluating Model 2: {e}") + import traceback + traceback.print_exc() + sys.exit(1) + + # Print comparison + print_comparison_table(metrics1, metrics2, model1_name, model2_name) + + # Save report + save_comparison_report(metrics1, metrics2, model1_name, model2_name, output_dir) + + return metrics1, metrics2 + + +def main(): + parser = argparse.ArgumentParser( + description="Compare two models on the same test set" + ) + parser.add_argument("--model1", type=str, required=True, + help="Path to first model (band-aided)") + parser.add_argument("--model2", type=str, required=True, + help="Path to second model (properly trained)") + parser.add_argument("--model1_name", type=str, default="Band-Aided", + help="Display name for model 1") + parser.add_argument("--model2_name", type=str, default="Proper", + help="Display name for model 2") + parser.add_argument("--num_samples", type=int, default=500, + help="Number of samples to evaluate") + parser.add_argument("--dataset_repo_id", type=str, default="augustocsc/sintetico_natural", + help="HuggingFace dataset repository") + parser.add_argument("--data_dir", type=str, default="700K", + help="Data directory within dataset") + parser.add_argument("--data_column", type=str, default="i_prompt_n", + help="Column name for prompts") + parser.add_argument("--output_dir", type=str, default="./evaluation_results/comparison", + help="Directory to save comparison results") + + args = parser.parse_args() + + # Run comparison + try: + compare_models( + model1_path=args.model1, + model2_path=args.model2, + model1_name=args.model1_name, + model2_name=args.model2_name, + num_samples=args.num_samples, + dataset_repo_id=args.dataset_repo_id, + data_dir=args.data_dir, + 
data_column=args.data_column, + output_dir=args.output_dir + ) + + print("\n✅ Comparison complete!") + + except Exception as e: + print(f"\n❌ Error during comparison: {e}") + import traceback + traceback.print_exc() + sys.exit(1) + + +if __name__ == "__main__": + main() diff --git a/scripts/compare_v1_v2_simple.py b/scripts/compare_v1_v2_simple.py new file mode 100644 index 0000000000000000000000000000000000000000..c79e1961a505a494944c870457e3a9e271fa05dd --- /dev/null +++ b/scripts/compare_v1_v2_simple.py @@ -0,0 +1,240 @@ +#!/usr/bin/env python3 +""" +Simple comparison of V1 vs V2 model generation quality +""" + +import sys +import torch +from pathlib import Path +from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria, StoppingCriteriaList +from peft import PeftModel + +sys.path.insert(0, str(Path(__file__).parent.parent)) +from classes.expression import Expression + + +class ExpressionStoppingCriteria(StoppingCriteria): + def __init__(self, tokenizer, stop_sequences): + self.tokenizer = tokenizer + self.stop_ids = [tokenizer.encode(seq, add_special_tokens=False) + for seq in stop_sequences] + + def __call__(self, input_ids, scores, **kwargs): + for stop_ids in self.stop_ids: + if len(stop_ids) > 0 and len(input_ids[0]) >= len(stop_ids): + if input_ids[0][-len(stop_ids):].tolist() == stop_ids: + return True + return False + + +def load_model(model_name, model_label): + print(f"\n{'='*60}") + print(f"Loading {model_label}: {model_name}") + print('='*60) + + # Load base GPT-2 + print("Loading base GPT-2...") + model = AutoModelForCausalLM.from_pretrained( + "gpt2", + torch_dtype=torch.float16, + device_map="auto" + ) + + # Setup tokenizer + tokenizer = AutoTokenizer.from_pretrained("gpt2") + tokenizer.add_special_tokens({ + "additional_special_tokens": ["<|startofex|>", "<|endofex|>"] + }) + + # Resize embeddings + model.resize_token_embeddings(len(tokenizer)) + + # Load adapter and merge + print(f"Loading adapter from {model_name}...") + 
model = PeftModel.from_pretrained(model, model_name) + print("Merging adapter...") + model = model.merge_and_unload() + model.eval() + + print(f"✓ {model_label} loaded successfully") + return model, tokenizer + + +def test_model(model, tokenizer, model_label, n_samples=20): + print(f"\n{'='*60}") + print(f"Testing {model_label} - {n_samples} generations") + print('='*60) + + # Same prompt for both models + prompt = """vars: x_1, x_2 +oper: *, +, -, sin, cos +cons: C +expr:""" + + inputs = tokenizer(prompt, return_tensors="pt").to(model.device) + + # Stopping criteria + stopping_criteria = StoppingCriteriaList([ + ExpressionStoppingCriteria(tokenizer, ["<|endofex|>", "\n\nvars:"]) + ]) + + # Use OPTIMAL config for each model (from FINAL_RESULTS_V1_VS_V2.md) + if model_label == "V1": + # V1 optimal: 83.3% valid rate + gen_config = { + "temperature": 0.5, + "top_k": 40, + "top_p": 0.9, + "repetition_penalty": 1.15, + "max_new_tokens": 100, + "do_sample": True, + "pad_token_id": tokenizer.eos_token_id, + } + print("Using V1 optimal config: temp=0.5, top_k=40, rep_penalty=1.15") + else: # V2 + # V2 optimal: 90% valid rate + gen_config = { + "temperature": 0.7, + "top_k": 0, + "top_p": 0.8, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": True, + "pad_token_id": tokenizer.eos_token_id, + } + print("Using V2 optimal config: temp=0.7, top_p=0.8 (nucleus sampling)") + + results = { + "valid_count": 0, + "correct_symbols_count": 0, + "expressions": [] + } + + allowed_vars = {"x_1", "x_2", "C"} + allowed_ops = {"*", "+", "-", "sin", "cos", "(", ")"} + + print(f"\nGenerating {n_samples} expressions...\n") + + for i in range(n_samples): + output = model.generate( + **inputs, + **gen_config, + stopping_criteria=stopping_criteria + ) + text = tokenizer.decode(output[0], skip_special_tokens=False) + + # Extract expression + if "expr:" in text: + expr_str = text.split("expr:")[-1].strip() + expr_str = expr_str.split("<|endofex|>")[0].strip() + else: + expr_str = 
text + + # Check if valid (can be parsed and evaluated) + is_valid = False + try: + expr = Expression(expr_str, is_prefix=False) + X_test = [[1.0, 2.0]] # Simple test + result = expr.evaluate(X_test) + if len(result) > 0 and all(x != float('inf') and x != float('-inf') and x == x for x in result): + is_valid = True + results["valid_count"] += 1 + except: + pass + + # Check if uses only correct symbols + has_correct_symbols = True + # Remove spaces and check tokens + expr_clean = expr_str.replace(" ", "") + # Check for allowed patterns + for char in expr_clean: + if char.isalpha() and char not in "xCsinco_": + has_correct_symbols = False + break + + # Check for garbage words + garbage_words = ["Buyable", "Instore", "Online", "Muslims", "crash", "Berman", + "vars:", "oper:", "expressed", "fluent", "Avenger", "repositories"] + for word in garbage_words: + if word in expr_str: + has_correct_symbols = False + break + + if has_correct_symbols: + results["correct_symbols_count"] += 1 + + results["expressions"].append({ + "index": i + 1, + "expression": expr_str[:80], # Limit display length + "valid": is_valid, + "correct_symbols": has_correct_symbols + }) + + # Show first 5 samples + if i < 5: + status = "✓ Valid" if is_valid else "✗ Invalid" + symbols = "✓ Clean" if has_correct_symbols else "✗ Garbage" + print(f" [{i+1:2d}] {status:10s} {symbols:10s} | {expr_str[:60]}") + + print(f"\n{'-'*60}") + print(f"RESULTS FOR {model_label}:") + print(f" Valid expressions: {results['valid_count']:2d}/{n_samples} ({results['valid_count']/n_samples*100:.1f}%)") + print(f" Correct symbols only: {results['correct_symbols_count']:2d}/{n_samples} ({results['correct_symbols_count']/n_samples*100:.1f}%)") + print(f"{'-'*60}") + + return results + + +def main(): + print("\n" + "="*60) + print("V1 vs V2 MODEL COMPARISON") + print("="*60) + print("Testing same prompt on both models") + print("Measuring: valid expressions + symbol correctness\n") + + # Test V1 + v1_model, v1_tokenizer = 
load_model("augustocsc/Se124M_700K_infix", "V1") + v1_results = test_model(v1_model, v1_tokenizer, "V1", n_samples=20) + + # Clean up V1 from memory + del v1_model + torch.cuda.empty_cache() + + # Test V2 + v2_model, v2_tokenizer = load_model("augustocsc/Se124M_700K_infix_v2", "V2") + v2_results = test_model(v2_model, v2_tokenizer, "V2", n_samples=20) + + # Final comparison + print("\n" + "="*60) + print("FINAL COMPARISON") + print("="*60) + print(f"\n{'Metric':<30s} {'V1':>10s} {'V2':>10s} {'Winner':>10s}") + print("-"*60) + + v1_valid = v1_results["valid_count"] + v2_valid = v2_results["valid_count"] + valid_winner = "V1" if v1_valid > v2_valid else ("V2" if v2_valid > v1_valid else "TIE") + print(f"{'Valid Expressions':<30s} {v1_valid:>10d} {v2_valid:>10d} {valid_winner:>10s}") + + v1_clean = v1_results["correct_symbols_count"] + v2_clean = v2_results["correct_symbols_count"] + clean_winner = "V1" if v1_clean > v2_clean else ("V2" if v2_clean > v1_clean else "TIE") + print(f"{'Correct Symbols Only':<30s} {v1_clean:>10d} {v2_clean:>10d} {clean_winner:>10s}") + + print("-"*60) + print(f"{'Valid Rate':<30s} {v1_valid/20*100:>9.1f}% {v2_valid/20*100:>9.1f}%") + print(f"{'Clean Symbol Rate':<30s} {v1_clean/20*100:>9.1f}% {v2_clean/20*100:>9.1f}%") + print("="*60) + + # Conclusion + print("\nConclusion:") + if v1_valid > v2_valid and v1_clean > v2_clean: + print(" → V1 is better on both metrics") + elif v2_valid > v1_valid and v2_clean > v1_clean: + print(" → V2 is better on both metrics") + else: + print(" → Mixed results - models have different strengths") + + +if __name__ == "__main__": + main() diff --git a/scripts/data/data_augmentation.py b/scripts/data/data_augmentation.py new file mode 100644 index 0000000000000000000000000000000000000000..5c50ecf32bb7b485d1dbca155da68a523315be2f --- /dev/null +++ b/scripts/data/data_augmentation.py @@ -0,0 +1,63 @@ +# augmentor.py + +import random +import re + +ALL_OPERANDS = ['+', '-', '*', '/', 'log', 'exp', 'cos', 'sqrt', 
'asin', 'sin', '**', 'tan', 'abs'] + +def extract_operators(expr_str): + ops = set() + if 'exp' in expr_str: ops.add('exp') + if 'log' in expr_str: ops.add('log') + if 'cos' in expr_str: ops.add('cos') + if re.search(r'(?<!a)sin', expr_str): ops.add('sin') # plain substring check would also match the 'sin' inside 'asin' + if '**' in expr_str: ops.add('**') + if 'sqrt' in expr_str: ops.add('sqrt') + if 'asin' in expr_str: ops.add('asin') + if 'tan' in expr_str: ops.add('tan') + if 'abs' in expr_str: ops.add('abs') + if '/' in expr_str: ops.add('/') + if re.search(r'(?<!\*)\*(?!\*)', expr_str): ops.add('*') # a lone '*', not one half of '**' + for op in ['+', '-']: + if op in expr_str: ops.add(op) + return list(ops) + +def infer_max_var(expr_str): + matches = re.findall(r'x_(\d+)', expr_str) + return max([int(m) for m in matches]) if matches else 1 + +def generate_expression_instructions(expr_str): + max_var = infer_max_var(expr_str) + + # Always include x_1..x_max_var, plus a random number of extra (unused) variables + variables = [f"x_{i}" for i in range(1, max_var + random.randint(1, max_var + 1))] + + used_ops = extract_operators(expr_str) + extra_ops = list(set(ALL_OPERANDS) - set(used_ops)) + added_ops = random.sample(extra_ops, random.randint(1, len(extra_ops))) if extra_ops else [] + all_ops = sorted(set(used_ops + added_ops)) + constants = ['C'] + wrapped_expr = f"{expr_str}" + + return { + "Simple_Instruct": f"Instruction: Generate a mathematical expression using variables {variables} and operands {all_ops} and {constants} as constant.\nExpression: {wrapped_expr}", + "Key_Value": f"Variables: {variables}\nOperands: {all_ops}\nConstant: {constants}\nExpression: {wrapped_expr}", + "Delimiter_Based": f"Input: Variables={variables}, Operands={all_ops}, Constant={constants}\nOutput: {wrapped_expr}", + "Minimalist": f"{variables} | {all_ops} | {constants} => {wrapped_expr}" + } + +def generate_expression_instruction(expr_str): + max_var = infer_max_var(expr_str) + + variables = [f"x_{i}" for i in range(1, max_var + random.randint(1, max_var + 1))] + + used_ops = extract_operators(expr_str) + extra_ops = list(set(ALL_OPERANDS) - set(used_ops)) + added_ops = random.sample(extra_ops, random.randint(1,
len(extra_ops))) if extra_ops else [] + all_ops = sorted(set(used_ops + added_ops)) + constants = ['C'] + wrapped_expr = f"{expr_str}" + + return { + #"instruction": f"{','.join(variables)}\n{', '.join(all_ops)}\n{', '.join(constants)}\n{wrapped_expr}" + "instruction": f"vars: {', '.join(variables)}\noper: {', '.join(all_ops)}\ncons: {', '.join(constants)}\nexpr: {wrapped_expr}" + } +#print(generate_expression_instruction("x_1 - (x_4 - C)*(x_3 + exp(C*x_2) + C)")) \ No newline at end of file diff --git a/scripts/data/data_cleaning.py b/scripts/data/data_cleaning.py new file mode 100644 index 0000000000000000000000000000000000000000..f722e96b9c80de8f5fb135b1c8537676954eac60 --- /dev/null +++ b/scripts/data/data_cleaning.py @@ -0,0 +1,88 @@ +import re +import pandas as pd +import numpy as np +from sympy import sympify, simplify, Eq +from sympy.parsing.sympy_parser import parse_expr +from sympy.core.sympify import SympifyError +from concurrent.futures import ProcessPoolExecutor +import multiprocessing as mp +import swifter +import random + +from joblib import Parallel, delayed + + +from tqdm.auto import tqdm + +def apply_chunk(chunk, func): + """Helper function to apply a function to a chunk of data.""" + return chunk.apply(func) + +def parallel_apply(series, func, n_jobs=None): + n_jobs = mp.cpu_count() if n_jobs is None else n_jobs + # Split into roughly equal chunks + chunks = np.array_split(series, n_jobs) + with mp.Pool(n_jobs) as pool: + # Use the helper function instead of a lambda + results = pool.starmap(apply_chunk, [(chunk, func) for chunk in chunks]) + # Concatenate the resulting Series + return pd.concat(results) + +def canonicalize_expr(expr, canonicalizer=simplify): + canon = canonicalizer(expr) + return (hash(canon), canon, expr) + +def replace_constants(equation): + # Match positive/negative floats and integers not part of variable names + pattern = r'(?)
+ +Usage: + python scripts/data/prepare_experiment_data.py \ + --dataset_repo_id augustocsc/sintetico_natural \ + --data_dir 700K \ + --data_column i_prompt_n \ + --output_base_dir ./data/experiments +""" + +import argparse +import json +import logging +import re +import sys +from pathlib import Path +from typing import Dict, List, Optional, Tuple + +from datasets import load_dataset, Dataset, DatasetDict +import pandas as pd + +logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s' +) +logger = logging.getLogger(__name__) + + +def parse_original_format(text: str) -> Optional[Dict]: + """ + Parse the original format into components. + + Original format: + vars: x_1, x_2 + oper: *, +, sin + cons: C + expr: C*sin(x_1) + x_2 + + Returns: + Dictionary with vars, ops, cons, expr or None if parsing fails + """ + result = { + 'vars': [], + 'ops': [], + 'cons': None, + 'expr': None, + 'raw_text': text + } + + lines = text.strip().split('\n') + + for line in lines: + line = line.strip() + if not line: + continue + + if line.startswith('vars:') or line.startswith('Variables:'): + # Extract variables + var_part = line.split(':', 1)[1].strip() + vars_list = [v.strip() for v in var_part.split(',') if v.strip()] + result['vars'] = vars_list + + elif line.startswith('oper:') or line.startswith('Operators:'): + # Extract operators + op_part = line.split(':', 1)[1].strip() + ops_list = [o.strip() for o in op_part.split(',') if o.strip()] + result['ops'] = ops_list + + elif line.startswith('cons:') or line.startswith('Constants:'): + # Extract constants + cons_part = line.split(':', 1)[1].strip() + result['cons'] = cons_part if cons_part else None + + elif line.startswith('expr:'): + # Extract expression - everything after 'expr:' + expr_part = line.split(':', 1)[1].strip() + # Clean expression: remove any markers or trailing content + expr_part = expr_part.split('<|')[0].strip() # Remove any existing markers + expr_part = 
expr_part.split('\n')[0].strip() # Remove newlines + result['expr'] = expr_part + + # Validate we got the essential parts + if not result['expr']: + return None + + return result + + +def convert_to_json_format(parsed: Dict) -> str: + """ + Convert parsed data to JSON format (EXP-A). + + Output format: + {"vars": ["x_1", "x_2"], "ops": ["*", "+", "sin"], "cons": "C", "expr": "C*sin(x_1) + x_2"} + """ + json_obj = { + 'vars': parsed['vars'], + 'ops': parsed['ops'], + } + + if parsed['cons']: + json_obj['cons'] = parsed['cons'] + + json_obj['expr'] = parsed['expr'] + + return json.dumps(json_obj, ensure_ascii=False) + + +def convert_to_eos_format(parsed: Dict) -> str: + """ + Convert parsed data to EOS token format (EXP-B). + + Output format: + vars: x_1, x_2 + oper: *, +, sin + cons: C + expr: C*sin(x_1) + x_2<|endoftext|> + """ + lines = [] + + if parsed['vars']: + lines.append(f"vars: {', '.join(parsed['vars'])}") + + if parsed['ops']: + lines.append(f"oper: {', '.join(parsed['ops'])}") + + if parsed['cons']: + lines.append(f"cons: {parsed['cons']}") + + # Add expression with EOS token + lines.append(f"expr: {parsed['expr']}<|endoftext|>") + + return '\n'.join(lines) + + +def process_example_json(example: Dict) -> Dict: + """Process a single example into JSON format.""" + text = example['text'] + parsed = parse_original_format(text) + + if parsed is None: + logger.warning(f"Failed to parse: {text[:100]}...") + return {'text': '', 'valid': False} + + json_text = convert_to_json_format(parsed) + return {'text': json_text, 'valid': True} + + +def process_example_eos(example: Dict) -> Dict: + """Process a single example into EOS format.""" + text = example['text'] + parsed = parse_original_format(text) + + if parsed is None: + logger.warning(f"Failed to parse: {text[:100]}...") + return {'text': '', 'valid': False} + + eos_text = convert_to_eos_format(parsed) + return {'text': eos_text, 'valid': True} + + +def validate_json_format(text: str) -> bool: + """Validate 
JSON format is correct.""" + try: + obj = json.loads(text) + return 'expr' in obj and 'vars' in obj and 'ops' in obj + except (json.JSONDecodeError, TypeError): + return False + + +def validate_eos_format(text: str) -> bool: + """Validate EOS format is correct.""" + return '<|endoftext|>' in text and 'expr:' in text + + +def process_dataset( + dataset_repo_id: str, + data_dir: str, + data_column: str, + output_base_dir: Path, + max_samples: Optional[int] = None +) -> Dict: + """ + Process the dataset into both formats. + + Args: + dataset_repo_id: HuggingFace dataset repository ID + data_dir: Subdirectory within the dataset + data_column: Column containing the text data + output_base_dir: Base directory for output + max_samples: Optional limit on number of samples (for testing) + + Returns: + Dictionary with processing statistics + """ + logger.info(f"Loading dataset from {dataset_repo_id}/{data_dir}...") + + # Load dataset + dataset = load_dataset( + dataset_repo_id, + data_dir=data_dir, + split=None + ) + + if not isinstance(dataset, dict): + dataset = {'train': dataset} + + logger.info(f"Loaded {len(dataset)} split(s): {list(dataset.keys())}") + + # Show sample + if 'train' in dataset: + sample = dataset['train'][0][data_column] + logger.info(f"\nSample ORIGINAL format:\n{sample}\n") + + # Create output directories + output_json = output_base_dir / 'exp_a_json' + output_eos = output_base_dir / 'exp_b_eos' + output_json.mkdir(parents=True, exist_ok=True) + output_eos.mkdir(parents=True, exist_ok=True) + + statistics = { + 'total': 0, + 'json_valid': 0, + 'eos_valid': 0, + 'json_invalid': 0, + 'eos_invalid': 0, + 'splits': {} + } + + for split_name, split_data in dataset.items(): + logger.info(f"\n{'='*60}") + logger.info(f"Processing {split_name} split ({len(split_data)} examples)") + logger.info('='*60) + + # Rename column if needed + if data_column != 'text': + split_data = split_data.rename_column(data_column, 'text') + + # Limit samples if specified + if max_samples and len(split_data) >
max_samples: + logger.info(f"Limiting to {max_samples} samples for testing") + split_data = split_data.select(range(max_samples)) + + statistics['total'] += len(split_data) + + # Process to JSON format + logger.info("\nConverting to JSON format (EXP-A)...") + json_data = split_data.map( + process_example_json, + desc=f"JSON format ({split_name})" + ) + + # Filter valid examples + json_valid = json_data.filter(lambda x: x['valid']) + json_invalid_count = len(json_data) - len(json_valid) + + logger.info(f"JSON format: {len(json_valid)}/{len(json_data)} valid") + + if len(json_valid) > 0: + logger.info(f"\nSample JSON format:\n{json_valid[0]['text']}\n") + + # Process to EOS format + logger.info("\nConverting to EOS format (EXP-B)...") + eos_data = split_data.map( + process_example_eos, + desc=f"EOS format ({split_name})" + ) + + # Filter valid examples + eos_valid = eos_data.filter(lambda x: x['valid']) + eos_invalid_count = len(eos_data) - len(eos_valid) + + logger.info(f"EOS format: {len(eos_valid)}/{len(eos_data)} valid") + + if len(eos_valid) > 0: + logger.info(f"\nSample EOS format:\n{eos_valid[0]['text']}\n") + + # Update statistics + statistics['json_valid'] += len(json_valid) + statistics['json_invalid'] += json_invalid_count + statistics['eos_valid'] += len(eos_valid) + statistics['eos_invalid'] += eos_invalid_count + statistics['splits'][split_name] = { + 'total': len(split_data), + 'json_valid': len(json_valid), + 'eos_valid': len(eos_valid) + } + + # Save JSON format + json_df = pd.DataFrame({'text': [ex['text'] for ex in json_valid]}) + json_file = output_json / f'{split_name}.csv' + json_df.to_csv(json_file, index=False) + logger.info(f"Saved JSON: {json_file} ({len(json_df)} examples)") + + # Save EOS format + eos_df = pd.DataFrame({'text': [ex['text'] for ex in eos_valid]}) + eos_file = output_eos / f'{split_name}.csv' + eos_df.to_csv(eos_file, index=False) + logger.info(f"Saved EOS: {eos_file} ({len(eos_df)} examples)") + + return statistics + + +def 
validate_output_files(output_base_dir: Path) -> Dict: + """ + Validate the generated output files. + + Returns: + Validation results dictionary + """ + logger.info("\n" + "="*60) + logger.info("VALIDATION OF OUTPUT FILES") + logger.info("="*60) + + results = { + 'exp_a_json': {'valid': True, 'issues': []}, + 'exp_b_eos': {'valid': True, 'issues': []} + } + + # Validate JSON format (EXP-A) + json_dir = output_base_dir / 'exp_a_json' + for csv_file in json_dir.glob('*.csv'): + logger.info(f"\nValidating {csv_file.name}...") + df = pd.read_csv(csv_file) + + valid_count = 0 + invalid_samples = [] + + for idx, row in df.iterrows(): + text = row['text'] + if validate_json_format(text): + valid_count += 1 + else: + if len(invalid_samples) < 3: + invalid_samples.append(text[:100]) + + rate = valid_count / len(df) * 100 if len(df) > 0 else 0 + logger.info(f" Valid: {valid_count}/{len(df)} ({rate:.1f}%)") + + if invalid_samples: + results['exp_a_json']['valid'] = False + results['exp_a_json']['issues'].extend(invalid_samples) + + # Validate EOS format (EXP-B) + eos_dir = output_base_dir / 'exp_b_eos' + for csv_file in eos_dir.glob('*.csv'): + logger.info(f"\nValidating {csv_file.name}...") + df = pd.read_csv(csv_file) + + valid_count = 0 + invalid_samples = [] + + for idx, row in df.iterrows(): + text = row['text'] + if validate_eos_format(text): + valid_count += 1 + else: + if len(invalid_samples) < 3: + invalid_samples.append(text[:100]) + + rate = valid_count / len(df) * 100 if len(df) > 0 else 0 + logger.info(f" Valid: {valid_count}/{len(df)} ({rate:.1f}%)") + + if invalid_samples: + results['exp_b_eos']['valid'] = False + results['exp_b_eos']['issues'].extend(invalid_samples) + + return results + + +def print_final_report(statistics: Dict, validation: Dict): + """Print final processing report.""" + logger.info("\n" + "="*60) + logger.info("FINAL REPORT") + logger.info("="*60) + + logger.info(f"\nTotal examples processed: {statistics['total']}") + + logger.info("\nEXP-A 
(JSON Format):") + logger.info(f" Valid: {statistics['json_valid']}") + logger.info(f" Invalid: {statistics['json_invalid']}") + json_rate = statistics['json_valid'] / statistics['total'] * 100 if statistics['total'] > 0 else 0 + logger.info(f" Success rate: {json_rate:.1f}%") + logger.info(f" Validation: {'PASS' if validation['exp_a_json']['valid'] else 'FAIL'}") + + logger.info("\nEXP-B (EOS Format):") + logger.info(f" Valid: {statistics['eos_valid']}") + logger.info(f" Invalid: {statistics['eos_invalid']}") + eos_rate = statistics['eos_valid'] / statistics['total'] * 100 if statistics['total'] > 0 else 0 + logger.info(f" Success rate: {eos_rate:.1f}%") + logger.info(f" Validation: {'PASS' if validation['exp_b_eos']['valid'] else 'FAIL'}") + + logger.info("\nPer-split breakdown:") + for split_name, split_stats in statistics['splits'].items(): + logger.info(f"\n {split_name.upper()}:") + logger.info(f" Total: {split_stats['total']}") + logger.info(f" JSON valid: {split_stats['json_valid']}") + logger.info(f" EOS valid: {split_stats['eos_valid']}") + + logger.info("\n" + "="*60) + + all_valid = validation['exp_a_json']['valid'] and validation['exp_b_eos']['valid'] + if all_valid: + logger.info("STATUS: ALL VALIDATIONS PASSED") + else: + logger.info("STATUS: SOME VALIDATIONS FAILED") + + logger.info("="*60) + + return all_valid + + +def main(): + parser = argparse.ArgumentParser( + description="Prepare experiment data in JSON and EOS formats" + ) + parser.add_argument( + "--dataset_repo_id", + type=str, + default="augustocsc/sintetico_natural", + help="HuggingFace dataset repository ID" + ) + parser.add_argument( + "--data_dir", + type=str, + default="700K", + help="Subdirectory within the dataset" + ) + parser.add_argument( + "--data_column", + type=str, + default="i_prompt_n", + help="Column containing text data" + ) + parser.add_argument( + "--output_base_dir", + type=str, + default="./data/experiments", + help="Base directory for output" + ) + 
parser.add_argument( + "--max_samples", + type=int, + default=None, + help="Maximum samples per split (for testing)" + ) + parser.add_argument( + "--skip_validation", + action="store_true", + help="Skip output file validation" + ) + + args = parser.parse_args() + + output_base_dir = Path(args.output_base_dir) + + logger.info("="*60) + logger.info("EXPERIMENT DATA PREPARATION") + logger.info("="*60) + logger.info(f"Dataset: {args.dataset_repo_id}/{args.data_dir}") + logger.info(f"Column: {args.data_column}") + logger.info(f"Output: {output_base_dir}") + if args.max_samples: + logger.info(f"Max samples: {args.max_samples}") + logger.info("="*60) + + try: + # Process dataset + statistics = process_dataset( + dataset_repo_id=args.dataset_repo_id, + data_dir=args.data_dir, + data_column=args.data_column, + output_base_dir=output_base_dir, + max_samples=args.max_samples + ) + + # Validate output + if not args.skip_validation: + validation = validate_output_files(output_base_dir) + else: + validation = { + 'exp_a_json': {'valid': True, 'issues': []}, + 'exp_b_eos': {'valid': True, 'issues': []} + } + + # Print report + all_valid = print_final_report(statistics, validation) + + if all_valid: + logger.info("\nData preparation completed successfully!") + logger.info(f"\nOutput directories:") + logger.info(f" EXP-A (JSON): {output_base_dir / 'exp_a_json'}") + logger.info(f" EXP-B (EOS): {output_base_dir / 'exp_b_eos'}") + sys.exit(0) + else: + logger.error("\nData preparation completed with validation errors!") + sys.exit(1) + + except Exception as e: + logger.error(f"\nFailed to prepare data: {e}") + import traceback + traceback.print_exc() + sys.exit(1) + + +if __name__ == "__main__": + main() diff --git a/scripts/data/prepare_training_data_fixed.py b/scripts/data/prepare_training_data_fixed.py new file mode 100644 index 0000000000000000000000000000000000000000..e035a3bbd967e33a0d49f50bbdf7f83f7e33d548 --- /dev/null +++ b/scripts/data/prepare_training_data_fixed.py @@ -0,0 
+1,408 @@ +""" +Data preparation script that adds proper <|endofex|> markers to training data. + +This script processes the existing dataset and wraps expressions with end-of-expression +markers so the model learns to stop generation correctly. + +Usage: + python scripts/data/prepare_training_data_fixed.py \ + --dataset_repo_id augustocsc/sintetico_natural \ + --data_dir 700K \ + --data_column i_prompt_n \ + --output_dir ./data/processed/700K_fixed \ + --validate +""" + +import argparse +import logging +import os +import sys +from pathlib import Path +from typing import Dict, Tuple + +from datasets import load_dataset, Dataset, DatasetDict +import pandas as pd + +logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s' +) +logger = logging.getLogger(__name__) + + +def add_end_markers(example: Dict) -> Dict: + """ + Add end-of-expression markers to training data. + + This function: + 1. Locates the expression in the text (after 'expr:') + 2. Finds the natural end boundary (before 'vars:', newlines, etc.) + 3. Inserts <|endofex|> marker at the end + 4. 
Preserves any remaining content after the marker + + Args: + example: Dictionary containing 'text' field with training data + + Returns: + Dictionary with modified 'text' field containing end markers + """ + text = example['text'] + + # Check if expression part exists + if 'expr:' not in text: + logger.warning(f"No 'expr:' found in text: {text[:100]}...") + return {'text': text} + + # Split at expr: and add marker after expression + parts = text.split('expr:', 1) + if len(parts) != 2: + logger.warning(f"Unexpected format in text: {text[:100]}...") + return {'text': text} + + prefix = parts[0] + expression_part = parts[1] + + # Check if marker already exists + if '<|endofex|>' in expression_part: + logger.debug("Marker already present, skipping") + return {'text': text} + + # Find natural end of expression (before vars:, newline, etc) + end_idx = len(expression_part) + boundaries = ['\nvars:', '\nVariables:', '\n\n', '\nvar:', '\nVariable:'] + + for boundary in boundaries: + idx = expression_part.find(boundary) + if idx != -1 and idx < end_idx: + end_idx = idx + + # Insert marker + clean_expr = expression_part[:end_idx].strip() + remaining = expression_part[end_idx:] + + # Reconstruct text with marker + new_text = f"{prefix}expr: {clean_expr}<|endofex|>{remaining}" + + return {'text': new_text} + + +def validate_markers(example: Dict) -> Dict: + """ + Validate that markers are properly present in the text. 
+ + Args: + example: Dictionary containing 'text' field + + Returns: + Dictionary with validation metadata + """ + text = example['text'] + start_count = text.count('<|startofex|>') + end_count = text.count('<|endofex|>') + + # Valid if we have at least one end marker + # (start marker is optional depending on format) + valid = end_count > 0 + + return { + 'valid': valid, + 'start_count': start_count, + 'end_count': end_count, + 'text': text + } + + +def process_dataset( + dataset_repo_id: str, + data_dir: str, + data_column: str, + output_dir: Path, + validate: bool = True +) -> Tuple[DatasetDict, Dict]: + """ + Process the dataset by adding end markers to all splits. + + Args: + dataset_repo_id: HuggingFace dataset repository ID + data_dir: Subdirectory within the dataset (e.g., '700K') + data_column: Column to use for training data + output_dir: Directory to save processed dataset + validate: Whether to run validation after processing + + Returns: + Tuple of (processed_dataset, statistics) + """ + logger.info(f"Loading dataset from {dataset_repo_id}/{data_dir}...") + + try: + # Load dataset from HuggingFace Hub + dataset = load_dataset( + dataset_repo_id, + data_dir=data_dir, + split=None # Load all splits + ) + + if not isinstance(dataset, dict): + # If single split, convert to dict + dataset = {'train': dataset} + + logger.info(f"Loaded {len(dataset)} split(s): {list(dataset.keys())}") + + # Show sample before processing + if 'train' in dataset and len(dataset['train']) > 0: + logger.info(f"\nSample BEFORE processing:") + logger.info(f"{dataset['train'][0][data_column][:200]}...") + + except Exception as e: + logger.error(f"Failed to load dataset: {e}") + raise + + # Process each split + processed_dataset = {} + statistics = { + 'total_examples': 0, + 'processed_examples': 0, + 'already_marked': 0, + 'splits': {} + } + + for split_name, split_data in dataset.items(): + logger.info(f"\nProcessing {split_name} split ({len(split_data)} examples)...") + + # Rename 
column to 'text' if needed + if data_column != 'text': + split_data = split_data.rename_column(data_column, 'text') + + # Count examples that already have markers + already_marked = sum(1 for ex in split_data if '<|endofex|>' in ex['text']) + statistics['already_marked'] += already_marked + + if already_marked > 0: + logger.info(f"Found {already_marked} examples already with markers") + + # Apply marker addition + processed_split = split_data.map( + add_end_markers, + desc=f"Adding markers to {split_name}" + ) + + processed_dataset[split_name] = processed_split + + # Update statistics + split_stats = { + 'total': len(split_data), + 'processed': len(processed_split), + 'already_marked': already_marked + } + statistics['splits'][split_name] = split_stats + statistics['total_examples'] += len(split_data) + statistics['processed_examples'] += len(processed_split) + + # Show sample after processing + if len(processed_split) > 0: + logger.info(f"\nSample AFTER processing:") + logger.info(f"{processed_split[0]['text'][:200]}...") + + # Validate if requested + if validate: + logger.info("\n" + "="*60) + logger.info("VALIDATION") + logger.info("="*60) + + for split_name, split_data in processed_dataset.items(): + logger.info(f"\nValidating {split_name} split...") + + # Apply validation + validated = split_data.map(validate_markers) + + # Count valid examples + valid_count = sum(validated['valid']) + invalid_count = len(validated) - valid_count + + valid_rate = valid_count / len(validated) * 100 + + logger.info(f"Valid examples: {valid_count}/{len(validated)} ({valid_rate:.1f}%)") + + if invalid_count > 0: + logger.warning(f"Found {invalid_count} invalid examples!") + + # Show first few invalid examples + invalid_examples = [ + ex for ex in validated if not ex['valid'] + ][:3] + + for i, ex in enumerate(invalid_examples): + logger.warning(f"\nInvalid example {i+1}:") + logger.warning(f"Start markers: {ex['start_count']}") + logger.warning(f"End markers: {ex['end_count']}") + 
logger.warning(f"Text: {ex['text'][:200]}...") + + # Update statistics + statistics['splits'][split_name]['valid'] = valid_count + statistics['splits'][split_name]['invalid'] = invalid_count + statistics['splits'][split_name]['valid_rate'] = valid_rate + + # Convert back to DatasetDict + processed_dataset = DatasetDict(processed_dataset) + + return processed_dataset, statistics + + +def save_dataset(dataset: DatasetDict, output_dir: Path, data_dir: str): + """ + Save processed dataset to local directory. + + Args: + dataset: Processed dataset to save + output_dir: Directory to save to + data_dir: Original data directory name (for filename) + """ + output_dir.mkdir(parents=True, exist_ok=True) + + logger.info(f"\nSaving processed dataset to {output_dir}...") + + for split_name, split_data in dataset.items(): + # Save as CSV + output_file = output_dir / f"{split_name}_{data_dir}.csv" + + # Convert to pandas and save + df = split_data.to_pandas() + df.to_csv(output_file, index=False) + + logger.info(f"Saved {split_name} split: {output_file} ({len(df)} examples)") + + logger.info("Dataset saved successfully!") + + +def print_statistics(statistics: Dict): + """ + Print processing statistics in a formatted table. 
+ + Args: + statistics: Dictionary containing processing statistics + """ + logger.info("\n" + "="*60) + logger.info("PROCESSING STATISTICS") + logger.info("="*60) + + logger.info(f"\nTotal examples: {statistics['total_examples']}") + logger.info(f"Processed examples: {statistics['processed_examples']}") + logger.info(f"Already marked: {statistics['already_marked']}") + + logger.info("\nPer-split statistics:") + logger.info("-"*60) + + for split_name, split_stats in statistics['splits'].items(): + logger.info(f"\n{split_name.upper()}:") + logger.info(f" Total: {split_stats['total']}") + logger.info(f" Processed: {split_stats['processed']}") + logger.info(f" Already marked: {split_stats.get('already_marked', 0)}") + + if 'valid' in split_stats: + logger.info(f" Valid: {split_stats['valid']}") + logger.info(f" Invalid: {split_stats['invalid']}") + logger.info(f" Valid rate: {split_stats['valid_rate']:.1f}%") + + logger.info("="*60) + + +def main(): + parser = argparse.ArgumentParser( + description="Prepare training data with proper end-of-expression markers" + ) + parser.add_argument( + "--dataset_repo_id", + type=str, + required=True, + help="HuggingFace dataset repository ID" + ) + parser.add_argument( + "--data_dir", + type=str, + required=True, + help="Subdirectory within the dataset (e.g., '700K')" + ) + parser.add_argument( + "--data_column", + type=str, + required=True, + help="Column to use for training data (e.g., 'i_prompt_n')" + ) + parser.add_argument( + "--output_dir", + type=str, + required=True, + help="Directory to save processed dataset" + ) + parser.add_argument( + "--validate", + action="store_true", + help="Run validation after processing" + ) + parser.add_argument( + "--push_to_hub", + action="store_true", + help="Push processed dataset to HuggingFace Hub" + ) + parser.add_argument( + "--hub_repo_id", + type=str, + default=None, + help="HuggingFace repository ID for pushing (if --push_to_hub)" + ) + + args = parser.parse_args() + + # Convert 
output_dir to Path + output_dir = Path(args.output_dir) + + # Process dataset + try: + processed_dataset, statistics = process_dataset( + dataset_repo_id=args.dataset_repo_id, + data_dir=args.data_dir, + data_column=args.data_column, + output_dir=output_dir, + validate=args.validate + ) + + # Print statistics + print_statistics(statistics) + + # Save to local directory + save_dataset(processed_dataset, output_dir, args.data_dir) + + # Push to Hub if requested + if args.push_to_hub: + if not args.hub_repo_id: + logger.error("--hub_repo_id required when using --push_to_hub") + sys.exit(1) + + logger.info(f"\nPushing to HuggingFace Hub: {args.hub_repo_id}") + processed_dataset.push_to_hub(args.hub_repo_id) + logger.info("Successfully pushed to Hub!") + + # Check if any validation failed + if args.validate: + all_valid = all( + split_stats.get('invalid', 0) == 0 + for split_stats in statistics['splits'].values() + ) + + if not all_valid: + logger.error("\n⚠️ Some examples failed validation!") + sys.exit(1) + else: + logger.info("\n✅ All examples validated successfully!") + + logger.info("\n✅ Data preparation complete!") + + except Exception as e: + logger.error(f"\n❌ Error during processing: {e}") + import traceback + traceback.print_exc() + sys.exit(1) + + +if __name__ == "__main__": + main() diff --git a/scripts/evaluate.py b/scripts/evaluate.py new file mode 100644 index 0000000000000000000000000000000000000000..570277552c5fcb177c553ff41e37efa515df105f --- /dev/null +++ b/scripts/evaluate.py @@ -0,0 +1,432 @@ +# Custom evaluation script for trained models +# Seriguela project - evaluation of generated symbolic expressions + +import argparse +import json +import os +import sys +import re +from collections import Counter +from datetime import datetime + +import numpy as np +import torch +from datasets import load_dataset +from transformers import AutoModelForCausalLM, AutoTokenizer +from peft import PeftModel +from tqdm import tqdm + +# Add parent directory
to path for imports +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +from classes.expression import Expression + + +def parse_args(): + parser = argparse.ArgumentParser(description="Evaluate a trained model on expression generation") + parser.add_argument("--model_path", type=str, required=True, + help="Path to model (local or HuggingFace Hub)") + parser.add_argument("--base_model", type=str, default=None, + help="Base model for PEFT (if model_path is adapter)") + parser.add_argument("--dataset_repo_id", type=str, default="augustocsc/sintetico_natural", + help="HuggingFace dataset repository") + parser.add_argument("--data_dir", type=str, default="700K", + help="Data directory within dataset") + parser.add_argument("--data_column", type=str, default="i_prompt_n", + help="Column name for prompts (i_prompt_n for infix, p_prompt_n for prefix)") + parser.add_argument("--num_samples", type=int, default=500, + help="Number of samples to evaluate") + parser.add_argument("--num_generations", type=int, default=1, + help="Number of generations per prompt") + parser.add_argument("--max_new_tokens", type=int, default=128, + help="Maximum new tokens to generate") + parser.add_argument("--temperature", type=float, default=0.7, + help="Sampling temperature") + parser.add_argument("--top_p", type=float, default=0.9, + help="Top-p sampling parameter") + parser.add_argument("--output_dir", type=str, default="./evaluation_results", + help="Directory to save evaluation results") + parser.add_argument("--seed", type=int, default=42, + help="Random seed") + parser.add_argument("--device", type=str, default="auto", + help="Device to use (auto, cuda, cpu)") + return parser.parse_args() + + +def extract_expression_from_output(output: str, is_prefix: bool = False) -> str: + """Extract the expression from model output.""" + # Try marker-based first + start_marker = "<|startofex|>" + end_marker = "<|endofex|>" + + if start_marker in output and end_marker in 
output:
+        start_idx = output.find(start_marker) + len(start_marker)
+        end_idx = output.find(end_marker)
+        if start_idx < end_idx:
+            return output[start_idx:end_idx].strip()
+
+    # Fallback: Extract first complete expression after start marker
+    if start_marker in output:
+        start_idx = output.find(start_marker) + len(start_marker)
+        remaining = output[start_idx:].strip()
+
+        # Split at common boundaries
+        for boundary in ["\nvars:", "\nVariables:", "\nOperators:", "\n\n", "<|endoftext|>"]:
+            if boundary in remaining:
+                remaining = remaining.split(boundary)[0].strip()
+                break
+
+        # Remove any trailing incomplete text - take just the first line
+        remaining = remaining.split("\n")[0].strip()
+
+        # Limit length if unreasonably long
+        if len(remaining) > 150:
+            remaining = remaining[:150]
+
+        return remaining
+
+    # Last resort: look for "expr:" or "Expression:" pattern
+    match = re.search(r'(?:expr|Expression):\s*(.+?)(?:\n|$)', output, re.IGNORECASE)
+    if match:
+        return match.group(1).strip()
+
+    # Give up: return first line, limited length
+    first_line = output.strip().split("\n")[0]
+    return first_line[:100] if len(first_line) > 100 else first_line
+
+
+def validate_expression(expr_str: str, is_prefix: bool = False) -> dict:
+    """Validate if expression is syntactically correct."""
+    result = {
+        "valid": False,
+        "parseable": False,
+        "error": None,
+        "expression_obj": None
+    }
+
+    if not expr_str or expr_str.strip() == "":
+        result["error"] = "Empty expression"
+        return result
+
+    try:
+        expr_obj = Expression(expr_str, is_prefix=is_prefix)
+        result["parseable"] = True
+        result["valid"] = True
+        result["expression_obj"] = expr_obj
+    except Exception as e:
+        result["error"] = str(e)
+
+    return result
+
+
+def check_prompt_adherence(expr_str: str, prompt: str, is_prefix: bool = False) -> dict:
+    """Check if expression adheres to prompt constraints."""
+    result = {
+        "uses_allowed_vars": False,
+        "uses_allowed_ops": False,
+        "all_constraints_met": False
+    }
+
+    # Extract allowed vars and ops from prompt
+    # Typical prompt format: "Variables: x_1, x_2, x_3\nOperators: +, -, *, sin\n..."
+
+    # Extract variables from prompt
+    var_match = re.search(r"Variables?:\s*([^\n]+)", prompt, re.IGNORECASE)
+    allowed_vars = set()
+    if var_match:
+        var_str = var_match.group(1)
+        # Match patterns like x_1, x_2, etc.
+        allowed_vars = set(re.findall(r"x_\d+", var_str))
+
+    # Extract operators from prompt
+    op_match = re.search(r"Operators?:\s*([^\n]+)", prompt, re.IGNORECASE)
+    allowed_ops = set()
+    if op_match:
+        op_str = op_match.group(1)
+        # Common operators
+        ops = ['+', '-', '*', '/', '**', 'sin', 'cos', 'tan', 'log', 'sqrt', 'exp']
+        for op in ops:
+            if op in op_str:
+                allowed_ops.add(op)
+
+    # Check variables in expression
+    expr_vars = set(re.findall(r"x_\d+", expr_str))
+    if allowed_vars:
+        result["uses_allowed_vars"] = expr_vars.issubset(allowed_vars)
+    else:
+        result["uses_allowed_vars"] = True  # No constraint specified
+
+    # Check operators (simplified check)
+    result["uses_allowed_ops"] = True  # Default to true if no ops specified
+    if allowed_ops:
+        # This is a simplified check - would need more sophisticated parsing for accuracy
+        for op in ['sin', 'cos', 'tan', 'log', 'sqrt', 'exp']:
+            if op in expr_str and op not in allowed_ops:
+                result["uses_allowed_ops"] = False
+                break
+
+    result["all_constraints_met"] = result["uses_allowed_vars"] and result["uses_allowed_ops"]
+
+    return result
+
+
+def load_model_and_tokenizer(model_path: str, base_model: str = None, device: str = "auto"):
+    """Load model and tokenizer."""
+    print(f"Loading model from: {model_path}")
+
+    # Determine device
+    if device == "auto":
+        device = "cuda" if torch.cuda.is_available() else "cpu"
+
+    # Load tokenizer
+    tokenizer = AutoTokenizer.from_pretrained(model_path)
+    if tokenizer.pad_token is None:
+        tokenizer.pad_token = tokenizer.eos_token
+
+    # Check if this is a PEFT model
+    is_peft = os.path.exists(os.path.join(model_path, "adapter_config.json")) if os.path.isdir(model_path) else False
+
+    if is_peft or base_model:
+        # Load base model first
+        base = base_model or "gpt2"
+        print(f"Loading base model: {base}")
+        model = AutoModelForCausalLM.from_pretrained(base)
+        model.resize_token_embeddings(len(tokenizer))
+
+        # Load PEFT adapter
+        print("Loading PEFT adapter...")
+        model = PeftModel.from_pretrained(model, model_path)
+        model = model.merge_and_unload()  # Merge for faster inference
+    else:
+        # Load full model
+        model = AutoModelForCausalLM.from_pretrained(model_path)
+        model.resize_token_embeddings(len(tokenizer))
+
+    model = model.to(device)
+    model.eval()
+
+    return model, tokenizer, device
+
+
+def generate_expression(model, tokenizer, prompt: str, device: str,
+                        max_new_tokens: int = 128, temperature: float = 0.7,
+                        top_p: float = 0.9, num_return_sequences: int = 1):
+    """Generate expression(s) from prompt."""
+    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
+    inputs = {k: v.to(device) for k, v in inputs.items()}
+
+    with torch.no_grad():
+        outputs = model.generate(
+            **inputs,
+            max_new_tokens=max_new_tokens,
+            temperature=temperature,
+            top_p=top_p,
+            do_sample=True,
+            num_return_sequences=num_return_sequences,
+            pad_token_id=tokenizer.pad_token_id,
+            eos_token_id=tokenizer.eos_token_id,
+        )
+
+    generated = tokenizer.batch_decode(outputs, skip_special_tokens=False)
+    return generated
+
+
+def evaluate_model(args):
+    """Main evaluation function."""
+    # Set seed
+    torch.manual_seed(args.seed)
+    np.random.seed(args.seed)
+
+    # Load model
+    model, tokenizer, device = load_model_and_tokenizer(
+        args.model_path, args.base_model, args.device
+    )
+
+    # Load dataset
+    print(f"Loading dataset: {args.dataset_repo_id}/{args.data_dir}")
+    try:
+        dataset = load_dataset(
+            args.dataset_repo_id,
+            data_files={
+                "test": f"{args.data_dir}/test_{args.data_dir}.csv"
+            }
+        )["test"]
+    except Exception as e:
+        print(f"Error loading test set, trying validation: {e}")
+        dataset = load_dataset(
+            args.dataset_repo_id,
+            data_files={
+                "validation": f"{args.data_dir}/val_{args.data_dir}.csv"
+            }
+        )["validation"]
+
+    # Sample if needed
+    if len(dataset) > args.num_samples:
+        indices = np.random.choice(len(dataset), args.num_samples, replace=False)
+        dataset = dataset.select(indices)
+
+    print(f"Evaluating on {len(dataset)} samples...")
+
+    # Determine if prefix or infix
+    is_prefix = args.data_column.startswith("p_")
+
+    # Evaluation metrics
+    metrics = {
+        "total_samples": 0,
+        "total_generations": 0,
+        "valid_expressions": 0,
+        "parseable_expressions": 0,
+        "uses_allowed_vars": 0,
+        "uses_allowed_ops": 0,
+        "all_constraints_met": 0,
+        "unique_expressions": set(),
+        "expression_lengths": [],
+        "errors": Counter(),
+    }
+
+    results = []
+
+    # Generate and evaluate
+    for idx, sample in enumerate(tqdm(dataset, desc="Evaluating")):
+        prompt = sample[args.data_column]
+
+        # Extract just the prompt part (before the expression)
+        # Typically the prompt ends before <|startofex|>
+        if "<|startofex|>" in prompt:
+            prompt_only = prompt.split("<|startofex|>")[0] + "<|startofex|>"
+        else:
+            prompt_only = prompt
+
+        generations = generate_expression(
+            model, tokenizer, prompt_only, device,
+            max_new_tokens=args.max_new_tokens,
+            temperature=args.temperature,
+            top_p=args.top_p,
+            num_return_sequences=args.num_generations
+        )
+
+        metrics["total_samples"] += 1
+
+        for gen_output in generations:
+            metrics["total_generations"] += 1
+
+            # Extract expression
+            expr_str = extract_expression_from_output(gen_output, is_prefix)
+
+            # Validate
+            validation = validate_expression(expr_str, is_prefix)
+
+            # Check adherence
+            adherence = check_prompt_adherence(expr_str, prompt_only, is_prefix)
+
+            # Update metrics
+            if validation["valid"]:
+                metrics["valid_expressions"] += 1
+            if validation["parseable"]:
+                metrics["parseable_expressions"] += 1
+            metrics["unique_expressions"].add(expr_str)
+            metrics["expression_lengths"].append(len(expr_str))
+            if validation["error"]:
+                metrics["errors"][validation["error"][:50]] += 1
+
+            if adherence["uses_allowed_vars"]:
+                metrics["uses_allowed_vars"] += 1
+            if adherence["uses_allowed_ops"]:
+                metrics["uses_allowed_ops"] += 1
+            if adherence["all_constraints_met"]:
+                metrics["all_constraints_met"] += 1
+
+            results.append({
+                "sample_idx": idx,
+                "prompt": prompt_only[:200],  # Truncate for storage
+                "generated_output": gen_output[:500],
+                "extracted_expression": expr_str,
+                "valid": validation["valid"],
+                "parseable": validation["parseable"],
+                "error": validation["error"],
+                "uses_allowed_vars": adherence["uses_allowed_vars"],
+                "uses_allowed_ops": adherence["uses_allowed_ops"],
+            })
+
+    # Calculate final metrics
+    total_gen = metrics["total_generations"]
+    final_metrics = {
+        "model_path": args.model_path,
+        "dataset": f"{args.dataset_repo_id}/{args.data_dir}",
+        "data_column": args.data_column,
+        "is_prefix": is_prefix,
+        "num_samples": metrics["total_samples"],
+        "num_generations": total_gen,
+        "temperature": args.temperature,
+        "top_p": args.top_p,
+
+        # Validity metrics
+        "valid_rate": metrics["valid_expressions"] / total_gen if total_gen > 0 else 0,
+        "parseable_rate": metrics["parseable_expressions"] / total_gen if total_gen > 0 else 0,
+
+        # Adherence metrics
+        "uses_allowed_vars_rate": metrics["uses_allowed_vars"] / total_gen if total_gen > 0 else 0,
+        "uses_allowed_ops_rate": metrics["uses_allowed_ops"] / total_gen if total_gen > 0 else 0,
+        "constraints_met_rate": metrics["all_constraints_met"] / total_gen if total_gen > 0 else 0,
+
+        # Diversity metrics
+        "unique_expressions": len(metrics["unique_expressions"]),
+        "diversity_rate": len(metrics["unique_expressions"]) / total_gen if total_gen > 0 else 0,
+        "avg_expression_length": np.mean(metrics["expression_lengths"]) if metrics["expression_lengths"] else 0,
+
+        # Error distribution (top 10)
+        "top_errors": dict(metrics["errors"].most_common(10)),
+
+        "timestamp": datetime.now().isoformat(),
+    }
+
+    # Print results
+    print("\n" + "="*60)
+    print("EVALUATION RESULTS")
+    print("="*60)
+    print(f"Model: {args.model_path}")
+    print(f"Dataset: {args.dataset_repo_id}/{args.data_dir}")
+    print(f"Format: {'Prefix' if is_prefix else 'Infix'}")
+    print("-"*60)
+    print(f"Total samples: {metrics['total_samples']}")
+    print(f"Total generations: {total_gen}")
+    print("-"*60)
+    print("VALIDITY METRICS:")
+    print(f"  Valid rate: {final_metrics['valid_rate']:.2%}")
+    print(f"  Parseable rate: {final_metrics['parseable_rate']:.2%}")
+    print("-"*60)
+    print("ADHERENCE METRICS:")
+    print(f"  Uses allowed vars: {final_metrics['uses_allowed_vars_rate']:.2%}")
+    print(f"  Uses allowed ops: {final_metrics['uses_allowed_ops_rate']:.2%}")
+    print(f"  All constraints met: {final_metrics['constraints_met_rate']:.2%}")
+    print("-"*60)
+    print("DIVERSITY METRICS:")
+    print(f"  Unique expressions: {final_metrics['unique_expressions']}")
+    print(f"  Diversity rate: {final_metrics['diversity_rate']:.2%}")
+    print(f"  Avg expression length: {final_metrics['avg_expression_length']:.1f}")
+    print("="*60)
+
+    # Save results
+    os.makedirs(args.output_dir, exist_ok=True)
+
+    # Create filename from model path
+    model_name = args.model_path.replace("/", "_").replace("\\", "_")
+    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+
+    # Save metrics
+    metrics_file = os.path.join(args.output_dir, f"metrics_{model_name}_{timestamp}.json")
+    with open(metrics_file, "w") as f:
+        json.dump(final_metrics, f, indent=2)
+    print(f"\nMetrics saved to: {metrics_file}")
+
+    # Save detailed results
+    results_file = os.path.join(args.output_dir, f"results_{model_name}_{timestamp}.json")
+    with open(results_file, "w") as f:
+        json.dump(results, f, indent=2)
+    print(f"Detailed results saved to: {results_file}")
+
+    return final_metrics
+
+
+if __name__ == "__main__":
+    args = parse_args()
+    evaluate_model(args)
diff --git a/scripts/evaluate_experiments.py b/scripts/evaluate_experiments.py
new file mode 100644
index 0000000000000000000000000000000000000000..a891f39534f7da7d06363c67d7e3a4d638a2c90d
--- /dev/null
+++ b/scripts/evaluate_experiments.py
@@ -0,0 +1,487 @@
+#!/usr/bin/env python3
+"""
+Evaluation script for expression generation experiments.
+
+Evaluates trained models on:
+1. Valid Rate: % expressions that can be parsed and evaluated
+2. Stopping Rate: % that stop correctly (contain end marker)
+3. Symbol Accuracy: % that use only symbols from prompt
+4. Garbage Rate: % with non-mathematical tokens
+
+Usage:
+    python scripts/evaluate_experiments.py \
+        --model_path ./output/exp_a_json \
+        --experiment_type json \
+        --num_samples 200 \
+        --output_file ./results/exp_a_results.json
+"""
+
+import argparse
+import json
+import logging
+import os
+import re
+import sys
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple
+
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
+from peft import PeftModel
+
+# Add parent directory to path
+sys.path.insert(0, str(Path(__file__).parent.parent))
+from classes.expression import Expression
+
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s - %(levelname)s - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+
+# Garbage words that indicate model failure
+GARBAGE_WORDS = [
+    "Buyable", "Instore", "Online", "Stockholm", "Muslims", "crash",
+    "Berman", "expressed", "fluent", "Avenger", "repositories",
+    "GREEN", "intuition", "records", "xstatics", "xid", "sinmod",
+    "Pressure", "XP", "Variables", "Operators", "Constants"
+]
+
+
+class ExpressionStoppingCriteria(StoppingCriteria):
+    """Stop generation when end marker is detected."""
+
+    def __init__(self, tokenizer, stop_sequences: List[str]):
+        self.tokenizer = tokenizer
+        self.stop_ids = []
+        for seq in stop_sequences:
+            ids = tokenizer.encode(seq, add_special_tokens=False)
+            if ids:
+                self.stop_ids.append(ids)
+
+    def __call__(self, input_ids, scores, **kwargs) -> bool:
+        for stop_ids in self.stop_ids:
+            if len(input_ids[0]) >= len(stop_ids):
+                if input_ids[0][-len(stop_ids):].tolist() == stop_ids:
+                    return True
+        return False
+
+
+def load_model(model_path: str, experiment_type: str) -> Tuple:
+    """Load trained model and tokenizer."""
+    logger.info(f"Loading model from {model_path}")
+
+    # Load experiment info
+    exp_info_path = os.path.join(model_path, "experiment_info.json")
+    if os.path.exists(exp_info_path):
+        with open(exp_info_path) as f:
+            exp_info = json.load(f)
+        logger.info(f"Experiment info: {exp_info}")
+        use_native_eos = exp_info.get("use_native_eos", False)
+    else:
+        use_native_eos = (experiment_type == "eos")
+        logger.warning("No experiment_info.json found, inferring from experiment_type")
+
+    # Load base model
+    logger.info("Loading base GPT-2...")
+    model = AutoModelForCausalLM.from_pretrained(
+        "gpt2",
+        torch_dtype=torch.float16,
+        device_map="auto"
+    )
+
+    # Load tokenizer
+    tokenizer = AutoTokenizer.from_pretrained("gpt2")
+
+    # Add special tokens if not using native EOS
+    if not use_native_eos:
+        tokenizer.add_special_tokens({
+            "additional_special_tokens": ["<|startofex|>", "<|endofex|>"]
+        })
+        model.resize_token_embeddings(len(tokenizer))
+
+    # Load adapter
+    logger.info("Loading adapter...")
+    model = PeftModel.from_pretrained(model, model_path)
+    model = model.merge_and_unload()
+    model.eval()
+
+    return model, tokenizer, use_native_eos
+
+
+def create_prompt_json(vars_list: List[str], ops_list: List[str], cons: str = "C") -> str:
+    """Create JSON format prompt for generation."""
+    prompt = {
+        "vars": vars_list,
+        "ops": ops_list,
+        "cons": cons,
+        "expr": ""
+    }
+    # Return partial JSON to let model complete
+    prompt_str = json.dumps(prompt, ensure_ascii=False)
+    # Remove closing part: , "expr": ""}
+    prompt_str = prompt_str.rsplit('"expr":', 1)[0] + '"expr": "'
+    return prompt_str
+
+
+def create_prompt_eos(vars_list: List[str], ops_list: List[str], cons: str = "C") -> str:
+    """Create EOS format prompt for generation."""
+    lines = [
+        f"vars: {', '.join(vars_list)}",
+        f"oper: {', '.join(ops_list)}",
+        f"cons: {cons}",
+        "expr: "
+    ]
+    return "\n".join(lines)
+
+
+def extract_expression_json(output: str) -> Optional[str]:
+    """Extract expression from JSON format output."""
+    try:
+        # Try to extract from complete JSON
+        if output.strip().endswith("}"):
+            obj = json.loads(output)
+            return obj.get("expr", None)
+    except Exception:
+        pass
+
+    # Try to extract expression between "expr": " and "
+    match = re.search(r'"expr":\s*"([^"]*)"', output)
+    if match:
+        return match.group(1)
+
+    # Try to extract after "expr": "
+    match = re.search(r'"expr":\s*"([^"]*)', output)
+    if match:
+        return match.group(1)
+
+    return None
+
+
+def extract_expression_eos(output: str, end_marker: str) -> Optional[str]:
+    """Extract expression from EOS format output."""
+    if "expr:" not in output:
+        return None
+
+    # Get everything after expr:
+    expr_part = output.split("expr:")[-1].strip()
+
+    # Remove end marker
+    if end_marker in expr_part:
+        expr_part = expr_part.split(end_marker)[0].strip()
+
+    # Remove any trailing garbage
+    expr_part = expr_part.split("\n")[0].strip()
+
+    return expr_part if expr_part else None
+
+
+def validate_expression(expr_str: str, allowed_vars: set, allowed_ops: set) -> Dict:
+    """Validate an expression for correctness."""
+    result = {
+        "raw": expr_str,
+        "is_valid": False,
+        "is_parseable": False,
+        "uses_correct_symbols": False,
+        "has_garbage": False,
+        "error": None
+    }
+
+    if not expr_str or not expr_str.strip():
+        result["error"] = "Empty expression"
+        return result
+
+    # Check for garbage words
+    for word in GARBAGE_WORDS:
+        if word.lower() in expr_str.lower():
+            result["has_garbage"] = True
+            result["error"] = f"Contains garbage: {word}"
+            return result
+
+    # Try to parse expression
+    try:
+        expr = Expression(expr_str, is_prefix=False)
+        result["is_parseable"] = True
+
+        # Try to evaluate
+        X_test = [[1.0] * 10]  # Provide enough variables
+        eval_result = expr.evaluate(X_test)
+        if len(eval_result) > 0:
+            val = eval_result[0]
+            if val == val and val != float('inf') and val != float('-inf'):
+                result["is_valid"] = True
+
+    except Exception as e:
+        result["error"] = str(e)[:100]
+
+    # Check symbol correctness
+    expr_clean = expr_str.replace(" ", "")
+
+    # Extract used variables
+    used_vars = set(re.findall(r'x_\d+', expr_clean))
+    used_ops = set()
+
+    for op in ["sin", "cos", "tan", "exp", "log", "sqrt", "abs", "asin", "acos", "atan"]:
+        if op in expr_clean:
+            used_ops.add(op)
+
+    for op in ["+", "-", "*", "/", "**"]:
+        if op in expr_clean:
+            used_ops.add(op)
+
+    # Check if using allowed symbols
+    var_ok = used_vars.issubset(allowed_vars)
+    op_ok = used_ops.issubset(allowed_ops)
+    result["uses_correct_symbols"] = var_ok and op_ok
+
+    if not var_ok:
+        invalid_vars = used_vars - allowed_vars
+        result["error"] = f"Invalid vars: {invalid_vars}"
+
+    return result
+
+
+def generate_and_evaluate(
+    model,
+    tokenizer,
+    experiment_type: str,
+    use_native_eos: bool,
+    num_samples: int = 100,
+    test_prompts: Optional[List[Dict]] = None
+) -> Dict:
+    """Generate expressions and evaluate quality."""
+
+    if test_prompts is None:
+        # Default test prompts
+        test_prompts = [
+            {"vars": ["x_1", "x_2"], "ops": ["*", "+", "-", "sin", "cos"], "cons": "C"},
+            {"vars": ["x_1", "x_2", "x_3"], "ops": ["*", "+", "/", "exp", "log"], "cons": "C"},
+            {"vars": ["x_1"], "ops": ["*", "**", "sin", "sqrt"], "cons": "C"},
+            {"vars": ["x_1", "x_2", "x_3", "x_4"], "ops": ["*", "+", "-", "/"], "cons": "C"},
+        ]
+
+    # Determine end marker and stopping sequences
+    if use_native_eos:
+        end_marker = "<|endoftext|>"
+        stop_sequences = ["<|endoftext|>", "\n\nvars:"]
+    else:
+        end_marker = "<|endofex|>"
+        stop_sequences = ["<|endofex|>", '"}', "\n\nvars:"]
+
+    stopping_criteria = StoppingCriteriaList([
+        ExpressionStoppingCriteria(tokenizer, stop_sequences)
+    ])
+
+    # Generation config
+    gen_config = {
+        "temperature": 0.7,
+        "top_k": 50,
+        "top_p": 0.9,
+        "max_new_tokens": 128,
+        "do_sample": True,
+        "pad_token_id": tokenizer.eos_token_id,
+    }
+
+    results = {
+        "total": 0,
+        "valid": 0,
+        "parseable": 0,
+        "correct_symbols": 0,
+        "garbage": 0,
+        "stopped_correctly": 0,
+        "samples": []
+    }
+
+    # Guard against zero samples per prompt when num_samples < len(test_prompts)
+    samples_per_prompt = max(1, num_samples // len(test_prompts))
+
+    logger.info(f"Generating {num_samples} samples ({samples_per_prompt} per prompt)...")
+
+    for prompt_config in test_prompts:
+        vars_list = prompt_config["vars"]
+        ops_list = prompt_config["ops"]
+        cons = prompt_config.get("cons", "C")
+
+        allowed_vars = set(vars_list) | {cons}
+        allowed_ops = set(ops_list) | {"(", ")"}
+
+        # Create prompt based on experiment type
+        if experiment_type == "json":
+            prompt = create_prompt_json(vars_list, ops_list, cons)
+        else:
+            prompt = create_prompt_eos(vars_list, ops_list, cons)
+
+        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+        for i in range(samples_per_prompt):
+            results["total"] += 1
+
+            # Generate
+            output = model.generate(
+                **inputs,
+                **gen_config,
+                stopping_criteria=stopping_criteria
+            )
+            output_text = tokenizer.decode(output[0], skip_special_tokens=False)
+
+            # Extract expression
+            if experiment_type == "json":
+                expr_str = extract_expression_json(output_text)
+            else:
+                expr_str = extract_expression_eos(output_text, end_marker)
+
+            # Check stopping
+            stopped_correctly = end_marker in output_text
+            if stopped_correctly:
+                results["stopped_correctly"] += 1
+
+            # Validate expression
+            if expr_str:
+                validation = validate_expression(expr_str, allowed_vars, allowed_ops)
+
+                if validation["is_valid"]:
+                    results["valid"] += 1
+                if validation["is_parseable"]:
+                    results["parseable"] += 1
+                if validation["uses_correct_symbols"]:
+                    results["correct_symbols"] += 1
+                if validation["has_garbage"]:
+                    results["garbage"] += 1
+
+                # Store sample
+                sample = {
+                    "prompt_vars": vars_list,
+                    "prompt_ops": ops_list,
+                    "expression": expr_str,
+                    "stopped_correctly": stopped_correctly,
+                    **validation
+                }
+                results["samples"].append(sample)
+            else:
+                results["garbage"] += 1
+                results["samples"].append({
+                    "prompt_vars": vars_list,
+                    "prompt_ops": ops_list,
+                    "expression": None,
+                    "stopped_correctly": stopped_correctly,
+                    "is_valid": False,
+                    "error": "Could not extract expression"
+                })
+
+            # Log progress
+            if results["total"] % 20 == 0:
+                logger.info(f"Progress: {results['total']}/{num_samples}")
+
+    return results
+
+
+def print_report(results: Dict, experiment_name: str):
+    """Print evaluation report."""
+    total = results["total"]
+
+    print("\n" + "=" * 60)
+    print(f"EVALUATION REPORT: {experiment_name}")
+    print("=" * 60)
+
+    print(f"\nTotal samples: {total}")
+
+    metrics = [
+        ("Valid Rate", results["valid"] / total * 100),
+        ("Parseable Rate", results["parseable"] / total * 100),
+        ("Correct Symbols", results["correct_symbols"] / total * 100),
+        ("Stopping Rate", results["stopped_correctly"] / total * 100),
+        ("Garbage Rate", results["garbage"] / total * 100),
+    ]
+
+    print("\nMetrics:")
+    print("-" * 40)
+    for name, value in metrics:
+        status = "PASS" if (name != "Garbage Rate" and value >= 80) or (name == "Garbage Rate" and value < 5) else "FAIL"
+        print(f"  {name:<20s}: {value:6.1f}% [{status}]")
+
+    # Show sample outputs
+    print("\n" + "-" * 40)
+    print("Sample Outputs:")
+    print("-" * 40)
+
+    valid_samples = [s for s in results["samples"] if s.get("is_valid")]
+    invalid_samples = [s for s in results["samples"] if not s.get("is_valid")]
+
+    print("\nValid examples:")
+    for sample in valid_samples[:5]:
+        expr = sample.get("expression", "N/A")
+        vars_str = ", ".join(sample.get("prompt_vars", []))
+        print(f"  [{vars_str}] -> {expr}")
+
+    print("\nInvalid examples:")
+    for sample in invalid_samples[:5]:
+        # "expression" may be stored as None; fall back to "N/A" so slicing is safe
+        expr = sample.get("expression") or "N/A"
+        error = sample.get("error", "Unknown")
+        print(f"  {expr[:50]}... | Error: {error}")
+
+    print("\n" + "=" * 60)
+
+    # Summary
+    valid_rate = results["valid"] / total * 100
+    stopping_rate = results["stopped_correctly"] / total * 100
+    garbage_rate = results["garbage"] / total * 100
+
+    success = valid_rate >= 80 and stopping_rate >= 90 and garbage_rate < 5
+
+    print(f"\nOVERALL: {'SUCCESS' if success else 'NEEDS IMPROVEMENT'}")
+    print("=" * 60)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Evaluate expression generation experiments"
+    )
+    parser.add_argument("--model_path", type=str, required=True,
+                        help="Path to trained model")
+    parser.add_argument("--experiment_type", type=str, required=True,
+                        choices=["json", "eos"],
+                        help="Experiment type (json or eos)")
+    parser.add_argument("--num_samples", type=int, default=200,
+                        help="Number of samples to generate")
+    parser.add_argument("--output_file", type=str, default=None,
+                        help="Path to save results JSON")
+
+    args = parser.parse_args()
+
+    # Load model
+    model, tokenizer, use_native_eos = load_model(
+        args.model_path,
+        args.experiment_type
+    )
+
+    # Generate and evaluate
+    results = generate_and_evaluate(
+        model=model,
+        tokenizer=tokenizer,
+        experiment_type=args.experiment_type,
+        use_native_eos=use_native_eos,
+        num_samples=args.num_samples
+    )
+
+    # Print report
+    experiment_name = f"EXP-{'A' if args.experiment_type == 'json' else 'B'} ({args.experiment_type.upper()})"
+    print_report(results, experiment_name)
+
+    # Save results
+    if args.output_file:
+        os.makedirs(os.path.dirname(args.output_file), exist_ok=True)
+
+        # Remove samples for smaller file
+        save_results = {k: v for k, v in results.items() if k != "samples"}
+        save_results["sample_count"] = len(results["samples"])
+        save_results["valid_samples"] = [s for s in results["samples"] if s.get("is_valid")][:20]
+        save_results["invalid_samples"] = [s for s in results["samples"] if not s.get("is_valid")][:20]
+
+        with open(args.output_file, "w") as f:
+            json.dump(save_results, f, indent=2)
+
+        logger.info(f"Results saved to: {args.output_file}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/scripts/evaluate_ppo.py b/scripts/evaluate_ppo.py
new file mode 100644
index 0000000000000000000000000000000000000000..ae748583b66b9a8ae8800c29565fb69e0123d259
--- /dev/null
+++ b/scripts/evaluate_ppo.py
@@ -0,0 +1,446 @@
+#!/usr/bin/env python3
+"""
+PPO Evaluation Script for Seriguela Block 3
+Tests if PPO finetuning can find symbolic regression expressions
+"""
+
+import os
+import sys
+import json
+import numpy as np
+import torch
+from pathlib import Path
+from typing import Dict, List, Tuple
+from datetime import datetime
+
+# Add project root to path
+sys.path.insert(0, str(Path(__file__).parent.parent))
+
+from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria, StoppingCriteriaList
+from classes.expression import Expression
+
+
+class ExpressionStoppingCriteria(StoppingCriteria):
+    """Stop generation at natural expression boundaries."""
+    def __init__(self, tokenizer, stop_sequences):
+        self.tokenizer = tokenizer
+        self.stop_ids = [tokenizer.encode(seq, add_special_tokens=False)
+                         for seq in stop_sequences]
+
+    def __call__(self, input_ids, scores, **kwargs):
+        # Check if any stop sequence appears in generated text
+        for stop_ids in self.stop_ids:
+            if len(stop_ids) > 0 and len(input_ids[0]) >= len(stop_ids):
+                if input_ids[0][-len(stop_ids):].tolist() == stop_ids:
+                    return True
+        return False
+
+
+class PPOEvaluator:
+    """Evaluates if PPO training works for symbolic regression"""
+
+    def __init__(self, model_name: str, output_dir: str):
+        self.model_name = model_name
+        self.output_dir = Path(output_dir)
+        self.output_dir.mkdir(parents=True, exist_ok=True)
+
+        # Load V2 model with optimal inference config (90% valid rate)
+        print(f"Loading model: {model_name}")
+
+        # Load base model first without adapters
+        print("Loading base GPT-2 model...")
+        self.model = AutoModelForCausalLM.from_pretrained(
+            "gpt2",
+            torch_dtype=torch.float16,
+            device_map="auto"
+        )
+
+        # Configure tokenizer with special tokens
+        print("Configuring tokenizer with special tokens...")
+        self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
+        self.tokenizer.add_special_tokens({
+            "additional_special_tokens": ["<|startofex|>", "<|endofex|>"]
+        })
+
+        # Resize embeddings to match tokenizer
+        print(f"Resizing embeddings from {self.model.get_input_embeddings().weight.shape[0]} to {len(self.tokenizer)}...")
+        self.model.resize_token_embeddings(len(self.tokenizer))
+
+        # Now load the V2 adapter weights
+        print(f"Loading V2 adapter from {model_name}...")
+        try:
+            from peft import PeftModel
+            self.model = PeftModel.from_pretrained(self.model, model_name)
+            print("V2 adapter loaded successfully (LoRA weights)")
+            print("Merging adapter into base model...")
+            self.model = self.model.merge_and_unload()
+            print("Adapter merged successfully")
+        except Exception as e:
+            print(f"Warning: Could not load as PEFT model: {e}")
+            print("Attempting to load as full model...")
+            # If not a PEFT model, load full weights
+            self.model = AutoModelForCausalLM.from_pretrained(
+                model_name,
+                torch_dtype=torch.float16,
+                device_map="auto"
+            )
+
+        self.model.eval()
+
+        # V2 optimal generation config (from FINAL_RESULTS)
+        self.generation_config = {
+            "temperature": 0.7,
+            "top_k": 0,
+            "top_p": 0.8,
+            "repetition_penalty": 1.0,
+            "max_new_tokens": 128,
+            "do_sample": True,
+            "pad_token_id": self.tokenizer.eos_token_id,
+        }
+
+        print("Model loaded. Using optimal V2 configuration.")
+
+    def create_synthetic_dataset(self, formula: str, n_samples: int = 100) -> Tuple[np.ndarray, np.ndarray]:
+        """Create synthetic dataset from a known formula"""
+        print(f"Creating dataset for formula: {formula}")
+
+        # Generate random input data
+        X = np.random.uniform(-2, 2, (n_samples, 2))
+
+        # Evaluate true formula
+        try:
+            expr = Expression(formula, is_prefix=False)
+            y = expr.evaluate(X)
+            return X, y
+        except Exception as e:
+            print(f"Error creating dataset: {e}")
+            raise
+
+    def test_baseline_generation(self, n_samples: int = 10) -> Dict:
+        """Test baseline: V2 generates valid expressions but not fitted to data"""
+        print("\n" + "="*60)
+        print("BASELINE TEST: V2 Generation Without PPO")
+        print("="*60)
+
+        # Create test dataset (simple formula)
+        X, y = self.create_synthetic_dataset("x_1 * x_2", n_samples=50)
+
+        results = {
+            "test": "baseline_generation",
+            "timestamp": datetime.now().isoformat(),
+            "generations": [],
+            "summary": {}
+        }
+
+        prompt = """vars: x_1, x_2
+oper: *, +, -, sin, cos
+cons: C
+expr:"""
+
+        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
+
+        # Create stopping criteria for <|endofex|>
+        stopping_criteria = StoppingCriteriaList([
+            ExpressionStoppingCriteria(self.tokenizer, ["<|endofex|>", "\n\nvars:"])
+        ])
+
+        valid_count = 0
+        r2_scores = []
+
+        print(f"\nGenerating {n_samples} expressions...")
+        for i in range(n_samples):
+            output = self.model.generate(
+                **inputs,
+                **self.generation_config,
+                stopping_criteria=stopping_criteria
+            )
+            text = self.tokenizer.decode(output[0], skip_special_tokens=False)
+
+            # Extract expression
+            if "expr:" in text:
+                expr_str = text.split("expr:")[-1].strip()
+                expr_str = expr_str.split("<|endofex|>")[0].strip()
+            else:
+                expr_str = text
+
+            # Debug: Show first few generations
+            if i < 3:
+                print(f"\n  DEBUG Sample {i+1}:")
+                print(f"  Raw output: {text[:200]}")
+                print(f"  Extracted: {expr_str[:100]}")
+
+            # Validate and compute R²
+            is_valid = False
+            r2 = -1.0
+
+            try:
+                expr = Expression(expr_str, is_prefix=False)
+                # Check if expression can be evaluated on dataset
+                if expr.is_valid_on_dataset(X):
+                    is_valid = True
+                    valid_count += 1
+
+                    # Fit constants and compute R²
+                    try:
+                        r2 = expr.fit_constants(X, y)
+                        if np.isfinite(r2):
+                            r2_scores.append(r2)
+                        else:
+                            r2 = -1.0
+                    except Exception:
+                        r2 = -1.0
+            except Exception:
+                pass
+
+            results["generations"].append({
+                "index": i + 1,
+                "expression": expr_str,
+                "valid": is_valid,
+                "r2_score": float(r2) if r2 != -1.0 else None
+            })
+
+            if (i + 1) % 5 == 0:
+                print(f"Generated {i + 1}/{n_samples} - Valid: {valid_count}, Avg R²: {np.mean(r2_scores) if r2_scores else 'N/A'}")
+
+        # Summary (cast numpy floats so json.dump does not fail)
+        results["summary"] = {
+            "total_generations": n_samples,
+            "valid_count": valid_count,
+            "valid_rate": valid_count / n_samples,
+            "r2_scores": [float(r) for r in r2_scores],
+            "mean_r2": float(np.mean(r2_scores)) if r2_scores else None,
+            "max_r2": float(np.max(r2_scores)) if r2_scores else None,
+            "conclusion": "Baseline generates valid expressions but R² is low (not fitted to target)"
+        }
+
+        print("\n" + "-"*60)
+        print("BASELINE RESULTS:")
+        print(f"  Valid Rate: {results['summary']['valid_rate']:.1%} ({valid_count}/{n_samples})")
+        print(f"  Mean R²: {results['summary']['mean_r2']:.4f}" if r2_scores else "  Mean R²: N/A")
+        print(f"  Max R²: {results['summary']['max_r2']:.4f}" if r2_scores else "  Max R²: N/A")
+        print("  Interpretation: V2 generates valid expressions (good!), but doesn't fit target data (expected without PPO)")
+        print("-"*60)
+
+        # Save results
+        output_file = self.output_dir / "baseline_results.json"
+        with open(output_file, 'w') as f:
+            json.dump(results, f, indent=2)
+        print(f"\nResults saved to: {output_file}")
+
+        return results
+
+    def test_ppo_simulation(self, target_formula: str = "x_1 * x_2", n_iterations: int = 10) -> Dict:
+        """Simulate PPO: Generate expressions and check if best reward improves"""
+        print("\n" + "="*60)
+        print("PPO SIMULATION TEST: Check if Reward Can Improve")
+        print("="*60)
+        print(f"Target formula: {target_formula}")
+        print("Note: This simulates PPO by generating multiple expressions")
+        print("      and tracking best R² score. Real PPO would optimize")
+        print("      the model to generate better expressions over time.")
+
+        # Create target dataset
+        X, y = self.create_synthetic_dataset(target_formula, n_samples=100)
+
+        prompt = """vars: x_1, x_2
+oper: *, +, -, sin, cos
+cons: C
+expr:"""
+
+        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
+
+        # Create stopping criteria
+        stopping_criteria = StoppingCriteriaList([
+            ExpressionStoppingCriteria(self.tokenizer, ["<|endofex|>", "\n\nvars:"])
+        ])
+
+        results = {
+            "test": "ppo_simulation",
+            "timestamp": datetime.now().isoformat(),
+            "target_formula": target_formula,
+            "iterations": [],
+            "summary": {}
+        }
+
+        print(f"\nGenerating {n_iterations} expressions and tracking best R²...")
+
+        best_r2 = -np.inf
+        best_expr = None
+        r2_history = []
+        valid_count = 0
+
+        for i in range(n_iterations):
+            output = self.model.generate(
+                **inputs,
+                **self.generation_config,
+                stopping_criteria=stopping_criteria
+            )
+            text = self.tokenizer.decode(output[0], skip_special_tokens=False)
+
+            # Extract expression
+            if "expr:" in text:
+                expr_str = text.split("expr:")[-1].strip()
+                expr_str = expr_str.split("<|endofex|>")[0].strip()
+            else:
+                expr_str = text
+
+            # Compute reward (R²)
+            is_valid = False
+            r2 = -1.0
+
+            try:
+                expr = Expression(expr_str, is_prefix=False)
+                if expr.is_valid_on_dataset(X):
+                    is_valid = True
+                    valid_count += 1
+                    r2 = expr.fit_constants(X, y)
+
+                    if np.isfinite(r2):
+                        r2_history.append(r2)
+                        if r2 > best_r2:
+                            best_r2 = r2
+                            best_expr = expr_str
+                    else:
+                        r2 = -1.0
+            except Exception:
+                pass
+
+            results["iterations"].append({
+                "iteration": i + 1,
+                "expression": expr_str,
+                "valid": is_valid,
+                "r2": float(r2) if np.isfinite(r2) else None,
+                "is_best": (r2 == best_r2) if np.isfinite(r2) else False
+            })
+
+            if (i + 1) % 5 == 0:
+                print(f"Iteration {i + 1}/{n_iterations} - Valid: {valid_count}, Best R²: {best_r2:.4f}")
+
+        # Summary
+        results["summary"] = {
+            "total_iterations": n_iterations,
+            "valid_count": valid_count,
+            "valid_rate": valid_count / n_iterations,
+            "best_r2": float(best_r2) if np.isfinite(best_r2) else None,
+            "best_expression": best_expr,
+            "r2_history": [float(r) for r in r2_history],
+            "mean_r2": float(np.mean(r2_history)) if r2_history else None,
+            "conclusion": self._analyze_ppo_simulation(best_r2, r2_history)
+        }
+
+        print("\n" + "-"*60)
+        print("PPO SIMULATION RESULTS:")
+        print(f"  Valid expressions: {valid_count}/{n_iterations}")
+        print(f"  Best R²: {best_r2:.4f}" if np.isfinite(best_r2) else "  Best R²: N/A")
+        print(f"  Mean R²: {results['summary']['mean_r2']:.4f}" if r2_history else "  Mean R²: N/A")
+        print(f"  Best expression: {best_expr}")
+        print("\n  Interpretation:")
+        print("  - Baseline (Test 1) shows random expressions have low R² (~0.2)")
+        print("  - PPO should improve this by learning to generate fitted expressions")
+        print(f"  - Best R² of {best_r2:.4f} shows what's possible with current model")
+        if best_r2 >= 0.9:
+            print("  ✅ Model CAN find high-quality solutions (R² >= 0.9)")
+        elif best_r2 >= 0.5:
+            print("  ⚠️ Model can find partial solutions (R² >= 0.5)")
+        else:
+            print("  ❌ Model struggles to find good solutions (R² < 0.5)")
+        print("-"*60)
+
+        # Save results
+        output_file = self.output_dir / "ppo_simulation_results.json"
+        with open(output_file, 'w') as f:
+            json.dump(results, f, indent=2)
+        print(f"\nResults saved to: {output_file}")
+
+        return results
+
+    def _analyze_ppo_simulation(self, best_r2: float, r2_history: List[float]) -> str:
+        """Analyze PPO simulation results"""
+        if not r2_history:
+            return "❌ No valid expressions generated"
+
+        if best_r2 >= 0.9:
+            return f"✅ EXCELLENT: Found high-quality solution (R² = {best_r2:.4f}). PPO training should work well."
+        elif best_r2 >= 0.5:
+            return f"⚠️ MODERATE: Found partial solution (R² = {best_r2:.4f}). PPO may help but needs tuning."
+        else:
+            return f"❌ POOR: Best solution is weak (R² = {best_r2:.4f}). PPO will struggle with current model."
+
+    def _analyze_ppo_results(self, training_results: Dict) -> str:
+        """Analyze PPO training results and provide conclusion"""
+        # Guard against a missing or empty reward history
+        if not training_results.get("epoch_rewards"):
+            return "Unable to analyze: No reward history found"
+
+        rewards = training_results["epoch_rewards"]
+        initial = rewards[0]
+        final = rewards[-1]
+        best = max(rewards)
+        improvement = final - initial
+
+        if best >= 0.9:
+            return f"✅ EXCELLENT: Found high-quality solution (R² = {best:.4f})"
+        elif improvement > 0.2:
+            return f"✅ GOOD: Significant improvement ({improvement:+.4f}), PPO is working"
+        elif improvement > 0.05:
+            return f"⚠️ MODERATE: Some improvement ({improvement:+.4f}), may need more epochs"
+        elif improvement > 0:
+            return f"⚠️ WEAK: Minimal improvement ({improvement:+.4f}), check hyperparameters"
+        else:
+            return f"❌ POOR: No improvement or decline ({improvement:+.4f}), PPO not working properly"
+
+
+def main():
+    print("="*60)
+    print("SERIGUELA BLOCK 3: PPO EVALUATION")
+    print("="*60)
+    print("Objective: Test if PPO finetuning works for symbolic regression")
+    print("Model: V2 (augustocsc/Se124M_700K_infix_v2)")
+    print("="*60)
+
+    # Initialize evaluator
+    evaluator = PPOEvaluator(
+        model_name="augustocsc/Se124M_700K_infix_v2",
+        output_dir="./logs/ppo_evaluation"
+    )
+
+    # Test 1: Baseline generation
+    print("\n📊 TEST 1: Baseline Generation (V2 without PPO)")
+    baseline_results = evaluator.test_baseline_generation(n_samples=30)
+
+    # Test 2: PPO simulation
+    print("\n🎯 TEST 2: PPO Simulation (Check if reward CAN improve)")
+    ppo_results = evaluator.test_ppo_simulation(target_formula="x_1 * x_2", n_iterations=50)
+
+    # Final summary
+    print("\n" + "="*60)
+    print("EVALUATION COMPLETE")
+    print("="*60)
+    print("\nResults saved to: 
./logs/ppo_evaluation/") + print("\nKey Questions Answered:") + print("1. Does V2 generate valid expressions? Check baseline_results.json") + print(f" Answer: {baseline_results['summary']['valid_rate']:.1%} valid rate") + print("2. Can model find high R² expressions? Check ppo_simulation_results.json") + best_r2 = ppo_results['summary'].get('best_r2') + if best_r2 is None: + best_r2 = -1.0 + if best_r2 >= 0.9: + print(f" Answer: YES! Best R² = {best_r2:.4f} (excellent)") + elif best_r2 >= 0.5: + print(f" Answer: PARTIAL. Best R² = {best_r2:.4f} (moderate)") + else: + print(f" Answer: NO. Best R² = {best_r2:.4f} (poor)") + print("3. Would PPO training work?") + if best_r2 >= 0.9: + print(" Answer: YES - Model can find solutions, PPO should learn to find them consistently") + elif best_r2 >= 0.5: + print(" Answer: MAYBE - Model finds partial solutions, PPO may need tuning") + else: + print(" Answer: UNLIKELY - Model struggles to find solutions even randomly") + print("\nNext steps:") + print("- Review results to understand baseline performance") + print("- If simulation shows high R², PPO training is worth trying") + print("- If simulation shows low R², may need to retrain base model") + print("="*60) + + +if __name__ == "__main__": + main() diff --git a/scripts/finetuning/__init__.py b/scripts/finetuning/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scripts/finetuning/arguments.py b/scripts/finetuning/arguments.py new file mode 100644 index 0000000000000000000000000000000000000000..e1ced3fcc830073fa88c78f48b799cf6d042fe74 --- /dev/null +++ b/scripts/finetuning/arguments.py @@ -0,0 +1,94 @@ +# finetuning/arguments.py + +import argparse +from . 
import config # Import constants from config.py +from .utils import logger # Import logger + +def parse_arguments() -> argparse.Namespace: + """Parses command-line arguments.""" + parser = argparse.ArgumentParser( + description="Fine-tune GPT-2 model using PEFT (LoRA) on an equation dataset." + ) + + # Model & Data Args + parser.add_argument("--model_name_or_path", type=str, default=config.DEFAULT_MODEL_NAME, + help="Pretrained model name or path (e.g., 'gpt2', 'gpt2-medium').") + parser.add_argument("--dataset_repo_id", type=str, required=True, + help="Hugging Face Hub repository ID for the dataset (e.g., 'username/my-equation-dataset').") + parser.add_argument("--data_dir", type=str, default=config.DEFAULT_DATA_DIR, + help="Directory containing the dataset files within the repo (optional).") + parser.add_argument("--source_data_column", type=str, default=config.DEFAULT_SOURCE_DATA_COLUMN, + help="Column name in the *source* dataset to use for training (will be renamed to 'text').") + parser.add_argument("--block_size", type=int, default=config.DEFAULT_BLOCK_SIZE, + help="Block size for tokenizing and chunking.") + + # Training Hyperparameters + parser.add_argument("--num_train_epochs", type=int, default=config.DEFAULT_EPOCHS, help="Number of training epochs.") + parser.add_argument("--per_device_train_batch_size", type=int, default=config.DEFAULT_BATCH_SIZE, + help="Batch size per device during training.") + parser.add_argument("--per_device_eval_batch_size", type=int, default=config.DEFAULT_BATCH_SIZE, + help="Batch size per device during evaluation.") + parser.add_argument("--learning_rate", type=float, default=config.DEFAULT_LR, help="Learning rate.") + parser.add_argument("--lr_scheduler_type", type=str, default=config.DEFAULT_LR_SCHEDULER_TYPE, + choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant"], + help="Learning rate scheduler type.") + parser.add_argument("--weight_decay", type=float, default=config.DEFAULT_WEIGHT_DECAY, 
help="Weight decay.") + parser.add_argument("--gradient_accumulation_steps", type=int, default=config.DEFAULT_GRAD_ACCUM_STEPS, + help="Steps for gradient accumulation.") + parser.add_argument("--warmup_steps", type=int, default=config.DEFAULT_WARMUP_STEPS, help="Learning rate scheduler warmup steps.") + + # LoRA / PEFT Parameters + parser.add_argument("--lora_r", type=int, default=config.DEFAULT_LORA_R, help="LoRA rank (dimension).") + parser.add_argument("--lora_alpha", type=int, default=config.DEFAULT_LORA_ALPHA, help="LoRA alpha (scaling factor).") + parser.add_argument("--lora_dropout", type=float, default=config.DEFAULT_LORA_DROPOUT, help="LoRA dropout.") + parser.add_argument("--lora_target_modules", nargs='+', default=config.DEFAULT_LORA_TARGET_MODULES, + help="Module names to apply LoRA to (e.g., 'c_attn' for GPT-2 query/key/value).") + parser.add_argument("--lora_bias", type=str, default=config.DEFAULT_LORA_BIAS, choices=["none", "all", "lora_only"], + help="Bias type for LoRA.") + + # Logging, Saving & Evaluation Args + parser.add_argument("--output_dir", type=str, required=True, + help="Directory to save the fine-tuned model, checkpoints, and logs.") + parser.add_argument("--overwrite_output_dir", action='store_true', + help="Overwrite the content of the output directory if it exists.") + parser.add_argument("--logging_steps", type=int, default=config.DEFAULT_LOGGING_STEPS, help="Log training metrics every N steps.") + parser.add_argument("--eval_steps", type=int, default=config.DEFAULT_SAVE_EVAL_STEPS, + help="Evaluate every N steps (if eval_strategy='steps').") + parser.add_argument("--save_steps", type=int, default=config.DEFAULT_SAVE_EVAL_STEPS, + help="Save checkpoint every N steps (if save_strategy='steps').") + parser.add_argument("--eval_strategy", type=str, default=config.DEFAULT_EVAL_STRATEGY, choices=["steps", "epoch", "no"], help="Evaluation strategy.") + parser.add_argument("--save_strategy", type=str, default=config.DEFAULT_SAVE_STRATEGY, 
choices=["steps", "epoch", "no"], + help="Checkpoint saving strategy.") + parser.add_argument("--save_total_limit", type=int, default=config.DEFAULT_SAVE_TOTAL_LIMIT, + help="Limit the total number of checkpoints saved.") + parser.add_argument("--load_best_model_at_end", action='store_true', + help="Load the best model (based on evaluation loss) at the end.") + parser.add_argument("--early_stopping_patience", type=int, default=config.DEFAULT_EARLY_STOPPING_PATIENCE, + help="Number of evaluations with no improvement to trigger early stopping. Requires load_best_model_at_end.") + + # Technical Args + parser.add_argument("--fp16", action='store_true', help="Use mixed precision training (FP16).") + parser.add_argument("--seed", type=int, default=config.DEFAULT_SEED, help="Random seed for reproducibility.") + parser.add_argument("--report_to", type=str, default=config.DEFAULT_REPORT_TO, choices=["tensorboard", "wandb", "none"], + help="Where to report metrics.") + parser.add_argument("--run_name", type=str, default=config.DEFAULT_RUN_NAME, + help="Name of the run for logging purposes.") + + # Hugging Face Hub Args + parser.add_argument("--push_to_hub", action='store_true', help="Push the final model to the Hugging Face Hub.") + parser.add_argument("--hub_model_id", type=str, default=None, + help="Repository ID for pushing (e.g., 'username/gpt2-finetuned-equations'). Required if --push_to_hub.") + + args = parser.parse_args() + + # --- Argument Validation --- + if args.push_to_hub and not args.hub_model_id: + logger.error("--hub_model_id is required when --push_to_hub is set.") + raise ValueError("--hub_model_id is required when --push_to_hub is set.") + if args.early_stopping_patience is not None and args.early_stopping_patience > 0 and not args.load_best_model_at_end: + logger.warning("--early_stopping_patience is set, but --load_best_model_at_end is False. 
Early stopping requires loading the best model.")
+    if args.eval_strategy == "no" and (args.load_best_model_at_end or (args.early_stopping_patience is not None and args.early_stopping_patience > 0)):
+        logger.error("Cannot use --load_best_model_at_end or --early_stopping_patience without evaluation (set --eval_strategy to 'steps' or 'epoch').")
+        raise ValueError("Cannot use --load_best_model_at_end or --early_stopping_patience without evaluation.")
+
+    return args
\ No newline at end of file
diff --git a/scripts/finetuning/config.py b/scripts/finetuning/config.py
new file mode 100644
index 0000000000000000000000000000000000000000..48f11716f0c3b2183b8283fc5e191f303dbc7fc7
--- /dev/null
+++ b/scripts/finetuning/config.py
@@ -0,0 +1,47 @@
+# finetuning/config.py
+
+# --- Constants ---
+# Special token strings. The start/end markers match those used in
+# scripts/generate.py (<|startofex|> / <|endofex|>); the pad token value
+# is an assumption.
+SPECIAL_TOKENS_DICT = {
+    "eos_token": "<|endofex|>",
+    "pad_token": "<|pad|>",
+    "additional_special_tokens": ["<|startofex|>"]
+}
+PAD_TOKEN = "<|pad|>"
+EOS_TOKEN = "<|endofex|>"
+START_OF_EX_TOKEN = "<|startofex|>"  # Explicit constant for clarity if needed elsewhere
+
+DEFAULT_MODEL_NAME = "gpt2"
+DEFAULT_BLOCK_SIZE = 128
+DEFAULT_EPOCHS = 3
+DEFAULT_BATCH_SIZE = 8
+DEFAULT_LR = 5e-5
+DEFAULT_WEIGHT_DECAY = 0.01
+DEFAULT_GRAD_ACCUM_STEPS = 1
+DEFAULT_LOGGING_STEPS = 100
+DEFAULT_SAVE_EVAL_STEPS = 500
+DEFAULT_SAVE_TOTAL_LIMIT = 2
+DEFAULT_SEED = 42
+DEFAULT_EVAL_STRATEGY = "epoch"
+DEFAULT_SAVE_STRATEGY = "epoch"
+DEFAULT_DATA_COLUMN = "text"  # Default target column after processing
+DEFAULT_LORA_R = 8
+DEFAULT_LORA_ALPHA = 32
+DEFAULT_LORA_DROPOUT = 0.05
+DEFAULT_LORA_TARGET_MODULES = ["c_attn"]
+DEFAULT_LORA_BIAS = "none"
+DEFAULT_WARMUP_STEPS = 0
+DEFAULT_LR_SCHEDULER_TYPE = "linear"
+DEFAULT_EARLY_STOPPING_PATIENCE = 2  # Consistent naming
+DEFAULT_REPORT_TO = "wandb"
+DEFAULT_RUN_NAME = "train_gpt2_equations"
+
+# Source data column default from arguments
+DEFAULT_SOURCE_DATA_COLUMN = "i_prompt_n"
+DEFAULT_DATA_DIR = "700K"
+
+# Wandb defaults
+DEFAULT_WANDB_PROJECT = "seriguela"
+DEFAULT_WANDB_ENTITY = None
+
+# Dataset defaults
+DEFAULT_DATASET_REPO_ID = "augustocsc/sintetico_natural" \ No newline at end of file diff --git a/scripts/finetuning/data_loader.py b/scripts/finetuning/data_loader.py new file mode 100644 index 0000000000000000000000000000000000000000..68556c0b9f7bfb69dde6052b7f2c21ccb1fe83ea --- /dev/null +++ b/scripts/finetuning/data_loader.py @@ -0,0 +1,113 @@ +# finetuning/data_loader.py + +import sys +from typing import Dict, Any, Optional, List +from datasets import load_dataset, DatasetDict, Dataset +from transformers import PreTrainedTokenizerBase +from .utils import logger # Import logger + +# Note: The original script had a commented-out section for group_texts. +# I've kept it commented out here as well, returning tokenized_datasets directly. +# If text grouping is needed, uncomment the relevant parts. + +def load_and_prepare_dataset( + dataset_repo_id: str, + data_dir: Optional[str], + source_column: str, + target_column: str, + tokenizer: PreTrainedTokenizerBase, + block_size: int, + eval_strategy: str # Keep for potential future use or warnings +) -> DatasetDict: + """Loads dataset, renames column, tokenizes, and optionally groups texts.""" + logger.info(f"Loading dataset from Hub: {dataset_repo_id} (data_dir: {data_dir})") + try: + raw_datasets = load_dataset(dataset_repo_id, data_dir=data_dir) + logger.info(f"Dataset loaded: {raw_datasets}") + except Exception as e: + logger.error(f"Failed to load dataset: {e}", exc_info=True) + sys.exit(1) + + # --- Preprocessing Steps --- + # 1. 
Rename source column to target column (e.g., 'text')
+    logger.info(f"Renaming column '{source_column}' to '{target_column}' and removing others.")
+    try:
+        def rename_and_keep_column(example: Dict[str, Any]) -> Dict[str, Any]:
+            if source_column not in example:
+                raise KeyError(f"Source column '{source_column}' not found in example: {list(example.keys())}")
+            return {target_column: example[source_column]}
+
+        processed_datasets = DatasetDict()
+        for split, split_dataset in raw_datasets.items():
+            # Drop every column except the source column, which is renamed by the map below.
+            cols_to_remove = [col for col in split_dataset.column_names if col != source_column]
+            processed_datasets[split] = split_dataset.map(
+                rename_and_keep_column,
+                batched=False,
+                remove_columns=cols_to_remove
+            )
+        logger.info(f"Dataset after column renaming: {processed_datasets}")
+
+    except KeyError as e:
+        logger.error(f"Error during column renaming: {e}. Ensure '{source_column}' exists.", exc_info=True)
+        sys.exit(1)
+    except Exception as e:
+        logger.error(f"An unexpected error occurred during column renaming/cleanup: {e}", exc_info=True)
+        sys.exit(1)
+
+    # 2. 
Tokenize + logger.info("Tokenizing dataset...") + def tokenize_function(examples: Dict[str, List[str]]) -> Dict[str, List[Any]]: + # Ensure tokenizer handles truncation as per original intention + return tokenizer(examples[target_column], truncation=True, max_length=block_size if block_size else None) + + + try: + tokenized_datasets = processed_datasets.map( + tokenize_function, + batched=True, + remove_columns=processed_datasets["train"].column_names, # Removes the 'text' column + desc="Running tokenizer on dataset", + ) + logger.info("Tokenization complete.") + except Exception as e: + logger.error(f"Error during tokenization: {e}", exc_info=True) + sys.exit(1) + + + # 3. Group texts into blocks (Currently commented out in original script logic) + # logger.info(f"Grouping texts into blocks of size: {block_size}") + # def group_texts(examples: Dict[str, List[Any]]) -> Dict[str, List[Any]]: + # concatenated = {k: sum(examples[k], []) for k in examples.keys()} + # total_length = len(concatenated["input_ids"]) + # if total_length >= block_size: + # total_length = (total_length // block_size) * block_size + # else: + # logger.warning( + # f"Total length ({total_length}) < block_size ({block_size}), might return empty batches." 
+ # ) + # result = { + # k: [t[i : i + block_size] for i in range(0, total_length, block_size)] + # for k, t in concatenated.items() + # } + # result["labels"] = [list(x) for x in result["input_ids"]] # Deep copy for labels + # return result + + # lm_datasets = tokenized_datasets.map( + # group_texts, + # batched=True, + # desc=f"Grouping texts into chunks of {block_size}", + # ) + # logger.info("Grouping complete.") + # logger.info(f"Processed dataset structure after grouping: {lm_datasets}") + # return lm_datasets + + logger.info(f"Processed dataset structure (tokenized only): {tokenized_datasets}") + return tokenized_datasets \ No newline at end of file diff --git a/scripts/finetuning/utils.py b/scripts/finetuning/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..661d27291210b86c549c8fa27416e652bb275ec3 --- /dev/null +++ b/scripts/finetuning/utils.py @@ -0,0 +1,27 @@ +# finetuning/utils.py + +import logging +import os +from dotenv import load_dotenv + +# --- Logging Configuration --- +def setup_logging(): + """Configures logging for the application.""" + logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', + datefmt='%Y-%m-%d %H:%M:%S' + ) + return logging.getLogger("finetuning_script") # Main logger name + +logger = setup_logging() # Initialize logger once + +def load_hf_token() -> str: + """Loads Hugging Face token from .env file.""" + load_dotenv() + token = os.getenv("HF_TOKEN") + if not token: + logger.error("Hugging Face token (HF_TOKEN) not found in .env file.") + raise ValueError("Hugging Face token not found in .env.") + logger.info("Hugging Face token loaded successfully.") + return token \ No newline at end of file diff --git a/scripts/generate.py b/scripts/generate.py new file mode 100644 index 0000000000000000000000000000000000000000..c0abd3ebd67c1d8185edcbac6f094143927c85c6 --- /dev/null +++ b/scripts/generate.py @@ -0,0 +1,490 @@ +# Script para geracao de texto com 
modelo treinado +# Projeto Seriguela - Geracao interativa de expressoes simbolicas + +import argparse +import os +import sys +import re + +import torch +from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList +from peft import PeftModel + +# Add parent directory to path for imports +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +from classes.expression import Expression + + +class ExpressionStoppingCriteria(StoppingCriteria): + """Stop generation at natural expression boundaries.""" + def __init__(self, tokenizer, stop_sequences): + self.tokenizer = tokenizer + self.stop_ids = [tokenizer.encode(seq, add_special_tokens=False) + for seq in stop_sequences] + + def __call__(self, input_ids, scores, **kwargs): + # Check if any stop sequence appears in generated text + for stop_ids in self.stop_ids: + if len(stop_ids) > 0 and len(input_ids[0]) >= len(stop_ids): + if input_ids[0][-len(stop_ids):].tolist() == stop_ids: + return True + return False + + +def parse_args(): + parser = argparse.ArgumentParser(description="Generate expressions with a trained model") + parser.add_argument("--model_path", type=str, required=True, + help="Path to model (local or HuggingFace Hub)") + parser.add_argument("--base_model", type=str, default=None, + help="Base model for PEFT (if model_path is adapter)") + + # Prompt building arguments + parser.add_argument("--num_vars", type=int, default=3, + help="Number of variables (e.g., 3 for x_1, x_2, x_3)") + parser.add_argument("--operators", type=str, default="+,-,*,/,sin,cos", + help="Comma-separated operators (e.g., '+,-,*,/,sin,cos,log,sqrt,exp')") + parser.add_argument("--constants", type=str, default="C", + help="Constant symbol (default: C)") + parser.add_argument("--format", type=str, default="infix", choices=["infix", "prefix"], + help="Expression format (infix or prefix)") + + # Custom prompt + parser.add_argument("--custom_prompt", type=str, default=None, + 
help="Use a custom prompt instead of building one") + + # Generation parameters + parser.add_argument("--num_generations", type=int, default=5, + help="Number of expressions to generate") + parser.add_argument("--max_new_tokens", type=int, default=64, + help="Maximum new tokens to generate") + parser.add_argument("--temperature", type=float, default=0.7, + help="Sampling temperature (higher = more diverse)") + parser.add_argument("--top_p", type=float, default=0.9, + help="Top-p sampling parameter") + parser.add_argument("--top_k", type=int, default=50, + help="Top-k sampling parameter") + + # Behavior + parser.add_argument("--validate", action="store_true", + help="Validate generated expressions") + parser.add_argument("--interactive", action="store_true", + help="Run in interactive mode") + parser.add_argument("--device", type=str, default="auto", + help="Device to use (auto, cuda, cpu)") + parser.add_argument("--seed", type=int, default=None, + help="Random seed for reproducibility") + + return parser.parse_args() + + +def build_prompt(num_vars: int, operators: list, constants: str = "C", + format_type: str = "infix") -> str: + """Build a prompt for expression generation.""" + # Build variables string + vars_list = [f"x_{i}" for i in range(1, num_vars + 1)] + vars_str = ", ".join(vars_list) + + # Build operators string + ops_str = ", ".join(operators) + + # Build prompt based on format + if format_type == "infix": + prompt = f"""Variables: {vars_str} +Operators: {ops_str} +Constants: {constants} +Expression: <|startofex|>""" + else: # prefix + prompt = f"""Variables: {vars_str} +Operators: {ops_str} +Constants: {constants} +Prefix Expression: <|startofex|>""" + + return prompt + + +def load_model_and_tokenizer(model_path: str, base_model: str = None, device: str = "auto"): + """Load model and tokenizer.""" + print(f"Loading model from: {model_path}") + + # Determine device + if device == "auto": + device = "cuda" if torch.cuda.is_available() else "cpu" + 
print(f"Using device: {device}") + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(model_path) + if tokenizer.pad_token is None: + tokenizer.pad_token = tokenizer.eos_token + + # Check if this is a PEFT model + is_peft = os.path.exists(os.path.join(model_path, "adapter_config.json")) if os.path.isdir(model_path) else False + + if is_peft or base_model: + # Load base model first + base = base_model or "gpt2" + print(f"Loading base model: {base}") + model = AutoModelForCausalLM.from_pretrained(base) + model.resize_token_embeddings(len(tokenizer)) + + # Load PEFT adapter + print("Loading PEFT adapter...") + model = PeftModel.from_pretrained(model, model_path) + model = model.merge_and_unload() # Merge for faster inference + else: + # Load full model + model = AutoModelForCausalLM.from_pretrained(model_path) + model.resize_token_embeddings(len(tokenizer)) + + model = model.to(device) + model.eval() + + return model, tokenizer, device + + +def generate_expressions(model, tokenizer, prompt: str, device: str, + num_generations: int = 5, max_new_tokens: int = 64, + temperature: float = 0.7, top_p: float = 0.9, + top_k: int = 50): + """Generate expressions from a prompt.""" + inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512) + inputs = {k: v.to(device) for k, v in inputs.items()} + + # Get special token IDs - prefer <|endofex|> as EOS + end_token_id = tokenizer.convert_tokens_to_ids("<|endofex|>") + if end_token_id == tokenizer.unk_token_id: + print("Warning: <|endofex|> not in tokenizer, using default eos_token_id") + end_token_id = tokenizer.eos_token_id + + # Create stopping criteria to stop at natural expression boundaries (backup) + stop_sequences = ["\nvars:", "\nVariables:", "\nOperators:", "\n\n"] + stopping_criteria = StoppingCriteriaList([ + ExpressionStoppingCriteria(tokenizer, stop_sequences) + ]) + + with torch.no_grad(): + outputs = model.generate( + **inputs, + max_new_tokens=max_new_tokens, + 
temperature=temperature, + top_p=top_p, + top_k=top_k, + do_sample=True, + num_return_sequences=num_generations, + pad_token_id=tokenizer.pad_token_id, + eos_token_id=end_token_id, # Use <|endofex|> as EOS + stopping_criteria=stopping_criteria, # Keep as backup + ) + + generated = tokenizer.batch_decode(outputs, skip_special_tokens=False) + return generated + + +def extract_expression(output: str) -> str: + """Extract the expression from generated output.""" + # Try marker-based first + start_marker = "<|startofex|>" + end_marker = "<|endofex|>" + + if start_marker in output and end_marker in output: + start_idx = output.find(start_marker) + len(start_marker) + end_idx = output.find(end_marker) + if start_idx < end_idx: + return output[start_idx:end_idx].strip() + + # Fallback: Extract first complete expression after start marker + if start_marker in output: + start_idx = output.find(start_marker) + len(start_marker) + remaining = output[start_idx:].strip() + + # Split at common boundaries + for boundary in ["\nvars:", "\nVariables:", "\nOperators:", "\n\n", "<|endoftext|>"]: + if boundary in remaining: + remaining = remaining.split(boundary)[0].strip() + break + + # Remove any trailing incomplete text - take just the first line + remaining = remaining.split("\n")[0].strip() + + # Limit length if unreasonably long + if len(remaining) > 150: + remaining = remaining[:150] + + return remaining + + # Last resort: look for "expr:" or "Expression:" pattern + match = re.search(r'(?:expr|Expression):\s*(.+?)(?:\n|$)', output, re.IGNORECASE) + if match: + return match.group(1).strip() + + # Give up: return first line, limited length + first_line = output.strip().split("\n")[0] + return first_line[:100] if len(first_line) > 100 else first_line + + +def validate_expression(expr_str: str, is_prefix: bool = False) -> dict: + """Validate an expression.""" + result = { + "valid": False, + "error": None, + "sympy_str": None + } + + if not expr_str: + result["error"] = "Empty 
expression" + return result + + try: + expr = Expression(expr_str, is_prefix=is_prefix) + result["valid"] = True + result["sympy_str"] = expr.sympy_str() + except Exception as e: + result["error"] = str(e) + + return result + + +def print_generation_result(idx: int, expr_str: str, validation: dict = None): + """Print a formatted generation result.""" + print(f"\n[{idx + 1}] {expr_str}") + if validation: + if validation["valid"]: + print(f" Status: VALID") + if validation["sympy_str"] != expr_str: + print(f" Sympy: {validation['sympy_str']}") + else: + print(f" Status: INVALID - {validation['error']}") + + +def interactive_mode(model, tokenizer, device, args): + """Run in interactive mode.""" + print("\n" + "="*60) + print("SERIGUELA - Interactive Expression Generator") + print("="*60) + print("Commands:") + print(" /vars N - Set number of variables (e.g., /vars 3)") + print(" /ops +,-,* - Set operators (e.g., /ops +,-,*,sin)") + print(" /format X - Set format (infix or prefix)") + print(" /temp T - Set temperature (e.g., /temp 0.8)") + print(" /n N - Set number of generations (e.g., /n 10)") + print(" /prompt - Show current prompt") + print(" /gen - Generate with current settings") + print(" /custom TEXT - Use custom prompt") + print(" /quit - Exit") + print("="*60) + + # Current settings + settings = { + "num_vars": args.num_vars, + "operators": args.operators.split(","), + "format": args.format, + "temperature": args.temperature, + "num_generations": args.num_generations, + "custom_prompt": None + } + + is_prefix = settings["format"] == "prefix" + + while True: + try: + user_input = input("\n> ").strip() + except (EOFError, KeyboardInterrupt): + print("\nGoodbye!") + break + + if not user_input: + continue + + if user_input.startswith("/"): + parts = user_input.split(maxsplit=1) + cmd = parts[0].lower() + arg = parts[1] if len(parts) > 1 else None + + if cmd == "/quit" or cmd == "/exit": + print("Goodbye!") + break + + elif cmd == "/vars" and arg: + try: + 
settings["num_vars"] = int(arg) + print(f"Variables set to {settings['num_vars']}") + except ValueError: + print("Invalid number") + + elif cmd == "/ops" and arg: + settings["operators"] = [op.strip() for op in arg.split(",")] + print(f"Operators set to: {settings['operators']}") + + elif cmd == "/format" and arg: + if arg.lower() in ["infix", "prefix"]: + settings["format"] = arg.lower() + is_prefix = settings["format"] == "prefix" + print(f"Format set to {settings['format']}") + else: + print("Invalid format. Use 'infix' or 'prefix'") + + elif cmd == "/temp" and arg: + try: + settings["temperature"] = float(arg) + print(f"Temperature set to {settings['temperature']}") + except ValueError: + print("Invalid temperature") + + elif cmd == "/n" and arg: + try: + settings["num_generations"] = int(arg) + print(f"Number of generations set to {settings['num_generations']}") + except ValueError: + print("Invalid number") + + elif cmd == "/prompt": + prompt = build_prompt( + settings["num_vars"], + settings["operators"], + "C", + settings["format"] + ) + print(f"\nCurrent prompt:\n{prompt}") + + elif cmd == "/custom" and arg: + settings["custom_prompt"] = arg + print(f"Custom prompt set") + + elif cmd == "/gen": + # Generate + if settings["custom_prompt"]: + prompt = settings["custom_prompt"] + else: + prompt = build_prompt( + settings["num_vars"], + settings["operators"], + "C", + settings["format"] + ) + + print(f"\nGenerating {settings['num_generations']} expressions...") + print("-"*40) + + outputs = generate_expressions( + model, tokenizer, prompt, device, + num_generations=settings["num_generations"], + temperature=settings["temperature"], + top_p=args.top_p, + top_k=args.top_k, + max_new_tokens=args.max_new_tokens + ) + + valid_count = 0 + for i, output in enumerate(outputs): + expr_str = extract_expression(output) + validation = validate_expression(expr_str, is_prefix) + print_generation_result(i, expr_str, validation) + if validation["valid"]: + valid_count += 1 + 
+ print("-"*40) + print(f"Valid: {valid_count}/{len(outputs)}") + + else: + print(f"Unknown command: {cmd}") + + else: + # Treat as custom prompt and generate + prompt = user_input if "<|startofex|>" in user_input else user_input + " <|startofex|>" + + print(f"\nGenerating {settings['num_generations']} expressions...") + print("-"*40) + + outputs = generate_expressions( + model, tokenizer, prompt, device, + num_generations=settings["num_generations"], + temperature=settings["temperature"], + top_p=args.top_p, + top_k=args.top_k, + max_new_tokens=args.max_new_tokens + ) + + valid_count = 0 + for i, output in enumerate(outputs): + expr_str = extract_expression(output) + validation = validate_expression(expr_str, is_prefix) if args.validate else None + print_generation_result(i, expr_str, validation) + if validation and validation["valid"]: + valid_count += 1 + + if args.validate: + print("-"*40) + print(f"Valid: {valid_count}/{len(outputs)}") + + +def main(): + args = parse_args() + + # Set seed if provided + if args.seed is not None: + torch.manual_seed(args.seed) + + # Load model + model, tokenizer, device = load_model_and_tokenizer( + args.model_path, args.base_model, args.device + ) + + # Interactive mode + if args.interactive: + interactive_mode(model, tokenizer, device, args) + return + + # Build or use custom prompt + if args.custom_prompt: + prompt = args.custom_prompt + else: + operators = [op.strip() for op in args.operators.split(",")] + prompt = build_prompt( + args.num_vars, + operators, + args.constants, + args.format + ) + + print("\n" + "="*60) + print("SERIGUELA - Expression Generator") + print("="*60) + print(f"Model: {args.model_path}") + print(f"Format: {args.format}") + print(f"Temperature: {args.temperature}") + print("-"*60) + print("Prompt:") + print(prompt) + print("-"*60) + + # Generate + is_prefix = args.format == "prefix" + + outputs = generate_expressions( + model, tokenizer, prompt, device, + num_generations=args.num_generations, + 
max_new_tokens=args.max_new_tokens,
+        temperature=args.temperature,
+        top_p=args.top_p,
+        top_k=args.top_k
+    )
+
+    print(f"\nGenerated {len(outputs)} expressions:")
+    print("-"*60)
+
+    valid_count = 0
+    for i, output in enumerate(outputs):
+        expr_str = extract_expression(output)
+        validation = validate_expression(expr_str, is_prefix) if args.validate else None
+        print_generation_result(i, expr_str, validation)
+        if validation and validation["valid"]:
+            valid_count += 1
+
+    if args.validate:
+        print("-"*60)
+        print(f"\nSummary: {valid_count}/{len(outputs)} valid expressions ({valid_count/len(outputs)*100:.1f}%)")
+
+    print("="*60)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/scripts/load_to_hf.py b/scripts/load_to_hf.py
new file mode 100644
index 0000000000000000000000000000000000000000..779b808991af2e34e0ca20345c489553a5a472b7
--- /dev/null
+++ b/scripts/load_to_hf.py
@@ -0,0 +1,218 @@
+# upload_dataset_to_hf.py
+
+import argparse
+import os
+import sys
+import subprocess
+from datasets import load_dataset, Features, Value
+from huggingface_hub import HfFolder, login
+
+# --- Helper Function to Check Git LFS ---
+def check_git_lfs_installed():
+    """Checks if git-lfs is installed and configured."""
+    try:
+        # Check if the git-lfs command exists
+        subprocess.run(["git", "lfs", "--version"], check=True, capture_output=True)
+        # Checking that git-lfs is initialized for the user is optional and
+        # setup-dependent, e.g.:
+        # subprocess.run(["git", "config", "--global", "--get", "filter.lfs.smudge"], check=True, capture_output=True)
+        return True
+    except (subprocess.CalledProcessError, FileNotFoundError):
+        print("Warning: git-lfs command not found or not configured.")
+        print("         Please install git-lfs and run 'git lfs install --system' (or --user).")
+        print("         See: https://git-lfs.com/")
+        # Optionally exit if git-lfs is strictly required
+        # sys.exit(1)
+ return False # Allow script to continue but warn user + +# --- Main Script Logic --- +def main(): + parser = argparse.ArgumentParser( + description="Upload CSV dataset splits from a local directory to the Hugging Face Hub." + ) + + # --- Required Arguments --- + parser.add_argument( + "--local_dir", + type=str, + required=True, + help="Path to the local directory containing the dataset CSV files." + ) + parser.add_argument( + "--repo_id", + type=str, + required=True, + help="The Hugging Face Hub repository ID (e.g., 'username/my-equation-dataset')." + ) + parser.add_argument( + "--data_column", + type=str, + required=True, + help="Name of the column in the CSV files containing the actual data (e.g., 'text', 'equation')." + ) + + # --- Optional Arguments --- + parser.add_argument( + "--train_filename", + type=str, + default=None, + help="Filename of the training CSV within local_dir (e.g., 'train_data.csv')." + ) + parser.add_argument( + "--val_filename", + type=str, + default=None, + help="Filename of the validation CSV within local_dir (e.g., 'validation_set.csv')." + ) + parser.add_argument( + "--test_filename", + type=str, + default=None, + help="Filename of the test CSV within local_dir (optional, e.g., 'test_examples.csv')." + ) + parser.add_argument( + "--hf_token", + type=str, + default=None, + help="Your Hugging Face Hub access token (with write permissions). If not provided, script will try to use cached token or prompt login." + ) + parser.add_argument( + "--private", + action='store_true', # Makes the repo private if flag is present + help="Set the Hugging Face repository to private." + ) + + args = parser.parse_args() + + print("--- Starting Dataset Upload Script ---") + + # 1. Check Git LFS + print("Checking for git-lfs...") + check_git_lfs_installed() # Warns if not found + + # 2. 
Handle Authentication + token = args.hf_token + if not token: + token = HfFolder.get_token() # Try to get cached token + + if not token: + print("\nAttempting Hugging Face login...") + try: + login() # Will prompt user if not logged in via CLI + token = HfFolder.get_token() # Get token after successful login + if not token: + raise Exception("Login seemed successful but token could not be retrieved.") + except Exception as e: + print(f"Error during Hugging Face login: {e}") + print("Please ensure you are logged in via 'huggingface-cli login' or provide a token using --hf_token.") + sys.exit(1) + else: + print("Using provided/cached Hugging Face token.") + # Optionally verify token validity here if needed, though push_to_hub will fail if invalid + + + # 3. Determine Filenames + dir_name = os.path.basename(os.path.normpath(args.local_dir)) # Gets the last part of the path + + train_file = args.train_filename if args.train_filename else f"train_{dir_name}.csv" + val_file = args.val_filename if args.val_filename else f"val_{dir_name}.csv" # Using 'val' as abbreviation + test_file = args.test_filename if args.test_filename else f"test_{dir_name}.csv" + + print(f"Using directory: {args.local_dir}") + print(f"Target Hub repo: {args.repo_id}") + print(f"Expecting data column: '{args.data_column}'") + print(f"Using train file: '{train_file}'") + print(f"Using validation file: '{val_file}'") + # Test file is optional, only check if default or specific name provided + if args.test_filename or os.path.exists(os.path.join(args.local_dir, test_file)): + print(f"Using test file: '{test_file}'") + else: + print("No test file specified or default test file not found, skipping.") + test_file = None # Ensure test_file is None if not used + + + # 4. 
Construct Full Paths and Check Existence + train_path = os.path.join(args.local_dir, train_file) + val_path = os.path.join(args.local_dir, val_file) + test_path = os.path.join(args.local_dir, test_file) if test_file else None + + data_files = {} + if os.path.exists(train_path): + data_files["train"] = train_path + else: + print(f"Error: Training file not found at '{train_path}'") + sys.exit(1) + + if os.path.exists(val_path): + data_files["validation"] = val_path + else: + print(f"Error: Validation file not found at '{val_path}'") + sys.exit(1) + + if test_path and os.path.exists(test_path): + data_files["test"] = test_path + elif args.test_filename: # If user specified a test file but it wasn't found + print(f"Warning: Specified test file '{args.test_filename}' not found at '{test_path}'. Skipping test split.") + + + # 5. Load Dataset Locally + print("\nLoading local CSV files...") + try: + # Define features to ensure the data column is read as string + features = Features({args.data_column: Value('string')}) + dataset_dict = load_dataset("csv", data_files=data_files, features=features) + print("Local dataset loaded successfully:") + print(dataset_dict) + + # Verify the data column exists in the loaded dataset + for split in dataset_dict: + if args.data_column not in dataset_dict[split].column_names: + print(f"Error: Column '{args.data_column}' not found in loaded '{split}' split.") + print(f"Available columns: {dataset_dict[split].column_names}") + sys.exit(1) + + except Exception as e: + print(f"Error loading dataset from CSV files: {e}") + print("Please check file paths, CSV format, and column names.") + sys.exit(1) + + # 6. 
Rename column if necessary (optional; it is often good to standardize to 'text')
+    # If you always want the main data column to be named 'text' on the Hub:
+    if args.data_column != 'text':
+        print(f"Renaming column '{args.data_column}' to 'text'...")
+        try:
+            dataset_dict = dataset_dict.rename_column(args.data_column, "text")
+            print("Column renamed successfully.")
+            print(dataset_dict)
+        except Exception as e:
+            print(f"Error renaming column: {e}")
+            # Decide whether to exit or proceed with the original column name
+            # sys.exit(1)
+
+    # 7. Push to Hub
+    print(f"\nAttempting to push dataset to Hub repository: {args.repo_id}...")
+    try:
+        dataset_dict.push_to_hub(
+            repo_id=args.repo_id,
+            private=args.private,
+            token=token  # Pass the token explicitly
+        )
+        print("\n--- Upload Successful! ---")
+        hub_url = f"https://huggingface.co/datasets/{args.repo_id}"
+        print(f"Dataset available at: {hub_url}")
+
+    except Exception as e:
+        print("\n--- Error During Upload ---")
+        print(f"An error occurred: {e}")
+        print("Possible causes:")
+        print("- Invalid Hugging Face token or insufficient permissions (needs write access).")
+        print("- Incorrect repository ID format (should be 'username/dataset_name').")
+        print("- Network issues.")
+        print("- Git LFS not installed or properly configured.")
+        print("- Conflicts if the repository already exists with incompatible content.")
+        sys.exit(1)
+
+if __name__ == "__main__":
+    main()
\ No newline at end of file
diff --git a/scripts/preprocess_data.py b/scripts/preprocess_data.py
new file mode 100644
index 0000000000000000000000000000000000000000..881dbf67fd99e58ba07e8ef831f29916c40d8505
--- /dev/null
+++ b/scripts/preprocess_data.py
@@ -0,0 +1 @@
+# Script to preprocess data (raw -> processed)
diff --git a/scripts/symbolic_rl/config.py b/scripts/symbolic_rl/config.py
new file mode 100644
index 0000000000000000000000000000000000000000..a97e31468d9fd28a02ce905dbc78a0ff3ca1ef85
--- /dev/null
+++ b/scripts/symbolic_rl/config.py
@@ -0,0
+1,17 @@
+from dataclasses import dataclass
+from typing import Optional
+
+@dataclass
+class TrainConfig:
+    model_name: str = "gpt2"
+    prompt: str = "Find the best expression for the dataset:"
+    dataset_path: str = "data/data.csv"
+    stop_reward: float = 0.99  # R²-based stopping criterion
+    max_epochs: int = 1000
+    batch_size: int = 4
+    learning_rate: float = 1e-5
+    generation_max_length: int = 64
+    device: str = "cuda"
+    output_dir: str = "checkpoints"
+    log_interval: int = 10
+    seed: int = 42
diff --git a/scripts/symbolic_rl/trainer.py b/scripts/symbolic_rl/trainer.py
new file mode 100644
index 0000000000000000000000000000000000000000..fe15039cbe4276f55fc346ee41194dbec8b34c99
--- /dev/null
+++ b/scripts/symbolic_rl/trainer.py
@@ -0,0 +1,280 @@
+import os
+import sys
+
+import numpy as np
+import torch
+from datasets import Dataset
+from peft import PeftModel, AutoPeftModelForCausalLM
+from transformers import AutoTokenizer
+from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead
+
+# Add path for the Expression class
+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../classes')))
+from expression import Expression
+from dataset import RegressionDataset
+
+# === Reward function ===
+def compute_reward(expression_str: str) -> float:
+    """Score an expression against the module-level (X, y): the fitted R²,
+    floored at 0.1 for valid expressions, -1.0 for invalid ones."""
+    try:
+        expr = Expression(expression_str)
+
+        # Check if the expression is valid and can be evaluated
+        if expr.is_valid_on_dataset(X):
+            score = expr.fit_constants(X, y)
+            return max(0.1, (float(score) if np.isfinite(score) else -1.0))
+        else:
+            # print(f"Invalid expression: {expression_str}")
+            return -1.0
+    except Exception as e:
+        # print(f"Error evaluating expression: {expression_str} - {e}")
+        return -1.0
+
+# === Helper to extract expression ===
+def extract_expression(response: str) -> str:
+    return response.split("expr: ")[1].split("<|endoftext|>")[0].strip()
+
+# === Load Data ===
+#reg = 
RegressionDataset('../data/evaluate/srsd-feynman_hard/train', 'feynman-bonus.12.txt', delimiter=' ')
+reg = RegressionDataset('./data/evaluate/srsd-feynman_easy/train', 'feynman-i.18.16.txt', delimiter=' ')
+X, y = reg.get_numpy()
+
+# === Configs ===
+BASE_MODEL = "augustocsc/Se124M100KInfPrompt_EOS_Merged"
+LORA_REPO = "augustocsc/Se124M100KInfPrompt_EOS_Merged"
+TOKENIZER_REPO = LORA_REPO
+
+# ppo_config = PPOConfig(
+#     #model_name=BASE_MODEL,
+#     learning_rate=1e-5,
+#     batch_size=32,
+#     mini_batch_size=8,
+#     gradient_accumulation_steps=1,
+# )
+
+model = AutoModelForCausalLMWithValueHead.from_pretrained(BASE_MODEL)
+ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(BASE_MODEL)
+tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_REPO)
+
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+model = model.to(device)
+ref_model = ref_model.to(device)
+
+# Surface CUDA errors at the failing call (useful when debugging generation)
+os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
+
+
+def get_safe_functions(X, functions=['log', 'sqrt', 'asin', 'tan', 'abs', 'exp', 'sin', 'cos']):
+    """
+    Returns the subset of `functions` that is safe to apply to all columns of X.
+
+    Parameters:
+        X: np.ndarray of shape (n_samples, n_features)
+        functions: list of function names to check
+
+    Returns:
+        List of function names that are safe to use given the data
+    """
+    safe_functions = []
+
+    for fn in functions:
+        if fn in {'sin', 'cos', 'exp', 'abs'}:
+            # These are defined for all real values
+            safe_functions.append(fn)
+
+        elif fn == 'log':
+            if np.all(X > 0):
+                safe_functions.append(fn)
+
+        elif fn == 'sqrt':
+            if np.all(X >= 0):
+                safe_functions.append(fn)
+
+        elif fn == 'asin':
+            if np.all((X >= -1) & (X <= 1)):
+                safe_functions.append(fn)
+
+        elif fn == 'tan':
+            # If cos(x) ≈ 0 anywhere, tan(x) will explode, so we check
+            # np.cos for values near π/2, 3π/2, etc.
+            cos_vals = np.cos(X)
+            if np.all(np.abs(cos_vals) > 1e-6):  # adjustable tolerance
+                safe_functions.append(fn)
+
+        # else: skip unknown functions
+
+    return safe_functions
+
+
+safe_functions = get_safe_functions(X)
+
+from tqdm import tqdm
+
+ppo_config = PPOConfig(
+    model_name=None,               # the model is set manually below
+    learning_rate=1e-5,
+    batch_size=1024,               # total prompts/responses per step
+    mini_batch_size=64,            # 16 minibatches per batch
+    gradient_accumulation_steps=1,
+    ppo_epochs=4,                  # 4 passes per minibatch
+    log_with=None,                 # or "wandb"
+    optimize_cuda_cache=True,      # improves memory usage on the A100
+)
+
+# === PPO Trainer ===
+ppo_trainer = PPOTrainer(
+    config=ppo_config,
+    tokenizer=tokenizer,
+    model=model,
+    ref_model=ref_model,
+)
+
+# Define the prompt with the safe functions
+# PROMPT = f"""
+# vars: x_1, x_2, x_3
+# oper: *, +, /, **, {', '.join(safe_functions)}
+# cons: C
+# expr:"""
+
+PROMPT = f"""
+vars: {", ".join([f"x_{i+1}" for i in range(X.shape[1])])}
+oper: *, sin
+cons: C
+expr:"""
+
+# === Dummy dataset ===
+dummy_dataset = Dataset.from_dict({
+    "prompt": [PROMPT] * 1024
+})
+
+# Save the current timestamp for logging
+import datetime
+import json
+import subprocess
+now = datetime.datetime.now()
+timestamp = now.strftime("%Y-%m-%d_%H-%M")
+
+# Get the device of the model
+device = next(model.parameters()).device
+
+# === PPO Training Loop ===
+# Tokenize the prompt and convert it to tensors
+inputs = tokenizer([PROMPT] * ppo_config.batch_size, return_tensors="pt", padding=True)
+
+# Move inputs to the same device as the model
+inputs = {key: value.to(device) for key, value in inputs.items()}
+
+# Clear the terminal before starting training
+subprocess.run("clear", shell=True)
+# Convert the batch tensor into a list of individual tensors
+queries = [inputs["input_ids"][i] for i in range(inputs["input_ids"].size(0))]
+all_rewards = []
+all_responses = []
+for epoch in tqdm(range(10), desc="Training Epochs"):  # adjust as needed
+    responses = []
+ 
constants = []
+    rewards = []
+    for i in tqdm(range(ppo_config.batch_size), desc="Batch Progress", leave=False):  # nested progress bar
+        try:
+            input_ids = inputs["input_ids"][i].unsqueeze(0)
+            attention_mask = inputs["attention_mask"][i].unsqueeze(0)
+
+            # === VALIDATION PATCH ===
+            assert torch.all((input_ids >= 0) & (input_ids < model.config.vocab_size)), \
+                f"Invalid token detected: max={input_ids.max().item()}, vocab_size={model.config.vocab_size}"
+
+            # (optional)
+            model.config.pad_token_id = tokenizer.pad_token_id
+            # Rejection-sample until the generation yields a valid (non-negative) reward
+            reward = -1
+            while reward < 0:
+                output = model.generate(
+                    input_ids=input_ids,
+                    attention_mask=attention_mask,
+                    max_new_tokens=30,
+                    do_sample=True,
+                    top_k=50,
+                    top_p=0.95,
+                    temperature=0.5,
+                    eos_token_id=tokenizer.eos_token_id,
+                    pad_token_id=tokenizer.pad_token_id,
+                    return_dict_in_generate=True,
+                    output_scores=False
+                )
+                response_ids = output.sequences[0][input_ids.shape[1]:]
+                response = tokenizer.decode(response_ids, skip_special_tokens=True)
+
+                reward = compute_reward(response)
+
+        except Exception as e:
+            print(f"Error at index {i}: {e}")
+            print(f"Input IDs: {input_ids}")
+            print(f"Token range: min={input_ids.min()}, max={input_ids.max()}, vocab_size={model.config.vocab_size}")
+            raise e
+
+        responses.append(response)
+        rewards.append(reward)
+    all_responses.extend(responses)
+    all_rewards.extend(rewards)
+
+    output_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../output"))
+    os.makedirs(output_dir, exist_ok=True)
+    output_file = os.path.join(output_dir, f"responses_{timestamp}.txt")
+
+    # If the file does not exist, write the model and PPO configs at the top
+    if not os.path.exists(output_file):
+        with open(output_file, "w") as f:
+            f.write("# Model config:\n")
+            f.write(json.dumps(model.config.to_dict(), indent=2))
+            f.write("\n# PPO config:\n")
+            f.write(json.dumps(ppo_config.__dict__, indent=2))
+            f.write("\n# Responses and rewards:\n")
+
+    # Append responses and rewards for this epoch
+    with 
open(output_file, "a") as f: + for expr_str, rew in zip(responses, rewards): + f.write(json.dumps({"expression": expr_str, "reward": float(rew)}) + "\n") + + #if one reward is >= .9 break + if any(r >= 0.9 for r in rewards): + print("Reward >= 0.9 found, stopping training.") + break + + # Compute rewards with a progress bar + + import concurrent.futures + + # # Use process-based parallelism + # with concurrent.futures.ProcessPoolExecutor() as executor: + # rewards = list(tqdm(executor.map(compute_reward, responses), total=len(responses), desc="Computing Rewards", leave=False)) + + #rewards = [ compute_reward(response) for response in tqdm(responses, desc="Computing Rewards", leave=False)] + + # Convert rewards to a list of PyTorch tensors + rewards = [torch.tensor(reward, dtype=torch.float32, device=device) for reward in rewards] + + # Ensure responses are also tokenized and converted to tensors + responses = [tokenizer(response, return_tensors="pt", padding=True)["input_ids"].squeeze(0).to(device) for response in responses] + + # Pass the tokenized tensors to ppo_trainer.step() + ppo_trainer.step(queries, responses, rewards) + + # Log top expressions + top_k = 3 + sorted_responses = sorted(zip(responses, rewards), key=lambda x: -x[1]) + print(f"\nEpoch {epoch + 1} melhores expressões:") + for i, (expr, score) in enumerate(sorted_responses[:top_k]): + print(f"{i+1}. 
{tokenizer.decode(expr, skip_special_tokens=True)} -> R² = {score:.4f}") + # Print average, median, and std of rewards + avg_reward = torch.mean(torch.stack(rewards)).item() + median_reward = torch.median(torch.stack(rewards)).item() + count_invalid = sum(1 for r in rewards if r == -1.0) + print(f"Average Reward: {avg_reward:.4f}, Median Reward: {median_reward:.4f}, Invalid Count: {count_invalid}") + diff --git a/scripts/test_inference_configs.py b/scripts/test_inference_configs.py new file mode 100644 index 0000000000000000000000000000000000000000..cd21f5114970ac9e9335909200553d3807ec7628 --- /dev/null +++ b/scripts/test_inference_configs.py @@ -0,0 +1,522 @@ +""" +Test different inference configurations to find optimal generation parameters. + +This script tests various combinations of: +- Temperature (sampling randomness) +- Top-k and top-p (nucleus sampling) +- Repetition penalty +- Max length +- Stopping criteria + +Usage: + python scripts/test_inference_configs.py \ + --model_path ./output/Se124M_700K_infix_v3 \ + --num_samples 20 \ + --output_dir ./inference_tests/v3 +""" + +import argparse +import json +import logging +import sys +from pathlib import Path +from typing import Dict, List, Any +import time + +import torch +from transformers import ( + AutoTokenizer, + AutoModelForCausalLM, + StoppingCriteria, + StoppingCriteriaList, +) +from peft import PeftModel +import pandas as pd + +logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s' +) +logger = logging.getLogger(__name__) + + +class ExpressionStoppingCriteria(StoppingCriteria): + """Stop generation at <|endofex|> token.""" + + def __init__(self, tokenizer, prompt_length: int): + self.tokenizer = tokenizer + self.prompt_length = prompt_length + self.end_token_id = tokenizer.encode("<|endofex|>", add_special_tokens=False) + + def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: + # Check if we've generated the end token + 
if input_ids.shape[1] <= self.prompt_length: + return False + + # Check last few tokens for end marker + recent_tokens = input_ids[0, -len(self.end_token_id):].tolist() + return recent_tokens == self.end_token_id + + +# Inference configurations to test +INFERENCE_CONFIGS = { + "default": { + "temperature": 1.0, + "top_k": 50, + "top_p": 1.0, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": True, + "description": "Default transformers settings" + }, + "greedy": { + "temperature": 1.0, + "top_k": 1, + "top_p": 1.0, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": False, + "description": "Greedy decoding (no sampling)" + }, + "low_temp": { + "temperature": 0.3, + "top_k": 50, + "top_p": 0.9, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": True, + "description": "Low temperature for more focused output" + }, + "high_temp": { + "temperature": 1.5, + "top_k": 50, + "top_p": 0.95, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": True, + "description": "Higher temperature for more diversity" + }, + "nucleus_strict": { + "temperature": 0.7, + "top_k": 0, + "top_p": 0.8, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": True, + "description": "Strict nucleus sampling (top-p=0.8)" + }, + "nucleus_relaxed": { + "temperature": 0.7, + "top_k": 0, + "top_p": 0.95, + "repetition_penalty": 1.0, + "max_new_tokens": 128, + "do_sample": True, + "description": "Relaxed nucleus sampling (top-p=0.95)" + }, + "with_repetition_penalty": { + "temperature": 0.7, + "top_k": 50, + "top_p": 0.9, + "repetition_penalty": 1.2, + "max_new_tokens": 128, + "do_sample": True, + "description": "With repetition penalty to avoid loops" + }, + "strong_repetition_penalty": { + "temperature": 0.7, + "top_k": 50, + "top_p": 0.9, + "repetition_penalty": 1.5, + "max_new_tokens": 128, + "do_sample": True, + "description": "Strong repetition penalty" + }, + "short_generation": { + "temperature": 0.7, + "top_k": 
50, + "top_p": 0.9, + "repetition_penalty": 1.1, + "max_new_tokens": 64, + "do_sample": True, + "description": "Shorter max length (64 tokens)" + }, + "optimized": { + "temperature": 0.5, + "top_k": 40, + "top_p": 0.9, + "repetition_penalty": 1.15, + "max_new_tokens": 100, + "do_sample": True, + "description": "Optimized settings (balanced)" + }, +} + + +def load_model_and_tokenizer(model_path: str, base_model: str = "gpt2"): + """Load model and tokenizer, handling both base and LoRA models.""" + logger.info(f"Loading model from {model_path}...") + + # Load tokenizer + tokenizer = AutoTokenizer.from_pretrained(base_model) + + # Add special tokens if not present + special_tokens = { + "additional_special_tokens": ["<|startofex|>", "<|endofex|>"] + } + tokenizer.add_special_tokens(special_tokens) + tokenizer.pad_token = tokenizer.eos_token + + # Try loading as LoRA model first + try: + base = AutoModelForCausalLM.from_pretrained( + base_model, + torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, + device_map="auto" if torch.cuda.is_available() else None, + ) + base.resize_token_embeddings(len(tokenizer)) + + model = PeftModel.from_pretrained(base, model_path) + model = model.merge_and_unload() + logger.info("Loaded as LoRA model and merged") + except Exception as e: + # Load as regular model + logger.info(f"Loading as regular model (not LoRA): {e}") + model = AutoModelForCausalLM.from_pretrained( + model_path, + torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, + device_map="auto" if torch.cuda.is_available() else None, + ) + + model.eval() + logger.info(f"Model loaded on: {model.device}") + return model, tokenizer + + +def generate_with_config( + model, + tokenizer, + prompt: str, + config: Dict[str, Any], + use_stopping_criteria: bool = True +) -> tuple[str, Dict[str, Any]]: + """Generate text with specific configuration.""" + + # Encode prompt + inputs = tokenizer(prompt, return_tensors="pt") + if 
torch.cuda.is_available(): + inputs = {k: v.cuda() for k, v in inputs.items()} + + prompt_length = inputs["input_ids"].shape[1] + + # Setup stopping criteria + stopping_criteria = None + if use_stopping_criteria: + stopping_criteria = StoppingCriteriaList([ + ExpressionStoppingCriteria(tokenizer, prompt_length) + ]) + + # Generate + start_time = time.time() + with torch.no_grad(): + outputs = model.generate( + **inputs, + **{k: v for k, v in config.items() if k != "description"}, + stopping_criteria=stopping_criteria, + pad_token_id=tokenizer.eos_token_id, + ) + generation_time = time.time() - start_time + + # Decode + generated_text = tokenizer.decode(outputs[0], skip_special_tokens=False) + + # Extract only the generated part + generated_only = tokenizer.decode( + outputs[0][prompt_length:], + skip_special_tokens=False + ) + + # Statistics + stats = { + "total_tokens": outputs.shape[1], + "generated_tokens": outputs.shape[1] - prompt_length, + "generation_time": generation_time, + "tokens_per_second": (outputs.shape[1] - prompt_length) / generation_time, + } + + return generated_only, stats + + +def extract_expression(generated_text: str) -> tuple[str, str]: + """Extract expression from generated text.""" + + # Strategy 1: Look for <|endofex|> marker + if "<|endofex|>" in generated_text: + expr = generated_text.split("<|endofex|>")[0].strip() + # Remove "expr:" prefix if present + if "expr:" in expr: + expr = expr.split("expr:")[-1].strip() + return expr, "marker" + + # Strategy 2: Look for expr: prefix + if "expr:" in generated_text: + parts = generated_text.split("expr:") + if len(parts) > 1: + # Take until newline or vars: + expr = parts[1].split("\n")[0].strip() + expr = expr.split("vars:")[0].strip() + return expr, "prefix" + + # Strategy 3: Take first line + first_line = generated_text.split("\n")[0].strip() + if first_line: + return first_line, "first_line" + + return generated_text.strip(), "raw" + + +def validate_expression(expr: str) -> Dict[str, Any]: 
+ """Simple validation of expression quality.""" + issues = [] + + # Check for repetition + if len(expr) > 10: + for i in range(len(expr) - 5): + substr = expr[i:i+3] + if expr.count(substr) > 3: + issues.append(f"repetition: '{substr}'") + break + + # Check for concatenation + if "<|endofex|>" in expr: + issues.append("marker_in_expression") + + # Check for garbage tokens + garbage_tokens = [ + "Buyable", "Instore", "AndOnline", "Store", "Online", + "Product", "Available", "Shopping" + ] + for token in garbage_tokens: + if token in expr: + issues.append(f"garbage: {token}") + + # Check for valid math operators + valid_operators = ["sin", "cos", "tan", "log", "exp", "sqrt", "abs", "+", "-", "*", "/", "**"] + has_operator = any(op in expr for op in valid_operators) + + # Check for variables + has_variable = any(f"x_{i}" in expr or f"C" in expr for i in range(1, 20)) + + return { + "is_valid": len(issues) == 0 and has_operator and has_variable, + "has_operator": has_operator, + "has_variable": has_variable, + "issues": issues, + "length": len(expr), + } + + +def test_configurations( + model, + tokenizer, + test_prompts: List[str], + output_dir: Path, + configs_to_test: List[str] = None +): + """Test all configurations on test prompts.""" + + if configs_to_test is None: + configs_to_test = list(INFERENCE_CONFIGS.keys()) + + results = [] + + logger.info(f"\nTesting {len(configs_to_test)} configurations on {len(test_prompts)} prompts...") + + for config_name in configs_to_test: + config = INFERENCE_CONFIGS[config_name] + logger.info(f"\n{'='*60}") + logger.info(f"Testing config: {config_name}") + logger.info(f"Description: {config['description']}") + logger.info(f"{'='*60}") + + config_results = [] + + for i, prompt in enumerate(test_prompts): + logger.info(f"\nPrompt {i+1}/{len(test_prompts)}: {prompt[:50]}...") + + # Test with stopping criteria + try: + generated, stats = generate_with_config( + model, tokenizer, prompt, config, use_stopping_criteria=True + ) + + # 
Extract expression + expr, extraction_method = extract_expression(generated) + + # Validate + validation = validate_expression(expr) + + result = { + "config_name": config_name, + "config_description": config["description"], + "prompt": prompt, + "generated_raw": generated[:200], + "expression": expr[:200], + "extraction_method": extraction_method, + "is_valid": validation["is_valid"], + "has_operator": validation["has_operator"], + "has_variable": validation["has_variable"], + "issues": ", ".join(validation["issues"]) if validation["issues"] else "", + "expr_length": validation["length"], + "total_tokens": stats["total_tokens"], + "generated_tokens": stats["generated_tokens"], + "generation_time": stats["generation_time"], + "tokens_per_second": stats["tokens_per_second"], + } + + config_results.append(result) + results.append(result) + + # Log result + status = "✅ VALID" if validation["is_valid"] else "❌ INVALID" + logger.info(f" {status}: {expr[:80]}") + if validation["issues"]: + logger.info(f" Issues: {', '.join(validation['issues'])}") + + except Exception as e: + logger.error(f"Error generating with {config_name}: {e}") + results.append({ + "config_name": config_name, + "config_description": config["description"], + "prompt": prompt, + "error": str(e), + "is_valid": False, + }) + + # Summary for this config + valid_count = sum(1 for r in config_results if r.get("is_valid", False)) + valid_rate = valid_count / len(config_results) * 100 if config_results else 0 + + avg_tokens = sum(r.get("generated_tokens", 0) for r in config_results) / len(config_results) if config_results else 0 + avg_time = sum(r.get("generation_time", 0) for r in config_results) / len(config_results) if config_results else 0 + + logger.info(f"\n{'='*60}") + logger.info(f"Config {config_name} Summary:") + logger.info(f" Valid: {valid_count}/{len(config_results)} ({valid_rate:.1f}%)") + logger.info(f" Avg tokens: {avg_tokens:.1f}") + logger.info(f" Avg time: {avg_time:.3f}s") + 
logger.info(f"{'='*60}\n") + + return results + + +def main(): + parser = argparse.ArgumentParser( + description="Test different inference configurations" + ) + parser.add_argument( + "--model_path", + type=str, + required=True, + help="Path to model (local or HuggingFace Hub)" + ) + parser.add_argument( + "--base_model", + type=str, + default="gpt2", + help="Base model for LoRA" + ) + parser.add_argument( + "--num_samples", + type=int, + default=20, + help="Number of test prompts to generate" + ) + parser.add_argument( + "--output_dir", + type=str, + required=True, + help="Directory to save results" + ) + parser.add_argument( + "--configs", + type=str, + nargs="+", + default=None, + help="Specific configs to test (default: all)" + ) + + args = parser.parse_args() + + # Create output directory + output_dir = Path(args.output_dir) + output_dir.mkdir(parents=True, exist_ok=True) + + # Load model + model, tokenizer = load_model_and_tokenizer(args.model_path, args.base_model) + + # Create test prompts + test_prompts = [ + "vars: x_1, x_2, x_3\noper: *, +, -, sin, cos, log\ncons: C\nexpr:", + "vars: x_1, x_2\noper: *, **, exp, log\ncons: C\nexpr:", + "vars: x_1, x_2, x_3, x_4\noper: *, +, /, sqrt, abs\ncons: C\nexpr:", + "vars: x_1\noper: sin, cos, exp\ncons: C\nexpr:", + "vars: x_1, x_2, x_3\noper: *, +, -, tan\ncons: C\nexpr:", + ] * (args.num_samples // 5 + 1) + test_prompts = test_prompts[:args.num_samples] + + # Test configurations + results = test_configurations( + model, + tokenizer, + test_prompts, + output_dir, + args.configs + ) + + # Save detailed results + df = pd.DataFrame(results) + results_file = output_dir / "inference_config_results.csv" + df.to_csv(results_file, index=False) + logger.info(f"\nDetailed results saved to: {results_file}") + + # Generate summary report + summary = {} + for config_name in df["config_name"].unique(): + config_df = df[df["config_name"] == config_name] + summary[config_name] = { + "description": 
config_df["config_description"].iloc[0] if len(config_df) > 0 else "", + "valid_rate": (config_df["is_valid"].sum() / len(config_df) * 100) if len(config_df) > 0 else 0, + "total_samples": len(config_df), + "valid_count": int(config_df["is_valid"].sum()), + "avg_tokens": float(config_df["generated_tokens"].mean()) if "generated_tokens" in config_df else 0, + "avg_time": float(config_df["generation_time"].mean()) if "generation_time" in config_df else 0, + "common_issues": config_df["issues"].value_counts().head(3).to_dict() if "issues" in config_df else {}, + } + + # Sort by valid rate + summary = dict(sorted(summary.items(), key=lambda x: x[1]["valid_rate"], reverse=True)) + + summary_file = output_dir / "inference_config_summary.json" + with open(summary_file, "w") as f: + json.dump(summary, f, indent=2) + logger.info(f"Summary saved to: {summary_file}") + + # Print summary + logger.info("\n" + "="*60) + logger.info("FINAL SUMMARY") + logger.info("="*60) + for config_name, stats in summary.items(): + logger.info(f"\n{config_name}:") + logger.info(f" Description: {stats['description']}") + logger.info(f" Valid rate: {stats['valid_rate']:.1f}% ({stats['valid_count']}/{stats['total_samples']})") + logger.info(f" Avg tokens: {stats['avg_tokens']:.1f}") + logger.info(f" Avg time: {stats['avg_time']:.3f}s") + + logger.info("\n" + "="*60) + logger.info("Testing complete!") + logger.info("="*60) + + +if __name__ == "__main__": + main() diff --git a/scripts/testing_empression.py b/scripts/testing_empression.py new file mode 100644 index 0000000000000000000000000000000000000000..f7989332fb86889701978c21638181b132dccae6 --- /dev/null +++ b/scripts/testing_empression.py @@ -0,0 +1,45 @@ +import unittest +import numpy as np +import sympy as sp +import os +import sys + +# Add path for Expression class +sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../classes'))) +from expression import Expression +from dataset import RegressionDataset +import pickle 
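The test below exercises `Expression.fit_constants`, which fits the numeric `C` placeholders against data and reports R². Conceptually this is least-squares constant fitting; for the goal expression `C*(x_1 * x_2 + C)` with the two constants treated independently, it reduces to ordinary linear least squares. A self-contained numpy sketch of the idea (not the `Expression` class's actual implementation; the synthetic data replaces the Feynman file):

```python
import numpy as np

# Sketch of what constant fitting amounts to for a*(x_1*x_2) + b:
# solve for (a, b) minimizing squared error via linear least squares.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] * X[:, 1] + 2.0  # ground truth: a=3, b=2, no noise

A = np.column_stack([X[:, 0] * X[:, 1], np.ones(len(X))])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ np.array([a, b])
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(round(a, 3), round(b, 3), round(r2, 3))  # 3.0 2.0 1.0
```

On noise-free data the fit is exact and R² is 1, which is why the test's `assertGreater(r2, 0.9)` threshold is a reasonable pass criterion for real data.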
+ +class TestExpression(unittest.TestCase): + def setUp(self): + # Generate synthetic dataset + reg = RegressionDataset('./data/evaluate/srsd-feynman_easy/train', 'feynman-i.12.1.txt', delimiter=' ') + self.X, self.y = reg.get_numpy() + + + def test_complex_expression(self): + # Define the complex expression + goal_expression = "C*(x_1 * x_2 + C)" + + + try: + expr_goal = Expression(goal_expression) + + r2 = expr_goal.fit_constants(self.X, self.y) + + resolved_expr = expr_goal.resolved_expression() + best_constants = expr_goal.best_constants + + # Assert R^2 is reasonable (close to 1 for a good fit) + self.assertGreater(r2, 0.9, f"R^2 is too low: {r2}") + + # Print results for debugging + print(f"Fitted Constants: {best_constants}") + print(f"Resolved Expression (SymPy): {resolved_expr}") + print(f"R^2: {r2:.4f}") + + except Exception as e: + self.fail(f"Error processing goal expression '{goal_expression}': {e}") + +if __name__ == "__main__": + unittest.main() \ No newline at end of file diff --git a/scripts/train.py b/scripts/train.py new file mode 100644 index 0000000000000000000000000000000000000000..f00f01b27715c740f390ee316b550fe2651efe97 --- /dev/null +++ b/scripts/train.py @@ -0,0 +1,439 @@ +# train_gpt2_equations.py +# Script to fine-tune a GPT-2 model on a dataset of equations from the Hugging Face Hub. 
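Per the root cause in the analysis report, this script's data validation (further down) requires every training example to carry the `<|endofex|>` marker the model is supposed to learn as a stop signal. A correctly prepared example, inferred from the test prompts and the special tokens this script registers (the exact field layout is an assumption based on those prompts):

```python
# Hypothetical training example in the fixed format: prompt fields followed
# by the expression, wrapped in the markers this script adds to the tokenizer.
example = (
    "<|startofex|>"
    "vars: x_1, x_2\n"
    "oper: *, +, sin\n"
    "cons: C\n"
    "expr: C*x_1 + sin(x_2)"
    "<|endofex|>"
)

# This is the presence check train.py applies to decoded training samples:
assert "<|endofex|>" in example
print(example.count("<|endofex|>"))  # 1
```

The original 758,255-example dataset had 0% marker coverage, which is why the model never learned to stop generating.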
+
+# Date: April 17, 2025
+
+import argparse
+import os
+import logging
+from dotenv import load_dotenv
+import sys
+import numpy as np
+import wandb
+import random
+
+
+from datasets import load_dataset
+from transformers import (
+    AutoTokenizer,
+    AutoModelForCausalLM,
+    EarlyStoppingCallback,
+    Trainer,
+    TrainingArguments,
+    DataCollatorForLanguageModeling,
+    set_seed
+)
+
+
+from peft import LoraConfig, get_peft_model, TaskType
+
+# Configure logging
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+logger = logging.getLogger(__name__)
+
+
+# --- Preprocessing Functions ---
+
+def tokenize_function(examples, tokenizer):
+    """Applies the tokenizer to the 'text' field of the dataset examples."""
+    return tokenizer(examples["text"])
+
+def group_texts(examples, block_size):
+    """Groups texts into chunks of block_size."""
+    # Concatenate all texts.
+    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
+    total_length = len(concatenated_examples[list(examples.keys())[0]])
+    # Drop the small remainder; pad to a multiple of block_size instead if
+    # partial blocks should be kept.
+    if total_length >= block_size:
+        total_length = (total_length // block_size) * block_size
+    else:
+        # A total length below block_size can happen with very small splits;
+        # the slicing below will then produce empty lists.
+        logger.warning(f"Total length ({total_length}) is smaller than block_size ({block_size}). Chunking may produce empty data for small splits.")
+
+    # Split by chunks of block_size.
+
+    result = {
+        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
+        for k, t in concatenated_examples.items()
+    }
+    # For causal LM the model shifts labels internally, so labels are simply a
+    # copy of input_ids (Trainer would also infer them if omitted).
+    result["labels"] = result["input_ids"].copy()
+    return result
+
+# --- Main Training Function ---
+
+def main():
+    parser = argparse.ArgumentParser(description="Fine-tune GPT-2 model on an equation dataset from Hugging Face Hub.")
+
+    # --- Arguments ---
+    parser.add_argument("--model_name_or_path", type=str, default="gpt2", help="Pretrained model name or path (e.g., 'gpt2', 'gpt2-medium').")
+    parser.add_argument("--dataset_repo_id", type=str, required=True, help="Hugging Face Hub repository ID for the dataset (e.g., 'username/my-equation-dataset').")
+    parser.add_argument("--output_dir", type=str, required=True, help="Directory to save the fine-tuned model and checkpoints.")
+    parser.add_argument("--data_dir", type=str, required=True, help="Directory containing the dataset files.")
+    parser.add_argument("--data_column", type=str, default="i_prompt_n", help="Column name in the dataset to be used for training (e.g., 'i_prompt_n', 'p_prompt_n').")
+    parser.add_argument("--approach", default="infix", type=str, help="Approach to be used for training (e.g., 'infix', 'prefix').")
+
+    # Wandb arguments
+    parser.add_argument("--wandb_project", type=str, default="seriguela", help="Wandb project name.")
+    parser.add_argument("--wandb_run_name", type=str, default=None, help="Wandb run name. 
If not set, will be auto-generated.") + parser.add_argument("--wandb_entity", type=str, default=None, help="Wandb entity (team or username).") + parser.add_argument("--block_size", type=int, default=128, help="Block size for tokenizing and chunking the dataset.") + parser.add_argument("--num_train_epochs", type=int, default=3, help="Number of training epochs.") + parser.add_argument("--per_device_train_batch_size", type=int, default=8, help="Batch size per device during training.") + parser.add_argument("--per_device_eval_batch_size", type=int, default=8, help="Batch size per device during evaluation.") + parser.add_argument("--learning_rate", type=float, default=5e-5, help="Learning rate for the optimizer.") + parser.add_argument("--weight_decay", type=float, default=0.01, help="Weight decay for regularization.") + parser.add_argument("--gradient_accumulation_steps", type=int, default=1, help="Number of steps to accumulate gradients before updating weights.") + parser.add_argument("--warmup_steps", type=int, default=0, help="Number of warmup steps for the learning rate scheduler.") + parser.add_argument("--logging_steps", type=int, default=100, help="Log training metrics every N steps.") + parser.add_argument("--eval_steps", type=int, default=500, help="Evaluate on the validation set every N steps. Ignored if eval_strategy='epoch'.") + parser.add_argument("--save_steps", type=int, default=500, help="Save a checkpoint every N steps. 
Ignored if save_strategy='epoch'.")
+    parser.add_argument("--eval_strategy", type=str, default="epoch", choices=["steps", "epoch", "no"], help="Evaluation strategy ('steps', 'epoch', 'no').")
+    parser.add_argument("--save_strategy", type=str, default="epoch", choices=["steps", "epoch", "no"], help="Checkpoint saving strategy ('steps', 'epoch', 'no').")
+    parser.add_argument("--save_total_limit", type=int, default=2, help="Limit the total number of checkpoints saved.")
+    parser.add_argument("--load_best_model_at_end", action='store_true', help="Load the best model (based on evaluation loss) at the end of training.")
+    parser.add_argument("--fp16", action='store_true', help="Use mixed precision training (FP16). Requires CUDA.")
+    parser.add_argument("--push_to_hub", action='store_true', help="Push the final model to the Hugging Face Hub.")
+    parser.add_argument("--hub_model_id", type=str, default=None, help="Repository ID for pushing the model (e.g., 'username/gpt2-finetuned-equations'). Required if --push_to_hub is set.")
+    parser.add_argument("--run_name", type=str, default=None, help="Optional run name for this training (used in output_dir and hub_model_id).")
+    parser.add_argument("--lora_r", type=int, default=8, help="LoRA rank (dimension of adapter matrices).")
+    parser.add_argument("--lora_alpha", type=int, default=32, help="LoRA alpha (scaling factor).")
+    parser.add_argument("--lora_dropout", type=float, default=0.05, help="Dropout probability for LoRA layers.")
+    parser.add_argument("--seed", type=int, default=42, help="Random seed for reproducibility.")
+
+    args = parser.parse_args()
+
+    # Load variables from .env
+    load_dotenv()
+
+    # Read the Hugging Face token
+    token = os.getenv("HF_TOKEN")
+    if not token:
+        raise ValueError("Hugging Face token not found in .env.")
+
+    # Configure Wandb API key
+    wandb_api_key = os.getenv("WANDB_API_KEY")
+    if wandb_api_key:
+        os.environ["WANDB_API_KEY"] = wandb_api_key
+        wandb.login(key=wandb_api_key)
+
+    # Set seed 
for reproducibility
+    set_seed(args.seed)
+
+    # Initialize wandb
+    wandb_run_name = args.wandb_run_name or f"{args.model_name_or_path}-{args.data_dir}-{args.approach}"
+    wandb.init(
+        project=args.wandb_project,
+        name=wandb_run_name,
+        entity=args.wandb_entity,
+        config={
+            "model": args.model_name_or_path,
+            "dataset": args.dataset_repo_id,
+            "data_dir": args.data_dir,
+            "data_column": args.data_column,
+            "approach": args.approach,
+            "block_size": args.block_size,
+            "epochs": args.num_train_epochs,
+            "batch_size": args.per_device_train_batch_size,
+            "learning_rate": args.learning_rate,
+            "seed": args.seed,
+        }
+    )
+    logger.info(f"Wandb initialized: project={args.wandb_project}, run={wandb_run_name}")
+
+    logger.info(f"Starting fine-tuning with parameters: {args}")
+
+    # 1. Load Dataset from Hub or local files
+    # Check if local prepared data exists
+    local_data_dir = "./data/processed/700K_fixed"
+    local_train = os.path.join(local_data_dir, f"train_{args.data_dir}.csv")
+
+    if os.path.exists(local_train):
+        logger.info(f"Loading dataset from LOCAL files: {local_data_dir}")
+        try:
+            raw_datasets = load_dataset(
+                'csv',
+                data_files={
+                    "train": os.path.join(local_data_dir, f"train_{args.data_dir}.csv"),
+                    "validation": os.path.join(local_data_dir, f"validation_{args.data_dir}.csv"),
+                    "test": os.path.join(local_data_dir, f"test_{args.data_dir}.csv")
+                }
+            )
+            logger.info(f"Dataset loaded from local CSV files: {raw_datasets}")
+        except Exception as e:
+            logger.error(f"Failed to load local dataset: {e}")
+            logger.info(f"Falling back to Hub: {args.dataset_repo_id}")
+            raw_datasets = load_dataset(
+                args.dataset_repo_id,
+                data_files={
+                    "train": f"{args.data_dir}/train_{args.data_dir}.csv",
+                    "validation": f"{args.data_dir}/val_{args.data_dir}.csv",
+                    "test": f"{args.data_dir}/test_{args.data_dir}.csv"
+                }
+            )
+            logger.info(f"Dataset loaded from Hub: {raw_datasets}")
+    else:
+        logger.info(f"Loading dataset from Hub: {args.dataset_repo_id}")
+        try:
+            # Load the dataset with explicit files for each split
+            raw_datasets = load_dataset(
+                args.dataset_repo_id,
+                data_files={
+                    "train": f"{args.data_dir}/train_{args.data_dir}.csv",
+                    "validation": f"{args.data_dir}/val_{args.data_dir}.csv",
+                    "test": f"{args.data_dir}/test_{args.data_dir}.csv"
+                }
+            )
+            logger.info(f"Dataset loaded: {raw_datasets}")
+        except Exception as e:
+            logger.error(f"Failed to load dataset: {e}")
+            sys.exit(1)
+
+    # Rename the data column to 'text'
+    logger.info(f"Renaming column '{args.data_column}' to 'text'")
+    raw_datasets = raw_datasets.map(
+        lambda x: {"text": x[args.data_column]},
+        remove_columns=raw_datasets["train"].column_names
+    )
+    logger.info(f"Dataset after column rename: {raw_datasets}")
+
+    # Basic validation: Check for train/validation splits
+    if "train" not in raw_datasets:
+        raise ValueError("Dataset missing 'train' split.")
+    if args.eval_strategy != "no" and "validation" not in raw_datasets:
+        raise ValueError("Dataset missing 'validation' split, required for evaluation.")
+
+    # 2. Load Tokenizer
+    logger.info(f"Loading tokenizer for model: {args.model_name_or_path}")
+    try:
+        tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)  # consider use_fast=True for faster tokenization
+
+        # Handle GPT-2 specific padding token if necessary
+        if tokenizer.pad_token is None and "gpt2" in args.model_name_or_path.lower():
+            logger.warning("GPT-2 tokenizer does not have a default pad token. 
Setting pad_token = eos_token.") + tokenizer.pad_token = tokenizer.eos_token + + # Adding special tokens + tokenizer.add_special_tokens({"additional_special_tokens": ["<|startofex|>", "<|endofex|>"]}) + + # Verify special tokens were added correctly + start_token_id = tokenizer.convert_tokens_to_ids("<|startofex|>") + end_token_id = tokenizer.convert_tokens_to_ids("<|endofex|>") + + if start_token_id == tokenizer.unk_token_id or end_token_id == tokenizer.unk_token_id: + logger.error("Special tokens not properly added to tokenizer!") + sys.exit(1) + + logger.info(f"Special token IDs: <|startofex|>={start_token_id}, <|endofex|>={end_token_id}") + + except Exception as e: + logger.error(f"Failed to load tokenizer: {e}") + sys.exit(1) + + # 3. Preprocess Dataset (Tokenize & Chunk) + logger.info("Tokenizing dataset...") + # Need functools.partial or lambda if tokenize_function needs tokenizer arg with map + tokenized_datasets = raw_datasets.map( + lambda examples: tokenize_function(examples, tokenizer), + batched=True, + # num_proc=4, # Optional: Use multiple processes for faster tokenization + remove_columns=raw_datasets["train"].column_names # Remove all original columns + ) + logger.info("Tokenization complete.") + + logger.info(f"Grouping texts into blocks of size: {args.block_size}") + # Need functools.partial or lambda if group_texts needs block_size arg with map + lm_datasets = tokenized_datasets.map( + lambda examples: group_texts(examples, args.block_size), + batched=True, + # num_proc=4 # Optional: Use multiple processes + ) + logger.info("Grouping complete.") + logger.info(f"Processed dataset structure: {lm_datasets}") + + # Ensure datasets aren't empty after processing + if not lm_datasets["train"]: + logger.error("Training dataset is empty after processing. Check block_size and original data.") + sys.exit(1) + if args.eval_strategy != "no" and not lm_datasets["validation"]: + logger.warning("Validation dataset is empty after processing. 
Evaluation might fail or be skipped.") + + # Validate that training data contains special tokens + logger.info("Validating special tokens in training data...") + + sample_indices = random.sample(range(len(lm_datasets["train"])), min(10, len(lm_datasets["train"]))) + valid_samples = 0 + + for idx in sample_indices: + sample = lm_datasets["train"][idx] + decoded = tokenizer.decode(sample["input_ids"]) + + if "<|endofex|>" in decoded: + valid_samples += 1 + + if valid_samples == 0: + logger.error("No training samples contain <|endofex|> marker!") + logger.error("Training data was not properly prepared. Use prepare_training_data_fixed.py") + sys.exit(1) + + logger.info(f"Validation: {valid_samples}/{len(sample_indices)} samples contain end markers") + + + # 4. Load Model + logger.info(f"Loading pretrained model: {args.model_name_or_path}") + try: + base_model = AutoModelForCausalLM.from_pretrained(args.model_name_or_path) + + # Update with tokenizer special tokens + base_model.resize_token_embeddings(len(tokenizer)) + + # Configure model to use <|endofex|> as EOS for generation + end_token_id = tokenizer.convert_tokens_to_ids("<|endofex|>") + base_model.config.eos_token_id = end_token_id + logger.info(f"Configured model EOS token: {end_token_id} (<|endofex|>)") + + except Exception as e: + logger.error(f"Failed to load model: {e}") + sys.exit(1) + + + # Define LoRA configuration + lora_config = LoraConfig( + task_type=TaskType.CAUSAL_LM, # Specify task type + r=args.lora_r, # LoRA rank (dimension of adapter matrices, e.g., 8, 16, 32) + lora_alpha=args.lora_alpha, # LoRA alpha (scaling factor, often 2*r) + target_modules=["c_attn"], # Modules to apply LoRA to in GPT-2. 'c_attn' often covers query/key/value projections. May need adjustment based on exact model variant. + lora_dropout=args.lora_dropout, # Dropout probability for LoRA layers + bias="none" # Usually set to 'none' or 'all' + # ... 
other LoraConfig parameters
+    )
+
+    # Apply PEFT
+    logger.info("Applying PEFT (LoRA) configuration to the model...")
+    model = get_peft_model(base_model, lora_config)
+
+    for name, param in model.named_parameters():
+        if param.requires_grad:
+            logger.info(f"Param will be trained: {name} | requires_grad={param.requires_grad}")
+
+    model.train()
+
+    requires_grad_params = [p for p in model.parameters() if p.requires_grad]
+    if not requires_grad_params:
+        logger.error("No parameters have requires_grad=True. The model is frozen and cannot be trained.")
+        sys.exit(1)
+
+    model.print_trainable_parameters()  # Shows how few parameters are actually trainable
+    #model.gradient_checkpointing_enable()
+
+    # 5. Configure Training Arguments
+    logger.info("Configuring training arguments...")
+
+    # Determine effective values based on validation set availability
+    has_validation = "validation" in lm_datasets and lm_datasets["validation"]
+    effective_load_best = args.load_best_model_at_end and has_validation
+    effective_eval_strategy = args.eval_strategy if has_validation else "no"
+
+    training_args = TrainingArguments(
+        output_dir=args.output_dir,
+        overwrite_output_dir=True,  # Be careful with this in production
+        num_train_epochs=args.num_train_epochs,
+        per_device_train_batch_size=args.per_device_train_batch_size,
+        per_device_eval_batch_size=args.per_device_eval_batch_size,
+        learning_rate=args.learning_rate,
+        weight_decay=args.weight_decay,
+        gradient_accumulation_steps=args.gradient_accumulation_steps,
+        warmup_steps=args.warmup_steps,
+        logging_dir=os.path.join(args.output_dir, 'logs'),  # Log Tensorboard data within output_dir
+        logging_steps=args.logging_steps,
+        eval_strategy=effective_eval_strategy,
+        save_strategy=args.save_strategy,
+        save_steps=args.save_steps if args.save_strategy == "steps" else 500,  # Default save_steps if strategy is steps but value not provided
+        save_total_limit=args.save_total_limit,
load_best_model_at_end=effective_load_best, + metric_for_best_model="eval_loss" if effective_load_best else None, + greater_is_better=False if effective_load_best else None, + fp16=args.fp16, + report_to="wandb", + run_name=wandb_run_name, + push_to_hub=args.push_to_hub, + hub_model_id=args.hub_model_id if args.push_to_hub else None, + hub_token=token if args.push_to_hub else None, # Use the obtained token + seed=args.seed, + # Add deepspeed config path if using deepspeed + # deepspeed=args.deepspeed_config_path + ) + + # Data collator - for CLM, pads inputs dynamically. + # With chunking, sequences should already be block_size, but this handles potential variations/labels. + # `mlm=False` specifies Causal LM. + data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) + + # 6. Initialize Trainer + logger.info("Initializing Trainer...") + trainer = Trainer( + model=model, + args=training_args, + train_dataset=lm_datasets["train"], + eval_dataset=lm_datasets.get("validation"), # Use .get() to handle missing validation split gracefully if eval_strategy is 'no' + tokenizer=tokenizer, + data_collator=data_collator, + #compute_metrics=compute_metrics, # Optional: Define a function for custom eval metrics besides loss/perplexity + callbacks=[EarlyStoppingCallback(early_stopping_patience=2)] if effective_load_best else None, + ) + + # 7. 
Start Training + logger.info("*** Starting Training ***") + try: + train_result = trainer.train() + logger.info("Training finished.") + + # Log metrics + metrics = train_result.metrics + trainer.log_metrics("train", metrics) + trainer.save_metrics("train", metrics) + + # Save final model and tokenizer + logger.info(f"Saving final model to {args.output_dir}") + trainer.save_model() # Saves model, tokenizer, config, training args + # No need to call trainer.save_state() explicitly here unless needed outside Trainer's saves + tokenizer.save_pretrained(args.output_dir) + + except Exception as e: + logger.error(f"An error occurred during training: {e}") + sys.exit(1) + + + # 8. Evaluate (Optional, but good practice if validation set exists) + if training_args.do_eval and lm_datasets.get("validation"): # Check if evaluation was configured AND validation set exists + logger.info("*** Evaluating Final Model ***") + eval_metrics = trainer.evaluate() + logger.info(f"Evaluation metrics: {eval_metrics}") + trainer.log_metrics("eval", eval_metrics) + trainer.save_metrics("eval", eval_metrics) + + # 9. 
Push to Hub (if requested) + if args.push_to_hub: + if not args.hub_model_id: + logger.error("Cannot push to hub: --hub_model_id is required when --push_to_hub is set.") + else: + logger.info(f"Pushing final model to Hub repository: {args.hub_model_id}") + try: + # This pushes the content saved by save_model() + trainer.push_to_hub(commit_message="End of training") + logger.info("Model pushed successfully.") + except Exception as e: + logger.error(f"Failed to push model to Hub: {e}") + + # Finish wandb run + wandb.finish() + logger.info("--- Script Finished ---") + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/scripts/train_experiment.py b/scripts/train_experiment.py new file mode 100644 index 0000000000000000000000000000000000000000..aac617d674a329ec5ac99e30a5f14a6ac6f59a0a --- /dev/null +++ b/scripts/train_experiment.py @@ -0,0 +1,447 @@ +#!/usr/bin/env python3 +""" +Training script for expression generation experiments. + +Supports two formats: +- EXP-A (JSON): Uses custom <|endofex|> token +- EXP-B (EOS): Uses native GPT-2 <|endoftext|> token + +Usage: + # EXP-A (JSON format) + python scripts/train_experiment.py \ + --experiment_name exp_a_json \ + --train_file ./data/experiments/exp_a_json/train.csv \ + --output_dir ./output/exp_a_json \ + --end_marker "<|endofex|>" + + # EXP-B (EOS format) + python scripts/train_experiment.py \ + --experiment_name exp_b_eos \ + --train_file ./data/experiments/exp_b_eos/train.csv \ + --output_dir ./output/exp_b_eos \ + --end_marker "<|endoftext|>" \ + --use_native_eos +""" + +import argparse +import logging +import os +import random +import sys +from pathlib import Path + +import numpy as np +import torch +import wandb +from datasets import load_dataset +from dotenv import load_dotenv +from peft import LoraConfig, TaskType, get_peft_model +from transformers import ( + AutoModelForCausalLM, + AutoTokenizer, + DataCollatorForLanguageModeling, + EarlyStoppingCallback, + Trainer, + 
TrainingArguments,
+    set_seed,
+)
+
+# Configure logging
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s - %(levelname)s - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+
+def tokenize_function(examples, tokenizer):
+    """Tokenize the text field."""
+    return tokenizer(examples["text"])
+
+
+def group_texts(examples, block_size):
+    """Group texts into blocks of block_size."""
+    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
+    total_length = len(concatenated[list(examples.keys())[0]])
+
+    if total_length >= block_size:
+        total_length = (total_length // block_size) * block_size
+    else:
+        logger.warning(f"Total length ({total_length}) < block_size ({block_size})")
+
+    result = {
+        k: [t[i:i + block_size] for i in range(0, total_length, block_size)]
+        for k, t in concatenated.items()
+    }
+    result["labels"] = result["input_ids"].copy()
+    return result
+
+
+def validate_data_format(dataset, tokenizer, end_marker, num_samples=10, is_json_format=False):
+    """Validate that training data is in the expected format."""
+    import json as json_module
+
+    if is_json_format:
+        logger.info("Validating JSON format data...")
+        marker_to_check = '"expr":'  # JSON format has expr field
+    else:
+        logger.info(f"Validating data contains '{end_marker}'...")
+        marker_to_check = end_marker
+
+    sample_indices = random.sample(
+        range(len(dataset)),
+        min(num_samples, len(dataset))
+    )
+
+    valid_count = 0
+    for idx in sample_indices:
+        text = dataset[idx]["text"]
+        if is_json_format:
+            # For JSON format, validate it's valid JSON with an expr field
+            try:
+                obj = json_module.loads(text)
+                if "expr" in obj and "vars" in obj:
+                    valid_count += 1
+            except (json_module.JSONDecodeError, TypeError):
+                pass
+        else:
+            # For EOS format, check marker presence
+            if marker_to_check in text:
+                valid_count += 1
+
+    rate = valid_count / len(sample_indices) * 100
+    logger.info(f"Validation: {valid_count}/{len(sample_indices)} ({rate:.1f}%) valid")
+
+    if valid_count == 0:
+        logger.error("No valid samples found! 
Data not properly prepared.") + sys.exit(1) + + return rate + + +def main(): + parser = argparse.ArgumentParser( + description="Train expression generation model" + ) + + # Required arguments + parser.add_argument("--experiment_name", type=str, required=True, + help="Experiment name (e.g., 'exp_a_json', 'exp_b_eos')") + parser.add_argument("--train_file", type=str, required=True, + help="Path to training CSV file") + parser.add_argument("--output_dir", type=str, required=True, + help="Directory to save model") + + # Format options + parser.add_argument("--end_marker", type=str, default="<|endofex|>", + help="End marker token (e.g., '<|endofex|>' or '<|endoftext|>')") + parser.add_argument("--use_native_eos", action="store_true", + help="Use native GPT-2 EOS token instead of custom token") + parser.add_argument("--json_format", action="store_true", + help="Data is in JSON format (for EXP-A)") + + # Optional data arguments + parser.add_argument("--validation_file", type=str, default=None, + help="Path to validation CSV file") + parser.add_argument("--test_file", type=str, default=None, + help="Path to test CSV file") + + # Model arguments + parser.add_argument("--model_name_or_path", type=str, default="gpt2", + help="Base model name") + parser.add_argument("--block_size", type=int, default=128, + help="Block size for tokenization") + + # Training arguments + parser.add_argument("--num_train_epochs", type=int, default=3, + help="Number of training epochs") + parser.add_argument("--per_device_train_batch_size", type=int, default=8, + help="Batch size per device") + parser.add_argument("--per_device_eval_batch_size", type=int, default=8, + help="Eval batch size per device") + parser.add_argument("--gradient_accumulation_steps", type=int, default=4, + help="Gradient accumulation steps") + parser.add_argument("--learning_rate", type=float, default=5e-5, + help="Learning rate") + parser.add_argument("--weight_decay", type=float, default=0.01, + help="Weight decay") + 
parser.add_argument("--warmup_steps", type=int, default=500, + help="Warmup steps") + parser.add_argument("--fp16", action="store_true", + help="Use FP16 mixed precision") + + # LoRA arguments + parser.add_argument("--lora_r", type=int, default=8, + help="LoRA rank") + parser.add_argument("--lora_alpha", type=int, default=32, + help="LoRA alpha") + parser.add_argument("--lora_dropout", type=float, default=0.05, + help="LoRA dropout") + + # Wandb arguments + parser.add_argument("--wandb_project", type=str, default="seriguela_experiments", + help="Wandb project name") + parser.add_argument("--wandb_run_name", type=str, default=None, + help="Wandb run name") + + # Other + parser.add_argument("--seed", type=int, default=42, + help="Random seed") + parser.add_argument("--logging_steps", type=int, default=100, + help="Logging steps") + parser.add_argument("--save_steps", type=int, default=500, + help="Save checkpoint steps") + parser.add_argument("--eval_steps", type=int, default=500, + help="Evaluation steps") + parser.add_argument("--push_to_hub", action="store_true", + help="Push model to HuggingFace Hub") + parser.add_argument("--hub_model_id", type=str, default=None, + help="Hub model ID for pushing") + + args = parser.parse_args() + + # Load environment variables + load_dotenv() + + # Set seed + set_seed(args.seed) + + # Configure wandb + wandb_api_key = os.getenv("WANDB_API_KEY") + if wandb_api_key: + os.environ["WANDB_API_KEY"] = wandb_api_key + wandb.login(key=wandb_api_key) + + wandb_run_name = args.wandb_run_name or args.experiment_name + wandb.init( + project=args.wandb_project, + name=wandb_run_name, + config=vars(args) + ) + + logger.info("=" * 60) + logger.info(f"EXPERIMENT: {args.experiment_name}") + logger.info("=" * 60) + logger.info(f"End marker: {args.end_marker}") + logger.info(f"Use native EOS: {args.use_native_eos}") + logger.info(f"Train file: {args.train_file}") + logger.info(f"Output dir: {args.output_dir}") + logger.info("=" * 60) + + # Load 
dataset + logger.info("Loading dataset...") + + data_files = {"train": args.train_file} + if args.validation_file: + data_files["validation"] = args.validation_file + if args.test_file: + data_files["test"] = args.test_file + + raw_datasets = load_dataset("csv", data_files=data_files) + logger.info(f"Loaded dataset: {raw_datasets}") + + # Validate data format + validate_data_format( + raw_datasets["train"], + tokenizer=None, + end_marker=args.end_marker, + is_json_format=args.json_format + ) + + # Load tokenizer + logger.info(f"Loading tokenizer: {args.model_name_or_path}") + tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path) + + # Set padding token + if tokenizer.pad_token is None: + tokenizer.pad_token = tokenizer.eos_token + + # Add special tokens based on experiment type + if args.use_native_eos: + # EXP-B: Use native EOS token, no special tokens needed + logger.info("Using native GPT-2 EOS token (<|endoftext|>)") + end_token_id = tokenizer.eos_token_id + logger.info(f"EOS token ID: {end_token_id}") + else: + # EXP-A: Add custom <|endofex|> token + logger.info("Adding custom special tokens") + tokenizer.add_special_tokens({ + "additional_special_tokens": ["<|startofex|>", "<|endofex|>"] + }) + end_token_id = tokenizer.convert_tokens_to_ids("<|endofex|>") + logger.info(f"Custom end token ID: {end_token_id}") + + # Tokenize dataset + logger.info("Tokenizing dataset...") + tokenized_datasets = raw_datasets.map( + lambda examples: tokenize_function(examples, tokenizer), + batched=True, + remove_columns=raw_datasets["train"].column_names + ) + + # Group into blocks + logger.info(f"Grouping texts into blocks of {args.block_size}...") + lm_datasets = tokenized_datasets.map( + lambda examples: group_texts(examples, args.block_size), + batched=True + ) + + logger.info(f"Processed dataset: {lm_datasets}") + + # Validate processed data has end markers + logger.info("Validating processed data...") + sample_indices = random.sample( + 
range(len(lm_datasets["train"])), + min(10, len(lm_datasets["train"])) + ) + + valid_count = 0 + for idx in sample_indices: + sample = lm_datasets["train"][idx] + decoded = tokenizer.decode(sample["input_ids"]) + if args.end_marker in decoded: + valid_count += 1 + + logger.info(f"Processed data validation: {valid_count}/{len(sample_indices)} contain end marker") + + if valid_count == 0: + logger.error("No processed samples contain end marker! Check data format.") + sys.exit(1) + + # Load model + logger.info(f"Loading model: {args.model_name_or_path}") + model = AutoModelForCausalLM.from_pretrained(args.model_name_or_path) + + # Resize embeddings if using custom tokens + if not args.use_native_eos: + model.resize_token_embeddings(len(tokenizer)) + logger.info(f"Resized embeddings to {len(tokenizer)}") + + # Configure EOS token for generation + model.config.eos_token_id = end_token_id + logger.info(f"Model EOS token ID: {model.config.eos_token_id}") + + # Apply LoRA + logger.info("Applying LoRA configuration...") + lora_config = LoraConfig( + task_type=TaskType.CAUSAL_LM, + r=args.lora_r, + lora_alpha=args.lora_alpha, + target_modules=["c_attn"], + lora_dropout=args.lora_dropout, + bias="none" + ) + + model = get_peft_model(model, lora_config) + model.print_trainable_parameters() + model.train() + + # Training arguments + logger.info("Configuring training...") + + has_validation = "validation" in lm_datasets and len(lm_datasets["validation"]) > 0 + + training_args = TrainingArguments( + output_dir=args.output_dir, + overwrite_output_dir=True, + num_train_epochs=args.num_train_epochs, + per_device_train_batch_size=args.per_device_train_batch_size, + per_device_eval_batch_size=args.per_device_eval_batch_size, + gradient_accumulation_steps=args.gradient_accumulation_steps, + learning_rate=args.learning_rate, + weight_decay=args.weight_decay, + warmup_steps=args.warmup_steps, + logging_dir=os.path.join(args.output_dir, 'logs'), + logging_steps=args.logging_steps, + 
eval_strategy="epoch" if has_validation else "no", + save_strategy="epoch", + save_total_limit=2, + load_best_model_at_end=has_validation, + metric_for_best_model="eval_loss" if has_validation else None, + greater_is_better=False if has_validation else None, + fp16=args.fp16, + report_to="wandb", + run_name=wandb_run_name, + seed=args.seed, + ) + + # Data collator + data_collator = DataCollatorForLanguageModeling( + tokenizer=tokenizer, + mlm=False + ) + + # Trainer + logger.info("Initializing Trainer...") + callbacks = [] + if has_validation: + callbacks.append(EarlyStoppingCallback(early_stopping_patience=2)) + + trainer = Trainer( + model=model, + args=training_args, + train_dataset=lm_datasets["train"], + eval_dataset=lm_datasets.get("validation"), + tokenizer=tokenizer, + data_collator=data_collator, + callbacks=callbacks if callbacks else None, + ) + + # Train + logger.info("=" * 60) + logger.info("STARTING TRAINING") + logger.info("=" * 60) + + try: + train_result = trainer.train() + + # Log metrics + metrics = train_result.metrics + trainer.log_metrics("train", metrics) + trainer.save_metrics("train", metrics) + + # Save model + logger.info(f"Saving model to {args.output_dir}") + trainer.save_model() + tokenizer.save_pretrained(args.output_dir) + + # Save experiment info + import json + exp_info = { + "experiment_name": args.experiment_name, + "end_marker": args.end_marker, + "use_native_eos": args.use_native_eos, + "train_file": args.train_file, + "end_token_id": end_token_id, + "final_loss": metrics.get("train_loss", None), + } + with open(os.path.join(args.output_dir, "experiment_info.json"), "w") as f: + json.dump(exp_info, f, indent=2) + + logger.info("=" * 60) + logger.info("TRAINING COMPLETE") + logger.info("=" * 60) + logger.info(f"Final train loss: {metrics.get('train_loss', 'N/A')}") + logger.info(f"Model saved to: {args.output_dir}") + + except Exception as e: + logger.error(f"Training failed: {e}") + import traceback + traceback.print_exc() + 
sys.exit(1) + + finally: + wandb.finish() + + # Push to Hub if requested + if args.push_to_hub and args.hub_model_id: + logger.info(f"Pushing to Hub: {args.hub_model_id}") + try: + trainer.push_to_hub(commit_message=f"Training complete: {args.experiment_name}") + logger.info("Push successful!") + except Exception as e: + logger.error(f"Push failed: {e}") + + +if __name__ == "__main__": + main() diff --git a/scripts/train_test.py b/scripts/train_test.py new file mode 100644 index 0000000000000000000000000000000000000000..3aaafcbd57bc2fb3277fdfb173887831c974ef88 --- /dev/null +++ b/scripts/train_test.py @@ -0,0 +1,512 @@ +# train_gpt2_equations.py +# Script to fine-tune a GPT-2 model using PEFT (LoRA) on a dataset of equations. + +# Author: Your Name +# Date: April 17, 2025 # Updated dynamically if needed, or keep original date + +import argparse +import logging +import os +import sys +from datetime import datetime # For dynamic dating if preferred +from typing import Dict, Any, Optional, List, Union # For type hinting +import json # For loading training args from JSON + +# Environment variable loading +from dotenv import load_dotenv + +# Third-party libraries +import numpy as np +from datasets import load_dataset, DatasetDict, Dataset +from transformers import ( + AutoTokenizer, + AutoModelForCausalLM, + Trainer, + TrainingArguments, + DataCollatorForLanguageModeling, + set_seed, + EarlyStoppingCallback, + PreTrainedTokenizerBase, + PreTrainedModel, + TrainerCallback, +) +from peft import LoraConfig, get_peft_model, TaskType, PeftModel # Import PeftModel for type hint + +# --- Constants --- +SPECIAL_TOKENS = ["<|startofex|>", "<|endofex|>"] +DEFAULT_MODEL_NAME = "gpt2" +DEFAULT_BLOCK_SIZE = 128 +DEFAULT_EPOCHS = 3 +DEFAULT_BATCH_SIZE = 8 +DEFAULT_LR = 5e-5 +DEFAULT_WEIGHT_DECAY = 0.01 +DEFAULT_GRAD_ACCUM_STEPS = 1 +DEFAULT_LOGGING_STEPS = 100 +DEFAULT_SAVE_EVAL_STEPS = 500 +DEFAULT_SAVE_TOTAL_LIMIT = 2 +DEFAULT_SEED = 42 +DEFAULT_EVAL_STRATEGY = "epoch" +DEFAULT_SAVE_STRATEGY = "epoch"
+DEFAULT_DATA_COLUMN = "text" # Default target column after processing + +# --- Logging Configuration --- +# Configure logging at the module level +logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', + datefmt='%Y-%m-%d %H:%M:%S' +) +logger = logging.getLogger(__name__) + +# --- Helper Functions --- + +def load_hf_token() -> str: + """Loads Hugging Face token from .env file.""" + load_dotenv() + token = os.getenv("HF_TOKEN") + if not token: + logger.error("Hugging Face token (HF_TOKEN) not found in .env file.") + raise ValueError("Hugging Face token not found in .env.") + logger.info("Hugging Face token loaded successfully.") + return token + +def parse_arguments() -> argparse.Namespace: + """Parses command-line arguments.""" + parser = argparse.ArgumentParser( + description="Fine-tune GPT-2 model using PEFT (LoRA) on an equation dataset." + ) + parser.add_argument("--bf16", action='store_true', help="Use bfloat16 precision training.") + parser.add_argument("--dataloader_num_workers", type=int, default=8, help="Number of workers for data loading.") + parser.add_argument("--warmup_ratio", type=float, default=0.03, help="Ratio of total steps for learning rate warmup.") + parser.add_argument("--max_grad_norm", type=float, default=1.0, help="Maximum gradient norm for gradient clipping.") + parser.add_argument("--optim", type=str, default="adamw_torch_fused", choices=["adamw_torch_fused", "adamw_hf", "adamw_torch", "sgd"], + help="Optimizer to use during training.") + + # Model & Data Args + parser.add_argument("--model_name_or_path", type=str, default=DEFAULT_MODEL_NAME, + help="Pretrained model name or path (e.g., 'gpt2', 'gpt2-medium').") + parser.add_argument("--dataset_repo_id", type=str, required=True, + help="Hugging Face Hub repository ID for the dataset (e.g., 'username/my-equation-dataset').") + parser.add_argument("--data_dir", type=str, default="10k", + help="Directory containing the dataset files 
within the repo (optional).") + parser.add_argument("--source_data_column", type=str, default="i_simple", # Changed from args.approach based on usage + help="Column name in the *source* dataset to use for training (will be renamed to 'text').") + parser.add_argument("--block_size", type=int, default=DEFAULT_BLOCK_SIZE, + help="Block size for tokenizing and chunking.") + + # Training Hyperparameters + parser.add_argument("--num_train_epochs", type=int, default=DEFAULT_EPOCHS, help="Number of training epochs.") + parser.add_argument("--per_device_train_batch_size", type=int, default=DEFAULT_BATCH_SIZE, + help="Batch size per device during training.") + parser.add_argument("--per_device_eval_batch_size", type=int, default=DEFAULT_BATCH_SIZE, + help="Batch size per device during evaluation.") + parser.add_argument("--learning_rate", type=float, default=DEFAULT_LR, help="Learning rate.") + parser.add_argument("--lr_scheduler_type", type=str, default="linear", choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant"], + help="Learning rate scheduler type.") + parser.add_argument("--weight_decay", type=float, default=DEFAULT_WEIGHT_DECAY, help="Weight decay.") + parser.add_argument("--gradient_accumulation_steps", type=int, default=DEFAULT_GRAD_ACCUM_STEPS, + help="Steps for gradient accumulation.") + parser.add_argument("--warmup_steps", type=int, default=0, help="Learning rate scheduler warmup steps.") + + # LoRA / PEFT Parameters + parser.add_argument("--lora_r", type=int, default=8, help="LoRA rank (dimension).") + parser.add_argument("--lora_alpha", type=int, default=32, help="LoRA alpha (scaling factor).") + parser.add_argument("--lora_dropout", type=float, default=0.05, help="LoRA dropout.") + parser.add_argument("--lora_target_modules", nargs='+', default=["c_attn"], + help="Module names to apply LoRA to (e.g., 'c_attn' for GPT-2 query/key/value).") + parser.add_argument("--lora_bias", type=str, default="none", choices=["none", "all", 
"lora_only"], + help="Bias type for LoRA.") + + # Logging, Saving & Evaluation Args + parser.add_argument("--output_dir", type=str, required=True, + help="Directory to save the fine-tuned model, checkpoints, and logs.") + parser.add_argument("--overwrite_output_dir", action='store_true', + help="Overwrite the content of the output directory if it exists.") + parser.add_argument("--logging_steps", type=int, default=DEFAULT_LOGGING_STEPS, help="Log training metrics every N steps.") + parser.add_argument("--eval_steps", type=int, default=DEFAULT_SAVE_EVAL_STEPS, + help="Evaluate every N steps (if eval_strategy='steps').") + parser.add_argument("--save_steps", type=int, default=DEFAULT_SAVE_EVAL_STEPS, + help="Save checkpoint every N steps (if save_strategy='steps').") + parser.add_argument("--eval_strategy", type=str, default=DEFAULT_EVAL_STRATEGY, choices=["steps", "epoch", "no"], help="Evaluation strategy.") + parser.add_argument("--save_strategy", type=str, default=DEFAULT_SAVE_STRATEGY, choices=["steps", "epoch", "no"], + help="Checkpoint saving strategy.") + parser.add_argument("--save_total_limit", type=int, default=DEFAULT_SAVE_TOTAL_LIMIT, + help="Limit the total number of checkpoints saved.") + parser.add_argument("--load_best_model_at_end", action='store_true', + help="Load the best model (based on evaluation loss) at the end.") + parser.add_argument("--early_stopping_patience", type=int, default=None, + help="Number of evaluations with no improvement to trigger early stopping. 
Requires load_best_model_at_end.") + parser.add_argument("--special_tokens", nargs='+', default=SPECIAL_TOKENS, + help="List of special tokens to add to the tokenizer (e.g., '', '').") + + # Technical Args + parser.add_argument("--fp16", action='store_true', help="Use mixed precision training (FP16).") + parser.add_argument("--seed", type=int, default=DEFAULT_SEED, help="Random seed for reproducibility.") + parser.add_argument("--report_to", type=str, default="tensorboard", choices=["tensorboard", "wandb", "none"], + help="Where to report metrics.") + parser.add_argument("--run_name", type=str, default="train_gpt2_equations", + help="Name of the run for logging purposes.") + + + # Hugging Face Hub Args + parser.add_argument("--push_to_hub", action='store_true', help="Push the final model to the Hugging Face Hub.") + parser.add_argument("--hub_model_id", type=str, default=None, + help="Repository ID for pushing (e.g., 'username/gpt2-finetuned-equations'). Required if --push_to_hub.") + + + args = parser.parse_args() + + # --- Argument Validation --- + if args.push_to_hub and not args.hub_model_id: + raise ValueError("--hub_model_id is required when --push_to_hub is set.") + if args.early_stopping_patience is not None and not args.load_best_model_at_end: + logger.warning("--early_stopping_patience is set, but --load_best_model_at_end is False. Early stopping requires loading the best model.") + # Or raise ValueError if strictness is needed. 
+ if args.eval_strategy == "no" and (args.load_best_model_at_end or args.early_stopping_patience is not None): + raise ValueError("Cannot use --load_best_model_at_end or --early_stopping_patience without evaluation (set --eval_strategy to 'steps' or 'epoch').") + + return args + +# --- Dataset Loading and Preprocessing --- + +def load_and_prepare_dataset( + dataset_repo_id: str, + data_dir: Optional[str], + source_column: str, + target_column: str, + tokenizer: PreTrainedTokenizerBase, + block_size: int, + eval_strategy: str +) -> DatasetDict: + + """Loads the dataset, renames the source column, and tokenizes it (no block grouping is applied).""" + logger.info(f"Loading dataset from Hub: {dataset_repo_id} (data_dir: {data_dir})") + try: + raw_datasets = load_dataset(dataset_repo_id, data_dir=data_dir) + logger.info(f"Dataset loaded: {raw_datasets}") + + except Exception as e: + logger.error(f"Failed to load dataset: {e}", exc_info=True) + sys.exit(1) + + eos_text_token = tokenizer.eos_token # e.g., "<|endoftext|>" + + # --- Preprocessing Steps --- + # 1.
Rename source column to target column (e.g., 'text') + logger.info(f"Renaming column '{source_column}' to '{target_column}' and removing others.") + try: + # Define the mapping function robustly + def rename_and_keep_column(example: Dict[str, Any]) -> Dict[str, Any]: + if source_column not in example: + raise KeyError(f"Source column '{source_column}' not found in example: {list(example.keys())}") + + text = example[source_column] + + return {target_column: text + eos_text_token} # Append EOS token to the text + + # Get all column names *before* mapping to correctly remove them + column_names_to_remove = {} + for split in raw_datasets.keys(): + column_names_to_remove[split] = raw_datasets[split].column_names + + processed_datasets = DatasetDict() + for split, names in column_names_to_remove.items(): + processed_datasets[split] = raw_datasets[split].map( + rename_and_keep_column, + batched=False, # Process example by example for renaming usually safer + remove_columns=names # Remove all original columns + ) + logger.info(f"Dataset after column renaming: {processed_datasets}") + + except KeyError as e: + logger.error(f"Error during column renaming: {e}", exc_info=True) + sys.exit(1) + except Exception as e: + logger.error(f"An unexpected error occurred during column renaming/cleanup: {e}", exc_info=True) + sys.exit(1) + + + # 2. 
Tokenize + logger.info("Tokenizing dataset...") + def tokenize_function(examples: Dict[str, List[str]]) -> Dict[str, List[Any]]: + return tokenizer(examples[target_column], truncation=True) + + tokenized_datasets = processed_datasets.map( + tokenize_function, + batched=True, + remove_columns=processed_datasets["train"].column_names, + # num_proc=os.cpu_count(), # Optional: Use multiple processes for speed + desc="Running tokenizer on dataset", # Progress bar description + ) + logger.info("Tokenization complete.") + + return tokenized_datasets + + +# --- Tokenizer and Model Loading --- + +def load_tokenizer(model_name_or_path: str) -> PreTrainedTokenizerBase: + """Loads the tokenizer and adds special tokens.""" + logger.info(f"Loading tokenizer for model: {model_name_or_path}") + try: + tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) + + # Define the special tokens explicitly + SPECIAL_TOKENS = { + "pad_token": "<|pad|>", # pad token (name assumed here) + "additional_special_tokens": ["<|startofex|>", "<|endofex|>"] + + } + + # Add the special tokens + num_added = tokenizer.add_special_tokens(SPECIAL_TOKENS) + + # # Reinforce the definitions (important for compatibility with the Trainer) + #tokenizer.pad_token = "<|pad|>" + + logger.info(f"Added {num_added} special tokens: {SPECIAL_TOKENS}") + + return tokenizer + + except Exception as e: + logger.error(f"Failed to load tokenizer: {e}", exc_info=True) + sys.exit(1) + +def load_model(model_name_or_path: str, tokenizer: PreTrainedTokenizerBase, args: argparse.Namespace) -> PeftModel: + """Loads the base model, resizes embeddings, and applies PEFT (LoRA).""" + logger.info(f"Loading pretrained model: {model_name_or_path}") + try: + base_model = AutoModelForCausalLM.from_pretrained(model_name_or_path) + # Resize token embeddings to match tokenizer vocabulary size (including added tokens) + base_model.resize_token_embeddings(len(tokenizer)) + logger.info(f"Resized model token embeddings to: {len(tokenizer)}") + + except Exception as e: +
logger.error(f"Failed to load base model: {e}", exc_info=True) + sys.exit(1) + + # --- PEFT (LoRA) Configuration --- + logger.info("Configuring PEFT (LoRA)...") + lora_config = LoraConfig( + task_type=TaskType.CAUSAL_LM, + r=args.lora_r, + lora_alpha=args.lora_alpha, + target_modules=args.lora_target_modules, + lora_dropout=args.lora_dropout, + bias=args.lora_bias, + # modules_to_save = ["lm_head"], # Optional: If you want to train the lm_head as well + ) + logger.info(f"LoRA Config: {lora_config}") + + # Apply PEFT to the base model + try: + model = get_peft_model(base_model, lora_config) + logger.info("Applied PEFT (LoRA) configuration to the model.") + model.print_trainable_parameters() # Shows trainable vs total parameters + + # Basic check for trainable parameters + if not any(p.requires_grad for p in model.parameters()): + logger.error("No parameters marked as trainable after applying LoRA. Check LoRA config and target modules.") + sys.exit(1) + # model.gradient_checkpointing_enable() # Consider enabling if memory is an issue + + return model + + except Exception as e: + logger.error(f"Failed to apply PEFT (LoRA) to the model: {e}", exc_info=True) + sys.exit(1) + +# --- Trainer Initialization --- + +def initialize_trainer( + model: PeftModel, + args: TrainingArguments, + train_dataset: Dataset, + eval_dataset: Optional[Dataset], + tokenizer: PreTrainedTokenizerBase, + early_stopping_patience: Optional[int] +) -> Trainer: + """Initializes and returns the Hugging Face Trainer.""" + logger.info("Initializing Trainer...") + + # Data collator for Causal LM (handles padding and labels) + # data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) + data_collator = DataCollatorForLanguageModeling( + tokenizer=tokenizer, + mlm=False, # Causal LM does not use masked language modeling + pad_to_multiple_of=8, # Optional: Helps with performance on some GPUs + ) + + + # Callbacks + callbacks: List[TrainerCallback] = [] + if 
args.load_best_model_at_end and early_stopping_patience is not None and early_stopping_patience > 0: + early_stopping_callback = EarlyStoppingCallback(early_stopping_patience=early_stopping_patience) + callbacks.append(early_stopping_callback) + logger.info(f"Early stopping enabled with patience: {early_stopping_patience}") + + trainer = Trainer( + model=model, + args=args, + train_dataset=train_dataset, + eval_dataset=eval_dataset, # Trainer handles None eval_dataset + tokenizer=tokenizer, + data_collator=data_collator, + callbacks=callbacks if callbacks else None, + # compute_metrics=compute_metrics, # Add if custom metrics are needed + ) + logger.info("Trainer initialized.") + return trainer + +# --- Main Execution --- + +def main(): + """Main function to orchestrate the fine-tuning process.""" + start_time = datetime.now() + logger.info(f"--- Starting Fine-Tuning Script at {start_time.strftime('%Y-%m-%d %H:%M:%S')} ---") + + # 1. Parse Arguments + args = parse_arguments() + logger.info(f"Running with arguments: {vars(args)}") + + # 2. Load HF Token (only if needed) + hf_token = None + if args.push_to_hub: + hf_token = load_hf_token() + + # 3. Set Seed for Reproducibility + set_seed(args.seed) + logger.info(f"Random seed set to: {args.seed}") + + # 4. Load Tokenizer + tokenizer = load_tokenizer(args.model_name_or_path) + + # 5. 
Load and Prepare Dataset + lm_datasets = load_and_prepare_dataset( + dataset_repo_id=args.dataset_repo_id, + data_dir=args.data_dir, + source_column=args.source_data_column, + target_column=DEFAULT_DATA_COLUMN, # Use the constant target column name + tokenizer=tokenizer, + block_size=args.block_size, + eval_strategy=args.eval_strategy # Pass eval strategy to handle warnings correctly + ) + train_dataset = lm_datasets["train"] + eval_dataset = lm_datasets.get("validation") # Returns None if 'validation' doesn't exist + has_validation = eval_dataset is not None and len(eval_dataset) > 0 + if not has_validation: + logger.warning("No validation dataset found. Skipping evaluation during training.") + eval_dataset = None + + + # 6. Load Model and Apply PEFT + model = load_model(args.model_name_or_path, tokenizer, args) + + # 7. Configure Training Arguments + training_args = TrainingArguments( + output_dir=args.output_dir, + num_train_epochs=args.num_train_epochs, + per_device_train_batch_size=args.per_device_train_batch_size, + per_device_eval_batch_size=args.per_device_eval_batch_size, + learning_rate=args.learning_rate, + lr_scheduler_type=args.lr_scheduler_type, + weight_decay=args.weight_decay, + gradient_accumulation_steps=args.gradient_accumulation_steps, + warmup_steps=args.warmup_steps, + fp16=args.fp16, + bf16=args.bf16, + seed=args.seed, + eval_strategy=args.eval_strategy, + metric_for_best_model="eval_loss", # Or make this an arg + greater_is_better=False, # Or make this an arg + load_best_model_at_end=args.load_best_model_at_end, + save_strategy=args.save_strategy, # Ensure this matches eval_strategy for early stopping + save_total_limit=args.save_total_limit, + logging_dir=os.path.join(args.output_dir, "logs"), # Example + logging_steps=args.logging_steps, + report_to=args.report_to, + run_name=args.run_name, + push_to_hub=args.push_to_hub, + hub_model_id=args.hub_model_id, + hub_token=hf_token if args.push_to_hub else None, # Assuming hf_token is loaded + 
overwrite_output_dir=args.overwrite_output_dir, + optim=args.optim, + dataloader_num_workers=args.dataloader_num_workers, + warmup_ratio=args.warmup_ratio, + max_grad_norm=args.max_grad_norm, + + #label_smoothing_factor=0.1, + ) + + # 8. Initialize Trainer + trainer = initialize_trainer( + model=model, + args=training_args, + train_dataset=train_dataset, + eval_dataset=eval_dataset, + tokenizer=tokenizer, + early_stopping_patience=args.early_stopping_patience + ) + + # 9. Start Training + logger.info("*** Starting Training ***") + try: + train_result = trainer.train() + logger.info("Training finished.") + + # Log and save final training metrics + metrics = train_result.metrics + trainer.log_metrics("train", metrics) + trainer.save_metrics("train", metrics) + + # Save the final model, tokenizer, and config + logger.info(f"Saving final model and tokenizer to {training_args.output_dir}") + trainer.save_model() # Saves PEFT adapter, base model config, tokenizer, etc. + # Tokenizer is usually saved by save_model, but explicit save is harmless + tokenizer.save_pretrained(training_args.output_dir) + logger.info("Model and tokenizer saved successfully.") + + except Exception as e: + logger.error(f"An error occurred during training: {e}", exc_info=True) + sys.exit(1) + + # 10. 
Evaluate (if configured and possible) + if training_args.do_eval: # Checks if eval_strategy is not 'no' + if eval_dataset: + logger.info("*** Evaluating Final Model ***") + try: + eval_metrics = trainer.evaluate() + # Modify metrics for perplexity if desired + try: + perplexity = np.exp(eval_metrics["eval_loss"]) + eval_metrics["perplexity"] = perplexity + logger.info(f"Perplexity: {perplexity:.4f}") + except OverflowError: + eval_metrics["perplexity"] = float("inf") + logger.warning("Could not compute perplexity due to overflow in exp(eval_loss).") + + logger.info(f"Evaluation metrics: {eval_metrics}") + trainer.log_metrics("eval", eval_metrics) + trainer.save_metrics("eval", eval_metrics) + except Exception as e: + logger.error(f"An error occurred during evaluation: {e}", exc_info=True) + else: + logger.warning("Evaluation was configured but no valid evaluation dataset was found/processed. Skipping final evaluation.") + + # 11. Push to Hub (if requested) + if training_args.push_to_hub: + logger.info(f"Pushing final model artifacts to Hub repository: {training_args.hub_model_id}") + try: + # This pushes the content saved by save_model() (adapter, configs, tokenizer) + trainer.push_to_hub(commit_message="End of fine-tuning training") + logger.info("Model pushed successfully to the Hub.") + except Exception as e: + logger.error(f"Failed to push model to Hub: {e}", exc_info=True) + # Don't exit, training still completed locally + + end_time = datetime.now() + logger.info(f"--- Script Finished at {end_time.strftime('%Y-%m-%d %H:%M:%S')} ---") + logger.info(f"Total execution time: {end_time - start_time}") + +if __name__ == "__main__": + main() diff --git a/test_wc.csv b/test_wc.csv new file mode 100644 index 0000000000000000000000000000000000000000..43b7dcdfb14624f617112787f1d364a057750dd8 --- /dev/null +++ b/test_wc.csv @@ -0,0 +1,3001 @@ +,eq,support,num_points +0,x_1 - 4.52*x_4*(x_3 + 1.888*exp(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1,2.993904459*x_1 + log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2,sin(x_1**2.588),"{'x_1': {'max': 10, 'min': -10}}",500 +3,(x_1**2 + x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +4,-x_1**0.663 + (x_1 + 1.268)*tan(exp(4.336*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +5,3.807*cos(x_2 - 0.87*asin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +6,log(x_1)*cos(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +7,x_1 - 2.188,"{'x_1': {'max': 10, 'min': -10}}",500 +8,14.907321*x_1**2 + x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +9,0.521104742053153*cos((x_1 + 0.371)**2)/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +10,exp(5*x_1)/x_1**10.175046,"{'x_1': {'max': 10, 'min': -10}}",500 +11,2*x_1 + tan(x_2) - 2.85,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +12,((x_1 - 0.159)*sin(4.211*x_2) + log(sin(x_1 - 1.073)))/(x_1 - 0.159),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +13,sqrt(-x_1**4 + x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +14,-(x_1 + 1.466)*(x_2 + 0.788) + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +15,asin(2*x_1 + 2.61*cos(x_1 + 0.563)),"{'x_1': {'max': 10, 'min': -10}}",500 +16,x_1**2*log(sin(0.928*cos(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +17,x_3 + sqrt((0.625*x_1 + x_2)*(x_1 + 1.994)) + 0.072,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +18,x_1 + x_2**2 + x_3 - 0.835,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +19,x_1**6*tan(x_1)**6,"{'x_1': {'max': 10, 'min': -10}}",500 +20,x_2 - x_3 + 2.081*sin(exp(x_1)) - 1.659,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +21,x_1*(sqrt(x_2) + 
1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +22,x_1/(x_1 + exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +23,x_1**2/(x_2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +24,log(x_1) + 0.630207380786071,"{'x_1': {'max': 10, 'min': -10}}",500 +25,log(x_2) + log(sin(x_1**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +26,x_1 + tan(x_2*(x_1 - 1.868)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +27,-2.483776*x_1**1.422/(-x_2 + x_3 + 0.453),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +28,asin(x_1 - cos(x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +29,x_1**1.674 - x_2 + sqrt(x_1 - 0.288),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +30,tan(x_1)*tan(x_2**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +31,x_1**3.691/tan(2.64*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +32,x_1*(x_2 + 0.748) + x_2 - 0.676,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +33,-x_2 - sin(-x_1 + x_2 + 1.125) - 1.787,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +34,1/log(1.394*x_1 - 0.931192),"{'x_1': {'max': 10, 'min': -10}}",500 +35,x_3 + sin(x_1*(x_2 - 0.273)) + 1.192,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +36,-x_1 + 4.53030794916858*sqrt(0.0487241813938344*x_1 + (0.546448087431694*tan(x_1) + 1.0)**5),"{'x_1': {'max': 10, 'min': -10}}",500 +37,x_1*sqrt(x_2**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +38,x_1 + 2.1686401268998*sqrt(x_2) - x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +39,x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +40,x_1 + sqrt(cos(x_1))/sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +41,x_1 - 1.045678375*x_2*x_3 + 1.297*x_3 + 0.741884,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +42,x_1**0.53,"{'x_1': {'max': 10, 'min': -10}}",500 +43,16.410601*x_2*(x_1 - 1.018) + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +44,log(sin(tan(x_1) - 0.125))**5,"{'x_1': {'max': 10, 'min': -10}}",500 +45,x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +46,2.892972*x_1 + 4.634*tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +47,exp(2*x_1) + tan(x_1 - 1.898) - 0.995,"{'x_1': {'max': 10, 'min': -10}}",500 +48,11.0364963843744*x_3*tan(3.640464*(0.524109014675052*x_1 - 1)**2/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +49,x_1 - x_2*x_3**(3/2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +50,exp(x_1*(x_3 + 1.216)/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +51,log(x_1*(x_1**3 + 1.7)),"{'x_1': {'max': 10, 'min': -10}}",500 +52,0.0110969264877643*exp(x_1 + 3.903218*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +53,x_1*x_2**11.223915,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +54,4.171*x_1*sin(x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500 +55,21.169201*x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +56,(x_1 + x_2 - 2.495)*log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +57,1.0008849491314*x_1**3.326 + cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +58,(x_1 - 0.911)**3/log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +59,0.193482323589899*exp(3*x_1/2),"{'x_1': {'max': 10, 'min': -10}}",500 +60,log(cos(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +61,sqrt(-cos(x_1) + tan(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +62,x_1*x_2**3 - x_2,"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +63,2*x_1 + x_2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +64,(x_1 - 1.68)*sin(x_1 + 1.013)**3/(log(x_1) + 0.667829372575655),"{'x_1': {'max': 10, 'min': -10}}",500 +65,sin(0.142884*x_1**1.719),"{'x_1': {'max': 10, 'min': -10}}",500 +66,log(x_1 + sin(x_2) - 1.345),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +67,0.546*x_1*exp(0.43*x_2*(x_1 - 0.813)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +68,x_1**2*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +69,x_1 + x_2 + sin(x_2**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +70,sin(3.191*exp(x_1 + tan(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +71,x_2*(x_1 - x_2)*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +72,0.945345247211264*x_1**2*x_2/sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +73,(x_1 - 0.56)*cos(1.644*x_1 + 1.17546)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +74,x_1*exp(3.541*x_2)/x_2 + 4.32*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +75,1.424*tan(x_3*(x_1 + x_2)) + 1.513712,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +76,1.489*tan(4.403*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +77,x_1 - 0.993 + x_2/(x_1*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +78,4.96*x_1*sqrt(cos(x_1)*tan(4.979*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +79,1.85607111932706*sqrt(x_1)*sqrt(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +80,x_1**6,"{'x_1': {'max': 10, 'min': -10}}",500 +81,x_1 + x_2 - exp(0.984*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +82,sin(log(x_1 - cos(cos(x_2)))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 
10, 'min': -10}}",500 +83,x_1 - cos(exp(x_2)) + 2.911,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +84,-x_2**2 + x_2 + cos(x_1) - 1.438,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +85,(x_1 + x_2*(x_3 + x_4) - 1.038)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +86,cos(x_1**0.613 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +87,-(x_1 - 0.943)*(x_1 - x_2) + exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +88,0.203450539324794*x_1**3*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +89,cos(x_1 + x_2)**3 + tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +90,0.201*tan(x_1 + log(x_1 + x_2) + 0.938),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +91,cos(x_1**3) - cos(x_2) + 0.139,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +92,2*x_1 + 0.272*x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +93,1.821*x_3 + cos(2.171*x_1 + x_2) + 1.67,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +94,x_1*tan(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +95,tan(x_1**0.964*sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +96,2.169729*x_1*tan((x_2 - x_3)**0.243283),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +97,2*x_1 + (cos(x_1 + 0.828) + 0.883)**2.774,"{'x_1': {'max': 10, 'min': -10}}",500 +98,2.102*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +99,log(3.592*cos(2809.88594984426*x_1**2))**5,"{'x_1': {'max': 10, 'min': -10}}",500 +100,x_1/(x_1 + sin(x_1) - 0.777),"{'x_1': {'max': 10, 'min': -10}}",500 +101,-x_1**(3/2) + x_1**3,"{'x_1': {'max': 10, 'min': -10}}",500 +102,(x_1 + x_2*(82.540153864*x_1 
- 35.244645699928))**1.151,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +103,cos(2*x_1**2.656895*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +104,(x_1 + x_2)**(-1.239),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +105,-x_2 + log(x_1**2)**4.093425,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +106,x_1 - log(x_3 + cos(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +107,4.883*x_2*log(log(3.03*x_1 + 3.34209)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +108,log(x_1)*asin(x_1 - x_2 + 0.499),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +109,x_1 + 0.793*x_2 - 0.404736,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +110,2.085*(x_2 + 1.758)*log(log(x_1 - 1.785)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +111,2.05733926855288*(0.544959128065395*x_1 + 0.464190057977338*asin(x_1) - 1)**0.94,"{'x_1': {'max': 10, 'min': -10}}",500 +112,3.706*x_1*tan(2.013*x_1) - x_2 - 1.454,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +113,tan(2.933*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +114,x_1**2*sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +115,0.0567816657468769*x_1**4/(x_2 - 0.339),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +116,x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +117,3*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +118,tan(-x_2 + x_3 + (x_1 - 0.029)**3 + 1.594),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +119,x_1 - sqrt(x_1*(0.259*x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +120,-x_1*(-x_1 + x_2 + 1.274)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +121,exp(sqrt(x_1*(x_1 + x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +122,x_1*x_2**1.896 + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +123,x_1 + x_2*cos(43.4697587856*(0.506585612968592*x_1 - 1)**2) - 0.833,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +124,(x_1 + 0.437)*(x_2 - 1.415)/sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +125,cos(x_1**4)**5,"{'x_1': {'max': 10, 'min': -10}}",500 +126,1.638*cos(1.22359878391571*(0.874125874125874*x_1 - 1)**(3/2)),"{'x_1': {'max': 10, 'min': -10}}",500 +127,4.867*exp(0.2163861*x_1**2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +128,1/sqrt(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +129,tan(x_1)*asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +130,1.36798905608755*(x_1 + 0.933)**2*tan(x_2)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +131,19.377604*(x_2 - 0.935)*exp(x_1)*log(x_1 - 1.057),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +132,(x_1 - 0.5)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +133,cos((x_1 + x_2)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +134,x_1 - x_3 + log(x_2**2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +135,x_1**3,"{'x_1': {'max': 10, 'min': -10}}",500 +136,exp(exp(x_1/sin(x_1))) + 1.639,"{'x_1': {'max': 10, 'min': -10}}",500 +137,sin((sqrt(x_1) - x_2)/tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +138,x_1 - x_2 + log(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +139,x_1 - 1.031*x_2 - exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +140,x_1*x_2 + x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +141,4.121*x_2 - (x_1 - x_2)*asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +142,(x_1 - 0.968)*(sqrt(x_2) + log(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +143,x_1*(x_2 - 1.347)/(x_3 + exp(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +144,sin((x_1 - 0.329)/(x_2 - 0.283))/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +145,sin(x_1*sin(x_2))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +146,sqrt(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +147,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +148,-x_1**3*x_2**7.707 + x_1 + 0.157,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +149,(x_1 + 1.493)/(x_2 - 1.781),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +150,x_1*x_2*log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +151,-1.768*x_3 + asin(21.077281*x_2*(x_1 + 1.952)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +152,log(log(x_1)/(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +153,x_1 + 0.67,"{'x_1': {'max': 10, 'min': -10}}",500 +154,x_1*exp(4.114*x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +155,x_1 + exp(x_1) + cos(x_1) - 0.661,"{'x_1': {'max': 10, 'min': -10}}",500 +156,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +157,(x_1*log(x_1 + 1.921) + x_1 - x_2 - 0.158)/log(x_1 + 1.921),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +158,(4.363*x_1 - 7.098601)*sin(x_1 - 0.28),"{'x_1': {'max': 10, 'min': -10}}",500 +159,x_1**4*Abs(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +160,tan(x_1**2/x_2) - 0.071,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +161,3.309*x_1 + x_3*(x_1 + 
x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +162,1.127*sin(x_1**2*sin(x_1)**2 + x_2) - 1.475243,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +163,x_1/(x_2 + log(x_3 + 1.653) - 0.597)**(1/4),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +164,sqrt(x_1) + x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +165,x_1 - x_2 - x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +166,2.45960311115695*x_1**2*exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +167,x_1 + cos(x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +168,1.045*sin(cos(tan(x_1**2.22)) + 1.14),"{'x_1': {'max': 10, 'min': -10}}",500 +169,x_1**2*(x_3 + 3.147*exp(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +170,sqrt(asin(x_1**0.305)),"{'x_1': {'max': 10, 'min': -10}}",500 +171,(x_1 + 1.492)*(4.375*x_1 + x_2**4 - 2.848125),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +172,log(x_1*(x_2 + exp(x_1))) + 0.224742272677907,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +173,x_1*x_2*x_4/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +174,-x_1 + x_2 + 8.15961422912928*(0.509164969450102*sin(x_1) + 1.0)**3.85,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +175,(x_1 + x_1/x_2 + 0.761)**4.23052,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +176,log(1.49*x_1 - x_2/(x_1 + 0.908)**1.904),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +177,x_1**11.679*(x_1 + x_2)**13.032,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +178,-sin(3.197*x_1*(4.369*x_1 - x_2) - x_1 + 1.466),"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +179,cos(sqrt(log(x_1**5))/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +180,sin(tan(0.542965303453837*sqrt(x_1/x_2)/x_2)) - 1.59,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +181,(log(x_1) + sin(x_2))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +182,x_1*sqrt(x_2)/(x_1 + x_2 + 1.153),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +183,1.134*x_1 - 1.928934,"{'x_1': {'max': 10, 'min': -10}}",500 +184,-x_2 + log(x_1**13.227) + 0.898,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +185,(0.336*x_1 + sin(2.399*x_1) + 0.3948)/(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +186,x_1**2*(x_1 + 0.407*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +187,0.511*x_2 + x_3 + tan(0.222123500666371*x_1/x_2) + 0.203378,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +188,(-x_1*x_2 + x_1 + cos(x_1))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +189,3.287*x_1*(x_2 + x_3 - 1.147) - x_4 + 1.76,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +190,(x_1 - 0.443)*(4.233*x_1 + 0.468851),"{'x_1': {'max': 10, 'min': -10}}",500 +191,4.071*x_1 + cos(x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +192,-1.62*x_2 + (x_1 - 0.129)**2*sin(x_1)**2 - 2.66814,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +193,sqrt(x_1)*exp(x_1/2)/sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +194,x_1 - x_2**2*(log(x_2) + 1.49716466995131) - 1.21,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +195,log(x_1 + 0.335)/x_1**6.352,"{'x_1': {'max': 10, 'min': -10}}",500 +196,0.079524*x_1**0.65 + sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +197,x_1 
- (x_2 + 0.627)*(sin(x_3) + 1.226) - 0.486,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +198,exp((x_1**4 + x_2)/x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +199,2*x_1 + log(x_1 + 0.856),"{'x_1': {'max': 10, 'min': -10}}",500 +200,log(x_1)**4,"{'x_1': {'max': 10, 'min': -10}}",500 +201,5.53750587743667*x_1*x_2*exp(exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +202,(x_1 + cos(1.744*x_2))/(x_2 + 1.555),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +203,(-x_1**4 + x_1 + x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +204,4.023*(x_1 - 0.276)*(x_1*x_3 - x_2)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +205,(x_1 + x_3 + cos(x_2))/x_1,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +206,sin(2*x_1*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +207,3.688*x_1 - tan(x_1*(x_2 + 0.12) - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +208,x_1**10*log(3.08454153763779*(0.528820729772607*x_2 + 1)**1.768)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +209,x_1*x_2/(2.20635445928346*sqrt(x_1) + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +210,x_1**2*x_2**3.535/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +211,tan(x_1) - tan((1.11310376874755*x_1 - 0.404056668055361)*sqrt(0.807102502017756*x_2 - 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +212,log(x_1) + cos(4.05114679380212*exp(x_2)) - 1.21882762373949,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +213,x_1 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +214,log(x_1**2),"{'x_1': {'max': 10, 'min': 
-10}}",500 +215,exp(0.047524*(cos(x_1) - 0.344)*tan(x_1 - 0.455)),"{'x_1': {'max': 10, 'min': -10}}",500 +216,0.277293679*x_1**2*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +217,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +218,sin(x_1 - (x_2 + tan(x_2 - 1.976))**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +219,x_1 + tan(x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +220,x_2*log(x_1 + 2.98*x_2 + 5.53982),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +221,x_1*asin(2*x_1 - 2.168) + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +222,-x_1 + 2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +223,log(1.547*x_1 - 0.878696)**2.236,"{'x_1': {'max': 10, 'min': -10}}",500 +224,x_1*sin(x_2) + x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +225,x_1**8 + x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +226,2*x_1 + x_2**5.908,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +227,x_1**2*(x_2 + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +228,x_1*x_2**8.658*x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +229,1.60149616725827*x_1*(1/(x_2 + 0.267))**0.91,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +230,x_1 + tan(2.282*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +231,1.02490869893537*(0.554938956714761*x_1 + 1)**0.041779,"{'x_1': {'max': 10, 'min': -10}}",500 +232,sqrt(2)*sqrt(x_1) + x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +233,40.895244813721*x_1*(0.53276505061268*x_2 + 1)**2*exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +234,(x_1 - sin(x_1))**3,"{'x_1': {'max': 10, 'min': -10}}",500 +235,log(3.376*sin(x_1 + 0.071) - 0.935152),"{'x_1': {'max': 10, 'min': -10}}",500 +236,tan(x_1 + log(1.736*x_1 + 
x_2 + 2.00508)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +237,asin(x_1*(x_2 + 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +238,43.8731198772676*x_1*x_2**9.65,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +239,log(x_2*(x_1 + 0.99)*cos(x_2 - 1.526)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +240,x_1**5.198*(log(x_1) + 1.54756250871601),"{'x_1': {'max': 10, 'min': -10}}",500 +241,x_1/(x_2 + 2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +242,log(x_1**7)/(x_1 - 0.874),"{'x_1': {'max': 10, 'min': -10}}",500 +243,x_1**2*x_2/cos(1.497*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +244,x_1**2 + x_2**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +245,x_1**2*(x_1 + 1.235*x_2 - 0.147),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +246,x_1*(x_1 + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +247,x_3*(x_1 - x_2*(10.474708672*x_1 + 15.251175826432)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +248,-1.08731066191551*((0.99304865938431*x_2 + 1)**4 - 0.972483223103593*cos(x_1)**2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +249,x_1**2/asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +250,8577.7060542411*x_1*x_2**10.542518*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +251,x_1/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +252,log((x_1**2 + cos(x_1))/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +253,x_1*(3.516*x_1 + x_2 + exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +254,x_1*exp(-x_3)/x_2 + x_3 + 1.977,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +255,log(log(x_1**2))/2,"{'x_1': {'max': 10, 'min': -10}}",500 +256,log(x_2) + asin(1/sqrt(x_1 + 0.145)) + 1.20032218486747,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +257,-271.303724101609*(0.738552437223043*x_1 - 1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +258,(2*x_1 - x_2)/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +259,exp(x_3*(x_1 + 1.027)*(4.528*x_1 - x_2)) - 1.297,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +260,exp(-0.572*sin(x_1 - sin(x_1)**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +261,x_1 - x_2**0.545337,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +262,-0.618*x_2*(-x_1 + cos(4.474*Abs(x_1)) + 0.523),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +263,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +264,x_1**4/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +265,1.512*x_1*sqrt(1.0 - 0.149*cos(log(x_2))**0.95),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +266,12.0866634015897*x_1*x_2*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +267,x_1*x_2*(x_3 - 1)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +268,-x_3 + (x_2 + sin(x_1))**5 - 0.047,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +269,log(x_1) + 1.134*sin(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +270,0.239432*x_1 + exp(x_2) + 0.375668808,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +271,1.974*asin(x_1**3*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +272,1.755*x_1 + asin(sqrt(-x_2 + log(4.186*x_1 - 4.52088))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +273,sin(x_1)/(x_1*x_2),"{'x_1': {'max': 
10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +274,x_1 + 4.86368882934249*(0.691085003455425*tan(tan(cos(x_1))) - 1.0)**4.281,"{'x_1': {'max': 10, 'min': -10}}",500 +275,x_1*exp(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +276,(x_1 - 1.925)*(2.516*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +277,exp(log(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +278,9.33007811622196*(0.56785917092561*x_1 - 1)**2*sqrt(x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +279,tan(4.34*x_2*log(x_1) + tan(x_2)) - 1.403,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +280,(x_2 + 1/sqrt(x_1 + 0.773))**3.014,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +281,cos(x_1*(x_2 + 0.792)/Abs(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +282,x_1 + tan((4.313*x_1 + 1.824399)/x_2) + 1.233,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +283,(sin(x_1 + 1.076*x_2) - 1.783)*exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +284,asin(3.374*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +285,sin(x_1)**4*cos(x_1)**8,"{'x_1': {'max': 10, 'min': -10}}",500 +286,(x_1*tan(exp(x_1)) + x_1 + 1.757)/tan(exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +287,x_1 + (x_1 + exp(x_2))**2 - 0.89,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +288,x_2*sqrt(x_2*(x_1 + 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +289,0.985*x_1*x_2 + tan(x_2 + 1.475),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +290,x_1*log(tan(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +291,-x_1**2.515*sin(x_1 - x_2 + 1.204),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +292,x_1*x_3*asin(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +293,x_1 + cos(x_2) - 0.747,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +294,sin(x_1/(x_1 + x_2 + 0.155))**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +295,x_1**0.1835*(x_2 + log(x_3 - 1.197)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +296,2.928*exp(x_1**7.244),"{'x_1': {'max': 10, 'min': -10}}",500 +297,cos(x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500 +298,(exp(x_2) - 0.896)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +299,1.63398692810458*x_1**3/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +300,log(0.369*x_1 + 0.109902239*(x_1 + 0.251)*asin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +301,x_1**2*tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +302,(1.04166666666667*x_1 + 4.381*x_2*(x_2 - 1.577) - 1.04166666666667*x_2)/(x_2 - 1.577),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +303,x_1*x_2*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +304,x_1**2*sin(x_2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +305,(1.33*x_1 - 0.247*x_2)/x_3**2.2185,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +306,2*x_1 - x_2 + tan(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +307,1/(x_1*tan(4.804*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +308,(x_1 + x_2*(0.097336*x_1 + 0.145711992))**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +309,(x_1 - 0.378)*(x_2 + 0.466),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +310,-x_1 + (x_1 + 1.665)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +311,(x_1 + 
1.754)*(x_1 + x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +312,1.30968158962752*sqrt(x_1/(x_2 + 0.815))/(x_2 + 1.286),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +313,x_1 + x_2*x_3 + 2.677*x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +314,-x_1*tan(x_1) + x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +315,cos(x_1**2 + x_1 + 1.896),"{'x_1': {'max': 10, 'min': -10}}",500 +316,x_2**4*exp(tan(x_1)/x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +317,x_1**2*(x_1 - exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +318,log(asin(x_1/x_2)) + asin(4.67*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +319,asin(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +320,x_2*(x_1 + 0.15) + cos(x_1 - 0.273),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +321,exp(0.276*x_1 - 0.298*x_2)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +322,exp(asin(2.383*x_2 + exp(x_1) + 1.485)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +323,sqrt(x_1**2 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +324,x_1 + 4*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +325,tan(x_1**3),"{'x_1': {'max': 10, 'min': -10}}",500 +326,x_1*tan(2*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +327,x_1*(x_1 + x_2) + x_2**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +328,x_1*sqrt(x_2)*(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +329,0.098596*x_2*exp(x_1*(x_1**3 - 2.264)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +330,sqrt(sin(x_1*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 
10, 'min': -10}}",500 +331,-sin(tan(x_1*(x_2 - 1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +332,2*x_1 + x_2*(x_1 + 1.182),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +333,x_1 + log(cos(x_1 + 1.1)),"{'x_1': {'max': 10, 'min': -10}}",500 +334,cos(exp(15.293985*x_1*Abs(x_1))) + 1.87,"{'x_1': {'max': 10, 'min': -10}}",500 +335,sqrt(1/x_2)*Abs(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +336,1/cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +337,log(x_1)**9,"{'x_1': {'max': 10, 'min': -10}}",500 +338,x_2**5.244*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +339,2.52431895951623*x_1*x_2*cos(x_3 + 0.244),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +340,x_1**2/log(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +341,x_1*x_2*(x_1 + cos(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +342,cos(3.275*cos(tan(x_1) - 1.199)),"{'x_1': {'max': 10, 'min': -10}}",500 +343,x_1*(-x_3 + cos(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +344,x_1**3 + 3.366*exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +345,(x_1 + x_2)/sin(4.303*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +346,(x_1 + 0.0186550200039354*exp(4.94*x_1))/log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +347,x_1 - x_2*(x_1 - 0.526)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +348,(3.075*x_1*(x_2 - 1.089) + 0.330687830687831)/(x_2 - 1.089),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +349,sin(x_1**1.8/(x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +350,(x_1 + 0.332)/tan(x_2 + exp(x_2) - 0.123),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+351,4.159*asin(asin(4*x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +352,sin(x_2) + 0.268*tan(cos(x_1 + x_2)) + 1.223,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +353,x_1 + 1.00023966943451*(0.997008973080758*x_2 + 0.997008973080758*x_3 + 1)**0.08 + 0.9,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +354,12.752041*x_1*x_2*(-x_1 + x_2 + 0.781),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +355,-0.021904*exp(x_1)*tan(x_2) + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +356,sin(2.1972245593703*x_1*exp(-2.515*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +357,2*x_1 + exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +358,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +359,tan(x_1 - 3.214*x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +360,x_1**3.465*sin(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +361,x_2*(-x_1 + log(x_1))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +362,x_2*x_3*cos(tan(cos(x_1))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +363,Abs(3.596*x_1 + 0.050344),"{'x_1': {'max': 10, 'min': -10}}",500 +364,16.916769*x_1**2*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +365,28.0014200212241*x_1*x_2*x_3**1.427,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +366,x_1*(x_2 - x_3*exp(x_2) + 0.742),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +367,exp(3*asin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +368,sqrt(2*x_1 + asin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +369,3.28392252556497*x_1 + x_2*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +370,1.0*x_1**2 - 0.184*x_1 - 0.173*x_2 + 1.029498,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +371,x_1 + x_2/(x_3 - 3.557*cos(x_4)),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +372,-asin(x_2**3 - exp(x_1))**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +373,x_1**2*sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +374,2.038*sin(1.502*x_1*(x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +375,x_1*(1 - log(asin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +376,sin(x_1) - sin(x_2 - 0.786) + 1.033,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +377,2.091*x_1**2 + x_1*x_2/x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +378,asin(asin(sin(0.837*x_1))/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +379,(x_1 + 1.163)/sin(6.682225*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +380,exp(x_2) + log((x_1 - 0.854)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +381,(-x_1 + sin(2*x_1))**3.83,"{'x_1': {'max': 10, 'min': -10}}",500 +382,asin(3.542*x_3*(x_1 + 0.086)/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +383,asin(x_1*(x_1 + x_2**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +384,cos(2*x_1*(x_2 - 1.387)) - 0.93,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +385,exp(tan(17.006112*x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +386,x_1**5*asin(x_1)/sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +387,x_1*x_2*(x_1 + x_2 - 1.422),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +388,x_1*sin(tan(x_1)**4/x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +389,x_2*(2*x_1 + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +390,x_1*x_2 - 1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +391,log(cos(x_1)/x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +392,-x_2 + log(sin(x_1**1.954)) - 0.0638203100987274,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +393,1.09544511501033*sqrt(0.833333333333333*x_1 + x_2 - 0.833333333333333*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +394,2.771*x_3 + exp(4.66925719433188*exp(-x_2 + tan(0.743*x_1))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +395,exp(2*x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500 +396,(x_1 + 1.704)*asin(4.922*x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +397,exp(x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500 +398,3.7636*(x_1 + 0.515463917525773*x_3)**2*(x_1 + x_2 + 0.029)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +399,3.875*(x_1 - 1.718)*log(x_2 + 0.201),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +400,x_1 + cos(x_2 - x_3 + 0.77) + 0.3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +401,x_1*sin(x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500 +402,x_1 + sqrt(cos(5590.02656386361*(0.743494423791822*x_2 - 1)**5)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +403,0.219961*x_1**1.675,"{'x_1': {'max': 10, 'min': -10}}",500 +404,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +405,3.253*x_1 + cos(sin(sin(x_2))) + 0.123614,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +406,tan(sqrt(x_1) + 3.586005*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +407,-x_1 + 0.916677095633152*exp(x_1) + cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +408,log(x_1 - 
1.966)*sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +409,2.0*x_1 + 0.752*cos(x_2) + 1.12,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +410,sqrt(asin(2.957*x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +411,x_1 + tan(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +412,x_1 + x_2 + sin(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +413,x_2*log(x_1)/(x_1 - 0.913)**1.845,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +414,tan(x_1 + (x_1 + exp(x_2))**2 + 1.601),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +415,sqrt(x_2*tan(x_1)) + cos(0.466*x_1 + 0.751658),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +416,(x_1**3 + x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +417,tan(exp(0.484128*x_1))**3,"{'x_1': {'max': 10, 'min': -10}}",500 +418,1.548 - sin(-x_1 + tan(x_2) + 1.35),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +419,x_1 + log(2.469*x_1 + x_2) + 0.942,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +420,1.873*x_1 - 1.434718,"{'x_1': {'max': 10, 'min': -10}}",500 +421,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +422,9.168784*(0.330250990752972*x_1 + x_2)**2*sin(tan(x_3) + 0.017)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +423,x_1 - x_2*cos(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +424,(x_1 + 0.959)**2 + 0.769*cos(x_2) - 1.064296,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +425,1.37683923563604*(0.802568218298555*x_1 - 1)**1.454*log(x_2 + 1.205),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +426,exp(-x_1 + 0.641*sin(0.823*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +427,sqrt(cos(x_2*(x_1 + 
x_1**7.005))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +428,exp(asin(2.17761337247915*sqrt(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +429,x_1*(x_1 + sqrt(x_1 - x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +430,x_1**5.409 + x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +431,x_1*x_2*x_3 + exp(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +432,4.588*x_1*(-x_1 + sqrt(x_2) + 0.054),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +433,(x_1 + 0.629)**2 - cos(x_1**3),"{'x_1': {'max': 10, 'min': -10}}",500 +434,asin(log(x_1*log(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +435,x_3 + sqrt(x_1 - 2*x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +436,log(x_1*x_2**5) + tan(3.106*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +437,3.92*exp(5.964*x_1) + 5.67224,"{'x_1': {'max': 10, 'min': -10}}",500 +438,x_1**7*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +439,x_1*(log(2.472*x_1 - 1.611744) + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +440,cos(tan(x_1))**3,"{'x_1': {'max': 10, 'min': -10}}",500 +441,x_1 - cos((x_1 - exp(x_1))**5),"{'x_1': {'max': 10, 'min': -10}}",500 +442,exp(x_1*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +443,x_1*sin(x_1 - 0.797) + 2.174*x_2 + 1.723982,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +444,x_2*(x_1 + exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +445,asin(sin(x_1 + x_2 + 0.525) + 0.636),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +446,(x_3 + x_4 + 2.574)*exp(1.063*x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 
'min': -10}}",500 +447,x_1**2 - x_2**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +448,x_1 - x_2 - 1.57204612654973*sqrt(0.550660792951542*x_1 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +449,x_3*(-174.013667147776*x_1*x_2 + 3.632*x_1 - 7.220416),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +450,x_1/x_2 + 0.455*sin(x_3 + 1.728) - 0.115115,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +451,x_1*tan(x_2**2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +452,-asin(x_1 - 0.68),"{'x_1': {'max': 10, 'min': -10}}",500 +453,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +454,exp(x_1*x_3/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +455,x_1 + x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +456,-x_1*exp(x_2) + cos(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +457,x_1*(x_2 - 0.116)*(x_1 + asin(x_1) - 0.13),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +458,x_2**2 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +459,tan(4.917*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +460,Abs(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +461,x_1 + x_2 + x_3 + cos(4.654*x_2 - 8.265504),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +462,x_1 + asin(2*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +463,(4.721*x_1*(3.398*x_1 + x_2) + x_2)/(3.398*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +464,exp(x_1) + exp(sin(x_1)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +465,x_1*asin(sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +466,x_1**2*(sin(x_1) + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +467,x_2 + 
3.374569*(0.544365813826892*x_1 + 1)**2 + 1.96*tan(x_3) - 0.3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +468,-x_2 + x_3 + 2.83*exp(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +469,24309.5363989501*exp(9.574*x_1)/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +470,x_1*x_2*exp(x_1)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +471,(2*x_1 + sin(x_1 + 1.976))/(x_2 - 1.689),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +472,x_2**0.397854*(x_1 + exp(tan(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +473,x_1*cos((x_1 + 1.825)/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +474,x_1 + x_2 - (x_1**3)**2.486,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +475,x_1/(x_1 + sin(x_1) - 0.119),"{'x_1': {'max': 10, 'min': -10}}",500 +476,(x_1**3)**(-0.911),"{'x_1': {'max': 10, 'min': -10}}",500 +477,sin(tan(0.502681*x_1*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +478,2497.30651297039*(0.736377025036819*x_1 - 0.158156577542272*x_2 - 1)**4.242,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +479,4.81*x_1 - log(x_1**3),"{'x_1': {'max': 10, 'min': -10}}",500 +480,tan(x_1*x_2*x_3)**4,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +481,exp(x_1 + 1108.58371965806*(0.246062992125984*x_2 - x_3)**5),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +482,x_1*(-x_2 + cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +483,x_1 - 0.513*x_2 + 20.853789894025*(0.951474785918173*x_1 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +484,(x_1 + 0.072)/(x_2 - sin(x_1 - 1.496)),"{'x_1': {'max': 
10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +485,65790054728.6513*(0.536193029490617*x_1 + 0.125454616165329*x_2 - 1)**12,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +486,((x_1 + x_2*(x_2 - 0.097))/(x_2 - 0.097))**0.183,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +487,cos(sin(x_1/tan(2.605*x_2)))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +488,x_1 + exp(5*x_1/2),"{'x_1': {'max': 10, 'min': -10}}",500 +489,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +490,(x_1 - exp(2*x_1) - 1.702)*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +491,113.704523513295*(0.623052959501558*x_1 - 1)**10.005*cos(x_2**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +492,3.133*x_1 + sin(tan(x_2) - 1.889) - 1.919,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +493,x_3**1.948 + (x_1 + 0.52)**3.534 - exp(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +494,(x_1 + 0.758)*log(3*x_1 - 2.97),"{'x_1': {'max': 10, 'min': -10}}",500 +495,2.618*x_1 - 1.222606 + asin(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +496,(0.709219858156028*x_1 + 1)**9.087*(22.6962484382686*x_2 + 35.406147563699),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +497,x_2 + exp(6.677*x_1) + 0.118,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +498,x_1**3*(x_1 + x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +499,x_3*(x_1 + sin(2.955*x_1*x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +500,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +501,3.288008303*(0.672494956287828*x_1 + 1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +502,-x_2 + exp(x_1**8.646),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+503,sqrt(asin(x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +504,x_1*(x_2 + cos(x_1))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +505,0.932*x_2 + log(asin(x_1)) - 0.301036,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +506,cos(sin(x_2*(69.883617159*tan(4.419*x_1) + 1.0))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +507,x_1**2/cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +508,x_2*(x_1 - 1.07028340080972)*(x_1 - 0.435)/(x_1 - 1.88),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +509,sin(x_1**6.087),"{'x_1': {'max': 10, 'min': -10}}",500 +510,asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +511,x_1*(x_2 + tan(x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +512,log(1.579*x_1 + (x_1 + x_2)**2.843 - 1.414784),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +513,sin(sin(x_2**3*(x_1 - 1)**3)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +514,log(x_1 - 0.005) + asin(1.3821298327556*(0.68073519400953*x_1 + 1)**0.8415),"{'x_1': {'max': 10, 'min': -10}}",500 +515,0.671*exp(0.55249462010983*sqrt(x_2)/sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +516,x_1*x_2*exp(x_2) + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +517,log(sqrt(x_1) + 2*x_1)**3.372,"{'x_1': {'max': 10, 'min': -10}}",500 +518,x_3*(x_2 - 0.427)/x_1**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +519,2*x_1 + x_2 - log(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +520,102.651078544759*(x_2 + 0.203169443315725*log(x_1) - 0.818)**2.906,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +521,x_1 - x_2 + log(x_2)/2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+522,1.32*x_1*sin(3.608*x_1 + 0.021648),"{'x_1': {'max': 10, 'min': -10}}",500 +523,(x_1 - 1.683)*asin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +524,x_1**1.8145*exp(2.41425162876467*x_1**0.586),"{'x_1': {'max': 10, 'min': -10}}",500 +525,x_3*(3.151*x_3 + sqrt(x_1*x_2) - 1.610161),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +526,exp(x_2*(x_1 + 1.502)*(x_1 - 1.458*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +527,sqrt(2*x_1 - cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +528,sin(x_1 + 1.225)/(x_2 + 1.676),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +529,-x_2/x_3 + 1/sqrt(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +530,x_1 - x_2 + cos(1.933*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +531,-1.033364331*(0.989119683481701*x_1 - 1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +532,0.651152823843988*sqrt(cos(exp(2.468*x_1)) - 0.341),"{'x_1': {'max': 10, 'min': -10}}",500 +533,x_1*cos(4.162*x_1 + 2.526334)/(x_2 + 1.526),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +534,tan(1.43387586631479*sqrt(x_1) - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +535,2.802*exp(x_1**6),"{'x_1': {'max': 10, 'min': -10}}",500 +536,x_1/tan(tan(x_1)**4),"{'x_1': {'max': 10, 'min': -10}}",500 +537,x_2*exp(exp(2*sqrt(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +538,x_1/x_2 - exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +539,3*x_1 + 4.39*x_2 + 4.743,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +540,4.577*x_1*tan((x_1 - 0.214)**(1/4)) + 1.195*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +541,x_1**2*(1 - 5.74004401002779*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 
+542,x_1**2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +543,log(exp(sqrt(x_1 + x_2 + 0.396))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +544,924.589111383374*x_1**0.115,"{'x_1': {'max': 10, 'min': -10}}",500 +545,3.501*x_1 - 0.5*cos(0.348*x_1) - 5.329165,"{'x_1': {'max': 10, 'min': -10}}",500 +546,log(x_1**3.041395),"{'x_1': {'max': 10, 'min': -10}}",500 +547,sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +548,(x_2 - 0.309)**1.476*Abs(x_1 + 0.39),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +549,x_1*(x_1 + tan(x_2) + 0.654) - x_2 - 0.18,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +550,x_2 + asin(2*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +551,3.385*x_1 - x_2 - cos(2.125*x_1) + 2.163015,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +552,2.391*x_1/(x_1 + sqrt(x_2) - 0.888),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +553,x_1**11.665*(4.684*x_2 - 8.192316),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +554,-2.0*x_1 + 2.544*tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +555,tan(exp(5*x_1)*tan(x_1)) + 1.051,"{'x_1': {'max': 10, 'min': -10}}",500 +556,-x_1 + tan(cos(x_1 - 0.214)**3.907) + 1.366,"{'x_1': {'max': 10, 'min': -10}}",500 +557,33.9010328176213*(x_1 + 0.465)**(9/2),"{'x_1': {'max': 10, 'min': -10}}",500 +558,x_1 - 6.16021451016838*x_3*cos(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +559,21.881515573*x_1**3 + x_1**3.585/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +560,4.249*exp(x_1**2 + x_2*cos(4.439*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +561,4.726*x_1 + 7.249684 + 1/(x_2*sqrt(x_1 - 0.763)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +562,2.174*x_1 - 1,"{'x_1': {'max': 10, 'min': -10}}",500 +563,(x_1 + 
1.508)*asin(11.519236*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +564,x_2/tan(x_1) + sqrt(asin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +565,5.2383156195126*exp(x_1)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +566,0.0539253226410438*x_1**12,"{'x_1': {'max': 10, 'min': -10}}",500 +567,(x_1 - sin(cos(x_1)))*(2.868*x_2 + 2.503764),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +568,x_1**7.328,"{'x_1': {'max': 10, 'min': -10}}",500 +569,0.596496293282*x_1*x_2 + x_1 + 1.924,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +570,(x_1 - x_2**2)*(x_3 + 1.028),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +571,tan(1.322*x_1*x_3*(x_2 + 4.175)/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +572,exp(1.719*x_3) + tan(x_1/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +573,(sin(x_1**6) + 0.04)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +574,x_1**2*sin(x_2 + tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +575,x_1 + asin(x_2)**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +576,(x_1 + x_2 + 1.804)*log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +577,log(x_1*(x_2 - 0.51)*asin(x_1)) - 0.249315785813174,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +578,x_1**8.954/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +579,log(2*x_1)/(x_1 + 0.227),"{'x_1': {'max': 10, 'min': -10}}",500 +580,1.092025*x_1**2.813*(0.956937799043062*x_2 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +581,(2*x_1 + x_2)*log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +582,sin((x_1 + x_2 + 1.507)*sqrt(sin(x_2 + 
0.519))) + 0.245,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +583,1.043*tan(exp(2*x_1*x_2) - 0.491),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +584,asin(x_1 + asin(exp(x_1)) + 0.822) - 0.504,"{'x_1': {'max': 10, 'min': -10}}",500 +585,x_1/(x_2 + 0.065)**8.1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +586,x_1**3 + x_1 + 2*x_2 - 1.129,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +587,x_1*tan(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +588,log(26.865224875*x_1**3 + x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +589,sin(2.272*x_1*x_2*(x_1 + 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +590,x_1**2/cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +591,log((cos(4.754*x_1) + 0.017)/cos(log(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +592,log(x_1**2/(x_3 - log(x_2 - 1.158))**2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +593,0.146*(x_1 + 1.645)*(x_2 + asin(x_1 - 1.023)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +594,x_1**4 + x_1**3.369,"{'x_1': {'max': 10, 'min': -10}}",500 +595,x_1**3.167*cos(2*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +596,x_2*cos(x_1*exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +597,x_1*sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +598,0.227410164476343*exp(x_1)*log(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +599,18.3184*x_1*cos(-x_2 + sin(x_3) + 0.36),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +600,(-x_2 + (3.829*x_1 + 2.802828)*cos(x_1) + 1.265)/(x_1 + 0.732),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +601,0.457*cos(x_1 + sin(2.04621601987669*sqrt(x_1)/x_2)) + 0.202451,"{'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +602,exp(x_1**3.513),"{'x_1': {'max': 10, 'min': -10}}",500 +603,(x_1 + 1.118)*log((x_1 - sin(x_2))**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +604,cos(x_1*x_2)**6.995,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +605,-4.331*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +606,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +607,exp(cos(0.475*cos(2*x_1 - 0.282))),"{'x_1': {'max': 10, 'min': -10}}",500 +608,x_1*x_2 + asin(x_2 - 4.951*x_3) - 1.344,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +609,log(14.084823375*x_1**3.888 - 2.92*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +610,x_2**2*(x_1 - 0.289)**1.302,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +611,sin(x_1 + 2*x_2 - 2.504)**0.528,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +612,-x_1 - x_2 + sqrt(x_1/x_2) - 1.87,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +613,x_1 + x_2*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +614,x_1**4*sin(x_1)**4,"{'x_1': {'max': 10, 'min': -10}}",500 +615,3.055*exp(x_1*sin(2.203*x_1)) - 4.585555,"{'x_1': {'max': 10, 'min': -10}}",500 +616,x_1 + x_2 - cos(x_1 + x_3) + 3.313,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +617,x_1**2*(5.438224*x_2 + 24.395872864*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +618,31.0592615939353*x_2**7.085*tan(x_1)**23.475,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +619,3.926*x_1 + cos((x_1 + 0.62)**4) + 6.740942,"{'x_1': {'max': 10, 'min': -10}}",500 +620,exp(asin(21.608738*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +621,1.580049*x_1**2 + x_1 + tan(2.818*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +622,x_1 + (x_3 
+ cos(x_2))**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +623,3.769*cos(x_1 + 1.16918859538073*exp(0.145*x_2)*cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +624,x_1 + cos(x_1 + exp(x_2) + 0.664) - 0.475,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +625,0.497*x_1 + log(cos(4.845*x_1)) - 0.615783,"{'x_1': {'max': 10, 'min': -10}}",500 +626,x_1**2*asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +627,sin(tan(x_1)**0.330894) - 0.694,"{'x_1': {'max': 10, 'min': -10}}",500 +628,0.614*cos(x_1*cos(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +629,x_1 + x_2 + (x_1 + 0.471)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +630,x_2*(x_1 + 1.968)*tan(x_1 - 0.832),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +631,(x_1 + exp(x_1) + 0.582)**9.015,"{'x_1': {'max': 10, 'min': -10}}",500 +632,x_1*(cos(x_1) + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +633,asin(2*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +634,4.3*x_1 + 1.147170431976*sqrt(0.759878419452887*x_1*x_2 + 0.759878419452887*x_1 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +635,tan(x_1 + sin(log(x_2))/x_2) - 0.984,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +636,Abs(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +637,cos(x_1/x_2 + sin(x_1 - 0.745)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +638,(-x_2 + sin(4.868*x_1 - 8.455716))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +639,-tan(-x_1 + 0.937656881815686*x_2**2 + 1.856),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +640,53.1971153127138*x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +641,log(x_1**2/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +642,exp(-x_2 + cos(4.093*x_1))/x_1,"{'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +643,exp(x_1/2 - x_2/2 - exp(x_2)/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +644,29.5342320775604*exp(2.975*x_1 + 2*x_1**4.32),"{'x_1': {'max': 10, 'min': -10}}",500 +645,6.702304832161*(-x_1**2 + 0.386267271458209*x_1 + 0.652791688764374)**2/(x_1 - 0.331),"{'x_1': {'max': 10, 'min': -10}}",500 +646,cos(Abs(x_1))**6.265,"{'x_1': {'max': 10, 'min': -10}}",500 +647,(x_1 + tan(x_2))**2.199,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +648,(x_1**1.5285*(x_2 + 0.92))**(1/4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +649,3*x_1 + x_2 + 5.181,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +650,x_1**2*exp(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +651,sqrt(x_1**5 + x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +652,2*x_1 + x_2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +653,-(x_2 + 1.338)*(-2.937*x_1 + exp(x_1) + 1.671153),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +654,1.427*x_1*sin(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +655,-x_1*x_2*(x_2 - log(x_1 + 1.229) + 1.958),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +656,log(x_1**17.52579*x_2**2) + 6.85768861492897,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +657,x_1**32.621448/asin(x_1)**10,"{'x_1': {'max': 10, 'min': -10}}",500 +658,x_1*(sqrt(x_1) + x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +659,430127581.888084*(x_1 + 0.542)**13.636,"{'x_1': {'max': 10, 'min': -10}}",500 +660,x_1*(2.0 - 0.149*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +661,0.00395742443539679*x_1**4/(0.587544065804935*x_2 - 1)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +662,(x_1 - 0.075)**1.184*(x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +663,x_1 + 1.14804181108529*sqrt(x_2) - 0.714,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +664,cos(x_1 + sin(1.444*x_1 + 2.001384)),"{'x_1': {'max': 10, 'min': -10}}",500 +665,asin(x_2)**2 + asin(exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +666,x_1*cos(exp(sin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +667,x_1 - x_2 - cos(1.2750818899673*x_1**2) + 0.41,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +668,(x_1 + exp(sqrt(tan(x_1))))*(x_2 - 0.65),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +669,exp(x_1) - 1.356,"{'x_1': {'max': 10, 'min': -10}}",500 +670,cos(sin(5.88260681764381*exp(x_1)/tan(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +671,48.347888707*x_1*x_2 + x_1 - x_3 - 0.174,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +672,2.04*x_1*(3.349*x_1 + exp(x_2/2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +673,cos(2.07171426601257*sqrt(0.465983224603914*x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +674,log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +675,asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +676,x_1 + 1.037,"{'x_1': {'max': 10, 'min': -10}}",500 +677,-x_2**3*(x_2 - asin(x_1))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +678,exp(10.549504*(0.615763546798029*x_1 - 1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +679,tan(x_1*(x_1*x_3 + x_2)/x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +680,exp(4.717*tan(sin(0.784*x_1))) + log(x_1) + 1.904,"{'x_1': {'max': 10, 'min': -10}}",500 +681,cos(x_1*x_3/(x_2 - 1.881)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +682,(x_1**68.145)**1.963,"{'x_1': {'max': 10, 
'min': -10}}",500 +683,x_2 + cos(x_1) - 1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +684,31.9816694161815*(0.636942675159236*x_1 - 1)**7.682*exp(x_2/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +685,log(x_1*x_2**2)/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +686,(x_2 + 0.314)*(x_1 + x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +687,asin(sqrt(x_1 + log(x_1)**3)) + 1.22,"{'x_1': {'max': 10, 'min': -10}}",500 +688,(x_1 - 0.922)*(-1.605*x_1 + x_2*asin(x_2) - 3.10407)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +689,sin(x_1**3*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +690,(x_3 + sqrt(x_1 - 0.865))*(1.507*x_1 + x_2 - 0.152207),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +691,asin(x_1)/(x_1**2*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +692,-sin(x_1 - x_1/sqrt(x_2 + 0.983)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +693,0.817*(x_1 + 0.058)*log(0.576*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +694,sqrt(x_1)*asin(x_1 - 0.406),"{'x_1': {'max': 10, 'min': -10}}",500 +695,22.775818*x_2**1.392*exp(x_1**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +696,1.15031033405852*exp(x_1 - tan(x_1 + exp(x_2) + 0.869)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +697,x_3**4*(x_1 + x_2)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +698,0.064964808*x_3*(x_2 + 1.045)*sin(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +699,x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +700,x_1**7.646*x_2*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +701,x_1**5.608,"{'x_1': {'max': 10, 'min': -10}}",500 +702,-asin(x_1*(x_1 - 4.244*x_2) - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +703,x_1**2 + asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +704,2*x_1 + exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +705,x_1 - 1.43666279968544*sqrt(x_1 + 0.863),"{'x_1': {'max': 10, 'min': -10}}",500 +706,x_1 - x_3 + 0.222907141516182*exp(x_1 - x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +707,(x_1 + (4.427*x_1 + x_2 + 0.405)*tan(x_1) + 0.736)/tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +708,20.1268259689366*x_2*(x_1 + 1.013)*log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +709,5.861888*x_1 - 3.624*x_2 + log(x_1) - 0.373272,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +710,(-x_1 + x_2)*log((x_1 - 0.521)**5.718),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +711,exp(x_2 + cos(x_1))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +712,(1 - x_1**4)/sqrt(1 - (x_1**4 - 1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +713,tan(Abs(x_1 - 0.944)),"{'x_1': {'max': 10, 'min': -10}}",500 +714,x_2*cos((x_1 + 0.054)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +715,x_1 + x_2**4*(x_1 + x_2)**4 - 0.932,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +716,x_1 + log(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +717,x_2 + exp(x_1**9.078),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +718,x_1/log(x_1 + x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +719,x_1 + 1.97*x_2 + sqrt((x_1 + 0.504)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +720,x_1*(x_2 + tan(x_1)) + x_2,"{'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +721,-x_1**3 + x_1 - log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +722,x_1*(1.0 - 0.080062991*cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +723,x_1*x_2*sin((x_2 + 0.324)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +724,(2.45811518324607*x_1 + 1.30890052356021*x_2**2)/(x_1 - 0.531),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +725,x_1*x_2 + x_1 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +726,cos(2*x_1 + tan(x_1 - 0.544) + 1.232),"{'x_1': {'max': 10, 'min': -10}}",500 +727,log((0.927643784786642*x_1 + 1)**4.498) + 0.337833411245651,"{'x_1': {'max': 10, 'min': -10}}",500 +728,109.764631872*x_1**3*tan(4.96791383261958*exp(tan(x_1)) - 1.908),"{'x_1': {'max': 10, 'min': -10}}",500 +729,(35.0127796645776*x_1 - 9.94362942474003)/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +730,x_1 - exp(x_1)/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +731,x_1 - log(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +732,41.135081408*(0.579374275782155*x_1 - x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +733,sin(exp(x_1*(x_2 + (x_2 - 0.087)**2)) + 1.981),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +734,2.78*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +735,3.073009*(0.570450656018255*x_1 + 1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +736,x_1*(2*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +737,1.07789951988055*(x_1 + 0.487804878048781*x_2*(x_1 - 1.06))**0.1045,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +738,1.782*(sqrt(x_1) - 0.749110958292491*x_1 + 0.749110958292491*x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +739,1.359556*(0.857632933104631*tan(x_1) 
+ 1.0)**2/x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +740,tan(x_1**0.974864 + x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +741,(x_1 - 0.461)/(0.415*x_1 + x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +742,sin(x_1**2)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +743,log(x_2)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +744,20.2101549614404*x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +745,log(tan(x_1/x_2**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +746,sin(asin(log(x_1))**4.45248) + 0.382,"{'x_1': {'max': 10, 'min': -10}}",500 +747,x_1**2 + x_2**8.118,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +748,(sin(x_1) - 0.986)**2/(x_2 + x_3)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +749,sin(x_1**2/x_2)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +750,0.8680824543443*x_1/(0.965250965250965*x_1 - 0.965250965250965*x_2 + 1)**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +751,(x_1*(1.537*x_1 - tan(x_1)))**0.514,"{'x_1': {'max': 10, 'min': -10}}",500 +752,28.0392656202913*(0.513083632632119*x_1 - 1)**4.99554,"{'x_1': {'max': 10, 'min': -10}}",500 +753,x_2*sin(cos(x_1*exp(3.326*x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +754,Abs((x_1 + x_2*(x_3 + 0.601) + 1.031)/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +755,1.471724 - 0.941*asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +756,cos(2*x_1 - log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +757,tan((log(x_1) + cos(x_1) - 0.221894331913778)**5),"{'x_1': {'max': 10, 'min': -10}}",500 +758,x_1*exp(-x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +759,x_1**2*exp(2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 
'min': -10}}",500 +760,exp((4.911*x_1 + 9.62556)/(x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +761,x_1 + log(x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +762,2106430.23517753*x_1**4*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +763,exp(tan(3.139*x_1 - 1.606*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +764,x_1*sin(sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +765,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +766,99.188550189*x_2*x_3*cos(1.019*x_1 + 1.787326),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +767,x_1**6*x_2**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +768,x_1/cos(x_1) + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +769,22.521332224*x_1**3 + x_1 + 2.367*x_1/x_2 - 1.46,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +770,tan(x_1*cos(x_1)**4),"{'x_1': {'max': 10, 'min': -10}}",500 +771,sin(x_2**5*sin(log(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +772,(4.058*x_1 + cos(cos(x_2))**2)*(1.946*x_3 + 2.839214),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +773,exp(x_1) + cos(x_1 + 0.898) + 0.196,"{'x_1': {'max': 10, 'min': -10}}",500 +774,x_1**3*(-x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +775,0.229305205228159*tan(x_1*(4.296*x_2 + 7.469776))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +776,4.735*x_1 - exp(exp(x_1)) + cos(x_1) + 3.025665,"{'x_1': {'max': 10, 'min': -10}}",500 +777,(x_1*cos(x_2)**5.885 - x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +778,x_1*x_2 + exp(x_1 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 
'min': -10}}",500 +779,x_1 - 1.11460490853834*sqrt(-0.804929969305001*x_1 + exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +780,x_1 - x_1/x_2 + cos(x_1 + 0.433) - 1.478,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +781,x_1 + x_3 + tan(4.984*cos(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +782,4.781*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +783,1.594*x_1 - x_2*x_3*(x_1 - 1.122) - 0.140272,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +784,sqrt(log(x_2*exp(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +785,0.039204*x_1*x_3/log(3.361*x_2 - 2.103986),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +786,x_1 - log(0.262144*x_2*x_3)**1.645 + 0.031,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +787,x_1**0.5985 - tan(3.84*x_2 - 2.3616),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +788,cos(x_1**6) + 1.218,"{'x_1': {'max': 10, 'min': -10}}",500 +789,x_1*sin(x_3)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +790,x_1 - x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +791,x_1*sin(2.285*x_1 + sqrt(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +792,10.705984*x_1**0.231 + 18.130564*(0.234852043212776*x_1 + x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +793,(x_1 + 2*x_2)*(x_2 - 1.977),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +794,2.223081*x_1*x_4*(x_2 + 3.563*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +795,0.914*x_1 + x_1/x_2 + exp(x_1) - 0.203822,"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +796,-x_1 + x_2 + x_3 + cos(x_1) + 1.371,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +797,x_1**3/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +798,x_1 + x_3 + cos(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +799,x_3 + 3.92*x_4 + log(x_1/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +800,0.921*exp(x_2 + exp(x_1)) + 0.671409,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +801,10.26625681*(0.558659217877095*x_1 + 1)**4,"{'x_1': {'max': 10, 'min': -10}}",500 +802,x_1*(x_1 - x_2 + 0.357),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +803,x_1**5*(x_1*x_2)**1.0865,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +804,2.20354396521241*exp(3*x_1)/x_1**3,"{'x_1': {'max': 10, 'min': -10}}",500 +805,x_1**2.566/cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +806,x_1 + x_2 - tan(cos(x_1 - 0.866)) + 0.144,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +807,0.622*x_1 - tan(log(x_2) - 0.311974765020825) - 0.586,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +808,cos(x_1**7),"{'x_1': {'max': 10, 'min': -10}}",500 +809,x_3*x_4 + (x_1 + x_2)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +810,-1.205*x_1 + asin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +811,x_1 - exp(tan(exp(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +812,2489.90540804446*x_1*exp(-4*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +813,x_1 + 0.491,"{'x_1': {'max': 10, 'min': -10}}",500 +814,(sqrt(x_1) + x_2*log(x_2))**5/x_2**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +815,x_1**0.539 + x_1**3.285936/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +816,sqrt(tan(2*x_1 + 2.988)),"{'x_1': {'max': 10, 'min': -10}}",500 +817,x_2*x_3**3*(x_1 + 1.5),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +818,sqrt(x_1 + log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +819,0.148499597194843*log(x_1)/(x_1*(x_2 + 1.602)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +820,2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +821,x_1*(x_1 + cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +822,-x_1 + Abs(x_1 - 1.484) - 1.083,"{'x_1': {'max': 10, 'min': -10}}",500 +823,4.327*x_1 - sin(x_2)*tan(x_1 + 1.445) - 7.027048,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +824,2.65686239458289*(0.575043128234618*sin(3*x_1) - 1.0)**1.766,"{'x_1': {'max': 10, 'min': -10}}",500 +825,7.540516*x_1*asin(x_2**2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +826,-1.676*x_1 + (x_1 + 0.845)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +827,x_1**10*tan(2.868*x_1 - 5.50656),"{'x_1': {'max': 10, 'min': -10}}",500 +828,exp(x_1**3),"{'x_1': {'max': 10, 'min': -10}}",500 +829,sqrt(sqrt(x_1) - x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +830,x_1 + 2.23*x_2 + tan(x_1) + 0.38362,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +831,(x_1 - x_2)*cos(x_1**1.3255),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +832,x_1**2.937*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +833,(3.26741776944292*exp(x_1) + 1.188)*asin(x_1 - 0.195),"{'x_1': {'max': 10, 'min': -10}}",500 +834,918606087.636139*x_1**24,"{'x_1': {'max': 10, 'min': -10}}",500 +835,2.228*exp(x_1*sqrt(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +836,x_1 + 
cos(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +837,x_2*(sin(x_1**2.098) - 0.194),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +838,5.134756*(x_1 - 0.009)*(x_2 + 0.888),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +839,(0.133*x_1 - 0.15428)*(x_2 + cos(2.168*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +840,-sin(-x_1 + 28.8691132864801*x_2*x_3 + 1.736),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +841,exp(x_1**5) + log(log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +842,tan(0.731*x_1) - 0.066,"{'x_1': {'max': 10, 'min': -10}}",500 +843,2.11*x_1 - 3.45*cos(x_2 - 0.803),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +844,x_1 - x_2 + 1.03969579023523*x_2**1.199,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +845,exp(-x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +846,x_1*(cos(x_2) + 3.144*tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +847,x_2 + sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +848,x_1 - cos(x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +849,log(-x_1 + x_2 + 1.97*sin(0.786*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +850,2.773505125*(0.711743772241993*x_1 + 0.711743772241993*asin(4.778*x_1) - 1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +851,x_1**6*x_2/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +852,3.914*x_2*(x_1 - 1.604)/(x_1 + x_2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +853,2.512*tan(40.5094783419322*x_1**6*x_3**6*(0.683526999316473*x_2 - 1)**6),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 
'min': -10}}",500 +854,x_1**2 + exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +855,x_1**5.653,"{'x_1': {'max': 10, 'min': -10}}",500 +856,x_1*x_2 + 129.615987153657*(0.737370456339845*x_1 + 0.296370762194471*x_3 + 1)**4,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +857,x_1**2/log(x_1 + x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +858,x_2 - x_3 + cos(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +859,-1.103*x_1 + log(cos(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +860,x_1 - exp(x_1/2),"{'x_1': {'max': 10, 'min': -10}}",500 +861,(exp(2*x_1) + cos(x_1))/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +862,log(x_1) - tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +863,(-x_1)**1.02429,"{'x_1': {'max': 10, 'min': -10}}",500 +864,61.9171102453015*x_1 + exp(4*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +865,x_1 + x_2 + exp(x_3) + 1.635,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +866,x_1/(x_3 + tan(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +867,11.03103*x_1**2*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +868,x_1*exp(5.13*x_2) + 4.68*x_2 - 6.75324,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +869,x_1*(x_2 - 0.199),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +870,1.926*x_1 + 1.101672,"{'x_1': {'max': 10, 'min': -10}}",500 +871,x_1**3/asin(x_2)**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +872,19486.0668806036*x_1**3*x_2**3*(x_1 - 0.317050044472495)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +873,x_2 + cos(3.808*x_1 + tan(x_1 + 1.081)) + 1.091,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 
10, 'min': -10}}",500 +874,x_1**2*sin(x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +875,x_1*(2*x_1 + exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +876,exp(3.352561*x_1*x_2 + 1.515361*(0.812347684809098*x_1 + 1)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +877,x_1*x_2 + x_1 + x_2 - 1.431,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +878,x_1*(x_1 + exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +879,sin(x_1) + sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +880,2.21093855713987*exp(x_2) + cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +881,x_1*exp(-x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +882,4.473*x_1*asin(x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +883,sqrt(x_1**5 + x_1 - x_2 - 0.857),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +884,-x_1 + x_1/cos(x_2) - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +885,3.03*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +886,x_1 - 11717.6274160758*x_3**3*(0.666222518321119*x_2 + 1)**3 + x_4 - 0.028,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +887,x_1**4*log(x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +888,x_1**2*log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +889,84.400252415521*x_1**4 + 0.991*x_1**5.05*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +890,x_1*cos(cos(sqrt(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +891,28.991029248*x_1**4.085 - x_2 + x_3 - 0.878,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +892,x_1 - tan(5.221225*x_2*x_3**2) - 0.569,"{'x_3': 
{'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +893,x_1 - x_2**2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +894,log(Abs(4.876*x_1 + 8.957212)),"{'x_1': {'max': 10, 'min': -10}}",500 +895,tan(x_1)/(x_2 + 23.3341166916*(0.88809946714032*x_2 + 1)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +896,3.679*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +897,x_1**11.43*x_2 + x_1**11.93,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +898,cos(x_1*Abs(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +899,(x_1 + 0.914)*(x_1 + x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +900,4.451*x_1*log(x_1 + exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +901,cos(x_1*(cos(x_2) - 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +902,(x_1*(x_2 - exp(x_3)))**(-0.073),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +903,1.610538468*x_1 - asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +904,0.753571*x_1**3/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +905,x_1 + cos(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +906,x_1*(x_2 + 2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +907,(sqrt(x_1) - cos(x_2 + x_3 - 0.456))**5,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +908,x_1 + x_2**2*(0.762129*x_1 - 0.584552943),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +909,sqrt(x_1*(x_2 + x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +910,-asin(1.88042548376691*sqrt(x_2 - 0.9335407239819*x_3) - tan(x_1)),"{'x_3': {'max': 10, 'min': -10}, 
'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +911,x_1*x_2 - x_1 + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +912,x_1**9*(7.83461210396266*x_2 - 6.61241261574448),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +913,-x_1 - x_2 + asin(x_1) + 0.145,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +914,x_1 + 22.667121*(x_2 - 0.653)**2 + 0.871,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +915,x_1**0.549047/log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +916,x_1**2*(exp(x_1) - 1),"{'x_1': {'max': 10, 'min': -10}}",500 +917,(x_2 + 1.875)*(x_2 + x_3*sin(x_1))/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +918,log(-x_1 + 1.428*sin(exp(sin(0.316*x_1))) - 1.818),"{'x_1': {'max': 10, 'min': -10}}",500 +919,tan(exp(-x_1)*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +920,x_2 + sqrt(x_2**1.556) + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +921,log(2*x_1 - x_2**2.4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +922,sqrt(x_1*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +923,x_1*exp(x_1) + sin(exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +924,x_1**5 + x_1**3,"{'x_1': {'max': 10, 'min': -10}}",500 +925,4.282*x_1 + x_2**24.258,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +926,log(x_1 + 0.987)*log(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +927,x_1**2*asin(x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +928,x_1**4 + x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +929,tan(4.934*x_1 - log(tan(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +930,-x_1 + x_1/x_2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +931,sqrt(x_1)*exp(-x_1*cos(x_1)),"{'x_1': {'max': 10, 'min': 
-10}}",500 +932,x_1 - 28698181.3039255*x_1**17.448,"{'x_1': {'max': 10, 'min': -10}}",500 +933,x_3*(2*x_1 + 1.92639559800161*sqrt(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +934,x_1 - 2.849344*sqrt(0.123171500137665 - (0.592417061611374*x_1 + 1)**4),"{'x_1': {'max': 10, 'min': -10}}",500 +935,0.285075848224454*exp(x_1 + 3.777*x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +936,x_1 + 2.9316250624e-5*x_1**4.296 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +937,tan(4.059*tan(0.73*cos(2.375*x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +938,2.269*x_1 - x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +939,0.0289015278425907*x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +940,exp(3*x_2*(x_1 + 0.42)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +941,3.207681*x_1**0.984*(log(x_1) + 1.5993875765806),"{'x_1': {'max': 10, 'min': -10}}",500 +942,exp(8*x_1) + exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +943,x_1 + sqrt(x_2) + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +944,x_1 + x_2 + cos(exp(sin(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +945,-3.839*x_1*x_3/(x_2 - x_3*(4.329*x_1 - 3.207789)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +946,cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +947,2*x_2 + cos(x_1 - 1.072)/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +948,exp(exp(3*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +949,x_1**2/x_2**9,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +950,-x_3 + 2.6569*(0.4*x_1 + exp(cos(x_2)))**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +951,x_3*(x_1 - x_2)/asin(x_3),"{'x_3': 
{'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +952,(x_1 - x_2**2)**3.408,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +953,x_1*tan(x_1) + sin(cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +954,x_1**2*exp(-x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +955,x_1 + 0.19325166031733*exp(3.589*x_2) - 0.33,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +956,x_1**3*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +957,-0.820025856*x_1**3*x_2**3 + x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +958,x_1 - 0.147829074360145*x_2*(0.508388408744281*x_1 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +959,(x_2 + 0.156140349486991*exp(2.46699299978094*exp(x_1)))**1.638,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +960,x_1*x_2**2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +961,tan(x_1 - sqrt(cos(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +962,x_1*x_2 - sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +963,sin(3.803*x_2 + tan(x_1 + 0.795) - 1.884),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +964,0.728*x_1 + tan(x_2 + exp(0.257174612171402*exp(x_1))) - 0.75348,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +965,-x_1 + x_2 + sin(x_1**2) + 0.975,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +966,x_1*(x_2 + x_3)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +967,1.209*exp(2.892*tan(x_1 + 1)) - 1.866696,"{'x_1': {'max': 10, 'min': -10}}",500 +968,4.084*x_1 - tan(sqrt(x_1 + 0.984)),"{'x_1': {'max': 10, 'min': -10}}",500 +969,x_1*(x_1*x_3 + x_2)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +970,x_1**1.486/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +971,exp(x_1) + asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +972,22.733824*x_1*tan(2*x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +973,x_1/cos(x_2*x_3) - x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +974,log(x_1 + x_2**3.375),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +975,3.127*x_1 + 3.492859,"{'x_1': {'max': 10, 'min': -10}}",500 +976,-3.551*tan(x_2 - 0.572257297897905*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +977,x_1 + log(cos(x_1 - 1.175))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +978,1.525*x_2*exp(7.84*x_1*(x_2 - 1.801)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +979,(x_1 - 1.776)/tan(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +980,x_1*(-3.697*x_1 - 4.388639)/(x_1 + 1.087),"{'x_1': {'max': 10, 'min': -10}}",500 +981,1.478*exp(asin(cos(x_1 + 1.212))/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +982,cos(x_1**5 - x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +983,0.681*tan(log(x_1 + x_2**2.493)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +984,sqrt(x_1**2.349)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +985,x_2 + sin(0.948*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +986,cos(2.773*cos(log(x_1 + x_2**2))) + 0.292,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +987,x_1 + sin(x_2 + x_3/x_4) - 0.189,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +988,x_1*(x_3 + cos(x_2) + 
0.933),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +989,1.343*x_1 + 1.872*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +990,-x_2 + 1.852*tan(x_2 + tan(x_1)) - 1.21306,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +991,tan(x_1*tan(x_1 + 0.894)),"{'x_1': {'max': 10, 'min': -10}}",500 +992,1.1359444134001*x_1**3.034*(0.920810313075506*x_2 - 1)**1.545,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +993,(x_1 - x_2)*exp(-3*x_2**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +994,log(x_1**1.296*Abs(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +995,-x_3 + log(2*x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +996,x_1 - log(x_1*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +997,sqrt(cos(x_1 + 1)),"{'x_1': {'max': 10, 'min': -10}}",500 +998,x_1**6/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +999,x_1**0.597 + 2*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1000,(1.638*x_1 - 2.501226)/(x_1 + log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1001,x_2**4 + x_2 + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1002,x_1 - log(x_1) + 0.61*tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1003,2.885*x_1 + exp(3.293*x_1)/log(x_2) - 1.347295,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1004,exp(x_2*(x_1 - 1.57)) + 1.317,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1005,x_1*log(1.216*x_3 - x_4 + 1.068)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1006,x_2**5*(2*x_1 + x_2)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1007,(2.78*x_1 - 3.31654)/(2*x_1 + exp(0.724*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1008,sqrt(x_1)/x_2 + x_1*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1009,1.021*x_1 - (x_1 + 1.141)*(x_2 - 4.202*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1010,cos(log(sin(x_1)) + 1.07772907775169),"{'x_1': {'max': 10, 'min': -10}}",500 +1011,3.169*(x_2 - 1.827)*(cos(x_1) + 0.954)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1012,x_1 - x_2**2*(17.572864*x_1 + 10.5437184),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1013,1.909*exp(asin(x_1)/(x_2 + 1.998)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1014,(x_1 - log(x_2/(x_3 + 1.274))**13.866)**2.643,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1015,x_1 + x_2*(x_1 + 0.387)**2 - 0.16,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1016,x_1 - x_2 - 0.00406223575286307*x_2**2.86,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1017,1.59203082731351*x_2*(0.835421888053467*x_1 + 1)**2.586/(x_3 - 1.517),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1018,tan(x_1**2 + x_1 + x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1019,1.3*asin(exp(x_2**3)*log(x_1 - 1.556)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1020,2.852*x_1*(x_2 - log(tan(2.559*x_1 - 4.263294))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1021,sqrt(sin(x_1))*cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1022,x_2/log(x_1) + 0.99*asin(x_1) + 1.17711,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1023,x_3 + sin(x_1 - x_2) - 1.083,"{'x_3': {'max': 10, 
'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1024,x_1*x_3*(x_2 + cos(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1025,(1.689*x_1 + 2*x_2)/(x_1 + 0.285),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1026,-x_2 + log(x_1 - 1.492),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1027,4.931*x_1 + 2.193*x_2 + 0.468445,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1028,log(x_1 + 1/x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1029,(0.431*x_1 - 0.622795)*(x_1 + tan(x_2/(x_1 + 0.19))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1030,-22.5625*x_1**2 + x_1 + exp(x_2) - 1.092,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1031,log(2*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1032,x_1*tan(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1033,asin(4.227*x_1 - 7.81995)/(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1034,(x_1 + 3.59*x_2)*(x_1 + cos(x_1) + 0.618),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1035,asin(x_1**0.517 + x_1**3.676),"{'x_1': {'max': 10, 'min': -10}}",500 +1036,sqrt(sin(x_1 + sqrt(x_2/x_3))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1037,x_1 + x_2**4*x_3**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1038,x_1 - x_3 + 2.63*sin(x_2) + 0.837,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1039,x_1*(17.952169*x_1**2 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1040,x_1*x_2 - sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1041,2.853*x_1 - log(x_3) + x_2/x_1,"{'x_3': {'max': 10, 'min': -10}, 
'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1042,1.91781125244379*sqrt(x_1 - 0.271886895051659*(-sin(4.465*x_1 - x_2))**2.968),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1043,-x_1*(x_3 + 1.696) + log(x_1 + x_2 + 0.477),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1044,0.37*(x_1 - 0.037)*log(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1045,0.3125*(x_1 - asin((0.146*x_2 - 0.2555)/x_3))/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1046,exp(x_1*(x_2 - 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1047,x_1 + tan(tan(-x_1 + x_1**1.991 + 1.208) - 1.679),"{'x_1': {'max': 10, 'min': -10}}",500 +1048,(log(x_1) + 0.96088161520203)/(x_2 + 0.15*cos(x_1) - 0.174),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1049,x_1*(x_2 + tan(x_1 - 1.719)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1050,log(x_1*sin(tan(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1051,x_1 + log(exp(2.133*x_2)/x_2) + 0.017064,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1052,x_3*sin(x_1 - x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1053,2.775*x_1*asin(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1054,3.653*sin(x_1/(x_1 + x_2 + 1.025)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1055,sqrt(x_2 + asin(exp(2*x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1056,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1057,exp(2.005142581*(0.793021411578113*x_1 - 1)**3) + 1.213*tan(0.473*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1058,x_1**3 - x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1059,x_1*(x_1 + x_2) + 
cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1060,4.106*sin(1.5600624024961*tan(x_1)/x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1061,-tan(x_1)**1.69/(tan(x_1)**2 - 1),"{'x_1': {'max': 10, 'min': -10}}",500 +1062,2.719201*x_2*cos(cos(3.973*x_1*cos(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1063,asin(x_1 + x_3 - asin(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1064,-x_1*x_2/(x_2 - (x_1 + 0.907)**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1065,x_1*(x_3 + asin(x_2) - 2.669),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1066,5.517084663*x_1**3 - x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1067,75.939094204416*x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +1068,2.589*x_3*(log(x_1*x_2) + 0.787096587873162),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1069,1.699*x_2 + log(2*x_1 + 3.844)**5.012,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1070,exp(2.21*x_1*(x_2 - tan(x_3) + 1.578)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1071,x_1 - exp(x_1/2),"{'x_1': {'max': 10, 'min': -10}}",500 +1072,(4.907*x_1 - 6.82073)*log(tan(3.084*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1073,log(x_1 + x_2 + x_3 - 1.644),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1074,2*x_1 + 830.678372054*(x_1 - 0.942)**6,"{'x_1': {'max': 10, 'min': -10}}",500 +1075,4.838*x_1*log(3.527*x_1 + x_2 - 5.304608)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1076,0.896*x_1*log(x_3*(1.349*x_1 + x_2 - 0.870105)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1077,exp(3.02*x_1*x_2),"{'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1078,asin(sin(log(x_1**2))/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1079,1/(x_1**3*(x_2 - 0.215)**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1080,cos((x_1 + x_2)*(x_2 + 1.529)) + 1.673,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1081,sqrt(x_2**9.66*(x_1 - x_2)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1082,x_3*log(4.253*x_1 - x_2/sqrt(1 - x_2**2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1083,19.838116*x_1*x_2/(2.989*x_1 + 1.197*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1084,0.7097232079489/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1085,x_1*x_2**2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1086,(x_1 - 1.346)*sin(2.03519040878243*x_2**5*sqrt(x_1 - 0.99)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1087,(x_1 + x_2 + 1.358)/(x_3*(x_2 + 1.706)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1088,x_1/cos(sin(22.043025*x_2*exp(x_3))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1089,1.005*x_1 + (x_1 + x_2 - 0.116)**0.868,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1090,tan(tan(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1091,x_2 + exp(x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1092,-x_1 + x_2 + tan(2*x_1) + 1.039,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1093,2*x_1 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1094,3.935*x_1 - (x_1 + 0.442)**10,"{'x_1': {'max': 10, 'min': -10}}",500 +1095,(x_1 + x_2)/log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1096,x_1 + exp(sin(sqrt(x_1)*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1097,(x_1 + 0.433)**4.388*exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1098,-tan(-x_1 + log(x_2) + 0.365),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1099,2.43552156422714*(0.813008130081301*x_1 + (0.813008130081301*x_1 + 0.82520325203252)*tan(x_2) + 1)**4.3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1100,3*x_1 + 22.486564*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1101,8.80323750171562*(0.647249190938511*x_2 + 0.647249190938511*tan(x_1) + 1.0)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1102,x_1 + (x_2 - tan(x_1))**2 - 0.427,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1103,4.826809*x_1*x_2 - x_3 + sin(x_1 + 0.298) + 1.405,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1104,3.866*x_1*sqrt(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1105,sin(cos(cos(2*x_1) + 1.922)),"{'x_1': {'max': 10, 'min': -10}}",500 +1106,tan(x_1**0.215),"{'x_1': {'max': 10, 'min': -10}}",500 +1107,exp(x_1)*asin(1.297*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1108,1.575025*x_1**0.271*(0.796812749003984*x_2 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1109,-3.992*x_1 + 3.177*asin(x_1*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1110,(3.201*x_1*(x_1 + 1.53) - x_1 - sin(x_1))/(x_1 + 1.53),"{'x_1': {'max': 10, 'min': -10}}",500 +1111,tan(x_1 + log(x_1) + 0.919),"{'x_1': {'max': 10, 'min': -10}}",500 +1112,(x_1 - 2*x_2 - x_3)**1.374,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1113,(x_1 - 1.884)*cos(x_2 - 0.633),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1114,(x_1 - 0.567)*exp(exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +1115,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1116,3.14*sqrt(x_1*(x_1 + 0.101423992859751)),"{'x_1': {'max': 10, 'min': -10}}",500 +1117,x_1*exp((x_1 + x_2*x_3)/x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1118,(sqrt(x_1) + x_1**2)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1119,x_1 + x_2 + x_3 + x_4,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1120,7.059649*(0.376364320662401*x_1 + 0.376364320662401*exp(x_2) - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1121,4.819*asin(4.713*x_1) + 1.166198,"{'x_1': {'max': 10, 'min': -10}}",500 +1122,log(sin(log(x_1**2))) + 1.17310114065885,"{'x_1': {'max': 10, 'min': -10}}",500 +1123,x_1**10/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1124,x_1 - sin(x_1**0.951) - 0.876,"{'x_1': {'max': 10, 'min': -10}}",500 +1125,x_1/(1.645*x_1 + tan(x_2**4) + 0.998515),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1126,(2.194*x_1 - 1.254968)*tan(exp(1.46*x_1))**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1127,x_1*(13.2799813757723*x_1 - 0.307),"{'x_1': {'max': 10, 'min': -10}}",500 +1128,-x_1**0.08664 + x_1 - cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1129,(x_1 + x_2*(x_1 + x_3) - 0.851)**2/x_2**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1130,7.306209*x_1*(x_2 + 1.209)/(x_1 + x_3 + 0.583),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1131,sin(1.845*exp(222.504348141*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1132,cos(2*x_1**(3/2)),"{'x_1': {'max': 10, 'min': -10}}",500 +1133,0.149*x_1/(x_2 + sin(x_2 - 0.428)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1134,0.327653997378768*(x_1 + sin(x_1 + 1.563))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1135,5.978025*x_1**2*sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1136,x_2 - 0.445*x_3 + tan(x_1) - 0.17,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1137,(1.404*x_1 - 1.127412)/sqrt(sin(2.305*x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1138,sqrt(-x_1 + log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1139,x_2*x_3*(x_1 - 1.2)/(x_4 - 1.065),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1140,x_2 + (x_1 + 0.318)**6,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1141,(x_1 - 0.733)/(x_2 - 0.185),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1142,x_1*x_3*exp(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1143,tan(x_1**7.1908442318535)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1144,x_1*(x_3 + asin(x_3))/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1145,(4.349*x_1 + 8.554483)*exp(2*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1146,(x_1 + 1.671)*(x_2 + tan(3.0*x_2 + 5.073)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1147,3.039*x_1 + x_2**3 + 2.24886,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1148,4.091*x_2 + asin(1/x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1149,0.130812828610073*x_1**5/(0.66577896138482*sin(x_2 + 0.534) + 1)**5 + 1.491*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1150,(x_1 - cos(x_2))/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1151,-log((0.536480686695279*x_2 - 1)**2) + sin(x_1) - 1.2454494325268,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1152,log(2*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1153,x_1**2*x_2*exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1154,3.353*x_1 + cos(6.332*x_1 + 9.535992),"{'x_1': {'max': 10, 'min': -10}}",500 +1155,4.196653397*x_1**0.999*x_2 + x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1156,0.232668564300418*x_1**2/(0.482357299416541*x_1 - 0.895255147717099*x_2 + 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1157,x_1*(x_1 + exp(x_2))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1158,x_1*(x_1*exp(x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1159,x_1**4 + x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1160,sin(x_1**8),"{'x_1': {'max': 10, 'min': -10}}",500 +1161,sqrt(log(x_1) + asin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1162,3.832*x_1*(x_1 + x_2*(x_1 - 1.788) - 1.118),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1163,(x_1 + 1.56)/(84.662348471*x_1**3 + x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1164,(-x_1*x_2 + x_1 + x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1165,asin(tan(x_1 - 0.332778702163062*x_2/x_1))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1166,x_1 + exp(2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1167,(1 - tan(exp(x_1) - 0.739)**2)**(1/4),"{'x_1': {'max': 10, 'min': -10}}",500 +1168,x_1*x_2**15*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1169,asin(exp(x_1*x_2)) + 0.663,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1170,x_1 + x_2 - x_4 + log(x_3),"{'x_3': {'max': 10, 
'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1171,9.0738*x_2 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1172,x_1*(76.549078936*x_2 - 37.202852362896) + 4.675*x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1173,-x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1174,x_2 + exp(sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1175,sin(x_1*(x_2 - x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1176,-x_1 + sin(x_1) - 1.891 + x_2/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1177,x_1 - sin((x_1 - 0.121)*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1178,0.677*x_1*sin(x_1 - 1.034) + x_1 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1179,x_1*x_2/sin(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1180,x_1 + exp(x_1 - exp(sin(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1181,x_1**3/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1182,sin(x_1/x_2 + x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1183,x_1 + sin(x_1**5) - sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1184,x_1**2*x_2**2*(exp(2*tan(0.194*x_1)) + 1.975),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1185,sin(tan(x_1))**2.357,"{'x_1': {'max': 10, 'min': -10}}",500 +1186,x_1 + 2623.22961778976*x_1**6.869 - 1.031,"{'x_1': {'max': 10, 'min': -10}}",500 +1187,exp(x_1*exp(-x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1188,x_1*(x_1*(x_1 + 1)**2 - 1),"{'x_1': {'max': 10, 'min': -10}}",500 +1189,tan(0.216*x_1 - 0.014256) - 0.077,"{'x_1': {'max': 10, 
'min': -10}}",500 +1190,x_1**3*exp(3*x_2 - 3*tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1191,-x_2 + tan(x_1*exp(-x_2/2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1192,4.622*x_1 + x_2*sin(sin(x_1 - 0.011)) + 5.29219,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1193,2*x_1 - x_2*asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1194,exp(sqrt(2)*x_1**0.397),"{'x_1': {'max': 10, 'min': -10}}",500 +1195,log(x_1*x_2*cos(2.879*x_1 - 4.479724)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1196,-(x_1 + 1.178)*(-0.509*x_1 + x_2*x_3 + 0.15779),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1197,x_1*x_2 + asin(x_2 + 0.71),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1198,(x_1 - 1.237)*(4.6*x_1 + x_2*log(x_1 + 1.832))/log(x_1 + 1.832),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1199,(x_2*((x_1 - 0.462)*(x_3 - 1.719) - 1))**15.724,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1200,(x_1 + 1.541)*(0.359*x_2 + cos(16.088121*x_2**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1201,x_1*sqrt(x_1*sin(1.958*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1202,x_1 - x_2 + cos(x_3) + 0.886,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1203,2*x_1 + cos(x_2**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1204,x_1*exp(cos(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1205,x_1*(x_1**2 + 1)*log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1206,x_2 + 10.220809*x_2**3/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1207,(tan(x_1 + x_2 - 1.296) + 0.785)/(x_3 - 0.85),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +1208,(x_1 - 0.201)*(3.83*x_1*x_2 + asin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1209,sqrt(x_1 - exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1210,(x_2 - 0.221)*log(3.957*x_1 + 2.61162)/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1211,x_2 + cos(sin(3.237*x_1)**2) + 1.191,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1212,2.57566970689178*sqrt(x_1*(x_1**2 - 0.674547752086205)),"{'x_1': {'max': 10, 'min': -10}}",500 +1213,(-x_2 + (x_1 - 1.507)*sin(x_2**5) - 0.404)/sin(x_2**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1214,sqrt(x_1)*x_2**1.246 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1215,log(x_2 + log(x_1**2) + 8.08971701649712),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1216,x_1 + x_1/x_2 - 2.556 - 0.129/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1217,x_1**4.58*x_2/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1218,-x_2 - cos(3.38*x_1) + 1.459,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1219,3.915*x_1 + log(x_1) + 4.52542699157958,"{'x_1': {'max': 10, 'min': -10}}",500 +1220,-x_1 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1221,tan(x_1**2*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1222,(x_2 + cos(x_1) - 0.69)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1223,x_1**2*(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1224,(2.078*x_1 + 4.87291*x_2)*exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1225,0.00540671868686678*x_1**2*x_2/(x_3 - 0.616),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1226,x_1**2*(457599.053384776*exp(x_2) - 
360130.455013818),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1227,(2.947*x_3 + 0.126721)*tan(x_1/x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1228,1.383*sin(x_1 - log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1229,1.708*x_1 + x_2*(log(x_1) + 0.975314072323616) + 2.317756,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1230,-x_1*(-x_2 + x_3 + 1.522)*sin(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1231,2.096704*x_2*(0.69060773480663*x_1 + 1)**2*asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1232,log(-x_2 - 1.145 + x_1**(-4)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1233,asin(2.30122880218374*sqrt(0.610500610500611*x_1 - 1))/(x_1 + 0.674),"{'x_1': {'max': 10, 'min': -10}}",500 +1234,asin(2*x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1235,x_1**6.988*x_2**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1236,10.0535532358832*x_1**0.559*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1237,log(x_2*(sin(x_1) + 1.264)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1238,x_1**2/(x_1 + x_2**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1239,1.19415241908225*x_1*sqrt(x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1240,exp(x_1*x_2**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1241,2.687*cos(sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1242,(x_1 - 0.251)*exp(-x_2**0.7755),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1243,1.55048379546514*sqrt(x_2 - 0.089) + cos(cos(4.164*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1244,x_1 - 
cos(0.392*x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1245,exp(x_1) - cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1246,x_1*x_2**2 - sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1247,2.06552228339981*x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1248,tan(2.554*x_1 - cos(0.348*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1249,sqrt(cos(x_2 + exp(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1250,4.854*asin(1.774224*(0.750750750750751*x_1*x_2 + cos(x_1))**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1251,exp(x_1)/(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1252,log(x_1 + 1.868) + 4.709*cos(x_1 - 1.021),"{'x_1': {'max': 10, 'min': -10}}",500 +1253,1.144*x_1 + x_2 - 1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1254,x_2**5*(0.222*tan(x_1) - 0.269508),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1255,2.487813875*(0.738007380073801*x_1 + 1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1256,(x_1 + 1.547)*(x_1 - sqrt(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1257,exp(2*x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1258,x_1*asin(asin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1259,0.54*x_1 + exp(x_2)*asin(3.067*x_1 + 2.702027),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1260,x_1 - 4.878*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1261,0.210703750526759*log(x_1)/sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1262,sqrt(x_1) + 1.84658788506305*(0.419474348304652 - (0.647668393782383*x_2 + 1)**2)**0.706,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1263,(x_1 + 0.62)**2*(log(x_1) - 
1.85150947363383),"{'x_1': {'max': 10, 'min': -10}}",500 +1264,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1265,1/cos(sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1266,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1267,(x_1 - 0.12)**2.488/x_2**2.656,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1268,x_1*asin(-4.063*x_1 + x_2 + 1.332664),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1269,x_1 + sqrt(x_2) + x_3 - x_4 + 0.198,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1270,(tan(x_1 + x_2 - 1.772) + 1.45)*tan(log(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1271,sin(x_1 + 11.063808*x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1272,x_1 + sin(x_2**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1273,x_1*(-x_2 + sin(x_2)**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1274,(x_1 + 0.482)**9.682*(x_2 - 1.278)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1275,x_1 - exp(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1276,1.32*x_1*(4.766*x_1 + cos(x_2)) + x_3 + 1.208,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1277,x_1 - 2*x_2 + sin(3.58*x_1) + 1.534,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1278,x_1*(tan(2.279*x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1279,x_2*(x_1*x_2 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1280,0.205676676264912*exp(2.802276*x_1**2 + sqrt(x_2))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1281,x_1*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 
+1282,exp(3*x_1*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1283,(x_1 - 0.04)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1284,x_1**2*(x_3 + 1.794)/x_2**2.035,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1285,-x_1**2 + x_1 + x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1286,(0.309*x_1*(x_2 - 1.663) + 1)/(x_2 - 1.663),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1287,cos(1.26875575769316*(0.712250712250712*tan(x_1 - 0.96) + 0.712250712250712*asin(x_2) + 1)**0.7015),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1288,x_1*(log(x_2) + 1.11612471375274)/(x_2 + 0.849),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1289,(-x_2 + (x_3 + x_4)*sin(x_1))/(x_3 + x_4),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1290,x_1 - x_2 + asin(exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1291,tan(x_1*x_2*log(x_1)) - 0.838,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1292,tan(4.051*x_1 + sqrt(x_2 - 0.831)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1293,x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1294,x_1*x_2**2.136 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1295,x_1 + cos(cos(cos(x_2)))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1296,2*x_1 + x_2**3 + x_3 - 0.42,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1297,x_2**3.698*(2*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1298,-x_2 + cos(x_1 + 1.169) - x_3/x_1,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1299,3.059*exp(x_1) - 
1/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1300,x_1**7.863*(x_2 + 1.549),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1301,tan(x_1*(x_2 - 0.484)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1302,x_1**12.756*log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1303,sqrt(x_1) + 2*x_2 + x_3 - 1.298,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1304,sin(4.334*x_1*x_2)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1305,x_1*(7.409284*x_1*cos(x_1 - 0.939)**2 + 4.416),"{'x_1': {'max': 10, 'min': -10}}",500 +1306,2.287*x_1**3.584*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1307,x_1 + x_1/x_2**2 + 1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1308,x_1**4 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1309,x_1*x_2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1310,(x_1 - x_2)**2/x_1**3.431,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1311,(x_1 - 0.247)**2*asin(x_1/(x_2 - 0.117)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1312,cos(x_1*exp(x_1/4)),"{'x_1': {'max': 10, 'min': -10}}",500 +1313,x_1**1.634*x_2**3.007,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1314,(0.557351740728892*x_3 + 0.400735901584073)*tan(x_1/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1315,0.386*x_1 - exp(4.992*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1316,x_1 - x_2 + exp(sin(x_1**4)) - 0.741,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1317,x_1**10*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1318,4.995*x_1 + 3.541455,"{'x_1': {'max': 10, 'min': -10}}",500 +1319,x_2 + sin(2.721*exp(2*x_1)),"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +1320,-x_2 - x_3 + 0.598450674380627*(0.836820083682008*x_1 + 1)**3.118,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1321,x_1**3*x_3**6/x_2**2.3475,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1322,-x_1**2 + asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1323,(x_2*cos(x_2 - 0.664) + (x_1 + 1.512)*(x_3 - 0.401))/(x_3 - 0.401),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1324,(x_1 + 1.5)/tan(2.222*x_1 + 2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1325,(x_1 + cos(tan(x_1)))**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1326,sqrt(tan(x_1*(x_2 + 0.729) - x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1327,x_1*sin(x_2 + 0.667)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1328,x_2*(2.23*x_1**5 - 5.40552*x_2 - 8.88667488),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1329,log((x_1 - 0.069)**3)**2.956,"{'x_1': {'max': 10, 'min': -10}}",500 +1330,asin(x_1 + 2.709*x_2 + 1.680677)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1331,10.6612027268972*x_1**(3/2) - 0.799*x_1 + 1.133781,"{'x_1': {'max': 10, 'min': -10}}",500 +1332,6.56823084507969*x_1**2*tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1333,log(x_1*(0.146998403874207 - 0.355068608391804*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1334,asin((x_1 + 0.592*x_2)*tan(sin(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1335,sin(2*x_1 + log(x_1 + 0.932) - 0.914)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1336,sin(18809.9680222243*(0.648929266709929*x_1 - 1)**22.76 + sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +1337,0.19889067044082*exp(x_1 + x_2*x_3 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1338,(x_1**2*(4.490161*x_2 - 8.558246866) + asin(4.128*x_1))/(x_2 - 1.906),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1339,tan(exp(cos(x_1**2)) + 0.108) - 1.427,"{'x_1': {'max': 10, 'min': -10}}",500 +1340,x_2 + 2.984*sin(3.719*x_1) + asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1341,-sqrt(x_2 - 0.401) + sqrt(tan(sin(1.536*x_1) + 1.23)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1342,0.646*exp(x_1*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1343,x_1 + 74.405973816*x_2*x_3 + x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1344,0.810505022905261/(0.371195248700817*x_1 + 1)**0.212,"{'x_1': {'max': 10, 'min': -10}}",500 +1345,x_1*sin(x_1 + x_2 - 1.89),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1346,x_1 - x_1/(sqrt(x_1) - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1347,1.2409215277794*x_1*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1348,x_1*x_3**4/cos(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1349,tan(sin(cos(cos(log(x_1 + 0.275))) - 1.465)) - 1.454,"{'x_1': {'max': 10, 'min': -10}}",500 +1350,asin(1.491*x_2 + asin(x_1 - 1.805) - 0.573),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1351,5.07304888570382*(0.815660685154976*x_2 + 0.815660685154976*cos(2.083*x_1) + 1.0)**7.97,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1352,exp(x_1**9.228/(asin(log(x_1)) - 0.465)**8.196),"{'x_1': {'max': 10, 'min': -10}}",500 +1353,x_1**2 - log(x_2)**(3/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1354,2.519*x_1 + 
log(x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1355,x_1 + x_2/sqrt(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1356,3.189*(x_1 - 0.344)*asin(exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1357,2*x_1*x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1358,2*x_1 - (x_1 + x_2)**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1359,log((x_2 - 0.735)*cos(x_1 + 1.171)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1360,3*x_1 + asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1361,x_1*(x_2*exp(x_1) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1362,4.571*asin(x_1/tan(1.277*x_2 + 2.075125)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1363,x_1 - tan(3.315*x_1*x_2) - 1.445,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1364,1.355*exp((2*x_1*x_2 + 2*x_1 - 2.962)/x_2) - 0.09756,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1365,(exp(x_2) + log(x_1))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1366,exp(sin(cos(exp(x_1)) - 0.742)),"{'x_1': {'max': 10, 'min': -10}}",500 +1367,sin(2.557*x_1 + sin(x_2 + 1.53) - 2.654166),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1368,x_2**3.545 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1369,sin(x_1*x_2*asin(x_2)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1370,log(x_1 + exp(2.85*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1371,1.606*tan(2.747*x_1) + 2.73823,"{'x_1': {'max': 10, 'min': -10}}",500 +1372,cos(1.561*x_1)/sin(4.496*x_1 + 1.339808),"{'x_1': {'max': 10, 'min': -10}}",500 +1373,x_1 + 12.895281*x_2**2 + x_2 - 1.084,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +1374,tan(3.753*tan(x_1 + 0.97))/x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +1375,exp(0.361925443358668/x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1376,(x_2 - 1.854)*log(0.279876854184159/(x_1 - 1.935)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1377,x_1*(x_2 - 0.139)*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1378,sqrt((1.025*x_1 + x_2 + 0.749275)/x_2)/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1379,x_1 + x_2**3 - 3.646*x_2 - 1.607,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1380,x_1*x_3**2*sin(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1381,x_1**4 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1382,sin(x_1 + cos(2.708*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1383,x_2 + log(x_1**4)**11.952,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1384,x_1**3 - x_1 + asin(x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1385,sin(x_1 + sin(x_1)*sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1386,4.903*x_1*(x_1 + x_2**2 + x_2 + 0.979),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1387,x_2*(0.590964104681563*exp(x_1) - 1.682)*sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1388,x_1 - x_2 + cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1389,x_2 + log(sin(x_1)) + 1.49649315341797,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1390,x_1 + 101.978472448*x_1**8.168*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1391,x_1**2*sin(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1392,(x_1 + 0.055)*sin(log(cos(x_1 + x_2 - 0.508))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +1393,x_1*(x_1 - x_2 - exp(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1394,(x_1 - x_2)**2/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1395,x_1**4*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1396,-x_2**3 + log(x_1) + 4.796*cos(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1397,x_1**0.714*(x_1 + x_2 - 0.715)/(x_1**0.714*x_3 + 1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1398,x_1 + x_2**3 + x_3 + x_4,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1399,(x_1*(1 - x_2**5))**7.146,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1400,exp(x_1**3/(x_2 + 0.188)) + 1.368,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1401,sin(1.55512540746956*x_1**2)/(x_1 + 0.168*cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1402,205.10985212854*x_1**3*x_2**11.604,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1403,x_1*(x_2 + 1.408),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1404,1.504*x_1 + Abs(x_1) - 1.78224,"{'x_1': {'max': 10, 'min': -10}}",500 +1405,1.714201025625*x_3*(0.579710144927536*x_1 + 1)**2*(x_2 - 1.456),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1406,tan((x_1 + 1.19)*(x_2 - 0.897)*(x_1 + x_3 - 0.633)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1407,log(sin(tan(x_1))/log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1408,x_1*x_2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1409,0.575881913151408*x_3**2*cos(-x_1 + x_2 + 1.993),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1410,227.962768850566*(0.33178500331785*x_1 - x_2 + 0.0053085600530856)**4.921,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1411,2*x_1 + x_2 - 0.674*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1412,x_3*(x_1 + 0.537*x_2 - 1.054131),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1413,sin(x_1**3 + x_1 - 1.02),"{'x_1': {'max': 10, 'min': -10}}",500 +1414,4.328*x_1 + x_1/(2*x_2) + 1.99088,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1415,sqrt(x_1)*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1416,x_1*x_2*tan(x_1) + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1417,sqrt(x_1/x_2 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1418,x_1**2/sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1419,x_1 + 1.06837606837607*x_2**2/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1420,-x_1/(sqrt(x_1) - x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1421,0.447*exp(sqrt(x_1) - (x_1**4.723 - x_2)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1422,2*x_1 + exp(2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1423,log(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1424,-1/x_1**1.181 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1425,343.482714410759*x_1**3*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1426,(x_1 - x_2)/log(x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1427,0.797*cos(x_1**(-72.807171)),"{'x_1': {'max': 10, 'min': -10}}",500 +1428,(exp(3.992*x_1) - 0.994)*(log(x_2*exp(x_1)) + 0.438),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1429,x_1 - 1/(x_1 + x_2 + 0.021)**1.203,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1430,x_1 + x_2 + log(x_1) - 0.18631618487007,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1431,x_1 - x_2 + asin(0.523*x_1 - 0.801236) + 1.706,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1432,0.810083272880017*x_1**2*x_2**(3/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1433,2.199*x_1 - sin(x_1 + x_2 + 1.726),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1434,x_1**20 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1435,exp(x_1)*tan(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1436,0.194523532209579*(0.509424350483953*tan(x_1/x_2) - 1.0)**0.312,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1437,4.143*x_1*cos(log(x_1 - 1.824)),"{'x_1': {'max': 10, 'min': -10}}",500 +1438,x_1 + cos(tan(x_1**2)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +1439,3.15657677478807*(0.744047619047619*x_1 + 1)**3.888,"{'x_1': {'max': 10, 'min': -10}}",500 +1440,73023.8038909754*x_1**8.438,"{'x_1': {'max': 10, 'min': -10}}",500 +1441,exp(x_2 + cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1442,(x_1 + 1.982)*(x_1 + exp(x_2**2) - 1.011),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1443,sqrt(x_1*(x_2 + cos(tan(x_2 + 1.836)))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1444,x_1 - sqrt(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1445,0.374334135394937*sqrt(x_1**5),"{'x_1': {'max': 10, 'min': -10}}",500 +1446,tan(sin(2*x_1 - x_2)),"{'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +1447,x_1**2*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1448,x_2*x_3*log(2*x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1449,x_2*tan(x_1 - 0.133)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1450,1.393668613*x_1**3*(0.895255147717099*cos((x_2 - 1.698)*log(x_2)) - 1.0)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1451,x_1**1.578*tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1452,x_2 + 4.099*tan(x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1453,exp(3.802*x_2 + asin(sin(x_1))**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1454,x_1/sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1455,x_1*(cos(4.065*x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1456,-x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1457,1.05782796332863*x_2*sqrt(cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1458,tan(x_1 + 4.008*x_2 - 2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1459,(4.893*x_1 - 3.694215)*cos(log(x_1) + 0.997317570516528),"{'x_1': {'max': 10, 'min': -10}}",500 +1460,(x_1 - x_2*x_3**2.873)**2/x_2**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1461,x_2*x_3*x_4 + cos(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1462,(tan(x_1 + x_2) + 1.975)*tan(x_3 - 0.067),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1463,0.976*x_1 + cos(2*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1464,(3.54*x_1 - 1.18944)*(4.41*x_1 - 
cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1465,12.166144*x_1*x_2*exp(-x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1466,sin(sin(log(4.645*tan(x_1 + 1.202) - 7.90579))),"{'x_1': {'max': 10, 'min': -10}}",500 +1467,sqrt(x_2)*(x_1 - 5.03332120165145*x_2**4.18),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1468,3.871*x_1*(x_1 - 1.087)/(x_2*(x_1 - 1.087) + 0.465766185374942*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1469,tan(2*x_1**2.42),"{'x_1': {'max': 10, 'min': -10}}",500 +1470,x_1 - 7.171684*x_3*(x_2 - 0.693)**3.58,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1471,((x_1 + x_2)/x_2)**0.08,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1472,tan(sin(x_1 - 0.011))**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1473,x_1**3.993*(log(x_1) - 0.16369609267079)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1474,-2.027*x_1 - 2.876*x_2 + 0.395265,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1475,asin(0.817216*(x_1 + 0.844)*(x_2 + 0.925) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1476,(x_1**2 - log(x_2))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1477,exp(x_1**2/x_2) - 1.991,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1478,1.703*x_1 - tan((x_1 - x_2**2)**2) - 0.816,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1479,log(x_2)*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1480,cos(sin(exp(x_1))) + 1.863,"{'x_1': {'max': 10, 'min': -10}}",500 +1481,cos(log(x_1 - x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1482,3.68427545352602*(x_1 + 0.384322005801906*x_2)**0.8285,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1483,6.06768375340955*(x_1 + 0.290613193839*(x_2 + tan(x_1))**1.834)**1.459,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1484,log(x_2 + cos(sin(x_1 + 0.814)**3)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1485,x_3*log(x_1 + x_2 - 0.462*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1486,4.924*x_1 - x_2*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1487,x_1*exp(-1.263*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1488,2.6721481567802*(x_1/2 - x_2**3.115 - 0.2115)**1.418,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1489,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1490,3.917*exp(1.20954536913669*sqrt(tan(4.599*x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +1491,cos(x_1)/x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +1492,x_1*(-3.39525*x_1 + 0.75*log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1493,3.421*sin(1.58508245411955*sqrt(0.398012106823671*x_1*log(x_1) + x_1 - 0.743088603439794)),"{'x_1': {'max': 10, 'min': -10}}",500 +1494,4.449*x_1 + exp(x_2/2) - 1.306,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1495,x_1/(tan(x_2**1.9165) + 0.279),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1496,sqrt(x_1 - tan(x_1))*Abs(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1497,x_1 + 1.235,"{'x_1': {'max': 10, 'min': -10}}",500 +1498,x_1**2*(-x_1 + x_2 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1499,x_1**7,"{'x_1': {'max': 10, 'min': -10}}",500 +1500,x_1*x_2 + tan(log(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1501,x_1**4*sqrt(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1502,1.428*x_1*cos(x_1**2*(x_1**2 + 1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1503,x_1*x_2*(x_3 + 1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +1504,(x_2 + 1.985)*sin(3.689*x_1)/(x_1 - 1.363),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1505,x_1 - 0.781762420836783*x_2**2/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1506,tan(tan(x_1**2)) + 0.969,"{'x_1': {'max': 10, 'min': -10}}",500 +1507,sin(x_1**3*(x_1 + x_2)**3) - 0.8,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1508,x_2*x_3 + exp(x_1 - x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1509,tan((x_2 + 0.7)**6*exp(2*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1510,x_1/x_2 + asin(x_3) - 1.392,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1511,cos(tan(x_2*(x_1 + 0.403)) + 0.923),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1512,2.07797978815964*sqrt(x_1 - 0.032) + sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1513,x_1 + 1.285*x_2 - sqrt(x_1 - 0.607) + 0.57054,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1514,asin(x_1 + exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1515,x_1**1.709*(cos(x_1) - 1.605),"{'x_1': {'max': 10, 'min': -10}}",500 +1516,(x_1**0.437 + x_2 + 1.474)*log(x_1 + 0.502),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1517,tan(tan(sqrt(x_1)) - 0.461) - 0.99,"{'x_1': {'max': 10, 'min': -10}}",500 +1518,1/sqrt(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1519,sin(tan(x_2*(x_1 - 1.464))) + 0.276,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1520,2*x_1 - asin(3.32*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1521,4.25034433646899*x_2*exp(asin(0.244*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1522,x_1 - x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1523,x_1/sin(x_1*x_2) + x_2,"{'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1524,x_1**3 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1525,0.223353401910856*exp(exp(x_1*(x_2 - 0.035) + tan(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1526,(x_1 + asin(x_2) - 0.366)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1527,3.522*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1528,0.502*x_1 + 0.231422,"{'x_1': {'max': 10, 'min': -10}}",500 +1529,exp(sin(1.773*x_1 - 1.905975)**1.42),"{'x_1': {'max': 10, 'min': -10}}",500 +1530,(x_1 + tan(exp(x_1)) - 0.284)**9.242,"{'x_1': {'max': 10, 'min': -10}}",500 +1531,cos(4*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1532,0.384*(x_1 + 1.837)*(x_2 - exp(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1533,43.7492645136247*x_2**3*x_3*(x_1 - 0.379),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1534,x_1*(tan(x_1) + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +1535,x_1*x_2/(sqrt(x_3) + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1536,tan(4.016*x_1**2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1537,55.4177033988281*x_1*x_2*(3.476*x_1 - cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1538,-x_1 + x_3*(x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1539,0.256*cos(x_1 + log(x_1 - x_2 + 0.579)) + 0.375552,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1540,-2.043*x_2*(x_1 - tan(x_1 - 0.614) + 1.102),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1541,x_2**3*(x_1 + x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1542,exp(x_2*(asin(x_1) + 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +1543,(4.646*x_1 - x_2)*(x_3 + x_4 - 1.018),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1544,x_1 + log(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1545,0.127880620816*asin(x_1/tan(2*x_1))**3.822,"{'x_1': {'max': 10, 'min': -10}}",500 +1546,x_1*x_2**2*log(cos(x_2) + 1.322),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1547,cos(exp(6*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1548,7.83556849405325*sqrt(-x_1**2 + 0.0162876705902668*x_1 - 0.0457032036762885 + 0.0320607973789164/x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1549,x_1*x_2**2*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1550,log(sin(x_1) - 1.053) + tan(sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1551,x_1*cos(x_2) + exp(2.703*x_2) + 0.213,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1552,sqrt(x_2)*Abs(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1553,0.799*x_1 + asin(2*x_1) + 1.025117,"{'x_1': {'max': 10, 'min': -10}}",500 +1554,x_1 - x_2 - log(x_1) - 0.113,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1555,747.002746357166*x_1**3/(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1556,3.277*sin(x_3 + cos(x_1 + x_2 - 0.879) + 1.778),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1557,cos(x_1**2*x_2 + x_1 + 1.42) - 1.256,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1558,(1.023*x_1 - 1.284888)/(2*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1559,(x_1 - 1.265)/(x_2 + log(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1560,1.15844723660597*sqrt(x_1)*x_2 + x_1 - 
1.657,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1561,(-x_1 - 0.248)*(2.997*x_3 - cos(4.336*x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1562,x_1**4.518,"{'x_1': {'max': 10, 'min': -10}}",500 +1563,sqrt(x_1)*exp(x_2*x_3/2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1564,(x_1 + x_2)*exp(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1565,(x_1 + x_2)**3.357 + 3.727*sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1566,0.696*sin(x_1**6),"{'x_1': {'max': 10, 'min': -10}}",500 +1567,((x_1 + x_2**7.06*x_3 - 0.078)/x_2**7.06)**9.704,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1568,1.683*x_1*(x_2 - 0.0700000000000001),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1569,x_1 + x_2 + log(x_3)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1570,(x_1*(x_2 + 1.995) - x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1571,x_1 + exp(3.42122953628967*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1572,asin(2.402*x_1**0.665*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1573,asin(x_1**3)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1574,log(sin(cos(x_1) - 1.851)),"{'x_1': {'max': 10, 'min': -10}}",500 +1575,x_1*sin(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1576,3.498*x_1 + x_2 + tan(3.651*x_1 - 3.950382),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1577,1.16550116550117/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1578,(tan(x_1**0.5055) + 1.482)*asin(x_1 + 0.999),"{'x_1': {'max': 10, 'min': -10}}",500 +1579,1.35636155167057*x_1*(x_1**2 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1580,0.899*x_1 + x_2*cos(0.487*x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1581,-x_3*(-2*x_1 + x_2 + 1.438),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1582,cos(x_1**0.66*x_2) - 1.431,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1583,(0.429025*x_1*x_2*x_4 + x_3 + 0.595)/x_4,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1584,x_2*exp(cos(cos(exp(x_1)))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1585,0.434312*x_1 + x_2 - sqrt(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1586,2*x_1 + x_2**2*(x_1 - 0.841),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1587,x_2*cos(3.204*x_1 + 6.084396)**0.029,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1588,x_1 - x_2 + cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1589,sqrt(x_1**2 - 0.768371920526803*x_1**1.891),"{'x_1': {'max': 10, 'min': -10}}",500 +1590,(x_1 + 1.331)*cos(x_3 + log(x_2) + 1.851),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1591,x_1**2*(exp(x_1) + 0.737),"{'x_1': {'max': 10, 'min': -10}}",500 +1592,x_1*x_2*(x_1**4 + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1593,sin(x_1)/(x_2**1.477*(x_1 + 0.4)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1594,2.918*x_1*tan(1.136*x_2) + 3.898*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1595,x_1*x_2 + cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1596,sqrt(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1597,0.777283954010504*x_1*x_2**1.318 + x_1 - 0.512,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 
'min': -10}}",500 +1598,(x_1 + 0.197)*(log(x_1) + 1.50607539943895),"{'x_1': {'max': 10, 'min': -10}}",500 +1599,5.34756684034898*x_1**(3/2) + 2.01*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1600,x_1*(x_3 + 1.83) + cos(x_1/x_2) - 0.283,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1601,43.4222244*x_1*(x_1 - 0.287935502447452*x_2 + 0.838)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1602,log(sin(x_1*sqrt(x_2*(x_1 - 1.985)))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1603,(4.005*x_1*(x_1**2 + x_2) + x_2)/(x_1**2 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1604,x_1 + x_2 + 1.149184*(0.932835820895522*x_3 + 1)**2 - 0.629,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1605,(x_1**2 - sqrt(x_2))**1.6845,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1606,x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1607,exp((x_1 - 1.881)*exp(x_2)/x_2) - 0.769,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1608,1.45636533878007*x_1*sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1609,(x_1 - 1.308)*(x_1 + x_2 + tan(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1610,(x_1*x_3*(x_1 + 4.131) - x_2)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1611,log((0.813008130081301*x_1 + 1)**5.859) + 5.32073085529801,"{'x_1': {'max': 10, 'min': -10}}",500 +1612,0.853*(x_2 + 1.385)*(x_1*(x_2 - 1.266) + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1613,2.285*x_2*asin(2*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1614,(x_1 
+ x_2 - 0.828*x_3)*tan(1.775*x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1615,log(x_1 - 2.50649157189886*sqrt(0.571428571428571*x_2 - 1) + 4.012099469*(0.629326620516048*x_2 + 1)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1616,x_1*asin(tan(0.533*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1617,tan(sqrt(x_1**3)),"{'x_1': {'max': 10, 'min': -10}}",500 +1618,x_1**0.378/x_2 + x_1 - 0.895,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1619,x_1*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1620,x_1*x_2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1621,tan(sqrt(x_1) + 9.3443261196518*x_1**1.544),"{'x_1': {'max': 10, 'min': -10}}",500 +1622,log(log(x_1))**0.452,"{'x_1': {'max': 10, 'min': -10}}",500 +1623,tan(x_1**2.402),"{'x_1': {'max': 10, 'min': -10}}",500 +1624,0.313172851392578*x_1*exp(x_1)/(x_2 + 0.944),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1625,2.454*x_1 + x_2 + (x_1 - 0.591)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1626,x_1**3.372 - 3.304*exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1627,exp(x_1**4*x_2**3.676) - 0.878,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1628,x_1 + (0.549*x_1 - x_2**5)**(-0.054),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1629,-x_2 + log(x_1 + x_1**3.226),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1630,10.0336983555133*x_1**4/sin(x_1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1631,exp((0.00626324266460133*x_2 - 0.00436548013722713)*cos(2*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1632,x_1/sin(x_2*asin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1633,1/(cos(x_1 + 1.25) - 1.227),"{'x_1': {'max': 10, 'min': -10}}",500 
+1634,x_1**2*(x_1 - tan(0.81*x_1 + 0.56133))**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1635,x_1**2 - exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1636,sin(4.404*tan(x_1 + 3.461*x_2)) + 0.574,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1637,-x_2 + exp(sqrt(x_1 - 0.476)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1638,sqrt(x_1) - sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1639,1.61616830806695*x_1*sqrt((x_1 - 0.507)*log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1640,-8*x_1**3 + tan(cos(x_1)) + 1.409,"{'x_1': {'max': 10, 'min': -10}}",500 +1641,-x_1 - x_2 + cos(x_1) + 1.517,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1642,x_1**3*(1 - x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1643,4.668*tan(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1644,x_2/(x_2 + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1645,7.604617384*x_1*(0.574052812858783*x_2 + x_3*(0.574052812858783*x_1 - 1))**2/x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1646,3.865*x_2 + 1.14367827643966*sqrt(x_1 + 0.116892653938964*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1647,3.87415188085638e+41*(x_1*(x_1**3 + 0.00261532127699318))**16.104/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1648,-x_2*log(x_1) + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1649,log(x_1**2*x_2) + 1.93016179208717,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1650,x_1 + x_2 + asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1651,x_1*(4.935*x_2 + x_3 + 9.49494)*cos(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1652,1.459*x_1 + cos(x_1 + 1.797)*asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1653,129913.405319909*(0.586166471277843*x_1 + 1)**5,"{'x_1': {'max': 10, 'min': -10}}",500 +1654,exp(4.402*x_2 + 1.995*sin(x_1 - 0.011)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1655,cos(log(x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +1656,x_2*asin(1.278*x_1)**2/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1657,130.092712786482*x_1**6/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1658,x_1*log(x_3**4)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1659,x_1*x_2 + x_1 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1660,x_1**3*sqrt(sin(x_1 - 0.451)),"{'x_1': {'max': 10, 'min': -10}}",500 +1661,0.512650475670744*exp(1.339*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1662,x_1*(1.924*x_1 + x_2 + 0.973544),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1663,x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1664,sin(x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1665,x_1*(1 - log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1666,0.261574679571018*x_1**2/tan(2*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1667,x_1*(x_1 + x_2 + cos(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1668,x_1*log(asin(asin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +1669,2.047*x_1 + tan(7.3439775749e-5*x_1**5) + 2.732745,"{'x_1': {'max': 10, 'min': -10}}",500 +1670,x_1*asin(4.639*x_1)/(x_2 + 0.639),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1671,x_2 + log(cos(3.911*x_1) + 1.929),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1672,sin(exp(tan(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 
+1673,1.696*x_1/cos(sqrt(x_2) + log(x_1 - 0.879)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1674,4.005*x_1 + sin(x_2**0.316635),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1675,x_1*(x_2**2 + cos(x_1)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1676,(x_1 - x_2 - 0.357)/(x_2 + 1.351),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1677,x_1 + x_1/tan(0.447561*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1678,cos(3.202*x_1 - 1.050256),"{'x_1': {'max': 10, 'min': -10}}",500 +1679,4.268*x_1 + cos(cos(x_1 + 0.182)) - 7.34096,"{'x_1': {'max': 10, 'min': -10}}",500 +1680,x_1 + x_1/(x_2 + sqrt(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1681,4.026*x_1 + sin(3.225*x_1) + 0.12078,"{'x_1': {'max': 10, 'min': -10}}",500 +1682,cos(x_1*(2*x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1683,1.055*x_1 - cos((x_1*x_2)**2.096),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1684,x_1 + x_1**5.974,"{'x_1': {'max': 10, 'min': -10}}",500 +1685,asin(sqrt(x_2) + 0.871*exp(3*x_1) + 1.129687),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1686,18409.4597578076*x_3**5*(x_1 + 0.916)**20*(0.611995104039168*x_2 - 1)**20,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1687,x_1**6/x_2**6,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1688,x_1*x_3/x_2**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1689,-x_1 + 1.953 + 0.7825406515973/sqrt(0.612369871402327*x_1 + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +1690,tan(tan(1.022*x_1)*asin(3.688*x_1))**5,"{'x_1': {'max': 10, 'min': -10}}",500 +1691,x_1 + asin((x_2 - 0.396)**2) - 2.055,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1692,x_1 + x_2 + (x_1 - 1.323)*(x_2 + 0.228) - 0.086,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1693,asin((x_1 + x_2)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1694,(x_1 - 1.573*x_2)*log(4.888*x_3 + 6.080672),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1695,x_1*x_2*exp(sin(cos(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1696,(x_1*(x_1 + 0.412) - x_2**6.492)/(x_1 + 0.412),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1697,x_1 + x_2**8,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1698,(x_3 + 0.572)*(x_1 - x_2 + 1.305)/x_1,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1699,(-2.468*x_1 + (x_1 - x_2)*(3.038*x_3 + 2.181284))/(x_1 - x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1700,15.264649*x_2*(x_1 + 0.834),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1701,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1702,0.9492799761*x_1**2 + x_1 - x_2 + 1.509,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1703,sin(x_2) + cos(x_1 + x_1**2.47),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1704,Abs(x_1 + tan(1.765*x_1 - 1.745585) - 0.803),"{'x_1': {'max': 10, 'min': -10}}",500 +1705,log(2*x_1 + x_2 + x_3 - 0.058),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1706,2*x_1 - 0.828 + sin(x_2)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1707,0.282485875706215*x_1**2/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1708,1.62573060498965*sqrt(x_1/log(4.904*x_1)**2)*Abs(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1709,sqrt(cos(3.344*x_1 + x_2 
+ 4.415*x_3) + 0.238),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1710,log(x_2*x_3) + sin(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1711,2.71*sin(tan(x_1 + asin(0.743*x_1) - 0.104)),"{'x_1': {'max': 10, 'min': -10}}",500 +1712,sin(x_1)**4,"{'x_1': {'max': 10, 'min': -10}}",500 +1713,-x_1 + tan((x_1*x_2 + x_1 - 1.938)/x_2) + 0.719,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1714,x_1*tan(tan(2.017*x_2)) - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1715,x_3*(x_1 + x_2 + 1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1716,-x_2**0.973 + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1717,(-x_1 + 0.824*x_2 + 0.7828)*exp(3.249*cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1718,x_1**2 + cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1719,3.758*x_2 + tan(cos(tan(x_1 - x_2))) + 6.102992,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1720,3.698*cos(x_1**6) + 3.516798,"{'x_1': {'max': 10, 'min': -10}}",500 +1721,x_1**2*x_2*asin(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1722,0.74*sin(0.145*x_1) - 0.97236,"{'x_1': {'max': 10, 'min': -10}}",500 +1723,x_1*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1724,x_2 + 1.54647407005522*sqrt(x_1**2.757) + 0.318,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1725,(x_1**6.2988 + x_2)/x_1**5.7988,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1726,sin(x_1) - tan(x_1**3),"{'x_1': {'max': 10, 'min': -10}}",500 +1727,exp(3.699*asin(log(x_1*(x_1 - x_2)) - 1.6194882482876)) + 1.432,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 
10, 'min': -10}}",500 +1728,1.901*x_1 + x_2*x_3 + 2.332*x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1729,cos((2.814*x_1 - x_2 + 0.238684)*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1730,(2.387*x_1 - 1.021636)*cos((x_1 - 0.703)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +1731,x_1 - (x_1 - 0.777)**3 + 4.395*tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1732,x_1*(x_2 + x_3 + 1.883)*sin(3.962*x_1 + 4.671198),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1733,3.741*cos(1.05877287460531*sqrt(x_1) + 0.623*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1734,log(x_1)/x_1**1.737,"{'x_1': {'max': 10, 'min': -10}}",500 +1735,-x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1736,-exp(14.2884*x_1*x_2) + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1737,1/(x_1 + 0.687),"{'x_1': {'max': 10, 'min': -10}}",500 +1738,3.499*x_1 - x_2**2*(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1739,x_1**3.053*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1740,6.4365971010148*exp(x_1) - 1.773,"{'x_1': {'max': 10, 'min': -10}}",500 +1741,sin(exp(1.70952399845066*x_1**1.716)),"{'x_1': {'max': 10, 'min': -10}}",500 +1742,exp(x_1*cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1743,0.655*cos(sin(x_1**3.702)),"{'x_1': {'max': 10, 'min': -10}}",500 +1744,(0.527992360070905*x_1 - 0.672134274370262)/(0.51150895140665*x_1 - 0.51150895140665*x_2 + 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1745,x_1**1.287*exp(2.37*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1746,x_1**4*x_2*log(tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1747,-x_1*tan(x_1)/(2.019*x_1 - 
x_2*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1748,2.748*x_2 + (x_1 + x_2)**3.957,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1749,cos(x_1*sqrt(sqrt(x_1) + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1750,x_1 - x_2 + log(x_1)**2.188 - 0.592,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1751,log(x_1 + sqrt(x_2) + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1752,x_1*(x_1 + log(x_1 - x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1753,cos(x_1 - 2.965284*x_2**0.678*(0.580720092915215*x_1 - 1)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1754,21.968489916396*x_2*(x_1 - 0.889)/log(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1755,x_1 + tan(x_2)**6.264,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1756,x_1 + x_2 + x_3 + log(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1757,3.73*cos(log(x_1 + 0.512*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1758,x_1*x_2*log(x_1/x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1759,39.807813264*x_1 - 1.393*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1760,sin(x_1 + log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1761,26.1447339005625*x_1*x_2 + log(x_3) + 0.198850858745165,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1762,x_1**11.766 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1763,(x_2*(x_1 - 0.422) + sin(x_1)**5)**4/x_2**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1764,4.839*x_1 - cos(20.277009*x_1**2),"{'x_1': {'max': 10, 
'min': -10}}",500 +1765,x_1*(x_1**2 + 2),"{'x_1': {'max': 10, 'min': -10}}",500 +1766,log(asin(sqrt(x_1) + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1767,-x_2 + log(cos(x_1)/x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1768,x_1**2*(x_2 - 0.945)**2/x_2**2 + x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1769,x_1/(-x_2 + (x_2 - 0.499)**4.222),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1770,-tan(x_1*(x_1 - sin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +1771,x_1*x_2*log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1772,x_1 + x_2**2*sin(x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1773,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1774,x_1/(x_4*(x_2 + 0.777)*(x_3 + 1.58)),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1775,sin(10.923025*asin(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1776,2.0*x_2 + 0.749*cos(x_1) + 0.835884,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1777,x_1**10*x_2**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1778,x_1/tan(3.505*x_2) - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1779,cos(x_2 - 0.780475445096563*exp(0.172*cos(x_1))) - 1.885,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1780,1.88684941432194*(x_1 + x_2)**7.852776/x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1781,tan(x_1)**2/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1782,x_1 - sin(4.579*x_1*exp(cos(x_1))) - 0.801,"{'x_1': {'max': 10, 'min': -10}}",500 +1783,x_1**0.724/x_2**2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': 
{'max': 10, 'min': -10}}",500 +1784,(x_1 - 0.171)**2*asin(3.553*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1785,3.395*x_2 + log(sqrt(x_1)*(2.898*x_2 + 4.952682)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1786,cos(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1787,tan(x_1 + 1.409),"{'x_1': {'max': 10, 'min': -10}}",500 +1788,(1.039*x_2 - 1.451483)*cos(1.96035711032454*sqrt(x_2) - cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1789,-(x_1 - x_2)*(x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1790,2*x_1 - x_2 - x_3 + 0.146,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1791,x_2**(3/2)*(x_1 - 0.555),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1792,x_1 + sin(log(x_2 + 0.594)) - 3.25,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1793,sqrt(log(x_1**2)**5),"{'x_1': {'max': 10, 'min': -10}}",500 +1794,-1.12*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1795,x_1*(x_1 - x_2 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1796,x_1*(-x_1 + log(x_1) + 1.60382217364846),"{'x_1': {'max': 10, 'min': -10}}",500 +1797,1.06*sin(2.824*x_1*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1798,log(x_1) + 4.846*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1799,x_1*sin(2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1800,-2*x_1 + sqrt(log((x_1 + 0.162)**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +1801,tan(x_1)/log(4.551*cos(x_1))**4,"{'x_1': {'max': 10, 'min': -10}}",500 +1802,3.175*(x_1 + 1.103)*log(x_2 + 1.394),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1803,x_1 - sin(x_1 - sqrt(log(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +1804,7197.14604492527*x_1**15.726,"{'x_1': {'max': 10, 'min': 
-10}}",500 +1805,log(x_1**5)**(1/4),"{'x_1': {'max': 10, 'min': -10}}",500 +1806,1.04450945424156*sqrt(0.916590284142988*x_1 + 0.916590284142988*x_2 - 1)*asin(x_1 + 0.718),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1807,log(x_1**2*asin(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1808,2.862864*x_3*(0.591016548463357*x_1 - 1)**2*cos(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1809,log(x_1 - x_2**0.683 + 0.98),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1810,4.054*exp(2*tan(0.7*tan(x_1 - 1.922))),"{'x_1': {'max': 10, 'min': -10}}",500 +1811,log(x_2/(x_1 + 1.172)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1812,4.56*x_2 + exp(2.831*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1813,0.873321080067088/(-x_1**2 + 0.903583975480345*x_1 + 0.903583975480345*x_2)**1.336,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1814,-x_1**2 + log(x_1 + 0.029),"{'x_1': {'max': 10, 'min': -10}}",500 +1815,2.13401030925345*sqrt(x_1 + 0.219587176108915*x_2) + 2.538*tan(x_1) + 3.880602,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1816,x_1*(x_1 + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +1817,(x_1 - 1.523)/cos(1.297*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1818,sin(tan(x_1**4 + 2.403*tan(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +1819,sin(2*x_2 + 3.25)**4 + cos(x_1) + 1.341,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1820,4.028*x_1 - x_2**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1821,2*x_1 + cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1822,1.565*x_1 + x_2 + sin(1.748*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1823,cos((x_1 + x_2)*asin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1824,4.12*x_1*cos(0.809*x_1) + 
4.326*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1825,sqrt(sin(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1826,sin(log(x_1 + sqrt(x_1**2.087) - 0.387)),"{'x_1': {'max': 10, 'min': -10}}",500 +1827,log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1828,109816300.48595*(0.287769784172662*x_1 + x_2)**14.863842,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1829,x_2*cos(3.091*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1830,log((x_1 + 1.442)*(x_2 + 1.326) + sin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1831,10.0368995749185*x_1**(3/2),"{'x_1': {'max': 10, 'min': -10}}",500 +1832,x_1**9.631224*x_2/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1833,175.166389681936*(0.274876305662452*x_1 + x_2 + 0.274876305662452*sin(exp(2.518*x_2)))**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1834,x_1*x_2/(x_3 + (x_1 - 0.074)**3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1835,sin(log(tan(x_1))) + 0.173,"{'x_1': {'max': 10, 'min': -10}}",500 +1836,log((x_1 - 0.149)*(x_1 + x_2)) + 0.254642218373581,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1837,x_1**(3/2)*(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1838,x_1*(23.5225*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1839,0.0582824958137555*x_1**12.3225,"{'x_1': {'max': 10, 'min': -10}}",500 +1840,x_1 + x_2*log(x_1)**6 - 1.583,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1841,log(x_1) - log(cos(x_1)) - 1.10260430993765,"{'x_1': {'max': 10, 'min': -10}}",500 +1842,x_1 - 95.318738432*x_2**1.533 - x_3 + 1.557,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1843,x_1 + 
exp(x_3)*tan(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1844,cos(6.9404446864183*x_1**3.108434),"{'x_1': {'max': 10, 'min': -10}}",500 +1845,exp(0.147*x_1 + x_2 + x_3 + x_4),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1846,sqrt(tan(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1847,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1848,exp(x_1 + (x_1 + 0.512)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1849,0.536333022612924*exp(x_1)/sin(4.241*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1850,4.913*x_1 + sin(x_1 + x_2 - x_3 - 1.784),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1851,-x_1 + log(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1852,x_3 + exp(x_1)*sin(x_2) - 0.608,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1853,tan(x_1 + log(x_1 + 0.938) - 1.269),"{'x_1': {'max': 10, 'min': -10}}",500 +1854,tan(x_1 + 2.422300607*x_2**3 + 0.15) + 1.694,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1855,x_1 - sin(log(x_2 - 1.194)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1856,x_1**5*x_2**5 + cos(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1857,x_1 - x_2/(x_3*x_4),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1858,4.112*x_1 + x_2 + tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1859,sin(x_1 + x_2 + 1.889)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1860,x_1 + 2*x_2*(58.703547006976*x_1 + 27.7667777342996) + 1.646,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1861,tan(1.652*exp(x_1)) + 0.769,"{'x_1': 
{'max': 10, 'min': -10}}",500 +1862,cos(x_1 + sin(sqrt(tan(2.09*x_1)))),"{'x_1': {'max': 10, 'min': -10}}",500 +1863,4.69*x_1 + exp(0.21092596498629*x_2/x_1) - 6.20018,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1864,3.782*x_1 - exp(x_1/2) + 7.329516,"{'x_1': {'max': 10, 'min': -10}}",500 +1865,x_1 + log(tan(x_2 + x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1866,12.271009*x_2**3*x_3*sin(log(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1867,sin(sin(x_1 - 2*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1868,-x_2 + exp(sin(log(x_1) + 1.57048916229553)) - 0.844,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1869,4.771*x_1*sin((x_1 + x_2*x_3 + 1.423)/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1870,x_2*x_3*(x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1871,sin(x_1**2.211/(x_2 - 0.89))/cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1872,x_2 + exp(x_1**2) - 0.566,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1873,cos(0.282300416*x_1**3)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1874,x_1*(8.116801*sqrt(x_2) + 8.116801*x_3 + 1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1875,1.264*x_1 + tan(8.548*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1876,sqrt(x_1) + 2*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1877,-x_2 + 4.074*cos(x_1 - 1.787) + 6.347292,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1878,log((x_1 + 0.823)*(asin(x_2) + 0.511)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1879,x_1**1.59*exp(-x_1/2) + x_2,"{'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1880,x_1*x_2 + x_3 + x_4,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +1881,x_2 + sin(1.1*cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1882,1.301881*x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1883,exp((x_1 - 1.361)*asin(exp(x_2) + 1.181)) + 1.13,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1884,91.440460062289*x_1*x_2/(2.471*x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1885,x_1 - x_2*log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1886,x_3*log(x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1887,8.145316*(x_1 - 0.375)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1888,94.0444961554557*x_1**3*x_2 + x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1889,sin(x_1 + 0.887)/log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1890,x_1**0.778*(6.5784904684305*x_1 - 1.42391568580747*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1891,2.133*x_1 + 2.951524*x_3*(x_2 - 0.202)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1892,x_1 - cos(cos(x_1 + 0.678)),"{'x_1': {'max': 10, 'min': -10}}",500 +1893,log(x_1 + tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1894,x_1**2.424*(x_2 - 0.546)**9,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1895,sin(x_1**2*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1896,(x_1 + 1.466)/asin(log(1.432*x_1 - 1.86876)/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1897,x_1 - 1.456,"{'x_1': {'max': 10, 'min': -10}}",500 +1898,exp(2.746*x_1*(4.0*x_1 + x_2*x_3)) - 
1.775,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1899,x_1 + x_2**3*log(x_1)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1900,0.240731824747232*log(0.265*x_1)**0.0975/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1901,76.332940488*x_1*(x_1 + 0.235737859500236*exp(0.927*x_1) + 0.695)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1902,3.206*exp(x_1) + 4.821824,"{'x_1': {'max': 10, 'min': -10}}",500 +1903,1.345*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1904,exp(x_1*x_2*exp(sqrt(x_1))) + 1.983,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1905,log(exp(3.776*x_1 + x_2 + asin(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1906,-x_1**3,"{'x_1': {'max': 10, 'min': -10}}",500 +1907,x_1 - x_2 - 1.332,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1908,3.68*x_1 - x_2*(x_1 + x_2**3 - 1.612) - 0.77648,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1909,4.640749632*(0.599520383693046*x_1*x_2 + x_3)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1910,2.506*x_1*(x_2 - 0.449)/(x_3 + 1.828),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1911,x_1**2 + sin(x_1 - 0.177),"{'x_1': {'max': 10, 'min': -10}}",500 +1912,(x_1 - 0.283)**2*exp(2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1913,x_1 + 1.12650945532148*x_2**0.0775 + x_2*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1914,2.063*exp(cos(x_1)*asin(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1915,-51.895117*x_2**3*x_3*(-x_1 + 0.268096514745308*x_2 + 0.493)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1916,x_1*(x_1**2 - x_2)**2,"{'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1917,sin(x_1) + 1.457*tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1918,-2*x_2 + sin(sin(x_1))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1919,x_1 + sin(x_2 + sin(cos(x_2 - 1.838)) + 0.616),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1920,sqrt(x_1)*log(x_1) - x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +1921,-x_1/(2.747*x_1 + x_2 - tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1922,log(x_2 - x_3 + exp(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1923,x_1 + sqrt(x_1*cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1924,log((x_1 - 0.964)**8.28311),"{'x_1': {'max': 10, 'min': -10}}",500 +1925,x_1/(x_3*(1.946*x_1 - x_2 + 1.120896)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1926,x_1*tan(cos(1.369*x_1 - 2.074035)),"{'x_1': {'max': 10, 'min': -10}}",500 +1927,(-x_2 + asin(x_1))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1928,x_1 + sin(x_1 + 0.404)**4.131 + 0.389,"{'x_1': {'max': 10, 'min': -10}}",500 +1929,0.686*exp(4.187*x_1*(x_1 + 2*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1930,52816308624.8988*x_1**5 + x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1931,(cos(x_1) + 0.748)*sin(x_3 - 1.588)/(x_2 - 1.643),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1932,x_1**5*log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1933,cos((x_1 - 1.526)*log(x_2))**0.541,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1934,log(cos(exp(sin(x_1)))**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1935,x_1*log(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1936,x_1*x_2**1.41264,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1937,log(x_1**1.074 - 2.819*sin(x_2) - 5.277168),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1938,x_1 + exp(tan(3.025*x_1 + x_2 + 2.3716)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1939,4.124*x_1*asin(4.196*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1940,9.61254501842606*x_1 - 1,"{'x_1': {'max': 10, 'min': -10}}",500 +1941,(2.651*x_1 - 4.7718)*exp(-asin(x_1 - 1.123)),"{'x_1': {'max': 10, 'min': -10}}",500 +1942,exp(2.09701952265049*x_1**2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1943,(2.974*x_1 - 4.933866)/(2*x_1 - exp(x_2) + 1.638),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1944,x_2*sin(x_1**7.3)*sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1945,0.393*exp(x_1 + exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1946,x_2*(x_1 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1947,exp(x_2) + log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1948,sqrt(cos(x_1)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +1949,asin(2*x_1)**0.384,"{'x_1': {'max': 10, 'min': -10}}",500 +1950,3.114*x_1 + x_2 + 2.00491371385404*sqrt((0.628930817610063*x_1 - 1)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1951,sqrt(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1952,-x_1**1.089,"{'x_1': {'max': 10, 'min': -10}}",500 +1953,x_1 - 2.310438248*x_2**3 - 0.506*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1954,exp(x_3*(x_1 + x_2 - 1.146)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1955,x_1/((x_2 - 1.682)*log(x_3 - 0.99)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 
10, 'min': -10}}",500 +1956,(x_1 + 1.417)/(x_2*(11.61855396*x_2*(0.641025641025641*x_1 - 1)**2 + 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1957,(x_1/x_2)**0.2355*(x_1 + 0.707)**3.895,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1958,sin(x_1**4.958540104*x_2**3)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1959,x_1**3*cos(x_1)**5.019,"{'x_1': {'max': 10, 'min': -10}}",500 +1960,-x_1 + 161.904901992234*cos(1.19373363863133*sqrt(x_1))**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1961,x_2*(45.652403224*x_1*x_2**2 - x_3 + 0.209),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1962,exp(x_3*(x_1 + x_2 - 0.91)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1963,tan(cos(2*x_1*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1964,cos(x_1*(1.0 + 3.325/sin(x_2)))**2.3275,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1965,cos(x_1 - 0.758*x_2 + x_3) - 0.534,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1966,x_1 - x_3 - log(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1967,(4.481*x_1*cos(x_1) + x_1 - 0.761)/cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +1968,exp(2*x_1) + cos(x_2 - 1.23),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1969,x_2**2.852 + 2.126*tan(x_1 + 1.535),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1970,x_1 - cos(sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +1971,-x_1 + x_2 + asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1972,x_1*(x_2 + 0.524288)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1973,x_1*asin((x_1 - tan(x_1 - 1.386))**2),"{'x_1': {'max': 10, 'min': 
-10}}",500 +1974,1.8422158350848*(0.829187396351575*x_1 + 1)**1.181,"{'x_1': {'max': 10, 'min': -10}}",500 +1975,3.508*x_1 + tan(cos(x_1 + 0.492)) + 1.15764,"{'x_1': {'max': 10, 'min': -10}}",500 +1976,x_3*(x_2*cos(x_1) + 1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1977,-2*x_1 + asin(sin(exp(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +1978,x_1*asin(1.338*x_2) + x_2 - 0.763,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1979,sqrt(x_1)*(log(x_1*x_2) - 0.0565703514883944),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1980,sqrt(tan(21.501769*x_1*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1981,3.235*exp(2.09618701455762*sqrt(asin(x_1 + 0.016))),"{'x_1': {'max': 10, 'min': -10}}",500 +1982,cos(x_1)/x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +1983,-tan(x_1/(x_1**3 - x_2**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1984,1.684*x_1 + 2*x_2*(39.69126001*x_1 + 33.97571856856),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1985,0.0254978099538771*x_1*x_2/x_3**8.296,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1986,16.0*x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +1987,sin(exp(x_1)/log(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +1988,cos(x_1/x_2 + x_2 - 1.378) + 0.231,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1989,sqrt(x_2*(x_1 - 1.135))*(1.79164728671689*x_1 - 3.14434098818814),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1990,cos((sqrt(x_1) - 3.155*x_2)/cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1991,log(x_2 + 4.787*exp(2*x_1))**0.967,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1992,sin(x_1**77.056/x_2**16),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+1993,tan(x_1**4 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1994,1.894*exp(3*x_1*asin(x_1**3)),"{'x_1': {'max': 10, 'min': -10}}",500 +1995,asin(x_1/x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1996,(x_1 + 0.00146480362219901*x_2**2.984*x_3**3.612)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1997,(x_1*(x_2**2 - 1))**7.978,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1998,x_1 + x_1*log(x_3)/x_2 + 1.489,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +1999,x_1*(1 - x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2000,(x_1 + 1.93)/log(3.289*x_2 + x_3 + 0.972),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2001,tan(x_1*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2002,x_1*x_2*(log(x_1) + 1.56108765149305),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2003,(x_1 - 0.2)*sin(x_1 + log(x_2) - 0.987),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2004,x_1 + sin(147.25890288377*(0.738007380073801*x_1 + 1)**2)**(3/2),"{'x_1': {'max': 10, 'min': -10}}",500 +2005,2*x_1 - sin(x_1*x_2) + 0.612,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2006,2.389*x_1 + sin(log(sin(0.222461772750372*exp(x_2)) + 1.438)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2007,x_2**2*(exp(x_1) - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2008,cos(x_1**3)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2009,3.006*x_1 + x_2 + tan(10.883401*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2010,(x_1 - 1.436)*(exp(x_3) - 1.073)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2011,log(cos(5.5778*x_1 + 7.1618952)),"{'x_1': 
{'max': 10, 'min': -10}}",500 +2012,(x_1**9.491067*(x_1 + 1.954) + x_2)/(x_1 + 1.954),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2013,cos(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2014,x_1**5/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2015,-x_1 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2016,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2017,tan(3.415*x_1 + exp(x_2))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2018,1.336*sin((x_1 + 1.224)*sin(x_2**5.49)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2019,tan(23.951236*tan(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2020,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2021,x_1 + tan(exp((x_1 + 0.874)**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +2022,sin(x_1) + tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2023,x_1*cos(x_1 + x_2**2 - 1.455),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2024,asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2025,log(x_1**5 + 1.65698552046085*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2026,4.05*(cos(cos(x_1)) + 1.996)*log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2027,x_1*(x_2 + x_3)*asin(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2028,x_1*exp(-x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2029,x_1 - x_2 - x_3 - exp(0.123*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2030,3*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2031,x_1*sqrt(x_2) + x_3 - 1.383,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2032,5.38767786827105*(x_1 + 0.68)**1.992*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2033,3.209*x_1*(x_1 + 
x_2**2*x_3)/x_2**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2034,0.0169*x_1*x_2 - x_2**1.5405 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2035,0.686792718930031*x_1**3.041/(0.775795190069822*x_2 - 1)**1.48,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2036,asin(exp(exp(x_1)/2)),"{'x_1': {'max': 10, 'min': -10}}",500 +2037,x_1 + exp(86893138609763.5*(x_2 + 0.206058108386565*log(x_2))**20.318916),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2038,3.845521*(0.509943906170321*x_1 + 1)**2/x_2**15.608,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2039,2.517*x_1 + x_2 + 2.474211,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2040,sqrt(asin(3*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2041,x_1*asin(x_1*sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2042,-2.21042982245535*sqrt(x_2) + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2043,1.740992427*(0.831255195344971*x_2 - 1)**3 + log(2.614*x_1 + 3.707*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2044,sin(3.761*x_1)**5.645/x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +2045,tan(cos(x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +2046,x_1*exp(-tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2047,2.611456*x_1**0.416 + 2*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2048,2.926*sin(2.163841*(0.679809653297077*x_1 + 1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2049,(x_1 + 0.153)**2.571,"{'x_1': {'max': 10, 'min': -10}}",500 +2050,(0.311915159076731*x_1 - 0.0542732376793512)/(x_2*asin(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2051,x_2 + asin(x_1 - log(x_1 - 1.082)),"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +2052,1.667*x_2 + x_3 + log(x_1)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2053,3.717*x_1 + (x_1 + 1.163)*tan(x_1 - x_2) - 0.66906,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2054,x_1**2*x_2**0.997625,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2055,x_2 + asin(x_1**3) + 1.681,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2056,x_1 + x_2 - sin(x_1) + 0.33,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2057,x_2 - x_3 + exp(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2058,cos(x_1) - cos(3.001*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2059,-x_2 + sqrt(asin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2060,x_1 - asin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2061,exp(x_1) + tan(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2062,1.06502683923131*(x_1 + sin(exp(x_1)))*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2063,x_1**0.134415 + x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2064,4.064*x_1 + 5.746496,"{'x_1': {'max': 10, 'min': -10}}",500 +2065,0.837*x_1 + 4.557*x_2 - log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2066,13.5701419462529*x_1**1.758,"{'x_1': {'max': 10, 'min': -10}}",500 +2067,x_1*exp(-tan(x_2 + 0.259)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2068,(4.765*x_1*(x_2 - 1.997) + 0.219635405227323)/(x_2 - 1.997),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2069,x_1**3 - x_1 - x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2070,4.5*sin(x_1 + sin(sin(2.454*x_1)) 
- 1.752),"{'x_1': {'max': 10, 'min': -10}}",500 +2071,asin(3.546*x_1 + 9.357481*x_2**2) + 0.723,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2072,2.67857014095207*sqrt(0.629151893769501*x_1 + 0.139377911778799*x_2 - 0.139377911778799*log(x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2073,612.096*x_1**8 + exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2074,sin(cos(1.00548495761995*x_2*sqrt(0.989119683481701*x_1 + 1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2075,sin(x_1**2 - log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2076,3.896*x_1 + x_2 + x_3 + asin(3.94*x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2077,sin(2.852*tan((x_1 - 1.378)*(x_2 - 0.31))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2078,sin(x_1)**4/x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +2079,sqrt(x_1*x_2)*(0.936482781475452*x_1 + 0.924308505316272),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2080,exp(tan(x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +2081,x_1**3.149,"{'x_1': {'max': 10, 'min': -10}}",500 +2082,x_1 + 62.9924397419656*x_2*(0.629326620516048*x_1 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2083,-87.943022623*x_1**3*(x_2 - 0.224870699347875)**3 - x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2084,sqrt(x_2 + 0.726) - tan(x_1 - 0.894) + asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2085,4.02859677029854*x_1*(0.580720092915215*x_2 + 1)**0.455,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2086,exp(x_1/(x_2 - x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2087,x_1 + x_2 + 1.004*asin((x_1 - 1.433)/x_2),"{'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2088,-x_1 + exp(x_1**5 - x_2**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2089,x_1*tan(0.322*x_1) + x_2 + 1.766,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2090,cos(exp(x_1*x_2))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2091,cos(x_1 + sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2092,x_1**2*x_2**(3/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2093,0.514668039114771*x_1**2*log(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2094,cos(exp(x_1) + log(log(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2095,sqrt(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2096,x_2*(1.468*x_1 + x_2 + 1.641224)/(x_1 + 1.413),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2097,x_1/(2*x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2098,4.139*x_1*tan(x_1*exp(23.145721*(x_1 + 0.371)**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +2099,x_1**2 - exp(x_1) + 1.423,"{'x_1': {'max': 10, 'min': -10}}",500 +2100,3.457*cos(x_1/tan(cos(x_1 + x_2 - 0.859))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2101,-2*x_2 + tan(x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2102,0.271739130434783*x_1**2/x_2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2103,x_1 - x_2 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2104,(x_1 + 0.016)*(x_1 + x_2)*(x_3 - 0.078),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2105,((x_1 + 1.95)*sin(x_2) + log(x_1))/(x_1 + 1.95),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2106,x_1/log(x_1 - 1.381) + x_2,"{'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2107,x_1 + 1.589*sqrt(0.725567255371554*x_1 + (0.629326620516048*x_1 - 1)**2 + 0.862699466636778),"{'x_1': {'max': 10, 'min': -10}}",500 +2108,(x_1 - x_2)/log(4.785*x_3 - 3.30165),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2109,x_1*(2 - x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2110,x_1*x_3*(x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2111,-x_2/asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2112,tan(x_1) + asin(x_1 - 0.623*x_2 + 2.939147),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2113,asin(x_2*(tan(cos(x_1)) + 0.172)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2114,x_1**14.88842/(x_2 + 1.738),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2115,x_1*x_2 + 0.777*exp(3.076*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2116,x_1**2*sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2117,(x_1 + x_2 + 0.983)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2118,(x_1 + 0.861)*cos(x_1 - 0.893)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2119,4.095*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2120,log(x_1 + x_3 + 1.987 - (x_1 - 1.002)/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2121,x_1*(x_1 + x_2)*(x_3 - 1.211),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2122,(2.613*x_1 - 2.56074)*sin(4.812*x_1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2123,x_1 + (x_1 - 0.826)*cos(1.962*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2124,asin(log(0.799*x_1 - 0.365942)) + 0.233,"{'x_1': {'max': 10, 'min': 
-10}}",500 +2125,x_1 + (x_1/x_2)**1.665,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2126,log(x_1)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2127,x_2*(sin(x_3) + 1.27)*tan(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2128,(asin(x_1) - 0.118)/(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2129,-sin(x_1 - 1.648) - 0.488,"{'x_1': {'max': 10, 'min': -10}}",500 +2130,x_1 + 2.022*x_2 + 1.406018,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2131,4.157*x_2*cos(sin(x_1))**0.698,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2132,0.119025*x_1**4.178*tan(x_2 + x_3 - 0.584),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2133,5.13313049835571*(0.720980533525595*x_2 + 1)**5 + cos(x_1 - 0.426),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2134,x_2 + tan(x_2) + asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2135,(x_3 + (x_1 + x_2)**11.436*(x_4 + 1.676))/(x_4 + 1.676),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2136,-x_3 + (sqrt(x_1) - x_2)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2137,log(asin(x_1) - 0.243)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2138,x_1**3 + x_1/sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2139,1.88838555385281*sqrt(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2140,x_1**(5/2)*tan(x_1)**5 + 0.416*x_1 + 0.555776,"{'x_1': {'max': 10, 'min': -10}}",500 +2141,x_1/log(4.856*x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2142,log(x_2)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2143,log(tan(x_1 + x_2 + 0.2)) 
+ 1.38077918043178,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2144,(1.0*x_1 + 1.383)/(1.0*x_1**2 + 1.557*x_1 + 2.196324),"{'x_1': {'max': 10, 'min': -10}}",500 +2145,4.473*x_1 + x_2 + exp(exp(x_1**3)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2146,sqrt(x_1) + x_1*x_2 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2147,tan(x_1**0.672 + x_2 + 1.041),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2148,-3.94409612723996*x_1**3.944,"{'x_1': {'max': 10, 'min': -10}}",500 +2149,(0.365*x_1 + 0.56283)*exp(-3.95*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2150,0.236630383341221/(x_1 - 1.619),"{'x_1': {'max': 10, 'min': -10}}",500 +2151,exp(0.527*tan(0.359514776408329*exp(x_1))) + 0.239,"{'x_1': {'max': 10, 'min': -10}}",500 +2152,sin((x_1 - 1.819)*(2*x_1 + x_2)) + 0.648,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2153,tan(x_1**10 + x_2) - 0.173,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2154,x_2*(x_2 + cos(cos(x_1)))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2155,x_1 - x_2 + x_3 + log(x_1 - 1.216) + 1.432,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2156,sqrt(x_1**3*x_3*log(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2157,sin(2.397*sin(sin(x_1))) - cos(x_1 - 0.371),"{'x_1': {'max': 10, 'min': -10}}",500 +2158,sqrt(x_1**(5/2)/(x_2 + 0.344)**1.692368),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2159,1.442*x_3*(x_1 - x_2**8.46336),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2160,x_1*x_2*asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2161,2.561*tan(5.998805513*x_1**3) + 3.401008,"{'x_1': {'max': 10, 'min': -10}}",500 
+2162,x_1**2*log(x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2163,(x_2 + sin(x_1))/(x_2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2164,x_1**3*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2165,1.36366639982182*x_2*(0.93984962406015*sin(cos(x_1)) - 1.0)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2166,3*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2167,tan(9.394225*x_1 - 17.75508525 + 1/x_1) + 1.749,"{'x_1': {'max': 10, 'min': -10}}",500 +2168,(x_1 - 1.609)*tan(2.439*x_1 - 3.934107)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2169,asin(x_1) + asin(2.78771701130233*(0.814332247557003*x_1 + 1)**1.785),"{'x_1': {'max': 10, 'min': -10}}",500 +2170,log(sin(x_1**0.5265)),"{'x_1': {'max': 10, 'min': -10}}",500 +2171,1.35720300618588*sqrt(x_2) + x_2*asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2172,sqrt(x_2*(x_1 + 0.241)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2173,1.12576224028458*(0.766283524904215*x_2 + cos(x_1) - 0.924)**0.445,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2174,x_1/x_2 + x_2**3 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2175,x_1*(tan(x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2176,x_1 - x_2 - cos(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2177,-1.62189400654294*x_1*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2178,sqrt(0.0758903839294523 - x_1**2)*(13.95372*x_1 - 6.97686),"{'x_1': {'max': 10, 'min': -10}}",500 +2179,0.030664297*x_1**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2180,tan(x_1 - x_2 + 1.43)**4/x_3**4,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2181,exp(x_1**12)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2182,x_1**2/x_2**2.728,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2183,1.175*x_1 - x_2 + sin(exp(x_1)) - 0.6509,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2184,x_1*(x_1 + x_2)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2185,x_1 + exp(x_2**6),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2186,x_1**3 + 4.918*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2187,x_1 - x_2*log(0.677*x_1 - 0.396045) + 1.195,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2188,sin(tan(x_1**6)),"{'x_1': {'max': 10, 'min': -10}}",500 +2189,x_1 + 1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2190,sin(1.3490737563232*sqrt(0.549450549450549*x_2 + 0.549450549450549*tan(x_1) - 1.0)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2191,x_1**2*(log(x_1) + 0.627541423461951)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2192,x_1*cos(x_1*x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2193,x_1*log(x_1) + tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2194,-x_1 + sin(sin(x_1)) - 1.911,"{'x_1': {'max': 10, 'min': -10}}",500 +2195,19.9314662663141*x_1**5.561,"{'x_1': {'max': 10, 'min': -10}}",500 +2196,2.068*x_1*exp(-2*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2197,x_1*asin(tan(0.948*x_2)) - 2.901*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2198,cos((x_1 + x_2*tan(x_1))**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2199,2300.23998711338*x_1**2.542,"{'x_1': {'max': 10, 'min': -10}}",500 +2200,1.9047619047619 + 1.83809523809524/log(x_1) + 2.86007619047619/(x_1*log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2201,sin((x_1 - x_2)**4),"{'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2202,sin(x_1*x_2 + 0.463*x_3)**8.041416,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2203,x_1*x_2*log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2204,x_2*(x_1 - x_2*sqrt(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2205,log(x_1 + asin(x_2**3.698)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2206,0.69*x_1 + sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2207,x_1 + x_3 + 1.314 + log(x_2)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2208,x_1*x_2 + x_1 - x_2 + 1.901,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2209,x_1*x_2*(sqrt(x_2) - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2210,4.64*x_1 + asin(3.175*x_1) + 2.08336,"{'x_1': {'max': 10, 'min': -10}}",500 +2211,sin(x_1 - 1),"{'x_1': {'max': 10, 'min': -10}}",500 +2212,exp(x_1) + sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2213,x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2214,x_1 + x_2*log(x_1 - 1.823) - 1.434,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2215,x_1 + 3.464*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2216,tan(x_1/sin(1)) + 1.434,"{'x_1': {'max': 10, 'min': -10}}",500 +2217,2935.99391045429*x_1*x_2**6,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2218,Abs((2.953*x_1 - 1.901732)*(x_2 - 0.011)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2219,x_1*asin(x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2220,(x_1 - 1.196)/(x_1 + sqrt(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2221,3.874*sin(1/(x_1 - 0.055)),"{'x_1': {'max': 10, 'min': -10}}",500 +2222,x_1**3*(x_2 - x_3),"{'x_3': {'max': 10, 
'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2223,x_2 + cos(x_1*(x_1 - 0.576*x_2)) + 0.511,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2224,(x_1 + cos(4.76*x_2 - 4.06028) - 1.227)/sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2225,log(x_1 - exp(x_1/(x_2 - 1.247)) + 1.323),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2226,101.894546593461*x_1*(0.715307582260372*x_3 - 1)**3*tan(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2227,tan(x_1/sin(x_1 - 0.982)) + 0.843,"{'x_1': {'max': 10, 'min': -10}}",500 +2228,tan(asin(x_1 - 0.981)**1.655),"{'x_1': {'max': 10, 'min': -10}}",500 +2229,(-x_1 + x_2*(x_1 + log(x_1) + 0.696641069814201) - 0.368)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2230,x_2 + log(cos(x_1 + x_2))**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2231,6.90674002841245*sqrt(x_2*sqrt(0.580383052814858*x_1 - 1))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2232,x_1 + log(x_2**1.688),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2233,log(x_2*sin(x_1)) + 1.52254601921509,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2234,x_1 + cos(log(x_1) + 0.447885823992117),"{'x_1': {'max': 10, 'min': -10}}",500 +2235,sin(x_1) + sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2236,cos(x_1 - tan(1.881*x_2) + 0.33) - 0.681,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2237,x_3*(6.19451608274754*x_1 - x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2238,(x_1 + 3.816*x_2 + 4.186152)*log(x_1 - 1.102),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2239,x_1**2*exp(x_2)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +2240,13.30863361*x_1**0.419*cos(x_2)**1.766,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2241,(x_1 - 0.378)**3*(x_1 + 0.877)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2242,log(sqrt(x_2)*(2.11210795178656*x_1 - 2.17969540624373)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2243,sqrt(1 - exp(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2244,3.790809*x_1*(x_2 - 0.231),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2245,0.216225*x_1*x_2/sqrt(1 - x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2246,x_1 + sin(x_2*sin(x_2)) + 0.534,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2247,cos(2*x_1 - 2.888*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2248,exp(x_1 + 3.844*x_2) - 1.744,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2249,exp(x_1*x_3 + x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2250,log(2.847*x_1 + log(x_1 + x_2 + 1.67)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2251,0.988142292490119*exp(0.677513074472094*x_1)/(x_2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2252,(0.530785562632696*x_1 - 1)**3*(0.0107102665798428*x_2 - 0.0013280730559005),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2253,exp(x_1**6.807 + tan(x_1)) + 0.822,"{'x_1': {'max': 10, 'min': -10}}",500 +2254,x_1**3*sin(x_2) + x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2255,0.469210293178898*(x_1 - 0.395)**4.035,"{'x_1': {'max': 10, 'min': -10}}",500 +2256,x_1**5 + sqrt(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2257,x_1*(x_2 + x_2**7.282954),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2258,x_1**0.2455/x_2 + x_2 + x_3,"{'x_3': {'max': 10, 'min': 
-10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2259,log(x_1)/(2*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2260,(x_1 + 1.239)/log(x_2 + 0.572),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2261,-x_1**3/x_2**2 + 0.312*x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2262,x_1*x_2 + 4.596*sin(sin(1.173*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2263,(x_1 + x_2)*cos(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2264,x_1*exp(-(x_2 + exp(x_1))**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2265,1.614*exp((x_1 - 0.522)/(x_2 - 0.069)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2266,sin(x_1*tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2267,x_1**2*(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2268,log(2.906*x_1 + asin(3.876961*(0.507872016251904*x_2 - 1)**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2269,sqrt(x_1**2*asin(4.692*x_2) - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2270,x_1**2/(0.767376*x_2**2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2271,x_1*(2.18*x_1 + 2.18*x_2 + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2272,x_1 + x_2 + x_3 + 0.071,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2273,x_1 + asin(tan(2*x_2 - 1.322)) + 1.029,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2274,x_1 - x_2 + 1.0537872618046*(x_1 + 0.869)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2275,-log(x_1) + tan(x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2276,2.95801092988502*(0.309214594928881*x_1 + asin(x_2) - 0.597402597402597)**0.924,"{'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2277,x_1 - x_2*x_3/cos(x_4),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2278,0.938943473689133*exp(x_1 + cos(exp(1.333*x_1))) + 0.677,"{'x_1': {'max': 10, 'min': -10}}",500 +2279,2.698*x_1*(x_2 + tan(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2280,x_1 + 1.723*x_2 + (x_1 - 0.873)*exp(4.099*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2281,x_2*(x_1 + tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2282,1.04898932757495*x_2/x_1**0.969 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2283,(x_2 + cos(x_1))**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2284,1.768*cos(asin(0.246*x_1)/tan(exp(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2285,0.989*cos(x_1 + 1.388),"{'x_1': {'max': 10, 'min': -10}}",500 +2286,(x_1**2 - tan(x_1))**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2287,x_1 + 0.38,"{'x_1': {'max': 10, 'min': -10}}",500 +2288,sqrt(1/x_2)*Abs(x_1 + 0.146),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2289,47.516597848*x_2*x_3*asin(2*x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2290,x_1*log(x_1 + 2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2291,2.137444*x_1**2*(0.683994528043776*tan(x_1**3) + 1.0)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2292,1.729225*(x_2*(0.760456273764259*x_2 + 1) - 0.760456273764259*sin(x_1))**2/x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2293,sin(2.380849*x_1**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2294,238247547.013968*x_1**15 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2295,x_1/(2*x_1 + tan(x_1)),"{'x_1': {'max': 10, 
'min': -10}}",500 +2296,x_3*(x_1 + x_2*(1.367631*x_1 + 1.083163752)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2297,sqrt(x_1*x_2*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2298,x_1*(sin(3.91*x_1 - 3.84353) + 1),"{'x_1': {'max': 10, 'min': -10}}",500 +2299,log(x_1*(x_2 + 1.688)/x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2300,0.224*x_1 + 7867.46591583126*(0.629722921914358*x_1 - 1)**3*cos(x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2301,x_1 - 0.187*x_2 + cos(2.712*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2302,x_1**3/log(x_1)**3 + x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2303,exp((x_2*log(x_1))**0.719),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2304,x_1*x_2*x_3 + 2.10823455805547*x_4**2,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2305,cos(x_1**7.256*(1.556*x_1 + x_2 + 0.924264)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2306,x_1*x_2*x_3*exp(-x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2307,2*x_1 + x_3*(x_2 + 1.455) + 3.628,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2308,1.837*exp(2*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2309,x_1 + sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2310,asin(x_1*tan(x_1)) - 1.755,"{'x_1': {'max': 10, 'min': -10}}",500 +2311,86.9589615628989*(0.203873598369011*x_1 + 0.0591233435270132 + E)**2.808,"{'x_1': {'max': 10, 'min': -10}}",500 +2312,(-x_1 + (x_1 - x_2)**2)**0.239,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2313,log(x_1 + log(x_2 + 
1.113552872566*sqrt(0.806451612903226*x_1 - 1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2314,1.124*cos(x_2 + sin(1.325*x_1) + 1.811),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2315,-x_1**2 + x_1**4.832,"{'x_1': {'max': 10, 'min': -10}}",500 +2316,tan(x_1 + cos(1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2317,x_1**3/(8*(x_1/2 + x_2 + 0.947)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2318,(x_1 + 1.916)*asin(x_2)/(tan(tan(x_2) + 0.027) + 0.366),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2319,x_1**2/x_2**1.016,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2320,-x_3*(2.21*x_2 - (x_1 + 0.082)**8.762),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2321,x_1 + x_1**9.844*(x_2 + x_3)**4,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2322,(x_1 + sqrt(x_2))**(-0.4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2323,x_1*x_2*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2324,x_1/(x_2 + sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2325,3.111*x_1/tan(61.121124*x_1**2)**5,"{'x_1': {'max': 10, 'min': -10}}",500 +2326,x_1 - sqrt(x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2327,0.470951576608223*exp(x_1)*sin(x_1 + 0.914),"{'x_1': {'max': 10, 'min': -10}}",500 +2328,-1028589.09187352*x_2**4 + exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2329,(x_1 + tan(x_2))**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2330,-tan(-x_1 + sqrt(x_2) + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2331,sqrt(x_1 + x_3 - tan(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +2332,x_1 - x_2*cos(cos(cos(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2333,2*x_1 + exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2334,x_1*(x_3 - 0.495)/(x_2 + (x_1 - 0.155)**2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2335,x_1 + x_3 + exp(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2336,log(1.984*x_1 + 3.108928)/(x_2 - 0.44),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2337,x_1/sqrt(x_1/log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2338,(x_1 + 0.43)*(x_2 + 2*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2339,x_1*cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2340,2.127*tan((x_1 + x_2 + 0.101)**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2341,(x_1**1.34*(x_2 + asin(x_3) + 0.449) + 1)**2/x_1**2.68,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2342,71.2783959617072*(0.646830530401035*x_1 + 1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2343,1.601 - tan(3.519*sin(x_2 - tan(x_1)) + 0.56304),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2344,sin(cos(0.23084025854109*sin(x_1 + x_2)/x_2) + 0.869) + 1.733,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2345,0.0626251564063868*x_1/(0.500250125062531*x_2 - 1)**4,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2346,cos(x_1*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2347,sqrt(x_4*(x_1 - x_2 + x_3 + 1.979)),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2348,(0.327*x_1**2 + x_2)/(x_1*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 
'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2349,x_1*log(x_1) + x_2**2.008,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2350,asin(0.388*x_1 + 0.276256),"{'x_1': {'max': 10, 'min': -10}}",500 +2351,asin((x_1 + 0.093)*(3.886*x_2 - x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2352,2*x_1 + x_2 + log(x_2 + 1.412),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2353,2.22193609269034*sqrt(x_1 - 0.888),"{'x_1': {'max': 10, 'min': -10}}",500 +2354,log(x_3*(x_1 + 1.922*x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2355,x_1 - tan(2.519*x_2 - x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2356,x_2 + 1.18867804079548*tan(x_1)**1.909/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2357,asin(x_1 + 0.109*x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2358,x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2359,x_1 + sin(tan(x_1)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2360,2*x_1 + x_1**2.756 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2361,1.94115281711188*tan(exp(x_1))**3.358,"{'x_1': {'max': 10, 'min': -10}}",500 +2362,x_1*exp(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2363,3.985*x_1*(1.921*x_1 + (x_1 + x_2)**2 + 3.434748),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2364,(x_1 + (x_3 + 1.397)*exp(3.241*x_2) + 1.947)*exp(-3.241*x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2365,(x_1 + x_3*exp(4.757*x_2) - 0.598)*exp(-4.757*x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2366,x_1*(0.546021840873635*x_2 - 
0.4334373374935)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2367,4.742*x_1 + x_2/tan(x_1 - 0.74) - 6.458604,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2368,0.973474753065836/(x_1 + 0.260756192959583*tan(x_1 + x_2**2))**0.02,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2369,x_1 + x_2 + x_3 + asin(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2370,Abs(x_1 + 0.989)/x_2**0.25425,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2371,2*x_1 + 2*x_2 + 3.262,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2372,x_1*x_2**4*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2373,sqrt(x_2)*(1.26964561984831*exp(0.15*x_1) - 0.387241914053735) + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2374,x_3 + (x_1*(1 - x_2))**6.138,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2375,(2.48*x_1 - sin(x_1 + 0.088))*(-x_2 - 2.595*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2376,16.6338558118828*x_2*x_3*log(2*x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2377,3.477*x_1 - x_2 - x_3**(1/4),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2378,x_1*(3.225*x_1 + tan(0.19*x_1) + 1.776975),"{'x_1': {'max': 10, 'min': -10}}",500 +2379,2.925*x_1 + x_2 + 1.407,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2380,-x_2 + exp(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2381,sin(0.321*x_1)**3/x_1**(1/4),"{'x_1': {'max': 10, 'min': -10}}",500 +2382,sqrt(log(x_3)*tan(x_1 + x_2)),"{'x_3': {'max': 10, 
'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2383,sin(x_1)**(-3),"{'x_1': {'max': 10, 'min': -10}}",500 +2384,0.679*cos(x_2 + tan(x_1) - 1.436),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2385,4.568*x_1*(x_2 + 1505.11655550236*(0.564971751412429*x_1 - 1)**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2386,-x_2/x_3 + x_3 + log(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2387,1652563.05561585*(0.932008131917777*x_2 + exp(x_1))**9.624,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2388,x_1*x_2**0.626*tan(4.086*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2389,(1.959*x_1 + 0.548634668973808*sqrt(x_4)*(x_2 + x_3))/(x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2390,sqrt(x_1)*x_2 + x_1 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2391,4.009331165058*x_2*(x_1 + 1.158) + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2392,x_1/(x_1 + tan(x_1 + x_2) + 0.965),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2393,x_1*(x_2 + 1.035),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2394,253.714108706521*x_2**5 + 1.19337998177409*(0.890471950133571*sin(x_1) + 1.0)**1.524,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2395,x_1**5*x_2**11,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2396,exp(cos(2*x_1 - 1.866))/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2397,-tan(x_1) - 0.067,"{'x_1': {'max': 10, 'min': -10}}",500 +2398,2.586*x_1*x_3/x_2 + cos(x_1 + 1.332) - 0.703,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2399,x_1 - 
cos(x_3)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2400,(0.740818220681718*cos(x_2) + 1.19049488063552)*exp(sin(log(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2401,1.56316711758507*(0.949667616334283*x_1 - 1)**8.65,"{'x_1': {'max': 10, 'min': -10}}",500 +2402,1.016064*(0.992063492063492*x_1 + x_2)**2/x_1**4.834 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2403,1.502*cos(4.228*x_1 - 7.754152),"{'x_1': {'max': 10, 'min': -10}}",500 +2404,x_1**9.97,"{'x_1': {'max': 10, 'min': -10}}",500 +2405,2.52*log(3.863*x_2 + 0.764874)*sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2406,x_1*x_2*asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2407,asin(5.262436*x_1**2 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2408,x_1**(3/2)*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2409,asin(4.198*x_1 + sin(3.554*x_1)) + 0.145,"{'x_1': {'max': 10, 'min': -10}}",500 +2410,x_1 - sin(cos(0.306560392397302*x_2/x_1)) + 0.839,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2411,62.6993223674009*(0.218150087260035*x_1 - x_2 + 0.786)**2.718,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2412,sqrt(x_1*x_2)/(-x_2 + x_3 + 1.453),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2413,x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2414,log(x_1 - 3.497*x_2 - asin(x_2) + 0.999),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2415,(0.638224988908236*x_1 + 0.336982794143548)/sqrt((0.522466039707419*x_1 + 1)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +2416,2.423*x_2 + 1.541028 + 1/sqrt(x_1 + 0.103),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2417,x_1 - sin(x_1**3.157) + 1.409,"{'x_1': 
{'max': 10, 'min': -10}}",500 +2418,x_1*exp(cos(log(x_1) + 1.19027947719393)),"{'x_1': {'max': 10, 'min': -10}}",500 +2419,(x_1 - 1.202)*tan(x_1)/(x_2 - 1.734),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2420,x_3 + cos(tan(x_1/x_2)**4.332),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2421,0.483325277912035*exp(x_1*(x_1 + 1))/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2422,cos(x_1*tan(log(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2423,3.809*x_1*(x_2 + x_3*tan(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2424,3.467044*x_1**1.165*x_2*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2425,x_1 + x_2**4 + x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2426,x_1**3 + (x_1 + 0.672)*(x_2 + 0.045)**6,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2427,1.0*x_1/tan(0.996*x_2) + 4.86*asin(x_1) + 9.44298,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2428,x_1*(-x_1 + sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2429,x_1 + log(x_2) + sqrt(sin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2430,x_1*(x_1*log(x_2) + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2431,cos(x_1/(x_2**3*x_3**3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2432,x_1*sin(log(2*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2433,(asin(4.863*cos(x_1))**2)**(-0.03),"{'x_1': {'max': 10, 'min': -10}}",500 +2434,x_1 + (x_1 + x_2)**10 - 0.567,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2435,(x_2 + 1.361)*tan(tan(x_1 + 1.946))/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2436,1/(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+2437,x_1*(cos(-x_2 + x_3 + 1.247) - 0.184),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2438,12.475024*x_1**2 - 2*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2439,x_3*(1.699*x_1 + (x_2 - 0.559)**0.626),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2440,1496.55517366504*(0.55005500550055*x_1 + 0.55005500550055*asin(x_1) - 1)**12.231,"{'x_1': {'max': 10, 'min': -10}}",500 +2441,exp(x_1*(2202.22326281666*x_1**4 + 1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2442,exp(x_1*(x_2 + x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2443,x_1**3 + x_2 - 0.544,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2444,6.671907 - 3.547*x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2445,x_2 + x_3 - 1.256*x_4 + exp(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2446,x_1 - 1.83902147893928*sqrt(x_2) + x_3*(x_1 + 0.193),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2447,15.280281*x_1**2 - log(log(1.03*x_1 - 0.10609)),"{'x_1': {'max': 10, 'min': -10}}",500 +2448,x_1*exp(x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2449,x_2*(x_1*x_3 + 1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2450,(x_2 + 0.436485553705193*exp(x_2))*(cos(1.613*x_1) + 1.631),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2451,x_1*x_3*(x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2452,asin(x_1 + sin(x_1) + asin(4.96*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2453,1.008016*x_1*tan(x_2) + (x_3 - 0.501)**0.239,"{'x_3': {'max': 10, 'min': -10}, 'x_1': 
{'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2454,x_1**2 - 2.195*x_2 + log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2455,x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2456,2.922*x_1 + (1.798*x_1 + x_2)*asin(x_1) - 0.829848,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2457,x_1*sqrt(tan(x_1 + 0.033)),"{'x_1': {'max': 10, 'min': -10}}",500 +2458,x_1**2 + log(x_1/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2459,asin(cos(x_1 + x_2 - 0.578)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2460,(4.431*x_1 - 6.650931)*sin(sqrt(x_2)*Abs(x_1 - 0.214)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2461,(x_1 + x_2 + 0.159)**3.9008,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2462,-2*x_2 + cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2463,x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2464,x_1*(x_2 - 0.864)*(3.084*x_1 + x_2 + 0.106),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2465,exp(sin(3.45*x_1 + x_2))/(x_1 + 0.191),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2466,x_1**17.605,"{'x_1': {'max': 10, 'min': -10}}",500 +2467,x_2**2*(2.03508561624775*x_1 + 2.03508561624775*x_2 - 0.545402945154397),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2468,6.901129*x_1**2*exp(-2*x_1*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2469,sin(3.614*x_1 + exp(x_2)) + 1.281,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2470,(x_1 + exp(25.675778*x_1))**0.826,"{'x_1': {'max': 10, 'min': -10}}",500 +2471,x_1/(x_2 - 0.188),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2472,(4.166*x_1 - 1.95802)/sin((x_1 + 1.225)/x_2),"{'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +2473,1.7329166165745*sqrt(x_1) + x_1 - cos(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2474,(x_1 + asin(x_1**3))/x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +2475,cos(x_1 - x_2)**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2476,(x_1 + x_2)**3*log(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2477,x_1 + x_2 + 0.232,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2478,x_2 + cos(x_1 - 0.044),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2479,x_1 + cos(x_2 - log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2480,x_1*x_3*(x_1 + log(x_2) - 0.448),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2481,x_1*x_3 + x_2*x_3/log(x_3 - 0.784),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2482,0.562880229470658*x_1**5.699*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2483,1.05890039422203*(x_2*(x_1 - 0.44149623072593))**0.07,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2484,(-x_1 + exp(Abs(x_1)))**1.828,"{'x_1': {'max': 10, 'min': -10}}",500 +2485,0.126025*x_2*x_3 + 7.414875*(0.512820512820513*x_1 + 1)**3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2486,exp(x_1**3.609),"{'x_1': {'max': 10, 'min': -10}}",500 +2487,x_2*(x_1**2 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2488,(x_1 + x_2 - asin(x_3))/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2489,-1.765*x_3 + sin(x_1/(x_2 - 1.588)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2490,x_1/(x_2*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +2491,x_1 - exp(x_1**3/x_2**15),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2492,x_2*(x_1 + tan(12.1104*x_1**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2493,x_1**5*tan(sin(1.281*x_1))**4.761,"{'x_1': {'max': 10, 'min': -10}}",500 +2494,(x_1 + sin(sin(x_2) - 1.575))**1.847,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2495,tan(x_1**3)**6.768,"{'x_1': {'max': 10, 'min': -10}}",500 +2496,(x_1 - x_2)*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2497,x_2*(x_2 + sqrt(asin(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2498,(x_1 - 0.271)*cos((x_2 + tan(1.373*x_1))**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2499,14.44*x_1**2 + x_1 - x_2 + 1.08,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2500,2.325*x_1**(3/2)*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2501,x_1**2*(x_2 + tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2502,0.793*exp(sqrt(x_1)) + 3.77348885784155*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2503,x_1 + x_1**1.866 + 2.37*x_2 - 3.4839,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2504,x_2 + (x_1 + 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2505,(x_2 + x_3 - 1.127)*cos(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2506,1.194*sin(x_1/log((x_1 - 0.803)/x_2)) - 1.191612,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2507,(x_1 + 0.839)/((x_2 - 0.768)*asin(0.603324125159934*sqrt(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2508,cos(x_1 + x_2 + log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+2509,(-4.663*x_1 + 3.823*x_2)/(-x_2 + log(x_1) + 0.624),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2510,cos(x_1*(x_1 - 1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2511,sin(x_1/sin(log(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2512,sin(0.246*x_1 + log(x_2 - 0.925)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2513,x_1*x_2*(x_3 + x_4**2),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2514,x_1*Abs(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2515,-x_2 + tan(exp(5*x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2516,(x_1 + 0.132)*(x_1 + x_2)/tan(x_3 + 0.996),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2517,x_1*x_2*(x_2 + x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2518,x_1*exp(3.228*x_1)/sqrt(1 - (x_1 - 0.425)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2519,1.829/(x_2**2 + 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2520,log(x_1 + log(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2521,0.120409*x_1*x_2*(2.177*x_1 + x_2 - 3.927308),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2522,0.613*tan(4.014*x_1)*tan(x_2)/(sin(x_2) + 1.328),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2523,cos(x_2*sin(x_1*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2524,cos(x_1*(2.85*x_1 + x_2)) - 1.812,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2525,x_1*log(x_1 + 0.035) + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2526,x_1**3*x_2**3/x_3**4.937,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+2527,x_2*(2.153*x_1 + 12.5020777958122*(0.571755288736421*x_1 - 1)**3),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2528,-x_2*(log(x_2) + 0.753301103379692) + sqrt(x_1 - 0.713),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2529,x_1 - x_2*x_3*exp(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2530,x_1**8.022 + tan(x_1) + 0.768,"{'x_1': {'max': 10, 'min': -10}}",500 +2531,exp(log(x_2*cos(x_1))**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2532,x_2**2*tan(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2533,(3.091*x_1 + x_2 + 1.114)*(x_3 + x_4 + 1.905),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2534,x_1 + x_2**2/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2535,(x_1 + 0.581)/x_2**3.783,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2536,(x_1 + 1)*(x_2 + 1.75),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2537,x_1 + x_2**5*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2538,exp(x_1*(1 - x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2539,(x_1 + 1.0)*exp((x_1 + 0.456)**5),"{'x_1': {'max': 10, 'min': -10}}",500 +2540,sqrt(x_1)*x_2**2*x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2541,(x_1 - 0.59)**1.115/(log(x_2) + 0.648672690458116),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2542,x_1**4*x_2**2.646,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2543,tan(x_1**2) + 1.938,"{'x_1': {'max': 10, 'min': -10}}",500 +2544,(x_1 + x_2 + sin(x_1))**1.473,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+2545,cos(x_1)/(x_1*sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2546,(-x_1 + cos(x_1))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2547,cos(x_1**24),"{'x_1': {'max': 10, 'min': -10}}",500 +2548,sin(x_1*(x_2 + 0.287)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2549,1/(x_2*sin(x_1 - 0.012)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2550,(0.266311584553928*sqrt(x_1) + x_2**2*(0.710116288 - 1.585081*x_3))/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2551,4.297*asin(sin(x_1 - 1.629)/x_1) - 0.150395,"{'x_1': {'max': 10, 'min': -10}}",500 +2552,2.93*x_2 + log(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2553,(sin(x_1) + 1)*tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2554,tan(x_1/asin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2555,4.785*tan(1/x_1) + 1.64604,"{'x_1': {'max': 10, 'min': -10}}",500 +2556,x_1*(x_2 + sin(exp(asin(x_2)))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2557,4.34656025357409*(0.67476383265857*x_1 + 1)**2.898,"{'x_1': {'max': 10, 'min': -10}}",500 +2558,x_1**0.163/(sin(2*x_1) + 0.283)**2.812,"{'x_1': {'max': 10, 'min': -10}}",500 +2559,(0.352360817477097*x_1 + x_2)**5*(683.20794791225*x_2 + 935.311680691871),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2560,2.849*exp(x_1**2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2561,cos(x_1 - sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2562,0.966289*x_1**0.918 + x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2563,log(sin((7.403841*x_2 + 14.615182134)*exp(0.331*x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+2564,2.149*x_1**11.455*tan(4.125*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2565,x_1 + sin(x_1 + 1.641)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2566,log(x_1 + 0.655) + sin(x_1) + 0.452,"{'x_1': {'max': 10, 'min': -10}}",500 +2567,(sin(exp(4.677*x_1) + 0.499) + 0.213)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2568,0.137641*x_2*tan(log(x_1) + 0.825490367547658),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2569,exp(x_1*sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2570,x_2 + 1.273*x_3 + cos(x_1 + 0.35*x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2571,-tan(x_2 - tan(x_1**3)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2572,x_1**4,"{'x_1': {'max': 10, 'min': -10}}",500 +2573,(x_1 - 1.094)*sqrt(cos(2.962*x_1 + 0.944878)),"{'x_1': {'max': 10, 'min': -10}}",500 +2574,x_1**3.817398/x_2**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2575,x_1**4/(x_2 - x_3)**4,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2576,x_1 + x_1/cos(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2577,(x_1*(x_2 - 0.554) + (x_1 + 1.603)*exp(x_1))/(x_1*(x_1 + 1.603)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2578,(x_1 - 1.906)/(x_2 - 0.068)**0.0835,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2579,x_1/(1.374*x_2 + tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2580,exp((4.074*sin(x_1 - 1.836) - 4.086222)/sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2581,45.185284*(x_1 + 0.148765248437965*x_2 + 0.11)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2582,(0.375093773443361*x_1 - 0.998499624906227*x_2 + 0.375093773443361*sin(x_2 - 1.497))/x_3,"{'x_3': {'max': 10, 'min': -10}, 
'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2583,x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +2584,-x_2 + log(x_1)*sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2585,sin(x_1**0.609*(3.494*x_2 + log(x_1))) + 1.698,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2586,(2*x_1 - x_2 - 1.284)*cos(2.062*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2587,-x_2*asin(x_2 + 1.846) + sin(x_1) - 1.062,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2588,123.169501633679*(0.572409845449342*x_1 + 1)**8.628*(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2589,2.17*cos(x_1*asin(x_2)) - 4.26405,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2590,(x_1 - 1.786)*(tan(1/(x_2 + 1.485)) - 1.382),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2591,x_1 + exp(x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2592,19.660356*x_1*x_3*log(x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2593,-tan(x_1*x_2) + tan(x_1 - 0.809) + 1.857,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2594,asin(94.794401991516*x_1*cos(1)) + 0.003,"{'x_1': {'max': 10, 'min': -10}}",500 +2595,(0.814*x_1 + 1.060642)*cos(log(x_1 + 1.191)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2596,sin(log(x_2) + asin(x_1 + x_2 - 0.387) - 1.182) + 1.905,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2597,x_1*(1.114 + 1.0/tan(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2598,asin((x_1 - 0.398)**4),"{'x_1': {'max': 10, 'min': -10}}",500 +2599,sin(x_1*(x_1 + 2)) - 0.249,"{'x_1': {'max': 10, 'min': -10}}",500 +2600,log(x_1**5*(x_2 + cos(x_1 + 0.426))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2601,x_1*tan(x_1**2)**4,"{'x_1': {'max': 10, 'min': 
-10}}",500 +2602,x_1*(asin(x_2) + 2.574),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2603,x_1 + 2*(x_1 + 0.97)**(3/2) + 1.967,"{'x_1': {'max': 10, 'min': -10}}",500 +2604,-sqrt(cos(x_3)) + tan(x_1/x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2605,log(x_1*x_3/(cos(cos(4.043*x_2) + 0.666) - 0.15)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2606,x_1**3*x_3*exp(-3*exp(x_2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2607,3*x_1 + asin(0.721*x_1) - 1.842,"{'x_1': {'max': 10, 'min': -10}}",500 +2608,(2.099601*x_1 - 2.805066936)*(0.690131124913734*x_1*x_2 + 0.690131124913734*x_1 - 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2609,1.48795161211647*x_1*sqrt(x_2 + 0.4516711833785*x_3 - 0.726738934056007),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2610,cos(x_1 + tan(x_2) + 1.102) - 0.719,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2611,log(asin(2.078*x_1)) + 1.0681530811834,"{'x_1': {'max': 10, 'min': -10}}",500 +2612,x_1*(tan(3.405*x_1 - 6.1971) - 1),"{'x_1': {'max': 10, 'min': -10}}",500 +2613,tan(x_2 + cos(cos(x_1)**3)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2614,x_1 - x_2 + 4.632*tan(0.27*x_2) + 0.852,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2615,log(asin(1.008*x_1)) - sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2616,x_2 + tan(x_1) + 0.139,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2617,x_1 + log(cos(sin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2618,x_1/(sqrt(x_1) + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2619,-x_3 + 108.441586233841*(x_1 - 0.309885342423303*tan(4.614*x_2))**4,"{'x_3': {'max': 10, 'min': 
-10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2620,3.51077974338379*x_1**3.355,"{'x_1': {'max': 10, 'min': -10}}",500 +2621,1.795*x_1 - 2.234775,"{'x_1': {'max': 10, 'min': -10}}",500 +2622,5.20177544997497*exp(x_1)*tan(0.702*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2623,x_3 + exp(1.577404088*x_2*(0.894454382826476*x_1 + 1)**2) - 1.163,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2624,x_1**2/log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2625,x_1 + sqrt(x_1*(x_2 - 0.878)) + 1.167,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2626,x_1 - 0.35 + sin(x_1)**2/x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2627,exp(x_1 - x_2**0.883) + 1.305,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2628,2.06494551986245*sqrt(x_1*sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2629,x_1*(tan(cos(x_2 - 0.483)) - 0.763),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2630,0.331873966870065*exp(-x_1 + x_2 + 2.03654857928708*exp(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2631,sqrt(x_1) + 1.817104*(0.741839762611276*x_2 + 1)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2632,3.463*cos(tan(x_1 - sin(x_1)**1.023) + 1.28),"{'x_1': {'max': 10, 'min': -10}}",500 +2633,(x_1 - x_2/(x_1 + x_2))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2634,x_2 + asin(1.747*x_1) - 1.383,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2635,sin((0.458864646595954*exp(x_1) - 1.161)/x_1**3),"{'x_1': {'max': 10, 'min': -10}}",500 +2636,x_1**2 + x_1 - x_2 - 0.683,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2637,x_1 + 3.448449*x_2**3.718*tan(exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2638,(3*x_1 + x_2)**3,"{'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +2639,x_1**3.909,"{'x_1': {'max': 10, 'min': -10}}",500 +2640,x_1 + 2.019*x_2*cos(x_2) - 0.831,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2641,-x_1/(2*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2642,1.172889*(0.923361034164358*x_1 + 1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2643,x_1 + x_2**4 - log(x_3 + 0.714),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2644,x_1 - 767.411537797216*x_1**5.422*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2645,4.007*sin(log(x_1)) - 0.308539,"{'x_1': {'max': 10, 'min': -10}}",500 +2646,13.351716*x_2*x_3**1.172*(x_1 - 1.929),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2647,x_1*cos(x_2 + 0.455) + 1.032*x_2 - 0.906096,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2648,log(x_1 - 1.229),"{'x_1': {'max': 10, 'min': -10}}",500 +2649,1.736*x_1*sqrt(x_2 + sin(x_1 + 0.899)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2650,x_1**3*(x_2 - 0.072)**3*cos(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2651,x_1 + 2*x_2*(233.486987438961*x_1 - 211.072236644821),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2652,3.766*x_1 + x_2**5.202 - 1.698466,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2653,x_1**(5/2),"{'x_1': {'max': 10, 'min': -10}}",500 +2654,(-x_1 + (x_2 + 1.013)*log(x_1**5.769))/(x_2 + 1.013),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2655,log(x_2 + 4.38*x_3 + 1.80336760621582*(0.789889415481833*x_1 - 1)**(5/2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2656,x_1**13.885,"{'x_1': {'max': 10, 'min': -10}}",500 +2657,x_1 + (2*x_2 + x_3)**3.522,"{'x_3': {'max': 
10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2658,(x_1 + log(x_2 + asin(x_2)))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2659,x_1 - 1.794,"{'x_1': {'max': 10, 'min': -10}}",500 +2660,sqrt(x_2 + x_3)*exp(x_1/2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2661,(x_1 - 0.894*x_2*(x_1 + x_2 + 1.125))/(x_1 + x_2 + 1.125),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2662,-3.235*x_2*(x_1 + x_2 - 0.702) + exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2663,x_2 + x_3 + exp(x_1) + 2.332,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2664,x_1*(x_1 + cos(3.216*x_1))**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2665,(x_1 + log(x_1 + x_2**2 + 1.717) + 0.437)**3.05,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2666,x_1 + x_1/x_2 - tan(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2667,cos(x_1)**7.634,"{'x_1': {'max': 10, 'min': -10}}",500 +2668,cos(x_1) + Abs(x_1 - 1.069),"{'x_1': {'max': 10, 'min': -10}}",500 +2669,x_1**5,"{'x_1': {'max': 10, 'min': -10}}",500 +2670,log(x_1)*tan(x_2 + sin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2671,0.805207059419915*x_1**2*exp(-2.326*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2672,x_1*(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2673,x_1*tan(x_3)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2674,sin(sin(x_1 - sin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2675,3.799*exp(x_1 + cos(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2676,1.29991580971513*tan(x_1)**0.64,"{'x_1': {'max': 10, 'min': -10}}",500 +2677,x_1*(x_1 + 2*x_2 + 1.886),"{'x_1': {'max': 10, 'min': -10}, 
'x_2': {'max': 10, 'min': -10}}",500 +2678,(x_1 + sin(x_1 + 1.824) + 0.029)/(x_1*(x_2 + 1.02)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2679,(asin(x_1) + 0.83)*asin(3.016*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2680,x_1**3*(-4.434*x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2681,exp(asin(x_1 - x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2682,-x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2683,x_2 + asin(tan(x_1/sqrt(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2684,-x_2 + 0.845754476917213*exp(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2685,2*x_1 - x_2 + (x_2 - 0.699)**3.20397,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2686,(x_1 - tan(3.749*x_2))**2 + exp(x_1) - 1.122,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2687,x_1 + 1.21408401686209*sqrt(x_2) + 4.084*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2688,(log(x_2**5.602) + 0.26030136893009)*sin(0.93*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2689,x_1 + 1990.95992516324*(0.809061488673139*x_1 + 1)**4 + 0.185,"{'x_1': {'max': 10, 'min': -10}}",500 +2690,x_1 - cos(0.861*x_1 + x_2 + 0.844641),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2691,x_1**3/(-x_1 + x_2 + 0.051)**6,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2692,(sin(x_1 - 1.754) - 0.938)/(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2693,sin(sin(x_1 + 0.249))**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2694,x_1 - x_2**2.544 - (x_3 + 0.48)**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2695,x_1*log(-x_3 + 3.806401*(0.512557662737058*x_2 + 1)**2 + 1.737),"{'x_3': {'max': 10, 'min': -10}, 
'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2696,4.525*(x_1 - 1.576)*(x_1 - x_2 - x_3 + 0.626),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2697,x_1*tan(0.605326876513317*x_2/x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2698,asin(3.416*x_1)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2699,-2.068*asin((x_1*(1.24*x_1 - x_2) + x_1 - 1.444)/(1.24*x_1 - x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2700,(98.6113296028002*x_1*x_3*(x_2 + 0.006) + x_2)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2701,2*x_1 + 2.9*exp(x_1) + 0.203,"{'x_1': {'max': 10, 'min': -10}}",500 +2702,-577.065596277318*x_2 + asin(x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2703,-x_1 + 1.24722531530186*(0.972762645914397*x_1 - 1)**8,"{'x_1': {'max': 10, 'min': -10}}",500 +2704,-x_1/x_2 + log(x_1 + 1.403)**19.64,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2705,x_1**4.462,"{'x_1': {'max': 10, 'min': -10}}",500 +2706,x_1/(sin(exp(1.893376*(0.726744186046512*x_2 + 1)**2)) + 0.96),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2707,x_1 + x_2 - 3.90009144384319*exp(x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2708,1.77774892410056e+15*(0.130879331015457*x_1 + (0.577700751010976*x_2 - 1)**3.706)**17.268,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2709,x_1 + log(x_1 - x_2/x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2710,-x_1**16.901 + exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2711,x_1 + cos(x_1 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2712,3.871*cos(x_1*sin(x_1)**3),"{'x_1': {'max': 10, 'min': -10}}",500 
+2713,x_1**2*log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2714,x_1**4 + x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2715,log(x_1**2*x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2716,(x_1 + 1.555)*exp(x_2*(x_1 + 0.742)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2717,x_1 + x_2 + x_3/x_4,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2718,-sqrt(x_2) + cos(x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2719,2.252*x_1 - 0.51*log(x_1 + x_2 - 0.082),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2720,exp(x_1)/(-x_1 + sqrt(tan(4.821*x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2721,x_1 + x_3/x_4 + log(x_2) + 1.45114533343951,"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500 +2722,-x_1 + sin(cos(x_1) + 0.2) - tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2723,374.014576649426*(x_1 - 0.956)**4 + Abs(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2724,38.9640738822194*x_2*(0.610500610500611*x_1 + 1)**6.366,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2725,sqrt(tan(-x_2 + asin(x_1) + 1.076) + 0.751),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2726,x_1*x_2**30,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2727,x_1 + sqrt((x_1 + 0.589)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +2728,x_1*(x_1 + asin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2729,cos(x_1*(cos(x_1) + 1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2730,5.895184*tan(x_2**4.239)*asin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2731,x_1 + x_2 + (16.2343075937129*x_2 - 
14.4160651432171)*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2732,(x_1 + 1.863)*(2.31*x_1 + (x_1 - 0.637)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +2733,sin(0.939*x_1 - sqrt(cos(exp(x_1))) + 0.815991),"{'x_1': {'max': 10, 'min': -10}}",500 +2734,-x_1*x_2 + x_1 + sqrt(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2735,(x_1 + 0.743)*sqrt(tan(4.322*x_1 - 5.195044)),"{'x_1': {'max': 10, 'min': -10}}",500 +2736,x_1**0.042959,"{'x_1': {'max': 10, 'min': -10}}",500 +2737,(x_1 - 1.245)*tan(1.254*x_2 + 0.71478)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2738,x_1*x_2*sin(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2739,cos(x_1**2 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2740,(x_1*(log(x_1) + 1.60080071903844))**(-0.501),"{'x_1': {'max': 10, 'min': -10}}",500 +2741,(1.582564*x_1 + 1.302450172)*sin(x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2742,-cos(x_2**2) + asin(x_1 + 0.955),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2743,1.282*tan(x_1) + 2.094788,"{'x_1': {'max': 10, 'min': -10}}",500 +2744,x_1 + 4.551*x_2 + tan(2*x_1 - 0.516),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2745,tan(x_1 - 1.131) - asin(x_2 + 0.558) - 0.965,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2746,x_1**3*(x_1 + 1)**3,"{'x_1': {'max': 10, 'min': -10}}",500 +2747,3.273*x_2*(2.053*x_2 + Abs(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2748,-tan(x_1*(x_2 - 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2749,tan(x_1 - 1.48197232730654*(x_1 + 0.45)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2750,-tan((2.48183945259875*x_2 - 2.48183945259875*cos(x_1) + 0.620459863149687)*exp(-x_2)) - 0.188,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +2751,x_1*(1 - 2*x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2752,asin(sin(sin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500 +2753,(x_1 + asin(x_1))**11.853,"{'x_1': {'max': 10, 'min': -10}}",500 +2754,x_1*cos(x_2*log(x_2))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2755,3.169*x_1 - log(0.662*x_1 - 0.174768)*cos(x_2**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2756,x_1/x_2**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2757,x_1*x_2*log(4.92*x_1 - 8.98392),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2758,1.753*x_1/(x_2 + asin(log(x_2 - 0.776)) + 0.599),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2759,1.87948912896191*exp(-x_3)*sin(x_2*asin(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2760,1.40214121970649*x_1*sqrt(0.508646998982706*x_2 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2761,x_2**5.178*(x_1 + 1.657)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2762,11.7197000509942*(0.611246943765281*sin(sin(x_1)) + 1.0)**5,"{'x_1': {'max': 10, 'min': -10}}",500 +2763,log(x_1 + sin(x_1 + 0.328)),"{'x_1': {'max': 10, 'min': -10}}",500 +2764,log(x_1)/(x_1*(x_1 + 1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2765,(x_1 + 1.311)*cos(4.397*x_1 - x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2766,x_1*x_2**6,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2767,x_1/(4.618*x_1*x_2 + tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2768,x_1**10.072 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2769,x_1/asin(0.938004999530729*exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2770,x_1**2 + x_2**2,"{'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +2771,log(x_1*(x_1 + x_2**3)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2772,x_1 + 8476.56510454141*x_2**3 + 3.946*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2773,x_2 + 3.315*asin(cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2774,x_1 + 1.659*x_2 - 1.990737,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2775,4.195*sin(cos(2*x_1) + 0.701) + 1.74512,"{'x_1': {'max': 10, 'min': -10}}",500 +2776,x_1*x_2 + 0.586*sin(1.396*x_3 - 2.12192),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2777,x_1 - 3.702*x_2 - log(x_1) - 1.754502,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2778,(x_1 + asin(x_1 - 0.568) + 1.447)/(log(x_1) + 1.37371557891303),"{'x_1': {'max': 10, 'min': -10}}",500 +2779,x_1 + exp(x_1**2 + exp(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2780,(x_1 + x_2)/(x_1*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2781,-x_1 + exp(x_1*x_2**5),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2782,-sqrt(x_2) + asin(x_1 + sqrt(x_2 + 0.984)) - 0.784,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2783,(x_1 + x_2)*cos(cos(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2784,-x_1*x_2 + 1.629*x_1 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2785,3.307*(x_1 + 0.146)*(3.27*x_1 + x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2786,sqrt(x_1**3*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2787,(x_1 - x_2)*tan(x_2 - 1.712),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2788,x_1 + 0.911*x_2*x_3**2.473,"{'x_3': 
{'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2789,x_1**2/tan(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2790,sqrt(x_1)*exp(-x_2/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2791,x_1*(-2*x_2 + log(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2792,2.13*x_1 + 1.611*x_2 + 0.240039,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2793,0.0291278429212775*(x_1 + x_2)/x_1**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2794,x_1**4*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2795,Abs(x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2796,tan((1.985*x_1 + 1.840095)*exp(x_1 - x_2**2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2797,sin(2.374*tan(2*x_1 - 0.47)) + 0.596,"{'x_1': {'max': 10, 'min': -10}}",500 +2798,2.3*exp(x_1) + sin(0.489021468042447*x_2/(0.699300699300699*x_1 - 1)**2) + 1.465,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2799,log(-x_1**2 + x_1 + 1.815),"{'x_1': {'max': 10, 'min': -10}}",500 +2800,0.849*cos(1.113*x_1*(x_1 + cos(x_2))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2801,1.15931013969516*sqrt(x_1) + 1.547*x_2 + 2.685*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2802,-(x_1 - x_2 + 0.169)**1.149 + cos(x_1 + 0.328),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2803,(x_1 - 0.531)/((x_2 - 0.652)*sin(x_2 + x_3 + 0.505)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2804,(x_3 + 0.124)*exp(x_2) + log(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2805,asin(x_1 + x_2**2 + x_2 + 0.464),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': 
-10}}",500 +2806,(sqrt(x_1) + x_1 + x_2*(4.077*x_1 + 4.986171) + 0.395)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2807,x_2 + tan(1.43701078631999*sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2808,2.667616624656*(0.782472613458529*cos(x_1 + log(x_1 + 0.992)) - 1.0)**4,"{'x_1': {'max': 10, 'min': -10}}",500 +2809,x_1**2*(0.0576*asin(x_1 - x_2) - 0.0070272),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2810,1.023*x_1 - x_2 - cos(tan(x_1 - 1.71)) + 0.465,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2811,x_1 + sqrt(x_2)*x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2812,tan(x_2*(x_1 - 0.886))/(x_1*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2813,2*x_2 - x_3 + asin(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2814,exp(x_1)*sin(x_2**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2815,(x_3 - 1.646)*log(asin(21.818241*x_2*(x_1 + 0.035))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2816,sqrt(log(x_1 - 0.273)),"{'x_1': {'max': 10, 'min': -10}}",500 +2817,log(x_1*(x_1 + x_2 + 0.159)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2818,sin(x_1**4)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2819,x_1*tan(1.630729*(x_1 - 0.433)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2820,3.502*x_1 + x_2*exp(-2.405*x_1) + 3.701614,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2821,cos(x_1 + tan(tan(sin(x_1)))) - 0.144,"{'x_1': {'max': 10, 'min': -10}}",500 +2822,tan(24.147396*(x_1 + 0.335)**2)/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2823,cos(-x_1 + x_2 + asin(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 
+2824,(4.347*x_1*x_2 + (1.512 - x_3)*exp(x_2))/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2825,sqrt(x_2*asin(log(x_1))**4),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2826,0.226*exp(x_2) + cos(x_1 + 1.114),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2827,-(x_3 - 0.022)*(x_2 - tan(x_1) + 1.378)/x_2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2828,x_1/(x_2*(exp(4.73*x_2) + 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2829,1.90598702927192*x_3*exp(0.177*x_1 + x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2830,1.365534810721*x_2**6.592*sin(x_2*exp(x_1))**0.942,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2831,x_2*(x_1*x_2 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2832,sin(x_1)**2/x_1,"{'x_1': {'max': 10, 'min': -10}}",500 +2833,x_1*x_2**0.3285,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2834,-x_2 + asin(2*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2835,log(x_1)/2 + 0.189560566384281,"{'x_1': {'max': 10, 'min': -10}}",500 +2836,x_1 - 3.396*x_2 + 0.267402570880001*exp(x_1 - x_3) - 4.486116,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2837,x_2*(x_1 - tan(4.341*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2838,x_1 - 1.128*sqrt(0.152469694683366 - (x_1 + 0.02)**2),"{'x_1': {'max': 10, 'min': -10}}",500 +2839,x_1/(x_2**2*sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2840,17.6690417525869*(0.771604938271605*x_1 - 1)**11.076,"{'x_1': {'max': 10, 'min': -10}}",500 +2841,x_1**19,"{'x_1': {'max': 10, 'min': -10}}",500 +2842,(1.4375812271908*x_1**(1/4) + 
1.34147456181621*sqrt(0.566251415628539*x_1 + 1))/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2843,sqrt(x_1 + x_2)*sin((x_1 + 0.902)**1.596),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2844,tan(2.346*x_1*(x_2**2 + log(x_1))),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2845,x_1*(x_2 - 1.559),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2846,(x_1 - 0.307)*(x_1 + cos(x_2))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2847,116.428733387*(x_2 - 1.197)*(x_3 - 1.939)*exp(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2848,asin((x_1 + x_2 - 1.798)/x_2)/(x_2 + 1.859),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2849,2.087*x_1/(x_2 - Abs(4.496*x_2 + 2.144592)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2850,sin(2.304*exp(x_1**2)),"{'x_1': {'max': 10, 'min': -10}}",500 +2851,(x_2 + x_3)*tan(x_1 + 0.334*x_2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2852,tan(x_1)**4*asin(x_1 - 0.142)**3.879,"{'x_1': {'max': 10, 'min': -10}}",500 +2853,x_1**3 + x_2 + 1.909,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2854,x_2**3.821*tan(x_2)**2 + tan(x_1) + 0.011,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2855,x_2 + exp(3.044*x_1*(x_2 - 1.213)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2856,tan(x_1*cos(x_1 + x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2857,2*x_1 + 0.27144384321212*exp(x_1) - 3.76,"{'x_1': {'max': 10, 'min': -10}}",500 +2858,log(x_1 + 4.093*x_2 + 5.529054)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2859,cos(x_1*exp(0.616*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2860,x_1,"{'x_1': {'max': 10, 'min': 
-10}}",500 +2861,-x_1 + x_1**2.335 + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2862,1.168*exp(sin(0.828*x_1)/(x_1 - x_2)) + 0.427488,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2863,3.676*cos((x_1 + 0.194)**3),"{'x_1': {'max': 10, 'min': -10}}",500 +2864,sin(x_1*(-x_1 + x_2 + 0.789)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2865,x_3*(x_1 + x_2 - 0.085) + asin(x_1),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2866,x_1*tan(0.591*x_1 - 0.789576)**2/(cos(x_2) - 1.417),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2867,4.211*x_1*(x_1 + x_2 + tan(3.569*x_1 + 4.100781)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2868,tan(x_1)/x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2869,-2.282*x_1 - 2.0*x_2 + 0.47*tan(x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2870,x_2*(6.58119668811199 - 3.98136520756926*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2871,x_1*x_2 + log(x_1*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2872,x_1 + log(x_1 - 0.546)/2 + cos(x_1) + 1.591,"{'x_1': {'max': 10, 'min': -10}}",500 +2873,x_1**1.108*exp(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2874,12.938409*x_2*(asin(sin(x_1)) - 1.173),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2875,log(x_1 + x_3*tan(x_2**5)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2876,x_1*log(2*x_1) + x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2877,3.713*x_1 + x_2**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2878,log(2*x_1 - 2.79)**2,"{'x_1': {'max': 10, 'min': -10}}",500 +2879,x_1/x_2**1.206,"{'x_1': {'max': 10, 'min': 
-10}, 'x_2': {'max': 10, 'min': -10}}",500 +2880,log(x_1),"{'x_1': {'max': 10, 'min': -10}}",500 +2881,2.17956512402935*(0.624609618988132*asin(x_1 - 0.213) - 1)**1.6555,"{'x_1': {'max': 10, 'min': -10}}",500 +2882,(x_2*(x_1 - x_2 + 1.738) + exp(x_1))/(x_1 - x_2 + 1.738),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2883,sqrt(2*x_1 + sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2884,-x_1*(x_2 - 0.281)/(x_3 - sin(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2885,(x_1 + 0.897)*(3.004*x_1 - sin(x_2) + 1.631172),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2886,0.634517766497462*x_1**2.397/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2887,x_1 - sqrt(x_2*tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2888,(x_2 - 0.115)*sin(x_1**2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2889,x_1 + log(x_1 + tan(x_1 + 0.858)),"{'x_1': {'max': 10, 'min': -10}}",500 +2890,1.0358384880813*x_1/(sqrt(x_2)*x_3),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2891,(x_1 + 0.341)**2/tan(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2892,-x_1**2 + x_1 + x_2 + x_3 + 0.471,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2893,0.4905*log(log(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2894,x_2*(x_1 + 1)*(x_1 + 1.616),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2895,asin(exp(x_1**6)) - 0.673,"{'x_1': {'max': 10, 'min': -10}}",500 +2896,(x_1**4.779 - x_2)**11.457,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2897,(x_1 + x_2*tan(cos(x_1)))**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2898,sin(x_1*(x_3 + 0.43)*(x_1 + x_2 + 1.623)),"{'x_3': {'max': 10, 'min': -10}, 
'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2899,2*x_1 - x_2 - log(x_1 + 1.978),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2900,x_1/x_2 + x_2/(x_3 - 1.601),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2901,(x_1 - 0.359)*(0.678*x_2 + log(x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2902,0.898704*x_1**2.4905*x_2**2*Abs(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2903,-sin(x_1 + exp(x_2) - log(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2904,(0.519*x_1 - 0.22317)*exp(-4.646*x_1*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2905,x_1*asin(x_1 + sqrt(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2906,x_2*(x_1 + 1.306)*cos(tan(x_1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2907,-x_1/(x_2 - 1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2908,tan(x_1/sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2909,cos(tan(3.881*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500 +2910,tan(x_1*x_2*(x_1 + 0.965)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2911,x_1 - x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2912,x_3 + log(x_2 + cos(x_1)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2913,x_1/(x_1 + 2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2914,3.064*x_1 + log(x_1) + sin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2915,4.40209096407605*x_1**(3/2)*x_2 + 4.265*x_1 + 3.996305,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500 +2916,4.301*x_1/log((x_2 + 1.545)/x_3),"{'x_3': {'max': 10, 'min': 
-10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2917,-1.0*x_3 + 1.11361445783132 + 1.50602409638554*log(x_2)/x_1,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2918,x_1 - log(1.918*x_1 + x_2) + 0.071,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2919,x_1 + tan(log(x_1) + 1.50983854721135),"{'x_1': {'max': 10, 'min': -10}}",500
+2920,1.94*sin(x_1 + x_1**6.67),"{'x_1': {'max': 10, 'min': -10}}",500
+2921,-(-x_1 + 0.612*x_2 + 2.772092)*log(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2922,x_1 + exp(2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2923,1101.78094370214*x_1**6 + x_1**2,"{'x_1': {'max': 10, 'min': -10}}",500
+2924,3.599*x_1*(x_2 - log(x_2*(x_1 + 1.02)) - 1.45814979822032),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2925,(x_1 - 0.856)*exp(-sqrt(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2926,(x_1 + 0.1)**3.303,"{'x_1': {'max': 10, 'min': -10}}",500
+2927,cos(x_1**5 + x_1),"{'x_1': {'max': 10, 'min': -10}}",500
+2928,2.974*x_1/(-x_2**0.996 + x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2929,sqrt(x_1)*x_3 + x_1 + x_2 - 1.84,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2930,17.926756*x_1*x_2 - x_3*(x_4 - 0.837),"{'x_3': {'max': 10, 'min': -10}, 'x_4': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}}",500
+2931,x_1 + sin(sin(x_1 - 1.442)),"{'x_1': {'max': 10, 'min': -10}}",500
+2932,exp(cos(x_1)**3),"{'x_1': {'max': 10, 'min': -10}}",500
+2933,log(x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500
+2934,x_1 + x_2**2 + x_2 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2935,x_2 + x_3 + exp(x_1/2),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2936,cos(x_1 + x_2)/x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2937,tan(sqrt(x_1)*x_2*(x_3 - 1.246)) - 0.585,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2938,log(x_1*sqrt(cos(x_1 + 1.032))/x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2939,0.378*x_1 - 0.376866 - tan(1.47*x_1)/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2940,(x_1*(x_2 + 1))**1.717496,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2941,1.899*x_1*x_2**1.523*tan(4.374*x_1),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2942,exp(x_1**3.70986),"{'x_1': {'max': 10, 'min': -10}}",500
+2943,x_2 + 39.1208980983203*(0.142962056583524*x_1 - 0.627746390458255*x_2 - 1)**1.885,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2944,(x_1 + 0.054)*log(log(sin(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500
+2945,2.047*exp(x_1 - tan(x_1)/asin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2946,3.276*x_1 + 2*log(x_1 - 1.376),"{'x_1': {'max': 10, 'min': -10}}",500
+2947,cos(x_1 - 0.22786543992899*exp(x_1) + 0.996),"{'x_1': {'max': 10, 'min': -10}}",500
+2948,1.475*x_1 + 2*x_2*(87.791671617841*x_1 - 171.632718012879),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2949,cos(log(x_1**2*x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2950,4.153*x_1*sin(129.132336776881*x_1**4),"{'x_1': {'max': 10, 'min': -10}}",500
+2951,(-x_1*cos(x_1) + x_1)**5.88,"{'x_1': {'max': 10, 'min': -10}}",500
+2952,x_2*sin(x_1)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2953,exp(3*sqrt(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500
+2954,2.374*x_1*(x_1 + x_2) + asin(x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2955,(x_1**5*x_2)**(-0.89),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2956,32962.0680112035*(x_1 + 0.169)**12.3175,"{'x_1': {'max': 10, 'min': -10}}",500
+2957,x_1**2/sin(x_1),"{'x_1': {'max': 10, 'min': -10}}",500
+2958,exp(x_1**2/(x_2**2*x_3**2)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2959,0.944*asin(x_1 - 1.353)/(2.78*x_2 + sin(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2960,sin(x_1*tan(x_1 - 0.701)/x_2**3.863),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2961,x_1*x_2 + x_1 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2962,1.293*x_2 + tan(x_1*exp(2*x_1)) + 4.287349,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2963,x_1**8*x_2**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2964,2.60224881490064*exp(0.666*x_1 + x_2/2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2965,2.551*x_1 + tan(2.465*x_1),"{'x_1': {'max': 10, 'min': -10}}",500
+2966,4.518 + asin(x_2)/x_1,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2967,cos(x_1**3.04*(x_2 + 3.189*x_3)),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2968,3.703*exp(x_1**0.083),"{'x_1': {'max': 10, 'min': -10}}",500
+2969,cos(2*x_1 - tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2970,3.020644*(0.575373993095512*x_1 + 1)**2/asin(x_1)**2 + exp(x_1),"{'x_1': {'max': 10, 'min': -10}}",500
+2971,tan(tan(x_1*(x_1 + 2))),"{'x_1': {'max': 10, 'min': -10}}",500
+2972,0.000414057977544388*tan(cos(x_1))**5/(0.510204081632653*x_2 + 1)**11.575256,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2973,-x_1 - x_2 + sin(x_1)**5,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2974,x_1 + log(x_1 - sin(x_2) - 0.827),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2975,x_1**(1/4)/x_2**3,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2976,asin(x_1*cos(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2977,3.395*Abs(cos(2.965*x_1)),"{'x_1': {'max': 10, 'min': -10}}",500
+2978,1.625*log(x_1 + 0.279)*cos(1.175*x_1),"{'x_1': {'max': 10, 'min': -10}}",500
+2979,x_1/(-x_1 + x_2 + tan(x_2)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2980,x_1/(-x_3 + cos(cos(x_2))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2981,x_1/x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2982,log(x_1**2*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2983,2*x_1 - x_2 - exp(0.51*x_2),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2984,cos(tan(tan(x_1))),"{'x_1': {'max': 10, 'min': -10}}",500
+2985,x_2*log(-x_1 + exp(x_1) - 2.063),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2986,exp(x_1*(x_2**(1/4) - 1)),"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2987,2.424249*x_1**5.648*x_2*x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2988,(exp(x_1) - 1.852)/(x_2*log(cos(x_3 + 0.138))),"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2989,x_1**2/(x_1 + x_2)**2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2990,3.3*tan(log(cos(1.40499110317468*sqrt(x_1)) - 1.191)),"{'x_1': {'max': 10, 'min': -10}}",500
+2991,x_1 + tan(tan(2.02607995893548*sqrt(x_1) + 0.968*x_1 + 0.679536)),"{'x_1': {'max': 10, 'min': -10}}",500
+2992,x_1**4*x_2,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2993,0.106472131939959*exp(1.228*x_1 + exp(x_1) - sin(x_1)),"{'x_1': {'max': 10, 'min': -10}}",500
+2994,x_1 - x_2 + cos(22.886656*x_1**2) + 2.602,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2995,sin(x_2**14.732*cos(x_1)) + 1.078,"{'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2996,2*x_1 + x_2 + x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2997,9.10590850091248*x_1**6*x_2**6.374 - x_3,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500
+2998,x_1*(2.09989637862491*exp(0.552*x_1) - 3.671),"{'x_1': {'max': 10, 'min': -10}}",500
+2999,x_1 - 3.464*x_2 - 5.987809*x_3**2,"{'x_3': {'max': 10, 'min': -10}, 'x_1': {'max': 10, 'min': -10}, 'x_2': {'max': 10, 'min': -10}}",500