{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "w84cR3AZIU0e"
},
"source": [
"## **Assignment #2: Classification, Regression, Clustering, Evaluation**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PnYmknSefeqx"
},
"source": [
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n7afdXdxIbLA"
},
"source": [
"### **Overview**\n",
"\n",
"In this assignment, you'll level up your data science toolkit. While the first assignment focused on the data, on this one you will practice:\n",
"\n",
"- Classification models\n",
"\n",
"- Regression models\n",
"\n",
"- Feature Engineering\n",
"\n",
"- Evaluations\n",
"\n",
"You’ll go from raw data to insights by building a full modeling pipeline, enhancing your dataset, and training different models.\n",
"\n",
"This assignment will be completed individually."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lJAPMumvIUyW"
},
"source": [
"### **Objectives**\n",
"\n",
"You’ll gain hands-on experience in:\n",
"- Evaluation\n",
"- Classification\n",
"- Regression\n",
"- Dataset preparation\n",
"- Explore various data hubs\n",
"- Engineering meaningful features\n",
"- Communicating findings clearly - visually and verbally\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MwRmaJBiIjMR"
},
"source": [
"### **Submission Guidelines**\n",
"\n",
"1. Please note that this assignmnet must be submitted alone.\n",
"2. Link to your HugingFace Model.\n",
"\n",
"Your HF model should include:\n",
"- README file: explanations, visuals, insights, etc.\n",
"- **Video**: Include the video of your presentation in the README file.\n",
"- **Python Notebook**: upload a copy of this notebook, with all of your coding work. Do not submit a Colab link; include the `.ipynb` file in the HF model.\n",
"- **ML Models:** Upload your models.\n",
"\n",
"Note: Students may be randomly chosen to present their work in a quick online session with the T.A., typically lasting ±10 minutes. Similar to Peer Review.\n",
"\n",
"
\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hD9SZmagIjOV"
},
"source": [
"### **Evaluation Criteria**\n",
"\n",
"* **Data Handling & EDA (20%)**\n",
" Thoughtful and thorough data cleaning; handling of missing values, outliers, duplicates, and more; well-chosen visualizations; clear statistical summaries; use of EDA to guide modeling choices.\n",
"\n",
"* **Feature Engineering (20%)**\n",
" Creative and effective feature creation, transformation, encoding, selection, scaling, and more; integration of clustering results as features; clear explanation of feature choices and their impact.\n",
"\n",
"* **Model Training (20%)**\n",
" Appropriate selection of models; correct train/test split; reproducible code; logical modeling workflow with a solid baseline and improvements post-feature engineering. An iterative process.\n",
"\n",
"* **Evaluation & Interpretation (20%)**\n",
" Use of relevant evaluation metrics; structured model comparison; use of feature importance or visualizations to interpret results; clear discussion of what the model learned and how it performed.\n",
"\n",
"* **Presentation (20%)**\n",
" 4–6 minute video with clear delivery; structured narrative; visuals that support the explanation; confident, professional communication of findings and lessons.\n",
"\n",
"* **Bonus (up to +10%)**\n",
" Extra work such as trying data science tools, creative visualizations, advanced hyper param tuning, interactive dashboards, and deeper business/ethical insights.\n",
"\n",
"* **Late Submission (-10% per day)**\n",
" Assignments submitted after the deadline will receive a 10% penalty per day.\n",
"\n",
"
"
]
},
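{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the evaluation bullet above, a hedged sketch of computing a few standard regression metrics (the numbers here are synthetic and purely illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: common regression metrics on made-up predictions.\n",
"import numpy as np\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
"\n",
"y_true = np.array([3.0, 5.0, 2.5, 7.0])\n",
"y_pred = np.array([2.5, 5.0, 3.0, 8.0])\n",
"\n",
"mae = mean_absolute_error(y_true, y_pred)\n",
"rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # RMSE = sqrt(MSE)\n",
"r2 = r2_score(y_true, y_pred)\n",
"print(mae, rmse, r2)\n"
]
},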
{
"cell_type": "markdown",
"metadata": {
"id": "h3vpVHSxIUwI"
},
"source": [
"### **Additional Guidelines**\n",
"\n",
"- The first thing you should do is download a copy of this notebook to your drive.\n",
"- Keep your dataset size manageable. If the dataset is too large, you can sample a subset.\n",
"- Run on Colab (CPU is fine). Colab free is enough. No GPU needed.\n",
"- You may use any Python package (scikit-learn, xgboost, lightgbm, catboost, etc.).\n",
"- No SHAP required. Use `feature_importances`, and similar tools.\n",
"- Make sure your results are reproducible (set **seeds** where needed).\n",
"- Be thoughtful with your cluster features — only use them if they help!\n",
"- Your presentation should tell a story; what worked, what didn’t, and why.\n",
"- Be creative, but also rigorous."
]
},
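{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged illustration of the `feature_importances_` approach mentioned in the guidelines (a sketch on synthetic data; the column names below are made up, not from any assigned dataset):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: rank features with a tree ensemble's feature_importances_.\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"\n",
"rng = np.random.default_rng(42)\n",
"X_demo = pd.DataFrame({\n",
"    \"gdp\": rng.normal(size=200),\n",
"    \"population\": rng.normal(size=200),\n",
"    \"year\": rng.integers(1850, 2020, size=200),\n",
"})\n",
"# The target depends almost entirely on one column, so its importance should dominate\n",
"y_demo = 3 * X_demo[\"gdp\"] + rng.normal(scale=0.1, size=200)\n",
"\n",
"rf = RandomForestRegressor(n_estimators=50, random_state=42).fit(X_demo, y_demo)\n",
"importances = pd.Series(rf.feature_importances_, index=X_demo.columns)\n",
"print(importances.sort_values(ascending=False))\n"
]
},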
{
"cell_type": "markdown",
"metadata": {
"id": "7lTH1B5b5c12"
},
"source": [
"### Assignment High-level Flow"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EK9fe2XygjgM"
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6kUonEv8Ipkp"
},
"source": [
"
\n",
"\n",
"---\n",
"\n",
"---\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "acyYQrhPdEhB"
},
"source": [
"imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"collapsed": true,
"id": "hVCyFalKwSVW",
"outputId": "71c5b5ea-a6ac-47e6-e469-f23e679701d3"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Requirement already satisfied: kaggle in /usr/local/lib/python3.12/dist-packages (1.7.4.5)\n",
"Requirement already satisfied: bleach in /usr/local/lib/python3.12/dist-packages (from kaggle) (6.3.0)\n",
"Requirement already satisfied: certifi>=14.05.14 in /usr/local/lib/python3.12/dist-packages (from kaggle) (2025.11.12)\n",
"Requirement already satisfied: charset-normalizer in /usr/local/lib/python3.12/dist-packages (from kaggle) (3.4.4)\n",
"Requirement already satisfied: idna in /usr/local/lib/python3.12/dist-packages (from kaggle) (3.11)\n",
"Requirement already satisfied: protobuf in /usr/local/lib/python3.12/dist-packages (from kaggle) (5.29.5)\n",
"Requirement already satisfied: python-dateutil>=2.5.3 in /usr/local/lib/python3.12/dist-packages (from kaggle) (2.9.0.post0)\n",
"Requirement already satisfied: python-slugify in /usr/local/lib/python3.12/dist-packages (from kaggle) (8.0.4)\n",
"Requirement already satisfied: requests in /usr/local/lib/python3.12/dist-packages (from kaggle) (2.32.4)\n",
"Requirement already satisfied: setuptools>=21.0.0 in /usr/local/lib/python3.12/dist-packages (from kaggle) (75.2.0)\n",
"Requirement already satisfied: six>=1.10 in /usr/local/lib/python3.12/dist-packages (from kaggle) (1.17.0)\n",
"Requirement already satisfied: text-unidecode in /usr/local/lib/python3.12/dist-packages (from kaggle) (1.3)\n",
"Requirement already satisfied: tqdm in /usr/local/lib/python3.12/dist-packages (from kaggle) (4.67.1)\n",
"Requirement already satisfied: urllib3>=1.15.1 in /usr/local/lib/python3.12/dist-packages (from kaggle) (2.5.0)\n",
"Requirement already satisfied: webencodings in /usr/local/lib/python3.12/dist-packages (from kaggle) (0.5.1)\n"
]
}
],
"source": [
"!pip install kaggle"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true,
"id": "H9ZazAMOc5jC"
},
"outputs": [],
"source": [
"import os\n",
"import random\n",
"import numpy as np\n",
"import pandas as pd\n",
"import seaborn\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "2Ha3wBk1sDQy"
},
"outputs": [],
"source": [
"user_name = \"yonatanlevy\"\n",
"api_key = \"4bfbe7b5a0fa05fcafff8e59823c9bcc\"\n",
"\n",
"# Create Kaggle authentication file\n",
"os.makedirs(os.path.expanduser(\"~/.config/kaggle\"), exist_ok=True)\n",
"with open(os.path.expanduser(\"~/.config/kaggle/kaggle.json\"), \"w\") as f:\n",
" f.write(f'{{\"username\":\"{user_name}\",\"key\":\"{api_key}\"}}')\n",
"os.chmod(os.path.expanduser(\"~/.config/kaggle/kaggle.json\"), 0o600)\n",
"\n",
"import kaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TPaWKBWmdGNF"
},
"source": [
"Set Seeds"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "zBaCUY21dHeF"
},
"outputs": [],
"source": [
"SEED = 42\n",
"\n",
"random.seed(SEED)\n",
"np.random.seed(SEED)\n",
"os.environ['PYTHONHASHSEED'] = str(SEED)"
]
},
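{
"cell_type": "markdown",
"metadata": {},
"source": [
"A cautionary sketch (assuming scikit-learn is used later): relying only on the global seeds above is fragile for scikit-learn utilities, which accept an explicit `random_state` per call:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: seed scikit-learn utilities per call via random_state,\n",
"# rather than relying only on the global seeds set above.\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X_demo = np.arange(20).reshape(10, 2)\n",
"y_demo = np.arange(10)\n",
"X_tr, X_te, y_tr, y_te = train_test_split(\n",
"    X_demo, y_demo, test_size=0.3, random_state=SEED\n",
")\n",
"print(X_te.shape)  # 3 of the 10 samples land in the test split\n"
]
},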
{
"cell_type": "markdown",
"metadata": {
"id": "INAizD1WeZcf"
},
"source": [
"For Jupyter Notebooks"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "G0hg5eohd4s-"
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"%config InlineBackend.figure_format = 'retina'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UwSXkPGvecLK"
},
"source": [
"Warnings"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "Nk9C78G3d7vp"
},
"outputs": [],
"source": [
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9EDUUjYc2R-T"
},
"source": [
"**Utility functions**"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": true,
"id": "fUEbi2wOhssS"
},
"outputs": [],
"source": [
"from re import U\n",
"def print_missing_values(df):\n",
" missing = (df\n",
" .isnull()\n",
" .mean()\n",
" .sort_values(ascending=True)\n",
" * 100\n",
" )\n",
"\n",
" missing.plot(\n",
" kind=\"bar\",\n",
" figsize=(20,4),\n",
" title=\"Missing values\",\n",
" xlabel=\"Features\",\n",
" ylabel=\"Missing values (%)\"\n",
" )\n",
"\n",
"def plot_data_completeness_by_year(df):\n",
" # share of non-missing cells per year (0–1)\n",
" completeness = (\n",
" df.groupby(\"year\")\n",
" .apply(lambda g: 1 - g.isnull().values.sum() / (g.shape[0] * g.shape[1]))\n",
" )\n",
"\n",
" plt.figure(figsize=(10, 4))\n",
" plt.bar(completeness.index, completeness.values * 100)\n",
" plt.xlabel(\"Year\")\n",
" plt.ylabel(\"Non-missing data (%)\")\n",
" plt.title(\"Data completeness by year\")\n",
" plt.tight_layout()\n",
" plt.show()\n",
"\n",
"def drop_unusefull_features(df, threshold):\n",
" missing_pct = df.isnull().mean() * 100\n",
" cols_to_drop = missing_pct[missing_pct > threshold].index\n",
" df = df.drop(columns=cols_to_drop)\n",
" print_missing_values(df)\n",
" return df\n",
"\n",
"def boxplot_features(df, columns, n_cols=4):\n",
" num = len(columns)\n",
" n_rows = int(np.ceil(num / n_cols))\n",
" plt.figure(figsize=(3 * n_cols, 3 * n_rows))\n",
"\n",
" for i, col in enumerate(columns, 1):\n",
" if not np.issubdtype(df[col].dtype, np.number):\n",
" continue\n",
" plt.subplot(n_rows, n_cols, i)\n",
" plt.boxplot(df[col].dropna(), vert=True)\n",
" plt.title(col, fontsize=8)\n",
"\n",
" plt.tight_layout()\n",
" plt.show()\n",
"\n",
"# Prints the uniq values\n",
"def show_unique_in_column(df, col_name):\n",
"\n",
" if col_name not in df.columns:\n",
" print(f\"Column '{col_name}' not found in DataFrame.\")\n",
" return\n",
"\n",
" uniques = df[col_name].unique()\n",
" print(f\"Unique values in '{col_name}' ({len(uniques)} values):\")\n",
" for v in uniques:\n",
" print(v)\n",
"\n",
"def bar_plot_with_filter(df, y_asix, x_asix, filter_column, filter_value):\n",
" filtered_df = df[df[filter_column] == filter_value]\n",
" # summarize data:\n",
" df_summary = (\n",
" filtered_df\n",
" .groupby(x_asix)[y_asix]\n",
" .mean()\n",
" .sort_values(ascending=False)\n",
" .head(100)\n",
" )\n",
"\n",
" # exicute plot\n",
" df_summary.plot(\n",
" kind=\"bar\",\n",
" figsize=(20,4),\n",
" title=f\"Mean {y_asix}\",\n",
" xlabel=x_asix,\n",
" ylabel=y_asix\n",
" )\n",
"\n",
"# this function finds all countries that have less than a certain threshold of data completeness across features\n",
"def find_sparse_places(df, threshold=0.3):\n",
" # features to check (ignore IDs)\n",
" feature_cols = [\n",
" c for c in df.columns\n",
" if c not in [\"Description\", \"Name\", \"year\", \"iso_code\"]\n",
" ]\n",
"\n",
" # fraction of non-missing values per place, across all its years\n",
" completeness = (\n",
" df.groupby(\"Name\")[feature_cols]\n",
" .apply(lambda g: g.notnull().sum().sum() / (g.shape[0] * len(feature_cols)))\n",
" )\n",
"\n",
" sparse = completeness[completeness < threshold] # e.g. <30% filled\n",
" return sparse.sort_values() # most empty first"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "evFmlLbzdgBj"
},
"source": [
"
\n",
"\n",
"---\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a_vsO0Q1IOMT"
},
"source": [
"# **Part 1: Select a Regression Dataset**\n",
"\n",
"1. Choose a numeric & categorical tabular dataset. If you prefer, you may use open-source datasets; [Hugginface](https://huggingface.co/datasets?task_categories=task_categories:tabular-classification&sort=trending), [Kaggle](https://www.kaggle.com/datasets?tags=13302-Classification&minUsabilityRating=8.00+or+higher), etc.\n",
"\n",
"2. Avoid choosing a \"basic\"/\"small\" dataset.\n",
" - 10K rows and more.\n",
" - 15 features and more.\n",
" - Numeric & Categorial features are a must.\n",
"\n",
"3. The Label (target variable) is numeric.\n",
"\n",
"4. Please submit your dataset [here](https://forms.gle/YYiRLXJnbwUfwuwc7), to share it with the class so everyone can see.\n",
"And make sure your chosen dataset is unique using this [link](https://docs.google.com/spreadsheets/d/1M8uojrzhSyVnOlSAJpzCKxrhWdzPR77k4x8Kxvr8VDk/edit?usp=sharing).\n",
"\n",
" *Note: Due to their popularity, the following are datasets you may not choose.*\n",
" > - Iris dataset\n",
" > - Wine dataset\n",
" > - Titanic dataset\n",
" > - Boston Housing dataset\n",
"\n",
"5. Choose a dataset with a combination of numeric and textual values. This way you would have enough information to work on.\n",
"\n",
"6. Briefly describe your chosen dataset (source, size, features) and the question you want to answer."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GFaHgQJyKwa5"
},
"source": [
"Import Dataset: CO₂ Emissions Across Countries, Regions, & Sectors"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"collapsed": true,
"id": "OI0MZzohKwfE",
"outputId": "32b1827f-b0bd-425c-ba26-4edfd95ab08c"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Dataset URL: https://www.kaggle.com/datasets/shreyanshdangi/co-emissions-across-countries-regions-and-sectors\n"
]
}
],
"source": [
"# Get dataset from kaggle and save on notebook\n",
"dataset_name = \"shreyanshdangi/co-emissions-across-countries-regions-and-sectors\"\n",
"kaggle.api.dataset_download_files(dataset_name, path=\"./\", unzip=True)\n",
"\n",
"csv_file_path = \"//content/Data.csv\"\n",
"assert os.path.exists(csv_file_path), f\"The file {csv_file_path} does not exist. Ensure the dataset was downloaded and extracted successfully.\"\n",
"\n",
"os.rename(\"/content/Data.csv\", \"/content/co2_emission.csv\")\n",
"co2_emission = pd.read_csv(\"/content/co2_emission.csv\")\n",
"pd.set_option('display.max_rows', None)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4t2QNyE6IPKS"
},
"source": [
"
\n",
"\n",
"---\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6eLmNWJJIPS0"
},
"source": [
"# **Part 2: Exploratory Data Analysis (EDA)**\n",
"\n",
"Use your EDA to tell the story of your data - highlight interesting patterns, anomalies, or relationships that lead you toward your classification goal. Ask interesting questions, and answer them.\n",
"\n",
"\n",
"1. **Data Cleaning** : Check for missing values, duplicate entries, scaling/normalize issues, parsing dates, fixing typos, or any inconsistencies. Document how you address them.\n",
"2. **Outlier Detection & Handling**: Identify outliers and decide whether to keep or remove them, providing a short justification.\n",
"2. **Descriptive Statistics**: Summarize the data (e.g., mean, median, correlations) to reveal patterns.\n",
"4. **Visualizations**: Use a set of plots (e.g., histograms, scatter plots, box plots) to illustrate **key insights.** Label charts, axes, and legends clearly.\n",
"\n",
"Tip: not necessarily in this order."
]
},
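{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal, self-contained sketch of the cleaning checks listed above (duplicate rows, per-column missingness, a simple IQR outlier flag) on a made-up frame; the same calls apply to the real dataframe:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative cleaning checks on a tiny synthetic DataFrame.\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"df_demo = pd.DataFrame({\n",
"    \"year\": [2000, 2000, 2001, 2002, 2003, 2004, 2005],\n",
"    \"co2\": [1.0, 1.0, 2.0, 2.0, 3.0, 100.0, np.nan],\n",
"})\n",
"\n",
"n_dupes = df_demo.duplicated().sum()      # exact duplicate rows\n",
"missing_share = df_demo.isnull().mean()   # per-column missing fraction\n",
"\n",
"# Simple IQR rule to flag candidate outliers in one numeric column\n",
"q1, q3 = df_demo[\"co2\"].quantile([0.25, 0.75])\n",
"iqr = q3 - q1\n",
"outliers = df_demo[(df_demo[\"co2\"] < q1 - 1.5 * iqr) |\n",
"                   (df_demo[\"co2\"] > q3 + 1.5 * iqr)]\n",
"print(n_dupes, missing_share[\"co2\"], len(outliers))\n"
]
},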
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 256
},
"collapsed": true,
"id": "xVS1OFCv6sb1",
"outputId": "0480fa6d-7b72-499b-bd12-e37c00778b46"
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Description Name year iso_code population gdp cement_co2 \\\n",
"0 Country Afghanistan 1850 AFG 3752993.0 NaN 0.0 \n",
"1 Country Afghanistan 1851 AFG 3767956.0 NaN 0.0 \n",
"2 Country Afghanistan 1852 AFG 3783940.0 NaN 0.0 \n",
"3 Country Afghanistan 1853 AFG 3800954.0 NaN 0.0 \n",
"4 Country Afghanistan 1854 AFG 3818038.0 NaN 0.0 \n",
"\n",
" cement_co2_per_capita co2 co2_growth_abs ... share_global_other_co2 \\\n",
"0 0.0 NaN NaN ... NaN \n",
"1 0.0 NaN NaN ... NaN \n",
"2 0.0 NaN NaN ... NaN \n",
"3 0.0 NaN NaN ... NaN \n",
"4 0.0 NaN NaN ... NaN \n",
"\n",
" share_of_temperature_change_from_ghg temperature_change_from_ch4 \\\n",
"0 NaN NaN \n",
"1 0.156 0.0 \n",
"2 0.155 0.0 \n",
"3 0.155 0.0 \n",
"4 0.155 0.0 \n",
"\n",
" temperature_change_from_co2 temperature_change_from_ghg \\\n",
"0 NaN NaN \n",
"1 0.0 0.0 \n",
"2 0.0 0.0 \n",
"3 0.0 0.0 \n",
"4 0.0 0.0 \n",
"\n",
" temperature_change_from_n2o total_ghg total_ghg_excluding_lucf \\\n",
"0 NaN 7.436 0.629 \n",
"1 0.0 7.500 0.633 \n",
"2 0.0 7.560 0.637 \n",
"3 0.0 7.620 0.641 \n",
"4 0.0 7.678 0.644 \n",
"\n",
" trade_co2 trade_co2_share \n",
"0 NaN NaN \n",
"1 NaN NaN \n",
"2 NaN NaN \n",
"3 NaN NaN \n",
"4 NaN NaN \n",
"\n",
"[5 rows x 80 columns]"
],
"text/html": [
"\n",
"
| \n", " | Description | \n", "Name | \n", "year | \n", "iso_code | \n", "population | \n", "gdp | \n", "cement_co2 | \n", "cement_co2_per_capita | \n", "co2 | \n", "co2_growth_abs | \n", "... | \n", "share_global_other_co2 | \n", "share_of_temperature_change_from_ghg | \n", "temperature_change_from_ch4 | \n", "temperature_change_from_co2 | \n", "temperature_change_from_ghg | \n", "temperature_change_from_n2o | \n", "total_ghg | \n", "total_ghg_excluding_lucf | \n", "trade_co2 | \n", "trade_co2_share | \n", "
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | \n", "Country | \n", "Afghanistan | \n", "1850 | \n", "AFG | \n", "3752993.0 | \n", "NaN | \n", "0.0 | \n", "0.0 | \n", "NaN | \n", "NaN | \n", "... | \n", "NaN | \n", "NaN | \n", "NaN | \n", "NaN | \n", "NaN | \n", "NaN | \n", "7.436 | \n", "0.629 | \n", "NaN | \n", "NaN | \n", "
| 1 | \n", "Country | \n", "Afghanistan | \n", "1851 | \n", "AFG | \n", "3767956.0 | \n", "NaN | \n", "0.0 | \n", "0.0 | \n", "NaN | \n", "NaN | \n", "... | \n", "NaN | \n", "0.156 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "7.500 | \n", "0.633 | \n", "NaN | \n", "NaN | \n", "
| 2 | \n", "Country | \n", "Afghanistan | \n", "1852 | \n", "AFG | \n", "3783940.0 | \n", "NaN | \n", "0.0 | \n", "0.0 | \n", "NaN | \n", "NaN | \n", "... | \n", "NaN | \n", "0.155 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "7.560 | \n", "0.637 | \n", "NaN | \n", "NaN | \n", "
| 3 | \n", "Country | \n", "Afghanistan | \n", "1853 | \n", "AFG | \n", "3800954.0 | \n", "NaN | \n", "0.0 | \n", "0.0 | \n", "NaN | \n", "NaN | \n", "... | \n", "NaN | \n", "0.155 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "7.620 | \n", "0.641 | \n", "NaN | \n", "NaN | \n", "
| 4 | \n", "Country | \n", "Afghanistan | \n", "1854 | \n", "AFG | \n", "3818038.0 | \n", "NaN | \n", "0.0 | \n", "0.0 | \n", "NaN | \n", "NaN | \n", "... | \n", "NaN | \n", "0.155 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "0.0 | \n", "7.678 | \n", "0.644 | \n", "NaN | \n", "NaN | \n", "
5 rows × 80 columns
\n", "Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),\n",
" ('scaler', StandardScaler()), ('lr', LinearRegression())])In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),\n",
" ('scaler', StandardScaler()), ('lr', LinearRegression())])SimpleImputer(strategy='median')
StandardScaler()
LinearRegression()
| \n", " | train_proportion | \n", "test_proportion | \n", "
|---|---|---|
| class | \n", "\n", " | \n", " |
| 0 | \n", "33.33 | \n", "33.56 | \n", "
| 1 | \n", "33.33 | \n", "34.29 | \n", "
| 2 | \n", "33.34 | \n", "32.14 | \n", "