SECONDARY-CANCER-TYPE UPDATE RULE (no event collapse)
Goal: Within any 30-day window for the same evidence_source, update only the
`secondary_cancer_type` field; keep every assessment event separate and leave all
other fields unchanged.
How to apply:
1) Group assessments by evidence_source (e.g., imaging, physician, radiation, etc.).
2) Within each source, form groups where events occur within 30 calendar days of one
another. Use transitive closure (if A is within 30 days of B and B is within 30 days of C,
then A, B, C are in the same group).
3) For each group, collect all stated secondary cancer types across the group.
   - Normalize by lowercasing and trimming; map obvious synonyms (e.g., “hepatic mets”
     → “liver metastasis”) before de-duplication.
4) Write the **comma-separated, alphabetical list of unique values** back into the
`secondary_cancer_type` field of **each event in that group**.
5) Do **not** modify any other fields (including `date_of_disease_progression_assessment`,
`disease_progression_status`, `evidence_for_*`, `treatment_change`, etc.).
6) If no secondary cancer type is present in the entire group, leave the field as `""`.
Examples
- Imaging events on 2023-05-02 (“liver metastasis”) and 2023-05-20 (“lung mets”) →
  both events’ `secondary_cancer_type` become: "liver metastasis, lung metastasis".
- Physician notes on 2023-06-01 and 2023-06-25 with only one mentioning “brain mets” →
  both physician events get: "brain metastasis".
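A minimal reference sketch of this update rule, assuming each event dict carries an `evidence_source`, a `date` (datetime.date), and a `secondary_cancer_type` string; the synonym map is illustrative, not exhaustive:

from datetime import date
from itertools import groupby

# Illustrative synonym map -- extend for your corpus.
SYNONYMS = {"hepatic mets": "liver metastasis", "lung mets": "lung metastasis",
            "brain mets": "brain metastasis"}

def normalize(value):
    v = value.strip().lower()
    return SYNONYMS.get(v, v)

def _write_union(group):
    # Union of normalized types across the group, written back to every event.
    types = sorted({normalize(t)
                    for ev in group
                    for t in ev["secondary_cancer_type"].split(",") if t.strip()})
    joined = ", ".join(types)  # stays "" when no type was stated in the whole group
    for ev in group:
        ev["secondary_cancer_type"] = joined

def update_secondary_types(events):
    """Update only `secondary_cancer_type`; every event stays a separate row."""
    events = sorted(events, key=lambda e: (e["evidence_source"], e["date"]))
    for _, src_events in groupby(events, key=lambda e: e["evidence_source"]):
        group = []
        for ev in src_events:
            # On date-sorted events, chaining consecutive events <= 30 days apart
            # is exactly the transitive closure described above.
            if group and (ev["date"] - group[-1]["date"]).days > 30:
                _write_union(group)
                group = []
            group.append(ev)
        if group:
            _write_union(group)
    return events

# Mirrors the first example above:
events = [
    {"evidence_source": "imaging", "date": date(2023, 5, 2),
     "secondary_cancer_type": "liver metastasis"},
    {"evidence_source": "imaging", "date": date(2023, 5, 20),
     "secondary_cancer_type": "lung mets"},
]
update_secondary_types(events)  # both rows -> "liver metastasis, lung metastasis"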
----
You are an expert oncology clinical data curator. Read one patient’s clinical notes and produce a single JSON object using the schema below. Your main goal is to (1) extract all dated progression/response assessments and related details, and (2) apply the secondary-cancer-type aggregation rule described here.
AGGREGATION RULE (30-day window, same evidence source)
- Consider assessments that occur within 30 calendar days of each other AND share the same evidence_source (e.g., imaging, physician).
- Collapse those assessments into one event and update fields as follows:
  • secondary_cancer_type → set to a **comma-separated list of unique values** observed across the collapsed assessments (leave "" if none are stated).
  • date_of_disease_progression_assessment → use the **latest date** among the collapsed assessments.
  • disease_progression_status → if multiple statuses appear, keep the most definitive using this priority:
    progression > complete response > partial response > stable disease > no evidence of disease > indeterminate.
  • treatment_change → if present in any collapsed assessment, keep the details tied to the **latest date**; if they conflict across notes, prefer the latest.
  • evidence_for_disease_progression_assessment / exact_evidence_span_for_disease_progression_assessment → use the clearest/most definitive phrasing from the latest assessment.
DATE & VALUE RULES
- Use ISO-8601 where available: YYYY-MM-DD; if only the month is known use YYYY-MM; if only the year is known use YYYY.
- If a note says “today” and the note date is known, resolve it to that date; otherwise leave the date field as "".
- Keep drug/regimen names verbatim.
- Do not guess. If a field is not stated, set it to "".
OUTPUT
- Return **only** valid JSON (no prose, no Markdown fences).
- Keep field order and names exactly as in the schema.
- All string fields must be strings; if unknown, use "".
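Unlike the no-collapse rule above, this prompt merges windowed events into one. A sketch of that collapse for a group already known to share a source and a 30-day window (hypothetical dict fields mirroring the schema; assumes each event in the group has an ISO assessment date, and leaves statuses outside the priority list untouched):

STATUS_PRIORITY = ["progression", "complete response", "partial response",
                   "stable disease", "no evidence of disease", "indeterminate"]

def collapse_group(group):
    """Merge same-source events that fall within one 30-day window."""
    group = sorted(group, key=lambda e: e["date_of_disease_progression_assessment"])
    merged = dict(group[-1])  # latest event wins for date/evidence/treatment_change
    types = sorted({t.strip() for ev in group
                    for t in ev["secondary_cancer_type"].split(",") if t.strip()})
    merged["secondary_cancer_type"] = ", ".join(types)
    statuses = [ev["disease_progression_status"] for ev in group
                if ev["disease_progression_status"] in STATUS_PRIORITY]
    if statuses:
        # Lowest index = most definitive per the priority above.
        merged["disease_progression_status"] = min(statuses, key=STATUS_PRIORITY.index)
    return merged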
import json
import re

import pandas as pd
----
import boto3
import botocore
def filter_rows(group):
    # Row-level biomarker selection within one patient group: keep the
    # ER-positive, PR-positive, and HER2-negative result rows.
    condition1 = (
        ((group["biomarker_name"] == "er") & (group["test_result"].str.lower().isin(["positive", "pos", "+"]))) |
        ((group["biomarker_name"] == "her2") & (group["test_result"].str.lower().isin(["negative", "neg", "-"])))
    )
    condition2 = (
        ((group["biomarker_name"] == "pr") & (group["test_result"].str.lower().isin(["positive", "pos", "+"]))) |
        ((group["biomarker_name"] == "her2") & (group["test_result"].str.lower().isin(["negative", "neg", "-"])))
    )
    return group[condition1 | condition2]

filtered_df_final = pd.concat(
    [filter_rows(group) for _, group in finaldf.groupby("chai_patient_id")],
    ignore_index=True,
)
# Keep early-stage records (stages I-III and their substages), then de-duplicate
# to one row per patient/CLQ pair.
stage_filter = ['1', '2', '3', 'i', 'ii', 'iii', 'iia', 'iiia', 'iib', 'iiib']
x = filtered_df_final[filtered_df_final["stage_status"].isin(stage_filter)]
y = x[["chai_patient_id", "clq_id"]].drop_duplicates()
# Same ER/PR/HER2 row filter applied to the raw df (filter_rows is defined above).
filtered_df = pd.concat(
    [filter_rows(group) for _, group in df.groupby("chai_patient_id")],
    ignore_index=True,
)
def list_files_in_bucket(bucket_name, prefix=''):
    """
    List files in a given S3 bucket.

    Note: list_objects_v2 returns at most 1,000 keys per call; see the
    paginated variant below for larger prefixes.

    Parameters:
    - bucket_name (str): The name of the S3 bucket.
    - prefix (str): The prefix to filter files (useful for listing files in a specific folder).

    Returns:
    - list: A list of file keys in the bucket.
    """
    s3_client = boto3.client('s3')
    try:
        response = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=prefix)
        files = [content['Key'] for content in response.get('Contents', [])]
        return files
    except botocore.exceptions.ClientError as e:
        print(f"Error listing files: {e}")
        return []
def read_file_from_s3(bucket_name, file_key, file_name=None):
    """
    Read the contents of a specific file in an S3 bucket.

    Parameters:
    - bucket_name (str): The name of the S3 bucket.
    - file_key (str): The key (path) of the file in the S3 bucket.
    - file_name (str): Optional, the name of the file to read.

    Returns:
    - str: The content of the file as a string if it exists, None otherwise.
    """
    s3_client = boto3.client('s3')
    try:
        full_key = f"{file_key}/{file_name}" if file_name else file_key
        obj = s3_client.get_object(Bucket=bucket_name, Key=full_key)
        content = obj['Body'].read().decode('utf-8')
        return content
    except botocore.exceptions.ClientError as e:
        print(f"Error reading file: {e}")
        return None
# Example usage
bucket_name = 'your-bucket-name'
prefix = 'your/prefix/'  # Optional, if you want to list files in a specific folder
file_name = 'your-file-name.txt'

# List files in the bucket
files = list_files_in_bucket(bucket_name, prefix)
print(f"Files in bucket '{bucket_name}': {files}")

# Read the content of the specified file
content = read_file_from_s3(bucket_name, prefix, file_name)
print(f"Content of '{file_name}':\n{content}")
----
# Rows captured from the snapshot; values retain OCR artifacts ('©', '†', '§', etc.)
# exactly as extracted.
data = [
    ['p.G6bS$ts', 'p.G6bS$ts | 8', '8 |', 'C.1994delG', 'p.G6bS$ts | 17'],
    ['pS12/ifs', 'pS12/ifs | 16', '16 |', '©3810dupC', 'pS12/ifs | 14'],
    ['pAs042fs', 'pAs042fs | 48', '48 |', 'c.15124delG', 'pAs042fs | 6'],
    ['†on', '†on 2', 'C8§5—2A>G', '†on', '64'],
    ['p.Y628fs', 'p.Y628fs |', '', 'c.1882delT,c.2851—1G>T', 'p.Y628fs | 16'],
    ['p.H1O4/R', 'p.H1O4 /R', '21', 'C.3140A>G', 'p.H1O4/R | 13'],
    ['pK26/fs', 'pK26/fs |', '', 'c.800delA', 'pK26/fs | 6'],
    ['O.T542fs', 'O.T542fs | 9', '9', 'C.1624delA', 'O.T542fs | 18'],
    ['p.r224D', 'p.r224D | 6', '6', 'c6/2G>T', 'p.r224D | 16']
]
# Function to split on '|' and return the second part
def extract_post_split(value):
    parts = value.split('|')
    return parts[1].strip() if len(parts) > 1 else ''  # second part if it exists, else empty string

# Extract the 1st and 4th values as-is, plus the post-split 2nd and last values
extracted_data = []
for row in data:
    extracted_row = [
        row[0],                       # 1st value as is
        extract_post_split(row[1]),   # split 2nd value on '|' and take second part
        row[3],                       # 4th value as is
        extract_post_split(row[-1]),  # split last value on '|' and take second part
    ]
    extracted_data.append(extracted_row)

# Print the result
for row in extracted_data:
    print(row)
def filter_medical_terms(lines):
    # Word-boundary regex so short terms like 'er'/'pr' don't match inside
    # words such as 'other' or 'procedure'.
    terms = ['er', 'pr', 'her2', 'mammaprint', 'oncotype']
    pattern = re.compile(r'\b(?:' + '|'.join(terms) + r')\b', re.IGNORECASE)
    filtered_lines = []
    for line in lines:
        if pattern.search(line):
            filtered_lines.append(line.strip())
    return filtered_lines
# The text output from the LLM (you would replace this with the actual output)
llm_output = """
[Your provided text goes here] ...
"""
# Extract JSON strings using regex
json_strings = re.findall(r'```json\n(.*?)```', llm_output, re.DOTALL)

# Parse each JSON string and collect the data
data = []
for json_str in json_strings:
    try:
        parsed = json.loads(json_str)
        entity_name = parsed['entity_name']
        attributes = parsed['attributes'][0]  # first attributes record per entity
        attributes['entity_name'] = entity_name
        data.append(attributes)
    except json.JSONDecodeError:
        print(f"Error parsing JSON: {json_str}")

# Create a pandas DataFrame
df = pd.DataFrame(data)

# Reorder columns to have 'entity_name' first
cols = ['entity_name'] + [col for col in df.columns if col != 'entity_name']
df = df[cols]

# Display the DataFrame
print(df)
# Guard against a missing fenced block (re.search returns None) and tolerate an
# optional ```json language tag.
match = re.search(r"```(?:json)?\s*(.*?)```", llm_output, re.DOTALL)
if match is None:
    raise ValueError("No fenced block found in llm_output")
json_string = match.group(1).strip()

# Load the JSON string into a dictionary
data = json.loads(json_string)

# Convert the 'attributes' list to a DataFrame
df = pd.DataFrame(data['attributes'])

# Display the DataFrame
print(df)
def extract_table(text):
    # Find the start and end of the table
    start = text.find("| Biomarker Name |")
    if start == -1:
        return pd.DataFrame()  # no table header found
    end = text.rfind("|", start) + 1  # include the final pipe so the last cell survives the split
    # Extract the table portion
    table_text = text[start:end].strip()
    # Convert the table to a list of rows
    rows = [row.strip().split("|")[1:-1] for row in table_text.split("\n") if "|" in row]
    # Create a DataFrame from the rows
    df = pd.DataFrame(rows[1:], columns=rows[0])
    return df
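# `llm_response` is not defined anywhere in these notes; a hypothetical
# two-row sample so the extraction below runs end-to-end:
llm_response = """Here are the findings:
| Biomarker Name | Result |
| ER | positive |
| HER2 | negative |
"""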
# Extract the final result table
df_final_result = extract_table(llm_response)
# Load your data (replace 'your_file.csv' with the actual file path)
merge_data = pd.read_csv('your_file.csv')

# Calculate value counts for columns ending with _data and _gt
columns_data = [col for col in merge_data.columns if col.endswith('_data')]
columns_gt = [col for col in merge_data.columns if col.endswith('_gt')]

# Initialize a dictionary to store value counts and differences
value_counts_diff = {}
for data_col, gt_col in zip(columns_data, columns_gt):
    data_counts = merge_data[data_col].value_counts(dropna=False)
    gt_counts = merge_data[gt_col].value_counts(dropna=False)
    # Create a DataFrame combining the counts
    combined_counts = pd.DataFrame({
        'data_counts': data_counts,
        'gt_counts': gt_counts
    }).fillna(0)
    # Calculate the difference between data and gt counts
    combined_counts['difference'] = combined_counts['data_counts'] - combined_counts['gt_counts']
    # Store in dictionary
    value_counts_diff[data_col] = combined_counts

# Display the results for each column
value_counts_diff
# Row-wise exact match (assumes `data` and `df_excel` share the same index)
data['match'] = data['Column_B'] == df_excel['Column_A']

# Map the boolean result to 'Match' or 'No Match'
data['match'] = data['match'].map({True: 'Match', False: 'No Match'})

# Calculate performance metrics
tp = data['match'].value_counts().get('Match', 0)
fn = data['match'].value_counts().get('No Match', 0)
tot = len(data)  # You can replace len(data) with len(df_excel) if both are the same
accuracy = tp / tot
# No false positives are tracked in this exact-match setup, so precision is
# trivially 1.0 whenever anything matches (the original tp / (tp + 0) also
# divided by zero when tp == 0).
precision = 1.0 if tp else 0.0
recall = tp / (tp + fn)
f1_score = 2 * (precision * recall) / (precision + recall) if (precision + recall) else 0.0
print(f"Accuracy: {accuracy:.2%}")
print(f"Precision: {precision:.2%}")
print(f"Recall: {recall:.2%}")
print(f"F1 Score: {f1_score:.2%}")
----
You are a healthcare professional with specialized expertise in oncology and a comprehensive understanding of biomarker data from patient medical records. Your primary task is to extract and identify the table layouts present in the provided data, focusing solely on these layouts. Present your findings in a clear and structured table format. Ensure that only the relevant table layouts are included, without incorporating any additional findings or headers unrelated to the identified tables.
import numpy as np

# Assuming 'Biomarker', 'Method', and 'Result' are the columns to compare;
# `data` and `df_excel` must be row-aligned (same index) for .eq() to work.
columns_to_compare = ['Biomarker', 'Method', 'Result']

# Element-wise equality on the selected columns
comparison = data[columns_to_compare].eq(df_excel[columns_to_compare])

# True Positives: rows where all compared columns match
true_positives = comparison.all(axis=1).sum()

# False Positives: mismatched cells, counted only where `df_excel` has a value
false_positives = np.logical_and(~comparison, ~df_excel[columns_to_compare].isnull()).sum().sum()

# False Negatives: mismatched cells, counted only where `data` has a value
false_negatives = np.logical_and(~comparison, ~data[columns_to_compare].isnull()).sum().sum()

# Precision: TP / (TP + FP) -- note TP is row-level while FP/FN are cell-level,
# so treat these figures as rough agreement indicators rather than strict metrics.
precision = true_positives / (true_positives + false_positives)
# Recall: TP / (TP + FN)
recall = true_positives / (true_positives + false_negatives)
# F1 Score: 2 * (precision * recall) / (precision + recall)
f1_score = 2 * (precision * recall) / (precision + recall)
print(f"Precision: {precision:.2f}")
print(f"Recall: {recall:.2f}")
print(f"F1 Score: {f1_score:.2f}")
def clean_llm_response(text):
    # Remove lines with '--- | --- | ---' pattern
    clean_text = re.sub(r'---(\s*\|\s*---)+', '', text)
    # Remove lines containing '**'
    clean_text = re.sub(r'\*\*.*\*\*', '', clean_text)
    # Remove any resulting empty lines
    clean_text = re.sub(r'\n\s*\n', '\n', clean_text)
    return clean_text
# Function to parse tables from the response
def parse_tables(response_text):
    tables = {}
    current_table = []
    current_table_name = None
    for line in response_text.strip().split('\n'):
        if line.startswith("Table"):
            if current_table_name:
                tables[current_table_name] = current_table
            current_table_name = re.sub(r'Table \d+: ', '', line)
            current_table = []
        else:
            current_table.append(line.strip())
    if current_table_name:
        tables[current_table_name] = current_table
    return tables
# Function to convert parsed tables into DataFrames
def tables_to_dataframes(parsed_tables):
    dataframes = {}
    for table_name, lines in parsed_tables.items():
        if len(lines) > 2:
            # lines[0] is assumed to be a separator/blank row under the
            # "Table N:" title; lines[1] carries the headers.
            headers = lines[1].split(" | ")
            data = [row.split(" | ") for row in lines[2:]]
            df = pd.DataFrame(data, columns=headers)
            dataframes[table_name] = df
    return dataframes
# Parse the tables
parsed_tables = parse_tables(llm_response)

# Convert tables to DataFrames
dataframes = tables_to_dataframes(parsed_tables)

# Display dataframes
for table_name, df in dataframes.items():
    print(f"\n{table_name}:\n")
    print(df)
----
The following is a table containing information about people. Please extract the data from the table and output it in a structured format such as JSON, where each row is an object with the corresponding column headers as keys.
| Name   | Age | Occupation | Country |
|--------|-----|------------|---------|
| John   | 30  | Engineer   | USA     |
| Maria  | 25  | Doctor     | Spain   |
| Ahmed  | 40  | Teacher    | Egypt   |
Please structure the output like this:
[
  {
    "Name": "John",
    "Age": 30,
    "Occupation": "Engineer",
    "Country": "USA"
  },
  {
    "Name": "Maria",
    "Age": 25,
    "Occupation": "Doctor",
    "Country": "Spain"
  },
  {
    "Name": "Ahmed",
    "Age": 40,
    "Occupation": "Teacher",
    "Country": "Egypt"
  }
]
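As a deterministic cross-check of the output this prompt asks for, a small sketch that parses the markdown table above into the same JSON shape (the int cast for Age is an assumption of this sketch, matching the example output):

table = """\
| Name   | Age | Occupation | Country |
|--------|-----|------------|---------|
| John   | 30  | Engineer   | USA     |
| Maria  | 25  | Doctor     | Spain   |
| Ahmed  | 40  | Teacher    | Egypt   |"""

rows = [[cell.strip() for cell in line.strip().strip('|').split('|')]
        for line in table.splitlines()
        if '|' in line and not set(line) <= set('|- ')]  # skip the separator row
headers = rows[0]
records = [dict(zip(headers, row)) for row in rows[1:]]
for rec in records:
    rec["Age"] = int(rec["Age"])  # assumption: Age is always an integer
print(json.dumps(records, indent=2))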
----
Step 11. **HARD CONSTRAINT – Secondary cancer aggregation (windowed by the selected progression event)**
• For EACH selected tumor-response event, collect the DISTINCT secondary cancer types whose diagnosis dates fall within a ±30-day window around that event’s date_of_disease_progression_assessment.
• Let D be the anchor date for the event:
  - If date_of_disease_progression_assessment is present, D = that date.
  - If it is null, determine D using Step 13 (start_date_of_treatment or date_of_secondary_cancer_diagnosis fallback for that event).
• Window is inclusive: [D−30 days, D+30 days].
• Output these types in an ARRAY field named secondary_cancer_types_within_30d_of_progression (empty array if none), e.g.:
"secondary_cancer_types_within_30d_of_progression": []
from datetime import datetime, timedelta
from typing import List, Dict, Any, Optional, Tuple
import re

# --- helpers ---------------------------------------------------------------

def parse_date(s: Optional[str]) -> Optional[datetime]:
    """Parse common date formats to a datetime (YYYY-MM-DD, M/D/YYYY, etc.)."""
    if not s or not isinstance(s, str):
        return None
    s = s.strip()
    fmts = ["%Y-%m-%d", "%Y/%m/%d", "%m/%d/%Y", "%d-%b-%Y", "%d-%B-%Y"]
    for fmt in fmts:
        try:
            return datetime.strptime(s, fmt)
        except Exception:
            pass
    # try M/D/YY
    m = re.search(r"(\d{1,2})/(\d{1,2})/(\d{2})$", s)
    if m:
        mm, dd, yy = map(int, m.groups())
        yy = (2000 + yy) if yy < 50 else (1900 + yy)
        try:
            return datetime(yy, mm, dd)
        except Exception:
            return None
    return None

def iso(d: Optional[datetime]) -> str:
    return d.strftime("%Y-%m-%d") if d else ""

def anchor_date_for_event(ev: Dict[str, Any]) -> Optional[datetime]:
    """
    Step-13 anchor:
      D = date_of_disease_progression_assessment
          or treatment_change.start_date_of_treatment
          or date_of_secondary_cancer_diagnosis
    """
    d_prog = parse_date(ev.get("date_of_disease_progression_assessment"))
    if d_prog:
        return d_prog
    start_tx = parse_date((ev.get("treatment_change") or {}).get("start_date_of_treatment"))
    if start_tx:
        return start_tx
    return parse_date(ev.get("date_of_secondary_cancer_diagnosis"))

def collect_secondary_pool(observations: List[Dict[str, Any]]) -> List[Tuple[str, datetime]]:
    """Collect (secondary_cancer_type, diagnosis_date) pairs from all observations."""
    pool: List[Tuple[str, datetime]] = []
    for ob in observations:
        t = (ob.get("secondary_cancer_type") or "").strip()
        dt = parse_date(ob.get("date_of_secondary_cancer_diagnosis"))
        if t and dt:
            pool.append((t, dt))
    return pool

# --- main aggregation ------------------------------------------------------

def aggregate_secondary_types_within_30d(
    observations: List[Dict[str, Any]],
    id_field: str = "segment_id",
) -> List[Dict[str, Any]]:
    """
    For EACH event in `observations`, compute the distinct secondary tumor types whose
    diagnosis dates fall within ±30 days of the event's anchor date (D).
    Returns a new list (does not mutate input) with:
      - 'secondary_cancer_types_within_30d_of_progression': List[str]
      - 'secondary_cancer_types_within_30d_of_progression_csv': str
      - 'date_of_disease_progression_assessment' normalized to YYYY-MM-DD (if present)
    """
    pool = collect_secondary_pool(observations)
    out: List[Dict[str, Any]] = []
    for ev in observations:
        ev2 = dict(ev)  # shallow copy
        D = anchor_date_for_event(ev2)
        # normalize progression date if present
        if ev2.get("date_of_disease_progression_assessment"):
            ev2["date_of_disease_progression_assessment"] = iso(parse_date(ev2.get("date_of_disease_progression_assessment")))
        if not D:
            ev2["secondary_cancer_types_within_30d_of_progression"] = []
            ev2["secondary_cancer_types_within_30d_of_progression_csv"] = ""
            out.append(ev2)
            continue
        lo, hi = D - timedelta(days=30), D + timedelta(days=30)
        hits = sorted({t for (t, dt) in pool if lo <= dt <= hi}, key=lambda s: s.lower())
        ev2["secondary_cancer_types_within_30d_of_progression"] = hits
        ev2["secondary_cancer_types_within_30d_of_progression_csv"] = ", ".join(hits)
        out.append(ev2)
    return out

# --- optional convenience: aggregate for a single assessment date 'y' -----

def aggregate_for_assessment_date_y(
    observations: List[Dict[str, Any]],
    y: str,  # e.g., "2024-04-13" or "4/13/2024"
) -> List[str]:
    """
    Given a specific disease-progression assessment date 'y', return the distinct
    secondary tumor types with diagnosis_date within ±30 days of y.
    """
    D = parse_date(y)
    if not D:
        return []
    lo, hi = D - timedelta(days=30), D + timedelta(days=30)
    pool = collect_secondary_pool(observations)
    return sorted({t for (t, dt) in pool if lo <= dt <= hi}, key=lambda s: s.lower())
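# `observations` is not defined in these notes; a hypothetical two-event sample
# so the example call below runs end-to-end:
observations = [
    {"segment_id": "ev1",
     "date_of_disease_progression_assessment": "2024-04-13",
     "secondary_cancer_type": "liver metastasis",
     "date_of_secondary_cancer_diagnosis": "2024-04-20"},
    {"segment_id": "ev2",
     "date_of_disease_progression_assessment": "",
     "secondary_cancer_type": "bone metastasis",
     "date_of_secondary_cancer_diagnosis": "2024-06-01"},
]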
| print("For assessment date 2024-04-13:", | |
| aggregate_for_assessment_date_y(observations, "2024-04-13")) | |