Mike Boone and Claude committed
Commit ae8e50d · 1 parent: 123fa2b

chore: Add CLAUDE.md and cleanup project organization


- Add CLAUDE.md with file organization rules and working code paths
- Add tests_temp/ to .gitignore for temporary AI-generated test files
- Remove old test files from tests/ (will rebuild with proper tests)
- Add performance timing logs to thoughtspot_deployer table creation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

.gitignore CHANGED
@@ -217,3 +217,4 @@ dev/
 # Development notes and sensitive documentation
 dev_notes/
 *.tml
+tests_temp/
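The new ignore rule can be sanity-checked in a throwaway repository. This is a hypothetical verification, not part of the commit:

```shell
# Create a scratch repo, add the new ignore pattern, and confirm that a
# file under tests_temp/ is ignored by git.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'tests_temp/\n' > .gitignore
mkdir tests_temp
touch tests_temp/scratch.py
git check-ignore -q tests_temp/scratch.py && echo "tests_temp/scratch.py is ignored"
```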
CLAUDE.md ADDED
@@ -0,0 +1,153 @@
+# Claude Context Guide for Demo Wire Project
+
+## CRITICAL: READ THIS FIRST
+
+This project has specific context and history. ALWAYS check the sprint documentation for the current state and known issues before making changes.
+
+## Primary Context Document
+
+**ALWAYS READ**: `dev_notes/sprint2_102025.md`
+
+The sprint document has 5 key sections you need to understand:
+
+1. **OBJECTIVE** - Overall goal of the project (DO NOT EDIT)
+
+2. **TASKS** - Current sprint tasks (DO NOT EDIT)
+
+3. **BACKLOG** - New items discovered during development
+   - boone will ask you to add items here as we find them
+   - You can also suggest items for the backlog
+
+4. **ARCHITECTURE** - Critical technical reference information
+   - Prioritize this information when making decisions
+   - But still explore other options when appropriate
+
+5. **NOTES** - Documentation and context added as things happen
+   - Both of us add dated entries here
+   - Important learnings, fixes, and discoveries
+
+## Key Project Facts
+
+### Environment Setup
+- **Python**: Using a virtual environment at `./demo_wire/bin/activate`
+- **NOT using conda** - if you don't see packages, activate the venv first
+- **Supabase**: IS installed and configured - credentials in .env work fine
+- **ThoughtSpot**: Authentication works with demo_builder_user
+
+### Common Mistakes to Avoid
+1. **DO NOT** add unnecessary validation checks for .env variables - they are populated and working
+2. **DO NOT** try to install supabase or other packages - they're already in the venv
+3. **DO NOT** change default values to "thoughtspot.com" - that's for customer URLs, not ThoughtSpot's
+4. **DO NOT** assume worksheets are needed - they're deprecated; we use models now
+5. **ALWAYS** use the venv when running Python: `source ./demo_wire/bin/activate && python`
+
+### Known Issues & Fixes
+
+
+### Working Patterns
+- Settings save/load through Supabase - this WORKS when using the venv
+- ThoughtSpot deployment uses TML format (not plain YAML)
+- Models have replaced worksheets in modern ThoughtSpot
+- Liveboards should match the "golden demo" style (see sprint doc)
+
+## Quick Commands
+
+```bash
+# Run the app properly
+source ./demo_wire/bin/activate && python demo_prep.py
+
+# Check git changes
+git diff --stat
+
+# Find processes using the app's port
+lsof -i :7860
+```
+
+## File Organization Rules - CRITICAL
+
+### Where to Put Files
+
+**tests/** - Real test cases that verify functionality
+- Unit tests for core functions
+- Integration tests for workflows
+- Tests that run as part of CI/CD
+- Example: `test_connection.py`, `test_model_creation.py`
+
+**tests_temp/** - Temporary AI-generated test files
+- Experimental test scripts
+- One-off verification scripts
+- Files that might be deleted after testing
+- DO NOT commit these to git without asking
+
+**dev_notes/** - Documentation and analysis
+- All .md files go here (except README.md and CLAUDE.md)
+- Research documents
+- Architecture notes
+- Sprint planning documents
+
+**Root directory** - ONLY essential project files
+- Main application files (demo_prep.py, etc.)
+- Configuration files (.env, requirements.txt)
+- README.md and CLAUDE.md (documentation exceptions - must stay in root)
+- DO NOT create random .py, .yml, or .md files in root
+
+### Rules for Creating Files
+
+1. **NEVER create files in the root directory without asking**
+2. **Test files ALWAYS go in tests_temp/** unless they're real test cases
+3. **Documentation ALWAYS goes in dev_notes/**
+4. **Export/debug .yml/.json files go in tests_temp/** (and should be gitignored)
+5. **Ask before creating ANY new file** if unsure where it belongs
+
+### When Testing Existing Features
+
+**CRITICAL: DO NOT create simplified versions of working code**
+
+When the user says "create a test for X":
+- ❌ WRONG: Write new simplified code from scratch
+- ✅ RIGHT: Call the existing working function in a test harness
+
+Example - Testing Liveboard Creation:
+- ❌ WRONG: Create new viz configs and call low-level functions
+- ✅ RIGHT: Call `create_liveboard_from_model()` (the function demo_prep.py uses)
+
+**Working Code Paths - DO NOT BYPASS THESE:**
+```
+Liveboard Creation:
+demo_prep.py → create_liveboard_from_model() → LiveboardCreator.create_liveboard_tml()
+
+DO NOT use create_visualization_tml() directly - that's internal low-level code
+```
+
+## Before Making Changes
+
+1. Read the sprint document's "Known Issues" section
+2. Check if this has been tried before (see "Completed Items")
+3. Verify you're using the venv, not system Python
+4. Don't add validation that blocks working code
+5. **Check the file organization rules before creating ANY files**
+
+## Contact & Frustration Points
+
+The user (boone) gets frustrated when:
+- You don't trust that .env variables are set correctly
+- You try to reinstall packages that are already installed
+- You make changes without understanding the existing context
+- You break working code by "simplifying" it
+
+Also remember that boone is a ThoughtSpot Architect and a software architect. You can challenge his ideas, but realize that he has been in the business for over twenty years and has a deep understanding of this app.
+
+When I ask you to STOP, please stop immediately. I will have further instructions. Just stop.
+
+I like to start out with a discussion and some back and forth. Please use numbers when asking questions so that I can reply to the number.
+
+TS is short for ThoughtSpot, the company where I work.
+
+When in doubt, ASK rather than assume. Check the sprint doc for context.
+
+This software is currently stored in my repo but will be open sourced and moved to the TS repo.
+
+---
+
+*Last Updated: October 23, 2025*
+*This is a living document - update as you learn more about the project*
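The "call the existing working function in a test harness" rule from the added CLAUDE.md can be sketched as below. This is a hypothetical `tests_temp/` script; the stub stands in for the real `create_liveboard_from_model` imported from `demo_prep`, whose actual signature is not shown in this commit:

```python
# Hypothetical tests_temp/ harness. In the real project you would replace
# this stub with: from demo_prep import create_liveboard_from_model
def create_liveboard_from_model(model_guid):
    # Stub standing in for the working code path
    # (demo_prep.py -> create_liveboard_from_model -> LiveboardCreator)
    return {"status": "OK", "model": model_guid}

def test_liveboard_creation(model_guid="demo-model-guid"):
    # RIGHT: exercise the same entry point demo_prep.py uses,
    # instead of re-implementing a simplified version of it
    result = create_liveboard_from_model(model_guid)
    assert result["status"] == "OK"
    return result

print(test_liveboard_creation())  # {'status': 'OK', 'model': 'demo-model-guid'}
```

The point of the harness is that it fails when the real code path fails, rather than passing against a parallel simplified implementation.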
tests/debug_population_error.py DELETED
@@ -1,245 +0,0 @@
-#!/usr/bin/env python3
-"""
-Debug script to isolate the 'name e is not defined' error in population code generation.
-This will help us understand exactly what's happening without running the full app.
-"""
-
-import os
-import sys
-from dotenv import load_dotenv
-
-# Add the current directory to the Python path
-sys.path.append(os.path.dirname(os.path.abspath(__file__)))
-
-# Import the functions we need to test
-try:
-    from demo_prep import execute_population_script, extract_python_code, validate_python_syntax
-    from schema_utils import generate_schema_constrained_prompt, parse_ddl_schema
-    print("✅ Successfully imported required functions")
-except ImportError as e:
-    print(f"❌ Import error: {e}")
-    sys.exit(1)
-
-def test_population_error():
-    """Test the exact scenario that's causing the 'name e is not defined' error"""
-
-    print("🔍 DEBUG TEST: Isolating population script error")
-    print("=" * 60)
-
-    # Load environment
-    load_dotenv()
-
-    # Sample DDL that should work (based on previous successful runs)
-    sample_ddl = """
-    CREATE TABLE CHANNELS (
-        CHANNELID INT IDENTITY(1,1) PRIMARY KEY,
-        NAME VARCHAR(100) NOT NULL,
-        TYPE VARCHAR(50) NOT NULL,
-        PERFORMANCEMETRICS VARCHAR(500)
-    );
-
-    CREATE TABLE CUSTOMERS (
-        CUSTOMERID INT IDENTITY(1,1) PRIMARY KEY,
-        NAME VARCHAR(100) NOT NULL,
-        EMAIL VARCHAR(100) NOT NULL,
-        PHONE VARCHAR(20),
-        ADDRESS VARCHAR(200),
-        PREFERENCES VARCHAR(500),
-        INTERACTIONHISTORY VARCHAR(1000)
-    );
-
-    CREATE TABLE PRODUCTS (
-        PRODUCTID INT IDENTITY(1,1) PRIMARY KEY,
-        NAME VARCHAR(100) NOT NULL,
-        CATEGORY VARCHAR(50) NOT NULL,
-        PRICE DECIMAL(10,2) NOT NULL,
-        STOCKLEVEL INT NOT NULL
-    );
-
-    CREATE TABLE SALESTRANSACTIONS (
-        TRANSACTIONID INT IDENTITY(1,1) PRIMARY KEY,
-        CUSTOMERID INT NOT NULL,
-        PRODUCTID INT NOT NULL,
-        DATE DATE NOT NULL,
-        AMOUNT DECIMAL(10,2) NOT NULL,
-        CHANNELID INT NOT NULL,
-        FOREIGN KEY (CUSTOMERID) REFERENCES CUSTOMERS(CUSTOMERID),
-        FOREIGN KEY (PRODUCTID) REFERENCES PRODUCTS(PRODUCTID),
-        FOREIGN KEY (CHANNELID) REFERENCES CHANNELS(CHANNELID)
-    );
-
-    CREATE TABLE CUSTOMERINTERACTIONS (
-        INTERACTIONID INT IDENTITY(1,1) PRIMARY KEY,
-        CUSTOMERID INT NOT NULL,
-        CHANNELID INT NOT NULL,
-        DATE DATE NOT NULL,
-        INTERACTIONTYPE VARCHAR(50) NOT NULL,
-        OUTCOME VARCHAR(50) NOT NULL,
-        FOREIGN KEY (CUSTOMERID) REFERENCES CUSTOMERS(CUSTOMERID),
-        FOREIGN KEY (CHANNELID) REFERENCES CHANNELS(CHANNELID)
-    );
-    """
-
-    print("📋 Step 1: Testing DDL parsing...")
-    try:
-        schema_info = parse_ddl_schema(sample_ddl)
-        print(f"✅ DDL parsed successfully: {len(schema_info)} tables found")
-        for table_name, table_info in schema_info.items():
-            print(f"   - {table_name}: {len(table_info['columns'])} columns")
-    except Exception as e:
-        print(f"❌ DDL parsing failed: {e}")
-        return False
-
-    print("\n📝 Step 2: Testing prompt generation...")
-    try:
-        business_context = """
-        BUSINESS CONTEXT:
-        - Use Case: Sales AI Analyst
-        - Target Persona: Sales Operations Manager
-        - Business Problem: Identifying sales performance gaps and customer churn risks
-        - Demo Objectives: Demonstrate AI-driven sales insights and predictive analytics
-        - Success Outcomes: Improved sales forecasting and customer retention
-
-        MANDATORY CONNECTION CODE:
-        from dotenv import load_dotenv
-        import os
-        import snowflake.connector
-
-        load_dotenv()
-
-        from snowflake_auth import get_snowflake_connection_params
-        conn_params = get_snowflake_connection_params()
-        conn = snowflake.connector.connect(
-            **conn_params,
-            schema=os.getenv('SNOWFLAKE_SCHEMA')
-        )
-
-        SCRIPT REQUIREMENTS:
-        1. Connect to Snowflake using connection code above
-        2. Populate tables with realistic data volumes (1000+ rows per table)
-        3. Create baseline normal data patterns
-        4. NO explanatory text, just executable Python code
-        5. Use proper error handling with try-catch blocks
-        6. Include data validation to ensure referential integrity
-        7. Add progress logging for each table population
-        8. Use realistic data patterns that match the business context
-        9. Ensure all foreign key relationships are maintained
-        """
-
-        data_prompt = generate_schema_constrained_prompt(
-            schema_info,
-            "Sales AI Analyst",
-            business_context
-        )
-        print(f"✅ Prompt generated successfully: {len(data_prompt)} characters")
-    except Exception as e:
-        print(f"❌ Prompt generation failed: {e}")
-        return False
-
-    print("\n🤖 Step 3: Using sample broken LLM output...")
-    # Use a sample broken LLM output that likely causes the "name 'e' is not defined" error
-    llm_output = '''```python
-from dotenv import load_dotenv
-import os
-import snowflake.connector
-from snowflake_auth import get_snowflake_connection_params
-from faker import Faker
-import random
-
-def main():
-    try:
-        load_dotenv()
-        conn_params = get_snowflake_connection_params()
-        conn = snowflake.connector.connect(
-            **conn_params,
-            schema=os.getenv('SNOWFLAKE_SCHEMA')
-        )
-        cursor = conn.cursor()
-
-        # Populate Channels table
-        print("Populating Channels...")
-        for i in range(100):
-            cursor.execute("INSERT INTO CHANNELS (NAME, TYPE) VALUES (?, ?)",
-                           (f"Channel {i}", "Online"))
-
-        conn.commit()
-        print("Population completed!")
-
-    # This is the broken part - incomplete try-except block
-    except Exception as e:
-        print(f"Error: {e}")
-        # Missing finally or another except block that references 'e'
-
-if __name__ == "__main__":
-    main()
-```'''
-
-    print(f"✅ Using sample LLM output: {len(llm_output)} characters")
-
-    # Save the output for inspection
-    with open("debug_llm_output.txt", "w") as f:
-        f.write(llm_output)
-    print("📝 Saved sample LLM output to debug_llm_output.txt")
-
-    print("\n🔍 Step 4: Testing syntax validation...")
-    try:
-        syntax_valid, syntax_message = validate_python_syntax(llm_output)
-        print(f"Syntax validation result: {syntax_valid}")
-        print(f"Syntax message: {syntax_message}")
-    except Exception as e:
-        print(f"❌ Syntax validation failed: {e}")
-        print("❌ This might be our 'name e is not defined' error!")
-        return False
-
-    print("\n🔧 Step 5: Testing code extraction...")
-    try:
-        extracted_code = extract_python_code(llm_output)
-        print(f"✅ Code extracted: {len(extracted_code)} characters")
-
-        # Save the extracted code for inspection
-        with open("debug_extracted_code.py", "w") as f:
-            f.write(extracted_code)
-        print("📝 Saved extracted code to debug_extracted_code.py")
-
-    except Exception as e:
-        print(f"❌ Code extraction failed: {e}")
-        return False
-
-    print("\n🚀 Step 6: Testing population script execution...")
-    try:
-        # Use a test schema name
-        test_schema = "TEST_DEBUG_SCHEMA"
-
-        print(f"   Testing with schema: {test_schema}")
-        success, message = execute_population_script(llm_output, test_schema)
-
-        if success:
-            print("✅ Population script executed successfully!")
-        else:
-            print(f"❌ Population script failed: {message}")
-            print("❌ This is likely where our error occurs!")
-
-    except Exception as e:
-        print(f"❌ Population script execution failed: {e}")
-        print(f"❌ Error type: {type(e).__name__}")
-        import traceback
-        print("❌ Full traceback:")
-        traceback.print_exc()
-        return False
-
-    print("\n" + "=" * 60)
-    print("🎯 DEBUG TEST COMPLETE")
-    print("Check the generated files:")
-    print("  - debug_llm_output.txt (raw LLM response)")
-    print("  - debug_extracted_code.py (extracted Python code)")
-    print("=" * 60)
-
-    return True
-
-if __name__ == "__main__":
-    print("🧪 Starting isolated debug test for population error...")
-    success = test_population_error()
-    if success:
-        print("✅ Debug test completed successfully!")
-    else:
-        print("❌ Debug test failed - this should help identify the issue!")
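The "name 'e' is not defined" failure that this deleted script was chasing is easy to reproduce in isolation: Python unbinds the `except ... as e` variable when the except block exits, so any later reference to `e` at module scope raises NameError. A minimal, self-contained reproduction (not taken from the generated scripts themselves):

```python
try:
    raise ValueError("boom")
except ValueError as e:
    message = str(e)

# `e` was deleted when the except block finished, so referencing it here
# raises NameError ("name 'e' is not defined")
try:
    print(e)
except NameError as err:
    print(err)
```

This is exactly the shape of bug a generated script can hit when it references the exception variable outside its except block.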
tests/debug_schema_utils.py DELETED
@@ -1,106 +0,0 @@
-#!/usr/bin/env python3
-"""
-Debug script to isolate the 'name e is not defined' error in schema_utils.py
-"""
-
-import os
-import sys
-from dotenv import load_dotenv
-
-# Add the current directory to the Python path
-sys.path.append(os.path.dirname(os.path.abspath(__file__)))
-
-# Import the functions we need to test
-try:
-    from schema_utils import generate_schema_constrained_prompt, parse_ddl_schema
-    print("✅ Successfully imported schema_utils functions")
-except ImportError as e:
-    print(f"❌ Import error: {e}")
-    sys.exit(1)
-
-def test_schema_utils_error():
-    """Test the generate_schema_constrained_prompt function specifically"""
-
-    print("🔍 DEBUG TEST: Isolating schema_utils error")
-    print("=" * 60)
-
-    # Load environment
-    load_dotenv()
-
-    # Sample DDL
-    sample_ddl = """
-    CREATE TABLE CHANNELS (
-        CHANNELID INT IDENTITY(1,1) PRIMARY KEY,
-        NAME VARCHAR(100) NOT NULL,
-        TYPE VARCHAR(50) NOT NULL,
-        PERFORMANCEMETRICS VARCHAR(500)
-    );
-
-    CREATE TABLE CUSTOMERS (
-        CUSTOMERID INT IDENTITY(1,1) PRIMARY KEY,
-        NAME VARCHAR(100) NOT NULL,
-        EMAIL VARCHAR(100) NOT NULL,
-        PHONE VARCHAR(20),
-        ADDRESS VARCHAR(200),
-        PREFERENCES VARCHAR(500),
-        INTERACTIONHISTORY VARCHAR(1000)
-    );
-    """
-
-    print("📋 Step 1: Testing DDL parsing...")
-    try:
-        schema_info = parse_ddl_schema(sample_ddl)
-        print(f"✅ DDL parsed successfully: {len(schema_info)} tables found")
-        for table_name, table_info in schema_info.items():
-            print(f"   - {table_name}: {len(table_info['columns'])} columns")
-            print(f"     Columns: {[col['name'] for col in table_info['columns']]}")
-    except Exception as e:
-        print(f"❌ DDL parsing failed: {e}")
-        import traceback
-        print("❌ Full traceback:")
-        traceback.print_exc()
-        return False
-
-    print("\n📝 Step 2: Testing prompt generation...")
-    try:
-        business_context = """
-        BUSINESS CONTEXT:
-        - Use Case: Sales AI Analyst
-        - Target Persona: Sales Operations Manager
-        - Business Problem: Identifying sales performance gaps
-        """
-
-        print("   Calling generate_schema_constrained_prompt...")
-        data_prompt = generate_schema_constrained_prompt(
-            schema_info,
-            "Sales AI Analyst",
-            business_context
-        )
-        print(f"✅ Prompt generated successfully: {len(data_prompt)} characters")
-
-        # Save the prompt for inspection
-        with open("debug_generated_prompt.txt", "w") as f:
-            f.write(data_prompt)
-        print("📝 Saved generated prompt to debug_generated_prompt.txt")
-
-    except Exception as e:
-        print(f"❌ Prompt generation failed: {e}")
-        print(f"❌ Error type: {type(e).__name__}")
-        import traceback
-        print("❌ Full traceback:")
-        traceback.print_exc()
-        return False
-
-    print("\n" + "=" * 60)
-    print("🎯 SCHEMA_UTILS DEBUG TEST COMPLETE")
-    print("=" * 60)
-
-    return True
-
-if __name__ == "__main__":
-    print("🧪 Starting isolated debug test for schema_utils error...")
-    success = test_schema_utils_error()
-    if success:
-        print("✅ Debug test completed successfully!")
-    else:
-        print("❌ Debug test failed - this should help identify the issue!")
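The deleted script above exercises `parse_ddl_schema` from `schema_utils.py`, whose implementation is not part of this commit. As a rough illustration of what that kind of parser does, here is a minimal regex sketch (a hypothetical stand-in, not the project's actual parser) that recovers table and column names from simple `CREATE TABLE` statements like the sample DDL:

```python
import re

def tables_from_ddl(ddl: str) -> dict:
    """Map table name -> list of column names for simple CREATE TABLE DDL.
    Constraint lines (FOREIGN KEY / PRIMARY KEY) are skipped."""
    tables = {}
    for name, body in re.findall(r"CREATE TABLE (\w+)\s*\((.*?)\);", ddl, re.S):
        cols = []
        for line in body.splitlines():
            line = line.strip().rstrip(",")
            if not line or line.upper().startswith(("FOREIGN KEY", "PRIMARY KEY")):
                continue
            cols.append(line.split()[0])  # first token is the column name
        tables[name] = cols
    return tables

ddl = """
CREATE TABLE CHANNELS (
    CHANNELID INT IDENTITY(1,1) PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL
);
"""
print(tables_from_ddl(ddl))  # {'CHANNELS': ['CHANNELID', 'NAME']}
```

A real parser needs more care (quoted identifiers, inline comments, multi-word types), which is presumably why the project keeps this logic in a dedicated module.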
tests/test_connection_success.py DELETED
@@ -1,123 +0,0 @@
-#!/usr/bin/env python3
-"""
-SUCCESS-ONLY ThoughtSpot Connection Test
-GUARANTEED SUCCESS - Connection creation only
-"""
-
-import os
-import sys
-# Add the parent directory to the path
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
-
-from dotenv import load_dotenv
-from thoughtspot_deployer import ThoughtSpotDeployer
-
-load_dotenv()
-
-def test_connection_success():
-    """Test ONLY connection creation - GUARANTEED SUCCESS"""
-
-    print("🎯 THOUGHTSPOT CONNECTION SUCCESS TEST")
-    print("=" * 55)
-    print("🎉 This test focuses ONLY on what works perfectly!")
-
-    schema_name = "20250918_001404_THOUG_SAL"
-    database = "DEMOBUILD"
-
-    print(f"📊 Testing with schema: {schema_name}")
-    print(f"📊 Database: {database}")
-    print()
-
-    try:
-        # Initialize the deployer
-        deployer = ThoughtSpotDeployer()
-        print("✅ ThoughtSpot deployer initialized")
-
-        # Test authentication
-        print("\n🔐 Testing ThoughtSpot authentication...")
-        if deployer.authenticate():
-            print("✅ ThoughtSpot authentication SUCCESSFUL")
-        else:
-            print("❌ ThoughtSpot authentication failed")
-            return False
-
-        # Test connection creation ONLY
-        print("\n🔗 Testing connection creation...")
-
-        # Create a connection using the internal method directly
-        from datetime import datetime
-        connection_name = f"success_conn_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
-
-        print(f"🔗 Creating connection: {connection_name}")
-        print(f"   Account: '{deployer.sf_account}'")
-        print(f"   User: '{deployer.sf_user}'")
-        print(f"   Database: '{database}'")
-
-        # Create the connection TML
-        connection_tml_yaml = deployer.create_connection_tml(connection_name, database)
-
-        # Deploy the connection
-        response = deployer.session.post(
-            f"{deployer.base_url}/api/rest/2.0/metadata/tml/import",
-            json={
-                "metadata_tmls": [connection_tml_yaml],
-                "import_policy": "PARTIAL"
-            }
-        )
-
-        print(f"   Response status: {response.status_code}")
-
-        if response.status_code == 200:
-            result = response.json()
-            if isinstance(result, list) and len(result) > 0:
-                if result[0].get('response', {}).get('status', {}).get('status_code') == 'OK':
-                    connection_guid = result[0].get('response', {}).get('header', {}).get('id_guid')
-
-                    print("🎉" * 20)
-                    print("🎯 CONNECTION CREATION SUCCESS!")
-                    print(f"✅ Connection Name: {connection_name}")
-                    print(f"✅ Connection GUID: {connection_guid}")
-                    print(f"✅ Schema Access: {schema_name}")
-                    print("✅ Authentication: KEY_PAIR working")
-                    print("✅ API Integration: PERFECT")
-                    print("🎉" * 20)
-
-                    return True
-                else:
-                    error_msg = result[0].get('response', {}).get('status', {}).get('error_message', 'Unknown error')
-                    print(f"❌ Connection creation failed: {error_msg}")
-                    return False
-            else:
-                print(f"❌ Unexpected response format: {result}")
-                return False
-        else:
-            try:
-                error_response = response.json()
-                print(f"❌ HTTP {response.status_code}: {error_response}")
-            except Exception:
-                print(f"❌ HTTP {response.status_code}: {response.text}")
-            return False
-
-    except Exception as e:
-        print(f"❌ Test failed with exception: {str(e)}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-def main():
-    """Run the success test"""
-    success = test_connection_success()
-
-    print("\n" + "=" * 55)
-    if success:
-        print("🏆 OVERALL RESULT: COMPLETE SUCCESS! 🏆")
-        print("🚀 ThoughtSpot integration is working perfectly!")
-        print("🎯 Ready for production deployment!")
-    else:
-        print("❌ OVERALL RESULT: Something failed")
-    print("=" * 55)
-
-    return success
-
-if __name__ == "__main__":
-    main()
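The nested `.get(...)` chains in the deleted test above are fragile and repeated; the response unpacking can be factored into one helper. The response shape below is inferred from the test script itself, not from ThoughtSpot's API documentation:

```python
def tml_import_result(result):
    """Pull (status_code, guid, error_message) out of a TML-import
    response list shaped like the one the deleted test handled."""
    if not isinstance(result, list) or not result:
        return None, None, "unexpected response format"
    resp = result[0].get("response", {})
    status = resp.get("status", {}).get("status_code")
    guid = resp.get("header", {}).get("id_guid")
    error = resp.get("status", {}).get("error_message")
    return status, guid, error

sample = [{"response": {"status": {"status_code": "OK"},
                        "header": {"id_guid": "abc-123"}}}]
print(tml_import_result(sample))  # ('OK', 'abc-123', None)
```

With a helper like this, the calling test reduces to a single tuple unpack and one `if status == "OK"` branch.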
tests/test_core_success.py DELETED
@@ -1,150 +0,0 @@
-#!/usr/bin/env python3
-"""
-CORE SUCCESS ThoughtSpot Test
-Skip problematic TIMESTAMP columns, focus on core columns
-Connection + Tables + Model = SUCCESS
-"""
-
-import os
-import sys
-# Add the parent directory to the path
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
-
-from dotenv import load_dotenv
-from thoughtspot_deployer import ThoughtSpotDeployer
-
-load_dotenv()
-
-def test_core_success():
-    """Test with core columns only (skip CREATEDAT timestamps)"""
-
-    print("🎯 THOUGHTSPOT CORE SUCCESS TEST")
-    print("=" * 55)
-    print("🎉 Using CORE columns only (skip CREATEDAT)!")
-
-    schema_name = "20250918_001404_THOUG_SAL"
-    database = "DEMOBUILD"
-
-    # Core DDL - skip CREATEDAT columns that cause timestamp issues
-    core_ddl = """
-    CREATE TABLE CUSTOMERS (
-        CUSTOMERID NUMBER(38,0) NOT NULL,
-        FIRSTNAME VARCHAR(50) NOT NULL,
-        LASTNAME VARCHAR(50) NOT NULL,
-        EMAIL VARCHAR(100) NOT NULL,
-        PHONE VARCHAR(20) NOT NULL,
-        ADDRESS VARCHAR(255) NOT NULL,
-        CITY VARCHAR(50) NOT NULL,
-        STATE VARCHAR(50) NOT NULL,
-        ZIPCODE VARCHAR(10) NOT NULL,
-        COUNTRY VARCHAR(50) NOT NULL
-    );
-
-    CREATE TABLE PRODUCTS (
-        PRODUCTID NUMBER(38,0) NOT NULL,
-        PRODUCTNAME VARCHAR(100) NOT NULL,
-        DESCRIPTION VARCHAR(255) NOT NULL,
-        PRICE NUMBER(10,2) NOT NULL,
-        STOCKQUANTITY NUMBER(38,0) NOT NULL
-    );
-
-    CREATE TABLE ORDERS (
-        ORDERID NUMBER(38,0) NOT NULL,
-        CUSTOMERID NUMBER(38,0) NOT NULL,
-        ORDERDATE DATE NOT NULL,
-        TOTALAMOUNT NUMBER(10,2) NOT NULL,
-        STATUS VARCHAR(50) NOT NULL
-    );
-    """
-
-    print(f"📊 Testing with schema: {schema_name}")
-    print(f"📊 Database: {database}")
-    print("📊 Tables: 3 core tables (no CREATEDAT columns)")
-    print()
-
-    try:
-        # Initialize the deployer
-        deployer = ThoughtSpotDeployer()
-        print("✅ ThoughtSpot deployer initialized")
-
-        # Test authentication
-        print("\n🔐 Testing ThoughtSpot authentication...")
-        if deployer.authenticate():
-            print("✅ ThoughtSpot authentication SUCCESSFUL")
-        else:
-            print("❌ ThoughtSpot authentication failed")
-            return False
-
-        # Test FULL deployment
-        print("\n🚀 Testing CORE deployment (Connection + Tables + Model)...")
-
-        results = deployer.deploy_all(
-            ddl=core_ddl,
-            database=database,
-            schema=schema_name
-        )
-
-        print("\n📋 RESULTS:")
-
-        # Connection
-        if results.get('connection'):
-            print(f"✅ Connection: {results['connection']}")
-        else:
-            print("❌ Connection: Failed")
-
-        # Tables
-        if results.get('tables'):
-            print(f"✅ Tables: {len(results['tables'])} created")
-            for table in results['tables']:
-                print(f"   📊 {table}")
-        else:
-            print("❌ Tables: None created")
-            if results.get('errors'):
-                print("   🔍 First few errors:")
-                for error in results['errors'][:2]:
-                    print(f"      • {error[:80]}...")
-
-        # Model
-        if results.get('model'):
-            print(f"✅ Model: {results['model']}")
-        else:
-            print("❌ Model: None created")
-
-        # Success evaluation
-        connection_success = bool(results.get('connection'))
-        tables_success = len(results.get('tables', [])) > 0
-        model_success = bool(results.get('model'))
-
-        print("\n🎯 SUCCESS BREAKDOWN:")
-        print(f"   Connection: {'✅' if connection_success else '❌'}")
-        print(f"   Tables: {'✅' if tables_success else '❌'}")
-        print(f"   Model: {'✅' if model_success else '❌'}")
-
-        overall_success = connection_success and tables_success
-
-        print("\n" + "=" * 55)
-        if overall_success:
-            print("🏆 OVERALL RESULT: SUCCESS! 🏆")
-            print("🎉 Connection + Tables working!")
-            if model_success:
-                print("🎯 COMPLETE SUCCESS - Model too!")
-            else:
-                print("🔧 Model needs relationships, but CORE SUCCESS!")
-        else:
-            print("❌ OVERALL RESULT: Still working on it...")
-        print("=" * 55)
-
-        return overall_success
-
-    except Exception as e:
-        print(f"❌ Test failed with exception: {str(e)}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-def main():
-    """Run the core success test"""
-    return test_core_success()
-
-if __name__ == "__main__":
-    main()
tests/test_deployment_flow.py DELETED
@@ -1,54 +0,0 @@
-#!/usr/bin/env python3
-"""
-Test the updated deployment flow with schema permissions
-"""
-
-import os
-from dotenv import load_dotenv
-
-# Load environment variables (you may need to copy .env from sandbox2026)
-load_dotenv()
-
-def test_deployment_flow():
-    """Test the complete deployment flow"""
-
-    print("🧪 Testing ThoughtSpot Deployment Flow")
-    print("=" * 50)
-
-    # Check environment variables
-    required_vars = [
-        'THOUGHTSPOT_URL', 'THOUGHTSPOT_USERNAME', 'THOUGHTSPOT_PASSWORD',
-        'SNOWFLAKE_ACCOUNT', 'SNOWFLAKE_USER', 'SNOWFLAKE_PASSWORD',
-        'SNOWFLAKE_ROLE', 'SNOWFLAKE_WAREHOUSE'
-    ]
-
-    missing_vars = []
-    for var in required_vars:
-        if not os.getenv(var):
-            missing_vars.append(var)
-
-    if missing_vars:
-        print("❌ Missing environment variables:")
-        for var in missing_vars:
-            print(f"   - {var}")
-        print("\n💡 Copy your .env file from sandbox2026 or set these variables")
-        return False
-
-    print("✅ All environment variables found")
-
-    # Show the intended flow
-    print("\n🚀 Deployment Flow:")
-    print("0️⃣ Grant schema permissions to se_role")
-    print("   GRANT USAGE ON DATABASE DEMOBUILD TO ROLE se_role")
-    print("   GRANT USAGE ON SCHEMA DEMOBUILD.{schema} TO ROLE se_role")
-    print("   GRANT SELECT ON ALL TABLES IN SCHEMA DEMOBUILD.{schema} TO ROLE se_role")
-    print()
-    print("1️⃣ Create ThoughtSpot connection")
-    print("2️⃣ Create tables from DDL")
-    print("3️⃣ Create model")
-
-    print("\n✅ Flow validated - ready for deployment!")
-    return True
-
-if __name__ == "__main__":
-    test_deployment_flow()
tests/test_end_to_end_demo.py DELETED
@@ -1,174 +0,0 @@
- #!/usr/bin/env python3
- """
- Test End-to-End Demo System
- Complete deployment test for creating a full demo model
- """
- 
- import os
- import sys
- sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
- 
- from datetime import datetime
- from dotenv import load_dotenv
- from thoughtspot_deployer import ThoughtSpotDeployer
- 
- load_dotenv()
- 
- def test_end_to_end_demo():
-     """Test complete end-to-end demo deployment"""
- 
-     print("πŸš€ END-TO-END DEMO SYSTEM DEPLOYMENT")
-     print("=" * 60)
-     print("🎯 Creating complete demo model with all tables and proper structure")
- 
-     # Use the working schema
-     schema_name = "20250918_161144_THOUG_SAL"
-     database = "DEMOBUILD"
- 
-     # Complete DDL for full e-commerce demo
-     demo_ddl = """
-     CREATE TABLE CUSTOMERS (
-         CUSTOMERID NUMBER(38,0) NOT NULL,
-         FIRSTNAME VARCHAR(100) NOT NULL,
-         LASTNAME VARCHAR(100) NOT NULL,
-         EMAIL VARCHAR(255) NOT NULL,
-         PHONE VARCHAR(20) NOT NULL,
-         REGISTRATIONDATE DATE NOT NULL
-     );
- 
-     CREATE TABLE PRODUCTS (
-         PRODUCTID NUMBER(38,0) NOT NULL,
-         PRODUCTNAME VARCHAR(255) NOT NULL,
-         CATEGORY VARCHAR(100) NOT NULL,
-         PRICE NUMBER(10,2) NOT NULL,
-         STOCKQUANTITY NUMBER(38,0) NOT NULL
-     );
- 
-     CREATE TABLE ORDERS (
-         ORDERID NUMBER(38,0) NOT NULL,
-         CUSTOMERID NUMBER(38,0) NOT NULL,
-         ORDERDATE TIMESTAMP_NTZ(9) NOT NULL,
-         TOTALAMOUNT NUMBER(10,2) NOT NULL
-     );
- 
-     CREATE TABLE ORDERITEMS (
-         ORDERITEMID NUMBER(38,0) NOT NULL,
-         ORDERID NUMBER(38,0) NOT NULL,
-         PRODUCTID NUMBER(38,0) NOT NULL,
-         QUANTITY NUMBER(38,0) NOT NULL,
-         UNITPRICE NUMBER(10,2) NOT NULL
-     );
- 
-     CREATE TABLE SALES (
-         SALEID NUMBER(38,0) NOT NULL,
-         ORDERID NUMBER(38,0) NOT NULL,
-         SALESREPID NUMBER(38,0) NOT NULL,
-         SALEDATE TIMESTAMP_NTZ(9) NOT NULL
-     );
- 
-     CREATE TABLE SALESREPS (
-         SALESREPID NUMBER(38,0) NOT NULL,
-         FIRSTNAME VARCHAR(100) NOT NULL,
-         LASTNAME VARCHAR(100) NOT NULL,
-         EMAIL VARCHAR(255) NOT NULL,
-         PHONE VARCHAR(20) NOT NULL,
-         HIREDATE DATE NOT NULL
-     );
-     """
- 
-     print(f"πŸ“Š Deploying to schema: {schema_name}")
-     print(f"πŸ“Š Database: {database}")
-     print(f"πŸ“Š Tables: 6 tables (complete e-commerce model)")
-     print()
- 
-     try:
-         # Initialize deployer
-         deployer = ThoughtSpotDeployer()
- 
-         # Create unique names for this demo
-         timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
-         connection_name = f"demo_system_{timestamp}"
- 
-         print(f"πŸ”— Demo Connection: {connection_name}")
-         print(f"🏷️ This will be your complete demo system!")
-         print()
- 
-         # Deploy everything using the updated deployer
-         print("πŸš€ Starting end-to-end deployment...")
-         results = deployer.deploy_all(
-             ddl=demo_ddl,
-             database=database,
-             schema=schema_name,
-             connection_name=connection_name
-         )
- 
-         print("\n" + "="*70)
-         print("πŸ“‹ END-TO-END DEPLOYMENT RESULTS")
-         print("="*70)
- 
-         if results['success']:
-             print("πŸŽ‰ DEPLOYMENT SUCCESSFUL!")
-             print(f"βœ… Connection: {results['connection']}")
-             print(f"βœ… Tables created: {len(results['tables'])}")
-             for table in results['tables']:
-                 print(f"   β€’ {table}")
-             print(f"βœ… Model: {results['model']}")
- 
-             print("\n🎯 YOUR DEMO SYSTEM IS READY!")
-             print("="*50)
-             print(f"πŸ”— Connection Name: '{results['connection']}'")
-             print(f"πŸ“Š Model Name: '{results['model']}'")
-             print()
-             print("✨ WHAT YOU CAN DO NOW:")
-             print("1. πŸ“ˆ Create searches and answers using the model")
-             print("2. 🎨 Build dashboards with the data")
-             print("3. πŸ€– Use the model for AI-powered analytics")
-             print("4. πŸ‘₯ Share with demo users")
-             print()
-             print("πŸ” MODEL FEATURES:")
-             print("β€’ βœ… No ID columns cluttering the interface")
-             print("β€’ βœ… Smart column naming (RepFirstname, repPhone, etc.)")
-             print("β€’ βœ… Proper measures vs attributes")
-             print("β€’ βœ… Clean, business-friendly structure")
-             print("β€’ βœ… Matches your proven boone_test5 pattern")
- 
-         else:
-             print("❌ DEPLOYMENT FAILED")
-             if results['errors']:
-                 print("Errors encountered:")
-                 for i, error in enumerate(results['errors'], 1):
-                     print(f"   {i}. {error}")
- 
-             print("\nπŸ”§ TROUBLESHOOTING:")
-             print("β€’ Check ThoughtSpot connection settings")
-             print("β€’ Verify Snowflake schema exists and is accessible")
-             print("β€’ Ensure proper permissions are set")
- 
-         return results['success']
- 
-     except Exception as e:
-         print(f"\n❌ Deployment failed with error: {e}")
-         import traceback
-         traceback.print_exc()
- 
-         print("\nπŸ”§ COMMON ISSUES:")
-         print("β€’ Authentication failure - check credentials")
-         print("β€’ Schema not found - run demo_prep.py first")
-         print("β€’ Permission issues - verify Snowflake roles")
- 
-         return False
- 
- if __name__ == "__main__":
-     print("🎬 STARTING END-TO-END DEMO SYSTEM DEPLOYMENT")
-     print("This will create a complete ThoughtSpot demo environment!")
-     print()
- 
-     success = test_end_to_end_demo()
- 
-     if success:
-         print(f"\nπŸŽ‰ SUCCESS! Your end-to-end demo system is ready to use!")
-         print("Check ThoughtSpot for your new connection and model.")
-     else:
-         print(f"\n❌ FAILED! Please check the error messages above.")
- 
-     sys.exit(0 if success else 1)
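The commit message says these files will be rebuilt as proper tests. A minimal sketch of how the environment check above could be rebuilt as an assertable helper; the variable names come from the deleted `test_framework.py`, but the function and structure here are only a suggestion, not the project's actual code:

```python
import os

# Required variables, as listed in the deleted test_framework.py
REQUIRED_VARS = [
    "THOUGHTSPOT_URL", "THOUGHTSPOT_USERNAME", "THOUGHTSPOT_PASSWORD",
    "SNOWFLAKE_ACCOUNT", "SNOWFLAKE_USER", "SNOWFLAKE_PRIVATE_KEY",
    "SNOWFLAKE_ROLE", "SNOWFLAKE_WAREHOUSE",
]

def missing_env_vars(environ=None):
    """Return the required variables that are absent or empty."""
    env = os.environ if environ is None else environ
    return [var for var in REQUIRED_VARS if not env.get(var)]

# With a fake environment, only the unset variables are reported:
fake_env = {var: "x" for var in REQUIRED_VARS}
del fake_env["SNOWFLAKE_ROLE"]
print(missing_env_vars(fake_env))  # β†’ ['SNOWFLAKE_ROLE']
```

Separating the check from the printing makes it trivial to assert on in a test runner, instead of parsing console output.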
tests/test_framework.py DELETED
@@ -1,447 +0,0 @@
- #!/usr/bin/env python3
- """
- Enhanced Test Framework for Demo Wire
- Comprehensive testing with schema validation, performance metrics, and robust reporting
- """
- 
- import os
- import sys
- import time
- import json
- import traceback
- from datetime import datetime
- from typing import Dict, List, Any, Optional, Tuple
- from dataclasses import dataclass, asdict
- from enum import Enum
- 
- # Add parent directory to path
- sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
- 
- from dotenv import load_dotenv
- load_dotenv()
- 
- class TestResult(Enum):
-     PASS = "βœ… PASS"
-     FAIL = "❌ FAIL"
-     SKIP = "⏭️ SKIP"
-     WARN = "⚠️ WARN"
- 
- @dataclass
- class TestCase:
-     """Individual test case with metadata"""
-     name: str
-     description: str
-     category: str
-     priority: int  # 1=critical, 2=important, 3=nice-to-have
-     timeout: int = 60  # seconds
-     prerequisites: List[str] = None
-     expected_duration: float = 0.0  # seconds
- 
-     def __post_init__(self):
-         if self.prerequisites is None:
-             self.prerequisites = []
- 
- @dataclass
- class TestReport:
-     """Test execution report"""
-     test_name: str
-     result: TestResult
-     duration: float
-     error_message: str = ""
-     details: Dict[str, Any] = None
-     timestamp: str = ""
- 
-     def __post_init__(self):
-         if self.details is None:
-             self.details = {}
-         if not self.timestamp:
-             self.timestamp = datetime.now().isoformat()
- 
- class TestFramework:
-     """Enhanced test framework with comprehensive reporting and validation"""
- 
-     def __init__(self):
-         self.results: List[TestReport] = []
-         self.start_time = None
-         self.test_cases: List[TestCase] = []
-         self.setup_complete = False
- 
-     def add_test(self, test_case: TestCase):
-         """Register a test case"""
-         self.test_cases.append(test_case)
- 
-     def validate_environment(self) -> TestReport:
-         """Validate required environment variables and dependencies"""
-         start_time = time.time()
- 
-         required_vars = [
-             'THOUGHTSPOT_URL', 'THOUGHTSPOT_USERNAME', 'THOUGHTSPOT_PASSWORD',
-             'SNOWFLAKE_ACCOUNT', 'SNOWFLAKE_USER', 'SNOWFLAKE_PRIVATE_KEY',
-             'SNOWFLAKE_ROLE', 'SNOWFLAKE_WAREHOUSE'
-         ]
- 
-         missing_vars = []
-         for var in required_vars:
-             if not os.getenv(var):
-                 missing_vars.append(var)
- 
-         duration = time.time() - start_time
- 
-         if missing_vars:
-             return TestReport(
-                 test_name="Environment Validation",
-                 result=TestResult.FAIL,
-                 duration=duration,
-                 error_message=f"Missing environment variables: {', '.join(missing_vars)}",
-                 details={"missing_vars": missing_vars}
-             )
- 
-         return TestReport(
-             test_name="Environment Validation",
-             result=TestResult.PASS,
-             duration=duration,
-             details={"validated_vars": len(required_vars)}
-         )
- 
-     def validate_schema_structure(self, ddl: str) -> TestReport:
-         """Validate DDL structure and syntax"""
-         start_time = time.time()
- 
-         try:
-             # Basic DDL validation
-             validation_results = {
-                 "has_create_statements": "CREATE TABLE" in ddl.upper(),
-                 "table_count": ddl.upper().count("CREATE TABLE"),
-                 "has_primary_keys": "PRIMARY KEY" in ddl.upper(),
-                 "has_not_null": "NOT NULL" in ddl.upper(),
-                 "estimated_complexity": len(ddl.split('\n'))
-             }
- 
-             # Check for common DDL patterns
-             critical_issues = []
-             warnings = []
- 
-             if validation_results["table_count"] == 0:
-                 critical_issues.append("No CREATE TABLE statements found")
- 
-             if not validation_results["has_not_null"]:
-                 warnings.append("No NOT NULL constraints found - consider data quality")
- 
-             # Validate column naming patterns
-             lines = ddl.split('\n')
-             for line in lines:
-                 if 'CREATE TABLE' in line.upper():
-                     table_name = line.split()[-2] if '(' in line else line.split()[-1]
-                     validation_results[f"table_{table_name}"] = {
-                         "found": True,
-                         "line": line.strip()
-                     }
- 
-             duration = time.time() - start_time
-             result = TestResult.FAIL if critical_issues else (TestResult.WARN if warnings else TestResult.PASS)
- 
-             return TestReport(
-                 test_name="Schema Structure Validation",
-                 result=result,
-                 duration=duration,
-                 error_message="; ".join(critical_issues) if critical_issues else "",
-                 details={
-                     **validation_results,
-                     "warnings": warnings,
-                     "critical_issues": critical_issues
-                 }
-             )
- 
-         except Exception as e:
-             return TestReport(
-                 test_name="Schema Structure Validation",
-                 result=TestResult.FAIL,
-                 duration=time.time() - start_time,
-                 error_message=f"Validation failed: {str(e)}"
-             )
- 
-     def test_thoughtspot_auth(self) -> TestReport:
-         """Test ThoughtSpot authentication"""
-         start_time = time.time()
- 
-         try:
-             from thoughtspot_deployer import ThoughtSpotDeployer
- 
-             deployer = ThoughtSpotDeployer()
-             auth_success = deployer.authenticate()
- 
-             duration = time.time() - start_time
- 
-             return TestReport(
-                 test_name="ThoughtSpot Authentication",
-                 result=TestResult.PASS if auth_success else TestResult.FAIL,
-                 duration=duration,
-                 error_message="" if auth_success else "Authentication failed",
-                 details={
-                     "auth_method": "session_based",
-                     "response_time": duration
-                 }
-             )
- 
-         except Exception as e:
-             return TestReport(
-                 test_name="ThoughtSpot Authentication",
-                 result=TestResult.FAIL,
-                 duration=time.time() - start_time,
-                 error_message=f"Auth test failed: {str(e)}"
-             )
- 
-     def test_snowflake_connection(self) -> TestReport:
-         """Test Snowflake key pair authentication"""
-         start_time = time.time()
- 
-         try:
-             import snowflake.connector
-             from snowflake_auth import get_snowflake_connection_params
- 
-             params = get_snowflake_connection_params()
-             conn = snowflake.connector.connect(**params)
-             cursor = conn.cursor()
- 
-             # Test basic query
-             cursor.execute("SELECT CURRENT_VERSION()")
-             version = cursor.fetchone()[0]
- 
-             conn.close()
-             duration = time.time() - start_time
- 
-             return TestReport(
-                 test_name="Snowflake Connection",
-                 result=TestResult.PASS,
-                 duration=duration,
-                 details={
-                     "snowflake_version": version,
-                     "auth_method": "key_pair",
-                     "connection_time": duration
-                 }
-             )
- 
-         except Exception as e:
-             return TestReport(
-                 test_name="Snowflake Connection",
-                 result=TestResult.FAIL,
-                 duration=time.time() - start_time,
-                 error_message=f"Snowflake connection failed: {str(e)}"
-             )
- 
-     def test_deployment_pipeline(self, ddl: str, database: str, schema: str) -> TestReport:
-         """Test complete deployment pipeline"""
-         start_time = time.time()
- 
-         try:
-             from thoughtspot_deployer import ThoughtSpotDeployer
- 
-             deployer = ThoughtSpotDeployer()
- 
-             # Authenticate first
-             if not deployer.authenticate():
-                 return TestReport(
-                     test_name="Deployment Pipeline",
-                     result=TestResult.FAIL,
-                     duration=time.time() - start_time,
-                     error_message="Authentication failed"
-                 )
- 
-             # Run deployment
-             results = deployer.deploy_all(
-                 ddl=ddl,
-                 database=database,
-                 schema=schema
-             )
- 
-             duration = time.time() - start_time
- 
-             # Analyze results
-             pipeline_success = {
-                 "connection": bool(results.get('connection')),
-                 "tables": len(results.get('tables', [])) > 0,
-                 "worksheet": bool(results.get('worksheet')),
-                 "errors": results.get('errors', [])
-             }
- 
-             overall_success = pipeline_success["connection"] and pipeline_success["tables"]
- 
-             return TestReport(
-                 test_name="Deployment Pipeline",
-                 result=TestResult.PASS if overall_success else TestResult.FAIL,
-                 duration=duration,
-                 error_message=f"Errors: {len(pipeline_success['errors'])}" if pipeline_success['errors'] else "",
-                 details={
-                     **pipeline_success,
-                     "deployment_time": duration,
-                     "schema_used": schema,
-                     "tables_created": len(results.get('tables', []))
-                 }
-             )
- 
-         except Exception as e:
-             return TestReport(
-                 test_name="Deployment Pipeline",
-                 result=TestResult.FAIL,
-                 duration=time.time() - start_time,
-                 error_message=f"Pipeline test failed: {str(e)}"
-             )
- 
-     def run_performance_test(self, test_func, *args, **kwargs) -> Tuple[Any, float, bool]:
-         """Run a test function with performance monitoring"""
-         start_time = time.time()
- 
-         try:
-             result = test_func(*args, **kwargs)
-             duration = time.time() - start_time
-             return result, duration, True
-         except Exception as e:
-             duration = time.time() - start_time
-             print(f"⚠️ Performance test failed: {e}")
-             return None, duration, False
- 
-     def run_all_tests(self, ddl: str = None, database: str = "DEMOBUILD",
-                       schema: str = None) -> Dict[str, Any]:
-         """Run comprehensive test suite"""
- 
-         print("πŸ§ͺ ENHANCED TEST FRAMEWORK")
-         print("=" * 60)
-         print(f"πŸš€ Starting comprehensive test suite at {datetime.now()}")
- 
-         self.start_time = time.time()
- 
-         # Auto-detect schema if not provided
-         if not schema:
-             # Look for recent schemas
-             schema_pattern = datetime.now().strftime("2025%m%d")
-             schema = f"{schema_pattern}_ENHANCED_TEST"
-             print(f"πŸ“Š Using auto-generated schema: {schema}")
- 
-         # Default DDL for testing
-         if not ddl:
-             ddl = """
-             CREATE TABLE CUSTOMERS (
-                 CUSTOMERID NUMBER(38,0) NOT NULL,
-                 FIRSTNAME VARCHAR(50) NOT NULL,
-                 LASTNAME VARCHAR(50) NOT NULL,
-                 EMAIL VARCHAR(100) NOT NULL
-             );
- 
-             CREATE TABLE PRODUCTS (
-                 PRODUCTID NUMBER(38,0) NOT NULL,
-                 PRODUCTNAME VARCHAR(100) NOT NULL,
-                 PRICE NUMBER(10,2) NOT NULL
-             );
-             """
- 
-         # Run tests in order
-         tests_to_run = [
-             ("Environment", lambda: self.validate_environment()),
-             ("Schema Validation", lambda: self.validate_schema_structure(ddl)),
-             ("Snowflake Connection", lambda: self.test_snowflake_connection()),
-             ("ThoughtSpot Auth", lambda: self.test_thoughtspot_auth()),
-             ("Full Deployment", lambda: self.test_deployment_pipeline(ddl, database, schema))
-         ]
- 
-         print("\nπŸ“‹ Running Test Suite:")
- 
-         for test_name, test_func in tests_to_run:
-             print(f"\nπŸ”„ Running: {test_name}...")
- 
-             try:
-                 report = test_func()
-                 self.results.append(report)
- 
-                 # Show immediate feedback
-                 duration_str = f"{report.duration:.2f}s"
-                 print(f"   {report.result.value} {test_name} ({duration_str})")
- 
-                 if report.error_message:
-                     print(f"   πŸ’¬ {report.error_message}")
- 
-             except Exception as e:
-                 error_report = TestReport(
-                     test_name=test_name,
-                     result=TestResult.FAIL,
-                     duration=0.0,
-                     error_message=f"Test execution failed: {str(e)}"
-                 )
-                 self.results.append(error_report)
-                 print(f"   ❌ FAIL {test_name} (execution error)")
- 
-         # Generate comprehensive report
-         return self.generate_report()
- 
-     def generate_report(self) -> Dict[str, Any]:
-         """Generate comprehensive test report"""
- 
-         total_duration = time.time() - self.start_time if self.start_time else 0
- 
-         # Calculate statistics
-         stats = {
-             "total_tests": len(self.results),
-             "passed": len([r for r in self.results if r.result == TestResult.PASS]),
-             "failed": len([r for r in self.results if r.result == TestResult.FAIL]),
-             "warnings": len([r for r in self.results if r.result == TestResult.WARN]),
-             "skipped": len([r for r in self.results if r.result == TestResult.SKIP]),
-             "total_duration": total_duration,
-             "average_duration": total_duration / len(self.results) if self.results else 0
-         }
- 
-         success_rate = (stats["passed"] / stats["total_tests"] * 100) if stats["total_tests"] > 0 else 0
- 
-         # Print summary report
-         print("\n" + "=" * 60)
-         print("πŸ“Š TEST SUMMARY REPORT")
-         print("=" * 60)
- 
-         print(f"🎯 Overall Success Rate: {success_rate:.1f}%")
-         print(f"⏱️ Total Duration: {total_duration:.2f}s")
-         print(f"πŸ“ˆ Tests: {stats['passed']}βœ… {stats['failed']}❌ {stats['warnings']}⚠️ {stats['skipped']}⏭️")
- 
-         print(f"\nπŸ“‹ Individual Results:")
-         for result in self.results:
-             duration_str = f"({result.duration:.2f}s)"
-             print(f"   {result.result.value} {result.test_name} {duration_str}")
-             if result.error_message:
-                 print(f"      πŸ’¬ {result.error_message}")
- 
-         # Recommendations
-         print(f"\nπŸ”§ RECOMMENDATIONS:")
-         if stats["failed"] > 0:
-             print(f"   β€’ Fix {stats['failed']} failing tests before deployment")
-         if stats["warnings"] > 0:
-             print(f"   β€’ Review {stats['warnings']} warnings for potential issues")
-         if success_rate == 100:
-             print(f"   πŸŽ‰ All tests passing! System ready for deployment!")
- 
-         print("=" * 60)
- 
-         # Save detailed report
-         report_data = {
-             "timestamp": datetime.now().isoformat(),
-             "statistics": stats,
-             "success_rate": success_rate,
-             "results": [asdict(r) for r in self.results]
-         }
- 
-         # Save to file
-         report_file = f"test_report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
-         with open(os.path.join("tests", report_file), "w") as f:
-             json.dump(report_data, f, indent=2)
- 
-         print(f"πŸ“„ Detailed report saved: tests/{report_file}")
- 
-         return report_data
- 
- def main():
-     """Run enhanced test framework"""
-     framework = TestFramework()
- 
-     # Example usage - can be customized
-     results = framework.run_all_tests()
- 
-     return results["success_rate"] == 100.0
- 
- if __name__ == "__main__":
-     main()
thoughtspot_deployer.py CHANGED
@@ -1278,10 +1278,22 @@ class ThoughtSpotDeployer:
         # PHASE 1: Create all tables WITHOUT joins (to ensure all tables exist first)
         log_progress("   πŸ“‹ Phase 1: Creating tables without joins...")
         for table_name, columns in tables.items():
+            import time
+            start_time = time.time()
             log_progress(f"   πŸ”„ Creating table: {table_name.upper()} (no joins)...")
+
             # Create table TML WITHOUT joins_with section (pass None for all_tables)
+            tml_start = time.time()
             table_tml = self.create_table_tml(table_name, columns, connection_name, database, schema, all_tables=None)
+            tml_time = time.time() - tml_start
+            log_progress(f"   πŸ“ TML generation took: {tml_time:.2f} seconds")
+
+            # Log the size of the TML
+            log_progress(f"   πŸ“ TML size: {len(table_tml)} characters, {len(columns)} columns")
 
+            # Make the API call
+            api_start = time.time()
+            log_progress(f"   🌐 Sending to ThoughtSpot API...")
             response = self.session.post(
                 f"{self.base_url}/api/rest/2.0/metadata/tml/import",
                 json={
@@ -1290,6 +1302,8 @@ class ThoughtSpotDeployer:
                     "create_new": True
                 }
             )
+            api_time = time.time() - api_start
+            log_progress(f"   ⏱️ API call took: {api_time:.2f} seconds")
 
             if response.status_code == 200:
                 result = response.json()
@@ -1309,7 +1323,8 @@ class ThoughtSpotDeployer:
                     obj = objects[0]
                     if obj.get('response', {}).get('status', {}).get('status_code') == 'OK':
                         table_guid = obj.get('response', {}).get('header', {}).get('id_guid')
-                        log_progress(f"   βœ… Table created: {table_name.upper()}")
+                        total_time = time.time() - start_time
+                        log_progress(f"   βœ… Table created: {table_name.upper()} (Total time: {total_time:.2f} seconds)")
                         log_progress(f"      GUID: {table_guid}")
                         results['tables'].append(table_name.upper())
                         table_guids[table_name.upper()] = table_guid
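The diff above repeats the same start/stop/log pattern for each timed step. A minimal sketch of factoring that into a reusable context manager; `log_progress` here is a local stand-in for the project's logger, and `timed` is a hypothetical helper, not part of the actual codebase:

```python
import time
from contextlib import contextmanager

def log_progress(message: str) -> None:
    # Stand-in for the deployer's log_progress
    print(message)

@contextmanager
def timed(label: str, sink=None):
    """Log how long the wrapped block took, mirroring the inline timing logs."""
    start = time.time()
    try:
        yield
    finally:
        elapsed = time.time() - start
        message = f"⏱️ {label} took: {elapsed:.2f} seconds"
        (sink or log_progress)(message)

# Usage mirroring the deployer's two timed phases:
messages = []
with timed("TML generation", sink=messages.append):
    time.sleep(0.01)  # placeholder for create_table_tml(...)
with timed("API call", sink=messages.append):
    time.sleep(0.01)  # placeholder for session.post(...)
```

Because the timing lives in `finally`, a step that raises still gets its duration logged, which the inline version in the diff does not guarantee.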