Sompote committed
Commit c8d2a69 · verified · 1 Parent(s): 98d837d

Upload 12 files

AI_logo.png ADDED
Dockerfile ADDED
@@ -0,0 +1,14 @@
+ FROM python:3.9-slim
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY app.py .
+ COPY best_llm_model-16.pt .
+ COPY scalers/ ./scalers/
+
+ EXPOSE 8501
+
+ CMD ["streamlit", "run", "app.py"]
README copy.md ADDED
@@ -0,0 +1,51 @@
+ ---
+ title: Friction angle prediction of solid waste
+ emoji: 🚗
+ colorFrom: blue
+ colorTo: green
+ sdk: streamlit
+ sdk_version: "1.29.0"
+ app_file: app.py
+ pinned: false
+ ---
+
+
+ # Waste Properties Predictor
+
+ This Streamlit app predicts both friction angle and cohesion based on waste composition and characteristics using deep learning models.
+
+ ## Features
+
+ - Predicts both friction angle and cohesion simultaneously
+ - Supports Excel file input for batch predictions
+ - Provides SHAP value explanations for predictions
+ - Interactive input interface with value range validation
+ - Supports custom data upload
+
+ ## Files Description
+
+ - `app.py`: Main application file
+ - `requirements.txt`: Required Python packages
+ - `friction_model.pt`: Pre-trained model for friction angle prediction
+ - `cohesion_model.pt`: Pre-trained model for cohesion prediction
+ - `Data_syw.xlsx`: Default data file with example values
+
+ ## Usage
+
+ 1. The app loads with default values from the first row of `Data_syw.xlsx`
+ 2. You can either:
+    - Use the default values
+    - Upload your own Excel file with waste composition data
+    - Manually adjust individual values using the input fields
+ 3. Click "Predict Properties" to get predictions and SHAP explanations
+
+ ## Input Parameters
+
+ The app accepts various waste composition and characteristic parameters. All inputs are validated against the training data ranges to ensure reliable predictions.
+
+ ## Output
+
+ For each prediction, the app provides:
+ - Predicted friction angle (degrees)
+ - Predicted cohesion (kPa)
+ - SHAP waterfall plots explaining the contribution of each feature to the predictions
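The input-range validation mentioned in `README copy.md` can be sketched as follows; the parameter names and bounds here are illustrative placeholders, not the app's actual training-data ranges:

```python
def validate_inputs(values, ranges):
    """Return the names of inputs outside their allowed (min, max) range."""
    return [name for name, v in values.items()
            if not (ranges[name][0] <= v <= ranges[name][1])]

# Hypothetical ranges for two made-up waste parameters
ranges = {"moisture": (10.0, 60.0), "organic_content": (0.0, 80.0)}
out = validate_inputs({"moisture": 75.0, "organic_content": 40.0}, ranges)
# out == ["moisture"] since 75.0 exceeds the 60.0 upper bound
```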
README.md CHANGED
@@ -1,13 +1,49 @@
- ---
- title: Concrete Creep
- emoji: 📈
- colorFrom: blue
- colorTo: green
- sdk: streamlit
- sdk_version: 1.44.1
- app_file: app.py
- pinned: false
- short_description: concrete_creep
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Concrete Creep Prediction App
+
+ A Streamlit application to predict concrete creep strain over time using a specialized LLM-style model.
+
+ ## Deployment Instructions
+
+ ### Prerequisites
+ - Python 3.8 or higher
+ - pip for package installation
+
+ ### Installation
+
+ 1. Clone or download this repository
+
+ 2. Install the required packages:
+    ```
+    pip install -r requirements.txt
+    ```
+
+ 3. Run the Streamlit app:
+    ```
+    streamlit run app.py
+    ```
+
+ ## How to Use
+
+ 1. Open the app in your web browser (typically at http://localhost:8501)
+ 2. Adjust the concrete properties using the sidebar controls:
+    - Density (kg/m³)
+    - Compressive Strength (MPa)
+    - Elastic Modulus (MPa)
+    - Initial Creep Value
+ 3. Set the desired time range for prediction
+ 4. Click "Predict Creep Strain" to generate results
+ 5. View the prediction charts and download the results as CSV if needed
+
+ ## Files
+
+ - `app.py`: Standalone application code
+ - `requirements.txt`: Required Python packages
+ - `best_llm_model-16.pt`: Pre-trained model
+ - `scalers/`: Directory containing normalization scalers
+    - `feature_scaler.pkl`: Scaler for input features
+    - `creep_scaler.pkl`: Scaler for creep values
+    - `time_values.pkl`: Time values for prediction
+
+ ## Model Information
+
+ The application uses a specialized LLM-style transformer model to predict concrete creep strain based on concrete properties (density, compressive strength, and elastic modulus). The model performs autoregressive prediction to estimate creep over time.
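The autoregressive scheme described above (each predicted creep value is appended to the history and fed back into the model) can be sketched without the real network; `predict_next` below is a hypothetical stand-in for the trained transformer, not the app's actual model:

```python
import math

def predict_next(history, scale):
    # Hypothetical stand-in for the trained model:
    # creep grows roughly with the log of elapsed steps
    return scale * math.log1p(len(history))

def autoregressive(scale, steps, initial=0.0):
    history = [initial]
    for _ in range(steps):
        # Feed the running history back in to get the next value
        history.append(predict_next(history, scale))
    return history

preds = autoregressive(scale=10.0, steps=5)  # 6 values: initial + 5 predictions
```

The real loop in `app.py` works the same way, except that history and features are scaled, passed through the transformer, and inverse-transformed at each step.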
app.py ADDED
@@ -0,0 +1,625 @@
+ import streamlit as st
+ import pandas as pd
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ from torch.utils.data import Dataset, DataLoader
+ import matplotlib.pyplot as plt
+ import pickle
+ import os
+ import math
+
+ # Set page config
+ st.set_page_config(
+     page_title="Concrete Creep Prediction",
+     page_icon="🏗️",
+     layout="wide"
+ )
+
+ # Display logo
+ st.image("AI_logo.png", width=200)
+
+ # Define custom scaler classes
+ class CreepScaler:
+     def __init__(self, factor=1000):
+         self.factor = factor
+         self.mean_ = 0  # Default to no mean shift
+         self.scale_ = factor  # Use factor as scale
+         self.is_standard_scaler = False
+
+     def transform(self, X):
+         if isinstance(X, np.ndarray):
+             if self.is_standard_scaler:
+                 return (X - self.mean_) / self.scale_
+             return X / self.factor
+         return np.array(X) / self.factor
+
+     def inverse_transform(self, X):
+         if isinstance(X, np.ndarray):
+             if self.is_standard_scaler:
+                 return (X * self.scale_) + self.mean_
+             return X * self.factor
+         return np.array(X) * self.factor
+
+ # Positional Encoding for Transformer
+ class PositionalEncoding(nn.Module):
+     def __init__(self, d_model, max_len=5000):
+         super(PositionalEncoding, self).__init__()
+
+         pe = torch.zeros(max_len, d_model)
+         position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
+         div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
+
+         pe[:, 0::2] = torch.sin(position * div_term)
+         pe[:, 1::2] = torch.cos(position * div_term)
+
+         self.register_buffer('pe', pe)
+
+     def forward(self, x):
+         # x: [batch_size, seq_len, d_model]
+         return x + self.pe[:x.size(1), :].unsqueeze(0)
+
+ # Feature Encoder for static features
+ class FeatureEncoder(nn.Module):
+     def __init__(self, input_dim, hidden_dim, dropout=0.1):
+         super(FeatureEncoder, self).__init__()
+
+         # Original encoding path
+         self.fc1 = nn.Linear(input_dim, hidden_dim * 2)
+         self.ln1 = nn.LayerNorm(hidden_dim * 2)
+         self.fc2 = nn.Linear(hidden_dim * 2, hidden_dim)
+         self.ln2 = nn.LayerNorm(hidden_dim)
+
+         # New feature-wise projection (each feature to dim 16)
+         self.feature_projection = nn.Linear(1, 16)
+
+         # Ensure feature attention is configured correctly
+         feature_embed_dim = 16
+         # For 16 dimensions, valid num_heads are: 1, 2, 4, 8, 16
+         feature_heads = 4  # 16 is divisible by 4
+
+         # Attention for parallel feature processing
+         self.feature_attention = nn.MultiheadAttention(
+             embed_dim=feature_embed_dim,
+             num_heads=feature_heads,
+             dropout=dropout,
+             batch_first=True
+         )
+
+         # For batch attention, first choose the embedding dimension
+         # Make it a power of 2 for compatibility with many head configurations
+         batch_embed_dim = 16  # Fixed safe value, divisible by many head counts
+
+         # Now choose heads that divide evenly into the embed_dim
+         batch_heads = 4  # 16 is divisible by 4
+
+         # Always project input to the fixed batch_embed_dim
+         self.batch_projection = nn.Linear(input_dim, batch_embed_dim)
+
+         # Batch-wise attention with safe values
+         self.batch_attention = nn.MultiheadAttention(
+             embed_dim=batch_embed_dim,
+             num_heads=batch_heads,
+             dropout=dropout,
+             batch_first=True
+         )
+
+         # Layer norms for attention outputs
+         self.feature_ln = nn.LayerNorm(16)
+         self.batch_ln = nn.LayerNorm(batch_embed_dim)
+
+         # Integration layer - combines original and new paths
+         self.integration = nn.Linear(hidden_dim + 16 * input_dim + batch_embed_dim, hidden_dim)
+         self.integration_ln = nn.LayerNorm(hidden_dim)
+
+         self.dropout = nn.Dropout(dropout)
+         self.relu = nn.ReLU()
+
+         # Store dimensions for debugging
+         self.input_dim = input_dim
+         self.batch_embed_dim = batch_embed_dim
+         self.batch_heads = batch_heads
+
+         print(f"FeatureEncoder initialized with: input_dim={input_dim}, batch_embed_dim={batch_embed_dim}, batch_heads={batch_heads}")
+
+     def forward(self, x):
+         # x: [batch_size, input_dim]
+         batch_size, input_dim = x.size()
+
+         # Original path
+         original = self.fc1(x)
+         original = self.ln1(original)
+         original = self.relu(original)
+         original = self.dropout(original)
+
+         original = self.fc2(original)
+         original = self.ln2(original)
+         original = self.relu(original)
+
+         # Feature-wise projection path
+         # Reshape to process each feature separately
+         features = x.view(batch_size, input_dim, 1)  # [batch_size, input_dim, 1]
+         features_projected = self.feature_projection(features)  # [batch_size, input_dim, 16]
+
+         # Feature-wise attention
+         feature_attn_out, _ = self.feature_attention(
+             features_projected,
+             features_projected,
+             features_projected
+         )  # [batch_size, input_dim, 16]
+         feature_attn_out = self.feature_ln(feature_attn_out + features_projected)  # Add & Norm
+
+         # Apply projection to make input_dim compatible with attention
+         x_proj = self.batch_projection(x)
+
+         # Batch-wise attention
+         batch_attn_out, _ = self.batch_attention(
+             x_proj.unsqueeze(1),  # [batch_size, 1, batch_embed_dim]
+             x_proj.unsqueeze(1),
+             x_proj.unsqueeze(1)
+         )  # [batch_size, 1, batch_embed_dim]
+         batch_attn_out = self.batch_ln(batch_attn_out.squeeze(1) + x_proj)  # Add & Norm
+
+         # Reshape feature attention output to concatenate
+         feature_attn_flat = feature_attn_out.reshape(batch_size, -1)  # [batch_size, input_dim * 16]
+
+         # Concatenate all processed features
+         combined = torch.cat([original, feature_attn_flat, batch_attn_out], dim=1)
+
+         # Final integration
+         output = self.integration(combined)
+         output = self.integration_ln(output)
+         output = self.relu(output)
+
+         return output
+
+ # Self-Attention Block
+ class SelfAttention(nn.Module):
+     def __init__(self, d_model, num_heads, dropout=0.1):
+         super(SelfAttention, self).__init__()
+         self.d_model = d_model
+         self.num_heads = num_heads
+         self.head_dim = d_model // num_heads
+
+         assert self.head_dim * num_heads == d_model, "d_model must be divisible by num_heads"
+
+         # Multi-head attention
+         self.attention = nn.MultiheadAttention(
+             embed_dim=d_model,
+             num_heads=num_heads,
+             dropout=dropout,
+             batch_first=True
+         )
+
+         # Layer normalization and dropout
+         self.layer_norm = nn.LayerNorm(d_model)
+         self.dropout = nn.Dropout(dropout)
+
+     def forward(self, x, attention_mask=None, key_padding_mask=None):
+         # x: [batch_size, seq_len, d_model]
+
+         # Self-attention with residual connection
+         attn_output, _ = self.attention(
+             query=x,
+             key=x,
+             value=x,
+             attn_mask=attention_mask,
+             key_padding_mask=key_padding_mask
+         )
+
+         # Add & Norm
+         x = x + self.dropout(attn_output)
+         x = self.layer_norm(x)
+
+         return x
+
+ # Feed-Forward Block
+ class FeedForward(nn.Module):
+     def __init__(self, d_model, d_ff, dropout=0.1):
+         super(FeedForward, self).__init__()
+
+         self.linear1 = nn.Linear(d_model, d_ff)
+         self.linear2 = nn.Linear(d_ff, d_model)
+         self.relu = nn.ReLU()
+         self.dropout = nn.Dropout(dropout)
+         self.layer_norm = nn.LayerNorm(d_model)
+
+     def forward(self, x):
+         # x: [batch_size, seq_len, d_model]
+
+         # FFN with residual connection
+         ff_output = self.linear1(x)
+         ff_output = self.relu(ff_output)
+         ff_output = self.dropout(ff_output)
+         ff_output = self.linear2(ff_output)
+
+         # Add & Norm
+         x = x + self.dropout(ff_output)
+         x = self.layer_norm(x)
+
+         return x
+
+ # Transformer Encoder Layer
+ class EncoderLayer(nn.Module):
+     def __init__(self, d_model, num_heads, d_ff, dropout=0.1):
+         super(EncoderLayer, self).__init__()
+
+         self.self_attention = SelfAttention(d_model, num_heads, dropout)
+         self.feed_forward = FeedForward(d_model, d_ff, dropout)
+
+     def forward(self, x, attention_mask=None, key_padding_mask=None):
+         # x: [batch_size, seq_len, d_model]
+
+         # Self-attention block
+         x = self.self_attention(x, attention_mask, key_padding_mask)
+
+         # Feed-forward block
+         x = self.feed_forward(x)
+
+         return x
+
+ # LLM-Style Concrete Creep Transformer
+ class LLMConcreteModel(nn.Module):
+     def __init__(
+         self,
+         feature_dim,
+         d_model=128,
+         num_layers=6,
+         num_heads=8,
+         d_ff=512,
+         dropout=0.1,
+         target_len=1
+     ):
+         super(LLMConcreteModel, self).__init__()
+
+         # Model dimensions
+         self.d_model = d_model
+         self.target_len = target_len
+
+         # Input embedding layers
+         self.creep_embedding = nn.Linear(1, d_model)
+         self.time_embedding = nn.Linear(1, d_model)  # Optional time embedding
+         self.feature_encoder = FeatureEncoder(feature_dim, d_model, dropout)
+
+         # Positional encoding
+         self.positional_encoding = PositionalEncoding(d_model)
+
+         # Encoder layers
+         self.encoder_layers = nn.ModuleList([
+             EncoderLayer(d_model, num_heads, d_ff, dropout)
+             for _ in range(num_layers)
+         ])
+
+         # Output layers for prediction
+         self.predictor = nn.Sequential(
+             nn.Linear(d_model, d_model),
+             nn.ReLU(),
+             nn.Dropout(dropout),
+             nn.Linear(d_model, target_len)
+         )
+
+         # Integration of features with sequence
+         self.feature_integration = nn.Linear(d_model * 2, d_model)
+
+         # Layer normalization
+         self.layer_norm = nn.LayerNorm(d_model)
+
+         # Dropout
+         self.dropout = nn.Dropout(dropout)
+
+     def forward(self, creep_history, features, lengths, time_history=None):
+         # creep_history: [batch_size, max_seq_len]
+         # features: [batch_size, feature_dim]
+         # lengths: [batch_size] - actual sequence lengths
+         # time_history: [batch_size, max_seq_len] (optional)
+
+         # Get the device from input tensors to ensure consistent device usage
+         device = creep_history.device
+
+         batch_size, max_seq_len = creep_history.size()
+
+         # Create padding mask (True for padding, False for actual values)
+         padding_mask = torch.arange(max_seq_len, device=device).unsqueeze(0) >= lengths.unsqueeze(1)
+
+         # Embed creep values
+         creep_embedded = self.creep_embedding(creep_history.unsqueeze(-1))
+
+         # Add time embedding if provided
+         if time_history is not None and self.time_embedding is not None:
+             time_embedded = self.time_embedding(time_history.unsqueeze(-1))
+             # Combine creep and time embeddings
+             embedded = creep_embedded + time_embedded
+         else:
+             embedded = creep_embedded
+
+         # Add positional encoding
+         embedded = self.positional_encoding(embedded)
+
+         # Apply dropout
+         embedded = self.dropout(embedded)
+
+         # Process feature data
+         feature_encoded = self.feature_encoder(features)  # [batch_size, d_model]
+
+         # Pass through encoder layers
+         encoder_output = embedded
+         for layer in self.encoder_layers:
+             encoder_output = layer(encoder_output, key_padding_mask=padding_mask)
+
+         # Extract the last non-padding token for each sequence
+         # This will serve as our context representation for prediction
+         last_indices = (lengths - 1).clamp(min=0)  # Avoid negative indices
+         batch_indices = torch.arange(batch_size, device=device)
+         context_vectors = encoder_output[batch_indices, last_indices]  # [batch_size, d_model]
+
+         # Combine context with features
+         combined = torch.cat([context_vectors, feature_encoded], dim=1)  # [batch_size, d_model*2]
+         integrated = self.feature_integration(combined)  # [batch_size, d_model]
+         integrated = torch.tanh(integrated)
+
+         # Final layer normalization
+         integrated = self.layer_norm(integrated)
+
+         # Generate predictions
+         predictions = self.predictor(integrated)  # [batch_size, target_len]
+
+         return predictions
+
+ @st.cache_resource
+ def load_model_and_scalers():
+     """
+     Load the trained model and scalers
+     """
+     # Check if model and scalers exist
+     if not os.path.exists('best_llm_model-16.pt'):
+         st.error("Model file 'best_llm_model-16.pt' not found. Please run the training script first.")
+         st.stop()
+
+     if not os.path.exists('scalers/feature_scaler.pkl'):
+         st.error("Scaler files not found. Please run the prediction script first.")
+         st.stop()
+
+     # Load scalers
+     try:
+         with open('scalers/feature_scaler.pkl', 'rb') as f:
+             feature_scaler = pickle.load(f)
+         with open('scalers/creep_scaler.pkl', 'rb') as f:
+             creep_scaler = pickle.load(f)
+         with open('scalers/time_values.pkl', 'rb') as f:
+             time_values = pickle.load(f)
+     except Exception as e:
+         st.error(f"Error loading scalers: {e}")
+         st.stop()
+
+     # Set device
+     device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+
+     # Load model
+     feature_dim = 3  # Density, fc, E
+     model = LLMConcreteModel(
+         feature_dim=feature_dim,
+         d_model=192,
+         num_layers=4,
+         num_heads=4,
+         d_ff=192 * 4,
+         dropout=0.056999223340150215,
+         target_len=1
+     )
+
+     try:
+         model.load_state_dict(torch.load('best_llm_model-16.pt', map_location=device))
+     except Exception:
+         try:
+             checkpoint = torch.load('best_llm_model-16.pt', map_location=device)
+             model.load_state_dict(checkpoint, strict=False)
+             st.warning("Model loaded with non-strict loading due to architecture differences.")
+         except Exception as e2:
+             st.error(f"Error loading model: {e2}")
+             st.stop()
+
+     model = model.to(device)
+     model.eval()
+
+     return model, feature_scaler, creep_scaler, time_values, device
+
+ def autoregressive_predict(model, features, time_values, feature_scaler, creep_scaler, device, initial_value=0):
+     """
+     Perform autoregressive prediction using the model.
+
+     Args:
+         model: Trained PyTorch model
+         features: DataFrame with sample features
+         time_values: Array of time values
+         feature_scaler: StandardScaler for features
+         creep_scaler: Standard or custom scaler for creep values
+         device: PyTorch device
+         initial_value: Initial creep value (default: 0)
+
+     Returns:
+         Array of predicted creep values
+     """
+     # Scale features
+     scaled_features = feature_scaler.transform(features)
+     scaled_features_tensor = torch.FloatTensor(scaled_features).to(device)
+
+     # Initialize predictions list with the initial value
+     predictions = [initial_value]
+     # Scale the initial value
+     scaled_predictions = [creep_scaler.transform(np.array([[initial_value]])).flatten()[0]]
+
+     # For autoregressive prediction
+     with torch.no_grad():
+         for i in range(1, len(time_values)):
+             # Get the current history
+             history = np.array(scaled_predictions)
+             history_tensor = torch.FloatTensor(history).unsqueeze(0).to(device)  # [1, seq_len]
+
+             # Create normalized time history if needed
+             time_history = np.log1p(time_values[:i])
+             time_tensor = torch.FloatTensor(time_history).unsqueeze(0).to(device)  # [1, seq_len]
+
+             # Get the sequence length
+             length = torch.tensor([len(history)], device=device)
+
+             # Generate prediction using the model with the correct interface
+             next_value = model(
+                 creep_history=history_tensor,
+                 features=scaled_features_tensor,
+                 lengths=length,
+                 time_history=time_tensor
+             ).item()
+
+             # Store the scaled prediction
+             scaled_predictions.append(next_value)
+
+             # Inverse transform for actual value
+             next_creep = creep_scaler.inverse_transform(np.array([[next_value]])).flatten()[0]
+
+             # Store the actual prediction
+             predictions.append(next_creep)
+
+     return np.array(predictions)
+
+ # Load model and scalers
+ model, feature_scaler, creep_scaler, time_values, device = load_model_and_scalers()
+
+ # App title and description
+ st.title("Concrete Creep Prediction")
+ st.markdown("""
+ This app predicts concrete creep strain over time using a specialized LLM-style model.
+ Enter the concrete properties below to get a prediction.
+ """)
+
+ # Input sidebar
+ st.sidebar.header("Concrete Properties")
+
+ density = st.sidebar.number_input(
+     "Density (kg/m³)",
+     min_value=2000.0,
+     max_value=3000.0,
+     value=2490.0,
+     step=10.0,
+     help="Concrete density in kg/m³"
+ )
+
+ fc = st.sidebar.number_input(
+     "Compressive Strength (fc) in MPa",
+     min_value=10.0,
+     max_value=1000.0,
+     value=670.0,
+     step=10.0,
+     help="Concrete compressive strength in MPa"
+ )
+
+ e_modulus = st.sidebar.number_input(
+     "Elastic Modulus (E) in MPa",
+     min_value=10000.0,
+     max_value=1000000.0,
+     value=436000.0,
+     step=1000.0,
+     help="Concrete elastic modulus in MPa"
+ )
+
+ initial_value = st.sidebar.number_input(
+     "Initial Creep Value",
+     min_value=0.0,
+     max_value=1000.0,
+     value=0.0,
+     step=1.0,
+     help="Initial creep strain value (usually 0)"
+ )
+
+ # Time settings
+ st.sidebar.header("Time Settings")
+ max_days = st.sidebar.number_input(
+     "Maximum Time (days)",
+     min_value=10,
+     max_value=10000,
+     value=len(time_values),
+     step=10,
+     help="Maximum time for prediction in days"
+ )
+
+ use_original_time = st.sidebar.checkbox(
+     "Use Original Time Values",
+     value=True,
+     help="If checked, uses the original time values from the model training data"
+ )
+
+ # When the user clicks the predict button
+ if st.sidebar.button("Predict Creep Strain"):
+     # Create features DataFrame
+     features_dict = {
+         'Density': density,
+         'fc': fc,
+         'E': e_modulus
+     }
+     df_features = pd.DataFrame([features_dict])
+
+     # Adjust time values if needed
+     if use_original_time:
+         pred_time_values = time_values[:max_days] if max_days < len(time_values) else time_values
+     else:
+         pred_time_values = np.linspace(1, max_days, min(max_days, len(time_values)))
+
+     # Run prediction
+     with st.spinner("Predicting creep strain..."):
+         predictions = autoregressive_predict(
+             model,
+             df_features,
+             pred_time_values,
+             feature_scaler,
+             creep_scaler,
+             device,
+             initial_value
+         )
+
+     # Show results
+     st.header("Prediction Results")
+
+     # Create columns for chart and data
+     col1, col2 = st.columns([2, 1])
+
+     with col1:
+         st.subheader("Creep Strain over Time")
+         fig, ax = plt.subplots(figsize=(10, 6))
+         ax.plot(pred_time_values, predictions, 'r-', linewidth=2)
+         ax.set_xlabel('Time (days)')
+         ax.set_ylabel('Creep Strain (micro-strain)')
+         ax.set_title('Predicted Concrete Creep Strain')
+         ax.grid(True)
+         st.pyplot(fig)
+
+     with col2:
+         st.subheader("Prediction Data")
+         results_df = pd.DataFrame({
+             'Time (days)': pred_time_values,
+             'Creep Strain (micro-strain)': predictions
+         })
+         st.dataframe(results_df)
+
+         # Download button for CSV
+         csv = results_df.to_csv(index=False)
+         st.download_button(
+             label="Download Predictions as CSV",
+             data=csv,
+             file_name="concrete_creep_predictions.csv",
+             mime="text/csv"
+         )
+
+     # Summary statistics
+     st.subheader("Summary Statistics")
+     st.write(f"Initial Creep: {predictions[0]:.2f} micro-strain")
+     st.write(f"Final Creep: {predictions[-1]:.2f} micro-strain")
+     st.write(f"Max Creep: {np.max(predictions):.2f} micro-strain")
+
+     # Show input parameters
+     st.subheader("Input Parameters")
+     st.write(f"Density: {density} kg/m³")
+     st.write(f"Compressive Strength (fc): {fc} MPa")
+     st.write(f"Elastic Modulus (E): {e_modulus} MPa")
+
+ # Footer
+ st.markdown("---")
+ st.markdown("Concrete Creep Prediction App | Enhanced LLM-Style Model")
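The custom `CreepScaler` in `app.py` is a fixed-factor scaler rather than a fitted `StandardScaler`; a minimal round-trip check (the transform logic is re-declared here so the snippet is self-contained):

```python
import numpy as np

class CreepScaler:
    # Mirrors the fixed-factor scaling path used in app.py
    def __init__(self, factor=1000):
        self.factor = factor

    def transform(self, X):
        return np.asarray(X) / self.factor

    def inverse_transform(self, X):
        return np.asarray(X) * self.factor

scaler = CreepScaler()
scaled = scaler.transform(np.array([[250.0]]))    # 250 / 1000 = 0.25
restored = scaler.inverse_transform(scaled)       # back to 250.0
```

Because `transform` and `inverse_transform` are exact inverses, predictions can be scaled into the model's range and recovered without loss.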
best_llm_model-16.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8904c256759f1658ed16f5facc52a034c5395c587367537c7a5aa8405737b55
+ size 11955926
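`best_llm_model-16.pt` and the three `.pkl` files are committed as Git LFS pointer files in the format shown above; a small illustrative parser for that key/value format (not part of the repository):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer content committed for best_llm_model-16.pt
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e8904c256759f1658ed16f5facc52a034c5395c587367537c7a5aa8405737b55
size 11955926"""

info = parse_lfs_pointer(pointer)
```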
docker-compose.yml ADDED
@@ -0,0 +1,13 @@
+ version: '3'
+
+ services:
+   concrete-creep-app:
+     build:
+       context: .
+       dockerfile: Dockerfile
+     ports:
+       - "8501:8501"
+     restart: unless-stopped
+     volumes:
+       - ./:/app
+     container_name: concrete-creep-prediction
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ pandas>=1.3.0
+ numpy>=1.20.0
+ torch>=1.9.0
+ matplotlib>=3.4.0
+ scikit-learn>=0.24.0
+ streamlit>=1.10.0
+ seaborn>=0.11.0
run_app.sh ADDED
@@ -0,0 +1,4 @@
+ #!/bin/bash
+
+ echo "Starting Concrete Creep Prediction App..."
+ streamlit run app.py
scalers/creep_scaler.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49eb729dbb94ff0fd794b7ae0964cc99d1784d105c9bb73e6578febbe855346f
+ size 103
scalers/feature_scaler.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8451db8c4b387f2e6920ea748f3db4ce2126df9b2c7ac55049c11e74e9168a9f
+ size 627
scalers/time_values.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34ef684159bebd9bebec6be8188fa46acb5f8a8893acd7fedc8d04223f5ceb4b
+ size 1438