File size: 70,198 Bytes
9928397
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
cadecfd
9928397
 
cadecfd
9928397
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
c75cb81
9928397
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
c75cb81
9928397
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
c75cb81
9928397
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
1273
1274
1275
1276
1277
1278
1279
1280
1281
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
import streamlit as st
import requests 
import pandas as pd
import numpy as np
import yfinance as yf
import plotly.graph_objects as go
import plotly.figure_factory as ff
from datetime import datetime, date
from dateutil.relativedelta import relativedelta
import datetime as dt
import warnings 
warnings.filterwarnings("ignore")
import os
from scipy.optimize import fsolve
from scipy.stats import norm

###############################################################################
# SET WIDE LAYOUT AND PAGE TITLE
###############################################################################
st.set_page_config(page_title="Default Risk Estimation", layout="wide")

###############################################################################
# GLOBALS & SESSION STATE
###############################################################################
FMP_API_KEY = os.getenv("FMP_API_KEY")

if "altman_results" not in st.session_state:
    st.session_state["altman_results"] = None

if "dtd_results" not in st.session_state:
    st.session_state["dtd_results"] = None

###############################################################################
# HELPER FUNCTIONS (Altman Z)
###############################################################################
def get_fmp_json(url):
    """
    Retrieves JSON from the specified URL and returns as a list.
    Omits direct mention of data source in any error messages.
    """
    r = requests.get(url)
    try:
        data = r.json()
        if not isinstance(data, list):
            return []
        return data
    except Exception:
        return []

def fetch_fmp_annual(endpoint):
    """
    Fetches annual data from the endpoint, sorts by date if present.
    """
    data = get_fmp_json(endpoint)
    df = pd.DataFrame(data)
    if not df.empty and 'date' in df.columns:
        df['date'] = pd.to_datetime(df['date'])
        df.sort_values('date', inplace=True)
    return df

###############################################################################
# HELPER FUNCTIONS (Distance-to-Default)
###############################################################################
def solve_merton(E, sigma_E, D, T, r):
    """
    Merton model solver:
      E = A * N(d1) - D * exp(-rT) * N(d2)
      sigma_E = (A / E) * N(d1) * sigma_A
    """
    def equations(vars_):
        A_, sigmaA_ = vars_
        d1_ = (np.log(A_ / D) + (r + 0.5 * sigmaA_**2) * T) / (sigmaA_ * np.sqrt(T))
        d2_ = d1_ - sigmaA_ * np.sqrt(T)
        eq1 = A_ * norm.cdf(d1_) - D * np.exp(-r * T) * norm.cdf(d2_) - E
        eq2 = sigma_E - (A_ / E) * norm.cdf(d1_) * sigmaA_
        return (eq1, eq2)

    A_guess = E + D
    sigmaA_guess = sigma_E * (E / (E + D))
    A_star, sigmaA_star = fsolve(equations, [A_guess, sigmaA_guess], maxfev=3000)
    return A_star, sigmaA_star

def distance_to_default(A, D, T, r, sigmaA):
    """
    Merton distance to default (d2):
      d2 = [ln(A/D) + (r - 0.5*sigmaA^2)*T] / (sigmaA * sqrt(T))
    """
    return (np.log(A / D) + (r - 0.5 * sigmaA**2) * T) / (sigmaA * np.sqrt(T))

###############################################################################
# ALTMAN Z-SCORE EXECUTION (From Provided Code)
###############################################################################
def run_altman_zscore_calculations(ticker, years_back):
    """
    Uses the original user-provided Altman Z code to fetch and compute partials.
    Returns the final DataFrame with partials and total Z-scores.
    """
    # 1) FETCH ANNUAL STATEMENTS
    income_url = f"https://financialmodelingprep.com/api/v3/income-statement/{ticker}?period=annual&limit=100&apikey={FMP_API_KEY}"
    balance_url = f"https://financialmodelingprep.com/api/v3/balance-sheet-statement/{ticker}?period=annual&limit=100&apikey={FMP_API_KEY}"

    income_df = fetch_fmp_annual(income_url)
    balance_df = fetch_fmp_annual(balance_url)

    merged_bi = pd.merge(balance_df, income_df, on='date', how='inner', suffixes=('_bal','_inc'))
    merged_bi.sort_values('date', inplace=True)

    if merged_bi.empty:
        st.warning("No statements to analyze for this ticker/date range.")
        return pd.DataFrame()

    # 2) FILTER TO LAST X YEARS
    end_date = pd.Timestamp.today()
    start_date = end_date - relativedelta(years=years_back)

    merged_bi = merged_bi[(merged_bi['date'] >= start_date) & (merged_bi['date'] <= end_date)]
    merged_bi.sort_values('date', inplace=True)

    if merged_bi.empty:
        st.warning("No financial statements found in the chosen range.")
        return pd.DataFrame()

    # 3) FETCH HISTORICAL MARKET CAP
    mktcap_df = pd.DataFrame()
    iterations = (years_back // 5) + (1 if years_back % 5 != 0 else 0)

    for i in range(iterations):
        period_end_date = end_date - relativedelta(years=i * 5)
        period_start_date = period_end_date - relativedelta(years=5)

        if period_start_date < start_date:
            period_start_date = start_date

        mktcap_url = (
            f"https://financialmodelingprep.com/api/v3/historical-market-capitalization/{ticker}"
            f"?from={period_start_date.date()}&to={period_end_date.date()}&apikey={FMP_API_KEY}"
        )
        mktcap_data = get_fmp_json(mktcap_url)
        mktcap_period_df = pd.DataFrame(mktcap_data)
        
        if not mktcap_period_df.empty and 'date' in mktcap_period_df.columns:
            mktcap_period_df['date'] = pd.to_datetime(mktcap_period_df['date'])
            mktcap_period_df.rename(columns={'marketCap': 'historical_market_cap'}, inplace=True)
            mktcap_df = pd.concat([mktcap_df, mktcap_period_df], ignore_index=True)

    mktcap_df = mktcap_df.sort_values('date').drop_duplicates(subset=['date'])
    if not mktcap_df.empty and 'date' in mktcap_df.columns:
        mktcap_df['date'] = pd.to_datetime(mktcap_df['date'])
        mktcap_df = mktcap_df[(mktcap_df['date'] >= start_date) & (mktcap_df['date'] <= end_date)]
        mktcap_df.sort_values('date', inplace=True)
    else:
        mktcap_df = pd.DataFrame(columns=['date','historical_market_cap'])

    if not merged_bi.empty and not mktcap_df.empty:
        merged_bi = pd.merge_asof(
            merged_bi.sort_values('date'),
            mktcap_df.sort_values('date'),
            on='date',
            direction='nearest'
        )
    else:
        merged_bi['historical_market_cap'] = np.nan

    # 4) COMPUTE PARTIAL CONTRIBUTIONS
    z_rows = []
    for _, row in merged_bi.iterrows():
        ta = row.get('totalAssets', np.nan)
        tl = row.get('totalLiabilities', np.nan)
        if pd.isnull(ta) or pd.isnull(tl) or ta == 0 or tl == 0:
            continue

        rev = row.get('revenue', 0)
        hist_mcap = row.get('historical_market_cap', np.nan)
        if pd.isnull(hist_mcap):
            continue

        tca = row.get('totalCurrentAssets', np.nan)
        tcl = row.get('totalCurrentLiabilities', np.nan)
        if pd.isnull(tca) or pd.isnull(tcl):
            continue

        wc = (tca - tcl)   
        re = row.get('retainedEarnings', 0)
        ebit = row.get('operatingIncome', np.nan)
        if pd.isnull(ebit):
            ebit = row.get('ebitda', 0)

        X1 = wc / ta
        X2 = re / ta
        X3 = ebit / ta
        X4 = hist_mcap / tl
        X5 = rev / ta if ta != 0 else 0

        # Original Z
        o_part1 = 1.2 * X1  
        o_part2 = 1.4 * X2  
        o_part3 = 3.3 * X3  
        o_part4 = 0.6 * X4  
        o_part5 = 1.0 * X5  
        z_original = o_part1 + o_part2 + o_part3 + o_part4 + o_part5

        # Z''
        d_part1 = 6.56 * X1  
        d_part2 = 3.26 * X2  
        d_part3 = 6.72 * X3  
        d_part4 = 1.05 * X4  
        z_double_prime = d_part1 + d_part2 + d_part3 + d_part4

        # Z'''
        t_part1 = 3.25 * X1  
        t_part2 = 2.85 * X2  
        t_part3 = 4.15 * X3  
        t_part4 = 0.95 * X4  
        z_triple_prime_service = t_part1 + t_part2 + t_part3 + t_part4

        z_rows.append({
            'date': row['date'],

            # Original partials
            'o_part1': o_part1,
            'o_part2': o_part2,
            'o_part3': o_part3,
            'o_part4': o_part4,
            'o_part5': o_part5,
            'z_original': z_original,

            # Z'' partials
            'd_part1': d_part1,
            'd_part2': d_part2,
            'd_part3': d_part3,
            'd_part4': d_part4,
            'z_double_prime': z_double_prime,

            # Z''' partials
            't_part1': t_part1,
            't_part2': t_part2,
            't_part3': t_part3,
            't_part4': t_part4,
            'z_triple_prime_service': z_triple_prime_service
        })

    z_df = pd.DataFrame(z_rows)
    z_df.sort_values('date', inplace=True)
    return z_df

###############################################################################
# DTD EXECUTION (From Provided Code)
###############################################################################
def calculate_yearly_distance_to_default(
    symbol="AAPL",
    years_back=10,
    debt_method="TOTAL",
    risk_free_ticker="^TNX",
    apikey="YOUR_FMP_API_KEY",
):
    """
    Fetches up to `years_back` years of annual data, merges market cap, debt, 
    and risk-free yields. Then computes Merton Distance to Default for each year.
    Returns a DataFrame.
    """
    end_date = date.today()
    start_date = end_date - dt.timedelta(days=365 * years_back)

    # Market cap
    df_mcap = pd.DataFrame()
    iterations = (years_back // 5) + (1 if years_back % 5 != 0 else 0)
    for i in range(iterations):
        period_end_date = end_date - dt.timedelta(days=365 * i * 5)
        period_start_date = period_end_date - dt.timedelta(days=365 * 5)
        url_mcap = (
            f"https://financialmodelingprep.com/api/v3/historical-market-capitalization/"
            f"{symbol}?from={period_start_date}&to={period_end_date}&apikey={apikey}"
        )
        resp_mcap = requests.get(url_mcap)
        data_mcap = resp_mcap.json() if resp_mcap.status_code == 200 else []
        df_mcap_period = pd.DataFrame(data_mcap)
        df_mcap = pd.concat([df_mcap, df_mcap_period], ignore_index=True)

    if df_mcap.empty or "date" not in df_mcap.columns:
        raise ValueError("No market cap data returned. Check your inputs.")
    df_mcap["year"] = pd.to_datetime(df_mcap["date"]).dt.year
    df_mcap = (
        df_mcap.groupby("year", as_index=False)
        .agg({"marketCap": "mean"})
        .sort_values("year", ascending=False)
    )

    # Balance Sheet
    url_bs = f"https://financialmodelingprep.com/api/v3/balance-sheet-statement/{symbol}?period=annual&apikey={apikey}"
    resp_bs = requests.get(url_bs)
    data_bs = resp_bs.json() if resp_bs.status_code == 200 else []
    df_bs = pd.DataFrame(data_bs)
    if df_bs.empty or "date" not in df_bs.columns:
        raise ValueError("No balance sheet data returned. Check your inputs.")
    df_bs["year"] = pd.to_datetime(df_bs["date"]).dt.year
    keep_cols = ["year", "shortTermDebt", "longTermDebt", "totalDebt", "date"]
    df_bs = df_bs[keep_cols].sort_values("year", ascending=False)

    # Risk-free from yfinance
    rf_ticker_obj = yf.Ticker(risk_free_ticker)
    rf_data = rf_ticker_obj.history(start=start_date, end=end_date, auto_adjust=False)
    if rf_data.empty or "Close" not in rf_data.columns:
        raise ValueError("No valid risk-free rate data found. Check your inputs.")
    rf_data = rf_data.reset_index()
    rf_data["year"] = rf_data["Date"].dt.year
    rf_data = rf_data[["year", "Close"]]
    rf_yearly = rf_data.groupby("year", as_index=False)["Close"].mean()
    rf_yearly.rename(columns={"Close": "rf_yield"}, inplace=True)
    rf_yearly["rf_yield"] = rf_yearly["rf_yield"] / 100.0  # decimal

    # Merge
    df_all = pd.merge(df_mcap, df_bs, on="year", how="left")
    df_all = pd.merge(df_all, rf_yearly, on="year", how="left")

    # Merton each year
    results = []
    for _, row in df_all.iterrows():
        yr = row["year"]
        E = row["marketCap"]
        if pd.isna(E) or E <= 0:
            continue

        shortD = row.get("shortTermDebt", 0) or 0
        longD = row.get("longTermDebt", 0) or 0
        totalD = row.get("totalDebt", 0) or 0
        if debt_method.upper() == "STPLUSLT":
            D = shortD + longD
        elif debt_method.upper() == "STPLUSHALFLT":
            D = shortD + 0.5 * longD
        else:
            D = totalD
        if not D or D <= 0:
            D = np.nan

        r_val = row.get("rf_yield", 0.03)

        from_dt = f"{yr}-01-01"
        to_dt = f"{yr}-12-31"
        url_hist = (
            f"https://financialmodelingprep.com/api/v3/historical-price-full/{symbol}"
            f"?from={from_dt}&to={to_dt}&apikey={apikey}"
        )
        resp_hist = requests.get(url_hist)
        data_hist = resp_hist.json() if resp_hist.status_code == 200 else {}
        daily_prices = data_hist.get("historical", [])

        if not daily_prices:
            sigma_E = 0.30
        else:
            df_prices = pd.DataFrame(daily_prices)
            df_prices.sort_values("date", inplace=True)
            close_vals = df_prices["close"].values
            log_rets = np.diff(np.log(close_vals))
            daily_vol = np.std(log_rets)
            sigma_E = daily_vol * np.sqrt(252)
            if sigma_E < 1e-4:
                sigma_E = 0.30

        T = 1.0
        if not np.isnan(D):
            try:
                A_star, sigmaA_star = solve_merton(E, sigma_E, D, T, r_val)
                dtd_value = distance_to_default(A_star, D, T, r_val, sigmaA_star)
            except:
                A_star, sigmaA_star, dtd_value = np.nan, np.nan, np.nan
        else:
            A_star, sigmaA_star, dtd_value = np.nan, np.nan, np.nan

        results.append({
            "year": yr,
            "marketCap": E,
            "shortTermDebt": shortD,
            "longTermDebt": longD,
            "totalDebt": totalD,
            "chosenDebt": D,
            "rf": r_val,
            "sigma_E": sigma_E,
            "A_star": A_star,
            "sigmaA_star": sigmaA_star,
            "DTD": dtd_value
        })

    result_df = pd.DataFrame(results).sort_values("year")
    return result_df

###############################################################################
# PLOTTING HELPERS
###############################################################################
def plot_zscore_figure(df, date_col, partial_cols, total_col, partial_names, total_name, title_text, zones, ticker):
    """
    Creates stacked bar for partial contributions plus a line for the total Z.
    Draws shading for distress/gray/safe zones. Full width.
    """
    fig = go.Figure()

    x_min = df[date_col].min()
    x_max = df[date_col].max()
    total_max = df[total_col].max()
    partial_sum_max = df[partial_cols].sum(axis=1).max() if not df[partial_cols].empty else 0
    y_max = max(total_max, partial_sum_max, 0) * 1.2
    y_min = min(df[total_col].min(), 0) * 1.2 if df[total_col].min() < 0 else 0

    # Distress
    fig.add_shape(
        type="rect",
        x0=x_min, x1=x_max,
        y0=y_min, y1=zones['distress'],
        fillcolor="red",
        opacity=0.2,
        layer="below",
        line=dict(width=0)
    )
    # Gray
    fig.add_shape(
        type="rect",
        x0=x_min, x1=x_max,
        y0=zones['gray_lower'], y1=zones['gray_upper'],
        fillcolor="gray",
        opacity=0.2,
        layer="below",
        line=dict(width=0)
    )
    # Safe
    fig.add_shape(
        type="rect",
        x0=x_min, x1=x_max,
        y0=zones['safe'], y1=y_max,
        fillcolor="green",
        opacity=0.2,
        layer="below",
        line=dict(width=0)
    )

    # Stacked bars
    for col, name, color in partial_names:
        fig.add_trace(go.Bar(
            x=df[date_col],
            y=df[col],
            name=name,
            marker_color=color
        ))

    # Line
    fig.add_trace(go.Scatter(
        x=df[date_col],
        y=df[total_col],
        mode='lines+markers+text',
        text=df[total_col].round(2),
        textposition='top center',
        textfont=dict(size=16),
        name=total_name,
        line=dict(color='white', width=2)
    ))

    fig.update_layout(
        title=dict(
            text=f"{title_text} for {ticker}",
            font=dict(size=26, color="white")
        ),
        
        legend=dict(
            font=dict(color="white", size=18)
        ),
        barmode="stack",
        template="plotly_dark",
        paper_bgcolor="#0e1117",
        plot_bgcolor="#0e1117",
        xaxis=dict(
            title="Year",
            tickangle=45,
            tickformat="%Y",
            dtick="M12",
            showgrid=True,
            gridcolor="rgba(255, 255, 255, 0.1)"
        ),
        yaxis=dict(
            title="Z-Score Contribution",
            showgrid=True,
            gridcolor="rgba(255, 255, 255, 0.1)"
        ),
        margin=dict(l=40, r=40, t=80, b=80),
        height=700
    )
    st.plotly_chart(fig, use_container_width=True)

###############################################################################
# STREAMLIT APP
###############################################################################
st.title("Bankruptcy Risk Estimation")

#st.write("## Overview")
st.write("This tool assesses a firm's bankruptcy and default risk using two widely recognized models:")
st.write("1) **Altman Z-Score**: A financial distress predictor based on accounting ratios.")
st.write("2) **Merton Distance-to-Default (DTD)**: A market-based risk measure derived from option pricing theory.")
#st.write("Select a page from the sidebar to explore each model’s estimates and methodology.")

# Sidebar for user inputs
with st.sidebar:
    st.write("## Input Parameters")
    
    # Page selector in an open-by-default expander
    with st.expander("Page Selector", expanded=True):
        page = st.radio("Select Page:", ["Altman Z Score", "Distance-to-Default"])
    
    with st.expander("General Settings", expanded=True):
        ticker = st.text_input("Ticker", value="AAPL", help="Enter a valid stock ticker")
        years_back = st.number_input("Years back", min_value=1, max_value=30, value=10, step=1,
                                     help="How many years of data to retrieve?")
    run_button = st.button("Run Analysis", help="Fetch data and compute metrics")

# If user clicks to run, fetch data
if run_button:
    # Altman Z
    z_data = run_altman_zscore_calculations(ticker, years_back)
    st.session_state["altman_results"] = z_data

    # DTD
    try:
        dtd_df = calculate_yearly_distance_to_default(
            symbol=ticker,
            years_back=years_back,
            debt_method="TOTAL",
            risk_free_ticker="^TNX",
            apikey=FMP_API_KEY
        )
        st.session_state["dtd_results"] = dtd_df
    except ValueError:
        st.warning("No valid data was returned. Check your inputs.")

###############################################################################
# PAGE 1: ALTMAN Z
###############################################################################
if page == "Altman Z Score":
    z_df = st.session_state.get("altman_results", None)
    if z_df is None or z_df.empty:
        st.info("Select Page, input the paramters and click 'Run Analysis' on the sidebar.")
    else:
        # Original
        #st.subheader("Original Altman Z-Score (1968)")
        
        with st.expander("Methodology: Original Altman Z-Score (1968)", expanded=False):
            st.write("The **Altman Z-Score** is a financial distress prediction model developed by Edward Altman in 1968. It combines five financial ratios to assess the likelihood of corporate bankruptcy.")
            
            # Formula
            st.latex(r"Z = 1.2 \times X_1 + 1.4 \times X_2 + 3.3 \times X_3 + 0.6 \times X_4 + 1.0 \times X_5")
            
            # Definitions of variables
            st.latex(r"X_1 = \frac{\text{Working Capital}}{\text{Total Assets}}")
            st.write("**Liquidity (X₁)**: Measures short-term financial health by comparing working capital to total assets. Higher values suggest better liquidity and lower default risk.")

            st.latex(r"X_2 = \frac{\text{Retained Earnings}}{\text{Total Assets}}")
            st.write("**Accumulated Profitability (X₂)**: Indicates the proportion of assets financed through retained earnings. Firms with strong retained earnings are less dependent on external financing.")

            st.latex(r"X_3 = \frac{\text{EBIT}}{\text{Total Assets}}")
            st.write("**Earnings Strength (X₃)**: EBIT (Earnings Before Interest and Taxes) relative to total assets reflects operating profitability and efficiency.")

            st.latex(r"X_4 = \frac{\text{Market Value of Equity}}{\text{Total Liabilities}}")
            st.write("**Leverage (X₄)**: Compares a firm's market capitalization to its total liabilities. A higher ratio suggests lower financial risk, as equity holders have a stronger claim.")

            st.latex(r"X_5 = \frac{\text{Revenue}}{\text{Total Assets}}")
            st.write("**Asset Turnover (X₅)**: Assesses how efficiently a company generates revenue from its assets. High turnover suggests better asset utilization.")

            # Academic Justification
            st.write("##### Academic Justification")
            st.write(
                "The Altman Z-Score was developed using **discriminant analysis** on a dataset of manufacturing firms. "
                "The Altman Z-Score was found to correctly predict bankruptcy **72-80%** of the time in original studies, typically with a **one-year lead time** before actual default."
                "The model’s strength lies in its ability to quantify financial health across multiple dimensions—liquidity, profitability, leverage, and efficiency."
            )

            # Interpretation
            st.write("##### Interpretation")
            st.write(
                "**Z > 2.99**: Company is considered financially healthy (Low risk of bankruptcy).  \n"
                "**1.81 ≤ Z ≤ 2.99**: 'Gray Area' where financial stability is uncertain.  \n"
                "**Z < 1.81**: High financial distress, indicating potential bankruptcy risk."
            )

            # Downsides / Limitations
            st.write("##### Limitations")
            st.write(
                "- Developed using **only manufacturing firms**, which limits its applicability to other industries.\n"
                "- Uses **historical accounting data**, which may not reflect current market conditions.\n"
                "- Market Value of Equity (X₄) makes the score **sensitive to stock price volatility**.\n"
                "- Does not incorporate forward-looking indicators such as market sentiment or macroeconomic risks."
            )
        
        
        orig_partial_names = [
            ('o_part1', "1.2 × (WC/TA)", 'blue'),
            ('o_part2', "1.4 × (RE/TA)", 'orange'),
            ('o_part3', "3.3 × (EBIT/TA)", 'green'),
            ('o_part4', "0.6 × (MktCap/TL)", 'red'),
            ('o_part5', "1.0 × (Rev/TA)", 'purple'),
        ]
        orig_zones = {
            'distress': 1.81,
            'gray_lower': 1.81,
            'gray_upper': 2.99,
            'safe': 2.99
        }
        plot_zscore_figure(
            df=z_df,
            date_col='date',
            partial_cols=['o_part1','o_part2','o_part3','o_part4','o_part5'],
            total_col='z_original',
            partial_names=orig_partial_names,
            total_name="Original Z (Total)",
            title_text="Original Altman Z-Score (1968)",
            zones=orig_zones,
            ticker=ticker
        )
        

        with st.expander("Interpretation", expanded=False):
            # EXACT TEXT from user code (Original Z)
            latest_z = z_df['z_original'].iloc[-1]
            # For time-series logic:
            first_val = z_df['z_original'].iloc[0]
            if latest_z > first_val:
                trend = "increased"
            elif latest_z < first_val:
                trend = "decreased"
            else:
                trend = "remained the same"
            min_val = z_df['z_original'].min()
            max_val = z_df['z_original'].max()
            min_idx = z_df['z_original'].idxmin()
            max_idx = z_df['z_original'].idxmax()
            min_year = z_df.loc[min_idx, 'date'].year
            max_year = z_df.loc[max_idx, 'date'].year

            st.write("**--- Interpretation for Original Z-Score ---")
            st.write(f"Over the entire time series, the Z-Score has {trend}.")
            st.write(f"The lowest value was {min_val:.2f} in {min_year}.")
            st.write(f"The highest value was {max_val:.2f} in {max_year}.")

            if latest_z < orig_zones['distress']:
                st.write("Current reading is in distress zone. This suggests high financial risk.")
            elif latest_z < orig_zones['gray_upper']:
                st.write("Current reading is in the gray area. This signals mixed financial stability.")
            else:
                st.write("Current reading is in the safe zone. This implies a stronger financial condition.")

            latest_data = z_df.iloc[-1]
            orig_partials = {
                'o_part1': latest_data['o_part1'],
                'o_part2': latest_data['o_part2'],
                'o_part3': latest_data['o_part3'],
                'o_part4': latest_data['o_part4'],
                'o_part5': latest_data['o_part5']
            }
            key_driver = max(orig_partials, key=orig_partials.get)
            if key_driver == 'o_part1':
                st.write("The most significant factor is Working Capital. This suggests the company's ability to cover short-term obligations with current assets. ")
                st.write("A high contribution from Working Capital means strong liquidity, but too much could indicate inefficient capital allocation. ")
                st.write("If the company holds excess current assets, it may not be deploying resources efficiently for growth.")
            elif key_driver == 'o_part2':
                st.write("The most significant factor is Retained Earnings. This reflects the company's history of profitability and reinvestment. ")
                st.write("A high retained earnings contribution indicates that past profits have been reinvested rather than paid out as dividends. ")
                st.write("This can be a positive sign of financial stability, but if earnings retention is excessive, investors may question the company’s capital allocation strategy.")
            elif key_driver == 'o_part3':
                st.write("The most significant factor is EBIT (Earnings Before Interest and Taxes). This underscores the company’s ability to generate profits from operations. ")
                st.write("A high EBIT contribution suggests that core business activities are profitable and drive financial health. ")
                st.write("However, if EBIT dominates the Z-Score, it may mean the company is heavily reliant on operational earnings, making it vulnerable to downturns in revenue.")
            elif key_driver == 'o_part4':
                st.write("The most significant factor is Market Cap to Liabilities. This reflects investor confidence in the company’s future performance relative to its debt burden. ")
                st.write("A strong market cap contribution means investors perceive the company as having high equity value compared to liabilities, reducing bankruptcy risk. ")
                st.write("However, if this is the dominant driver, financial stability may be tied to market sentiment, which can be volatile.")
            elif key_driver == 'o_part5':
                st.write("The most significant factor is Revenue. This indicates that top-line growth is a major driver of financial stability. ")
                st.write("A high revenue contribution is positive if it translates to strong margins, but if costs are rising at the same pace, profitability may not improve. ")
                st.write("If revenue dominates the Z-Score, the company must ensure sustainable cost management and profitability to maintain financial strength.")

            st.write("If management seeks to lower this Z-Score, they might reduce liquidity or raise liabilities.")
            st.write("A higher liability base or weaker earnings can press the score downward.")

        # Z'' 
        #st.subheader("Z'' (1993, Non-Manufacturing)")
        
        with st.expander("Methodology: Z'' (1993, Non-Manufacturing)", expanded=False):
            st.write("The **Z''-Score (1993)** is an adaptation of the original Altman Z-Score, developed to assess financial distress in **non-manufacturing firms**, particularly service and retail sectors. It removes the revenue-based efficiency metric (X₅) and adjusts weightings to better fit firms with different asset structures.")

            # Formula
            st.latex(r"Z'' = 6.56 \times X_1 + 3.26 \times X_2 + 6.72 \times X_3 + 1.05 \times X_4")

            # Definitions of variables
            st.latex(r"X_1 = \frac{\text{Working Capital}}{\text{Total Assets}}")
            st.write("**Liquidity (X₁)**: Measures short-term financial flexibility. Firms with higher working capital relative to assets are better positioned to meet short-term obligations.")

            st.latex(r"X_2 = \frac{\text{Retained Earnings}}{\text{Total Assets}}")
            st.write("**Cumulative Profitability (X₂)**: Higher retained earnings relative to total assets suggest long-term profitability and financial resilience.")

            st.latex(r"X_3 = \frac{\text{EBIT}}{\text{Total Assets}}")
            st.write("**Operating Profitability (X₃)**: Measures how efficiently a company generates profit from its assets, reflecting core business strength.")

            st.latex(r"X_4 = \frac{\text{Market Value of Equity}}{\text{Total Liabilities}}")
            st.write("**Leverage (X₄)**: A firm's ability to cover its liabilities with market value equity. A lower ratio suggests greater financial risk.")

            # Academic Justification
            st.write("##### Academic Justification")
            st.write(
                "The original Z-Score was optimized for **manufacturing firms**, making it less effective for firms with fewer tangible assets. "
                "Z'' (1993) improves bankruptcy prediction for **service and retail firms**, as it excludes the revenue turnover component (X₅) "
                "and places greater emphasis on profitability and liquidity. Empirical studies found Z'' to be **better suited for firms with lower capital intensity**."
            )

            # Interpretation
            st.write("##### Interpretation")
            st.write(
                "**Z'' > 2.60**: Firm is financially stable, with low bankruptcy risk.  \n"
                "**1.10 ≤ Z'' ≤ 2.60**: 'Gray Area'—financial condition is uncertain.  \n"
                "**Z'' < 1.10**: Firm is in financial distress, at a higher risk of default."
            )

            # Downsides / Limitations
            st.write("##### Limitations")
            st.write(
                "- Developed for **non-manufacturing firms**, but may not be applicable to banks or financial institutions.\n"
                "- Still **relies on historical accounting data**, which may not fully capture real-time financial conditions.\n"
                "- Market-based variable (X₄) makes the score **sensitive to stock market fluctuations**.\n"
                "- Does not consider external macroeconomic risks or qualitative factors like management decisions."
            )

        
        double_partial_names = [
            ('d_part1', "6.56 × (WC/TA)", 'blue'),
            ('d_part2', "3.26 × (RE/TA)", 'orange'),
            ('d_part3', "6.72 × (EBIT/TA)", 'green'),
            ('d_part4', "1.05 × (MktCap/TL)", 'red'),
        ]
        double_zones = {
            'distress': 1.1,
            'gray_lower': 1.1,
            'gray_upper': 2.6,
            'safe': 2.6
        }
        plot_zscore_figure(
            df=z_df,
            date_col='date',
            partial_cols=['d_part1','d_part2','d_part3','d_part4'],
            total_col='z_double_prime',
            partial_names=double_partial_names,
            total_name="Z'' (Total)",
            title_text="Z'' (1993, Non-Manufacturing)",
            zones=double_zones,
            ticker=ticker
        )

        with st.expander("Interpretation", expanded=False):
            latest_z_double = z_df['z_double_prime'].iloc[-1]
            first_val = z_df['z_double_prime'].iloc[0]
            if latest_z_double > first_val:
                trend_d = "increased"
            elif latest_z_double < first_val:
                trend_d = "decreased"
            else:
                trend_d = "remained the same"

            min_val_d = z_df['z_double_prime'].min()
            max_val_d = z_df['z_double_prime'].max()
            min_idx_d = z_df['z_double_prime'].idxmin()
            max_idx_d = z_df['z_double_prime'].idxmax()
            min_year_d = z_df.loc[min_idx_d, 'date'].year
            max_year_d = z_df.loc[max_idx_d, 'date'].year

            st.write("**--- Interpretation for Z'' (Non-Manufacturing) ---**")
            st.write(f"Over the chosen period, the Z-Score has {trend_d}.")
            st.write(f"Lowest: {min_val_d:.2f} in {min_year_d}.")
            st.write(f"Highest: {max_val_d:.2f} in {max_year_d}.")

            if latest_z_double < double_zones['distress']:
                st.write("Current reading is in distress zone. Financial risk is elevated.")
            elif latest_z_double < double_zones['gray_upper']:
                st.write("Current reading is in the gray zone. Financial signals are not clear.")
            else:
                st.write("Current reading is in the safe zone. Financial picture seems stable.")

            latest_data_double = z_df.iloc[-1]
            double_partials = {
                'd_part1': latest_data_double['d_part1'],
                'd_part2': latest_data_double['d_part2'],
                'd_part3': latest_data_double['d_part3'],
                'd_part4': latest_data_double['d_part4']
            }
            key_driver_double = max(double_partials, key=double_partials.get)

            if key_driver_double == 'd_part1':
                st.write("The key factor is Working Capital. This measures the company’s ability to cover short-term liabilities with current assets.")
                st.write("A strong working capital contribution means the company has a healthy liquidity buffer, reducing short-term financial risk.")
                st.write("However, excessive working capital can signal inefficient capital deployment, where too much cash is tied up in receivables or inventory.")
            elif key_driver_double == 'd_part2':
                st.write("The key factor is Retained Earnings. This represents accumulated profits that have been reinvested rather than distributed as dividends.")
                st.write("A high retained earnings contribution suggests financial discipline and the ability to self-finance operations, reducing reliance on external funding.")
                st.write("However, if retained earnings are excessive, investors may question whether the company is efficiently reinvesting in growth opportunities or hoarding cash.")
            elif key_driver_double == 'd_part3':
                st.write("The key factor is EBIT (Earnings Before Interest and Taxes). This highlights the strength of the company’s core operations in driving profitability.")
                st.write("A high EBIT contribution is a strong indicator of financial health, as it suggests the company generates consistent earnings before financing costs.")
                st.write("However, if EBIT is the dominant driver, the company may be vulnerable to economic downturns or market shifts that impact its ability to sustain margins.")
            elif key_driver_double == 'd_part4':
                st.write("The key factor is Market Cap vs. Liabilities. This shows how the market values the company relative to its total debt obligations.")
                st.write("A strong contribution from this metric suggests investor confidence in the company’s financial future, lowering perceived bankruptcy risk.")
                st.write("However, if market sentiment is the main driver, the company could be vulnerable to stock price fluctuations rather than underlying business fundamentals.")

            st.write("To decrease this score, raising debt or reducing EBIT can cause the drop.")
            st.write("An increase in liabilities often pulls down the ratio.")

        # Z'''
        #st.subheader("Z''' (2023, Service/Tech)")
        
        with st.expander("Methodology: Z''' (2023, Service/Tech)", expanded=False):
            st.write("The **Z'''-Score (2023)** is a further refinement of the Altman Z models, designed to assess financial distress in **modern service and technology firms**. This version accounts for the **intangible asset-heavy nature** of these companies, where traditional balance sheet metrics may not fully capture financial health.")

            # Formula
            st.latex(r"Z''' = 3.25 \times X_1 + 2.85 \times X_2 + 4.15 \times X_3 + 0.95 \times X_4")

            # Definitions of variables
            st.latex(r"X_1 = \frac{\text{Working Capital}}{\text{Total Assets}}")
            st.write("**Liquidity (X₁)**: Measures short-term financial flexibility. A strong working capital position helps firms cover immediate liabilities.")

            st.latex(r"X_2 = \frac{\text{Retained Earnings}}{\text{Total Assets}}")
            st.write("**Accumulated Profitability (X₂)**: Indicates the extent to which a firm’s assets are funded by retained earnings rather than external debt or equity.")

            st.latex(r"X_3 = \frac{\text{EBIT}}{\text{Total Assets}}")
            st.write("**Core Earnings Strength (X₃)**: Measures profitability before interest and taxes, reflecting operational efficiency.")

            st.latex(r"X_4 = \frac{\text{Market Value of Equity}}{\text{Total Liabilities}}")
            st.write("**Market Confidence (X₄)**: Assesses how the market values the firm relative to its total liabilities. Higher values suggest lower financial risk.")

            # Academic Justification
            st.write("##### Academic Justification")
            st.write(
                "Unlike traditional manufacturing firms, **service and tech firms rely heavily on intangible assets** (e.g., software, R&D, brand equity), "
                "which are often not reflected on the balance sheet. **Z''' (2023) adjusts for this** by rebalancing weightings to better account for profitability "
                "and market valuation. It provides a more relevant measure for industries where physical assets play a reduced role in financial stability."
            )

            # Interpretation
            st.write("##### Interpretation")
            st.write(
                "**Z''' > 2.90**: Firm is financially stable, with a low probability of distress.  \n"
                "**1.50 ≤ Z''' ≤ 2.90**: 'Gray Area'—financial condition is uncertain.  \n"
                "**Z''' < 1.50**: Firm is in financial distress, with an elevated bankruptcy risk."
            )

            # Downsides / Limitations
            st.write("##### Limitations")
            st.write(
                "- Developed for **service and tech firms**, but may not generalize well to capital-intensive industries.\n"
                "- **Still based on historical financial data**, which may lag behind real-time market shifts.\n"
                "- Market value component (X₄) **introduces volatility**, making results sensitive to stock price swings.\n"
                "- Does not explicitly factor in **R&D investment or future revenue potential**, which are key in tech sectors."
            )

        
        triple_partial_names = [
            ('t_part1', "3.25 × (WC/TA)", 'blue'),
            ('t_part2', "2.85 × (RE/TA)", 'orange'),
            ('t_part3', "4.15 × (EBIT/TA)", 'green'),
            ('t_part4', "0.95 × (MktCap/TL)", 'red'),
        ]
        triple_zones = {
            'distress': 1.5,
            'gray_lower': 1.5,
            'gray_upper': 2.9,
            'safe': 2.9
        }
        plot_zscore_figure(
            df=z_df,
            date_col='date',
            partial_cols=['t_part1','t_part2','t_part3','t_part4'],
            total_col='z_triple_prime_service',
            partial_names=triple_partial_names,
            total_name="Z''' (Total)",
            title_text="Z''' (2023, Service/Tech)",
            zones=triple_zones,
            ticker=ticker
        )
       
        with st.expander("Interpretation", expanded=False):
            latest_z_triple = z_df['z_triple_prime_service'].iloc[-1]
            first_val_t = z_df['z_triple_prime_service'].iloc[0]
            if latest_z_triple > first_val_t:
                trend_t = "increased"
            elif latest_z_triple < first_val_t:
                trend_t = "decreased"
            else:
                trend_t = "remained the same"

            min_val_t = z_df['z_triple_prime_service'].min()
            max_val_t = z_df['z_triple_prime_service'].max()
            min_idx_t = z_df['z_triple_prime_service'].idxmin()
            max_idx_t = z_df['z_triple_prime_service'].idxmax()
            min_year_t = z_df.loc[min_idx_t, 'date'].year
            max_year_t = z_df.loc[max_idx_t, 'date'].year

            st.write("**--- Interpretation for Z''' (Service/Tech) ---**")
            st.write(f"Across the selected years, this Z-Score has {trend_t}.")
            st.write(f"Minimum was {min_val_t:.2f} in {min_year_t}.")
            st.write(f"Maximum was {max_val_t:.2f} in {max_year_t}.")

            if latest_z_triple < triple_zones['distress']:
                st.write("Current reading is in the distress zone. This indicates possible financial strain.")
            elif latest_z_triple < triple_zones['gray_upper']:
                st.write("Current reading is in the gray range. This means uncertain financial signals.")
            else:
                st.write("Current reading is in the safe zone. Financial health looks positive.")

            latest_data_triple = z_df.iloc[-1]
            triple_partials = {
                't_part1': latest_data_triple['t_part1'],
                't_part2': latest_data_triple['t_part2'],
                't_part3': latest_data_triple['t_part3'],
                't_part4': latest_data_triple['t_part4']
            }
            key_driver_triple = max(triple_partials, key=triple_partials.get)
            if key_driver_triple == 't_part1':
                st.write("Working Capital stands out as the main influence, emphasizing the company's short-term financial flexibility.")
                st.write("A strong working capital contribution indicates a well-managed balance between current assets and liabilities, reducing liquidity risk.")
                st.write("However, if too much capital is tied up in cash or inventory, it may suggest inefficiency in deploying assets for growth.")
            elif key_driver_triple == 't_part2':
                st.write("Retained Earnings plays the biggest role, highlighting the company's ability to reinvest past profits into future growth.")
                st.write("A high retained earnings contribution suggests the company has a history of profitability and financial discipline, reducing reliance on external financing.")
                st.write("However, if retained earnings dominate, it raises questions about whether capital is allocated effectively.")
            elif key_driver_triple == 't_part3':
                st.write("EBIT is the dominant factor, meaning the company’s operational efficiency is the primary driver of financial stability.")
                st.write("A strong EBIT contribution indicates that core business activities are profitable. This supports the firm's financial health.")
                st.write("But if EBIT is the largest driver, the company may be heavily dependent on margins, making it vulnerable to cost pressures.")
            elif key_driver_triple == 't_part4':
                st.write("Market Cap vs. Liabilities leads, suggesting that investor confidence and market valuation are key drivers of financial stability.")
                st.write("A high contribution from this metric means the company’s equity is valued significantly higher than its liabilities.")
                st.write("Reliance on market sentiment can expose the firm to stock price volatility.")

            st.write("If the goal is to reduce this Z-Score, rising debt or shrinking EBIT will push it downward.")
            st.write("Lower liquidity or lower equity value can also move the score lower.")

        # Show raw data
        with st.expander("Raw Altman Z Data", expanded=False):
            st.dataframe(z_df)

###############################################################################
# PAGE 2: DISTANCE TO DEFAULT
###############################################################################
if page == "Distance-to-Default":

    dtd_df = st.session_state.get("dtd_results", None)
    if dtd_df is None or dtd_df.empty:
        st.info("Select Page, input the paramters and click 'Run Analysis' on the sidebar.")
    else:
        valid_df = dtd_df.dropna(subset=["chosenDebt", "A_star", "sigmaA_star", "DTD"])
        if valid_df.empty:
            st.warning("No valid rows for Merton calculations in the chosen range.")
        else:
            with st.expander("Methodology: Merton Distance-to-Default (DTD)", expanded=False):
                st.write(
                    "The **Distance-to-Default (DTD)** is a structural credit risk model based on Merton's (1974) option pricing theory. "
                    "It estimates the likelihood that a firm's asset value will fall below its debt obligations, triggering default."
                )

                # Merton Model Core Equations
                st.latex(r"V_t = S_t + D_t")
                st.write("**Firm Value (Vₜ)**: The total market value of the firm, consisting of equity (Sₜ) and debt (Dₜ).")

                st.latex(r"\sigma_V = \frac{S_t}{V_t} \sigma_S")
                st.write("**Asset Volatility (σ_V)**: Derived from the observed equity volatility (σ_S), using the Merton model.")

                st.latex(r"d_1 = \frac{\ln{\left(\frac{V_t}{D_t}\right)} + \left( r - \frac{1}{2} \sigma_V^2 \right)T}{\sigma_V \sqrt{T}}")
                st.latex(r"d_2 = d_1 - \sigma_V \sqrt{T}")
                st.write("**Merton's d₁ and d₂**: Standardized metrics capturing the firm's asset dynamics relative to debt.")

                st.latex(r"\text{DTD} = d_2 = \frac{\ln{\left(\frac{V_t}{D_t}\right)} + \left( r - \frac{1}{2} \sigma_V^2 \right)T}{\sigma_V \sqrt{T}}")
                st.write("**Distance-to-Default (DTD)**: Measures how many standard deviations the firm's asset value is from the default threshold (Dₜ).")

                # Academic Justification
                st.write("##### Academic Justification")
                st.write(
                    "Merton's model treats a firm's equity as a **call option** on its assets, where default occurs if asset value (Vₜ) "
                    "falls below debt (Dₜ) at time T. **DTD quantifies this probability** by measuring how far the firm is from this threshold, "
                    "adjusting for volatility. Studies show that **lower DTD values correlate with higher default probabilities**, making it "
                    "a key metric for credit risk analysis in corporate finance and banking."
                )

                # Interpretation
                st.write("##### Interpretation")
                st.write(
                    "**DTD > 2.0**: Low probability of default (strong financial health).  \n"
                    "**1.0 ≤ DTD ≤ 2.0**: Moderate risk—firm is financially stable but should be monitored.  \n"
                    "**DTD < 1.0**: High default risk—firm is approaching financial distress.  \n"
                    "**DTD < 0.0**: Extreme risk—firm’s asset value is below its debt obligations."
                )

                # Downsides / Limitations
                st.write("##### Limitations")
                st.write(
                    "- **Assumes market efficiency**, meaning it relies heavily on accurate stock price movements.\n"
                    "- **Volatility estimates impact accuracy**, as market fluctuations can distort results.\n"
                    "- **Ignores liquidity constraints**—a firm may default due to cash flow problems, even if assets exceed liabilities.\n"
                    "- **Not designed for financial institutions**, where leverage and risk dynamics differ significantly.\n"
                    "- **Short-term focused**, making it less predictive for long-term financial health."
                )
            
            #st.subheader("Annual Distance to Default (Merton Model)")

            # Chart 1
            fig_time = go.Figure()
            fig_time.add_trace(
                go.Scatter(
                    x=dtd_df["year"],
                    y=dtd_df["DTD"],
                    mode="lines+markers",
                    name="Distance to Default"
                )
            )
            fig_time.update_layout(
                title=f"{ticker} Annual Distance to Default (Merton Model)",
                title_font=dict(size=26, color="white"),
                xaxis_title="Year",
                yaxis_title="Distance to Default (d2)",
                template="plotly_dark",
                paper_bgcolor="#0e1117",
                plot_bgcolor="#0e1117",
                xaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                ),
                yaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                )
            )
            st.plotly_chart(fig_time, use_container_width=True)

            # --- Dynamic Interpretation for Chart 1 (Exact user code) ---
            with st.expander("Interpretation", expanded=False):
                dtd_series = dtd_df["DTD"].dropna()
                if len(dtd_series) > 1:
                    first_val = dtd_series.iloc[0]
                    last_val = dtd_series.iloc[-1]
                    trend_str = "increased" if last_val > first_val else "decreased" if last_val < first_val else "remained stable"
                    min_val = dtd_series.min()
                    max_val = dtd_series.max()
                    min_yr = dtd_df.loc[dtd_series.idxmin(), "year"]
                    max_yr = dtd_df.loc[dtd_series.idxmax(), "year"]

                    st.write("Dynamic Interpretation for Annual Distance to Default:")
                    st.write(f"**1) The time series shows that DTD has {trend_str} from {first_val:.2f} to {last_val:.2f}.**")
                    st.write(f"**2) The lowest DTD was {min_val:.2f} in {min_yr}, and the highest was {max_val:.2f} in {max_yr}.**")
                    if last_val < 0:
                        st.write("   Current DTD is negative. The firm may be in distress territory, implying higher default risk.")
                    elif last_val < 1:
                        st.write("   Current DTD is below 1. This suggests caution, as default risk is higher than comfortable.")
                    elif last_val < 2:
                        st.write("   Current DTD is between 1 and 2. This is moderate territory. Risk is not extreme but warrants monitoring.")
                    else:
                        st.write("   Current DTD is above 2. This generally indicates safer conditions and lower default probability.")
                else:
                    st.write("DTD time series is insufficient for a dynamic interpretation.")

            # Chart 2: Distribution
            #st.subheader("Distribution of Simulated Distance-to-Default")
            latest_data = valid_df.iloc[-1]
            A_star = latest_data["A_star"]
            sigmaA_star = latest_data["sigmaA_star"]
            D = latest_data["chosenDebt"]
            r = latest_data["rf"]
            T = 1.0
            dtd_value = latest_data["DTD"]

            num_simulations = 10000
            A_simulated = np.random.normal(A_star, sigmaA_star * A_star, num_simulations)
            A_simulated = np.where(A_simulated > 0, A_simulated, np.nan)
            DTD_simulated = (np.log(A_simulated / D) + (r - 0.5 * sigmaA_star**2) * T) / (sigmaA_star * np.sqrt(T))
            DTD_simulated = DTD_simulated[~np.isnan(DTD_simulated)]

            fig_hist = ff.create_distplot(
                [DTD_simulated],
                ["Simulated DTD"],
                show_hist=True,
                show_rug=False,
                curve_type='kde'
            )
            fig_hist.add_vline(
                x=dtd_value,
                line=dict(color="red", dash="dash"),
                annotation_text=f"Actual DTD = {dtd_value:.2f}"
            )
            fig_hist.update_layout(
                title=f"{ticker} Distribution of Simulated Distance-to-Default (DTD)",
                title_font=dict(size=26, color="white"),
                xaxis_title="Distance-to-Default (DTD)",
                yaxis_title="Frequency",
                template="plotly_dark",
                paper_bgcolor="#0e1117",
                plot_bgcolor="#0e1117",
                xaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                ),
                yaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                )
            )
            st.plotly_chart(fig_hist, use_container_width=True)

            # --- Dynamic Interpretation for Chart 2 (Exact user code) ---
            with st.expander("Interpretation", expanded=False):
                mean_sim = np.mean(DTD_simulated)
                median_sim = np.median(DTD_simulated)
                st.write("\n--- Dynamic Interpretation for DTD Distribution ---")
                st.write(f"**1) The mean simulated Distance-to-Default (DTD) is {mean_sim:.2f}, while the median is {median_sim:.2f}.**")
                if mean_sim < 0:
                    st.write("   On average, the simulations suggest the firm is in distress. A negative mean DTD implies that, in many scenarios, asset value falls below debt obligations.")
                    st.write("   This significantly raises default risk, indicating a high probability of financial distress under typical market conditions.")
                elif mean_sim < 1:
                    st.write("   A large portion of simulations yield a DTD below 1, signaling heightened risk. The firm’s financial cushion against default is thin.")
                    st.write("   Companies in this range often face higher borrowing costs and investor skepticism, as they are perceived as more vulnerable to downturns.")
                elif mean_sim < 2:
                    st.write("   The majority of simulations fall between 1 and 2, meaning the firm is not in immediate danger but isn’t fully secure either.")
                    st.write("   This suggests moderate financial health. While not at crisis levels, management should remain cautious about leverage and volatility.")
                else:
                    st.write("   The distribution is mostly above 2, implying that, under most scenarios, the firm maintains a strong buffer against default.")
                    st.write("   Companies in this range generally enjoy greater financial stability, better credit ratings, and lower risk premiums.")

                if dtd_value < mean_sim:
                    st.write(f"**2) The actual DTD ({dtd_value:.2f}) is below the simulation average ({mean_sim:.2f}).**")
                    st.write("   This suggests that the real-world financial position of the company is weaker than the average simulated outcome.")
                    st.write("   It may imply that recent market conditions or company-specific factors have increased risk beyond what the model predicts.")
                    st.write("   Management might need to reinforce liquidity or reassess capital structure to avoid sliding into higher-risk territory.")
                else:
                    st.write(f"2) The actual DTD ({dtd_value:.2f}) is above the simulation average ({mean_sim:.2f}).")
                    st.write("   This is a positive signal, suggesting that real-world financial conditions are better than the typical simulated scenario.")
                    st.write("   The firm may have a stronger-than-expected balance sheet or be benefiting from favorable market conditions.")
                    st.write("   While this is reassuring, it is important to monitor whether this stability is due to structural financial strength or short-term market factors.")

            # Chart 3: Sensitivity of DTD to Asset Value
            #st.subheader("Sensitivity of DTD to Asset Value")
            asset_range = np.linspace(D, 1.1 * A_star, 200)
            dtd_asset = (np.log(asset_range / D) + (r - 0.5 * sigmaA_star**2) * T) / (sigmaA_star * np.sqrt(T))

            fig_asset = go.Figure()
            fig_asset.add_trace(
                go.Scatter(
                    x=asset_range,
                    y=dtd_asset,
                    mode='lines',
                    name="DTD vs. Asset Value",
                    line=dict(color="blue")
                )
            )
            fig_asset.add_vline(
                x=A_star,
                line=dict(color="red", dash="dash"),
                annotation_text=f"Estimated A = {A_star:,.2f}"
            )
            fig_asset.update_layout(
                title=f"{ticker} Sensitivity of DTD to Variation in Asset Value",
                title_font=dict(size=26, color="white"),
                xaxis_title="Asset Value (A)",
                yaxis_title="Distance-to-Default (d2)",
                xaxis_type="log",
                template="plotly_dark",
                paper_bgcolor="#0e1117",
                plot_bgcolor="#0e1117",
                xaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                ),
                yaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                )
            )
            st.plotly_chart(fig_asset, use_container_width=True)

            # --- Dynamic Interpretation for Chart 3 (Exact user code) ---
            with st.expander("Interpretation", expanded=False):
                dtd_lowA = dtd_asset[0]
                dtd_highA = dtd_asset[-1]
                st.write("\nDynamic Interpretation for Asset Value Sensitivity:")
                st.write(f"**1) At the lower bound (A = {asset_range[0]:,.2f}), DTD is {dtd_lowA:.2f}.**")
                st.write(f"**2) At the higher bound (A = {asset_range[-1]:,.2f}), DTD rises to {dtd_highA:.2f}.**")
                if dtd_highA > 2:
                    st.write("   If asset value grows, the firm gains a comfortable buffer against default.")
                else:
                    st.write("   Even at higher asset values, default risk remains moderate. Growth alone may not guarantee safety.")

            # Chart 4: Sensitivity of DTD to Debt Variation
            #st.subheader("Sensitivity of DTD to Debt Variation")
            debt_range = np.linspace(0.1 * D, 1.2 * A_star, 300)
            dtd_debt = (np.log(A_star / debt_range) + (r - 0.5 * sigmaA_star**2) * T) / (sigmaA_star * np.sqrt(T))

            fig_debt = go.Figure()
            fig_debt.add_trace(
                go.Scatter(
                    x=debt_range,
                    y=dtd_debt,
                    mode='lines',
                    name="DTD vs. Debt",
                    line=dict(color="green")
                )
            )
            fig_debt.add_vline(
                x=D,
                line=dict(color="red", dash="dash"),
                annotation_text=f"Estimated D = {D:,.2f}"
            )
            fig_debt.update_layout(
                title=f"{ticker} Sensitivity of DTD to Variation in Debt (Extended Range)",
                title_font=dict(size=26, color="white"),
                xaxis_title="Debt (D)",
                yaxis_title="Distance-to-Default (d2)",
                xaxis_type="log",
                template="plotly_dark",
                paper_bgcolor="#0e1117",
                plot_bgcolor="#0e1117",
                xaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                ),
                yaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                )
            )
            st.plotly_chart(fig_debt, use_container_width=True)

            # --- Dynamic Interpretation for Chart 4 (Exact user code) ---
            with st.expander("Interpretation", expanded=False):
                dtd_lowD = dtd_debt[0]
                dtd_highD = dtd_debt[-1]
                st.write("\n--- Dynamic Interpretation for Debt Variation ---")
                st.write(f"**1) At lower debt levels (D ≈ {debt_range[0]:,.2f}), the estimated Distance-to-Default (DTD) is {dtd_lowD:.2f}.**")
                st.write(f"**2) At higher debt levels (D ≈ {debt_range[-1]:,.2f}), the estimated DTD drops to {dtd_highD:.2f}.**")

                if dtd_lowD > 2:
                    st.write("   With lower debt, the firm has a strong financial cushion. A DTD above 2 typically indicates low default risk.")
                    st.write("   This suggests the company could sustain economic downturns or earnings declines without significantly increasing its probability of distress.")
                    st.write("   In this range, the firm may enjoy better credit ratings, lower borrowing costs, and greater investor confidence.")
                elif 1 < dtd_lowD <= 2:
                    st.write("   Even with reduced debt, the firm remains in a moderate risk zone. While the probability of default is not alarming, it isn't fully secure.")
                    st.write("   This suggests that other financial pressures—such as earnings volatility or low asset returns—might be limiting the risk buffer.")
                    st.write("   Maintaining a balanced capital structure with prudent debt management will be key to ensuring financial stability.")
                else:
                    st.write("   Despite lowering debt, the firm remains in a high-risk category. This indicates that other financial weaknesses, such as low asset returns or high volatility, are still dominant.")
                    st.write("   The company may need a more aggressive strategy to strengthen its financial position, such as improving earnings stability or reducing operational risks.")

                if dtd_highD < 0:
                    st.write("   At significantly higher debt levels, the model suggests a **negative DTD**, which signals extreme financial distress.")
                    st.write("   This implies that, under this scenario, the company's total asset value would likely fall below its debt obligations.")
                    st.write("   If this situation were to materialize, the company would be seen as highly vulnerable, potentially leading to credit downgrades or refinancing difficulties.")
                elif 0 <= dtd_highD < 1:
                    st.write("   With higher debt, DTD drops to below 1, meaning the firm is dangerously close to default.")
                    st.write("   A DTD below 1 indicates that even small negative shocks to asset value could push the firm into financial distress.")
                    st.write("   This could lead to increased borrowing costs, investor concerns, and potential restrictions on raising further capital.")
                elif 1 <= dtd_highD < 2:
                    st.write("   The firm’s risk profile worsens with higher debt, but it remains in the moderate zone. The probability of distress increases but is not immediately alarming.")
                    st.write("   Companies in this range often need to manage debt maturities carefully and ensure steady cash flow generation to avoid further deterioration.")
                else:
                    st.write("   Even at a higher debt level, the firm maintains a strong buffer (DTD > 2).")
                    st.write("   This suggests the company has **enough asset value or earnings strength to comfortably manage the additional leverage**.")
                    st.write("   However, increasing debt too aggressively, even in a safe zone, could reduce financial flexibility in downturns.")

            # Chart 5: Asset Value vs. Debt
            #st.subheader("Asset Value vs. Default Point (Debt)")
            fig_bar = go.Figure()
            fig_bar.add_trace(
                go.Bar(
                    x=["Asset Value (A)", "Debt (D)"],
                    y=[A_star, D],
                    marker=dict(color=["blue", "orange"])
                )
            )
            fig_bar.update_layout(
                title=f"{ticker} Asset Value vs. Default Point",
                title_font=dict(size=26, color="white"),
                yaxis_title="Value (USD)",
                template="plotly_dark",
                paper_bgcolor="#0e1117",
                plot_bgcolor="#0e1117",
                xaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                ),
                yaxis=dict(
                    showgrid=True,
                    gridcolor="rgba(255, 255, 255, 0.1)"
                )
            )
            st.plotly_chart(fig_bar, use_container_width=True)

            # --- Dynamic Interpretation for Chart 5 (Exact user code) ---
            with st.expander("Interpretation", expanded=False):
                st.write("\n--- Dynamic Interpretation for Asset Value vs. Debt ---")
                if A_star > D:
                    st.write(f"**1) The estimated asset value ({A_star:,.2f}) exceeds total debt ({D:,.2f}), providing a financial buffer.**")
                    asset_debt_ratio = A_star / D
                    if asset_debt_ratio > 2:
                        st.write("   The asset-to-debt ratio is above 2, meaning the firm holds **more than double the assets compared to its debt obligations**.")
                        st.write("   This implies a highly secure financial position, with a strong ability to absorb economic downturns or revenue declines.")
                    elif 1.5 <= asset_debt_ratio <= 2:
                        st.write("   The asset-to-debt ratio is between 1.5 and 2, which is considered **moderately strong**.")
                        st.write("   While there is a solid financial cushion, **prudent debt management is still necessary** to maintain stability.")
                    else:
                        st.write("   The asset-to-debt ratio is between 1 and 1.5, meaning the firm has a **narrower but still positive buffer.**")
                        st.write("   This level is acceptable, but **a small decline in asset value could quickly increase financial risk.**")
                else:
                    st.write(f"1) The estimated asset value ({A_star:,.2f}) is **less than or close to total debt** ({D:,.2f}).")
                    st.write("   This signals **a limited financial cushion**, increasing the probability of distress in unfavorable conditions.")
                    if A_star < D:
                        st.write("   **Warning:** The company’s total assets are lower than its total debt.")
                        st.write("   This implies that if the company were to liquidate its assets today, it would still **not be able to fully cover its obligations**.")
                        st.write("   Such a position increases the likelihood of credit downgrades and difficulty in securing additional financing.")
                    elif A_star / D < 1.1:
                        st.write("   The asset buffer is extremely thin. A minor shock in earnings or asset valuation could put the firm in distress.")
                        st.write("   The company should consider **reducing leverage or improving asset utilization** to reinforce financial stability.")

                gap = A_star - D
                if gap > 0.5 * D:
                    st.write("**2) The firm has a **comfortable margin** between assets and debt. Even with some decline in asset value, financial stability is not immediately at risk.**")
                elif 0.2 * D < gap <= 0.5 * D:
                    st.write("2) The firm has **a moderate cushion**, but there is some vulnerability to financial shocks.")
                    st.write("   If debt levels increase or asset values decline, risk could rise quickly.")
                else:
                    st.write("2) The asset buffer is **very narrow**, making the firm susceptible to external risks such as declining revenues, rising interest rates, or asset write-downs.")
                    st.write("   A **small misstep in financial strategy could significantly increase default probability.**")

        with st.expander("Raw Distance-to-Default Data", expanded=False):
            st.dataframe(dtd_df)


# Hide default Streamlit style
st.markdown(
    """
    <style>
    #MainMenu {visibility: hidden;}
    footer {visibility: hidden;}
    </style>
    """,
    unsafe_allow_html=True
)