danielostrow committed on
Commit 3751c05 · verified · 1 Parent(s): cff7545

Upload folder using huggingface_hub

API_REFERENCE.md ADDED
@@ -0,0 +1,749 @@
# C2Sentinel API Reference

Complete technical documentation for the C2Sentinel Python API.

**Author:** Daniel Ostrow
**Website:** [neuralintellect.com](https://neuralintellect.com)

---

## Table of Contents

1. [C2Sentinel Class](#c2sentinel-class)
2. [AnalysisResult Class](#analysisresult-class)
3. [ConnectionContext Class](#connectioncontext-class)
4. [ReconSupport Class](#reconsupport-class)
5. [FeatureExtractor Class](#featureextractor-class)
6. [LogParser Class](#logparser-class)
7. [Enums and Constants](#enums-and-constants)

---

## C2Sentinel Class

Main interface for C2 detection.

### Constructor

```python
C2Sentinel(model: LogBERTC2Sentinel, config: C2SentinelConfig, device: str = 'auto')
```

| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | LogBERTC2Sentinel | The neural network model |
| `config` | C2SentinelConfig | Model configuration |
| `device` | str | Device for inference (`'auto'`, `'cpu'`, `'cuda'`) |

### Class Methods

#### load

```python
@classmethod
def load(cls, path: str, device: str = 'auto') -> 'C2Sentinel'
```

Load a pre-trained model from safetensors format.

| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | str | Path to model files (without extension) |
| `device` | str | Device for inference |

**Returns:** C2Sentinel instance

**Example:**
```python
sentinel = C2Sentinel.load('c2_sentinel')
sentinel = C2Sentinel.load('/path/to/c2_sentinel', device='cuda')
```

#### create_new

```python
@classmethod
def create_new(cls, device: str = 'auto') -> 'C2Sentinel'
```

Create a new untrained model instance.

**Returns:** C2Sentinel instance with random weights

---

### Instance Methods

#### analyze

```python
def analyze(
    self,
    connections: List[Dict],
    threshold: float = 0.5,
    context: Optional[ConnectionContext] = None,
    include_features: bool = False,
    strict_mode: bool = False
) -> AnalysisResult
```

Analyze a list of connections for C2 activity.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connections` | List[Dict] | required | List of connection records |
| `threshold` | float | 0.5 | Detection threshold (0.0-1.0) |
| `context` | ConnectionContext | None | Optional context for enrichment |
| `include_features` | bool | False | Include raw feature vector in result |
| `strict_mode` | bool | False | Enforce a minimum 0.7 threshold |

**Returns:** AnalysisResult object

**Connection Record Fields:**
```python
{
    'timestamp': float,   # Required: Unix timestamp
    'dst_ip': str,        # Required: Destination IP
    'dst_port': int,      # Required: Destination port
    'bytes_sent': int,    # Required: Bytes sent
    'bytes_recv': int,    # Required: Bytes received
    'src_ip': str,        # Optional: Source IP
    'src_port': int,      # Optional: Source port
    'protocol': str,      # Optional: 'tcp' or 'udp'
    'duration': float     # Optional: Duration in seconds
}
```

**Example:**
```python
connections = [
    {'timestamp': 1000, 'dst_ip': '10.0.0.1', 'dst_port': 443,
     'bytes_sent': 200, 'bytes_recv': 500},
    {'timestamp': 1060, 'dst_ip': '10.0.0.1', 'dst_port': 443,
     'bytes_sent': 200, 'bytes_recv': 500},
]

result = sentinel.analyze(connections)
result = sentinel.analyze(connections, threshold=0.7, strict_mode=True)
```

---

#### analyze_batch

```python
def analyze_batch(
    self,
    connection_groups: List[List[Dict]],
    threshold: float = 0.5,
    contexts: Optional[List[ConnectionContext]] = None,
    parallel: bool = True
) -> List[AnalysisResult]
```

Analyze multiple connection groups.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connection_groups` | List[List[Dict]] | required | List of connection lists |
| `threshold` | float | 0.5 | Detection threshold |
| `contexts` | List[ConnectionContext] | None | Context for each group |
| `parallel` | bool | True | Enable parallel processing |

**Returns:** List of AnalysisResult objects

**Example:**
```python
groups = [
    [conn1, conn2, conn3],
    [conn4, conn5, conn6],
]
results = sentinel.analyze_batch(groups)
```

---

#### analyze_logs

```python
def analyze_logs(
    self,
    log_lines: List[str],
    group_by_dst: bool = True,
    threshold: float = 0.5
) -> List[Dict]
```

Parse and analyze raw log lines.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `log_lines` | List[str] | required | Raw log lines |
| `group_by_dst` | bool | True | Group connections by destination IP |
| `threshold` | float | 0.5 | Detection threshold |

**Returns:** List of result dictionaries, sorted by probability (descending)

**Supported Formats:**
- JSON logs with standard fields
- Zeek/Bro conn.log (tab-separated)
- Syslog with IP:port patterns

**Example:**
```python
with open('conn.log') as f:
    lines = f.readlines()

results = sentinel.analyze_logs(lines, group_by_dst=True)
for r in results:
    print(f"{r['dst_ip']}: {r['c2_probability']}")
```

---

#### add_whitelist

```python
def add_whitelist(
    self,
    ips: Optional[List[str]] = None,
    domains: Optional[List[str]] = None
)
```

Add IPs or domains to the whitelist. Whitelisted destinations receive reduced C2 probability.

| Parameter | Type | Description |
|-----------|------|-------------|
| `ips` | List[str] | IP addresses to whitelist |
| `domains` | List[str] | Domain names to whitelist |

**Example:**
```python
sentinel.add_whitelist(
    ips=['8.8.8.8', '1.1.1.1'],
    domains=['google.com', 'github.com']
)
```

---

#### add_blacklist

```python
def add_blacklist(
    self,
    ips: Optional[List[str]] = None,
    domains: Optional[List[str]] = None
)
```

Add IPs or domains to the blacklist. Blacklisted destinations receive increased C2 probability.

| Parameter | Type | Description |
|-----------|------|-------------|
| `ips` | List[str] | IP addresses to blacklist |
| `domains` | List[str] | Domain names to blacklist |

---

#### save

```python
def save(self, path: str)
```

Save the model in safetensors format.

| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | str | Output path (without extension) |

Creates two files:
- `{path}.safetensors` - Model weights
- `{path}.json` - Configuration

---

### Instance Attributes

| Attribute | Type | Description |
|-----------|------|-------------|
| `model` | LogBERTC2Sentinel | The neural network |
| `config` | C2SentinelConfig | Model configuration |
| `device` | torch.device | Inference device |
| `feature_extractor` | FeatureExtractor | Feature extraction module |
| `log_parser` | LogParser | Log parsing module |
| `context_engine` | ContextInference | Context inference module |
| `recon` | ReconSupport | Reconnaissance module |

---

## AnalysisResult Class

Dataclass containing analysis results.

### Attributes

| Attribute | Type | Description |
|-----------|------|-------------|
| `is_c2` | bool | True if C2 detected |
| `c2_probability` | float | Probability score (0.0-1.0) |
| `anomaly_score` | float | Anomaly detection score |
| `evasion_score` | float | Evasion technique detection score |
| `confidence` | float | Model confidence in prediction |
| `c2_type` | str | Detected C2 framework type |
| `c2_type_confidence` | float | Confidence in C2 type classification |
| `detection_method` | str | Detection method used |
| `immediate_detection` | bool | True if signature-based detection |
| `context_applied` | bool | True if context was applied |
| `original_probability` | float | Probability before context adjustment |
| `probability_modifier` | float | Context probability modifier |
| `matched_legitimate_pattern` | str | Name of matched legitimate pattern |
| `legitimate_confidence` | float | Confidence in legitimate pattern match |
| `risk_factors` | List[str] | Factors supporting C2 classification |
| `mitigating_factors` | List[str] | Factors against C2 classification |
| `service_type` | str | Detected service type |
| `recommendations` | List[str] | Suggested follow-up actions |
| `features` | List[float] | Raw 40-dimensional feature vector |

### Methods

#### to_dict

```python
def to_dict(self) -> Dict[str, Any]
```

Convert result to dictionary.

**Returns:** Dictionary representation of all attributes

---

## ConnectionContext Class

Dataclass for providing additional context to improve detection accuracy.

### Constructor

```python
ConnectionContext(
    # Process information
    process_name: Optional[str] = None,
    process_path: Optional[str] = None,
    process_pid: Optional[int] = None,
    parent_process: Optional[str] = None,
    command_line: Optional[str] = None,

    # Network metadata
    dns_queries: Optional[List[str]] = None,
    resolved_hostname: Optional[str] = None,
    tls_sni: Optional[str] = None,
    tls_ja3: Optional[str] = None,
    tls_ja3s: Optional[str] = None,
    certificate_issuer: Optional[str] = None,
    certificate_subject: Optional[str] = None,
    certificate_valid: Optional[bool] = None,
    http_user_agent: Optional[str] = None,
    http_host: Optional[str] = None,

    # Reputation
    ip_reputation: Optional[float] = None,
    domain_reputation: Optional[float] = None,
    known_good: Optional[bool] = None,
    known_bad: Optional[bool] = None,
    threat_intel_match: Optional[str] = None,

    # Host context
    source_hostname: Optional[str] = None,
    source_user: Optional[str] = None,
    source_is_server: Optional[bool] = None,
    source_is_workstation: Optional[bool] = None,

    # Additional
    geo_country: Optional[str] = None,
    geo_asn: Optional[str] = None,
    tags: Optional[List[str]] = None
)
```

### Attribute Details

| Attribute | Type | Effect on Analysis |
|-----------|------|-------------------|
| `process_name` | str | Known processes reduce probability |
| `known_good` | bool | True reduces probability by 90% |
| `known_bad` | bool | True increases probability by 5x |
| `ip_reputation` | float | Score > 0.8 reduces probability |
| `threat_intel_match` | str | Match increases probability by 5x |
| `tls_ja3` | str | Known C2 JA3 increases probability |
| `certificate_valid` | bool | False increases probability |
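
The adjustments in the table above act multiplicatively on the model's raw probability. A minimal sketch of how such modifiers might combine; `apply_context_modifiers` is a hypothetical helper for illustration, not part of the C2Sentinel API, and the clamp to [0.0, 1.0] is an assumption:

```python
def apply_context_modifiers(probability, known_good=None, known_bad=None,
                            threat_intel_match=None):
    """Illustrative only: mirror the documented context effects."""
    if known_good:
        probability *= 0.1   # "reduces probability by 90%"
    if known_bad:
        probability *= 5.0   # "increases probability by 5x"
    if threat_intel_match:
        probability *= 5.0   # threat-intel match also multiplies by 5
    # Keep the score a valid probability.
    return max(0.0, min(1.0, probability))

adjusted = apply_context_modifiers(0.3, known_bad=True)
# 0.3 * 5.0 exceeds 1.0, so the clamp returns 1.0
```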

### Methods

#### to_dict

```python
def to_dict(self) -> Dict[str, Any]
```

Convert to dictionary, excluding None values.

---

## ReconSupport Class

Reconnaissance and enrichment utilities.

### Class Methods

#### analyze_ip

```python
@classmethod
def analyze_ip(cls, ip: str) -> Dict[str, Any]
```

Analyze an IP address.

| Parameter | Type | Description |
|-----------|------|-------------|
| `ip` | str | IP address to analyze |

**Returns:**
```python
{
    'ip': str,             # Original IP
    'is_valid': bool,      # Valid IP format
    'is_private': bool,    # RFC 1918 private range
    'is_loopback': bool,   # Loopback address
    'is_multicast': bool,  # Multicast address
    'is_cdn': bool,        # Known CDN range
    'cdn_provider': str,   # CDN name if applicable
    'ip_version': int,     # 4 or 6
    'reverse_dns': str,    # Reverse DNS lookup result
    'numeric': int         # Numeric representation
}
```

**Known CDN Ranges:**
- Cloudflare
- AWS
- Google Cloud
- Azure
- Akamai
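
The address-taxonomy fields in the return dictionary follow standard classifications. A self-contained sketch of the non-CDN subset using Python's stdlib `ipaddress` module (field names match the documented return dictionary, but this is not the library's implementation; note that `is_private` also covers some reserved ranges beyond RFC 1918):

```python
import ipaddress

def classify_ip(ip: str) -> dict:
    """Reproduce a subset of analyze_ip's fields with the stdlib."""
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        return {'ip': ip, 'is_valid': False}
    return {
        'ip': ip,
        'is_valid': True,
        'is_private': addr.is_private,    # RFC 1918 plus other reserved ranges
        'is_loopback': addr.is_loopback,
        'is_multicast': addr.is_multicast,
        'ip_version': addr.version,       # 4 or 6
        'numeric': int(addr),             # numeric representation
    }

info = classify_ip('10.0.0.1')
# info['is_private'] is True, info['ip_version'] is 4
```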

---

#### analyze_connection_patterns

```python
@classmethod
def analyze_connection_patterns(cls, connections: List[Dict]) -> Dict[str, Any]
```

Analyze connection patterns for threat hunting.

| Parameter | Type | Description |
|-----------|------|-------------|
| `connections` | List[Dict] | Connection records |

**Returns:**
```python
{
    'connection_count': int,
    'unique_destinations': int,
    'unique_ports': int,

    'timing': {
        'duration_seconds': float,
        'mean_interval': float,
        'interval_stddev': float,
        'interval_cv': float        # Coefficient of variation
    },

    'volume': {
        'total_sent': int,
        'total_recv': int,
        'mean_sent': float,
        'mean_recv': float,
        'sent_recv_ratio': float
    },

    'ports': {
        port_number: count,         # Port distribution
        ...
    },

    'destinations': {
        ip: analyze_ip_result,      # Per-IP analysis
        ...
    },

    'indicators': {
        'single_destination': bool,
        'consistent_timing': bool,
        'consistent_sizes': bool,
        'uses_common_port': bool,
        'uses_high_port': bool,
        'has_cdn_destination': bool,
        'all_private_destinations': bool
    }
}
```

---

#### generate_iocs

```python
@classmethod
def generate_iocs(
    cls,
    connections: List[Dict],
    result: Dict
) -> Dict[str, List[str]]
```

Generate Indicators of Compromise from detected C2.

| Parameter | Type | Description |
|-----------|------|-------------|
| `connections` | List[Dict] | Connection records |
| `result` | Dict | Analysis result dictionary |

**Returns:**
```python
{
    'ips': List[str],                    # Destination IPs
    'ports': List[str],                  # Destination ports
    'timing_signatures': List[str],      # Beacon timing patterns
    'behavioral_indicators': List[str]   # Behavioral markers
}
```

Only generates IOCs if `result['is_c2']` is True.
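
The shape of the return value, including the gate on the C2 verdict, can be illustrated with a stripped-down extraction over raw connection records. This is a hypothetical helper, not the library's code; the real method also derives timing signatures and behavioral markers:

```python
def extract_basic_iocs(connections, result):
    """Collect destination IPs and ports, gated on the C2 verdict."""
    empty = {'ips': [], 'ports': [],
             'timing_signatures': [], 'behavioral_indicators': []}
    if not result.get('is_c2'):
        return empty  # no IOCs unless the analysis flagged C2
    return {
        'ips': sorted({c['dst_ip'] for c in connections}),
        'ports': sorted({str(c['dst_port']) for c in connections}),
        'timing_signatures': [],       # derived from beacon intervals in the real method
        'behavioral_indicators': [],   # derived from risk factors in the real method
    }

conns = [{'dst_ip': '10.0.0.1', 'dst_port': 4444},
         {'dst_ip': '10.0.0.1', 'dst_port': 4444}]
iocs = extract_basic_iocs(conns, {'is_c2': True})
# iocs['ips'] == ['10.0.0.1'], iocs['ports'] == ['4444']
```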

---

## FeatureExtractor Class

Extracts 40-dimensional feature vectors from connections.

### Constants

#### C2_TYPES

List of detectable C2 framework types:
```python
[
    'unknown', 'metasploit', 'cobalt_strike', 'sliver', 'havoc',
    'mythic', 'poshc2', 'merlin', 'empire', 'covenant',
    'brute_ratel', 'koadic', 'pupy', 'silenttrinity', 'faction',
    'ibombshell', 'godoh', 'dnscat2', 'iodine', 'dns_generic',
    'http_custom', 'https_custom', 'websocket', 'domain_fronting',
    'cloud_fronting', 'cdn_abuse', 'apt_generic', 'apt28', 'apt29',
    'apt41', 'lazarus', 'fin7', 'turla', 'winnti', 'custom'
]
```

### Methods

#### extract_features

```python
def extract_features(self, connections: List[Dict]) -> np.ndarray
```

Extract a 40-dimensional feature vector.

**Returns:** numpy array of shape (40,)

**Feature Groups:**
- Features 0-9: Timing (intervals, jitter, regularity, periodicity)
- Features 10-17: Destinations (diversity, persistence, ports)
- Features 18-27: Payload (sizes, ratios, consistency)
- Features 28-35: Evasion (jitter patterns, bursts, session length)
- Features 36-39: Advanced (night activity, fast beacon ratio, duration)

---

#### check_metasploit_signature

```python
def check_metasploit_signature(
    self,
    connections: List[Dict]
) -> Tuple[bool, float]
```

Check for Metasploit-specific signature patterns.

**Returns:** (is_metasploit, confidence)

---

#### check_ssh_keepalive

```python
def check_ssh_keepalive(
    self,
    connections: List[Dict]
) -> Tuple[bool, float]
```

Check for the SSH keepalive pattern.

**Criteria:**
- Port 22
- Small packets (< 100 bytes)
- Symmetric traffic (sent/recv ratio 0.5-2.0)
- Consistent sizes (CV < 0.2)
- Regular intervals matching common keepalive values

**Returns:** (is_ssh_keepalive, confidence)
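
The criteria above amount to a simple rule chain. A self-contained sketch of that logic, for illustration only: the real method also matches intervals against common keepalive values and returns a graded confidence rather than a bare boolean:

```python
import statistics

def looks_like_ssh_keepalive(connections) -> bool:
    """Apply the documented criteria: port 22, small symmetric packets,
    consistent sizes (coefficient of variation < 0.2)."""
    if len(connections) < 3:
        return False
    if any(c['dst_port'] != 22 for c in connections):
        return False
    sent = [c['bytes_sent'] for c in connections]
    recv = [c['bytes_recv'] for c in connections]
    if any(s >= 100 or r >= 100 for s, r in zip(sent, recv)):
        return False  # keepalives are small (< 100 bytes)
    ratio = sum(sent) / max(sum(recv), 1)
    if not 0.5 <= ratio <= 2.0:
        return False  # traffic should be roughly symmetric
    mean = statistics.mean(sent)
    cv = statistics.pstdev(sent) / mean if mean else 0.0
    return cv < 0.2  # consistent packet sizes

conns = [{'dst_port': 22, 'bytes_sent': 48, 'bytes_recv': 48}] * 5
# looks_like_ssh_keepalive(conns) is True
```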

---

## LogParser Class

Parses various log formats into connection records.

### Static Methods

#### parse_json

```python
@staticmethod
def parse_json(log_line: str) -> Optional[Dict]
```

Parse a JSON-formatted log line.

**Recognized Fields:**
- timestamp, @timestamp
- src_ip, source_ip, src
- dst_ip, dest_ip, dst
- src_port, source_port
- dst_port, dest_port
- bytes_sent, bytes_out
- bytes_recv, bytes_in

---

#### parse_zeek_conn

```python
@staticmethod
def parse_zeek_conn(log_line: str) -> Optional[Dict]
```

Parse Zeek/Bro conn.log format (tab-separated).

---

#### parse_syslog

```python
@staticmethod
def parse_syslog(log_line: str) -> Optional[Dict]
```

Parse common syslog/netflow patterns.

**Recognized Patterns:**
- `YYYY-MM-DD HH:MM:SS ... IP:port -> IP:port`
- `src=IP ... dst=IP ... sport=port ... dport=port`
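
A minimal regex for the first pattern above, as an illustration of what such a parser has to capture; the library's own parser also handles the key=value form and more timestamp variants:

```python
import re
from datetime import datetime

# Matches lines like: "2026-01-18 12:00:00 fw1 conn 192.168.1.5:51234 -> 10.0.0.1:443"
ARROW_RE = re.compile(
    r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*?'
    r'(\d{1,3}(?:\.\d{1,3}){3}):(\d+)\s*->\s*'
    r'(\d{1,3}(?:\.\d{1,3}){3}):(\d+)'
)

def parse_arrow_line(line):
    """Turn an 'IP:port -> IP:port' syslog line into a connection record."""
    m = ARROW_RE.search(line)
    if not m:
        return None
    ts, src_ip, src_port, dst_ip, dst_port = m.groups()
    return {
        'timestamp': datetime.strptime(ts, '%Y-%m-%d %H:%M:%S').timestamp(),
        'src_ip': src_ip, 'src_port': int(src_port),
        'dst_ip': dst_ip, 'dst_port': int(dst_port),
    }

rec = parse_arrow_line('2026-01-18 12:00:00 fw1 conn 192.168.1.5:51234 -> 10.0.0.1:443')
# rec['dst_ip'] == '10.0.0.1', rec['dst_port'] == 443
```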

---

## Enums and Constants

### DetectionMethod

```python
class DetectionMethod(Enum):
    SIGNATURE = "signature"    # Port + behavior signature match
    BEHAVIORAL = "behavioral"  # Pure behavioral analysis
    ML = "ml"                  # Machine learning inference
    CONTEXT = "context"        # Context-adjusted detection
    HEURISTIC = "heuristic"    # Rule-based detection
    WHITELIST = "whitelist"    # Matched whitelist pattern
```

### ServiceType

```python
class ServiceType(Enum):
    SSH = "ssh"
    HTTP = "http"
    HTTPS = "https"
    DNS = "dns"
    DATABASE = "database"
    API = "api"
    STREAMING = "streaming"
    GAMING = "gaming"
    VPN = "vpn"
    MONITORING = "monitoring"
    UNKNOWN = "unknown"
```

### C2_INDICATOR_PORTS

High-confidence C2 signature ports:
```python
{4444, 4445, 5555, 31337, 40056}
```

### C2_COMMON_PORTS

Ports commonly used by C2 (require behavioral analysis):
```python
{80, 443, 53, 8080, 8443, 8888}
```

---

## Convenience Functions

### load_model

```python
def load_model(path: str, device: str = 'auto') -> C2Sentinel
```

Shorthand for `C2Sentinel.load()`.

### create_model

```python
def create_model(device: str = 'auto') -> C2Sentinel
```

Shorthand for `C2Sentinel.create_new()`.

### quick_analyze

```python
def quick_analyze(
    connections: List[Dict],
    model_path: str = 'c2_sentinel'
) -> AnalysisResult
```

One-shot analysis without keeping the model in memory.

---

## Error Handling

The API uses standard Python exceptions:

| Exception | Cause |
|-----------|-------|
| `FileNotFoundError` | Model files not found |
| `ValueError` | Invalid connection format |
| `RuntimeError` | CUDA/device errors |

All methods handle empty or malformed input gracefully, returning neutral results rather than raising exceptions.
LICENSE ADDED
@@ -0,0 +1,21 @@

MIT License

Copyright (c) 2026 Daniel Ostrow

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,273 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # C2Sentinel
2
+
3
+ A machine learning model for detecting Command and Control (C2) beacon communications in network traffic. Built on a fine-tuned LogBERT transformer architecture.
4
+
5
+ **Author:** Daniel Ostrow
6
+ **Website:** [neuralintellect.com](https://neuralintellect.com)
7
+ **Release Date:** January 18, 2026
8
+
9
+ ---
10
+
11
+ ## Overview
12
+
13
+ C2Sentinel analyzes network connection patterns to identify C2 beacon activity. The model uses behavioral analysis rather than port-based filtering, enabling detection of C2 communications on any port. This approach catches C2 activity regardless of whether attackers use expected ports (4444) or attempt to blend in on common ports (443, 80, 53).
14
+
15
+ ### Capabilities
16
+
17
+ - Detection of 34+ C2 framework behavioral patterns across all ports
18
+ - Slow beacon detection (intervals from seconds to hours)
19
+ - Legitimate traffic pattern recognition (SSH keepalive, health checks, database connections)
20
+ - Optional context enrichment (process information, reputation scores, threat intelligence)
21
+ - IP reconnaissance and IOC generation
22
+ - Safetensors format for secure model loading
23
+
24
+ ---
25
+
26
+ ## Installation
27
+
28
+ ```bash
29
+ pip install torch numpy safetensors
30
+ ```
31
+
32
+ ---
33
+
34
+ ## Usage
35
+
36
+ ### Loading the Model
37
+
38
+ ```python
39
+ from c2sentinel import C2Sentinel
40
+
41
+ sentinel = C2Sentinel.load('c2_sentinel')
42
+ ```
43
+
44
+ ### Analyzing Connections
45
+
46
+ ```python
47
+ connections = [
48
+ {
49
+ 'timestamp': 1000000,
50
+ 'dst_ip': '10.0.0.1',
51
+ 'dst_port': 443,
52
+ 'bytes_sent': 200,
53
+ 'bytes_recv': 500
54
+ },
55
+ {
56
+ 'timestamp': 1000060,
57
+ 'dst_ip': '10.0.0.1',
58
+ 'dst_port': 443,
59
+ 'bytes_sent': 200,
60
+ 'bytes_recv': 500
61
+ },
62
+ ]
63
+
64
+ result = sentinel.analyze(connections)
65
+
66
+ if result.is_c2:
67
+ print(f"C2 detected: {result.c2_type}")
68
+ print(f"Probability: {result.c2_probability}")
69
+ else:
70
+ print("No C2 detected")
71
+ ```
72
+
73
+ ---
74
+
75
+ ## Connection Record Format
76
+
77
+ | Field | Type | Required | Description |
78
+ |-------|------|----------|-------------|
79
+ | `timestamp` | float | Yes | Unix timestamp |
80
+ | `dst_ip` | str | Yes | Destination IP address |
81
+ | `dst_port` | int | Yes | Destination port |
82
+ | `bytes_sent` | int | Yes | Bytes sent |
83
+ | `bytes_recv` | int | Yes | Bytes received |
84
+ | `src_ip` | str | No | Source IP address |
85
+ | `src_port` | int | No | Source port |
86
+ | `protocol` | str | No | Protocol (tcp/udp) |
87
+ | `duration` | float | No | Connection duration in seconds |
88
+
89
+ ---
90
+
91
+ ## Analysis Options
92
+
93
+ ### Threshold
94
+
95
+ ```python
96
+ # Default threshold (0.5)
97
+ result = sentinel.analyze(connections)
98
+
99
+ # Lower threshold for higher sensitivity
100
+ result = sentinel.analyze(connections, threshold=0.3)
101
+
102
+ # Higher threshold for higher precision
103
+ result = sentinel.analyze(connections, threshold=0.7)
104
+
105
+ # Strict mode enforces minimum 0.7 threshold
106
+ result = sentinel.analyze(connections, strict_mode=True)
107
+ ```
108
+
109
+ ### Context
110
+
111
+ ```python
112
+ from c2sentinel import ConnectionContext
113
+
114
+ context = ConnectionContext(
115
+ process_name='sshd',
116
+ known_good=True,
117
+ ip_reputation=0.95,
118
+ dns_queries=['api.example.com']
119
+ )
120
+
121
+ result = sentinel.analyze(connections, context=context)
122
+ ```
123
+
124
+ ### Whitelist and Blacklist
125
+
126
+ ```python
127
+ sentinel.add_whitelist(ips=['8.8.8.8'], domains=['google.com'])
128
+ sentinel.add_blacklist(ips=['10.10.10.10'], domains=['malware.example'])
129
+ ```
130
+
131
+ ---
132
+
133
+ ## Result Object
134
+
135
+ The `AnalysisResult` object contains:
136
+
137
+ | Attribute | Type | Description |
138
+ |-----------|------|-------------|
139
+ | `is_c2` | bool | True if C2 detected |
140
+ | `c2_probability` | float | Probability score (0.0-1.0) |
141
+ | `c2_type` | str | Detected C2 framework type |
142
+ | `confidence` | float | Model confidence |
143
+ | `detection_method` | str | Method used (signature/ml/context/whitelist) |
144
+ | `immediate_detection` | bool | True if signature-based |
145
+ | `risk_factors` | list | Factors supporting C2 classification |
146
+ | `mitigating_factors` | list | Factors against C2 classification |
147
+ | `matched_legitimate_pattern` | str | Matched legitimate pattern name |
148
+ | `service_type` | str | Detected service type |
149
+ | `recommendations` | list | Suggested actions |
150
+
151
+ ---
152
+
153
+ ## Batch Analysis
154
+
155
+ ```python
156
+ connection_groups = [
157
+ [conn1, conn2, conn3],
158
+ [conn4, conn5, conn6],
159
+ ]
160
+
161
+ results = sentinel.analyze_batch(connection_groups)
162
+ ```
163
+
164
+ ---
165
+
166
+ ## Log File Parsing
167
+
168
+ ```python
169
+ with open('conn.log', 'r') as f:
170
+ log_lines = f.readlines()
171
+
172
+ results = sentinel.analyze_logs(log_lines, group_by_dst=True)
173
+ ```
174
+
175
+ Supported formats: JSON, Zeek conn.log, syslog
176
+
177
+ ---
178
+
179
+ ## Reconnaissance
180
+
181
+ ### IP Analysis
182
+
183
+ ```python
184
+ info = sentinel.recon.analyze_ip('104.16.132.229')
185
+ # Returns: is_valid, is_private, is_cdn, cdn_provider, reverse_dns
186
+ ```
187
+
188
+ ### Pattern Analysis
189
+
190
+ ```python
191
+ patterns = sentinel.recon.analyze_connection_patterns(connections)
192
+ # Returns: timing stats, volume stats, behavioral indicators
193
+ ```
194
+
195
+ ### IOC Generation
196
+
197
+ ```python
198
+ if result.is_c2:
199
+ iocs = sentinel.recon.generate_iocs(connections, result.to_dict())
200
+ # Returns: ips, ports, timing_signatures, behavioral_indicators
201
+ ```
202
+
203
+ ---
204
+
205
+ ## Detection Methodology
206
+
207
+ ### C2 Indicators
208
+
209
+ - Consistent beacon intervals (low timing variance)
210
+ - Consistent packet sizes (low size variance)
211
+ - Single persistent destination
212
+ - Balanced request/response ratio
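
"Low timing variance" is typically measured as the coefficient of variation (stddev / mean) of the gaps between consecutive connections: real beacons score near zero even with some jitter, while human-driven traffic is bursty. A self-contained illustration; the thresholds in the comparison are assumptions for the example, not the model's internal cutoffs:

```python
import statistics

def interval_cv(timestamps):
    """Coefficient of variation of the gaps between consecutive connections."""
    if len(timestamps) < 3:
        return float('inf')  # not enough gaps to measure regularity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(intervals)
    return statistics.pstdev(intervals) / mean if mean else float('inf')

beacon = [0, 60, 121, 180, 241, 300]   # ~60 s beacon with slight jitter
browsing = [0, 3, 45, 50, 300, 310]    # bursty human traffic
# interval_cv(beacon) is small (regular); interval_cv(browsing) is large
```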

### Signature Detection

Immediate detection for high-confidence C2 ports with matching behavioral patterns:
- Port 4444 (Metasploit default)
- Port 5555 (Metasploit alternative)
- Port 31337 (Sliver)
- Port 40056 (Havoc)

### Legitimate Traffic Indicators

- High response size variance
- Asymmetric traffic patterns (small requests, large responses)
- Multiple destinations
- SSH keepalive patterns (small symmetric packets on port 22)
- Health check patterns (regular intervals, variable response sizes)

---

## Model Specifications

| Specification | Value |
|---------------|-------|
| Architecture | LogBERT Transformer |
| Parameters | 4.9 million |
| Feature Dimensions | 40 |
| Encoder Layers | 6 |
| Attention Heads | 8 |
| Hidden Dimension | 256 |
| Format | Safetensors |
| Size | 20 MB |

---

## Files

```
c2sentinel/
    c2sentinel.py            # Main module
    c2_sentinel.safetensors  # Model weights
    c2_sentinel.json         # Model configuration
    README.md                # Documentation
    API_REFERENCE.md         # API reference
    examples/
        basic_usage.py
        advanced_usage.py
```

---

## License

MIT License

Copyright (c) 2026 Daniel Ostrow

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
c2_sentinel.json ADDED
@@ -0,0 +1,11 @@
{
  "num_features": 40,
  "d_model": 256,
  "nhead": 8,
  "num_encoder_layers": 6,
  "dim_feedforward": 1024,
  "dropout": 0.1,
  "max_seq_length": 512,
  "num_c2_types": 35,
  "version": "1.0.0"
}
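A quick sanity check of this configuration is easy to script. The JSON is inlined here for illustration; in practice you would read `c2_sentinel.json` from disk:

```python
import json

# Inlined copy of c2_sentinel.json; normally read via open("c2_sentinel.json")
raw = '''{
  "num_features": 40,
  "d_model": 256,
  "nhead": 8,
  "num_encoder_layers": 6,
  "dim_feedforward": 1024,
  "dropout": 0.1,
  "max_seq_length": 512,
  "num_c2_types": 35,
  "version": "1.0.0"
}'''

cfg = json.loads(raw)

# d_model must be divisible by nhead for multi-head attention to split evenly
assert cfg["d_model"] % cfg["nhead"] == 0
print(cfg["version"])  # 1.0.0
```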
c2_sentinel.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d3eda8bedb0445bf7ab877f2156013ba81722702c6e5436ffa7bffce29a35b4
size 20144868
c2sentinel.py ADDED
@@ -0,0 +1,1855 @@
#!/usr/bin/env python3
"""
C2Sentinel - Network Traffic C2 Beacon Detection Model

A machine learning model for detecting Command and Control (C2) beacon
communications in network traffic. Built on a fine-tuned LogBERT transformer
architecture.

Author: Daniel Ostrow
Website: https://neuralintellect.com

Features:
- Detection of 34+ C2 framework behavioral patterns across all ports
- Smart context inference for additional metadata (process, DNS, reputation)
- Legitimate service pattern recognition (SSH keepalive, health checks)
- Reconnaissance support (IP enrichment, IOC generation)
- Comprehensive scripting API for automation

Uses safetensors format for secure model serialization.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import json
import math
import socket
import struct
import hashlib
from pathlib import Path
from typing import Dict, List, Tuple, Optional, Union, Any, Callable
from dataclasses import dataclass, asdict, field
from collections import defaultdict
from enum import Enum
import re
from datetime import datetime
import ipaddress

# Safetensors for safe model serialization
from safetensors.torch import save_file, load_file


# ============================================================================
# ENUMS AND CONSTANTS
# ============================================================================

class DetectionMethod(Enum):
    """Detection method used for classification."""
    SIGNATURE = "signature"
    BEHAVIORAL = "behavioral"
    ML = "ml"
    CONTEXT = "context"
    HEURISTIC = "heuristic"
    WHITELIST = "whitelist"


class TrafficType(Enum):
    """Classification of traffic type."""
    C2_BEACON = "c2_beacon"
    C2_EXFIL = "c2_exfiltration"
    C2_LATERAL = "c2_lateral_movement"
    LEGITIMATE = "legitimate"
    SUSPICIOUS = "suspicious"
    UNKNOWN = "unknown"


class ServiceType(Enum):
    """Known service types for context."""
    SSH = "ssh"
    HTTP = "http"
    HTTPS = "https"
    DNS = "dns"
    DATABASE = "database"
    API = "api"
    STREAMING = "streaming"
    GAMING = "gaming"
    VPN = "vpn"
    MONITORING = "monitoring"
    UNKNOWN = "unknown"


@dataclass
class C2SentinelConfig:
    """Configuration for LogBERT-C2Sentinel model."""
    num_features: int = 40
    d_model: int = 256
    nhead: int = 8
    num_encoder_layers: int = 6
    dim_feedforward: int = 1024
    dropout: float = 0.1
    max_seq_length: int = 512
    num_c2_types: int = 35
    version: str = "2.0.0"

    def to_dict(self) -> dict:
        return asdict(self)

    @classmethod
    def from_dict(cls, d: dict) -> 'C2SentinelConfig':
        return cls(**{k: v for k, v in d.items() if k in cls.__dataclass_fields__})


# High-confidence C2 ports - these are VERY rarely used legitimately
C2_INDICATOR_PORTS = {
    4444,   # Metasploit default
    4445,   # Metasploit alternative
    5555,   # Metasploit (Note: Android debug uses this too)
    31337,  # Elite/Sliver
    40056,  # Havoc default
}

# Ports commonly used by C2 (but also legitimate traffic)
C2_COMMON_PORTS = {
    80,    # HTTP
    443,   # HTTPS
    53,    # DNS
    8080,  # HTTP alt
    8443,  # HTTPS alt
    8888,  # Sliver default
}

# Known legitimate service ports with expected behaviors
LEGITIMATE_SERVICE_PORTS = {
    22: ServiceType.SSH,
    80: ServiceType.HTTP,
    443: ServiceType.HTTPS,
    53: ServiceType.DNS,
    3306: ServiceType.DATABASE,    # MySQL
    5432: ServiceType.DATABASE,    # PostgreSQL
    6379: ServiceType.DATABASE,    # Redis
    27017: ServiceType.DATABASE,   # MongoDB
    5000: ServiceType.API,         # Flask default
    3000: ServiceType.API,         # Node.js default
    8080: ServiceType.API,         # Common API port
    9090: ServiceType.MONITORING,  # Prometheus
    3100: ServiceType.MONITORING,  # Grafana Loki
}

# C2 Framework Signatures
C2_SIGNATURES = {
    'metasploit': {
        'ports': [4444, 4445, 5555],
        'interval_range': (1, 30),
        'packet_sizes': [(50, 200), (500, 2000)],
        'jitter_range': (0.0, 0.3),
    },
    'cobalt_strike': {
        'ports': [50050],
        'interval_range': (30, 300),
        'packet_sizes': [(68, 200), (200, 1000)],
        'jitter_range': (0.0, 0.5),
    },
    'sliver': {
        'ports': [8888, 31337],
        'interval_range': (5, 60),
        'packet_sizes': [(100, 500)],
        'jitter_range': (0.0, 0.3),
    },
    'havoc': {
        'ports': [40056],
        'interval_range': (2, 30),
        'packet_sizes': [(64, 256)],
        'jitter_range': (0.0, 0.2),
    },
}

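A minimal sketch of how entries like these can be matched against an observed flow. This restates the `metasploit` entry standalone for illustration; it is not the module's own detection code, which also weighs packet sizes and jitter:

```python
import statistics

SIG = {  # mirrors C2_SIGNATURES['metasploit'] (ports and beacon cadence only)
    'ports': [4444, 4445, 5555],
    'interval_range': (1, 30),
}

def matches_signature(dst_port, timestamps, sig=SIG):
    """True when the port is a known C2 port and the beacon cadence fits."""
    if dst_port not in sig['ports']:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return False  # need at least two observations to measure an interval
    lo, hi = sig['interval_range']
    return lo <= statistics.mean(gaps) <= hi

print(matches_signature(4444, [0, 5, 10, 15]))  # True: port + 5s cadence
print(matches_signature(443, [0, 5, 10, 15]))   # False: wrong port
```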

# ============================================================================
# LEGITIMATE SERVICE PATTERNS - Key to reducing false positives
# ============================================================================

@dataclass
class LegitimatePattern:
    """Defines a known legitimate traffic pattern."""
    name: str
    service_type: ServiceType
    port: Optional[int] = None
    ports: Optional[List[int]] = None
    min_packet_size: int = 0
    max_packet_size: int = 100000
    symmetric_ratio: Tuple[float, float] = (0.0, 10.0)  # sent/recv ratio range
    max_interval_cv: float = 1.0  # coefficient of variation for intervals
    max_size_cv: float = 1.0      # coefficient of variation for sizes
    description: str = ""

    def matches(self, connections: List[Dict], stats: Dict) -> Tuple[bool, float]:
        """Check if connections match this legitimate pattern. Returns (matches, confidence)."""
        if not connections:
            return False, 0.0

        ports = set(conn.get('dst_port', 0) for conn in connections)

        # Check port match
        if self.port and self.port not in ports:
            return False, 0.0
        if self.ports and not any(p in ports for p in self.ports):
            return False, 0.0

        # Check packet sizes
        bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
        bytes_recv = [conn.get('bytes_recv', 0) for conn in connections]

        if bytes_sent:
            if max(bytes_sent) > self.max_packet_size or min(bytes_sent) < self.min_packet_size:
                return False, 0.0

        # Check ratio
        total_sent = sum(bytes_sent)
        total_recv = sum(bytes_recv)
        if total_recv > 0:
            ratio = total_sent / total_recv
            if not (self.symmetric_ratio[0] <= ratio <= self.symmetric_ratio[1]):
                return False, 0.0

        # CRITICAL: Check size variance - legitimate traffic has HIGH variance
        # C2 traffic has LOW variance (consistent beacon sizes)
        recv_cv = stats.get('recv_cv', 0)
        sent_cv = stats.get('sent_cv', 0)

        # If BOTH sent and recv are very consistent (CV < 0.3), this is likely C2
        # Legitimate patterns should have at least some variance
        if recv_cv < 0.3 and sent_cv < 0.3:
            # Exception: SSH keepalive is intentionally consistent but tiny
            if self.name == "ssh_keepalive":
                pass  # Allow SSH keepalive to match
            else:
                # Too consistent for legitimate traffic - likely C2
                return False, 0.0

        return True, 0.8


# Pre-defined legitimate patterns
LEGITIMATE_PATTERNS = [
    LegitimatePattern(
        name="ssh_keepalive",
        service_type=ServiceType.SSH,
        port=22,
        min_packet_size=20,
        max_packet_size=100,          # Keepalive packets are very small
        symmetric_ratio=(0.8, 1.2),   # Nearly symmetric
        max_interval_cv=0.3,
        max_size_cv=0.15,             # Very consistent sizes
        description="SSH keepalive probes - small symmetric packets at regular intervals"
    ),
    LegitimatePattern(
        name="ssh_interactive",
        service_type=ServiceType.SSH,
        port=22,
        min_packet_size=20,
        max_packet_size=50000,
        symmetric_ratio=(0.01, 100.0),  # Can be asymmetric
        max_interval_cv=2.0,            # Very variable timing (human typing)
        max_size_cv=2.0,                # Very variable sizes
        description="Interactive SSH session with variable human-driven timing"
    ),
    LegitimatePattern(
        name="health_check",
        service_type=ServiceType.MONITORING,
        ports=[80, 443, 8080, 8443, 9090],
        min_packet_size=50,
        max_packet_size=10000,
        symmetric_ratio=(0.01, 0.5),  # Small requests, larger responses
        max_interval_cv=0.3,          # Regular intervals
        max_size_cv=1.0,              # Response sizes can vary (status data)
        description="Health check endpoint with variable response sizes"
    ),
    LegitimatePattern(
        name="database_heartbeat",
        service_type=ServiceType.DATABASE,
        ports=[3306, 5432, 6379, 27017],
        min_packet_size=20,
        max_packet_size=100000,
        symmetric_ratio=(0.01, 100.0),
        max_interval_cv=0.3,
        max_size_cv=5.0,  # Query results vary dramatically
        description="Database connection with variable query responses"
    ),
    LegitimatePattern(
        name="websocket_ping",
        service_type=ServiceType.API,
        ports=[80, 443, 8080],
        min_packet_size=10,
        max_packet_size=100000,
        symmetric_ratio=(0.001, 100.0),  # Can receive large data pushes
        max_interval_cv=0.5,
        max_size_cv=5.0,                 # Large variance in push data
        description="WebSocket connection with ping/pong and data pushes"
    ),
]

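To see why `ssh_keepalive` is the one low-variance pattern allowed through, the decisive checks can be replayed on synthetic flows. This is a standalone illustration of the logic in `LegitimatePattern.matches`, not the module's own tests:

```python
import statistics

# Five synthetic SSH keepalive probes: tiny, symmetric, identical sizes.
conns = [{'dst_port': 22, 'bytes_sent': 48, 'bytes_recv': 48} for _ in range(5)]

sent = [c['bytes_sent'] for c in conns]
recv = [c['bytes_recv'] for c in conns]

ratio = sum(sent) / sum(recv)  # ~1.0: symmetric probe/reply exchange
sent_cv = statistics.pstdev(sent) / statistics.mean(sent)
recv_cv = statistics.pstdev(recv) / statistics.mean(recv)

# Any other pattern would be rejected with CVs this low (sub-0.3 looks
# like beaconing), but ssh_keepalive is explicitly exempted.
print(0.8 <= ratio <= 1.2)              # True: within symmetric_ratio
print(sent_cv < 0.3 and recv_cv < 0.3)  # True: identical sizes, zero CV
```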
# ============================================================================
# CONTEXT INFERENCE SYSTEM
# ============================================================================

@dataclass
class ConnectionContext:
    """
    Additional context for connection analysis.

    Provide any available context to improve detection accuracy.
    All fields are optional - more context = better analysis.
    """
    # Process information
    process_name: Optional[str] = None
    process_path: Optional[str] = None
    process_pid: Optional[int] = None
    parent_process: Optional[str] = None
    command_line: Optional[str] = None

    # Network metadata
    dns_queries: Optional[List[str]] = None  # Associated DNS lookups
    resolved_hostname: Optional[str] = None
    tls_sni: Optional[str] = None            # TLS Server Name Indication
    tls_ja3: Optional[str] = None            # JA3 fingerprint
    tls_ja3s: Optional[str] = None           # JA3S fingerprint
    certificate_issuer: Optional[str] = None
    certificate_subject: Optional[str] = None
    certificate_valid: Optional[bool] = None
    http_user_agent: Optional[str] = None
    http_host: Optional[str] = None

    # Reputation and intelligence
    ip_reputation: Optional[float] = None     # 0.0 (bad) to 1.0 (good)
    domain_reputation: Optional[float] = None
    known_good: Optional[bool] = None         # Explicitly whitelisted
    known_bad: Optional[bool] = None          # Explicitly blacklisted
    threat_intel_match: Optional[str] = None  # Matched threat intel indicator

    # Host context
    source_hostname: Optional[str] = None
    source_user: Optional[str] = None
    source_is_server: Optional[bool] = None
    source_is_workstation: Optional[bool] = None

    # Additional metadata
    geo_country: Optional[str] = None
    geo_asn: Optional[str] = None
    tags: Optional[List[str]] = None

    def to_dict(self) -> Dict[str, Any]:
        return {k: v for k, v in asdict(self).items() if v is not None}

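The `to_dict` idiom above, serializing only the fields a caller actually supplied, can be seen on a stripped-down stand-in (an illustrative two-field version, not the full dataclass):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MiniContext:
    process_name: Optional[str] = None
    tls_sni: Optional[str] = None

    def to_dict(self):
        # Unset (None) fields are dropped so downstream output stays sparse.
        return {k: v for k, v in asdict(self).items() if v is not None}

ctx = MiniContext(process_name="sshd")
print(ctx.to_dict())  # {'process_name': 'sshd'}
```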
class ContextInference:
    """
    Smart context inference engine.

    Uses additional context to refine detection decisions and reduce false positives.
    """

    # Known legitimate process names
    KNOWN_LEGITIMATE_PROCESSES = {
        'sshd', 'ssh', 'openssh', 'dropbear',             # SSH
        'chrome', 'firefox', 'safari', 'edge', 'brave',   # Browsers
        'curl', 'wget', 'httpd', 'nginx', 'apache2',      # HTTP tools/servers
        'python', 'python3', 'node', 'java', 'ruby',      # Interpreters
        'postgres', 'mysql', 'mongod', 'redis-server',    # Databases
        'docker', 'containerd', 'kubelet',                # Container tools
        'systemd', 'init', 'launchd',                     # System processes
        'prometheus', 'grafana', 'telegraf',              # Monitoring
        'code', 'code-server', 'vim', 'emacs',            # Editors
        'git', 'git-remote-https',                        # Version control
        'apt', 'yum', 'dnf', 'brew', 'pip',               # Package managers
        'zoom', 'slack', 'teams', 'discord',              # Communication
        'spotify', 'vlc', 'mpv',                          # Media
    }

    # Suspicious process names (often used by malware or C2)
    SUSPICIOUS_PROCESSES = {
        'powershell', 'cmd', 'wscript', 'cscript', 'mshta',  # Windows scripting
        'rundll32', 'regsvr32', 'msiexec',                   # Windows LOLBins
        'nc', 'netcat', 'ncat', 'socat',                     # Network utilities (legit but suspicious)
        'mimikatz', 'procdump', 'psexec',                    # Known attack tools
        'beacon', 'payload', 'implant', 'agent',             # Common C2 names
    }

    # Known C2 JA3 fingerprints (example - would be populated from threat intel)
    KNOWN_C2_JA3 = {
        '72a589da586844d7f0818ce684948eea',  # Cobalt Strike (example)
        '51c64c77e60f3980eea90869b68c58a8',  # Metasploit (example)
    }

    # Suspicious TLS certificate patterns
    SUSPICIOUS_CERT_PATTERNS = [
        r'localhost',
        r'test\.',
        r'example\.',
        r'\.local$',
        r'^C2',
        r'beacon',
    ]

    def __init__(self):
        self.whitelist_ips: set = set()
        self.whitelist_domains: set = set()
        self.blacklist_ips: set = set()
        self.blacklist_domains: set = set()
        self.custom_rules: List[Callable] = []

    def add_whitelist_ip(self, ip: str):
        """Add IP to whitelist."""
        self.whitelist_ips.add(ip)

    def add_whitelist_domain(self, domain: str):
        """Add domain to whitelist."""
        self.whitelist_domains.add(domain.lower())

    def add_blacklist_ip(self, ip: str):
        """Add IP to blacklist."""
        self.blacklist_ips.add(ip)

    def add_blacklist_domain(self, domain: str):
        """Add domain to blacklist."""
        self.blacklist_domains.add(domain.lower())

    def add_custom_rule(self, rule: Callable[[List[Dict], Optional[ConnectionContext]], Tuple[Optional[float], str]]):
        """
        Add custom inference rule.

        Rule should return (probability_modifier, reason) or (None, "") to skip.
        Note: the context argument may be None when no context was supplied.
        """
        self.custom_rules.append(rule)

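A custom rule is just a callable with that shape. As an example, a hypothetical rule that down-weights traffic to RFC 1918 destinations (the assumption that internal peers are lower-risk is environment-specific):

```python
import ipaddress

def internal_subnet_rule(connections, context):
    """Return (modifier, reason); modifier < 1.0 lowers C2 probability."""
    ips = {c.get('dst_ip', '') for c in connections}
    try:
        if ips and all(ipaddress.ip_address(ip).is_private for ip in ips):
            return 0.7, "All destinations are in private address space"
    except ValueError:
        pass  # unparseable address: skip this rule
    return None, ""

# engine.add_custom_rule(internal_subnet_rule)
print(internal_subnet_rule([{'dst_ip': '10.1.2.3'}], None))
```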
    def infer(self, connections: List[Dict], context: Optional[ConnectionContext] = None) -> Dict[str, Any]:
        """
        Perform context-based inference.

        Returns inference results that can modify detection probability.
        """
        result = {
            'probability_modifier': 1.0,
            'confidence_boost': 0.0,
            'is_whitelisted': False,
            'is_blacklisted': False,
            'matched_patterns': [],
            'risk_factors': [],
            'mitigating_factors': [],
            'service_type': ServiceType.UNKNOWN,
            'recommendations': [],
        }

        if not connections:
            return result

        dst_ips = set(conn.get('dst_ip', '') for conn in connections)
        ports = set(conn.get('dst_port', 0) for conn in connections)

        # Check whitelists
        for ip in dst_ips:
            if ip in self.whitelist_ips:
                result['is_whitelisted'] = True
                result['probability_modifier'] *= 0.1
                result['mitigating_factors'].append(f"Destination IP {ip} is whitelisted")

        # Check blacklists
        for ip in dst_ips:
            if ip in self.blacklist_ips:
                result['is_blacklisted'] = True
                result['probability_modifier'] *= 3.0
                result['risk_factors'].append(f"Destination IP {ip} is blacklisted")

        if context:
            result = self._apply_context(result, connections, context, ports)

        # Apply custom rules
        for rule in self.custom_rules:
            try:
                modifier, reason = rule(connections, context)
                if modifier is not None:
                    result['probability_modifier'] *= modifier
                    if modifier < 1.0:
                        result['mitigating_factors'].append(reason)
                    elif modifier > 1.0:
                        result['risk_factors'].append(reason)
            except Exception:
                pass

        return result

    def _apply_context(self, result: Dict, connections: List[Dict],
                       context: ConnectionContext, ports: set) -> Dict:
        """Apply context-based inference rules."""

        # Process name analysis
        if context.process_name:
            proc_lower = context.process_name.lower()

            if proc_lower in self.KNOWN_LEGITIMATE_PROCESSES:
                result['mitigating_factors'].append(f"Known legitimate process: {context.process_name}")
                result['probability_modifier'] *= 0.5

            if proc_lower in self.SUSPICIOUS_PROCESSES:
                result['risk_factors'].append(f"Suspicious process: {context.process_name}")
                result['probability_modifier'] *= 1.5

            # SSH-specific checks
            if proc_lower in ('sshd', 'ssh', 'openssh') and 22 in ports:
                result['mitigating_factors'].append("SSH process on SSH port - expected behavior")
                result['probability_modifier'] *= 0.3
                result['service_type'] = ServiceType.SSH

        # Explicit known_good/known_bad flags
        if context.known_good:
            result['is_whitelisted'] = True
            result['probability_modifier'] *= 0.1
            result['mitigating_factors'].append("Explicitly marked as known good")

        if context.known_bad:
            result['is_blacklisted'] = True
            result['probability_modifier'] *= 5.0
            result['risk_factors'].append("Explicitly marked as known bad")

        # Reputation scores
        if context.ip_reputation is not None:
            if context.ip_reputation > 0.8:
                result['mitigating_factors'].append(f"Good IP reputation: {context.ip_reputation:.2f}")
                result['probability_modifier'] *= 0.6
            elif context.ip_reputation < 0.3:
                result['risk_factors'].append(f"Poor IP reputation: {context.ip_reputation:.2f}")
                result['probability_modifier'] *= 1.5

        if context.domain_reputation is not None:
            if context.domain_reputation > 0.8:
                result['mitigating_factors'].append(f"Good domain reputation: {context.domain_reputation:.2f}")
                result['probability_modifier'] *= 0.6
            elif context.domain_reputation < 0.3:
                result['risk_factors'].append(f"Poor domain reputation: {context.domain_reputation:.2f}")
                result['probability_modifier'] *= 1.5

        # TLS/JA3 analysis
        if context.tls_ja3:
            if context.tls_ja3 in self.KNOWN_C2_JA3:
                result['risk_factors'].append(f"Known C2 JA3 fingerprint: {context.tls_ja3}")
                result['probability_modifier'] *= 3.0

        # Certificate analysis
        if context.certificate_subject:
            for pattern in self.SUSPICIOUS_CERT_PATTERNS:
                if re.search(pattern, context.certificate_subject, re.IGNORECASE):
                    result['risk_factors'].append(f"Suspicious certificate subject: {context.certificate_subject}")
                    result['probability_modifier'] *= 1.3
                    break

        if context.certificate_valid is False:
            result['risk_factors'].append("Invalid TLS certificate")
            result['probability_modifier'] *= 1.4

        # Threat intel match
        if context.threat_intel_match:
            result['is_blacklisted'] = True
            result['risk_factors'].append(f"Threat intel match: {context.threat_intel_match}")
            result['probability_modifier'] *= 5.0

        # DNS analysis
        if context.dns_queries:
            # Check for suspicious DNS patterns
            for query in context.dns_queries:
                query_lower = query.lower()

                # Check against domain blacklist
                if query_lower in self.blacklist_domains:
                    result['risk_factors'].append(f"Blacklisted domain: {query}")
                    result['probability_modifier'] *= 2.0

                # Check against whitelist
                if query_lower in self.whitelist_domains:
                    result['mitigating_factors'].append(f"Whitelisted domain: {query}")
                    result['probability_modifier'] *= 0.5

                # DGA-like patterns (high entropy)
                if len(query) > 20 and self._calculate_entropy(query) > 3.5:
                    result['risk_factors'].append(f"Possible DGA domain: {query}")
                    result['probability_modifier'] *= 1.3

        # Geo analysis
        if context.geo_country:
            # Could integrate with threat intel for high-risk countries
            pass

        return result

    def _calculate_entropy(self, s: str) -> float:
        """Calculate Shannon entropy of a string."""
        if not s:
            return 0.0
        prob = [s.count(c) / len(s) for c in set(s)]
        return -sum(p * math.log2(p) for p in prob if p > 0)

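The DGA check rests on Shannon entropy. The same computation as `_calculate_entropy`, restated standalone with illustrative domain labels:

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits per character; random-looking DGA labels score high."""
    if not s:
        return 0.0
    prob = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in prob if p > 0)

print(shannon_entropy("wikipedia"))                 # moderate, below 3.5
print(shannon_entropy("x7f2q9zk1mw8rt4yh6bn3vd5"))  # above 3.5, DGA-like
```

Combined with the length check (`len(query) > 20`), short natural-language domains stay below the 3.5-bit threshold while long high-entropy labels cross it.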

# ============================================================================
# NEURAL NETWORK COMPONENTS
# ============================================================================

class PositionalEncoding(nn.Module):
    """Positional encoding for transformer."""

    def __init__(self, d_model: int, max_len: int = 5000, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)

        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer('pe', pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.pe[:, :x.size(1)]
        return self.dropout(x)

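The buffer built in `__init__` is the standard sinusoidal encoding: PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)). A torch-free sketch of one row, mirroring the tensor construction above (assumes an even `d_model`, as the class does):

```python
import math

def positional_row(pos: int, d_model: int):
    """One row of the sinusoidal positional-encoding table."""
    row = [0.0] * d_model
    for i in range(0, d_model, 2):
        # Same scale factor as div_term: exp(-ln(10000) * i / d_model)
        angle = pos * math.exp(-math.log(10000.0) * i / d_model)
        row[i] = math.sin(angle)
        row[i + 1] = math.cos(angle)
    return row

row0 = positional_row(0, 8)
print(row0)  # position 0: all sines are 0.0, all cosines are 1.0
```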
617
+ class LogBERTC2Sentinel(nn.Module):
618
+ """LogBERT-based model for C2 beacon detection."""
619
+
620
+ def __init__(self, config: C2SentinelConfig):
621
+ super().__init__()
622
+ self.config = config
623
+
624
+ # Feature projection
625
+ self.feature_projection = nn.Sequential(
626
+ nn.Linear(config.num_features, config.d_model),
627
+ nn.LayerNorm(config.d_model),
628
+ nn.GELU(),
629
+ nn.Dropout(config.dropout)
630
+ )
631
+
632
+ # Positional encoding
633
+ self.pos_encoder = PositionalEncoding(config.d_model, config.max_seq_length, config.dropout)
634
+
635
+ # Transformer encoder
636
+ encoder_layer = nn.TransformerEncoderLayer(
637
+ d_model=config.d_model,
638
+ nhead=config.nhead,
639
+ dim_feedforward=config.dim_feedforward,
640
+ dropout=config.dropout,
641
+ activation='gelu',
642
+ batch_first=True
643
+ )
644
+ self.transformer_encoder = nn.TransformerEncoder(encoder_layer, config.num_encoder_layers)
645
+
646
+ # Multi-task heads
647
+ self.c2_head = nn.Sequential(
648
+ nn.Linear(config.d_model, config.d_model // 2),
649
+ nn.GELU(),
650
+ nn.Dropout(config.dropout),
651
+ nn.Linear(config.d_model // 2, 1)
652
+ )
653
+
654
+ self.anomaly_head = nn.Sequential(
655
+ nn.Linear(config.d_model, config.d_model // 2),
656
+ nn.GELU(),
657
+ nn.Dropout(config.dropout),
658
+ nn.Linear(config.d_model // 2, 1),
659
+ nn.Sigmoid()
660
+ )
661
+
662
+ self.evasion_head = nn.Sequential(
663
+ nn.Linear(config.d_model, config.d_model // 2),
664
+ nn.GELU(),
665
+ nn.Dropout(config.dropout),
666
+ nn.Linear(config.d_model // 2, 1),
667
+ nn.Sigmoid()
668
+ )
669
+
670
+ self.c2_type_head = nn.Sequential(
671
+ nn.Linear(config.d_model, config.d_model // 2),
672
+ nn.GELU(),
673
+ nn.Dropout(config.dropout),
674
+ nn.Linear(config.d_model // 2, config.num_c2_types)
675
+ )
676
+
677
+ self.confidence_head = nn.Sequential(
678
+ nn.Linear(config.d_model, config.d_model // 4),
679
+ nn.GELU(),
680
+ nn.Linear(config.d_model // 4, 1),
681
+ nn.Sigmoid()
682
+ )
683
+
684
+ def forward(self, x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> Dict[str, torch.Tensor]:
685
+ if x.dim() == 2:
686
+ x = x.unsqueeze(1)
687
+
688
+ x = self.feature_projection(x)
689
+ x = self.pos_encoder(x)
690
+ encoded = self.transformer_encoder(x, src_key_padding_mask=mask)
691
+
692
+ if mask is not None:
693
+ mask_expanded = (~mask).unsqueeze(-1).float()
694
+ pooled = (encoded * mask_expanded).sum(dim=1) / mask_expanded.sum(dim=1).clamp(min=1)
695
+ else:
696
+ pooled = encoded.mean(dim=1)
697
+
698
+ return {
699
+ 'c2_logits': self.c2_head(pooled),
700
+ 'anomaly_score': self.anomaly_head(pooled),
701
+ 'evasion_score': self.evasion_head(pooled),
702
+ 'c2_type_logits': self.c2_type_head(pooled),
703
+ 'confidence': self.confidence_head(pooled)
704
+ }
705
+
706
+
707
+ # ============================================================================
708
+ # FEATURE EXTRACTION
709
+ # ============================================================================
710
+
711
+ class FeatureExtractor:
712
+ """Extracts 40-dimensional feature vectors from network traffic."""
713
+
714
+ C2_TYPES = [
715
+         'unknown', 'metasploit', 'cobalt_strike', 'sliver', 'havoc',
+         'mythic', 'poshc2', 'merlin', 'empire', 'covenant',
+         'brute_ratel', 'koadic', 'pupy', 'silenttrinity', 'faction',
+         'ibombshell', 'godoh', 'dnscat2', 'iodine', 'dns_generic',
+         'http_custom', 'https_custom', 'websocket', 'domain_fronting',
+         'cloud_fronting', 'cdn_abuse', 'apt_generic', 'apt28', 'apt29',
+         'apt41', 'lazarus', 'fin7', 'turla', 'winnti', 'custom'
+     ]
+
+     METASPLOIT_PORTS = {4444, 4445, 5555}
+
+     def __init__(self):
+         self.connection_cache = defaultdict(list)
+         self.destination_history = defaultdict(set)
+
+     def check_metasploit_signature(self, connections: List[Dict]) -> Tuple[bool, float]:
+         """Check for Metasploit-specific signatures."""
+         if not connections:
+             return False, 0.0
+
+         confidence = 0.0
+         indicators = 0
+
+         ports = set(conn.get('dst_port', 0) for conn in connections)
+         metasploit_port_match = ports & self.METASPLOIT_PORTS
+
+         if not metasploit_port_match:
+             return False, 0.0
+
+         if 4444 in metasploit_port_match:
+             confidence += 0.6
+             indicators += 2
+         elif 4445 in metasploit_port_match or 5555 in metasploit_port_match:
+             confidence += 0.4
+             indicators += 1
+
+         if len(connections) > 1:
+             timestamps = sorted([conn.get('timestamp', 0) for conn in connections])
+             intervals = np.diff(timestamps)
+             if len(intervals) > 0:
+                 mean_interval = np.mean(intervals)
+                 if 1 <= mean_interval <= 30:
+                     confidence += 0.15
+                     indicators += 1
+
+         bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
+         if bytes_sent:
+             mean_size = np.mean(bytes_sent)
+             if 50 <= mean_size <= 200:
+                 confidence += 0.1
+                 indicators += 1
+
+         dst_ips = [conn.get('dst_ip', '') for conn in connections]
+         if dst_ips:
+             unique_dsts = len(set(dst_ips))
+             if unique_dsts == 1 and len(dst_ips) >= 3:
+                 confidence += 0.1
+                 indicators += 1
+
+         is_metasploit = indicators >= 2 and confidence >= 0.5
+         return is_metasploit, min(confidence, 1.0)
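The Metasploit check combines a default-port hit with beacon-interval statistics. A standalone sketch of that scoring logic (hypothetical connection dicts with only `dst_port` and `timestamp`; this is an illustration of the heuristic, not the trained model):

```python
import numpy as np

METASPLOIT_PORTS = {4444, 4445, 5555}

def metasploit_score(connections):
    """Score connections against the default-port + steady-beacon heuristic."""
    ports = {c.get("dst_port", 0) for c in connections}
    if not ports & METASPLOIT_PORTS:
        return 0.0
    # Port 4444 is the Metasploit default, so it weighs more than 4445/5555
    score = 0.6 if 4444 in ports else 0.4
    timestamps = sorted(c.get("timestamp", 0) for c in connections)
    intervals = np.diff(timestamps)
    # Typical reverse-shell check-in cadence: one beacon every 1-30 seconds
    if intervals.size and 1 <= intervals.mean() <= 30:
        score += 0.15
    return min(score, 1.0)

# A 5-second beacon to port 4444 scores on both indicators
conns = [{"dst_port": 4444, "timestamp": t} for t in range(0, 50, 5)]
print(metasploit_score(conns))
```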
+
+     def check_ssh_keepalive(self, connections: List[Dict]) -> Tuple[bool, float]:
+         """
+         Check for the SSH keepalive pattern to prevent false positives.
+
+         SSH keepalive characteristics:
+         - Port 22
+         - Very small packets (typically 48-64 bytes)
+         - Nearly symmetric (sent ≈ recv)
+         - Regular intervals (typically 30s, 60s, 120s)
+         - Very consistent sizes
+
+         Returns (is_ssh_keepalive, confidence)
+         """
+         if not connections or len(connections) < 3:
+             return False, 0.0
+
+         ports = set(conn.get('dst_port', 0) for conn in connections)
+
+         # Must be on SSH port
+         if 22 not in ports:
+             return False, 0.0
+
+         bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
+         bytes_recv = [conn.get('bytes_recv', 0) for conn in connections]
+
+         if not bytes_sent or not bytes_recv:
+             return False, 0.0
+
+         mean_sent = np.mean(bytes_sent)
+         mean_recv = np.mean(bytes_recv)
+
+         # Check for small packets (keepalive probes are tiny)
+         if mean_sent > 100 or mean_recv > 100:
+             # Larger packets = actual SSH traffic, not just keepalive
+             return False, 0.0
+
+         # Check for symmetric traffic (keepalive is a bidirectional probe)
+         if mean_recv > 0:
+             ratio = mean_sent / mean_recv
+             if not (0.5 <= ratio <= 2.0):
+                 # Asymmetric = data transfer, not keepalive
+                 return False, 0.0
+
+         # Check for consistent sizes (keepalive is always the same size)
+         sent_cv = np.std(bytes_sent) / (mean_sent + 1e-6)
+         recv_cv = np.std(bytes_recv) / (mean_recv + 1e-6)
+
+         if sent_cv > 0.2 or recv_cv > 0.2:
+             # Variable sizes = not keepalive
+             return False, 0.0
+
+         # Check for regular intervals (keepalive is very regular)
+         timestamps = sorted([conn.get('timestamp', 0) for conn in connections])
+         if len(timestamps) > 1:
+             intervals = np.diff(timestamps)
+             if len(intervals) > 0:
+                 mean_interval = np.mean(intervals)
+                 interval_cv = np.std(intervals) / (mean_interval + 1e-6)
+
+                 # Check if intervals match common keepalive values (15, 30, 60, 120 seconds)
+                 common_keepalive_intervals = [15, 30, 60, 120, 180, 300]
+                 closest_match = min(common_keepalive_intervals, key=lambda x: abs(x - mean_interval))
+                 interval_match = abs(mean_interval - closest_match) / closest_match < 0.2
+
+                 if interval_cv < 0.15 and interval_match:
+                     # Very regular intervals matching keepalive pattern
+                     confidence = 0.95
+                 elif interval_cv < 0.2:
+                     confidence = 0.85
+                 else:
+                     return False, 0.0
+
+                 return True, confidence
+
+         return False, 0.0
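The keepalive filter is essentially two coefficient-of-variation tests: packet sizes must be tiny and uniform, and intervals must be regular. A minimal standalone sketch of that idea (size/timestamp lists only; thresholds mirror the 0.2 CV cutoff above but the function itself is illustrative):

```python
import numpy as np

def looks_like_keepalive(sizes, times, max_cv=0.2):
    """True when packet sizes are tiny and uniform and intervals are regular."""
    sizes = np.asarray(sizes, dtype=float)
    intervals = np.diff(np.sort(np.asarray(times, dtype=float)))
    # Keepalive probes are small; anything averaging >100 bytes is real traffic
    if sizes.mean() > 100 or intervals.size == 0:
        return False
    size_cv = sizes.std() / (sizes.mean() + 1e-6)
    interval_cv = intervals.std() / (intervals.mean() + 1e-6)
    return size_cv < max_cv and interval_cv < max_cv

print(looks_like_keepalive([48, 48, 48, 48], [0, 60, 120, 180]))   # steady 48-byte probes
print(looks_like_keepalive([48, 900, 60, 4000], [0, 2, 50, 51]))   # interactive session
```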
+
+     def check_legitimate_patterns(self, connections: List[Dict]) -> Tuple[Optional[LegitimatePattern], float]:
+         """
+         Check if connections match any known legitimate patterns.
+
+         Returns (matched_pattern, confidence) or (None, 0.0)
+         """
+         if not connections:
+             return None, 0.0
+
+         # Calculate stats once
+         bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
+         bytes_recv = [conn.get('bytes_recv', 0) for conn in connections]
+
+         stats = {
+             'mean_sent': np.mean(bytes_sent) if bytes_sent else 0,
+             'mean_recv': np.mean(bytes_recv) if bytes_recv else 0,
+             'sent_cv': np.std(bytes_sent) / (np.mean(bytes_sent) + 1e-6) if bytes_sent else 0,
+             'recv_cv': np.std(bytes_recv) / (np.mean(bytes_recv) + 1e-6) if bytes_recv else 0,
+         }
+
+         for pattern in LEGITIMATE_PATTERNS:
+             matches, confidence = pattern.matches(connections, stats)
+             if matches:
+                 return pattern, confidence
+
+         return None, 0.0
+
+     def extract_features(self, connections: List[Dict]) -> np.ndarray:
+         """Extract 40 features from connection records."""
+         if not connections:
+             return np.zeros(40)
+
+         features = np.zeros(40)
+
+         # Parse timestamps
+         timestamps = []
+         for conn in connections:
+             ts = conn.get('timestamp', 0)
+             if isinstance(ts, str):
+                 try:
+                     ts = datetime.fromisoformat(ts.replace('Z', '+00:00')).timestamp()
+                 except ValueError:
+                     ts = 0
+             timestamps.append(float(ts))
+
+         timestamps = np.array(sorted(timestamps))
+
+         # === TIMING FEATURES (0-9) ===
+         if len(timestamps) > 1:
+             intervals = np.diff(timestamps)
+             intervals = intervals[intervals > 0]
+
+             if len(intervals) > 0:
+                 features[0] = np.mean(intervals)
+                 features[1] = np.std(intervals)
+                 features[2] = np.std(intervals) / (np.mean(intervals) + 1e-6)
+                 features[3] = np.median(intervals)
+                 features[4] = np.min(intervals)
+                 features[5] = np.max(intervals)
+
+                 if len(intervals) > 2:
+                     sorted_intervals = np.sort(intervals)
+                     mode_estimate = sorted_intervals[len(sorted_intervals)//2]
+                     regularity = 1.0 - np.mean(np.abs(intervals - mode_estimate) / (mode_estimate + 1e-6))
+                     features[6] = max(0, min(1, regularity))
+
+                 if len(intervals) >= 8:
+                     fft = np.fft.fft(intervals - np.mean(intervals))
+                     power = np.abs(fft[:len(fft)//2])**2
+                     features[7] = np.max(power) / (np.sum(power) + 1e-6)
+
+         hours = [(ts % 86400) / 3600 for ts in timestamps]
+         features[8] = np.std(hours) / 12.0
+         business_hours = sum(1 for h in hours if 9 <= h <= 17) / len(hours)
+         features[9] = business_hours
+
+         # === DESTINATION FEATURES (10-17) ===
+         dst_ips = [conn.get('dst_ip', '') for conn in connections]
+         dst_ports = [conn.get('dst_port', 0) for conn in connections]
+
+         unique_dsts = len(set(dst_ips))
+         features[10] = unique_dsts
+         features[11] = unique_dsts / len(connections) if connections else 0
+
+         if dst_ips:
+             dst_counts = defaultdict(int)
+             for ip in dst_ips:
+                 dst_counts[ip] += 1
+             max_persistence = max(dst_counts.values())
+             features[12] = max_persistence / len(connections)
+             features[13] = len([c for c in dst_counts.values() if c > 1]) / len(dst_counts) if dst_counts else 0
+
+         unique_ports = len(set(dst_ports))
+         features[14] = unique_ports
+         features[15] = 1.0 if 443 in dst_ports or 80 in dst_ports else 0.0
+
+         high_port_ratio = sum(1 for p in dst_ports if p > 10000) / len(dst_ports) if dst_ports else 0
+         features[16] = high_port_ratio
+
+         msf_port_hit = any(p in self.METASPLOIT_PORTS for p in dst_ports)
+         features[17] = 1.0 if msf_port_hit else 0.0
+
+         # === PAYLOAD FEATURES (18-27) ===
+         bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
+         bytes_recv = [conn.get('bytes_recv', 0) for conn in connections]
+
+         if bytes_sent:
+             features[18] = np.mean(bytes_sent)
+             features[19] = np.std(bytes_sent)
+             features[20] = np.std(bytes_sent) / (np.mean(bytes_sent) + 1e-6)
+
+         if bytes_recv:
+             features[21] = np.mean(bytes_recv)
+             features[22] = np.std(bytes_recv)
+
+         total_sent = sum(bytes_sent)
+         total_recv = sum(bytes_recv)
+         features[23] = total_sent / (total_recv + 1e-6) if total_recv else 0
+
+         if len(bytes_sent) > 1:
+             unique_sizes = len(set(bytes_sent))
+             features[24] = 1.0 - (unique_sizes / len(bytes_sent))
+
+         features[25] = sum(1 for b in bytes_sent if b < 500) / len(bytes_sent) if bytes_sent else 0
+
+         if bytes_sent:
+             size_hist, _ = np.histogram(bytes_sent, bins=10)
+             size_hist = size_hist / (sum(size_hist) + 1e-6)
+             entropy = -np.sum(size_hist * np.log2(size_hist + 1e-6))
+             features[26] = entropy / 3.32
+
+         features[27] = len(connections)
+
+         # === EVASION DETECTION FEATURES (28-35) ===
+         if len(timestamps) > 5:
+             intervals = np.diff(timestamps)
+             if len(intervals) > 0:
+                 jitter_pattern = np.abs(np.diff(intervals))
+                 if len(jitter_pattern) > 0:
+                     features[28] = np.mean(jitter_pattern) / (np.mean(intervals) + 1e-6)
+
+                 autocorr = np.correlate(intervals - np.mean(intervals), intervals - np.mean(intervals), mode='full')
+                 autocorr = autocorr[len(autocorr)//2:]
+                 if len(autocorr) > 1:
+                     features[29] = autocorr[1] / (autocorr[0] + 1e-6)
+
+         if len(timestamps) > 3:
+             intervals = np.diff(timestamps)
+             burst_threshold = np.mean(intervals) * 0.1
+             bursts = sum(1 for i in intervals if i < burst_threshold)
+             features[30] = bursts / len(intervals) if intervals.size > 0 else 0
+
+         if timestamps.size > 0:
+             session_length = timestamps[-1] - timestamps[0]
+             features[31] = min(session_length / 86400, 1.0)
+
+         if len(timestamps) > 10:
+             window_size = len(timestamps) // 5
+             window_counts = []
+             for i in range(5):
+                 start_idx = i * window_size
+                 end_idx = start_idx + window_size
+                 window_counts.append(end_idx - start_idx)
+             features[32] = 1.0 - (np.std(window_counts) / (np.mean(window_counts) + 1e-6))
+
+         protocols = [conn.get('protocol', 'tcp').lower() for conn in connections]
+         unique_protocols = len(set(protocols))
+         features[33] = 1.0 if unique_protocols == 1 else 1.0 / unique_protocols
+
+         features[34] = sum(1 for p in dst_ports if p in [80, 443, 8080, 8443]) / len(dst_ports) if dst_ports else 0
+         features[35] = sum(1 for p in dst_ports if p == 443) / len(dst_ports) if dst_ports else 0
+
+         # === ADVANCED PATTERN FEATURES (36-39) ===
+         if timestamps.size > 0:
+             night_hours = sum(1 for ts in timestamps if 0 <= (ts % 86400) / 3600 < 6)
+             features[36] = night_hours / len(timestamps)
+
+         if len(timestamps) > 1:
+             intervals = np.diff(timestamps)
+             fast_beacon_ratio = sum(1 for i in intervals if 1 <= i <= 5) / len(intervals) if len(intervals) > 0 else 0
+             features[37] = fast_beacon_ratio
+
+         durations = [conn.get('duration', 0) for conn in connections]
+         if durations:
+             features[38] = np.mean(durations)
+             features[39] = np.std(durations) / (np.mean(durations) + 1e-6) if np.mean(durations) > 0 else 0
+
+         return features.astype(np.float32)
+
+
+ # ============================================================================
+ # LOG PARSING
+ # ============================================================================
+
+ class LogParser:
+     """Parses various log formats into connection records."""
+
+     @staticmethod
+     def parse_zeek_conn(log_line: str) -> Optional[Dict]:
+         """Parse Zeek/Bro conn.log format."""
+         try:
+             parts = log_line.strip().split('\t')
+             if len(parts) >= 15:
+                 return {
+                     'timestamp': float(parts[0]),
+                     'src_ip': parts[2],
+                     'src_port': int(parts[3]),
+                     'dst_ip': parts[4],
+                     'dst_port': int(parts[5]),
+                     'protocol': parts[6],
+                     'duration': float(parts[8]) if parts[8] != '-' else 0,
+                     'bytes_sent': int(parts[9]) if parts[9] != '-' else 0,
+                     'bytes_recv': int(parts[10]) if parts[10] != '-' else 0
+                 }
+         except (ValueError, IndexError):
+             pass
+         return None
+
+     @staticmethod
+     def parse_syslog(log_line: str) -> Optional[Dict]:
+         """Parse common syslog/netflow formats."""
+         patterns = [
+             r'(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}).*?(\d+\.\d+\.\d+\.\d+):(\d+)\s*->\s*(\d+\.\d+\.\d+\.\d+):(\d+)',
+             r'src=(\d+\.\d+\.\d+\.\d+).*?dst=(\d+\.\d+\.\d+\.\d+).*?sport=(\d+).*?dport=(\d+)',
+         ]
+
+         for pattern in patterns:
+             match = re.search(pattern, log_line)
+             if match:
+                 groups = match.groups()
+                 try:
+                     if len(groups) == 5:
+                         return {
+                             'timestamp': groups[0],
+                             'src_ip': groups[1],
+                             'src_port': int(groups[2]),
+                             'dst_ip': groups[3],
+                             'dst_port': int(groups[4]),
+                             'protocol': 'tcp',
+                             'bytes_sent': 0,
+                             'bytes_recv': 0
+                         }
+                 except ValueError:
+                     pass
+         return None
+
+     @staticmethod
+     def parse_json(log_line: str) -> Optional[Dict]:
+         """Parse JSON log format."""
+         try:
+             data = json.loads(log_line)
+             return {
+                 'timestamp': data.get('timestamp', data.get('@timestamp', 0)),
+                 'src_ip': data.get('src_ip', data.get('source_ip', data.get('src', ''))),
+                 'dst_ip': data.get('dst_ip', data.get('dest_ip', data.get('dst', ''))),
+                 'src_port': int(data.get('src_port', data.get('source_port', 0))),
+                 'dst_port': int(data.get('dst_port', data.get('dest_port', 0))),
+                 'protocol': data.get('protocol', 'tcp'),
+                 'bytes_sent': int(data.get('bytes_sent', data.get('bytes_out', 0))),
+                 'bytes_recv': int(data.get('bytes_recv', data.get('bytes_in', 0))),
+                 'duration': float(data.get('duration', 0))
+             }
+         except (ValueError, TypeError):
+             return None
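Zeek's `conn.log` is tab-separated, with `-` marking unset fields; the parser above maps columns 0-10 into a connection record. A self-contained sketch of the same column mapping (the sample log line is invented for illustration):

```python
def parse_zeek_conn(line):
    """Minimal Zeek conn.log parser: ts, uid, orig_h, orig_p, resp_h,
    resp_p, proto, service, duration, orig_bytes, resp_bytes."""
    parts = line.rstrip("\n").split("\t")
    if len(parts) < 11:
        return None
    num = lambda s, cast: cast(s) if s != "-" else 0  # '-' means unset in Zeek
    return {
        "timestamp": float(parts[0]),
        "src_ip": parts[2], "src_port": int(parts[3]),
        "dst_ip": parts[4], "dst_port": int(parts[5]),
        "protocol": parts[6],
        "duration": num(parts[8], float),
        "bytes_sent": num(parts[9], int),
        "bytes_recv": num(parts[10], int),
    }

line = "1710000000.0\tCuid1\t10.0.0.5\t52100\t203.0.113.9\t443\ttcp\t-\t1.5\t820\t12400"
rec = parse_zeek_conn(line)
print(rec["dst_ip"], rec["dst_port"], rec["bytes_recv"])
```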
+
+
+ # ============================================================================
+ # RECONNAISSANCE SUPPORT
+ # ============================================================================
+
+ class ReconSupport:
+     """
+     Reconnaissance and enrichment support for scripting.
+
+     Provides IP analysis, network intelligence, and enrichment functions
+     useful for security automation and scripting.
+     """
+
+     # Known CDN/cloud provider IP ranges (simplified - in production, use full lists)
+     KNOWN_CDNS = {
+         'cloudflare': ['104.16.0.0/12', '172.64.0.0/13', '131.0.72.0/22'],
+         'aws': ['52.0.0.0/6', '54.0.0.0/6'],
+         'google': ['35.190.0.0/16', '35.220.0.0/14', '142.250.0.0/15'],
+         'azure': ['13.64.0.0/11', '40.64.0.0/10'],
+         'akamai': ['23.0.0.0/12', '104.64.0.0/10'],
+     }
+
+     # Private IP ranges
+     PRIVATE_RANGES = [
+         ipaddress.ip_network('10.0.0.0/8'),
+         ipaddress.ip_network('172.16.0.0/12'),
+         ipaddress.ip_network('192.168.0.0/16'),
+         ipaddress.ip_network('127.0.0.0/8'),
+         ipaddress.ip_network('169.254.0.0/16'),
+     ]
+
+     @classmethod
+     def analyze_ip(cls, ip: str) -> Dict[str, Any]:
+         """
+         Analyze an IP address for reconnaissance purposes.
+
+         Returns enrichment data about the IP.
+         """
+         result = {
+             'ip': ip,
+             'is_valid': False,
+             'is_private': False,
+             'is_loopback': False,
+             'is_multicast': False,
+             'is_cdn': False,
+             'cdn_provider': None,
+             'ip_version': None,
+             'reverse_dns': None,
+             'numeric': None,
+         }
+
+         try:
+             ip_obj = ipaddress.ip_address(ip)
+             result['is_valid'] = True
+             result['ip_version'] = ip_obj.version
+             result['is_private'] = ip_obj.is_private
+             result['is_loopback'] = ip_obj.is_loopback
+             result['is_multicast'] = ip_obj.is_multicast
+
+             # Convert to numeric for range analysis
+             if isinstance(ip_obj, ipaddress.IPv4Address):
+                 result['numeric'] = int(ip_obj)
+
+             # Check CDN ranges
+             for cdn, ranges in cls.KNOWN_CDNS.items():
+                 for range_str in ranges:
+                     try:
+                         network = ipaddress.ip_network(range_str)
+                         if ip_obj in network:
+                             result['is_cdn'] = True
+                             result['cdn_provider'] = cdn
+                             break
+                     except ValueError:
+                         pass
+                 if result['is_cdn']:
+                     break
+
+             # Try reverse DNS (optional, may fail)
+             try:
+                 result['reverse_dns'] = socket.gethostbyaddr(ip)[0]
+             except OSError:
+                 pass
+
+         except ValueError:
+             pass
+
+         return result
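The enrichment above leans on the stdlib `ipaddress` module: an address object exposes `is_private`, and membership in a CIDR block is a plain `in` test against an `ip_network`. A condensed sketch of the private/CDN classification (the two CDN ranges are an illustrative subset of the table above, not a complete list):

```python
import ipaddress

# Illustrative subset of provider ranges; real tooling should load full lists
CDN_RANGES = {"cloudflare": ["104.16.0.0/12"], "aws": ["52.0.0.0/6"]}

def classify_ip(ip):
    """Return private/CDN classification for a single address."""
    addr = ipaddress.ip_address(ip)
    provider = next(
        (name for name, nets in CDN_RANGES.items()
         if any(addr in ipaddress.ip_network(n) for n in nets)),
        None,
    )
    return {"is_private": addr.is_private, "cdn_provider": provider}

print(classify_ip("192.168.1.10"))   # RFC 1918 address
print(classify_ip("104.16.1.1"))     # inside the Cloudflare /12
```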
+
+     @classmethod
+     def analyze_connection_patterns(cls, connections: List[Dict]) -> Dict[str, Any]:
+         """
+         Analyze connection patterns for reconnaissance.
+
+         Provides high-level pattern analysis useful for threat hunting.
+         """
+         if not connections:
+             return {'error': 'No connections provided'}
+
+         dst_ips = [conn.get('dst_ip', '') for conn in connections]
+         dst_ports = [conn.get('dst_port', 0) for conn in connections]
+         bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
+         bytes_recv = [conn.get('bytes_recv', 0) for conn in connections]
+
+         timestamps = sorted([conn.get('timestamp', 0) for conn in connections])
+         intervals = np.diff(timestamps) if len(timestamps) > 1 else []
+
+         # Destination analysis
+         unique_dsts = set(dst_ips)
+         dst_analysis = {}
+         for ip in unique_dsts:
+             if ip:
+                 dst_analysis[ip] = cls.analyze_ip(ip)
+
+         # Port analysis
+         port_counts = defaultdict(int)
+         for port in dst_ports:
+             port_counts[port] += 1
+
+         # Calculate statistics
+         result = {
+             'connection_count': len(connections),
+             'unique_destinations': len(unique_dsts),
+             'unique_ports': len(set(dst_ports)),
+
+             # Timing analysis
+             'timing': {
+                 'duration_seconds': timestamps[-1] - timestamps[0] if len(timestamps) > 1 else 0,
+                 'mean_interval': float(np.mean(intervals)) if len(intervals) > 0 else 0,
+                 'interval_stddev': float(np.std(intervals)) if len(intervals) > 0 else 0,
+                 'interval_cv': float(np.std(intervals) / (np.mean(intervals) + 1e-6)) if len(intervals) > 0 else 0,
+             },
+
+             # Volume analysis
+             'volume': {
+                 'total_sent': sum(bytes_sent),
+                 'total_recv': sum(bytes_recv),
+                 'mean_sent': float(np.mean(bytes_sent)) if bytes_sent else 0,
+                 'mean_recv': float(np.mean(bytes_recv)) if bytes_recv else 0,
+                 'sent_recv_ratio': sum(bytes_sent) / (sum(bytes_recv) + 1e-6) if bytes_recv else 0,
+             },
+
+             # Port distribution
+             'ports': dict(port_counts),
+
+             # Destination enrichment
+             'destinations': dst_analysis,
+
+             # Pattern indicators
+             'indicators': {
+                 'single_destination': len(unique_dsts) == 1,
+                 'consistent_timing': float(np.std(intervals) / (np.mean(intervals) + 1e-6)) < 0.3 if len(intervals) > 0 else False,
+                 'consistent_sizes': float(np.std(bytes_sent) / (np.mean(bytes_sent) + 1e-6)) < 0.2 if bytes_sent and np.mean(bytes_sent) > 0 else False,
+                 'uses_common_port': bool(set(dst_ports) & {80, 443, 53, 22}),
+                 'uses_high_port': any(p > 10000 for p in dst_ports),
+                 'has_cdn_destination': any(d.get('is_cdn', False) for d in dst_analysis.values()),
+                 'all_private_destinations': all(d.get('is_private', False) for d in dst_analysis.values() if d.get('is_valid')),
+             },
+         }
+
+         return result
+
+     @classmethod
+     def generate_iocs(cls, connections: List[Dict], result: Dict) -> Dict[str, List[str]]:
+         """
+         Generate Indicators of Compromise (IOCs) from analysis.
+
+         Returns IOCs suitable for threat intelligence sharing.
+         """
+         iocs = {
+             'ips': [],
+             'ports': [],
+             'timing_signatures': [],
+             'behavioral_indicators': [],
+         }
+
+         if not result.get('is_c2', False):
+             return iocs
+
+         # Extract destination IPs
+         dst_ips = set(conn.get('dst_ip', '') for conn in connections if conn.get('dst_ip'))
+         iocs['ips'] = list(dst_ips)
+
+         # Extract ports
+         dst_ports = set(conn.get('dst_port', 0) for conn in connections if conn.get('dst_port'))
+         iocs['ports'] = [str(p) for p in dst_ports]
+
+         # Generate timing signature
+         timestamps = sorted([conn.get('timestamp', 0) for conn in connections])
+         if len(timestamps) > 1:
+             intervals = np.diff(timestamps)
+             mean_interval = np.mean(intervals)
+             iocs['timing_signatures'].append(f"beacon_interval:{mean_interval:.1f}s±{np.std(intervals):.1f}s")
+
+         # Behavioral indicators
+         if result.get('c2_type'):
+             iocs['behavioral_indicators'].append(f"c2_type:{result['c2_type']}")
+         if result.get('evasion_score', 0) > 0.5:
+             iocs['behavioral_indicators'].append("evasion_detected")
+
+         return iocs
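The timing signature is a mean-interval ± standard-deviation string built from the beacon timestamps. The formatting step in isolation (hypothetical timestamp list):

```python
import numpy as np

def timing_signature(timestamps):
    """Render beacon cadence as 'beacon_interval:<mean>s±<stddev>s'."""
    intervals = np.diff(sorted(timestamps))
    return f"beacon_interval:{intervals.mean():.1f}s±{intervals.std():.1f}s"

# A perfectly regular 30-second beacon has zero deviation
print(timing_signature([0, 30, 60, 90]))
```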
+
+
+ # ============================================================================
+ # MAIN API CLASS
+ # ============================================================================
+
+ @dataclass
+ class AnalysisResult:
+     """Structured result from C2 analysis."""
+     is_c2: bool
+     c2_probability: float
+     anomaly_score: float
+     evasion_score: float
+     confidence: float
+     c2_type: str
+     c2_type_confidence: float
+     detection_method: str
+     immediate_detection: bool
+
+     # Context-based adjustments
+     context_applied: bool = False
+     original_probability: float = 0.0
+     probability_modifier: float = 1.0
+
+     # Legitimate pattern matching
+     matched_legitimate_pattern: Optional[str] = None
+     legitimate_confidence: float = 0.0
+
+     # Risk analysis
+     risk_factors: List[str] = field(default_factory=list)
+     mitigating_factors: List[str] = field(default_factory=list)
+
+     # Service classification
+     service_type: str = "unknown"
+
+     # Recommendations
+     recommendations: List[str] = field(default_factory=list)
+
+     # Raw features
+     features: List[float] = field(default_factory=list)
+
+     def to_dict(self) -> Dict[str, Any]:
+         return asdict(self)
+
+     def __repr__(self) -> str:
+         status = "🚨 C2 DETECTED" if self.is_c2 else "✅ Clean"
+         return f"<AnalysisResult: {status} | prob={self.c2_probability:.3f} | type={self.c2_type}>"
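`AnalysisResult.to_dict()` is just `dataclasses.asdict`, which recursively converts the dataclass (including its list fields) into a JSON-friendly dict. A stripped-down analogue of the pattern (hypothetical `MiniResult` with three of the fields):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class MiniResult:
    is_c2: bool
    c2_probability: float
    risk_factors: list = field(default_factory=list)  # mutable default via factory

r = MiniResult(is_c2=True, c2_probability=0.91, risk_factors=["metasploit port"])
print(asdict(r))  # ready for json.dumps
```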
+
+
+ class C2Sentinel:
+     """
+     Main API for LogBERT-C2Sentinel.
+
+     Advanced C2 detection with context inference and reconnaissance support.
+
+     Usage:
+         # Load pre-trained model
+         sentinel = C2Sentinel.load('c2_sentinel')
+
+         # Basic analysis
+         result = sentinel.analyze(connections)
+
+         # With context
+         context = ConnectionContext(process_name='sshd', known_good=True)
+         result = sentinel.analyze(connections, context=context)
+
+         # Batch analysis
+         results = sentinel.analyze_batch([conn_list1, conn_list2, ...])
+
+         # With reconnaissance
+         recon = sentinel.recon.analyze_connection_patterns(connections)
+         iocs = sentinel.recon.generate_iocs(connections, result)
+     """
+
+     def __init__(self, model: LogBERTC2Sentinel, config: C2SentinelConfig, device: str = 'auto'):
+         self.model = model
+         self.config = config
+         self.feature_extractor = FeatureExtractor()
+         self.log_parser = LogParser()
+         self.context_engine = ContextInference()
+         self.recon = ReconSupport()
+
+         if device == 'auto':
+             self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+         else:
+             self.device = torch.device(device)
+
+         self.model.to(self.device)
+         self.model.eval()
+
+     def analyze(
+         self,
+         connections: List[Dict],
+         threshold: float = 0.5,
+         context: Optional[ConnectionContext] = None,
+         include_features: bool = False,
+         strict_mode: bool = False
+     ) -> AnalysisResult:
+         """
+         Analyze connections for C2 activity.
+
+         Args:
+             connections: List of connection records
+             threshold: Detection threshold (default 0.5, use 0.7 for fewer false positives)
+             context: Optional ConnectionContext with additional metadata
+             include_features: Include raw feature vector in result
+             strict_mode: Require higher confidence for C2 detection
+
+         Returns:
+             AnalysisResult with comprehensive detection information
+         """
+         ports = set(conn.get('dst_port', 0) for conn in connections)
+
+         # Initialize result
+         result = AnalysisResult(
+             is_c2=False,
+             c2_probability=0.0,
+             anomaly_score=0.0,
+             evasion_score=0.0,
+             confidence=0.0,
+             c2_type='none',
+             c2_type_confidence=0.0,
+             detection_method='none',
+             immediate_detection=False,
+         )
+
+         if not connections:
+             return result
+
+         # ================================================================
+         # PHASE 1: Check for known legitimate patterns FIRST
+         # ================================================================
+
+         # Check SSH keepalive specifically (common false positive)
+         is_ssh_keepalive, ssh_ka_confidence = self.feature_extractor.check_ssh_keepalive(connections)
+         if is_ssh_keepalive:
+             result.matched_legitimate_pattern = "ssh_keepalive"
+             result.legitimate_confidence = ssh_ka_confidence
+             result.service_type = ServiceType.SSH.value
+             result.mitigating_factors.append(f"Matches SSH keepalive pattern (confidence: {ssh_ka_confidence:.2f})")
+             result.detection_method = DetectionMethod.WHITELIST.value
+             result.recommendations.append("SSH keepalive is normal system behavior")
+
+             # SSH keepalive should NOT be flagged as C2
+             result.is_c2 = False
+             result.c2_probability = 0.05  # Very low probability
+             result.confidence = ssh_ka_confidence
+             return result
+
+         # Check other legitimate patterns
+         matched_pattern, pattern_confidence = self.feature_extractor.check_legitimate_patterns(connections)
+         if matched_pattern and pattern_confidence > 0.7:
+             result.matched_legitimate_pattern = matched_pattern.name
+             result.legitimate_confidence = pattern_confidence
+             result.service_type = matched_pattern.service_type.value
+             result.mitigating_factors.append(f"Matches {matched_pattern.name} pattern: {matched_pattern.description}")
+
+         # ================================================================
+         # PHASE 2: Check for high-confidence C2 signatures
+         # ================================================================
+
+         is_msf, msf_confidence = self.feature_extractor.check_metasploit_signature(connections)
+         if is_msf:
+             result.is_c2 = True
+             result.c2_probability = msf_confidence
+             result.anomaly_score = 0.95
+             result.evasion_score = 0.1
+             result.confidence = msf_confidence
+             result.c2_type = 'metasploit'
+             result.c2_type_confidence = msf_confidence
+             result.immediate_detection = True
+             result.detection_method = DetectionMethod.SIGNATURE.value
+             result.risk_factors.append("Matches Metasploit signature (high-confidence C2 port + behavior)")
+             if include_features:
+                 result.features = self.feature_extractor.extract_features(connections).tolist()
+             return result
+
+         # ================================================================
+         # PHASE 3: ML-based behavioral analysis
+         # ================================================================
+
+         features = self.feature_extractor.extract_features(connections)
+         features_tensor = torch.tensor(features, dtype=torch.float32).unsqueeze(0).to(self.device)
+
+         with torch.no_grad():
+             outputs = self.model(features_tensor)
+
+         c2_prob = torch.sigmoid(outputs['c2_logits']).item()
+         result.original_probability = c2_prob
+         result.anomaly_score = outputs['anomaly_score'].item()
+         result.evasion_score = outputs['evasion_score'].item()
+         result.confidence = outputs['confidence'].item()
+
+         # Get C2 type prediction
+         c2_type_probs = F.softmax(outputs['c2_type_logits'], dim=-1)
+         c2_type_idx = torch.argmax(c2_type_probs, dim=-1).item()
+         result.c2_type = FeatureExtractor.C2_TYPES[c2_type_idx]
+         result.c2_type_confidence = c2_type_probs[0, c2_type_idx].item()
+
+         # ================================================================
+         # PHASE 4: Behavioral refinement
+         # ================================================================
+
+         dst_ips = set(conn.get('dst_ip', '') for conn in connections)
+         bytes_recv = [conn.get('bytes_recv', 0) for conn in connections]
+         bytes_sent = [conn.get('bytes_sent', 0) for conn in connections]
+
+         recv_cv = np.std(bytes_recv) / (np.mean(bytes_recv) + 1e-6) if bytes_recv else 0
+         sent_cv = np.std(bytes_sent) / (np.mean(bytes_sent) + 1e-6) if bytes_sent else 0
+         total_sent = sum(bytes_sent)
+         total_recv = sum(bytes_recv)
+         req_resp_ratio = total_sent / (total_recv + 1e-6) if total_recv else float('inf')
+
+         # Multiple destinations with high variance = likely benign
+         if len(dst_ips) > 5 and bytes_recv and recv_cv > 2:
+             c2_prob *= 0.4
+             result.mitigating_factors.append("Multiple destinations with high response variance")
+
+         # Single destination analysis
+         if len(dst_ips) == 1 and len(connections) >= 5:
+             timestamps = sorted([c.get('timestamp', 0) for c in connections])
+             if len(timestamps) > 1:
+                 intervals = np.diff(timestamps)
+                 mean_interval = np.mean(intervals) if len(intervals) > 0 else 0
+                 interval_cv = np.std(intervals) / (mean_interval + 1e-6) if mean_interval > 0 else 0
+
+                 # Response variance analysis
+                 if recv_cv > 0.5:
+                     c2_prob *= 0.5
+                     result.mitigating_factors.append("High response size variance (likely data retrieval)")
+                 elif recv_cv < 0.2 and sent_cv < 0.2:
+                     c2_prob = min(1.0, c2_prob * 1.4)
+                     result.risk_factors.append("Very consistent request/response sizes")
+
+                 # Request/response ratio
+                 if req_resp_ratio < 0.1:
+                     c2_prob *= 0.4
+                     result.mitigating_factors.append("Asymmetric traffic (small requests, large responses)")
+                 elif 0.2 < req_resp_ratio < 0.8:
+                     c2_prob = min(1.0, c2_prob * 1.2)
+                     result.risk_factors.append("Balanced request/response ratio (C2-like)")
+
+                 # Beacon regularity
+                 if interval_cv < 0.3 and mean_interval > 0 and recv_cv < 0.3:
+                     c2_prob = min(1.0, c2_prob * 1.3)
+                     result.risk_factors.append("Regular timing with consistent sizes")
+
+                 # Slow beacon detection
+                 if mean_interval > 60 and recv_cv < 0.15 and sent_cv < 0.15:
+                     c2_prob = min(1.0, c2_prob * 1.5)
+                     result.risk_factors.append("APT-style slow beacon pattern")
+
+         # ================================================================
+         # PHASE 5: Apply legitimate pattern discount
+         # ================================================================
+
+         if matched_pattern and pattern_confidence > 0.5:
+             # Reduce probability based on pattern match
+             discount = 1.0 - (pattern_confidence * 0.7)  # Up to 70% reduction
+             c2_prob *= discount
+             result.mitigating_factors.append(f"Legitimate pattern match reduces probability by {(1-discount)*100:.0f}%")
+
+         # ================================================================
+         # PHASE 6: Apply context inference (always check whitelist/blacklist)
+         # ================================================================
+
+         # Always run inference to check whitelist/blacklist
+         inference = self.context_engine.infer(connections, context)
+
+         if inference['probability_modifier'] != 1.0 or context:
+             result.context_applied = True
+             result.probability_modifier = inference['probability_modifier']
+             c2_prob *= inference['probability_modifier']
+
+         result.risk_factors.extend(inference['risk_factors'])
+         result.mitigating_factors.extend(inference['mitigating_factors'])
+         result.recommendations.extend(inference['recommendations'])
+
+         if inference['is_whitelisted']:
+             result.mitigating_factors.append("Destination is whitelisted")
+         if inference['is_blacklisted']:
+             result.risk_factors.append("Destination is blacklisted")
+
+         if inference['service_type'] != ServiceType.UNKNOWN:
+             result.service_type = inference['service_type'].value
+
+         # ================================================================
+         # PHASE 7: Final decision
+         # ================================================================
+
+         # Apply strict mode if requested
+         effective_threshold = threshold
+         if strict_mode:
+             effective_threshold = max(threshold, 0.7)
+
+         result.c2_probability = min(max(c2_prob, 0.0), 1.0)
+         result.is_c2 = result.c2_probability >= effective_threshold
+         result.detection_method = DetectionMethod.ML.value if not result.context_applied else DetectionMethod.CONTEXT.value
+
+         if result.is_c2:
+             result.c2_type = FeatureExtractor.C2_TYPES[c2_type_idx] if c2_type_idx > 0 else 'unknown'
+         else:
+             result.c2_type = 'none'
+
+         # Add recommendations based on analysis
+         if result.is_c2:
+             result.recommendations.append("Investigate destination IP for known C2 infrastructure")
+             result.recommendations.append("Check for associated process and user activity")
+             if result.evasion_score > 0.5:
+                 result.recommendations.append("C2 may be using evasion techniques - correlate with other telemetry")
+
+         if include_features:
+             result.features = features.tolist()
+
+         return result
+
1634
+ def analyze_batch(
1635
+ self,
1636
+ connection_groups: List[List[Dict]],
1637
+ threshold: float = 0.5,
1638
+ contexts: Optional[List[ConnectionContext]] = None,
1639
+ parallel: bool = True
1640
+ ) -> List[AnalysisResult]:
1641
+ """
1642
+ Analyze multiple connection groups efficiently.
1643
+
1644
+ Args:
1645
+ connection_groups: List of connection lists to analyze
1646
+ threshold: Detection threshold
1647
+ contexts: Optional list of contexts (one per group)
1648
+ parallel: Reserved for future batched inference (groups are currently analyzed sequentially)
1649
+
1650
+ Returns:
1651
+ List of AnalysisResults
1652
+ """
1653
+ results = []
1654
+
1655
+ for i, connections in enumerate(connection_groups):
1656
+ context = contexts[i] if contexts and i < len(contexts) else None
1657
+ result = self.analyze(connections, threshold=threshold, context=context)
1658
+ results.append(result)
1659
+
1660
+ return results
1661
+
1662
+ def analyze_logs(
1663
+ self,
1664
+ log_lines: List[str],
1665
+ group_by_dst: bool = True,
1666
+ threshold: float = 0.5
1667
+ ) -> List[Dict]:
1668
+ """Analyze raw log lines for C2 activity."""
1669
+ connections = []
1670
+ for line in log_lines:
1671
+ conn = self.log_parser.parse_json(line)
1672
+ if not conn:
1673
+ conn = self.log_parser.parse_zeek_conn(line)
1674
+ if not conn:
1675
+ conn = self.log_parser.parse_syslog(line)
1676
+ if conn:
1677
+ connections.append(conn)
1678
+
1679
+ if not connections:
1680
+ return []
1681
+
1682
+ results = []
1683
+
1684
+ if group_by_dst:
1685
+ grouped = defaultdict(list)
1686
+ for conn in connections:
1687
+ grouped[conn.get('dst_ip', 'unknown')].append(conn)
1688
+
1689
+ for dst_ip, group_conns in grouped.items():
1690
+ if len(group_conns) >= 3:
1691
+ result = self.analyze(group_conns, threshold)
1692
+ result_dict = result.to_dict()
1693
+ result_dict['dst_ip'] = dst_ip
1694
+ result_dict['connection_count'] = len(group_conns)
1695
+ results.append(result_dict)
1696
+ else:
1697
+ result = self.analyze(connections, threshold)
1698
+ result_dict = result.to_dict()
1699
+ result_dict['connection_count'] = len(connections)
1700
+ results.append(result_dict)
1701
+
1702
+ return sorted(results, key=lambda x: x['c2_probability'], reverse=True)
1703
+
1704
+ def add_whitelist(self, ips: Optional[List[str]] = None, domains: Optional[List[str]] = None):
1705
+ """Add IPs or domains to whitelist."""
1706
+ if ips:
1707
+ for ip in ips:
1708
+ self.context_engine.add_whitelist_ip(ip)
1709
+ if domains:
1710
+ for domain in domains:
1711
+ self.context_engine.add_whitelist_domain(domain)
1712
+
1713
+ def add_blacklist(self, ips: Optional[List[str]] = None, domains: Optional[List[str]] = None):
1714
+ """Add IPs or domains to blacklist."""
1715
+ if ips:
1716
+ for ip in ips:
1717
+ self.context_engine.add_blacklist_ip(ip)
1718
+ if domains:
1719
+ for domain in domains:
1720
+ self.context_engine.add_blacklist_domain(domain)
1721
+
1722
+ def save(self, path: str):
1723
+ """Save model to safetensors format."""
1724
+ path = Path(path)
1725
+ model_path = path.with_suffix('.safetensors')
1726
+ save_file(self.model.state_dict(), str(model_path))
1727
+
1728
+ config_path = path.with_suffix('.json')
1729
+ with open(config_path, 'w') as f:
1730
+ json.dump(self.config.to_dict(), f, indent=2)
1731
+
1732
+ print(f"Model saved to {model_path}")
1733
+ print(f"Config saved to {config_path}")
1734
+
1735
+ @classmethod
1736
+ def load(cls, path: str, device: str = 'auto') -> 'C2Sentinel':
1737
+ """Load model from safetensors format."""
1738
+ path = Path(path)
1739
+
1740
+ if path.suffix == '.safetensors':
1741
+ model_path = path
1742
+ config_path = path.with_suffix('.json')
1743
+ else:
1744
+ model_path = path.with_suffix('.safetensors')
1745
+ config_path = path.with_suffix('.json')
1746
+
1747
+ with open(config_path, 'r') as f:
1748
+ config = C2SentinelConfig.from_dict(json.load(f))
1749
+
1750
+ model = LogBERTC2Sentinel(config)
1751
+ state_dict = load_file(str(model_path))
1752
+ model.load_state_dict(state_dict)
1753
+
1754
+ return cls(model, config, device)
1755
+
1756
+ @classmethod
1757
+ def create_new(cls, device: str = 'auto') -> 'C2Sentinel':
1758
+ """Create a new untrained model."""
1759
+ config = C2SentinelConfig()
1760
+ model = LogBERTC2Sentinel(config)
1761
+ return cls(model, config, device)
1762
+
1763
+
1764
+ # ============================================================================
1765
+ # CONVENIENCE FUNCTIONS
1766
+ # ============================================================================
1767
+
1768
+ def load_model(path: str, device: str = 'auto') -> C2Sentinel:
1769
+ """Load a pre-trained C2Sentinel model."""
1770
+ return C2Sentinel.load(path, device)
1771
+
1772
+
1773
+ def create_model(device: str = 'auto') -> C2Sentinel:
1774
+ """Create a new untrained C2Sentinel model."""
1775
+ return C2Sentinel.create_new(device)
1776
+
1777
+
1778
+ def quick_analyze(connections: List[Dict], model_path: str = 'c2_sentinel') -> AnalysisResult:
1779
+ """Quick one-shot analysis without keeping model in memory."""
1780
+ sentinel = C2Sentinel.load(model_path)
1781
+ return sentinel.analyze(connections)
1782
+
1783
+
1784
+ # ============================================================================
1785
+ # CLI AND TESTING
1786
+ # ============================================================================
1787
+
1788
+ if __name__ == '__main__':
1789
+ print("LogBERT-C2Sentinel v2.0: Advanced C2 Detection with Context Inference")
1790
+ print("=" * 70)
1791
+
1792
+ sentinel = C2Sentinel.create_new()
1793
+ print(f"Model created with {sentinel.config.num_features} features")
1794
+ print(f"Device: {sentinel.device}")
1795
+
1796
+ # Test 1: Metasploit signature detection
1797
+ print("\n[TEST 1] Metasploit Meterpreter (port 4444)...")
1798
+ msf_connections = [
1799
+ {'timestamp': 1000 + i*5, 'dst_ip': '192.168.1.100', 'dst_port': 4444,
1800
+ 'bytes_sent': 150, 'bytes_recv': 400}
1801
+ for i in range(8)
1802
+ ]
1803
+ result = sentinel.analyze(msf_connections)
1804
+ print(f" {result}")
1805
+
1806
+ # Test 2: SSH keepalive (should NOT be flagged)
1807
+ print("\n[TEST 2] SSH Keepalive (should be clean)...")
1808
+ ssh_keepalive = [
1809
+ {'timestamp': 1000 + i*30, 'dst_ip': '192.168.1.10', 'dst_port': 22,
1810
+ 'bytes_sent': 48, 'bytes_recv': 48}
1811
+ for i in range(15)
1812
+ ]
1813
+ result = sentinel.analyze(ssh_keepalive)
1814
+ print(f" {result}")
1815
+ print(f" Matched pattern: {result.matched_legitimate_pattern}")
1816
+ print(f" Mitigating factors: {result.mitigating_factors}")
1817
+
1818
+ # Test 3: SSH with context
1819
+ print("\n[TEST 3] SSH Keepalive with process context...")
1820
+ context = ConnectionContext(process_name='sshd', known_good=True)
1821
+ result = sentinel.analyze(ssh_keepalive, context=context)
1822
+ print(f" {result}")
1823
+
1824
+ # Test 4: C2 beacon on 443
1825
+ print("\n[TEST 4] C2 Beacon on port 443...")
1826
+ c2_beacon = [
1827
+ {'timestamp': 1000 + i*60, 'dst_ip': '10.10.10.10', 'dst_port': 443,
1828
+ 'bytes_sent': 200, 'bytes_recv': 500}
1829
+ for i in range(10)
1830
+ ]
1831
+ result = sentinel.analyze(c2_beacon)
1832
+ print(f" {result}")
1833
+
1834
+ # Test 5: Benign browsing
1835
+ print("\n[TEST 5] Benign web browsing...")
1836
+ import random
1837
+ browsing = [
1838
+ {'timestamp': 1000 + i*random.uniform(5, 120),
1839
+ 'dst_ip': f"{random.randint(1,200)}.{random.randint(0,255)}.{random.randint(0,255)}.{random.randint(1,254)}",
1840
+ 'dst_port': 443,
1841
+ 'bytes_sent': random.randint(500, 3000),
1842
+ 'bytes_recv': random.randint(10000, 500000)}
1843
+ for i in range(15)
1844
+ ]
1845
+ result = sentinel.analyze(browsing)
1846
+ print(f" {result}")
1847
+
1848
+ # Test reconnaissance support
1849
+ print("\n[TEST 6] Reconnaissance support...")
1850
+ ip_info = sentinel.recon.analyze_ip('104.16.132.229')
1851
+ print(f" IP Analysis: {ip_info}")
1852
+
1853
+ print("\n" + "=" * 70)
1854
+ print("Model ready for deployment!")
1855
+ print("=" * 70)
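The slow-beacon check in PHASE 4 above keys off the mean inter-connection interval and the coefficient of variation (CV) of packet sizes. A standalone, stdlib-only sketch of that timing computation — independent of the model, reusing the `> 60` s and `< 0.15` thresholds from the code above — might look like:

```python
import statistics

def beacon_timing_stats(connections):
    """Compute mean inter-connection interval and byte-size CVs."""
    ts = sorted(c['timestamp'] for c in connections)
    intervals = [b - a for a, b in zip(ts, ts[1:])]

    def cv(values):
        m = statistics.mean(values)
        return statistics.pstdev(values) / m if m else 0.0

    return {
        'mean_interval': statistics.mean(intervals),
        'sent_cv': cv([c['bytes_sent'] for c in connections]),
        'recv_cv': cv([c['bytes_recv'] for c in connections]),
    }

# A perfectly regular 90 s beacon with fixed sizes: CVs of 0.0
beacon = [{'timestamp': 1000 + i * 90, 'bytes_sent': 200, 'bytes_recv': 500}
          for i in range(10)]
stats = beacon_timing_stats(beacon)
slow_beacon = (stats['mean_interval'] > 60 and
               stats['recv_cv'] < 0.15 and stats['sent_cv'] < 0.15)
```

Real traffic jitters, so the CV thresholds, not exact equality, do the work here.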
examples/advanced_usage.py ADDED
@@ -0,0 +1,236 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ C2Sentinel Advanced Usage Example
4
+
5
+ Demonstrates context enrichment, whitelist/blacklist management,
6
+ batch analysis, log parsing, and reconnaissance features.
7
+ """
8
+
9
+ from c2sentinel import C2Sentinel, ConnectionContext
10
+
11
+ def main():
12
+ # Load the model
13
+ sentinel = C2Sentinel.load('c2_sentinel')
14
+
15
+ # =========================================================================
16
+ # Context Enrichment
17
+ # =========================================================================
18
+
19
+ print("=" * 60)
20
+ print("Context Enrichment")
21
+ print("=" * 60)
22
+
23
+ # Create connections that might look suspicious
24
+ connections = []
25
+ timestamp = 1705600000
26
+
27
+ for i in range(10):
28
+ connections.append({
29
+ 'timestamp': timestamp + (i * 60),
30
+ 'dst_ip': '10.0.0.50',
31
+ 'dst_port': 443,
32
+ 'bytes_sent': 200,
33
+ 'bytes_recv': 500,
34
+ })
35
+
36
+ # Analyze without context
37
+ result_no_ctx = sentinel.analyze(connections)
38
+ print(f"Without context: is_c2={result_no_ctx.is_c2}, prob={result_no_ctx.c2_probability:.2f}")
39
+
40
+ # Analyze with context indicating this is a known monitoring agent
41
+ context = ConnectionContext(
42
+ process_name='prometheus',
43
+ known_good=True,
44
+ ip_reputation=0.95,
45
+ dns_queries=['metrics.internal.company.com']
46
+ )
47
+
48
+ result_with_ctx = sentinel.analyze(connections, context=context)
49
+ print(f"With context: is_c2={result_with_ctx.is_c2}, prob={result_with_ctx.c2_probability:.2f}")
50
+ print(f"Context applied: {result_with_ctx.context_applied}")
51
+ print()
52
+
53
+ # =========================================================================
54
+ # Whitelist and Blacklist Management
55
+ # =========================================================================
56
+
57
+ print("=" * 60)
58
+ print("Whitelist and Blacklist")
59
+ print("=" * 60)
60
+
61
+ # Add trusted infrastructure to whitelist
62
+ sentinel.add_whitelist(
63
+ ips=['8.8.8.8', '1.1.1.1'],
64
+ domains=['google.com', 'cloudflare.com']
65
+ )
66
+
67
+ # Add known malicious indicators to blacklist
68
+ sentinel.add_blacklist(
69
+ ips=['10.10.10.10'],
70
+ domains=['malware.example.com']
71
+ )
72
+
73
+ # Test whitelisted IP
74
+ dns_connections = []
75
+ for i in range(10):
76
+ dns_connections.append({
77
+ 'timestamp': timestamp + (i * 5),
78
+ 'dst_ip': '8.8.8.8',
79
+ 'dst_port': 53,
80
+ 'bytes_sent': 50,
81
+ 'bytes_recv': 200,
82
+ })
83
+
84
+ result = sentinel.analyze(dns_connections)
85
+ print(f"Whitelisted DNS (8.8.8.8): is_c2={result.is_c2}")
86
+
87
+ # Test blacklisted IP
88
+ blacklist_connections = []
89
+ for i in range(10):
90
+ blacklist_connections.append({
91
+ 'timestamp': timestamp + (i * 60),
92
+ 'dst_ip': '10.10.10.10',
93
+ 'dst_port': 443,
94
+ 'bytes_sent': 200,
95
+ 'bytes_recv': 500,
96
+ })
97
+
98
+ result = sentinel.analyze(blacklist_connections)
99
+ print(f"Blacklisted IP (10.10.10.10): is_c2={result.is_c2}, prob={result.c2_probability:.2f}")
100
+ print()
101
+
102
+ # =========================================================================
103
+ # Batch Analysis
104
+ # =========================================================================
105
+
106
+ print("=" * 60)
107
+ print("Batch Analysis")
108
+ print("=" * 60)
109
+
110
+ # Create multiple connection groups for batch processing
111
+ connection_groups = []
112
+
113
+ # Group 1: Normal web browsing (variable sizes, multiple destinations)
114
+ web_group = []
115
+ for i, dest in enumerate(['93.184.216.34', '151.101.1.140', '172.217.14.206']):
116
+ for j in range(3):
117
+ web_group.append({
118
+ 'timestamp': timestamp + (i * 10) + j,
119
+ 'dst_ip': dest,
120
+ 'dst_port': 443,
121
+ 'bytes_sent': 100 + (j * 50),
122
+ 'bytes_recv': 5000 + (j * 1000),
123
+ })
124
+ connection_groups.append(web_group)
125
+
126
+ # Group 2: Potential C2 beacon
127
+ beacon_group = []
128
+ for i in range(10):
129
+ beacon_group.append({
130
+ 'timestamp': timestamp + (i * 60),
131
+ 'dst_ip': '45.33.32.156',
132
+ 'dst_port': 8080,
133
+ 'bytes_sent': 200,
134
+ 'bytes_recv': 500,
135
+ })
136
+ connection_groups.append(beacon_group)
137
+
138
+ # Group 3: Database connection pool
139
+ db_group = []
140
+ for i in range(15):
141
+ db_group.append({
142
+ 'timestamp': timestamp + (i * 0.5),
143
+ 'dst_ip': '10.0.1.100',
144
+ 'dst_port': 5432,
145
+ 'bytes_sent': 100 + (i * 10),
146
+ 'bytes_recv': 2000 + (i * 500),
147
+ })
148
+ connection_groups.append(db_group)
149
+
150
+ # Analyze all groups at once
151
+ results = sentinel.analyze_batch(connection_groups)
152
+
153
+ for i, result in enumerate(results):
154
+ print(f"Group {i+1}: is_c2={result.is_c2}, prob={result.c2_probability:.2f}, "
155
+ f"pattern={result.matched_legitimate_pattern or 'None'}")
156
+ print()
157
+
158
+ # =========================================================================
159
+ # Reconnaissance Features
160
+ # =========================================================================
161
+
162
+ print("=" * 60)
163
+ print("Reconnaissance Features")
164
+ print("=" * 60)
165
+
166
+ # IP Analysis
167
+ print("\nIP Analysis:")
168
+ ip_info = sentinel.recon.analyze_ip('104.16.132.229')
169
+ print(f" IP: 104.16.132.229")
170
+ print(f" Valid: {ip_info['is_valid']}")
171
+ print(f" Private: {ip_info['is_private']}")
172
+ print(f" CDN: {ip_info['is_cdn']}")
173
+ if ip_info['cdn_provider']:
174
+ print(f" CDN Provider: {ip_info['cdn_provider']}")
175
+
176
+ # Connection Pattern Analysis
177
+ print("\nConnection Pattern Analysis:")
178
+ patterns = sentinel.recon.analyze_connection_patterns(beacon_group)
179
+ print(f" Mean Interval: {patterns['timing']['mean_interval']:.2f}s")
180
+ print(f" Interval CV: {patterns['timing']['interval_cv']:.4f}")
181
+ print(f" Mean Bytes Sent: {patterns['volume']['mean_bytes_sent']:.0f}")
182
+ print(f" Single Destination: {patterns['behavioral']['single_destination']}")
183
+
184
+ # IOC Generation (only if C2 detected)
185
+ print("\nIOC Generation:")
186
+ beacon_result = sentinel.analyze(beacon_group)
187
+ if beacon_result.is_c2:
188
+ iocs = sentinel.recon.generate_iocs(beacon_group, beacon_result.to_dict())
189
+ print(f" IPs: {iocs['ips']}")
190
+ print(f" Ports: {iocs['ports']}")
191
+ print(f" Timing Signature: {iocs['timing_signatures']}")
192
+ print()
193
+
194
+ # =========================================================================
195
+ # Log File Parsing
196
+ # =========================================================================
197
+
198
+ print("=" * 60)
199
+ print("Log File Parsing")
200
+ print("=" * 60)
201
+
202
+ # Example with JSON log format
203
+ json_logs = [
204
+ '{"timestamp": 1705600000, "dst_ip": "10.0.0.1", "dst_port": 443, "bytes_sent": 200, "bytes_recv": 500}',
205
+ '{"timestamp": 1705600060, "dst_ip": "10.0.0.1", "dst_port": 443, "bytes_sent": 200, "bytes_recv": 500}',
206
+ '{"timestamp": 1705600120, "dst_ip": "10.0.0.1", "dst_port": 443, "bytes_sent": 200, "bytes_recv": 500}',
207
+ '{"timestamp": 1705600180, "dst_ip": "10.0.0.1", "dst_port": 443, "bytes_sent": 200, "bytes_recv": 500}',
208
+ '{"timestamp": 1705600240, "dst_ip": "10.0.0.1", "dst_port": 443, "bytes_sent": 200, "bytes_recv": 500}',
209
+ ]
210
+
211
+ results = sentinel.analyze_logs(json_logs, group_by_dst=True)
212
+ print(f"Analyzed {len(json_logs)} log lines")
213
+ for result in results:
214
+ print(f" {result['dst_ip']}: is_c2={result['is_c2']}, prob={result['c2_probability']:.2f}")
215
+ print()
216
+
217
+ # =========================================================================
218
+ # Result Object Details
219
+ # =========================================================================
220
+
221
+ print("=" * 60)
222
+ print("Full Result Object")
223
+ print("=" * 60)
224
+
225
+ result = sentinel.analyze(beacon_group)
226
+ result_dict = result.to_dict()
227
+
228
+ for key, value in result_dict.items():
229
+ if isinstance(value, list) and len(value) > 3:
230
+ print(f" {key}: [{value[0]}, {value[1]}, ... ({len(value)} items)]")
231
+ else:
232
+ print(f" {key}: {value}")
233
+
234
+
235
+ if __name__ == '__main__':
236
+ main()
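`analyze_logs` above groups parsed connections by destination IP and only analyzes groups with at least three connections. A stdlib-only sketch of just that grouping step (no model required; JSON field names as in the examples above, with the Zeek/syslog fallbacks omitted):

```python
import json
from collections import defaultdict

def group_by_destination(log_lines, min_group=3):
    """Parse JSON log lines and bucket them by dst_ip, as analyze_logs does."""
    grouped = defaultdict(list)
    for line in log_lines:
        try:
            conn = json.loads(line)
        except json.JSONDecodeError:
            continue  # the real parser falls back to Zeek conn / syslog formats
        grouped[conn.get('dst_ip', 'unknown')].append(conn)
    # Keep only destinations with enough connections to be worth analyzing
    return {dst: conns for dst, conns in grouped.items() if len(conns) >= min_group}

lines = [
    '{"timestamp": 1705600000, "dst_ip": "10.0.0.1", "dst_port": 443}',
    '{"timestamp": 1705600060, "dst_ip": "10.0.0.1", "dst_port": 443}',
    '{"timestamp": 1705600120, "dst_ip": "10.0.0.1", "dst_port": 443}',
    '{"timestamp": 1705600180, "dst_ip": "10.0.0.2", "dst_port": 443}',
    'not json',
]
groups = group_by_destination(lines)  # only 10.0.0.1 survives the min_group cut
```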
examples/basic_usage.py ADDED
@@ -0,0 +1,105 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ C2Sentinel Basic Usage Example
4
+
5
+ Demonstrates loading the model and analyzing network connections
6
+ for C2 beacon detection.
7
+ """
8
+
9
+ from c2sentinel import C2Sentinel
10
+
11
+ def main():
12
+ # Load the model
13
+ sentinel = C2Sentinel.load('c2_sentinel')
14
+
15
+ # Example 1: Analyze a series of connections to a single destination
16
+ # This pattern shows regular 60-second intervals with consistent packet sizes
17
+ # - a common C2 beacon signature
18
+
19
+ connections = []
20
+ timestamp = 1705600000 # Starting timestamp
21
+
22
+ for i in range(10):
23
+ connections.append({
24
+ 'timestamp': timestamp + (i * 60), # 60-second intervals
25
+ 'dst_ip': '10.0.0.100',
26
+ 'dst_port': 443,
27
+ 'bytes_sent': 200,
28
+ 'bytes_recv': 500,
29
+ })
30
+
31
+ result = sentinel.analyze(connections)
32
+
33
+ print("Example 1: Regular beacon pattern")
34
+ print(f" Is C2: {result.is_c2}")
35
+ print(f" Probability: {result.c2_probability:.2f}")
36
+ print(f" C2 Type: {result.c2_type}")
37
+ print(f" Detection Method: {result.detection_method}")
38
+ print()
39
+
40
+ # Example 2: Legitimate SSH keepalive traffic
41
+ # Small symmetric packets on port 22 at regular intervals
42
+
43
+ ssh_connections = []
44
+ timestamp = 1705600000
45
+
46
+ for i in range(10):
47
+ ssh_connections.append({
48
+ 'timestamp': timestamp + (i * 30), # 30-second keepalive
49
+ 'dst_ip': '192.168.1.50',
50
+ 'dst_port': 22,
51
+ 'bytes_sent': 48,
52
+ 'bytes_recv': 48,
53
+ })
54
+
55
+ result = sentinel.analyze(ssh_connections)
56
+
57
+ print("Example 2: SSH keepalive pattern")
58
+ print(f" Is C2: {result.is_c2}")
59
+ print(f" Matched Pattern: {result.matched_legitimate_pattern}")
60
+ print(f" Service Type: {result.service_type}")
61
+ print()
62
+
63
+ # Example 3: High-confidence C2 on known malicious port
64
+
65
+ c2_connections = []
66
+ timestamp = 1705600000
67
+
68
+ for i in range(10):
69
+ c2_connections.append({
70
+ 'timestamp': timestamp + (i * 30),
71
+ 'dst_ip': '45.33.32.156',
72
+ 'dst_port': 4444, # Metasploit default
73
+ 'bytes_sent': 150,
74
+ 'bytes_recv': 300,
75
+ })
76
+
77
+ result = sentinel.analyze(c2_connections)
78
+
79
+ print("Example 3: High-confidence C2 port")
80
+ print(f" Is C2: {result.is_c2}")
81
+ print(f" C2 Type: {result.c2_type}")
82
+ print(f" Probability: {result.c2_probability:.2f}")
83
+ print(f" Immediate Detection: {result.immediate_detection}")
84
+ print(f" Risk Factors: {result.risk_factors}")
85
+ print()
86
+
87
+ # Example 4: Using threshold adjustment
88
+
89
+ print("Example 4: Threshold adjustment")
90
+
91
+ # Lower threshold for higher sensitivity
92
+ result_low = sentinel.analyze(connections, threshold=0.3)
93
+ print(f" Low threshold (0.3): is_c2={result_low.is_c2}, prob={result_low.c2_probability:.2f}")
94
+
95
+ # Higher threshold for higher precision
96
+ result_high = sentinel.analyze(connections, threshold=0.7)
97
+ print(f" High threshold (0.7): is_c2={result_high.is_c2}, prob={result_high.c2_probability:.2f}")
98
+
99
+ # Strict mode (minimum 0.7 threshold)
100
+ result_strict = sentinel.analyze(connections, strict_mode=True)
101
+ print(f" Strict mode: is_c2={result_strict.is_c2}, prob={result_strict.c2_probability:.2f}")
102
+
103
+
104
+ if __name__ == '__main__':
105
+ main()
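Example 4's threshold knobs map onto a small final decision step; a sketch of that logic, using the strict-mode floor of 0.7 and the probability clamp shown in the `analyze()` code earlier in this commit:

```python
def decide(c2_probability, threshold=0.5, strict_mode=False):
    """Clamp the probability to [0, 1] and compare against the effective threshold."""
    effective = max(threshold, 0.7) if strict_mode else threshold
    prob = min(max(c2_probability, 0.0), 1.0)
    return prob >= effective

# A 0.6 probability flags at the default threshold but not under strict mode
assert decide(0.6) is True
assert decide(0.6, threshold=0.7) is False
assert decide(0.6, strict_mode=True) is False
```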