index.html CHANGED (+6 -5)
@@ -40,6 +40,7 @@
 if (!$(this).hasClass('selected')) {
 
     $('.formula').hide(200);
+    $('.eq-des').hide(200);
     $('.formula-list > a').removeClass('selected');
     $(this).addClass('selected');
     var target = $(this).attr('href');
@@ -424,11 +425,11 @@
 the classification branch can be formulated as $\mathbb{C} = f\circ g$ and the representation branch as $\mathbb{R} = f\circ h$.
 To attack effectively, the adversary must deceive the target model while guaranteeing the label consistency and representation similarity of the SSL model.
 
-where $\mathcal{S}$ represents cosine similarity, $k$ represents the number of generated neighbors,
+<!-- where $\mathcal{S}$ represents cosine similarity, $k$ represents the number of generated neighbors,
 and the linear augmentation function $W(x)=W(x,p);~p\sim P$ randomly samples $p$ from the parameter distribution $P$ to generate different neighbors.
 Note that we guarantee the generated neighbors are fixed each time by fixing the random seed. The adaptive adversaries perform attacks on the following objective function:
 
-where $\mathcal{L}_C$ indicates classifier's loss function, $y_t$ is the targeted class, and $\alpha$ refers to a hyperparameter.
+where $\mathcal{L}_C$ indicates classifier's loss function, $y_t$ is the targeted class, and $\alpha$ refers to a hyperparameter. -->
 </div>
 </div>
 
@@ -463,19 +464,19 @@
 
 <div class="columns is-centered">
 <div class="column">
-<p id="label-loss">
+<p id="label-loss" class="eq-des">
 Attackers can design adaptive attacks to try to bypass BEYOND when the attacker knows all the parameters of the model
 and the detection strategy. For an SSL model with a feature extractor $f$, a projector $h$, and a classification head $g$,
 the classification branch can be formulated as $\mathbb{C} = f\circ g$ and the representation branch as $\mathbb{R} = f\circ h$.
 To attack effectively, the adversary must deceive the target model while guaranteeing the label consistency and representation similarity of the SSL model.
 </p>
-<p id="representation-loss">
+<p id="representation-loss" class="eq-des" style="display: none">
 where $\mathcal{S}$ represents cosine similarity, $k$ represents the number of generated neighbors,
 and the linear augmentation function $W(x)=W(x,p);~p\sim P$ randomly samples $p$ from the parameter distribution $P$ to generate different neighbors.
 Note that we guarantee the generated neighbors are fixed each time by fixing the random seed. The adaptive adversaries perform attacks on the following objective function:
 </p>
 
-<p id="total-loss">
+<p id="total-loss" class="eq-des" style="display: none;">
 where $\mathcal{L}_C$ indicates classifier's loss function, $y_t$ is the targeted class, and $\alpha$ refers to a hyperparameter.
 </p>
 </div>
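
The patched paragraphs are "where …" descriptions for equations that are rendered elsewhere on the page and do not appear in this diff. For orientation only, a plausible sketch of the adaptive objective the `total-loss` description refers to — assembled purely from the symbols the visible text defines ($\mathcal{L}_C$, $y_t$, $\alpha$, $\mathcal{S}$, $k$, $W$, $\mathbb{C}$, $\mathbb{R}$), not taken from the actual page — is:

```latex
% Hypothetical reconstruction; the real equation is not part of this diff.
% The adversary drives the classifier toward the target class y_t while
% keeping the k augmented neighbors label-consistent and representation-similar:
\begin{equation}
\min_{x_{adv}}\;
\mathcal{L}_C\bigl(\mathbb{C}(x_{adv}),\, y_t\bigr)
\;+\;
\frac{\alpha}{k}\sum_{i=1}^{k}
\Bigl[
\mathcal{L}_C\bigl(\mathbb{C}(W_i(x_{adv})),\, y_t\bigr)
\;-\;
\mathcal{S}\bigl(\mathbb{R}(W_i(x_{adv})),\, \mathbb{R}(x_{adv})\bigr)
\Bigr]
\end{equation}
```

The first term deceives the target model; the bracketed sum enforces the two detection criteria the text names (label consistency of the neighbors $W_i(x_{adv})$ and their representation similarity to $x_{adv}$), weighted by the single hyperparameter $\alpha$.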
|