int8 loss landscape; (c) training loss trajectory comparison. Integer training setup: int8 linear layer, int8 convolutional layer, and int8 batch-norm layer.

$$\begin{align*}
X := \; & U_X; \quad U_X \sim \text{Mixture}\big(0.5\, N(0, 1) + 0.5\, N(b, 1)\big) \\
\pi(x) & = \frac{N(X; 0, 1)}{N(X; 0, 1) + N(X; b, 1)} \\
A := \; & \begin{cases}
1, & -U_A < \log\big(\pi(x) / (1 - \pi(x))\big) \\
0, & \text{otherwise}
\end{cases}; \quad U_A \sim \text{Logistic}(0, 1) \\
Y := \; & U_Y + \begin{cases}
X^2 - 1.82\, X + 2, & A = 1 \\
2.18\, X + 1.5, & A = 0
\end{cases}; \quad U_Y \sim N(0, 1)
\end{align*}$$
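The structural causal model above can be sampled directly. Below is a minimal NumPy sketch of that data-generating process; the function name `simulate_scm` and the concrete value `b = 3.0` are illustrative assumptions, everything else follows the equations.

```python
import numpy as np

def simulate_scm(n, b, seed=0):
    """Draw n samples (X, A, Y) from the SCM above; b is the mixture separation."""
    rng = np.random.default_rng(seed)

    # X ~ 0.5 N(0, 1) + 0.5 N(b, 1): pick a mixture component, then draw from it.
    component = rng.integers(0, 2, size=n)
    X = rng.normal(loc=component * b, scale=1.0)

    # pi(x) = N(X; 0, 1) / (N(X; 0, 1) + N(X; b, 1))
    def pdf(x, mean):
        return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2.0 * np.pi)
    pi = pdf(X, 0.0) / (pdf(X, 0.0) + pdf(X, b))

    # A = 1 iff -U_A < log(pi / (1 - pi)), with U_A ~ Logistic(0, 1).
    U_A = rng.logistic(loc=0.0, scale=1.0, size=n)
    A = (-U_A < np.log(pi / (1.0 - pi))).astype(int)

    # Y = U_Y + (X^2 - 1.82 X + 2) if A = 1, else U_Y + (2.18 X + 1.5).
    U_Y = rng.normal(size=n)
    Y = U_Y + np.where(A == 1, X ** 2 - 1.82 * X + 2.0, 2.18 * X + 1.5)
    return X, A, Y

X, A, Y = simulate_scm(n=10_000, b=3.0)  # b = 3.0 is an arbitrary illustrative choice
```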
Ripple time series interpretability approach, from logs collection to concept vector analysis.

Raindrop time series classification approach for educational data: (1) observe an action and obtain interaction embeddings h; (2) use message passing to compute interaction embeddings for the unobserved actions; (3) learn action embeddings z from the interaction embeddings; (4) learn student embeddings s from the action embeddings; (5) classify course failure from the student embeddings.
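A very rough PyTorch sketch of those five steps as a single forward pass, purely to fix the flow h → z → s → prediction; the module choices (linear encoder, mean-message stand-in for message passing, GRU, pooled linear head) and all dimensions are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CourseFailureClassifier(nn.Module):
    """Illustrative sketch of the five-step caption flow (not the paper's model)."""

    def __init__(self, d_in=8, d_h=64, d_z=64, d_s=128):
        super().__init__()
        self.encode = nn.Linear(d_in, d_h)        # (1) observed action -> interaction embedding h
        self.propagate = nn.Linear(d_h, d_h)      # (2) crude message-passing stand-in
        self.to_action = nn.GRU(d_h, d_z, batch_first=True)  # (3) interaction -> action embeddings z
        self.to_student = nn.Linear(d_z, d_s)     # (4) action embeddings -> student embedding s
        self.classify = nn.Linear(d_s, 1)         # (5) student embedding -> course-failure logit

    def forward(self, x, observed):
        # x: (batch, steps, d_in) action features; observed: (batch, steps, 1) mask of observed actions
        h = self.encode(x) * observed                                   # (1)
        msg = h.sum(1, keepdim=True) / observed.sum(1, keepdim=True).clamp(min=1)
        h = h + self.propagate(msg) * (1 - observed)                    # (2) fill unobserved steps
        z, _ = self.to_action(h)                                        # (3)
        s = torch.relu(self.to_student(z.mean(dim=1)))                  # (4) pool actions per student
        return self.classify(s)                                         # (5)
```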
repθ(x). (Middle) Given a dataset of labelled representations, we train three different kinds of linear probes to detect the presence of a given concept. (Right) Using the learned concept vector, we guide the model representations during generation to strengthen or weaken the presence of that concept in the model output. We plot how activations evolve along the residual path, projected onto a 2D subspace.
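A minimal sketch of the probe-then-steer idea, assuming the representations repθ(x) are available as a tensor; the logistic-regression probe shown here is just one of the possible linear probes, and the helper names (`train_linear_probe`, `steer`) and the strength `alpha` are hypothetical.

```python
import torch
import torch.nn.functional as F

def train_linear_probe(reps, labels, steps=200, lr=1e-2):
    """Fit a logistic-regression probe on representations.

    reps: (n, d) float tensor of repθ(x); labels: (n,) float tensor in {0, 1}
    marking whether the concept is present. Returns the probe weight vector,
    used as the concept direction.
    """
    w = torch.zeros(reps.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)
    for _ in range(steps):
        logits = reps @ w + b
        loss = F.binary_cross_entropy_with_logits(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

def steer(hidden, concept_vector, alpha=4.0):
    """Add the (normalized) concept direction to a hidden state during generation.

    alpha > 0 strengthens the concept in the output, alpha < 0 weakens it.
    """
    direction = concept_vector / concept_vector.norm()
    return hidden + alpha * direction

# Usage sketch:
# concept = train_linear_probe(reps, labels)
# steered_hidden = steer(hidden, concept, alpha=4.0)
```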