| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:21:40.808350Z" |
| }, |
| "title": "Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors", |
| "authors": [ |
| { |
| "first": "Zeyu", |
| "middle": [], |
| "last": "Yun", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UC Berkeley", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yubei", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "yubeichen<yubeic@fb.com" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [ |
| "A" |
| ], |
| "last": "Olshausen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UC Berkeley", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Transformer networks have revolutionized NLP representation learning since they were introduced. Though a great effort has been made to explain the representation in transformers, it is widely recognized that our understanding is not sufficient. One important reason is that there lack enough visualization tools for detailed analysis. In this paper, we propose to use dictionary learning to open up these 'black boxes' as linear superpositions of transformer factors. Through visualization, we demonstrate the hierarchical semantic structures captured by the transformer factors, e.g., word-level polysemy disambiguation, sentence-level pattern formation, and long-range dependency. While some of these patterns confirm the conventional prior linguistic knowledge, the rest are relatively unexpected, which may provide new insights. We hope this visualization tool can bring further knowledge and a better understanding of how transformer networks work. The code is available at https://github. com/zeyuyun1/TransformerVis.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Transformer networks have revolutionized NLP representation learning since they were introduced. Though a great effort has been made to explain the representation in transformers, it is widely recognized that our understanding is not sufficient. One important reason is that there lack enough visualization tools for detailed analysis. In this paper, we propose to use dictionary learning to open up these 'black boxes' as linear superpositions of transformer factors. Through visualization, we demonstrate the hierarchical semantic structures captured by the transformer factors, e.g., word-level polysemy disambiguation, sentence-level pattern formation, and long-range dependency. While some of these patterns confirm the conventional prior linguistic knowledge, the rest are relatively unexpected, which may provide new insights. We hope this visualization tool can bring further knowledge and a better understanding of how transformer networks work. The code is available at https://github. com/zeyuyun1/TransformerVis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Though the transformer networks (Vaswani et al., 2017; Devlin et al., 2018) have achieved great success, our understanding of how they work is still fairly limited. This has triggered increasing efforts to visualize and analyze these \"black boxes\". Besides a direct visualization of the attention weights, most of the current efforts to interpret transformer models involve \"probing tasks\". They are achieved by attaching a light-weighted auxiliary classifier at the output of the target transformer layer. Then only the auxiliary classifier is trained for wellknown NLP tasks like part-of-speech (POS) Tagging, Named-entity recognition (NER) Tagging, Syntactic Dependency, etc. Tenney et al. (2019) and Liu et al. (2019) show transformer models have excellent performance in those probing tasks. These results indicate that transformer models have learned the language representation related to the probing tasks. Though the probing tasks are great tools for interpreting language models, their limitation is explained in Rogers et al. (2020) . We summarize the limitation into three major points:", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 54, |
| "text": "(Vaswani et al., 2017;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 55, |
| "end": 75, |
| "text": "Devlin et al., 2018)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 679, |
| "end": 699, |
| "text": "Tenney et al. (2019)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 704, |
| "end": 721, |
| "text": "Liu et al. (2019)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1023, |
| "end": 1043, |
| "text": "Rogers et al. (2020)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Most probing tasks, like POS and NER tagging, are too simple. A model that performs well in those probing tasks does not reflect the model's true capacity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Probing tasks can only verify whether a certain prior structure is learned in a language model. They can not reveal the structures beyond our prior knowledge.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 It's hard to locate where exactly the related linguistic representation is learned in the transformer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Efforts are made to remove those limitations and make probing tasks more diverse. For instance, Hewitt and Manning (2019) proposes \"structural probe\", which is a much more intricate probing task. Jiang et al. (2020) proposes to generate specific probing tasks automatically. Non-probing methods are also explored to relieve the last two limitations. For example, Reif et al. (2019) visualizes embedding from BERT using UMAP and shows that the embeddings of the same word under different contexts are separated into different clusters. Ethayarajh (2019) analyzes the similarity between embeddings of the same word in different contexts. Both of these works show transformers provide a context-specific representation. Faruqui et al. (2015) ; Arora et al. (2018) ; Zhang et al. (2019) demonstrate how to use dictionary learning to explain, improve, and visualize the uncontextualized word embedding representations. In this work, we propose to use dictionary learning to alleviate the limitations of the other transformer interpretation techniques. Our results show that dictionary learning provides a powerful visualization tool, leading to some surprising new knowledge.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 121, |
| "text": "Hewitt and Manning (2019)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 196, |
| "end": 215, |
| "text": "Jiang et al. (2020)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 363, |
| "end": 381, |
| "text": "Reif et al. (2019)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 535, |
| "end": 552, |
| "text": "Ethayarajh (2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 717, |
| "end": 738, |
| "text": "Faruqui et al. (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 741, |
| "end": 760, |
| "text": "Arora et al. (2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 763, |
| "end": 782, |
| "text": "Zhang et al. (2019)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Hypothesis: contextualized word embedding as a sparse linear superposition of transformer factors. It is shown that word embedding vectors can be factorized into a sparse linear combination of word factors (Arora et al., 2018; Zhang et al., 2019) , which correspond to elementary semantic meanings. An example is: apple =0.09\"dessert\" + 0.11\"organism\" + 0.16 \"fruit\" + 0.22\"mobile&IT\" + 0.42\"other\".", |
| "cite_spans": [ |
| { |
| "start": 206, |
| "end": 226, |
| "text": "(Arora et al., 2018;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 227, |
| "end": 246, |
| "text": "Zhang et al., 2019)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We view the latent representation of words in a transformer as contextualized word embedding. Similarly, we hypothesize that a contextualized word embedding vector can also be factorized as a sparse linear superposition of a set of elementary elements, which we call transformer factors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The exact definition will be presented later in this section. Due to the skip connections in each of the transformer blocks, we hypothesize that the representation in any layer would be a superposition of the hierarchical representations in all of the lower layers. As a result, the output of a particular transformer block would be the sum of all of the modifications along the way. Indeed, we verify this intuition with the experiments. Based on the above observation, we propose to learn a single dictionary for the contextualized word vectors from different layers' output.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To learn a dictionary of transformer factors with non-negative sparse coding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a set of tokenized text sequences, we collect the contextualized embedding of every word using a transformer model. We define the set of all word embedding vectors from lth layer of transformer model as X (l) . Furthermore, we collect the embeddings across all layers into a single set", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 214, |
| "text": "(l)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "X = X (1) \u222a X (2) \u222a \u2022 \u2022 \u2022 \u222a X (L) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "By our hypothesis, we assume each embedding vector x \u2208 X is a sparse linear superposition of transformer factors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x = \u03a6\u03b1 + , s.t. \u03b1 0,", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where \u03a6 \u2208 IR d\u00d7m is a dictionary matrix with columns \u03a6 :,c , \u03b1 \u2208 IR m is a sparse vector of coefficients to be inferred and is a vector containing independent Gaussian noise samples, which are assumed to be small relative to x. Typically m > d so that the representation is overcomplete. This inverse problem can be efficiently solved by FISTA algorithm (Beck and Teboulle, 2009) . The dictionary matrix \u03a6 can be learned in an iterative fashion by using non-negative sparse coding, which we leave to the appendix section C. Each column \u03a6 :,c of \u03a6 is a transformer factor and its corresponding sparse coefficient \u03b1 c is its activation level.", |
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 379, |
| "text": "(Beck and Teboulle, 2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Visualization by top activation and LIME interpretation. An important empirical method to visualize a feature in deep learning is to use the input samples, which trigger the top activation of the feature (Zeiler and Fergus, 2014). We adopt this convention. As a starting point, we try to visualize each of the dimensions of a particular layer, X (l) . Unfortunately, the hidden dimensions of transformers are not semantically meaningful, which is similar to the uncontextualized word embeddings (Zhang et al., 2019) . Instead, we can try to visualize the transformer factors. For a transformer factor \u03a6 :,c and for a layer-l, we denote the 1000 contextualized word vectors with the largest sparse coefficients \u03b1 (l) , which correspond to 1000 different sequences. For example, Figure 3 shows the top 5 words that activated transformer factor-17 \u03a6 :,17 at layer-0, layer-2, and layer-6 respectively. Since a contextualized word vector is generally affected by many tokens in the sequence, we can use LIME (Ribeiro et al., 2016) to assign a weight to each token in the sequence to identify their relative importance to \u03b1 c . The detailed method is left to Section 3.", |
| "cite_spans": [ |
| { |
| "start": 346, |
| "end": 349, |
| "text": "(l)", |
| "ref_id": null |
| }, |
| { |
| "start": 495, |
| "end": 515, |
| "text": "(Zhang et al., 2019)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 712, |
| "end": 715, |
| "text": "(l)", |
| "ref_id": null |
| }, |
| { |
| "start": 999, |
| "end": 1026, |
| "text": "LIME (Ribeiro et al., 2016)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 777, |
| "end": 785, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(l) c as X (l) c \u2282 X", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To determine low-, mid-, and high-level transformer factors with importance score. As we build a single dictionary for all of the transformer layers, the semantic meaning of the transformer factors has different levels. While some of the factors appear in lower layers and continue to be used in the later stages, the rest of the factors may only be activated in the higher layers of the transformer network. A central question in representation learning is: \"where does the network learn certain information?\" To answer this question, we can compute an \"importance score\" for each transformer factor \u03a6 :,c at layer-l as I", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(l) c . I (l)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "c is the average of the largest 1000 sparse coefficients \u03b1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(l) c 's, which cor- respond to X (l)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "c . We plot the importance scores for each transformer factor as a curve is shown in Figure 2 . We then use these importance score (IS) curves to identify which layer a transformer factor emerges. Figure 2a shows an IS curve peak in the earlier layer. The corresponding transformer factor emerges in the earlier stage, which may capture lower-level semantic meanings. In contrast, Figure 2b shows a peak in the higher layers, which indicates the transformer factor emerges much later and may correspond to mid-or high-level semantic structures. More subtleties are involved when distinguishing between mid-level and high-level factors, which will be discussed later.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 85, |
| "end": 93, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 197, |
| "end": 206, |
| "text": "Figure 2a", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "An important characteristic is that the IS curve for each transformer factor is relatively smooth. This indicates if a vital feature is learned in the beginning layers, it won't disappear in later stages. Instead, it will be carried all the way to the end with gradually decayed weight since many more features would join along the way. Similarly, abstract information learned in higher layers is slowly developed from the early layers. Figure 3 and 5 confirm this idea, which will be explained in the next section.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 437, |
| "end": 445, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We use a 12-layer pre-trained BERT model (Pre; Devlin et al., 2018) and freeze the weights. Since we learn a single dictionary of transformer factors for all of the layers in the transformer, we show that these transformer factors correspond to different levels of semantic or syntactic patterns. The patterns can be roughly divided into three categories: word-level disambiguation, sentence-level pattern formation, and long-range dependency. In the following, we provide detailed visualization for each pattern category. Due to the space limit, only a small amount of the factors are demonstrated in the paper. To alleviate the \"cherry-picking\" bias, we also build a website for the interested readers to play with these results.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 67, |
| "text": "Devlin et al., 2018)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Low-level: word-level polysemy disambiguation. While the input embedding of a token contains polysemy, we find transformer factors with early IS curve peaks usually correspond to a specific word-level meaning. By visualizing the top activation sequences, we can see how word-level disambiguation is gradually developed in a transformer. We show how the disambiguation effect develops progressively through each layer in Figure 3 . In Figure 3 , the top 5 activated words and their contexts for transformer factor \u03a6 :,30 in different layers are listed. The top activated words in layer 0 contain the word \"left\" varying senses, which is being mostly disambiguated in layer 2 albeit not completely. In layer 4, the word \"left\" is fully disambiguated since the top-activated word contains only \"left\" with the word sense \"leaving, exiting.\" We also show more examples of those types of transformer factors in Table 1 : for each transformer factor, we list out the top 3 activated words and their contexts in layer 4. As shown in the table, nearly all top-activated words are disambiguated into a single sense.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 420, |
| "end": 428, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 434, |
| "end": 442, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 906, |
| "end": 913, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Further, we can quantify the quality of the disambiguation ability of the transformer model. In the example above, since the top 1000 activated words Figure 3: Visualization of a low-level transformer factor, \u03a6 :,30 at different layers. (a), (b) and (c) are the topactivated words and contexts for \u03a6 :,30 in layer-0, 2 and 4 respectively. We can see that at layer-0, this transformer factor corresponds to word vectors that encode the word \"left\" with different senses. In layer-2, a majority of the top activated words \"left\" correspond to a single sense, \"leaving, exiting.\" In layer 4, all of the top-activated words \"left\" have corresponded to the same sense, \"leaving, exiting.\" Due to space limitations, we invite the readers to use our website to see more of those disambiguation effects.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Top 3 activated words and their contexts Explanation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u03a6:,2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 that snare shot sounded like somebody' d kicked open the door to your mind\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 i became very frustrated with that and finally made up my mind to start getting back into things.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 when evita asked for more time so she could make up her mind, the crowd demanded,\" \u00a1 ahora, evita,<", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word \"mind\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Noun \u2022 Definition: the element of a person that enables them to be aware of the world and their experiences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discoveries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022nington joined the five members xero and the band was renamed to linkin park.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 times about his feelings about gordon, and the price family even sat away from park' s supporters during the trial itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 on 25 january 2010, the morning of park' s 66th birthday, he was found hanged and unconscious in his \u2022 saying that he has left the outsiders, kovu asks simba to let him join his pride \u2022 eventually, all boycott' s employees left, forcing him to run the estate without help.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 the story concerned the attempts of a scientist to photograph the soul as it left the body.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Word \"left\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Verb \u2022 Definition: leaving, exiting \u03a6:,33", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 forced to visit the sarajevo television station at night and to film with as little light as possible to avoid the attention of snipers and bombers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 by the modest, cream@-@ colored attire in the airy, light@-@ filled clip.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 the man asked her to help him carry the case to his car, a light@-@ brown volkswagen beetle.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Word \"light\" \u2022 Noun \u2022 Definition: the natural agent that stimulates sight and makes things visible and contexts are \"left\" with only the word sense \"leave, exiting\", we can assume \"left\" when used as a verb, triggers higher activation in \u03a6 :,30 than \"left\" used as other sense of speech. We can verify this hypothesis using a human-annotated corpus: Brown corpus (Francis and Kucera, 1979) . In this corpus, each word is annotated with its corresponding part-of-speech. We collect all the sentences contains the word \"left\" annotated as a verb in one set and sentences contains \"left\" annotated as other part-of-speech. As shown in Figure 4a , in layer 0, the average activation of \u03a6 :,30 for the word \"left\" marked as a verb is no different from \"left\" as other senses. However, at layer 2, \"left\" marked as a verb triggers a higher activation of \u03a6 :,30 . In layer 4, this difference further increases, indicating disambiguation develops progressively across layers. In fact, we plot the activation of \"left\" marked as verb and the activation of other \"left\" in Figure 4b . In layer 4, they are nearly linearly separable by this for this transformer factor in layer-4, 6, and 8 respectively. Again, the position of the word vector is marked blue. Please notice that sometimes only a part of a word is marked blue. This is due to that BERT uses word-piece tokenizer instead of whole word tokenizer. This transformer factor corresponds to the pattern of \"consecutive adjective\". As shown in the figure, this feature starts to develop at layer-4 and fully develops at layer-8. Table 2 : Evaluation of binary POS tagging task: predict whether or not \"left\" in a given context is a verb. single feature. Since each word \"left\" corresponds to an activation value, we can perform a logistic regression classification to differentiate those two types of \"left\". 
From the result shown in Figure 4a , it is pretty fascinating to see that the disambiguation ability of just \u03a6 :,30 is better than the other two classifiers trained with supervised data. This result confirms that disambiguation is indeed done in the early part of pre-trained transformer model and we are able to detect it via dictionary learning.", |
| "cite_spans": [ |
| { |
| "start": 365, |
| "end": 391, |
| "text": "(Francis and Kucera, 1979)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 634, |
| "end": 643, |
| "text": "Figure 4a", |
| "ref_id": null |
| }, |
| { |
| "start": 1065, |
| "end": 1074, |
| "text": "Figure 4b", |
| "ref_id": null |
| }, |
| { |
| "start": 1577, |
| "end": 1584, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1882, |
| "end": 1891, |
| "text": "Figure 4a", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "Mid level: sentence-level pattern formation. We find most of the transformer factors, with an IS curve peak after layer 6, capture mid-level or highlevel semantic meanings. In particular, the midlevel ones correspond to semantic patterns like phrases and sentences pattern.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "We first show two detailed examples of mid-level transformer factors. Figure 5 shows a transformer factor that detects the pattern of consecutive usage of adjectives. This pattern starts to emerge at layer 4, develops at layer 6, and becomes quite reliable at layer 8. Figure 6 shows a transformer factor, which corresponds to a pretty unexpected pattern: \"unit exchange\", e.g., 56 inches (140 cm). Although this exact pattern only starts to appear at layer 8, the sub-structures that make this pattern, e.g., parenthesis and numbers, appear to trigger this factor in layers 4 and 6. Thus this transformer factor is also 6 (a) layer 4 (b) layer 6 (c) layer 8 Figure 6 : Another example of a mid-level transformer factor visualized at layer-4, 6, and 8. The pattern that corresponds to this transformer factor is \"unit exchange\". Such a pattern is somewhat unexpected based on linguistic prior knowledge. \u2022 technologist at the united states marine hospital in key west, florida who developed a morbid obsession for \u2022 00\u00b0,11\", w, near smith valley, nevada.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 70, |
| "end": 78, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 269, |
| "end": 277, |
| "text": "Figure 6", |
| "ref_id": null |
| }, |
| { |
| "start": 659, |
| "end": 667, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "Places in US, followings the convention \"city, state\" 51.5 91.5 91.0 77.5 Table 3 : A list of typical mid-level transformer factors. The top-activation words and their context sequences for each transformer factor at layer-8 are shown in the second column. We summarize the patterns of each transformer factor in the third column. The last 4 columns are the percentage of the top 200 activated words and sequences that contain the summarized patterns in layer-4,6,8, and 10 respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 81, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "gradually developed through several layers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "While some mid-level transformer factors verify common semantic or syntactic patterns, there are also many surprising mid-level transformer factors. We list a few in Table 3 with quantitative analysis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 166, |
| "end": 173, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "For each listed transformer factor, we analyze the top 200 activating words and their contexts in each layer. We record the percentage of those words and contexts that correspond to the factors' semantic pattern in Table 3 Remove the first three adjectives in sentence (o). 7.8 (e) album as \"full of natural, smooth, rock, electronic and sometimes downright silly songs\"", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 215, |
| "end": 222, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "We create this sentence that contain the pattern of consecutive adjective. 7.9 percentages of top-activated words and contexts do corresponds to the pattern we describe. It also shows most of these mid-level patterns start to develop at layer 4 or 6. More detailed examples are provided in the appendix section F. Though it's still mysterious why the transformer network develops representations for these surprising patterns, we believe such a direct visualization can provide additional insights, which complements the \"probing tasks\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "To further confirm a transformer factor does correspond to a specific pattern, we can use constructed example words and context to probe their activation. In Table 4 , we construct several text sequences that are similar to the patterns corresponding to a particular transformer factor but with subtle differences. The result confirms that the context that strictly follows the pattern represented by that transformer factor triggers a high activation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 158, |
| "end": 165, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "On the other hand, the closer the adversarial example to this pattern, the higher activation it receives at this transformer factor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
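The probing step above can be sketched in code: given a learned dictionary, the activation of factor c on a constructed context is simply the c-th sparse coefficient of that word's hidden state. The sketch below is a minimal illustration, not the paper's implementation; the dictionary is a random stand-in (unit-norm columns) for the one learned from BERT, the sizes are toy values, and the dominant factor index is arbitrary.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
d, m = 64, 256  # embedding dim and number of transformer factors (toy sizes)

# Phi: stand-in dictionary whose columns play the role of transformer factors.
Phi = rng.standard_normal((d, m))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

def factor_activations(x, Phi, reg=0.1):
    """Non-negative sparse code a with x ~ Phi @ a; a[c] is factor c's activation."""
    coder = SparseCoder(dictionary=Phi.T, transform_algorithm="lasso_lars",
                        transform_alpha=reg, positive_code=True)
    return coder.transform(x.reshape(1, -1))[0]

# A hidden state built mostly from factor 200 should activate it strongly,
# mimicking a constructed context that strictly follows a factor's pattern.
x = 2.0 * Phi[:, 200] + 0.01 * rng.standard_normal(d)
a = factor_activations(x, Phi)
print(int(np.argmax(a)))  # factor 200 dominates
```

A context that only loosely matches the pattern would, analogously, yield a smaller coefficient on that factor.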
| { |
| "text": "High-level: long-range dependency. High-level transformer factors correspond to those linguistic patterns that span an extended range in the text. Since the IS curves of mid-level and high-level transformer factors are similar, it is difficult to distinguish those transformer factors based on their IS cures. Thus, we have to manually examine the top-activation words and contexts for each transformer factor to differentiate between mid-level and high-level transformer factors. To ease the process, we choose to use the black-box interpreta-tion algorithm LIME (Ribeiro et al., 2016) to identify the contribution of each token in a sequence. There also exist interpretation tools that specifically leverage the transformer architecture (Chefer et al., 2021 (Chefer et al., , 2020 . In the future, one could adapt those interpretation tools, which may potentially provide better visualization.", |
| "cite_spans": [ |
| { |
| "start": 564, |
| "end": 586, |
| "text": "(Ribeiro et al., 2016)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 739, |
| "end": 759, |
| "text": "(Chefer et al., 2021", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 760, |
| "end": 782, |
| "text": "(Chefer et al., , 2020", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "Given a sequence s \u2208 S, we can treat \u03b1 c,i (s). To do so, we generated a sequence set S(s), where each s \u2208 S(s) is the same as s except for that several random positions in s are masked by ['UNK'] (the unknown token). Then we learns a linear model g w (s ) with weights w \u2208 R T to approximate f (s ), where T is the length of sentence s. This can be solved as a ridge regression:", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 196, |
| "text": "['UNK']", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "min w\u2208R T L(f, w, S(s)) + \u03c3 w 2 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
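The masking-and-regression procedure can be sketched in a few lines. Here f is a stand-in for the true factor activation (in the paper it would be the sparse coefficient evaluated on the masked sentence), the 0/1 indicator vector plays the role of the 'UNK' perturbation mask, and scikit-learn's Ridge solves for the weights w above; the salient positions are chosen arbitrarily for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T = 8            # sentence length
n_perturb = 500  # size of the perturbed set S(s)

def f(keep_mask):
    # Stand-in activation that "cares" about tokens 2 and 5, plus noise.
    return 1.5 * keep_mask[2] + 0.8 * keep_mask[5] + 0.05 * rng.standard_normal()

# Each row keeps (1) or masks (0) tokens at random, emulating 'UNK' masking.
Z = rng.integers(0, 2, size=(n_perturb, T))
y = np.array([f(z) for z in Z])

# Ridge regression g_w ~ f; the coefficients w are the per-token saliency map.
w = Ridge(alpha=1.0).fit(Z, y).coef_
print(np.argsort(w)[-2:])  # indices of the two most salient positions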
| { |
| "text": "The learned weights w can serve as a saliency map that reflects the \"contribution\" of each token in the sequence s. Like in Figure 7 , the color reflects the weights w at each position. Red means the given position has positive weight and green means negative weight. The magnitude of weight is represented by the intensity. The redder a token is, the more it contributions to the activation of the transformer factor. We leave more implementation and mathematical formulation details of LIME algorithm in the appendix. We provide detailed visualization for two different transformer factors that show long-range dependency in Figure 7 , 8. Since visualization of highlevel information requires more extended context, we only offer the top two activated words and their contexts for each such transformer factor. Many more will be provided in the appendix section G.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 124, |
| "end": 132, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 627, |
| "end": 635, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "We name the pattern for transformer factor \u03a6 :,297 in Figure 7 as \"repetitive pattern detector\". All top activated contexts for \u03a6 :,297 contain an obvious repetitive structure. Specifically, the text snippet \"can't get you out of my head\" appears twice in the first example, and the text snippet \"xxx class passenger, star alliance\" appears three times in the second example. Compared to the patterns we found in the mid-level [6], the high-level patterns like \"repetitive pattern detector\" are much more abstract. In some sense, the transformer detects if there are two (or multiple) almost identical embedding vectors at layer-10 without caring what they are. Such behavior might be highly related to the concept proposed in the capsule networks (Sabour et al., 2017; Hinton, 2021) . To further understand this behavior and study how the self-attention mechanism helps model the relationships between the features outlines an interesting future research direction. Figure 8 shown another high-level factor, which detects text snippets related to \"the beginning of a biography\". The necessary components, day of birth as month and four-digit years, first name and last name, familial relation, and career, are all midlevel information. In Figure 8 , we see that all the information relates to biography has a high weight in the saliency map. Thus, they are all together combined to detect the high-level pattern. Figure 7 : Two examples of the high activated words and their contexts for transformer factor \u03a6 :,297 . We also provide the saliency map of the tokens generated using LIME. This transformer factor corresponds to the concept: \"repetitive pattern detector\". In other words, repetitive text sequences will trigger high activation of \u03a6 :,297 . Figure 8 : Visualization of \u03a6 :,322 . This transformer factor corresponds to the concept: \"some born in some year\" in biography. 
All of the high-activation contexts contain the beginning of a biography. As shown in the figure, the attributes of someone, name, age, career, and familial relation all have high saliency weights.", |
| "cite_spans": [ |
| { |
| "start": 748, |
| "end": 769, |
| "text": "(Sabour et al., 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 770, |
| "end": 783, |
| "text": "Hinton, 2021)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 62, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 967, |
| "end": 975, |
| "text": "Figure 8", |
| "ref_id": null |
| }, |
| { |
| "start": 1240, |
| "end": 1248, |
| "text": "Figure 8", |
| "ref_id": null |
| }, |
| { |
| "start": 1414, |
| "end": 1422, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 1754, |
| "end": 1762, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u03a6:,16", |
| "sec_num": null |
| }, |
| { |
| "text": "Dictionary learning has been successfully used to visualize the classical word embeddings (Arora et al., 2018; Zhang et al., 2019) . In this paper, we propose to use this simple method to visualize the representation learned in transformer networks to supplement the implicit \"probing-tasks\" methods. Our results show that the learned transformer factors are relatively reliable and can even provide many surprising insights into the linguistic structures. This simple tool can open up the transformer networks and show the hierarchical semantic or syntactic representation learned at different stages. In short, we find word-level disambiguation, sentence-level pattern formation, and long-range dependency. The idea of a neural network learns low-level features in early layers, and abstract concepts in the later stages are very similar to the visualization in CNN (Zeiler and Fergus, 2014) . Dictionary learning can be a convenient tool to help visualize a broad category of neural networks with skip connections, like ResNet (He et al., 2016) , ViT models (Dosovitskiy et al., 2020) , etc. For more interested readers, we provide an interactive website 1 for the readers to gain some further insights.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 110, |
| "text": "(Arora et al., 2018;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 111, |
| "end": 130, |
| "text": "Zhang et al., 2019)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 864, |
| "end": 893, |
| "text": "CNN (Zeiler and Fergus, 2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 1030, |
| "end": 1047, |
| "text": "(He et al., 2016)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1061, |
| "end": 1087, |
| "text": "(Dosovitskiy et al., 2020)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank our reviewers for their detailed and insightful comments. We also thank Yuhao Zhang for his suggestions during the preparation of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Linear algebraic structure of word senses, with applications to polysemy", |
| "authors": [ |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuanzhi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingyu", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tengyu", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrej", |
| "middle": [], |
| "last": "Risteski", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "6", |
| "issue": "", |
| "pages": "483--495", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic struc- ture of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6:483-495.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", |
| "authors": [ |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Beck", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Teboulle", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "SIAM journal on imaging sciences", |
| "volume": "2", |
| "issue": "1", |
| "pages": "183--202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Beck and Marc Teboulle. 2009. A fast itera- tive shrinkage-thresholding algorithm for linear in- verse problems. SIAM journal on imaging sciences, 2(1):183-202.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Transformer interpretability beyond attention visualization", |
| "authors": [ |
| { |
| "first": "Hila", |
| "middle": [], |
| "last": "Chefer", |
| "suffix": "" |
| }, |
| { |
| "first": "Shir", |
| "middle": [], |
| "last": "Gur", |
| "suffix": "" |
| }, |
| { |
| "first": "Lior", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hila Chefer, Shir Gur, and Lior Wolf. 2020. Trans- former interpretability beyond attention visualiza- tion. CoRR, abs/2012.09838.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Generic attention-model explainability for interpreting bimodal and encoder-decoder transformers", |
| "authors": [ |
| { |
| "first": "Hila", |
| "middle": [], |
| "last": "Chefer", |
| "suffix": "" |
| }, |
| { |
| "first": "Shir", |
| "middle": [], |
| "last": "Gur", |
| "suffix": "" |
| }, |
| { |
| "first": "Lior", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hila Chefer, Shir Gur, and Lior Wolf. 2021. Generic attention-model explainability for interpreting bi- modal and encoder-decoder transformers. CoRR, abs/2103.15679.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "An image is worth 16x16 words: Transformers for image recognition at scale", |
| "authors": [ |
| { |
| "first": "Alexey", |
| "middle": [], |
| "last": "Dosovitskiy", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucas", |
| "middle": [], |
| "last": "Beyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Kolesnikov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Weissenborn", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaohua", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Unterthiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Mostafa", |
| "middle": [], |
| "last": "Dehghani", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Minderer", |
| "suffix": "" |
| }, |
| { |
| "first": "Georg", |
| "middle": [], |
| "last": "Heigold", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Gelly", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2010.11929" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Adaptive subgradient methods for online learning and stochastic optimization", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Duchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Elad", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2121--2159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "How contextual are contextualized word representations? comparing the geometry of bert, elmo, and GPT-2 embeddings", |
| "authors": [ |
| { |
| "first": "Kawin", |
| "middle": [], |
| "last": "Ethayarajh", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "55--65", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kawin Ethayarajh. 2019. How contextual are contex- tualized word representations? comparing the geom- etry of bert, elmo, and GPT-2 embeddings. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 55-65. Associ- ation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Sparse overcomplete word vector representations", |
| "authors": [ |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dani", |
| "middle": [], |
| "last": "Yogatama", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcom- plete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Brown corpus manual", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "N" |
| ], |
| "last": "Francis", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Kucera", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. N. Francis and H. Kucera. 1979. Brown corpus manual. Technical report, Department of Linguis- tics, Brown University, Providence, Rhode Island, US.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Deep residual learning for image recognition", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "770--778", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A structural probe for finding syntax in word representations", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hewitt", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "How to represent part-whole hierarchies in a neural network", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2102.12627" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey Hinton. 2021. How to represent part-whole hierarchies in a neural network. arXiv preprint arXiv:2102.12627.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "How can we know what language models know", |
| "authors": [ |
| { |
| "first": "Zhengbao", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [ |
| "F" |
| ], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Trans. Assoc. Comput. Linguistics", |
| "volume": "8", |
| "issue": "", |
| "pages": "423--438", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know. Trans. Assoc. Comput. Linguistics, 8:423-438.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Linguistic knowledge and transferability of contextual representations", |
| "authors": [ |
| { |
| "first": "Nelson", |
| "middle": [ |
| "F" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Belinkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [ |
| "E" |
| ], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers). Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Visualizing and measuring the geometry of BERT", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Reif", |
| "suffix": "" |
| }, |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Wattenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernanda", |
| "middle": [ |
| "B" |
| ], |
| "last": "Vi\u00e9gas", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Coenen", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Pearce", |
| "suffix": "" |
| }, |
| { |
| "first": "Been", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, (NeurIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "8592--8600", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Vi\u00e9gas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Process- ing Systems 32: Annual Conference on Neural Infor- mation Processing Systems, (NeurIPS), pages 8592- 8600.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "why should I trust you?\": Explaining the predictions of any classifier", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Marco T\u00falio Ribeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco T\u00falio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should I trust you?\": Ex- plaining the predictions of any classifier. CoRR, abs/1602.04938.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A primer in bertology: What we know about how BERT works", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rogers", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Kovaleva", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rumshisky", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Trans. Assoc. Comput. Linguistics", |
| "volume": "8", |
| "issue": "", |
| "pages": "842--866", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how BERT works. Trans. Assoc. Comput. Linguis- tics, 8:842-866.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Dynamic routing between capsules", |
| "authors": [ |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Sabour", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Frosst", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1710.09829" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. arXiv preprint arXiv:1710.09829.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "What do you learn from context? probing for sentence structure in contextualized word representations", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Tenney", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Berlin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Poliak", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mccoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Najoung", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Samuel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1905.06316" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipan- jan Das, et al. 2019. What do you learn from context? probing for sentence structure in con- textualized word representations. arXiv preprint arXiv:1905.06316.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1706.03762" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Visualizing and understanding convolutional networks", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Matthew", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Zeiler", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "European conference on computer vision", |
| "volume": "", |
| "issue": "", |
| "pages": "818--833", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Euro- pean conference on computer vision, pages 818-833. Springer.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Word embedding visualization via dictionary learning", |
| "authors": [ |
| { |
| "first": "Juexiao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yubei", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Cheung", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [ |
| "A" |
| ], |
| "last": "Olshausen", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1910.03833" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Juexiao Zhang, Yubei Chen, Brian Cheung, and Bruno A Olshausen. 2019. Word embedding vi- sualization via dictionary learning. arXiv preprint arXiv:1910.03833.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Building block (layer) of transformer", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Importance score (IS) across all layers for two different transformer factors. (a) A typical IS curve of a transformer factor corresponding to low-level information. (b) A typical IS curve of a transformer factor corresponding to mid-level information.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "(a) Average activation of \u03a6 :,30 for the word vector \"left\" across different layers. (b) Instead of averaging, we plot the activations of all occurrences of \"left\" in different contexts in layers 0, 2, and 4. Random noise is added along the y-axis to prevent overplotting. The activations of \u03a6 :,30 for the two word senses of \"left\" are blended together in layer-0. They disentangle to a great extent in layer-2 and become nearly separable in layer-4 along this single dimension. Visualization of a mid-level transformer factor: (a), (b), (c) are the top 5 activated words and contexts.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "activation of \u03a6 :,c in layer-l at location i, as a scalar function of s, f^{(l)}_{c,i}(s). Assume a sequence s triggers a high activation \u03b1^{(l)}_{c,i}, i.e., f^{(l)}_{c,i}(s) is large. We want to know how much each token (or, equivalently, each position) in s contributes to f^{(l)}_{c,i}(s).", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "text": "Several examples of low-level transformer factors. Their top-activated words in layer 4 are marked blue, and the corresponding contexts are shown as examples for each transformer factor. As the table shows, nearly all of the top-activated words are disambiguated into a single sense. Note that the last example of \u03a6 :,33 is a rare exception; a more complete list is given in the Appendix, along with more examples of top-activated words and contexts.", |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table><tr><td/><td>Adversarial Text</td><td>Explanation</td><td>\u03b135</td></tr><tr><td>(o)</td><td>album as \"full of exhilarating, ecstatic, thrilling, fun and sometimes downright silly songs\"</td><td>The original top-activated word and its context sentence for transformer factor \u03a6:,35 (not an adversarial text)</td><td>9.5</td></tr><tr><td>(a)</td><td>album as \"full of delightful, lively, exciting, interesting and sometimes downright silly songs\"</td><td>Replace the adjectives in sentence (o) with different adjectives.</td><td>9.2</td></tr><tr><td>(b)</td><td>album as \"full of unfortunate, heartbroken, annoying, boring and sometimes downright silly songs\"</td><td>Replace the adjectives in sentence (o) with negative adjectives.</td><td>8.2</td></tr><tr><td>(c)</td><td>album as \"full of [UNK], [UNK], thrilling, [UNK] and sometimes downright silly songs\"</td><td>Mask the adjectives in sentence (o) with unknown tokens.</td><td>5.3</td></tr><tr><td>(d)</td><td>album as \"full of thrilling and sometimes downright silly songs\"</td><td/><td/></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": ". From the table, we see that large" |
| }, |
| "TABREF4": { |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "text": "We construct adversarial texts that are similar to, but different from, the pattern \"consecutive adjectives\". The last column shows the activation of \u03a6 :,35 , i.e. \u03b1^{(8)}_{35}, w.r.t. the blue-marked word in layer 8.", |
| } |
| } |
| } |
| } |