{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:53:51.289286Z"
},
"title": "Privacy enabled Financial Text Classification using Differential Privacy and Federated Learning",
"authors": [
{
"first": "Priyam",
"middle": [],
"last": "Basu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Manipal Institute of Technology",
"location": {}
},
"email": "priyam.basu1@learner.manipal.edu"
},
{
"first": "Tiasa",
"middle": [],
"last": "Singha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Manipal Institute of Technology",
"location": {}
},
"email": "tiasa.singharoy@learner.manipal.edu"
},
{
"first": "Rakshit",
"middle": [],
"last": "Naidu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Manipal Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Zumrut",
"middle": [],
"last": "Muftuoglu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "zumrutmuftuoglu@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Privacy is important in the financial domain as such data is highly confidential and sensitive. Natural Language Processing (NLP) techniques can be applied for text classification and entity detection purposes in financial domains such as customer feedback sentiment analysis, invoice entity detection, and categorisation of financial documents by type. Due to the sensitive nature of such data, privacy measures need to be taken for handling and training large models with such data. In this work, we propose a contextualized transformer (BERT and RoBERTa) based text classification model integrated with privacy features such as Differential Privacy (DP) and Federated Learning (FL). We present how to privately train NLP models with desirable privacy-utility tradeoffs and evaluate them on the Financial Phrase Bank dataset.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Privacy is important in the financial domain as such data is highly confidential and sensitive. Natural Language Processing (NLP) techniques can be applied for text classification and entity detection purposes in financial domains such as customer feedback sentiment analysis, invoice entity detection, and categorisation of financial documents by type. Due to the sensitive nature of such data, privacy measures need to be taken for handling and training large models with such data. In this work, we propose a contextualized transformer (BERT and RoBERTa) based text classification model integrated with privacy features such as Differential Privacy (DP) and Federated Learning (FL). We present how to privately train NLP models with desirable privacy-utility tradeoffs and evaluate them on the Financial Phrase Bank dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Divulging personally identifiable information during a business transaction has become a commonplace occurrence for most individuals. This activity can span from sharing of bank account numbers, loan account numbers, and credit/debit card numbers, to providing non-financial personally identifiable information such as name, social security number, driver's license number, address, and email address. Maintaining the privacy of confidential customer information has become essential for any firm which collects or stores personally identifiable data. The financial services industry operates and deals with a significant amount of confidential client and customer data for daily business transactions. Though many organizations are taking strides to improve their privacy practices, and consumers are becoming more privacy-aware, it remains a tremendous burden for users to manage their privacy (Anton et al., 2004) .",
"cite_spans": [
{
"start": 896,
"end": 916,
"text": "(Anton et al., 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NLP has major applications in the finance industry for many tasks such as detection of entities for gross tax calculation from invoice and payroll data, categorising different kinds of financial documents based on type, grouping of financial documents based on semantic similarity, sentiment analysis of financial text (Vicari and Gaspari, 2020) , conversational bots for banking systems, investment recommendation engines etc.",
"cite_spans": [
{
"start": 319,
"end": 345,
"text": "(Vicari and Gaspari, 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text classification extends to many NLP applications, including sentiment analysis, question answering, and topic labeling. For example, financial or government institutions that wish to train a chatbot for their clients cannot upload all text data from the client side to their central server due to strict privacy protection statements (Liu et al., 2021) . Given increasing public privacy concerns, the federated learning paradigm offers a way out of this dilemma: through its advances in privacy preservation and collaborative training, a central server can train a powerful model on the locally labeled data of client devices without the raw data ever being uploaded.",
"cite_spans": [
{
"start": 360,
"end": 378,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of this paper is to propose a privacy enabled text classification system, combining state-of-the-art transformers (BERT and RoBERTa) with differential privacy, in both centralized and FL based setups, exploring different privacy budgets to investigate the privacy-utility trade-off and to see how they perform when classifying financial document-based text sequences. For the federated setups, we explore both IID (Independent and Identically Distributed) and non-IID distributions of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Deep learning techniques have often been used to learn text representations via neural language models. The input text can reveal individual demographic information about the author. Sentiment analysis can be used for the classification or categorization of financial documents. Studies have been conducted on training differentially private deep models with the formal differential privacy approach (Abadi et al., 2016; McMahan et al., 2018; Yu et al., 2019) . Fernandes et al. discuss security through differential privacy in textual data. (Panchal, 2020) portrays the use of DP in the generation of contextually similar messages for Honey Encryption, which encrypts messages using low min-entropy keys such as passwords. Federated learning is another privacy-enhancing approach (McMahan et al., 2017; Yang et al., 2019; Kairouz et al., 2021; Jana and Biemann, 2021; Priyanshu and Naidu, 2021) , which relies on distributed training of models on devices and sharing of model gradients. Liu et al. show how FL can be used for decentralized training of heavy pre-trained NLP models. Basu et al. present a detailed benchmark comparison of multiple BERT-based models with DP and FL for depression detection. Jana and Biemann show a differentially private sequence tagging system in a federated learning setup.",
"cite_spans": [
{
"start": 439,
"end": 459,
"text": "(Abadi et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 460,
"end": 481,
"text": "McMahan et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 482,
"end": 498,
"text": "Yu et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 834,
"end": 856,
"text": "(McMahan et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 857,
"end": 875,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 876,
"end": 897,
"text": "Kairouz et al., 2021;",
"ref_id": null
},
{
"start": 898,
"end": 921,
"text": "Jana and Biemann, 2021;",
"ref_id": "BIBREF13"
},
{
"start": 922,
"end": 948,
"text": "Priyanshu and Naidu, 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The key arguments for the low utilization of statistical techniques in financial sentiment analysis have been the difficulty of implementation for practical applications and the lack of high-quality training data for building such models. Especially in the case of finance and economic texts, annotated collections are a scarce resource and many are reserved for proprietary use only. For this reason, we use the Financial Phrase Bank dataset (Malo et al., 2014) which was also used for benchmarking the pre-trained FinBERT model for sentiment analysis (Araci, 2019) . The dataset includes approximately 5000 phrases/sentences from financial news texts and company press releases. The objective of the phrase level annotation task is to classify each example sentence into a positive, negative or neutral category by considering only the information explicitly available in the given sentence. Since the study is focused only on financial and economic domains, the annotators were asked to consider the sentences from the viewpoint of an investor only; i.e. whether the news may have a positive, negative or neutral influence on the stock price. As a result, sentences that have a sentiment that is not relevant from an economic or financial perspective are considered neutral.",
"cite_spans": [
{
"start": 443,
"end": 462,
"text": "(Malo et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 553,
"end": 566,
"text": "(Araci, 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "Given the large number of overlapping annotations (5 to 8 annotations per sentence), there are several ways to define a majority vote-based gold standard. To provide an objective comparison, the authors formed 4 alternative reference datasets based on the strength of majority agreement. For the purpose of this task, we use those sentences with 75% or more agreement. The final dataset has 3453 sentences in total, of which 60% belong to the neutral class, 28% to the positive class and 12% to the negative class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "Today, text is the most widely used communication instrument. For years, researchers have studied approaches that make it possible for machines to imitate human reading (Ly et al., 2020) . Natural Language Processing (NLP) lays a bridge between computers and natural languages by helping machines analyze human language (Manning and Sch\u00fctze, 1999) . Devlin et al. developed a model based on bidirectional encoder representations (Alyafeai et al., 2020) . RoBERTa is a modified form of BERT.",
"cite_spans": [
{
"start": 199,
"end": 216,
"text": "(Ly et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 468,
"end": 491,
"text": "(Alyafeai et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "4"
},
{
"text": "Transformer-based models are used because their self-attention mechanism processes the entire input at once, rather than as a sequence, capturing long-term dependencies and contextual meaning. Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) tokenizes words into sub-words (using WordPiece), which are then given as input to the model. It also uses positional embeddings in place of recurrence.",
"cite_spans": [
{
"start": 280,
"end": 301,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.1"
},
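The greedy WordPiece sub-word splitting mentioned above can be illustrated with a toy longest-match-first tokenizer. This is a sketch for intuition only: the `##` continuation prefix and `[UNK]` fallback follow BERT's convention, but the tiny hand-made vocabulary stands in for the real learned vocabulary of roughly 30k entries.

```python
def wordpiece_tokenize(word, vocab):
    """Split one word into sub-words, greedily matching the longest
    vocabulary entry first; non-initial pieces carry a '##' prefix."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            cand = word[start:end]
            if start > 0:
                cand = "##" + cand
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:          # no sub-word matches: unknown token
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

vocab = {"bank", "##ing", "##s", "fin", "##ance"}
print(wordpiece_tokenize("banking", vocab))  # ['bank', '##ing']
```

The same greedy rule applied to "finance" yields `['fin', '##ance']`, showing how unseen words decompose into known pieces instead of becoming out-of-vocabulary tokens.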
{
"text": "Robustly Optimized BERT Pretraining Approach (RoBERTa) is a state-of-the-art transformer model that improves on BERT (Devlin et al., 2018) , using a multi-headed attention mechanism to capture long-term dependencies. It essentially fine-tunes the original BERT recipe with data manipulation, uses Byte-Pair Encoding for character- and word-level representations, and removes Next Sentence Prediction (NSP), matching or slightly improving downstream task performance.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa",
"sec_num": "4.2"
},
{
"text": "Differential Privacy (DP) is a privacy standard that allows data to be used in analyses while providing a mathematical guarantee (Dwork and Roth, 2014) . Its mathematical definition provides strong confidentiality in statistical databases and machine learning approaches, and is an accepted measure of privacy concern (Dwork, 2008) .",
"cite_spans": [
{
"start": 123,
"end": 145,
"text": "(Dwork and Roth, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 333,
"end": 346,
"text": "(Dwork, 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differential Privacy",
"sec_num": "4.3"
},
{
"text": "Definition 1.1: Let M denote a random mechanism and E an event (output). D and D' are neighboring datasets differing in one record. M gives (\u03b5, \u03b4)-differential privacy, protecting confidentiality (Dwork, 2011) , for D and D' if M satisfies:",
"cite_spans": [
{
"start": 197,
"end": 210,
"text": "(Dwork, 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differential Privacy",
"sec_num": "4.3"
},
{
"text": "Pr[M(D) \u2208 E] \u2264 e^\u03b5 \u00b7 Pr[M(D') \u2208 E] + \u03b4 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differential Privacy",
"sec_num": "4.3"
},
{
"text": "where \u03b5 denotes the privacy budget and \u03b4 represents the probability of error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differential Privacy",
"sec_num": "4.3"
},
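To make the role of the budget \u03b5 in Equation (1) concrete, here is a hedged illustration using the classic Laplace mechanism (a standard textbook mechanism, not the one used later in this paper, which trains with DP-SGD). Noise is drawn with scale sensitivity/\u03b5, so a smaller budget forces noisier answers; the function name is ours.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a query answer with epsilon-DP by adding Laplace noise.
    Noise scale = sensitivity / epsilon: smaller epsilon -> more noise."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# A counting query (sensitivity 1) released under a strict and a loose budget.
strict = [laplace_mechanism(100, 1.0, 0.5, rng) for _ in range(1000)]
loose = [laplace_mechanism(100, 1.0, 25.0, rng) for _ in range(1000)]
print(np.std(strict) > np.std(loose))  # True: stricter budget, noisier releases
```

Both sets of releases center on the true count of 100, but the \u03b5 = 0.5 answers spread far more widely, which is exactly the privacy-utility trade-off the paper measures.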
{
"text": "The privacy guarantee level of M is controlled through the privacy budget \u03b5 (Haeberlen et al., 2011) . There are two widely used privacy budget compositions: sequential composition and parallel composition.",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "(Haeberlen et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Privacy Budget",
"sec_num": "4.3.1"
},
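The two compositions named above are standard one-line results, sketched here for concreteness (function names are ours): budgets add up when mechanisms query the same data sequentially, but only the maximum budget is spent when mechanisms run on disjoint partitions.

```python
def sequential_composition(epsilons):
    """Basic sequential composition: running k mechanisms with budgets
    eps_1..eps_k on the same dataset costs the sum of the budgets."""
    return sum(epsilons)

def parallel_composition(epsilons):
    """Parallel composition: mechanisms applied to disjoint partitions
    of the data cost only the maximum individual budget."""
    return max(epsilons)

total_seq = sequential_composition([0.5, 0.5, 1.0])   # 2.0
total_par = parallel_composition([0.5, 0.5, 1.0])     # 1.0
```

This is why repeatedly querying the same records (as in iterative training) consumes budget quickly, motivating accountant-based methods such as DP-SGD's moments accountant.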
{
"text": "The ratio between the two mechanism outputs (M(D) and M(D')) is bounded by e^\u03b5. For \u03b4 = 0, M gives \u03b5-differential privacy in its strictest definition. Otherwise, for some low-probability events, (\u03b5, \u03b4)-differential privacy provides latitude to violate strict \u03b5-differential privacy. \u03b5-differential privacy is called pure differential privacy, and (\u03b5, \u03b4)-differential privacy, where \u03b4 > 0, is called approximate differential privacy (Beimel et al., 2014) . Differential privacy has two implementation settings: Centralized DP (CDP) via DP-SGD and Local DP (LDP) (Qu et al., 2021) .",
"cite_spans": [
{
"start": 427,
"end": 448,
"text": "(Beimel et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 556,
"end": 573,
"text": "(Qu et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Privacy Budget",
"sec_num": "4.3.1"
},
{
"text": "In CDP, a trusted data curator answers queries or releases differentially private models by using randomisation algorithms (Dwork and Roth, 2014) . In this article, we use DP-SGD (Differentially Private Stochastic Gradient Descent) (Abadi et al., 2016) to train our models.",
"cite_spans": [
{
"start": 123,
"end": 145,
"text": "(Dwork and Roth, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 232,
"end": 252,
"text": "(Abadi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Privacy Budget",
"sec_num": "4.3.1"
},
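One DP-SGD update (Abadi et al., 2016) clips each per-sample gradient to a maximum L2 norm, adds Gaussian noise, and then averages. The sketch below is our own NumPy illustration of that step, not the Opacus implementation used in the experiments; `clip_norm` and `noise_multiplier` correspond to the C and sigma of Abadi et al., and `dp_sgd_step` is a hypothetical name.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, lr, weights, rng):
    """One DP-SGD update: (1) clip each per-sample gradient to
    L2 norm <= clip_norm, (2) sum and add Gaussian noise with std
    noise_multiplier * clip_norm, (3) average and take a gradient step."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    noisy_mean = (total + noise) / len(per_sample_grads)
    return weights - lr * noisy_mean

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(8)]   # toy per-sample gradients
w = np.zeros(4)
w_new = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1,
                    weights=w, rng=rng)
```

Clipping bounds any single example's influence on the update (its sensitivity), which is what lets the Gaussian noise translate into an (\u03b5, \u03b4) guarantee via a privacy accountant.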
{
"text": "Conventional centralized learning systems require that all training data produced on different devices be uploaded to a server or cloud for training, which may give rise to serious privacy concerns (Privacy, 2017). FL allows training an algorithm in a decentralized way (McMahan et al., 2017 ). It lets multiple parties collectively train a machine learning model without exchanging their local data (Li et al., 2021) . Mathematically, assume there are N parties, each denoted T_i, where i \u2208 [1, N]. Each party trains a local model M_i on its local data D_i and sends only the local model parameters, never the raw data, to the FL server. Most centralized setups simply assume IID train and test data, but in a federated, decentralized setup, non-IID data poses the problem of high skew across devices due to differing data distributions (Liu et al., 2021) .",
"cite_spans": [
{
"start": 270,
"end": 291,
"text": "(McMahan et al., 2017",
"ref_id": "BIBREF23"
},
{
"start": 400,
"end": 417,
"text": "(Li et al., 2021)",
"ref_id": null
},
{
"start": 1005,
"end": 1023,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Federated Learning",
"sec_num": "4.4"
},
{
"text": "In federated language modeling, existing works (Yang et al., 2018) use FedAvg as the federated optimization algorithm. In FedAvg, gradients computed locally over a large population of clients are aggregated by the server to build a new global model. Every client trains on locally stored data, computing gradients against the current global model via one or more steps of SGD.",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Federated Learning",
"sec_num": "4.4"
},
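The FedAvg aggregation rule described above reduces to a data-size-weighted average of the client updates. A minimal sketch (illustrative only; `fed_avg` is our name, not a library API):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights with the FedAvg rule of
    McMahan et al. (2017): a weighted average where each client's
    coefficient is its share of the total training data."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)
    coeffs = np.array(client_sizes, dtype=float) / total
    return np.tensordot(coeffs, stacked, axes=1)

# Three clients with unequal local dataset sizes.
w1, w2, w3 = np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([5.0, 5.0])
global_w = fed_avg([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w)  # weighted toward the larger client: [3.5 3.5]
```

The server then broadcasts `global_w` back to the clients for the next round of local SGD.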
{
"text": "Applying FL to text classification raises problems such as designing proper aggregation algorithms for the gradients or weights uploaded by different client models. Zhu et al. proposed a text classification system using the standard FedAvg algorithm to update the model parameters from locally trained models. Model compression has also been introduced to federated classification tasks because of computation constraints on the client side, where the model size is reduced to enable real-world deployment of federated learning. To overcome the communication dilemma of FL, the central server can train the central model with only one or a few rounds of communication under poor communication scenarios, in a one-shot or few-shot setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Federated Learning",
"sec_num": "4.4"
},
{
"text": "In the scope of this study, the pre-trained FinBERT model is used as the base model, and two NLP models were trained with DP and FL. The results reported in the tables, and discussed in this section, are the average and standard deviation obtained over three runs of each model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "The dataset was split into train and test sets with an 80:20 ratio. BERT and RoBERTa based models were used for the language modelling part. Table 1 shows a comparison across epsilon values between both language models using Centralized DP and in a Federated Learning setup. The Opacus library was used along with PyTorch for the experiments. We implement DP, FL and DP-FL on BERT and RoBERTa for \u03b5 = 0.5, 5, 15, 20, 25. Our baseline model (with no noise) achieves an accuracy of 67.71% and 68.37% on BERT and RoBERTa respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 299,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "In baseline mode, we can see that RoBERTa has a slight improvement over BERT because of its robustness, owing to a heavier pre-training procedure. We also notice that with the increase in epsilon values, the standard deviation decreases as the model approaches its vanilla variant (without DP noise). Table 2 also shows the results obtained when DP was applied in a federated learning mode, on both IID (Independent and Identically Distributed) and non-IID data silos. For non-IID scenarios, we assume 10 shards of size 240, assigned to each client. We run it over 10 clients in total, selecting only a fraction of 0.5 in each round for training. We add DP locally, that is, to each client model at every iteration, and aggregate the models by Federated Averaging. We observe the best accuracies with RoBERTa for the centralised DP implementation, particularly with \u03b5 = 25 with an accuracy of 62.6%. BERT in a centralised DP setting comes close at \u03b5 = 25 with an accuracy of 60.03%. The results also show that accuracy decreases when FL is added to the DP implementations. We also empirically observe that accuracy increases with \u03b5: as \u03b5 increases, privacy decreases because noise is drawn from a smaller range, resulting in smaller variance, and consequently the accuracy of the model increases. Inherently, applying DP to deep learning loses utility due to noise addition and clipping. We can also observe that the performance of federated language models still lies behind that of centralized ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
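The shard-based non-IID split described above can be sketched as follows. This is a toy illustration under our own assumptions about the label-sorted sharding recipe (sorting examples by class, cutting into contiguous shards, dealing shards to clients); the dataset size is reduced and all names are ours.

```python
import numpy as np

def make_non_iid_shards(labels, n_shards, rng):
    """Label-sorted sharding, a common non-IID recipe: sort example
    indices by class, cut into equal contiguous shards, then shuffle
    the shard order before dealing them to clients. Each shard spans
    only one or two classes, so every client sees a skewed label mix."""
    order = np.argsort(labels, kind="stable")
    shards = np.array_split(order, n_shards)
    rng.shuffle(shards)
    return shards

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 80)            # toy 3-class dataset, 240 examples
shards = make_non_iid_shards(labels, n_shards=10, rng=rng)
clients = {c: shards[c] for c in range(10)}  # one shard per client here
# Each round, sample a fraction of 0.5 of the clients, as in the setup above.
selected = rng.choice(len(clients), size=len(clients) // 2, replace=False)
```

Because shards are contiguous runs of the label-sorted index, a client's data covers at most two classes, which produces the distribution skew that makes non-IID federated training harder.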
{
"text": "Financial data is highly sensitive, hence the risks of collecting and sharing data can limit studies. Financial organizations work with a great deal of confidential user data and therefore highly value protecting that data to retain user trust; research into private training of machine learning models is needed to ensure this. In this study, we benchmark the utility of privacy-preserving models while attempting to preserve the performance of SOTA transformer models such as BERT and RoBERTa. Our empirical results show that the models perform better with increasing \u03b5, as expected with the decrease in noise, and approach the performance of the baseline models at higher \u03b5 values. DP + FL shows a similar trend, offering greater protection without compromising performance. As future work, we hope to improve our models further by hyper-parameter tuning, freezing partial layers of the NLP model, and implementing focal loss on the unbalanced dataset to improve the results. The complete code for this paper can be found here:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://www.github.com/tiasa2/Privacyenabled-Financial-Text-Classification-using-Differential-Privacy-and-Federated-Learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning with differential privacy",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "H",
"middle": [
"Brendan"
],
"last": "Mcmahan",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Mironov",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Talwar",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2976749.2978318"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Abadi, Andy Chu, Ian Goodfellow, H. Bren- dan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential pri- vacy. Proceedings of the 2016 ACM SIGSAC Con- ference on Computer and Communications Security.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A survey on transfer learning in natural language processing",
"authors": [
{
"first": "Zaid",
"middle": [],
"last": "Alyafeai",
"suffix": ""
},
{
"first": "Maged Saeed",
"middle": [],
"last": "AlShaibani",
"suffix": ""
},
{
"first": "Irfan",
"middle": [],
"last": "Ahmad",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zaid Alyafeai, Maged Saeed AlShaibani, and Irfan Ah- mad. 2020. A survey on transfer learning in natural language processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Financial privacy policies and the need for standardization",
"authors": [
{
"first": "A",
"middle": [
"I"
],
"last": "Anton",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Earp",
"suffix": ""
},
{
"first": "Qingfeng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Stufflebeam",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bolchini",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Jensen",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Security Privacy",
"volume": "2",
"issue": "2",
"pages": "36--45",
"other_ids": {
"DOI": [
"10.1109/MSECP.2004.1281243"
]
},
"num": null,
"urls": [],
"raw_text": "A.I. Anton, J.B. Earp, Qingfeng He, W. Stufflebeam, D. Bolchini, and C. Jensen. 2004. Financial privacy policies and the need for standardization. IEEE Se- curity Privacy, 2(2):36-45.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finbert: Financial sentiment analysis with pre-trained language models",
"authors": [
{
"first": "Dogu",
"middle": [],
"last": "Araci",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10063"
]
},
"num": null,
"urls": [],
"raw_text": "Dogu Araci. 2019. Finbert: Financial sentiment analy- sis with pre-trained language models. arXiv preprint arXiv:1908.10063.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Benchmarking differential privacy and federated learning for BERT models",
"authors": [
{
"first": "Priyam",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "Tiasa",
"middle": [],
"last": "Singha Roy",
"suffix": ""
},
{
"first": "Rakshit",
"middle": [],
"last": "Naidu",
"suffix": ""
},
{
"first": "Zumrut",
"middle": [],
"last": "Muftuoglu",
"suffix": ""
},
{
"first": "Sahib",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Fatemehsadat",
"middle": [],
"last": "Mireshghallah",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2106.13973"
]
},
"num": null,
"urls": [],
"raw_text": "Priyam Basu, Tiasa Singha Roy, Rakshit Naidu, Zum- rut Muftuoglu, Sahib Singh, and Fatemehsadat Mireshghallah. 2021. Benchmarking differential pri- vacy and federated learning for bert models. arXiv preprint arXiv:2106.13973.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Private learning and sanitization: Pure vs. approximate differential privacy",
"authors": [
{
"first": "Amos",
"middle": [],
"last": "Beimel",
"suffix": ""
},
{
"first": "Kobbi",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Uri",
"middle": [],
"last": "Stemmer",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amos Beimel, Kobbi Nissim, and Uri Stemmer. 2014. Private learning and sanitization: Pure vs. approxi- mate differential privacy.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Differential privacy: A survey of results",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Dwork",
"suffix": ""
}
],
"year": 2008,
"venue": "Theory and Applications of Models of Computation",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Dwork. 2008. Differential privacy: A sur- vey of results. In Theory and Applications of Mod- els of Computation, pages 1-19, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A firm foundation for private data analysis",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Dwork",
"suffix": ""
}
],
"year": 2011,
"venue": "Commun. ACM",
"volume": "54",
"issue": "1",
"pages": "86--95",
"other_ids": {
"DOI": [
"10.1145/1866739.1866758"
]
},
"num": null,
"urls": [],
"raw_text": "Cynthia Dwork. 2011. A firm foundation for private data analysis. Commun. ACM, 54(1):86-95.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The algorithmic foundations of differential privacy",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Dwork",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2014,
"venue": "Found. Trends Theor. Comput. Sci",
"volume": "9",
"issue": "3-4",
"pages": "211--407",
"other_ids": {
"DOI": [
"10.1561/0400000042"
]
},
"num": null,
"urls": [],
"raw_text": "Cynthia Dwork and Aaron Roth. 2014. The algo- rithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3-4):211-407.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generalised differential privacy for text document processing",
"authors": [
{
"first": "Natasha",
"middle": [],
"last": "Fernandes",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Annabelle",
"middle": [],
"last": "Mciver",
"suffix": ""
}
],
"year": 2019,
"venue": "Principles of Security and Trust, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
"volume": "",
"issue": "",
"pages": "123--148",
"other_ids": {
"DOI": [
"10.1007/978-3-030-17138-4_6"
]
},
"num": null,
"urls": [],
"raw_text": "Natasha Fernandes, Mark Dras, and Annabelle McIver. 2019. Generalised differential privacy for text docu- ment processing. In Principles of Security and Trust, Lecture Notes in Computer Science (including sub- series Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 123-148, Germany. Springer-VDI-Verlag GmbH Co. KG.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Differential privacy under fire. SEC'11",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Haeberlen",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"C"
],
"last": "Pierce",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Narayan",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Haeberlen, Benjamin C. Pierce, and Arjun Narayan. 2011. Differential privacy under fire. SEC'11, page 33, USA. USENIX Association.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An investigation towards differentially private sequence tagging in a federated framework",
"authors": [
{
"first": "Abhik",
"middle": [],
"last": "Jana",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Third Workshop on Privacy in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhik Jana and Chris Biemann. 2021. An investigation towards differentially private sequence tagging in a federated framework. In Proceedings of the Third Workshop on Privacy in Natural Language Process- ing, pages 30-35.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Xu Liu, and Bingsheng He. 2021. A survey on federated learning systems: Vision, hype and reality for data privacy and protection",
"authors": [
{
"first": "Qinbin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zeyi",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Zhaomin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sixu",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Naibo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bingsheng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qinbin Li, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Yuan Li, Xu Liu, and Bingsheng He. 2021. A survey on federated learning systems: Vision, hype and reality for data privacy and protection.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Federated learning meets natural language processing: A survey",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Stella",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mengqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Longxiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2107.12603"
]
},
"num": null,
"urls": [],
"raw_text": "Ming Liu, Stella Ho, Mengqi Wang, Longxiang Gao, Yuan Jin, and He Zhang. 2021. Federated learning meets natural language processing: A survey. arXiv preprint arXiv:2107.12603.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Finbert: A pre-trained financial language representation model for financial text mining",
"authors": [
{
"first": "Zhuang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Degen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kaiyu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhuang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "4513--4519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuang Liu, Degen Huang, Kaiyu Huang, Zhuang Li, and Jun Zhao. 2020. Finbert: A pre-trained finan- cial language representation model for financial text mining. In IJCAI, pages 4513-4519.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A survey on natural language processing (nlp) and applications in insurance",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Ly",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Uthayasooriyar",
"suffix": ""
},
{
"first": "Tingting",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Ly, Benno Uthayasooriyar, and Tingting Wang. 2020. A survey on natural language processing (nlp) and applications in insurance.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Good debt or bad debt: Detecting semantic orientations in economic texts",
"authors": [
{
"first": "Pekka",
"middle": [],
"last": "Malo",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Jyrki",
"middle": [],
"last": "Wallenius",
"suffix": ""
},
{
"first": "Pyry",
"middle": [],
"last": "Takala",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "65",
"issue": "4",
"pages": "782--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wal- lenius, and Pyry Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Sci- ence and Technology, 65(4):782-796.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Pro- cessing. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Federated learning of deep networks using model averaging",
"authors": [
{
"first": "H",
"middle": [
"B"
],
"last": "Mcmahan",
"suffix": ""
},
{
"first": "Eider",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "B",
"middle": [
"A Y"
],
"last": "Arcas",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. B. McMahan, Eider Moore, D. Ramage, and B. A. Y. Arcas. 2016. Federated learning of deep networks using model averaging. ArXiv, abs/1602.05629.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Communication-efficient learning of deep networks from decentralized data",
"authors": [
{
"first": "H",
"middle": [
"Brendan"
],
"last": "McMahan",
"suffix": ""
},
{
"first": "Eider",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Hampson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Ag\u00fcera Y Arcas",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Ag\u00fcera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning differentially private recurrent language models",
"authors": [
{
"first": "H",
"middle": [
"Brendan"
],
"last": "McMahan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Talwar",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2018. Learning differentially private recurrent language models.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluation of sentiment analysis in finance: From lexicons to transformers",
"authors": [
{
"first": "Kostadin",
"middle": [],
"last": "Mishev",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Gjorgjevikj",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Vodenska",
"suffix": ""
},
{
"first": "Lubomir",
"middle": [
"T"
],
"last": "Chitkushev",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Trajanov",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "131662--131682",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2020.3009626"
]
},
"num": null,
"urls": [],
"raw_text": "Kostadin Mishev, Ana Gjorgjevikj, Irena Vodenska, Lubomir T. Chitkushev, and Dimitar Trajanov. 2020. Evaluation of sentiment analysis in finance: From lexicons to transformers. IEEE Access, 8:131662- 131682.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Differential privacy and natural language processing to generate contextually similar decoy messages in honey encryption scheme",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.15985"
]
},
"num": null,
"urls": [],
"raw_text": "Kunjal Panchal. 2020. Differential privacy and natural language processing to generate contextually similar decoy messages in honey encryption scheme. arXiv preprint arXiv:2010.15985.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning with privacy at scale",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Apple Differential Privacy. 2017. Learning with pri- vacy at scale.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fedpandemic: A cross-device federated learning approach towards elementary prognosis of diseases during a pandemic",
"authors": [
{
"first": "Aman",
"middle": [],
"last": "Priyanshu",
"suffix": ""
},
{
"first": "Rakshit",
"middle": [],
"last": "Naidu",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aman Priyanshu and Rakshit Naidu. 2021. Fedpan- demic: A cross-device federated learning approach towards elementary prognosis of diseases during a pandemic.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Privacy-adaptive bert for natural language understanding",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Weize",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Liu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mingyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bendersky",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Najork",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Privacy-adaptive bert for natural language under- standing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Analysis of news sentiments using natural language processing and deep learning",
"authors": [
{
"first": "Mattia",
"middle": [],
"last": "Vicari",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Gaspari",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia Vicari and Mauro Gaspari. 2020. Analysis of news sentiments using natural language processing and deep learning. Ai & Society, pages 1-7.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Financial sentiment analysis: an investigation into common mistakes and silver bullets",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Malandri",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "978--987",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Xing, Lorenzo Malandri, Yue Zhang, and Erik Cambria. 2020. Financial sentiment analysis: an in- vestigation into common mistakes and silver bullets. In Proceedings of the 28th International Conference on Computational Linguistics, pages 978-987.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Federated machine learning: Concept and applications",
"authors": [
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tianjian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yongxin",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. 2019. Federated machine learning: Concept and applications.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Daniel Ramage, and Fran\u00e7oise Beaufays",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Hubert",
"middle": [],
"last": "Eichner",
"suffix": ""
},
{
"first": "Haicheng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Fran\u00e7oise",
"middle": [],
"last": "Beaufays",
"suffix": ""
}
],
"year": 2018,
"venue": "Applied federated learning: Improving google keyboard query suggestions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.02903"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ra- mage, and Fran\u00e7oise Beaufays. 2018. Applied fed- erated learning: Improving google keyboard query suggestions. arXiv preprint arXiv:1812.02903.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Differentially private model publishing for deep learning",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Calton",
"middle": [],
"last": "Pu",
"suffix": ""
},
{
"first": "Mehmet",
"middle": [
"Emre"
],
"last": "Gursoy",
"suffix": ""
},
{
"first": "Stacey",
"middle": [],
"last": "Truex",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Symposium on Security and Privacy (SP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/sp.2019.00019"
]
},
"num": null,
"urls": [],
"raw_text": "Lei Yu, Ling Liu, Calton Pu, Mehmet Emre Gur- soy, and Stacey Truex. 2019. Differentially private model publishing for deep learning. 2019 IEEE Sym- posium on Security and Privacy (SP).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Empirical studies of institutional federated learning for natural language processing",
"authors": [
{
"first": "Xinghua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jianzong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhenhou",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "625--634",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.55"
]
},
"num": null,
"urls": [],
"raw_text": "Xinghua Zhu, Jianzong Wang, Zhenhou Hong, and Jing Xiao. 2020. Empirical studies of institutional federated learning for natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 625-634, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Pipelineet al. investigate the error patterns of some widely acknowledged sentiment analysis methods in the finance domain. Mishev et al. perform more than one hundred experiments using publicly available datasets, labeled by financial experts. In their work, Liu et al. propose a domain-specific language model pre-trained on large-scale financial corpora and evaluate it on the Financial Phrase Bank dataset. Araci presents a BERT-based model which is pre-trained on a large amount of financebased data in his study."
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Setup</td><td>Epsilon( )</td><td>BERT</td><td>RoBERTa</td></tr><tr><td/><td>0.5</td><td>31.5\u00b123.94</td><td>31.36 \u00b1 26.35</td></tr><tr><td>Centralized DP</td><td>5 15 20</td><td colspan=\"2\">37.48\u00b120.42 51.71 \u00b114.71 51.34 \u00b1 15.45 38.34 \u00b120.08 55.37 \u00b1 5.49 55.54 \u00b1 5.54</td></tr><tr><td/><td>25</td><td>60.03 \u00b1 1.37</td><td>62.6 \u00b1 4.24</td></tr><tr><td/><td>0.5</td><td>14.57 \u00b1 2.86</td><td>20.11 \u00b1 7.68</td></tr><tr><td>DP-FL IID</td><td>5 15 20</td><td colspan=\"2\">30 \u00b125.6 40.34 \u00b1 20.55 50.26 \u00b1 20.84 30.04 \u00b1 28.22 51.05 \u00b1 7.95 54.78 \u00b1 2.99</td></tr><tr><td/><td>25</td><td>53.47 \u00b1 6.48</td><td>61.38 \u00b1 0.93</td></tr><tr><td/><td>0.5</td><td colspan=\"2\">19.82 \u00b1 5.97 33.13 \u00b1 25.41</td></tr><tr><td>DP-FL Non IID</td><td>5 15 20</td><td colspan=\"2\">35.74 \u00b1 21.48 36.51 \u00b1 26.87 45.87 \u00b1 15.56 49.83 \u00b1 20.6 52.43 \u00b1 4.08 53.36 \u00b1 3.27</td></tr><tr><td/><td>25</td><td>58.96 \u00b1 2.56</td><td>60.83 \u00b1 0.53</td></tr></table>",
"text": "Averaged Test Accuracies of FL and DPFL models",
"num": null
}
}
}
}